Amazon Elastic Block Store (EBS)

Amazon Elastic Block Store (Amazon EBS) is a high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput-intensive and transaction-intensive workloads at any scale. Amazon EBS provides persistent block-level storage volumes for use with Amazon EC2 instances; persistent means the storage exists independently of the lifespan of an EC2 instance. EBS volumes behave like raw, unformatted block devices, and AWS customers can mount these volumes as devices on their instances.

  • Amazon EBS volumes behave like raw, unformatted block devices that can be mounted as block-level storage devices on EC2 instances (a minimal sketch follows this list).
  • Amazon EBS volumes are particularly well-suited for use as the primary storage for file systems, databases, or any applications that require fine-grained updates and access to raw, unformatted, block-level storage.
  • Amazon EBS volumes are available in a variety of types that differ in performance characteristics and price. 
  • Although multiple Amazon EBS volumes can be attached to a single Amazon EC2 instance, a volume can only be attached to a single instance at a time.
  • EBS is designed for mission-critical systems: EBS volumes are replicated within an Availability Zone (AZ) and can easily scale to petabytes of data. AWS clients can use EBS Snapshots with automated lifecycle policies to back up their volumes in Amazon S3.
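
Since attaching volumes is central to everything that follows, here is a minimal boto3 sketch of creating a gp2 volume and attaching it to an instance in the same Availability Zone. The Region, AZ, instance ID, and device name are placeholder assumptions, not values from the source material.

```python
# Minimal sketch (placeholder IDs): create a gp2 volume and attach it to an
# EC2 instance in the same Availability Zone.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed Region

# Create a 100 GiB General Purpose SSD (gp2) volume in a specific AZ.
volume = ec2.create_volume(AvailabilityZone="us-east-1a", Size=100, VolumeType="gp2")
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# Attach the volume; it appears to the OS as a raw, unformatted block device
# that can be formatted with a file system and mounted.
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # hypothetical instance in us-east-1a
    Device="/dev/sdf",
)
```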

Amazon EBS Benefits

EBS volumes deliver the performance required for users’ most demanding workloads, including mission-critical applications such as SAP, Oracle, and Microsoft products. SSD-backed options include a volume designed for high-performance applications and a general-purpose volume that offers strong price/performance for most workloads.

EBS is built to be secure for data compliance. Newly created EBS volumes can be encrypted by default with a single setting in a user’s account. EBS volumes support encryption of data at rest, data in transit, and all volume backups. EBS encryption is supported by all volume types, includes built-in key management infrastructure, and has zero impact on performance.

Amazon EBS enables users to increase storage without any disruption to critical workloads. Build applications that require as little as a single GB of storage, or scale up to petabytes of data, all in just a few clicks. Snapshots can be used to quickly restore new volumes across a region’s Availability Zones, enabling rapid scale.

Amazon EBS architecture offers reliability for mission-critical applications. EBS volumes are designed to protect against failures by replicating within the Availability Zone (AZ), offering 99.999% availability. EBS offers a high durability volume (io2) for customers that need 99.999% durability, especially for their business-critical applications.

EBS Usage Patterns

Amazon EC2 Block-Level Storage Options

There are two block-level storage options for EC2 instances. The first option is an instance store, which consists of one or more instance store volumes exposed as block I/O devices. An instance store volume is a disk that is physically attached to the host computer that runs the EC2 virtual machine.

  • Users need to specify instance store volumes when they launch the EC2 instance. Data on instance store volumes will not persist if the instance stops, terminates, or if the underlying disk drive fails.

The second option is an EBS volume, which provides off-instance storage that can persist independently from the life of the instance. The data on the EBS volume will persist even if the EC2 instance that the volume is attached to shuts down or there is a hardware failure on the underlying host. The data persists on the volume until the volume is deleted explicitly. Due to the immediate proximity of the instance to the instance store volume, the I/O latency to an instance store volume tends to be lower than to an EBS volume.

  • Use cases for instance store volumes include acting as a layer of cache or buffer, storing temporary database tables or logs, or providing storage for read replicas (a launch-time sketch follows this list).
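
A hedged boto3 sketch of the launch-time choice described above: the block device mapping requests both an instance store (ephemeral) volume and an EBS volume, highlighting the persistence difference. The AMI ID, instance type, and device names are assumptions.

```python
# Sketch only: launch an instance with both an instance store (ephemeral)
# volume and an EBS volume. AMI ID, instance type, and device names are assumed.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI
    InstanceType="m5d.large",          # assumed type that offers instance store
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[
        # Instance store volume: data does not survive stop, terminate, or disk failure.
        {"DeviceName": "/dev/sdb", "VirtualName": "ephemeral0"},
        # EBS volume: persists independently unless DeleteOnTermination is True.
        {
            "DeviceName": "/dev/sdf",
            "Ebs": {"VolumeSize": 50, "VolumeType": "gp2", "DeleteOnTermination": False},
        },
    ],
)
print(response["Instances"][0]["InstanceId"])
```
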
Terminology
  • IOPS: Input/output (I/O) operations per second (Ops/s)
  • Throughput: Read/write transfer rate to storage (MB/s)
  • Latency: Delay between sending an I/O request and receiving an acknowledgement (ms) 
  • Block size: Size of each I/O (KB)
  • Page size: Internal basic structure to organize the data in the database files (KB)
  • Amazon Elastic Block Store (EBS) Volume: Persistent block storage volumes for use with Amazon Elastic Compute Cloud (EC2) instances
  • Amazon EBS General Purpose SSD (gp2) Volume: General purpose SSD volume that balances price and performance for a wide variety of transactional workloads
  • Amazon EBS Provisioned IOPS SSD (io1) Volume: Highest performance SSD volume designed for latency-sensitive transactional workloads
  • Amazon EBS Throughput Optimized HDD (st1) Volume: Low-cost HDD volume designed for frequently accessed, throughput-intensive workloads

Amazon EBS is meant for data that changes relatively frequently and needs to persist beyond the life of an EC2 instance. Amazon EBS is well-suited for use as the primary storage for a database or file system, or for any application or instance (operating system) that requires direct access to raw block-level storage. Amazon EBS provides a range of options that allow users to optimize storage performance and cost for the workload. These options are divided into two major categories:

  1. Solid-state drive (SSD)-backed storage for transactional workloads such as databases and boot volumes (performance depends primarily on IOPS) and
  2. Hard disk drive (HDD)-backed storage for throughput-intensive workloads such as big data, data warehouse, and log processing (performance depends primarily on MB/s).

Temporary storage: Consider using local instance store volumes for needs such as scratch disks, buffers, queues, and caches.

Multi-instance storage: Amazon EBS volumes can only be attached to one EC2 instance at a time. If users need multiple EC2 instances accessing volume data at the same time, consider using Amazon EFS as a file system.

Highly durable storage: If users need very highly durable storage, use Amazon S3 or Amazon EFS. Amazon S3 Standard storage is designed for 99.999999999 percent (11 nines) annual durability per object. Users can also take a snapshot of their EBS volumes; such a snapshot is then saved in Amazon S3, thus providing the durability of Amazon S3. EFS is designed for high durability and high availability, with data stored in multiple Availability Zones within an AWS Region.

Static data or web content: If users’ data doesn’t change that often, Amazon S3 might represent a more cost-effective and scalable solution for storing this fixed information. Also, web content served out of Amazon EBS requires a web server running on Amazon EC2; in contrast, users can deliver web content directly out of Amazon S3 or from multiple EC2 instances using Amazon EFS.

Amazon EFS

Amazon Elastic File System (Amazon EFS) delivers a simple, scalable, elastic, highly available, and highly durable network file system as a service to EC2 instances. It supports Network File System versions 4 (NFSv4) and 4.1 (NFSv4.1), which makes it easy to migrate enterprise applications to AWS or build new ones.

  • Amazon EFS is designed to provide a highly scalable network file system that can grow to petabytes, which allows massively parallel access from EC2 instances to users’ data within a Region.

Amazon EBS Features

EBS Monitoring

Amazon EBS automatically sends data points to Amazon CloudWatch at five-minute intervals for General Purpose SSD (gp2), Throughput Optimized HDD (st1), and Cold HDD (sc1) volumes. Provisioned IOPS SSD (io1) volumes send data points to CloudWatch at one-minute intervals. The EBS metrics can be viewed by selecting the monitoring tab of the volume in the Amazon EC2 console.

Elastic Volumes is a feature that allows users to easily adapt the volumes as the needs of the applications change. Elastic Volumes allows users to dynamically increase capacity, tune performance, and change the type of any new or existing current generation volume with no downtime or performance impact. 

  • By creating a volume with the capacity and performance needed today, users retain the ability to modify the volume configuration in the future, saving hours of planning cycles.
  • By using Amazon CloudWatch with AWS Lambda, users can automate volume changes to meet the changing needs of their applications, as sketched below.
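
As a hedged illustration of the CloudWatch-plus-Lambda automation mentioned above, the following Lambda handler sketch grows a volume when an alarm notification arrives via SNS. The event shape, the volume_id field in the message, and the 20% growth policy are all assumptions, not part of the source material.

```python
# Hypothetical Lambda handler: grow an EBS volume when a CloudWatch alarm
# (for example, on a disk-usage custom metric) fires. Assumes the alarm is
# delivered via SNS and that the message carries the volume ID.
import json
import boto3

ec2 = boto3.client("ec2")

def lambda_handler(event, context):
    message = json.loads(event["Records"][0]["Sns"]["Message"])   # assumed SNS wrapper
    volume_id = message.get("volume_id", "vol-0123456789abcdef0") # placeholder

    current = ec2.describe_volumes(VolumeIds=[volume_id])["Volumes"][0]
    new_size = int(current["Size"] * 1.2)  # grow by 20%; the policy is an assumption

    # Elastic Volumes: the size change is applied with no detach and no downtime.
    ec2.modify_volume(VolumeId=volume_id, Size=new_size)
    return {"volume_id": volume_id, "new_size": new_size}
```
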
Amazon EBS Snapshots

Amazon EBS provides the ability to save point-in-time snapshots of users volumes to Amazon S3. Amazon EBS Snapshots are stored incrementally. Snapshots can be used to instantiate multiple new volumes, expand the size of a volume, or move volumes across Availability Zones. The following are key features of Amazon EBS Snapshots:

  • Direct read access of EBS Snapshots – EBS direct APIs for Snapshots enable backup partners to track incremental changes on EBS volumes more efficiently, providing faster backup times and more granular recovery point objectives (RPOs) to customers at a lower cost. 
  • Creating EBS snapshots from any block storage – Using EBS direct APIs, users can create EBS snapshots directly from any block storage data, regardless of where it resides, including data on-premises, and quickly recover into EBS volumes. 
  • Immediate access to Amazon EBS volume data – After a volume is created from a snapshot, there is no need to wait for all of the data to transfer from Amazon S3 to users Amazon EBS volume before your attached instance can start accessing the volume. 
  • Instant full performance on EBS volumes restored from snapshots – By enabling the Fast Snapshot Restore (FSR) capability for low-latency access to data restored from snapshots, EBS volumes restored from snapshots instantly receive their full performance. 
  • Resizing Amazon EBS volumes – There are two methods that can be used to resize an Amazon EBS volume: users can create a new, larger volume from a snapshot of the original volume, or use the Elastic Volumes feature to increase the size of an existing volume directly. 
  • Sharing Amazon EBS Snapshots – Amazon EBS Snapshots’ shareability makes it easy for you to share data with your co-workers or others in the AWS community. For more information about how to share snapshots, see Modifying Snapshot Permissions.
  • Copying Amazon EBS Snapshots across AWS regions – Amazon EBS’s ability to copy snapshots across AWS regions makes it easier to leverage multiple AWS regions for geographical expansion, data center migration, and disaster recovery. A sketch of this snapshot lifecycle follows.
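
The snapshot features above can be tied together in a short boto3 sketch: create a snapshot, copy it across Regions, restore a larger volume from it, and optionally enable Fast Snapshot Restore. Region names, IDs, and sizes are placeholders.

```python
# Sketch, not an official example: snapshot lifecycle across two Regions.
import boto3

source = boto3.client("ec2", region_name="us-east-1")
target = boto3.client("ec2", region_name="us-west-2")

# Point-in-time, incremental snapshot stored in Amazon S3.
snap = source.create_snapshot(VolumeId="vol-0123456789abcdef0",
                              Description="nightly backup")
source.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# Copy the snapshot to a second Region for DR or geographic expansion.
copied = target.copy_snapshot(SourceRegion="us-east-1",
                              SourceSnapshotId=snap["SnapshotId"])

# Restore a new, larger volume from the snapshot in the source Region.
restored = source.create_volume(AvailabilityZone="us-east-1a",
                                SnapshotId=snap["SnapshotId"],
                                Size=200,            # larger than the original
                                VolumeType="gp2")

# Optionally enable Fast Snapshot Restore so restored volumes deliver
# their full performance immediately.
source.enable_fast_snapshot_restores(
    AvailabilityZones=["us-east-1a"],
    SourceSnapshotIds=[snap["SnapshotId"]],
)
```
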
Elastic Volumes

The Elastic Volumes feature of EBS SSD volumes allows users to dynamically change the size, performance, and type of EBS volume in a single API call or within the AWS Management Console without any interruption of MySQL operations. This simplifies some of the administration and maintenance activities of MySQL workloads running on current generation EC2 instances.

  • Elastic Volumes allows users to dynamically increase capacity, tune performance, and change the type of any new or existing current generation volume with no downtime or performance impact. 
  • Users can call the ModifyVolume API to dynamically increase the size of the EBS volume if the MySQL database is running low on usable storage capacity.
  • Users can monitor the progress of the volume modification either through the AWS Management Console or CLI. 
  • The Elastic Volumes feature makes it easier to adapt users’ resources to changing application demands, allowing modifications in the future as business needs change; a sketch of monitoring a modification in progress follows.
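
A minimal boto3 sketch of the resize-and-monitor workflow described in this list: call the ModifyVolume API, then poll DescribeVolumesModifications for progress. The volume ID and target size are placeholders.

```python
# Sketch: resize a volume with ModifyVolume and poll the modification status.
import time
import boto3

ec2 = boto3.client("ec2")
volume_id = "vol-0123456789abcdef0"  # hypothetical

ec2.modify_volume(VolumeId=volume_id, Size=500)  # grow to 500 GiB, no downtime

while True:
    mod = ec2.describe_volumes_modifications(
        VolumeIds=[volume_id]
    )["VolumesModifications"][0]
    print(mod["ModificationState"], mod.get("Progress"))
    if mod["ModificationState"] in ("completed", "failed"):
        break
    time.sleep(15)
```
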
EBS Durability and Availability

Durability in the storage subsystem for MySQL is especially important if you are storing user data, valuable production data, and individual data points. EBS volumes are designed for reliability with a 0.1% to 0.2% annual failure rate (AFR), compared to the typical 4% of commodity disk drives. EBS volumes are backed by multiple physical drives and are replicated within the Availability Zone to protect your MySQL workload from component failure.

  • EBS volumes are created in a specific Availability Zone, and can then be attached to any instances in that same Availability Zone. To make a volume available outside of the Availability Zone, users can create a snapshot and restore that snapshot to a new volume anywhere in that Region.

  • Users can copy snapshots to other Regions and then restore them to new volumes there, making it easier to leverage multiple AWS Regions for geographical expansion, data center migration, and disaster recovery.

  • Amazon EBS volume data is replicated across multiple servers in an Availability Zone to prevent the loss of data from the failure of any single component. 
Amazon EBS-Optimized instances

Performance metrics, such as bandwidth, throughput, latency, and average queue length, are available through the AWS Management Console. These metrics, provided by Amazon CloudWatch, allow users to monitor the performance of their volumes and verify that they are provisioning enough performance for their applications without paying for resources they don’t need.

  • EBS-optimized instances deliver dedicated throughput between Amazon EC2 and Amazon EBS, with options between 500 and 60,000 Megabits per second (Mbps).
  • The dedicated throughput minimizes contention between Amazon EBS I/O and other traffic from your EC2 instance, providing the best performance for your EBS volumes.
  • EBS-optimized instances are designed for use with all Amazon EBS volume types; a sketch of enabling the attribute on an existing instance follows.
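
A small boto3 sketch of enabling the EbsOptimized attribute on an existing instance (the instance must be stopped before the attribute can be changed). The instance ID is a placeholder.

```python
# Sketch: turn on dedicated EBS throughput for an existing instance.
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # hypothetical

ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Enable the EBS-optimized attribute, then start the instance again.
ec2.modify_instance_attribute(InstanceId=instance_id, EbsOptimized={"Value": True})
ec2.start_instances(InstanceIds=[instance_id])
```
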
EBS Security

Amazon EBS supports several security features users can use from volume creation to utilization. These features prevent unauthorized access to your MySQL data. Users can use tags and resource-level permissions to enforce security on the volumes upon creation.

  • Tags are key/value pairs that are typically used to track resources, control cost, implement compliance protocols, and control access to resources via AWS Identity and Access Management (IAM) policies.
  • Users can assign tags on EBS volumes during creation time, which allows them to enforce the management of the volume as soon as it is created. 
  • Users can have granular control over who can create or delete tags through IAM resource-level permissions. This granularity of control extends to the RunInstances and CreateVolume APIs, where users can write IAM policies that require the encryption of the EBS volume upon creation.
  • Amazon EBS encryption offers seamless encryption of EBS data volumes, boot volumes and snapshots, eliminating the need to build and manage a secure key management infrastructure.
  • EBS encryption enables data at rest security by encrypting users data volumes, boot volumes and snapshots using Amazon-managed keys or keys you create and manage using the AWS Key Management Service (KMS). In addition, the encryption occurs on the servers that host EC2 instances, providing encryption of data as it moves between EC2 instances and EBS data and boot volumes. 
  • To encrypt data at rest, users can enable volume encryption at creation time. The new volume will get a unique 256-bit AES key, which is protected by the fully managed AWS Key Management Service (a sketch follows this list).
  • EBS snapshots created from the encrypted volumes are automatically encrypted. The Amazon EBS encryption feature is available on all current generation instance types. For more information on the supported instance types refer to the Amazon EBS Encryption documentation.
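
The encryption and tag-on-create controls above can be sketched with boto3 as follows; the KMS key ARN and tag values are assumptions.

```python
# Sketch: enable encryption by default for the Region, then create an
# encrypted, tagged volume. Key ARN and tag values are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Newly created EBS volumes in this account/Region are now encrypted by default.
ec2.enable_ebs_encryption_by_default()

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,
    VolumeType="gp2",
    Encrypted=True,
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/example-key-id",  # placeholder
    TagSpecifications=[{
        "ResourceType": "volume",
        # Tags applied at creation time, so IAM policies and cost tracking
        # can act on the volume as soon as it exists.
        "Tags": [{"Key": "workload", "Value": "mysql"},
                 {"Key": "compliance", "Value": "encrypted"}],
    }],
)
print(volume["VolumeId"], volume["Encrypted"])
```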

EBS volume types

As described previously, Amazon EBS provides a range of volume types that are divided into two major categories: SSD-backed storage volumes and HDD-backed storage volumes. SSD-backed storage volumes offer great price/performance characteristics for random small-block workloads, such as transactional applications, whereas HDD-backed storage volumes offer the best price/performance characteristics for large-block sequential workloads. You can attach and stripe data across multiple volumes of any type to increase the I/O performance available to your Amazon EC2 applications. The sections below describe the storage characteristics of the current generation volume types.

SSD-backed volumes are ideal for transactional workloads, such as databases and boot volumes (performance depends primarily on IOPS).

  • SSD-backed volumes include General Purpose SSD (gp2), which balances price and performance for a wide variety of transactional data, and the highest-performance Provisioned IOPS SSD (io1) for latency-sensitive transactional workloads.
  • The performance of a block storage device is commonly measured and quoted in a unit called IOPS, short for input/output operations per second.

HDD-backed storage is good for throughput intensive workloads, such as MapReduce and log processing (performance depends primarily on MB/s). 

  • It also optimizes large streaming workloads where throughput (measured in MiB/s) is a better performance measure than IOPS.
  • HDD-backed volumes include Throughput Optimized HDD (st1) for frequently accessed, throughput intensive workloads and the lowest cost Cold HDD (sc1) for less frequently accessed data.

#01

Provisioned IOPS SSD (io1)

Provisioned IOPS SSD volumes are designed to meet the needs of I/O-intensive workloads, particularly database workloads that are sensitive to storage performance and consistency in random access I/O throughput.

  • For customers who have an I/O-intensive workload such as databases, io1 delivers predictable and consistent I/O performance (a provisioning sketch follows this list).
    • Customers can use technologies such as RAID on top of multiple EBS volumes to stripe and mirror the data across multiple volumes.
  • I/O-intensive NoSQL and relational databases
  • io1 volumes provide predictable, high performance and are well suited for: 
    • Critical business applications that require sustained IOPS performance 
    • Large database workloads.
  • Consistently performs at provisioned level, up to 20,000 IOPS maximum
  • I3:– High I/O instances. This family includes the High Storage Instances that provide Non-Volatile Memory Express (NVMe) SSD backed instance storage optimized for low latency, very high random I/O performance, high sequential read throughput and provide high IOPS at a low cost.
  • D2:– Dense-storage instances. D2 instances feature up to 48 TB of HDD-based local storage, deliver high disk throughput, and offer the lowest price per disk throughput performance on Amazon EC2.
  • For workloads requiring greater network performance, many instance types support enhanced networking. 
  • Enhanced networking reduces the impact of virtualization on network performance by enabling a capability called Single Root I/O Virtualization (SR-IOV). This results in:
    • More Packets Per Second (PPS)
    • Lower latency, and 
    • Lower jitter
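
The io1 provisioning model described in this section can be sketched with boto3; the volume size, IOPS value, Availability Zone, and instance ID are placeholder assumptions.

```python
# Sketch: provision an io1 volume at a specific IOPS level and attach it.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=400,            # GiB (placeholder)
    VolumeType="io1",
    Iops=8000,           # provisioned IOPS delivered consistently
)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(VolumeId=volume["VolumeId"],
                  InstanceId="i-0123456789abcdef0",  # hypothetical
                  Device="/dev/sdg")
```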

#02

General Purpose SSD (gp2)

General Purpose SSD volumes offer cost-effective storage that is ideal for a broad range of workloads, delivering strong performance at a moderate price point.

  • General Purpose SSD volumes balance price and performance for a wide variety of transactional workloads.
  • General Purpose SSD volumes are billed based on the amount of data space provisioned, regardless of how much data you actually store on the volume. 
  • Boot volumes, low-latency interactive apps, dev & test.
  • General Purpose SSD delivers single-digit millisecond latencies, which makes it a good fit for the majority of workloads.
    • gp2 volumes can deliver between 100 and 10,000 IOPS.

Some use cases that are a good fit for gp2 are:

  • System boot volumes
  • Applications requiring low latency
  • Virtual desktops
  • Development and test environments

  • It is suited for a wide range of workloads where the very highest disk performance is not critical, such as: 
    • System boot volumes
    • Small- to medium-sized databases 
    • Development and test environments
  • T2 instances are Burstable Performance Instances that provide a baseline level of CPU performance with the ability to burst above the baseline. 
  • M4 instances are the latest generation of General Purpose Instances. This family provides a balance of compute, memory, and network resources, and it is a good choice for many applications

#03

Throughput-Optimized HDD

Throughput-Optimized HDD volumes are low-cost HDD volumes designed for frequent access, throughput-intensive workloads such as big data, data warehouses, and log processing. 

  • Throughput Optimized HDD is designed for applications that require larger storage and bigger throughput, such as big data or data warehousing, where IOPS is not that relevant. st1 volumes, much like gp2 SSD volumes, use a burst model, where the initial baseline throughput is tied to the volume size, and credits are accumulated over time. 
  • Volumes can be up to 16 TB with a maximum IOPS of 500 and maximum throughput of 500 MB/s. These volumes are significantly less expensive than general purpose SSD volumes. 
  • ST1 is backed by hard disk drives (HDDs) and is ideal for frequently accessed, throughput intensive workloads with large datasets and large I/O sizes, such as MapReduce, Kafka, log processing, data warehouse, and ETL workloads.
  • Low cost HDD volume designed for frequently accessed, throughput intensive workloads
  • Big data, data warehouses, log processing.
  • HDD st1:– Can be used for frequently accessed, throughput-intensive workloads.
  • st1 is a good choice when the customer’s workload defines its performance requirements in terms of throughput instead of IOPS. The volumes are backed by magnetic hard drives.
  • HDD (sc1):– For less frequently accessed data; it has the lowest cost.

#04

Cold HDD

Cold HDD volumes are designed for less frequently accessed workloads, such as colder data requiring fewer scans per day.  SC1 is backed by hard disk drives (HDDs) and provides the lowest cost per GB of all EBS volume types. It is ideal for less frequently accessed workloads with large, cold datasets. Similar to st1, sc1 provides a burst model.

  • Volumes can be up to 16 TB with a maximum IOPS of 250 and maximum throughput of 250 MB/s. These volumes are significantly less expensive than Throughput-Optimized HDD volumes.
  • Cold HDD defines performance in terms of throughput instead of IOPS. The use case for Cold HDD is noncritical, cold-data workloads; it is designed to support infrequently accessed data. Similar to st1, sc1 uses a burst-bucket.
  • Lowest cost HDD volume designed for less frequently accessed workloads.
  • Colder data requiring fewer scans per day

Magnetic volume:– Magnetic volumes have the lowest performance characteristics of all Amazon EBS volume types. Magnetic volumes are billed based on the amount of data space provisioned, regardless of how much data you actually store on the volume. A magnetic EBS volume can range in size from 1 GB to 1 TB and will average 100 IOPS, but has the ability to burst to hundreds of IOPS. 

They are best suited for:

  • Workloads where data is accessed infrequently
  • Sequential reads
  • Situations where low-cost storage is a requirement
  • Cold workloads where data is infrequently accessed 
  • Scenarios where the lowest storage cost is important

EBS performance

As described previously, Amazon EBS provides SSD-backed volumes for random, small-block workloads such as transactional applications and HDD-backed volumes for large-block sequential workloads, and users can attach and stripe data across multiple volumes of any type to increase the I/O performance available to their Amazon EC2 applications.

Several factors, including I/O characteristics and the configuration of your instances and volumes, can affect the performance of Amazon EBS. Customers who follow the guidance on our Amazon EBS and Amazon EC2 product detail pages typically achieve good performance out of the box. However, there are some cases where customers may need to do some tuning in order to achieve peak performance on the platform. This topic discusses general best practices as well as performance tuning that is specific to certain use cases.

  • RAID
  • Benchmarking AWS EBS Workloads with Fio
  • fio
  • Oracle Orion

01

Benchmarking AWS EBS Workloads with Fio

One of the main components of AWS EBS performance is I/O. Applications running on an AWS EC2 instance submit read and write operations to an EBS volume. Each operation is then converted to a system call to the kernel.

  • The kernel knows that the underlying file system is virtualized block storage, and through internal mechanisms the kernel redirects the read/write operation to the I/O domain, where the I/O operation passes through a grant-mapping process and, once mapped, is finally performed on the EBS volume.
  • When customers create a new EBS volume they need to provide the size and the type of the volume. 
    • General Purpose SSD (gp2), 
    • Provisioned IOPS SSD (io1), 
    • Throughput Optimized HDD (st1), Cold HDD (sc1), and Magnetic.

Tools customers can use to benchmark the performance of EBS volumes

02

RAID Configuration on Linux

 

RAID 1:– Mirrors two volumes together (take one disk, mirror a copy to another disk) for redundancy.

  • A RAID 1 array offers a “mirror” of your data for extra redundancy. Before you perform this procedure, you need to decide how large your RAID array should be and how many IOPS you want to provision.
  • The resulting size and bandwidth of a RAID 1 array is equal to the size and bandwidth of the volumes in the array.
  • It’s ideal to use when fault tolerance is more important than I/O performance; for example, in a critical application.
  • It is safer from the standpoint of data durability.
  • Does not provide a write performance improvement; requires more Amazon EC2 to Amazon EBS bandwidth than non-RAID configurations because the data is written to multiple volumes simultaneously.

RAID 5:– Requires at least 3 disks; good for reads, bad for writes. AWS does not recommend ever putting RAID 5 on EBS.

RAID 10:– Striped & Mirrored, good redundancy, good performance

AWS does not recommend RAID 5 and RAID 6 for Amazon EBS because the parity write operations of these RAID modes consume some of the IOPS available to your volumes. A minimal sketch of building an array with mdadm follows.
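
A minimal sketch of the RAID setup discussed above, wrapping mdadm from Python on the instance itself. The device names, array name, mount point, and filesystem choice are assumptions; run as root.

```python
# Illustrative sketch only: build a RAID 0 stripe (or RAID 1 mirror) across two
# attached EBS volumes with mdadm. Device names below are placeholders.
import subprocess

devices = ["/dev/xvdf", "/dev/xvdg"]   # two EBS volumes attached to the instance
level = "0"                            # "0" to stripe for performance, "1" to mirror

subprocess.run(
    ["mdadm", "--create", "/dev/md0",
     "--run",                          # skip interactive confirmation prompts
     "--level", level,
     "--raid-devices", str(len(devices)), *devices],
    check=True,
)

# Create a filesystem on the array and mount it (filesystem choice is arbitrary).
subprocess.run(["mkfs.ext4", "/dev/md0"], check=True)
subprocess.run(["mkdir", "-p", "/data"], check=True)
subprocess.run(["mount", "/dev/md0", "/data"], check=True)
```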

03

Oracle Orion

Oracle Orion is used to calibrate the I/O performance of storage systems to be used with Oracle databases. Oracle Orion is a tool for predicting the performance of an Oracle database without having to install Oracle or create a database. 

  • Oracle Orion is expressly designed for simulating Oracle database I/O workloads using the same I/O software stack as Oracle. 
  • Orion can also simulate the effect of striping performed by Oracle Automatic Storage Management.
  • Orion can run tests using different I/O loads to measure performance metrics such as MBPS, IOPS, and I/O latency.
  • Load is expressed in terms of the number of outstanding asynchronous I/Os. 
    • For random workloads, using either large or small sized I/Os, the load level is the number of outstanding I/Os. 
    • For large sequential workloads, the load level is a combination of the number of sequential streams and the number of outstanding I/Os per stream. 
    • Testing a given workload at a range of load levels can help you understand how performance is affected by load

04

fio 

fio was created to allow benchmarking specific disk IO workloads. It can issue its IO requests using one of many synchronous and asynchronous IO APIs, and can also use various APIs which allow many IO requests to be issued with a single API call.

  • fio lets users tune how large the files it uses are.
  • It controls at what offsets in those files I/O happens, and how much delay, if any, there is between issuing I/O requests.
  • It also controls what, if any, filesystem sync calls are issued between each I/O request. 
    • A sync call tells the operating system to make sure that any information that is cached in memory has been saved to disk and can thus introduce a significant delay.
  • The options to fio allow customers to issue very precisely defined I/O patterns and see how long it takes their disk subsystem to complete these tasks; a sketch follows this list.
  • fio is packaged in the standard repository for Fedora 8 and is available for openSUSE through the openSUSE Build Service
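
A hedged sketch of driving fio from Python against an EBS device; the device path and workload parameters are assumptions chosen to exercise random reads (writing with direct I/O to a raw device would destroy data, so this only reads).

```python
# Sketch: run a random-read fio benchmark against an EBS volume and print results.
import subprocess

cmd = [
    "fio",
    "--name=ebs-randread",
    "--filename=/dev/xvdf",   # placeholder: the EBS device under test
    "--rw=randread",          # random reads, the IOPS-bound pattern for SSD volumes
    "--bs=16k",               # block size per I/O
    "--iodepth=32",           # outstanding I/Os per job
    "--numjobs=4",
    "--direct=1",             # bypass the page cache
    "--ioengine=libaio",
    "--runtime=60",
    "--time_based",
    "--group_reporting",
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
print(result.stdout)          # IOPS, throughput, and latency percentiles
```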

Amazon EBS CloudWatch Metrics


Amazon CloudWatch metrics are statistical data that users can use to view, analyze, and set alarms on the operational behavior of the volumes. There are two types of monitoring data available for Amazon EBS volumes:

  1. Basic: Data is available automatically in 5-minute periods at no charge. This includes data for the root device volumes for EBS-backed instances.
  2. Detailed: Provisioned IOPS SSD (io1 and io2) volumes automatically send one-minute metrics to CloudWatch.

Amazon Elastic Block Store (Amazon EBS) sends data points to CloudWatch for several metrics. General Purpose SSD (gp2 and gp3), Throughput Optimized HDD (st1), Cold HDD (sc1), and Magnetic (standard) volumes automatically send five-minute metrics to CloudWatch. Provisioned IOPS SSD (io1 and io2) volumes automatically send one-minute metrics to CloudWatch. Data is reported to CloudWatch only when the volume is attached to an instance.

VolumeReadBytes: Provides information on the read operations in a specified period of time. The Sum statistic reports the total number of bytes transferred during the period. The Average statistic reports the average size of each read operation during the period, except on volumes attached to a Nitro-based instance, where the average represents the average over the specified period.

  • The SampleCount statistic reports the total number of read operations during the period, except on volumes attached to a Nitro-based instance, where the sample count represents the number of data points used in the statistical calculation. For Xen instances, data is reported only when there is read activity on the volume.

VolumeWriteBytes: Provides information on the write operations in a specified period of time. The Sum statistic reports the total number of bytes transferred during the period. The Average statistic reports the average size of each write operation during the period, except on volumes attached to a Nitro-based instance, where the average represents the average over the specified period.

  • The SampleCount statistic reports the total number of write operations during the period, except on volumes attached to a Nitro-based instance, where the sample count represents the number of data points used in the statistical calculation. For Xen instances, data is reported only when there is write activity on the volume.

VolumeReadOps: The total number of read operations in a specified period of time. To calculate the average read operations per second (read IOPS) for the period, divide the total read operations in the period by the number of seconds in that period.

VolumeWriteOps: The total number of write operations in a specified period of time. To calculate the average write operations per second (write IOPS) for the period, divide the total write operations in the period by the number of seconds in that period.
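
A worked boto3 sketch of the IOPS calculation just described: pull the Sum of VolumeReadOps and VolumeWriteOps from CloudWatch and divide by the period length in seconds. The volume ID is a placeholder.

```python
# Sketch: compute average read/write IOPS per 5-minute period for one volume.
from datetime import datetime, timedelta, timezone
import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")
volume_id = "vol-0123456789abcdef0"  # hypothetical
period = 300                         # 5-minute periods (basic monitoring)
now = datetime.now(timezone.utc)

def iops(metric_name):
    stats = cw.get_metric_statistics(
        Namespace="AWS/EBS",
        MetricName=metric_name,
        Dimensions=[{"Name": "VolumeId", "Value": volume_id}],
        StartTime=now - timedelta(hours=1),
        EndTime=now,
        Period=period,
        Statistics=["Sum"],
    )
    # Average operations per second = Sum of operations / seconds in the period.
    return [dp["Sum"] / period for dp in stats["Datapoints"]]

print("read IOPS per period:", iops("VolumeReadOps"))
print("write IOPS per period:", iops("VolumeWriteOps"))
```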

VolumeTotalReadTime: This metric is not supported with Multi-Attach enabled volumes. The total number of seconds spent by all read operations that completed in a specified period of time. If multiple requests are submitted at the same time, this total could be greater than the length of the period. 

  • For Xen instances, data is reported only when there is read activity on the volume.

VolumeTotalWriteTime: This metric is not supported with Multi-Attach enabled volumes. The total number of seconds spent by all write operations that completed in a specified period of time. If multiple requests are submitted at the same time, this total could be greater than the length of the period.

  • For Xen instances, data is reported only when there is write activity on the volume.

VolumeIdleTime: This metric is not supported with Multi-Attach enabled volumes. The total number of seconds in a specified period of time when no read or write operations were submitted.

  • The Average statistic on this metric is not relevant for volumes attached to Nitro-based instances.

VolumeQueueLength: The number of read and write operation requests waiting to be completed in a specified period of time.

  • The Sum statistic on this metric is not relevant for volumes attached to Nitro-based instances.

VolumeThroughputPercentage: This metric is not supported with Multi-Attach enabled volumes. Used with Provisioned IOPS SSD volumes only. The percentage of I/O operations per second (IOPS) delivered of the total IOPS provisioned for an Amazon EBS volume. Provisioned IOPS SSD volumes deliver their provisioned performance 99.9 percent of the time.

  • During a write, if there are no other pending I/O requests in a minute, the metric value will be 100 percent. Also, a volume’s I/O performance may become degraded temporarily due to an action users have taken.

VolumeConsumedReadWriteOps: Used with Provisioned IOPS SSD volumes only. The total amount of read and write operations (normalized to 256K capacity units) consumed in a specified period of time. I/O operations that are smaller than 256K each count as 1 consumed IOPS. I/O operations that are larger than 256K are counted in 256K capacity units. 

BurstBalance: Used with General Purpose SSD (gp2 and gp3), Throughput Optimized HDD (st1), and Cold HDD (sc1) volumes only. Provides information about the percentage of I/O credits (for gp2 and gp3) or throughput credits (for st1 and sc1) remaining in the burst bucket. Data is reported to CloudWatch only when the volume is active. If the volume is not attached, no data is reported.

  • The Sum statistic on this metric is not relevant for volumes attached to instances built on the Nitro System.
