Amazon EFS

Amazon Elastic File System (Amazon EFS) delivers a simple, scalable, elastic, highly available, and highly durable network file system as a service to EC2 instances. It supports Network File System versions 4 (NFSv4) and 4.1 (NFSv4.1), which makes it easy to migrate enterprise applications to AWS or build new ones. We recommend clients run NFSv4.1 to take advantage of the many performance benefits found in the latest version, including scalability and parallelism. Users can create and configure file systems quickly and easily through a simple web services interface. Users don't need to provision storage in advance, and there is no minimum fee or setup cost; users simply pay for the storage they use.

  • Amazon EFS is designed to provide a highly scalable network file system that can grow to petabytes, which allows massively parallel access from EC2 instances to data within a Region.
  • Amazon EFS is also highly available and highly durable because it stores data and metadata across multiple Availability Zones in a Region.
  • Amazon EFS is well suited to support a broad spectrum of use cases from highly parallelized, scale-out workloads that require the highest possible throughput to single-threaded, latency-sensitive workloads. 
  • Use cases include lift-and-shift enterprise applications, big data analytics, web serving and content management, application development and testing, media and entertainment workflows, database backups, and container storage.

Amazon EFS Benefits

Amazon EFS automatically and instantly scales the file system storage capacity up or down as users add or remove files without disrupting the applications, dynamically providing the storage capacity as needed. Amazon EFS is a fully managed service providing shared file system storage for Linux workloads. It provides a simple interface allowing users to create and configure file systems quickly and manages the file storage infrastructure, removing the complexity of deploying, patching, and maintaining the underpinnings of a file system.

Amazon EFS allows users to securely access files using their existing security infrastructure. Control access to Amazon EFS file systems with POSIX permissions, Amazon VPC, and AWS IAM. Users can secure data by encrypting it at rest and in transit. Amazon EFS also meets many eligibility and compliance requirements to help meet regulatory needs. See the AWS compliance documentation for the list of compliance programs in scope for Amazon EFS.

Amazon EFS provides secure access for thousands of simultaneous connections from Amazon EC2 instances and on-premises servers using a traditional file permissions model, file locking capabilities, and a hierarchical directory structure via the NFSv4 protocol. Amazon EC2 instances can access a file system across AZs, Regions, and VPCs, while on-premises servers can access it using AWS Direct Connect or AWS VPN.

Amazon EFS is designed to provide the throughput, IOPS, and low latency needed for Linux workloads. Throughput and IOPS scale as a file system grows and can burst to higher throughput levels for short periods of time to support the unpredictable performance needs of file workloads. For the most demanding workloads, Amazon EFS can support performance over 10 GB/sec and more than 500,000 IOPS.

 

Amazon EFS Features

Durability and Availability: Amazon EFS is designed to be highly durable and highly available. Each Amazon EFS file system object (such as a directory, file, or link) is redundantly stored across multiple Availability Zones within a Region. Amazon EFS is designed to be as highly durable and available as Amazon S3.

  • All files and directories are redundantly stored within and across multiple Availability Zones in a region to prevent the loss of data from the failure of any single component.
  • The distributed architecture of Amazon EFS provides data protection from an AZ outage, system and component failures, and network connection errors.

Scalability and Elasticity: Amazon EFS automatically scales file system storage capacity up or down as files are added or removed, without disrupting applications and while eliminating the time-consuming administration tasks associated with traditional storage management (such as planning, buying, provisioning, and monitoring). An EFS file system can grow from empty to multiple petabytes automatically, with no provisioning, allocating, or administration.

  • Amazon EFS is designed to be highly scalable both in storage capacity and throughput performance. It can grow to petabyte scale and allows massively parallel access from Amazon EC2 instances to the data.
  • With Amazon EFS, throughput and IOPS scale as a file system grows, and file operations are delivered with consistent, low latencies.
    • For the most demanding workloads, Amazon EFS can support performance over 10 GB/sec and over 500,000 IOPS.
  • Amazon EFS is designed to provide the throughput, IOPS, and low latency needed for a broad range of workloads.

Performance: Amazon EFS file systems are distributed across an unconstrained number of storage servers, enabling file systems to grow elastically to petabyte-scale and allowing massively parallel access from EC2 instances within a Region. This distributed data storage design means that multi-threaded applications and applications that concurrently access data from multiple EC2 instances can drive substantial levels of aggregate throughput and IOPS. 

  • General Purpose performance mode is the default mode and is appropriate for most file systems. However, if the overall workload exceeds 7,000 file operations per second per file system, it is better to use Max I/O performance mode.
  • Max I/O performance mode is optimized for applications where tens, hundreds, or thousands of EC2 instances are accessing the file system. With this mode, file systems scale to higher levels of aggregate throughput and operations per second with a tradeoff of slightly higher latencies for file operations.

Due to the spiky nature of file-based workloads, Amazon EFS is optimized to burst at high-throughput levels for short periods of time, while delivering low levels of throughput the rest of the time. A file system can drive throughput continuously at its baseline rate. Amazon EFS offers two throughput modes: 

  • With Bursting Throughput, the throughput scales with the size of the file system, dynamically bursting as needed to support the spiky nature of many file-based workloads.
    • Throughput and IOPS scale as a file system grows and can burst to higher throughput levels for short periods of time to support the unpredictable performance needs of file workloads.
  • Provisioned Throughput is designed to support applications that require higher dedicated throughput than the default Bursting mode and can be configured independently of the amount of data stored on the file system.

Interfaces: Amazon offers a network protocol-based HTTP (RFC 2616) API for managing Amazon EFS, as well as support for EFS operations within the AWS SDKs and the AWS CLI. The API actions and EFS operations are used to create, delete, and describe file systems; create, delete, and describe mount targets; create, delete, and describe tags; and describe and modify mount target security groups. If you prefer to work with a graphical user interface, the AWS Management Console gives you all the capabilities of the API in a browser interface.

  • EFS file systems use Network File System version 4 (NFSv4) and version 4.1 (NFSv4.1) for data access. We recommend using NFSv4.1 to take advantage of the performance benefits in the latest version, including scalability and parallelism.

Cost Model: Amazon EFS provides the capacity needed, when needed, without requiring storage to be provisioned in advance. It is also designed to be highly available and highly durable, as each file system object (such as a directory, file, or link) is redundantly stored across multiple Availability Zones. This highly durable, highly available architecture is built into the pricing model: customers pay only for the amount of storage put into the file system. As files are added, the EFS file system dynamically grows, and customers pay only for the storage used.

  • As files are removed, the EFS file system dynamically shrinks, and users stop paying for the data deleted. There are no charges for bandwidth or requests, and there are no minimum commitments or up-front fees.

Security: There are three levels of access control to consider when planning EFS file system security: IAM permissions for API calls; security groups for EC2 instances and mount targets; and Network File System-level users, groups, and permissions. IAM enables access control for administering EFS file systems, allowing users to specify an IAM identity (either an IAM user or IAM role) to create, delete, and describe EFS file system resources.

The primary resource in Amazon EFS is a file system. All other EFS resources, such as mount targets and tags, are referred to as subresources. Identity-based policies, like IAM policies, are used to assign permissions to IAM identities to manage the EFS resources and subresources. Security groups play a critical role in establishing network connectivity between EC2 instances and EFS file systems.

  • Users associate one security group with an EC2 instance and another security group with an EFS mount target associated with the file system. These security groups act as firewalls and enforce rules that define the traffic flow between EC2 instances and EFS file systems.
  • EFS file system objects work in a Unix-style mode, which defines permissions needed to perform actions on objects. Users and groups are mapped to numeric identifiers, which are mapped to EFS users to represent file ownership.
  • Files and directories within Amazon EFS are owned by a single owner and a single group. Amazon EFS uses these numeric IDs to check permissions when a user attempts to access a file system object.

Storage: Amazon EFS is designed to meet the needs of multi-threaded applications and applications that concurrently access data from multiple EC2 instances and that require substantial levels of aggregate throughput and input/output operations per second (IOPS). Its distributed design enables high levels of availability, durability, and scalability, which results in a small latency overhead for each file operation. 

This makes Amazon EFS ideal for growing datasets consisting of larger files that need both high performance and multi-client access. Amazon EFS supports highly parallelized workloads and is designed to meet the performance needs of big data and analytics, media processing, content management, web serving, and home directories. Amazon EFS doesn’t suit all storage situations. The following are some AWS storage options:

  • Archival data: Data that requires encrypted archival storage with infrequent read access with a long recovery time objective (RTO) can be stored in Amazon Glacier more cost-effectively.
  • Relational database storage: In most cases, relational databases require storage that is mounted, accessed, and locked by a single node (an EC2 instance, for example). When running relational databases on AWS, look at leveraging Amazon RDS or Amazon EC2 with Amazon EBS Provisioned IOPS (PIOPS) volumes.
  • Temporary storage: Consider using local instance store volumes for needs such as scratch disks, buffers, queues, and caches. Temporary storage includes the Amazon EC2 local instance store.

Fully managed: Amazon EFS is a fully managed service providing NFS shared file system storage for Linux workloads. Amazon EFS makes it simple to create and configure file systems. Users don’t have to worry about managing file servers or storage, updating hardware, configuring software, or performing backups. In seconds, users can create a fully managed file system by using the AWS Management Console, the AWS CLI, or an AWS SDK.

  • Amazon EFS is well suited to a broad range of use cases, from home directories to business-critical applications. Customers can use Amazon EFS to move NFS-based file storage workloads to managed file systems on the AWS Cloud.
  • Other use cases include: analytics, web serving and content management, application development and testing, media and entertainment workflows, database backups, and containers and serverless storage.

Encryption: Amazon EFS offers encryption for data at rest and in transit providing a comprehensive encryption solution to secure both stored data and data in flight. Data at rest is transparently encrypted using encryption keys managed by the AWS Key Management Service (KMS), eliminating the need to build and maintain a key management infrastructure.

  • Encryption of data in transit uses industry-standard Transport Layer Security (TLS) to secure network traffic without having to modify the applications. Refer to the user documentation on Encryption for more information about encrypting file system data.

Use cases

Amazon EFS is designed to meet the performance needs of the following use cases.

Containers and serverless persistent file storage

Amazon EFS enables customers to persist data and state from their containers and serverless functions, providing fully managed, elastic, highly available, scalable, and high-performance, cloud-native shared file systems. These same attributes are shared by Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), AWS Fargate, and AWS Lambda, so developers don't need to design for these traits; the services are simply ready for modern application development with data persistence.

  • Amazon EFS allows data to be persisted separately from compute, and enables applications to have cross-AZ availability and durability.
  • Amazon EFS provides a shared persistence layer that allows stateful applications to elastically scale up and down, such as for DevOps, web serving, web content systems, media processing, machine learning, analytics, search index, and stateful microservices applications.

Move to managed file systems

Amazon EFS provides the scalability, elasticity, availability, and durability to be the file store for enterprise applications and for applications delivered as a service. Its standard file system interface, file system permissions, and directory hierarchy make it easy to migrate enterprise applications from on-premises to the AWS cloud, and to build new ones.

Analytics & machine learning

Amazon EFS provides the ease of use, scale, performance, and consistency needed for machine learning and big data analytics workloads. Data scientists can use EFS to create personalized environments, with home directories storing notebook files, training data, and model artifacts. 

  • Amazon SageMaker integrates with EFS for training jobs, allowing data scientists to iterate quickly.
 
Web serving & content management

Amazon EFS provides a durable, high throughput file system for content management systems and web serving applications that store and serve information for a range of applications like websites, online publications, and archives. Since Amazon EFS adheres to the expected file system directory structure, file naming conventions, and permissions that web developers are accustomed to, it can easily integrate with web applications.

Application testing & development

Amazon EFS provides development environments with a common storage repository that enables sharing code and other files in a secure and organized way. Users can provision, duplicate, scale, or archive test, development, and production environments with a few clicks, enabling organizations to be more agile and responsive to customer needs.

  • Amazon EFS delivers a scalable and highly available solution that is ideal for testing and development workloads.

Media & entertainment

Media workflows like video editing, studio production, broadcast processing, sound design, and rendering often depend on shared storage to manipulate large files. Amazon EFS provides a strong data consistency model with high throughput and shared file access which can cut the time it takes to perform these jobs and consolidate multiple local file repositories into a single location for all users.

Database backups

Amazon EFS presents a standard file system that can be easily mounted with NFSv4 from database servers. This provides an ideal platform to create portable database backups using native application tools or enterprise backup applications. Many businesses want to take advantage of the flexibility of storing database backups in the cloud either for temporary protection during updates or for development and test.

  • Amazon EFS provides the scale and performance required for big data applications that require high throughput to compute nodes coupled with read-after-write consistency and low-latency file operations.

Home Directories

Amazon EFS can provide storage for organizations that have many users that need to access and share common datasets. An administrator can use Amazon EFS to create a file system accessible to people across an organization and establish permissions for users and groups at the file or directory level.

Amazon EFS Performance

Bursting Mode


With Bursting Throughput mode, throughput on Amazon EFS scales as a file system stored in the standard storage class grows. File-based workloads are typically spiky, driving high levels of throughput for short periods of time, and low levels of throughput the rest of the time. To accommodate this, Amazon EFS is designed to burst to high throughput levels for periods of time.

All file systems, regardless of size, can burst to 100 MiB/s of throughput. Those over 1 TiB in the standard storage class can burst to 100 MiB/s per TiB of data stored in the file system. For example, a 10-TiB file system can burst to 1,000 MiB/s of throughput (10 TiB x 100 MiB/s/TiB). The portion of time a file system can burst is determined by its size. The bursting model is designed so that typical file system workloads can burst virtually any time they need to. For file systems using Bursting Throughput mode, the allowed throughput is determined based on the amount of the data stored in the Standard storage class only. 
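The burst-rate rule above can be sketched as a small helper. This is a simplified model based only on the figures quoted here (100 MiB/s minimum, 100 MiB/s per TiB above 1 TiB); actual limits depend on Region and storage class:

```python
def burst_throughput_mibps(standard_storage_tib: float) -> float:
    """Peak burst rate under Bursting Throughput mode (simplified).

    Every file system can burst to at least 100 MiB/s; file systems
    storing more than 1 TiB in the Standard class can burst to
    100 MiB/s per TiB of data stored.
    """
    return max(100.0, 100.0 * standard_storage_tib)

print(burst_throughput_mibps(0.1))   # small file system -> 100.0
print(burst_throughput_mibps(10.0))  # 10-TiB file system -> 1000.0
```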

Amazon EFS uses a credit system to determine when file systems can burst. Each file system earns credits over time at a baseline rate that is determined by the size of the file system that is stored in the standard storage class. A file system uses credits whenever it reads or writes data. The baseline rate is 50 MiB/s per TiB of storage (equivalently, 50 KiB/s per GiB of storage).

Accumulated burst credits give the file system the ability to drive throughput above its baseline rate. A file system can drive throughput continuously at its baseline rate, and whenever it’s inactive or driving throughput below its baseline rate, the file system accumulates burst credits.

For example, a 100-GiB file system can burst (at 100 MiB/s) for 5 percent of the time if it’s inactive for the remaining 95 percent. Over a 24-hour period, the file system earns 432,000 MiBs worth of credit, which can be used to burst at 100 MiB/s for 72 minutes.

File systems larger than 1 TiB can always burst for up to 50 percent of the time if they are inactive for the remaining 50 percent.
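The worked example above can be reproduced with a short calculation. It follows the document's simplified arithmetic (a 100-GiB file system earning a 5 MiB/s baseline, which treats 1 TiB as 1,000 GiB); exact AWS accounting may differ slightly:

```python
BURST_RATE_MIBPS = 100.0       # burst rate for a 100-GiB file system
BASELINE_MIBPS_PER_TIB = 50.0  # baseline credit-earning rate

def daily_credits_mib(size_gib: float) -> float:
    """Burst credits (in MiB) earned over 24 hours at the baseline rate."""
    baseline_mibps = BASELINE_MIBPS_PER_TIB * size_gib / 1000.0  # simplified: 1 TiB ~ 1,000 GiB
    return baseline_mibps * 24 * 60 * 60

credits = daily_credits_mib(100)  # 100-GiB file system
burst_minutes = credits / BURST_RATE_MIBPS / 60

print(credits)        # 432000.0 MiB of credit earned per day
print(burst_minutes)  # 72.0 minutes of bursting at 100 MiB/s
```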

Managing Burst Credits

When a file system has a positive burst credit balance, it can burst. Users can see the burst credit balance for a file system by viewing the BurstCreditBalance Amazon CloudWatch metric for Amazon EFS. 

The bursting capability (both in terms of length of time and burst rate) of a file system is directly related to its size. Larger file systems can burst at higher rates for longer periods of time. In some cases, an application might need to burst more. Users can use historical throughput patterns to calculate the file system size needed to sustain the required level of activity:

  1. Identify the throughput needs by looking at historical usage. From the Amazon CloudWatch console, check the sum statistic of the TotalIOBytes metric with daily aggregation, for the past 14 days. Identify the day with the largest value for TotalIOBytes.
  2. Divide this number by 86,400 (the number of seconds in a day) and then by 1,024 to get the average KiB/s the application required for that day.
  3. Calculate the file system size (in GiB) required to sustain this average throughput by dividing the average throughput number (in KiB/s) by the baseline throughput number (50 KiB/s/GiB) that EFS provides.
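The three steps above reduce to one line of arithmetic. A sketch, using the 50 KiB/s-per-GiB baseline rate quoted earlier in this section:

```python
def required_size_gib(busiest_day_total_io_bytes: float) -> float:
    """File system size (GiB) whose baseline rate sustains the observed load.

    Step 2: convert the busiest day's TotalIOBytes sum to average KiB/s.
    Step 3: divide by the 50 KiB/s-per-GiB baseline rate.
    """
    avg_kib_per_sec = busiest_day_total_io_bytes / (24 * 60 * 60) / 1024
    return avg_kib_per_sec / 50.0

# A day that averaged 50 KiB/s of traffic needs a 1-GiB file system:
print(required_size_gib(50 * 1024 * 86400))  # -> 1.0
```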
 

Provisioned Mode

Provisioned Throughput mode is available for applications with high throughput-to-storage (MiB/s per TiB) ratios, or with requirements greater than those allowed by Bursting Throughput mode. For example, say Amazon EFS is used for development tools, web serving, or content management applications where the amount of data in the file system is low relative to throughput demands. The file system can then get the high levels of throughput the applications require without having to pad the file system with data.

Additional charges are associated with using Provisioned Throughput mode. In this mode, users are billed for the storage used and for any throughput provisioned above what the amount of data stored in the Standard storage class would otherwise provide. Throughput limits remain the same regardless of the throughput mode chosen.

If the file system is in Provisioned Throughput mode, users can increase the provisioned throughput of the file system as often as they want. Users can decrease the file system throughput in Provisioned Throughput mode as long as it's been more than 24 hours since the last decrease. Additionally, users can change between Provisioned Throughput mode and the default Bursting Throughput mode as long as it's been more than 24 hours since the last throughput mode change.

If the file system’s metered size provides a higher baseline rate than the amount of throughput provisioned, the file system follows the default Amazon EFS Bursting Throughput model. Users don’t incur charges for Provisioned Throughput below the file system’s entitlement in Bursting Throughput mode. 
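That billing rule can be sketched as follows. This is a simplified model of the statement above (throughput at or below the Bursting-mode entitlement is not charged); consult current EFS pricing for the actual billing dimensions:

```python
BASELINE_MIBPS_PER_TIB = 50.0  # Bursting-mode baseline entitlement

def billable_provisioned_mibps(provisioned_mibps: float,
                               standard_storage_tib: float) -> float:
    """Provisioned throughput subject to charges (simplified model).

    Throughput at or below the Bursting-mode baseline earned by data
    stored in the Standard class incurs no Provisioned Throughput charge.
    """
    entitlement = BASELINE_MIBPS_PER_TIB * standard_storage_tib
    return max(0.0, provisioned_mibps - entitlement)

print(billable_provisioned_mibps(100.0, 1.0))  # -> 50.0 (billed MiB/s)
print(billable_provisioned_mibps(10.0, 1.0))   # -> 0.0 (covered by baseline)
```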

Using the Right Throughput Mode

By default, AWS recommends running the application in Bursting Throughput mode. If you experience performance issues, check the BurstCreditBalance CloudWatch metric. If the value of the BurstCreditBalance metric is either zero or steadily decreasing, Provisioned Throughput mode is right for the application.

In some cases, the file system might run in Provisioned Throughput mode with no performance issues. However, at the same time, BurstCreditBalance continuously increases for long periods of normal operations. In such a case, consider decreasing the amount of provisioned throughput to reduce costs.

If planning on migrating large amounts of data into the file system, consider switching to Provisioned Throughput mode. In this case, users can provision a higher throughput beyond the allotted burst capability to accelerate loading data. Following the migration, consider lowering the amount of provisioned throughput or switch to Bursting Throughput mode for normal operations.

Compare the average throughput being driven to the file system with the PermittedThroughput metric. If the calculated average throughput is less than the permitted throughput, consider lowering the provisioned throughput to reduce costs.

In some cases, the calculated average throughput during normal operations might be at or below the baseline throughput-to-storage ratio for Bursting Throughput mode. That ratio is 50 MiB/s per TiB of data stored. In such cases, consider switching to Bursting Throughput mode. In other cases, the calculated average throughput during normal operations might be above this ratio. In these cases, consider lowering the provisioned throughput to a point between the current provisioned throughput and the calculated average throughput during normal operations.
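The decision logic in the last few paragraphs can be summarized as a helper. This is a sketch of the guidance above, not an official sizing tool; the threshold is the 50 MiB/s-per-TiB ratio quoted in the text:

```python
BASELINE_MIBPS_PER_TIB = 50.0

def suggest_throughput_settings(avg_mibps: float,
                                storage_tib: float,
                                provisioned_mibps: float):
    """Suggest a throughput mode from measured average throughput."""
    baseline = BASELINE_MIBPS_PER_TIB * storage_tib
    if avg_mibps <= baseline:
        # Bursting mode's baseline already covers the observed load.
        return ("bursting", None)
    if avg_mibps < provisioned_mibps:
        # Lower provisioning to between the average and current setting.
        return ("provisioned", (avg_mibps + provisioned_mibps) / 2)
    return ("provisioned", provisioned_mibps)

print(suggest_throughput_settings(40.0, 1.0, 100.0))  # ('bursting', None)
print(suggest_throughput_settings(80.0, 1.0, 120.0))  # ('provisioned', 100.0)
```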

Users can change the throughput mode of the file system using the AWS Management Console, the AWS CLI, or the EFS API. With the CLI, use the update-file-system action. With the EFS API, use the UpdateFileSystem operation.


AWS Backup

AWS Backup is a simple and cost-effective way to protect data by backing up Amazon EFS file systems. AWS Backup is a unified backup service designed to simplify the creation, migration, restoration, and deletion of backups, while providing improved reporting and auditing. AWS Backup makes it easier to develop a centralized backup strategy for legal, regulatory, and professional compliance. AWS Backup also makes protecting AWS storage volumes, databases, and file systems simpler by providing a central place where users can do the following:

  • Configure and audit the AWS resources that users want to back up
  • Automate backup scheduling
  • Set retention policies
  • Monitor all recent backup and restore activity

Amazon EFS is natively integrated with AWS Backup. Users can use the EFS console, API, and AWS Command Line Interface (AWS CLI) to enable automatic backups for the file system. Automatic backups use a default backup plan with the AWS Backup recommended settings for automatic backups. Users can also use AWS Backup to manually set the backup plans where they specify the backup frequency, when to back up, how long to retain backups, and a lifecycle policy for backups. Users can then assign Amazon EFS file systems, or other AWS resources, to that backup plan.

Incremental backups

AWS Backup performs incremental backups of EFS file systems. During the initial backup, a copy of the entire file system is made. During subsequent backups of that file system, only files and directories that have been changed, added, or removed are copied. With each incremental backup, AWS Backup retains the necessary reference data to allow a full restore. This approach minimizes the time required to complete the backup and saves on storage costs by not duplicating data.

Backup consistency

Amazon EFS is designed to be highly available. Users can access and modify Amazon EFS file systems while a backup is occurring in AWS Backup. However, inconsistencies, such as duplicated, skewed, or excluded data, can occur if users modify the file system while the backup is occurring. These modifications include write, rename, move, or delete operations. To ensure consistent backups, we recommend pausing applications or processes that modify the file system for the duration of the backup, or scheduling backups to occur during periods when the file system is not being modified.

Performance

In general, users can expect the following backup rates with AWS Backup:

  • 100 MB/s for file systems composed of mostly large files
  • 500 files/s for file systems composed of mostly small files
  • The maximum duration for a backup operation in AWS Backup is seven days.
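Those rates give a rough way to estimate whether a backup fits the seven-day window. A planning sketch, using the approximate 100 MB/s and 500 files/s figures quoted above (real throughput varies with workload and file mix):

```python
MAX_BACKUP_SECONDS = 7 * 24 * 60 * 60  # seven-day limit per backup operation

def estimated_backup_seconds(large_file_mb: float, small_file_count: int) -> float:
    """Rough backup duration: large files at 100 MB/s, small files at 500 files/s."""
    return large_file_mb / 100.0 + small_file_count / 500.0

secs = estimated_backup_seconds(large_file_mb=500_000,   # ~500 GB of large files
                                small_file_count=1_000_000)
print(secs)                        # 7000.0 seconds, just under 2 hours
print(secs <= MAX_BACKUP_SECONDS)  # True: fits the seven-day window
```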

Complete restore operations generally take longer than the corresponding backup. Using AWS Backup doesn't consume accumulated burst credits, and it doesn't count against the General Purpose mode file operation limits.

Amazon EFS Works with Amazon EC2
Completion window

Users can optionally specify a completion window for a backup. This window defines the period of time in which a backup must complete. If specifying a completion window, consider the expected performance and the size and makeup of the file system to help make sure the backup can complete during the window.

Backups that don’t complete during the specified window are flagged with an incomplete status. During the next scheduled backup, AWS Backup resumes at the point that it left off. Users can see the status of all of the backups on the AWS Backup Management Console.

EFS storage classes

Users can use AWS Backup to back up all data in an EFS file system, whatever storage class the data is in. Users don’t incur data access charges when backing up an EFS file system that has lifecycle management enabled and has data in the Infrequent Access (IA) storage class.

  • When users restore a recovery point, all files are restored to the Standard storage class. 

On-demand backups

Using either the AWS Backup Management Console or the CLI, you can save a single resource to a backup vault on-demand. Unlike with scheduled backups, users don’t need to create a backup plan to initiate an on-demand backup. Users can still assign a lifecycle to the backup, which automatically moves the recovery point to the cold storage tier and notes when to delete it.

Concurrent backups

AWS Backup limits backups to one concurrent backup per resource. Therefore, scheduled or on-demand backups may fail if a backup job is already in progress. 

Automatic backups

When creating a file system using the Amazon EFS console, automatic backups are turned on by default. Users can also turn on automatic backups after creating the file system using the CLI or API. The default EFS backup plan uses the AWS Backup recommended settings for automatic backups: daily backups with a 35-day retention period. Backups created using the default EFS backup plan are stored in a default EFS backup vault, which is also created by EFS on the user's behalf. The default backup plan and backup vault cannot be deleted.

  • Users can see all automatic backups and edit the default EFS backup plan settings using the AWS Backup Management Console. Users can turn off automatic backups at any time using the Amazon EFS console or CLI.

Monitoring

AWS provides various tools that can be used to monitor Amazon EFS. Some of these tools can be configured to do the monitoring automatically, while others require manual intervention. AWS recommends automating monitoring tasks as much as possible.

Automated monitoring tools

Users can use the following automated monitoring tools to watch Amazon EFS and report when something is wrong:

  • Amazon CloudWatch Alarms – Watch a single metric over a time period that users specify, and perform one or more actions based on the value of the metric relative to a given threshold over a number of time periods. The action can be a notification sent to an Amazon Simple Notification Service (Amazon SNS) topic or an Amazon EC2 Auto Scaling policy. CloudWatch alarms do not invoke actions simply because they are in a particular state; the state must have changed and been maintained for a specified number of periods.
  • Amazon CloudWatch Logs – Monitor, store, and access log files from AWS CloudTrail or other sources.
  • Amazon CloudWatch Events – Match events and route them to one or more target functions or streams to make changes, capture state information, and take corrective action.
  • AWS CloudTrail Log Monitoring – Share log files between accounts, monitor CloudTrail log files in real time by sending them to CloudWatch Logs, write log processing applications in Java, and validate that log files have not changed after delivery by CloudTrail.

Manual monitoring tools

Another important part of monitoring Amazon EFS involves manually monitoring those items that the Amazon CloudWatch alarms don't cover. The Amazon EFS, CloudWatch, and other AWS console dashboards provide an at-a-glance view of the state of the AWS environment. AWS recommends also checking the log files on the file system.

From the Amazon EFS console, users can find the following items for the file systems:

  • The current metered size
  • The number of mount targets
  • The lifecycle state

The CloudWatch home page shows the following:

  • Current alarms and status
  • Graphs of alarms and resources
  • Service health status

In addition, users can use CloudWatch to do the following:

  • Create customized dashboards to monitor the services they use
  • Graph metric data to troubleshoot issues and discover trends
  • Search and browse all AWS resource metrics
  • Create and edit alarms to be notified of problems

Security

Amazon EFS integrates with AWS Key Management Service (AWS KMS) for key management. Amazon EFS uses customer master keys (CMKs) to encrypt the file system in the following ways:

  • Encrypting metadata at rest – Amazon EFS uses the AWS managed CMK for Amazon EFS, aws/elasticfilesystem, to encrypt and decrypt file system metadata (that is, file names, directory names, and directory contents).
  • Encrypting file data at rest – Users choose the CMK used to encrypt and decrypt file data (that is, the contents of the files). Users can enable, disable, or revoke grants on this CMK. This CMK can be one of the two following types:
    • AWS managed CMK for Amazon EFS – This is the default CMK, aws/elasticfilesystem. Users are not charged to create and store a CMK, but there are usage charges. To learn more, see AWS Key Management Service pricing.

    • Customer-managed CMK – This is the most flexible master key to use, because users can configure its key policies and grants for multiple users or services. For more information on creating CMKs, see Creating Keys in the AWS Key Management Service Developer Guide.

      If using a customer-managed CMK as a master key for file data encryption and decryption, users can enable key rotation. When key rotation is enabled, AWS KMS automatically rotates the key once per year. Additionally, with a customer-managed CMK, users can choose to disable, re-enable, delete, or revoke access to the CMK at any time.
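As an illustration only, creating a file system encrypted with a customer-managed CMK can be done from the AWS CLI. The key ID below is a placeholder (substitute a real key ID or ARN), and the commands assume configured AWS credentials:

```shell
# Create an EFS file system encrypted at rest with a customer-managed CMK.
# The --kms-key-id value here is a placeholder, not a real key.
aws efs create-file-system \
    --encrypted \
    --kms-key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
    --creation-token my-encrypted-fs

# Enable automatic annual key rotation for that customer-managed CMK.
aws kms enable-key-rotation \
    --key-id 1234abcd-12ab-34cd-56ef-1234567890ab
```

If `--kms-key-id` is omitted while `--encrypted` is set, the AWS managed CMK for Amazon EFS (aws/elasticfilesystem) is used instead.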

Amazon EFS Pricing

 


With Amazon EFS, users pay only for the resources they use. There is no minimum fee and there are no setup charges. Amazon EFS offers two storage classes: the Standard storage class and the Infrequent Access storage class (EFS IA). EFS IA provides price/performance that is cost-optimized for files not accessed every day. To move data into EFS IA, simply enable Lifecycle Management for the file system and reduce storage costs by up to 92%.

Industry research and customer analysis show that, on average, 20% of files are actively used and 80% are infrequently accessed. Using that estimate, users can store files on Amazon EFS at an effective price of $0.08/GB-month. For example, here is pricing in the US East (N. Virginia) Region.
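The $0.08/GB-month figure follows directly from the 20/80 split, using the US East (N. Virginia) rates quoted in the sections that follow. A quick check of the arithmetic:

```python
standard_price = 0.30  # $/GB-month, EFS Standard storage
ia_price = 0.025       # $/GB-month, EFS Infrequent Access storage

# 20% of files actively used (Standard), 80% infrequently accessed (IA)
effective = 0.20 * standard_price + 0.80 * ia_price
print(f"${effective:.2f}/GB-month")  # $0.08/GB-month
```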

Amazon EFS Standard Storage Class

The Standard storage class is designed for active file system workloads; users pay only for the amount of file system storage used per month.

  • Standard Storage (GB-Month) is $0.30
Amazon EFS Infrequent Access Storage Class

The Infrequent Access storage class is cost-optimized for files accessed less frequently. Data stored on the Infrequent Access storage class costs less than Standard, and users pay a per-GB fee each time they read from or write to a file.

  • Infrequent Access Storage (GB-Month) is $0.025
  • Infrequent Access Requests (per GB transferred) is $0.01
Amazon EFS Bursting Throughput (Default)

In the default Bursting Throughput mode, there are no charges for bandwidth or requests and users get a baseline rate of 50 KB/s per GB of throughput included with the price of EFS Standard storage.
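The 50 KB/s-per-GB baseline scales linearly with the amount of Standard storage. A small illustration (the function is ours, not an AWS API):

```python
def baseline_throughput_mbs(gb_stored):
    """Baseline bursting throughput included with EFS Standard storage:
    50 KB/s per GB stored, i.e. 1 MB/s per 20 GB."""
    return gb_stored * 50 / 1000  # convert KB/s to MB/s

print(baseline_throughput_mbs(100))   # 5.0 MB/s for 100 GB stored
print(baseline_throughput_mbs(1024))  # 51.2 MB/s for 1 TB stored
```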

Amazon EFS Provisioned Throughput

Users can optionally select the Provisioned Throughput mode, provisioning the throughput of a file system independent of the amount of data stored and paying separately for storage and throughput. Like the default Bursting Throughput mode, the Provisioned Throughput mode includes 50 KB/s per GB (or 1 MB/s per 20 GB) of throughput in the price of EFS Standard storage. Users are billed only for throughput provisioned above what is included based on the data stored.

  • Provisioned Throughput (MB/s-Month) is $6.00
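Because only throughput provisioned above the storage-included baseline is billed, the monthly charge can be sketched as follows, at the $6.00 per MB/s-month rate quoted above (the function is ours, for illustration):

```python
PROVISIONED_RATE = 6.00  # $ per MB/s-month in US East (N. Virginia)

def provisioned_throughput_bill(gb_stored, provisioned_mbs):
    """Monthly charge for Provisioned Throughput mode: only the
    throughput above the included baseline (1 MB/s per 20 GB) is billed."""
    included_mbs = gb_stored / 20
    billable_mbs = max(provisioned_mbs - included_mbs, 0)
    return billable_mbs * PROVISIONED_RATE

# 200 GB stored includes 10 MB/s; provisioning 25 MB/s bills 15 MB/s.
print(provisioned_throughput_bill(200, 25))  # 90.0 (15 MB/s x $6.00)
```

Provisioning at or below the included baseline incurs no throughput charge at all.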
