
Cost-Optimization Strategies within AWS | by Ross Rhodes | Sep, 2022



Considerations to reduce recurring bills for EC2, Lambda, and S3

Having worked extensively with Amazon Web Services (AWS) for the past four years, one topic that surfaces frequently within my circles is the cost of running applications and databases in the cloud. Whether that be the price of storing data within the Simple Storage Service (S3), retaining snapshots of Elastic Block Store (EBS) volumes, or running Elastic Compute Cloud (EC2) instances, it’s easy for charges to accumulate and grab our attention when we review upcoming bills.

Reflecting on recent cost optimisations applied within my own work for EC2, Lambda, and S3, let’s explore how we may analyse our charges and take advantage of AWS’ functionality to reduce our bills. By no means is this a unique blog post — there’s plenty more material out there from AWS and other contributors — but I hope it serves as a useful supplement for future reference.

source: screenshot of Cost Explorer from author

Within the AWS Console, we may navigate to Cost Explorer for insight into our existing charges and forecasted future bills.

source: screenshot of AWS search from author

This directs us to the Cost Management homepage, where we may review a high-level chart of our AWS costs to date. Thankfully my personal bill is looking healthy, but commercial accounts may have notably higher figures.

source: screenshot of Cost Management from author

Selecting “View in Cost Explorer” above the bar chart, we are redirected to a more sophisticated chart with filters to break down charges. Grouping costs by service, we see in my case that tax contributes the largest amount toward my bill, followed by the Key Management Service (KMS).

source: screenshot of Cost Explorer from author

Another grouping which may prove particularly valuable is tags. With a well-defined framework of tag keys and values applied across AWS resources, billing breakdowns by tag offer far greater insight into the source of charges, especially if resources are tagged by department, team, or other layers of organisational granularity.
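As an illustration, a tag-grouped Cost Explorer query can also be made programmatically. Below is a minimal sketch of the request payload for boto3’s `get_cost_and_usage`; the `team` tag key and the date range are assumptions for the example.

```python
# Sketch of a Cost Explorer request grouped by a cost-allocation tag.
# The "team" tag key and the date range are illustrative assumptions.
request = {
    "TimePeriod": {"Start": "2022-08-01", "End": "2022-09-01"},
    "Granularity": "MONTHLY",
    "Metrics": ["UnblendedCost"],
    "GroupBy": [{"Type": "TAG", "Key": "team"}],
}

# With credentials configured, this would be passed to the Cost Explorer API:
#   import boto3
#   response = boto3.client("ce").get_cost_and_usage(**request)
```

Note that tag keys must first be activated as cost-allocation tags in the billing console before they appear in these breakdowns.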

With a better understanding of the source of our costs, let’s explore what features AWS has to offer in a few of its most popular services to help us reduce bills, starting first with EC2.

Covering a wide range of different resources, EC2 is an umbrella service for server instances, traffic load balancers, and elastic network interfaces, amongst other infrastructure. Instances may require persistent storage, taking the form of EBS volumes. Furthermore, we may want to easily spin up replica instances matching existing configuration using machine images, or we may want to retain back-ups of our EBS volumes using snapshots.

Costs easily creep up across these different resources and strategies. Starting first with EC2 instances, we may adopt EC2 Instance Savings Plans to reduce compute charges for specific instance types and AWS regions, or explore Compute Savings Plans to reduce compute costs irrespective of type and region. The former offers savings of up to 72% at the time of writing, whilst the latter saves up to 66% and extends to ECS Fargate and Lambda functions.

EC2 also offers different purchasing options for instances. This includes reserved instances, where we may commit to specific configurations for one or three years at reduced cost, as well as spot instances, where we may pay significantly lower costs if we’re happy for applications to be interrupted.
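For interruption-tolerant workloads, a Spot request can be expressed directly in the `run_instances` call. The sketch below shows the relevant arguments; the AMI ID, instance type, and maximum price are placeholders, not recommendations.

```python
# Sketch of EC2 run_instances arguments requesting a Spot instance.
# AMI ID, instance type, and max price are illustrative placeholders.
launch_args = {
    "ImageId": "ami-0123456789abcdef0",
    "InstanceType": "t3.micro",
    "MinCount": 1,
    "MaxCount": 1,
    "InstanceMarketOptions": {
        "MarketType": "spot",
        "SpotOptions": {
            "MaxPrice": "0.004",  # cap on the hourly price we are willing to pay
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
}

# With credentials configured:
#   import boto3
#   boto3.client("ec2").run_instances(**launch_args)
```

Omitting `MaxPrice` defaults the cap to the On-Demand price, which is often the safer choice.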

Having considered instance purchasing and compute plans, we may turn our attention to EBS-backed machine images and volume snapshots. The Data Lifecycle Manager is tremendously useful for automating the creation, retention, and deletion of these resources. However, this will not manage images and snapshots created by other means, and it also excludes instance store-backed images.

source: screenshot of Data Lifecycle Manager from author
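A minimal Data Lifecycle Manager policy along these lines might be assembled as follows. The tag key and value, schedule, retention count, and role ARN are all assumptions for the sketch.

```python
# Sketch of a Data Lifecycle Manager policy: daily EBS snapshots of volumes
# tagged Backup=true, keeping the most recent seven. The tag, schedule time,
# retention count, and role ARN are illustrative assumptions.
policy_details = {
    "ResourceTypes": ["VOLUME"],
    "TargetTags": [{"Key": "Backup", "Value": "true"}],
    "Schedules": [{
        "Name": "DailySnapshots",
        "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
        "RetainRule": {"Count": 7},
    }],
}

# With credentials configured:
#   import boto3
#   boto3.client("dlm").create_lifecycle_policy(
#       ExecutionRoleArn="arn:aws:iam::123456789012:role/dlm-role",  # placeholder
#       Description="Daily EBS snapshots",
#       State="ENABLED",
#       PolicyDetails=policy_details,
#   )
```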

Although not itself a cost-reduction feature, another feature worth highlighting is the EC2 Recycle Bin. Whether we delete images and snapshots manually or rely upon the Data Lifecycle Manager, the Recycle Bin serves as a safety net against accidental deletion, retaining images and snapshots for a configurable period during which we may restore them before they are deleted permanently.

source: screenshot of EC2 Recycle Bin from author
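A Recycle Bin retention rule like the one above might be sketched as follows with boto3; the 14-day retention period is an assumption for the example.

```python
# Sketch of a Recycle Bin retention rule keeping deleted EBS snapshots
# for 14 days before permanent deletion (the period is an assumption).
rule_args = {
    "ResourceType": "EBS_SNAPSHOT",
    "RetentionPeriod": {"RetentionPeriodValue": 14, "RetentionPeriodUnit": "DAYS"},
    "Description": "Safety net for accidentally deleted snapshots",
}

# With credentials configured:
#   import boto3
#   boto3.client("rbin").create_rule(**rule_args)
```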

With EC2 instances covered, let us consider the aforementioned Lambda service. Unlike EC2, Lambda functions are serverless, so consideration focuses primarily on resource usage and configuration rather than provisioning costs.

First, let’s explore instruction set architectures for our functions. In September 2021, AWS made Arm/Graviton2 processors generally available for Lambda, serving as a cheaper and more performant alternative to the x86 processors functions run on by default. AWS documents suggested migration steps for the switchover from x86 to Graviton2, which are worth following to realise these savings safely.
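The architecture is selected when a function is created or its code is updated. The sketch below shows the relevant `create_function` arguments; the function name, role ARN, and deployment package location are placeholders.

```python
# Sketch of create_function arguments selecting the arm64 (Graviton2)
# architecture. Name, role, and package details are placeholders.
function_args = {
    "FunctionName": "my-function",                          # placeholder
    "Runtime": "python3.9",
    "Role": "arn:aws:iam::123456789012:role/lambda-role",   # placeholder
    "Handler": "app.handler",
    "Code": {"S3Bucket": "my-artifacts", "S3Key": "my-function.zip"},  # placeholders
    "Architectures": ["arm64"],                             # x86_64 is the default
}

# With credentials configured:
#   import boto3
#   boto3.client("lambda").create_function(**function_args)
```

Remember that any compiled dependencies in the deployment package must also be built for arm64.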

A more subtle cost moves our attention to Lambda’s logging configuration within CloudWatch. By default, Lambda automatically creates log groups for its functions, unless a group already exists matching the name /aws/lambda/{functionName}. These default groups do not configure a log retention period, leaving logs to accumulate indefinitely and increasing CloudWatch costs. Consider explicitly configuring groups with matching names and a retention policy to maintain a manageable volume of logs.
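One way to apply this is to pre-create the log group with a retention policy before the function first runs, so Lambda reuses it rather than creating an unbounded default. A minimal sketch, assuming a 14-day retention period and a placeholder function name:

```python
# Sketch: pre-create a Lambda function's log group with a retention policy,
# so logs do not accumulate indefinitely. The 14-day period is an assumption;
# CloudWatch Logs only accepts a fixed set of retention values.
def lambda_log_group(function_name: str) -> str:
    """Return the log group name Lambda creates by default."""
    return f"/aws/lambda/{function_name}"

group_name = lambda_log_group("my-function")  # placeholder function name
retention_days = 14

# With credentials configured:
#   import boto3
#   logs = boto3.client("logs")
#   logs.create_log_group(logGroupName=group_name)
#   logs.put_retention_policy(logGroupName=group_name, retentionInDays=retention_days)
```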

Last, but not least, consider Lambda function memory capacity. Lambda charges for compute time in GB-seconds: the duration is measured from when the function code starts executing until it returns or otherwise terminates, rounded up to the nearest millisecond, then multiplied by the memory allocated. To reduce these charges, we want an optimal memory configuration. AWS Lambda Power Tuning can help to identify it, albeit with notable initial costs given the underlying use of AWS Step Functions.
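The GB-seconds arithmetic is worth making concrete. Below is a back-of-envelope calculation; the per-GB-second rate is illustrative only, so check current regional pricing before relying on it.

```python
# Back-of-envelope Lambda compute cost in GB-seconds. The per-GB-second
# rate is illustrative; check current regional pricing.
def lambda_compute_cost(memory_mb: int, duration_s: float,
                        invocations: int, rate_per_gb_s: float) -> float:
    gb_seconds = (memory_mb / 1024) * duration_s * invocations
    return gb_seconds * rate_per_gb_s

# e.g. 512 MB, 1.2 s average duration, one million invocations
cost = lambda_compute_cost(512, 1.2, 1_000_000, rate_per_gb_s=0.0000166667)
print(f"${cost:.2f}")  # roughly $10 of compute at this assumed rate
```

Doubling the memory doubles the GB-seconds rate, but often more than halves the duration for CPU-bound code, which is exactly the trade-off Power Tuning measures.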

Moving away from processing services, let us now consider data storage within S3. Persisting objects of up to 5TB, S3 uses “buckets” to store a theoretically unlimited number of objects. There’s no default object retention policy, so bucket sizes may grow quickly and inflate our AWS bills. We’re not only charged for how much data we store, but also which S3 storage classes we utilise.

Several classes are available with varying costs. The Standard (default) class is the most expensive, permitting regular access to objects with high availability and short access times. Infrequent Access (IA) classes offer reduced cost for data which requires limited access (usually once per month), whilst archival options via Glacier deliver further cost reductions.

To manage both storage classes and data retention, we may turn to S3 Lifecycle Configuration rules. Applying these rules, we may automatically transition data to cheaper storage classes a configurable number of days after creation, and permanently delete it after a later threshold.

source: screenshot of an S3 Lifecycle Configuration Rule from author
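A lifecycle rule like the one above might be expressed as follows; the 30-day transition, 365-day expiry, and bucket name are assumptions for the sketch.

```python
# Sketch of an S3 Lifecycle Configuration: move objects to Standard-IA
# after 30 days and expire them after 365. Both periods and the bucket
# name are illustrative assumptions.
lifecycle = {
    "Rules": [{
        "ID": "transition-then-expire",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},  # empty prefix applies the rule to every object
        "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
        "Expiration": {"Days": 365},
    }]
}

# With credentials configured:
#   import boto3
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="my-bucket", LifecycleConfiguration=lifecycle)
```

Note that Standard-IA imposes a 30-day minimum storage charge, so transitioning earlier than 30 days rarely saves money.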

Covering only a few of AWS’ many different services, hopefully this blog post serves as a useful reference for cost optimisation considerations within EC2, Lambda, and S3. Billing and costs form a regular conversation topic — no doubt many of us will have followed strategies covered here, but I’m sure there’s many other options and ideas worth sharing. Do let us know if you’ve taken different approaches to reduce your bills and how effective they have proven to be.

