Pay-as-you-go resources are a compelling but daunting concept for budget-conscious research customers. Uncertainty about cloud costs is a barrier to entry for many, which makes near real-time cost visibility critical. This is especially true for research organizations, including non-profits, K-12, and higher-education institutions, which are grant-funded and operate on limited budgets. Grants are painstaking to write and apply for, so cost overruns represent a significant risk to research being done in the cloud.
More research customers now leverage managed compute services beyond Amazon Elastic Compute Cloud (Amazon EC2) for experimentation in the cloud, such as AWS Batch, Amazon Elastic Container Service (Amazon ECS), and AWS Fargate. These services enable the rapid scalability researchers need for high performance computing (HPC) workloads. However, this scalability can be a double-edged sword when it comes to budget planning. Experimental research is unpredictable in nature, primarily because it is not steady-state. HPC workloads are typically burstable and sometimes long running, which makes their costs harder to predict.
In this blog post, we introduce an architecture for managing the costs of your cloud workloads, leveraging low-cost, serverless technology that polls costs in near real-time and terminates resources before they overspend. The solution is developed for use with AWS Batch and currently supports the Fargate deployment model; it is open-sourced on GitHub for future extensibility. The architecture relies on AWS Cloud Financial Management solutions as the primary mechanism for cost analysis and complements them with near real-time visibility.
Billing Updates and Frequency
There are a number of AWS Cloud Financial Management solutions that provide cost transparency. This architecture leverages several of them:
- The AWS Cost and Usage Report (CUR) tracks your AWS usage and breaks down your costs by providing estimated charges associated with your account. Each report contains line items for your AWS products, usage type, and operation used in your account. You can customize the AWS CUR to aggregate information by hour, day or month. The AWS CUR publishes AWS billing reports to an Amazon Simple Storage Service (Amazon S3) bucket you own.
- AWS Cost Explorer is a tool that helps manage your AWS costs by giving detailed insights into the line items of your bill. Cost Explorer visualizes daily, monthly, and forecasted spend by combining an array of available filters. Filters let you narrow down costs by AWS service type, linked account, and tag (a query sketch follows this list).
- AWS Budgets lets you set custom budgets to track your costs, from the simplest to the most complex use cases. AWS Budgets supports email or Amazon SNS notifications when actual or forecasted costs exceed your budget threshold.
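As a concrete illustration, here is a minimal sketch of a tag-filtered Cost Explorer query using boto3. The tag key `project` and its value are illustrative placeholders, not names defined by this solution:

```python
import boto3

# Cost Explorer is a global service; its API endpoint lives in us-east-1.
ce = boto3.client("ce", region_name="us-east-1")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-01-01", "End": "2023-01-31"},
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    # Narrow results to resources carrying a cost allocation tag.
    # The tag key and value here are illustrative placeholders.
    Filter={"Tags": {"Key": "project", "Values": ["batch-fargate-research"]}},
)

for day in response["ResultsByTime"]:
    amount = day["Total"]["UnblendedCost"]["Amount"]
    print(day["TimePeriod"]["Start"], amount)
```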
These tools are great for retrospective cost analysis, but they do not update in real time. They refresh up to three times a day, which amounts to 8-12 hours of granularity. That means if you set an alert for a limit you have already passed, the notification may be delayed by up to half a day, until the next billing update. This lack of real-time cost data is a barrier to entry for grant-funded research customers that want to leverage AWS.
Solution Overview
This solution is scoped to AWS Batch on AWS Fargate usage for research customers. The architecture leverages low-cost, serverless technology to monitor and manage the long-running, experimental compute found in research. Event-driven processes are kicked off to monitor managed-compute spend and terminate resources before the budget is exceeded. The solution is meant to enhance the AWS cost tools with high-frequency polling built on serverless compute and storage.
Additionally, this solution provides a blueprint for similar high-frequency cost tracking: the pricing variables can be swapped out via the AWS Price List API, and the resources being tracked can be changed via cost allocation tags.
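As a hedged illustration of that swap, the sketch below pulls a Fargate rate from the AWS Price List API with boto3. Fargate rates are published under the `AmazonECS` service code; the `usagetype` value shown is an assumption for us-east-1 and should be verified for your Region:

```python
import json
import boto3

# The Price List API is served from a limited set of Regions, e.g. us-east-1.
pricing = boto3.client("pricing", region_name="us-east-1")

# Fargate rates live under the AmazonECS service code. The usagetype value
# below is an assumption for us-east-1; other Regions carry a prefix
# (e.g. "USW2-Fargate-vCPU-Hours:perCPU").
response = pricing.get_products(
    ServiceCode="AmazonECS",
    Filters=[
        {"Type": "TERM_MATCH", "Field": "usagetype",
         "Value": "Fargate-vCPU-Hours:perCPU"},
    ],
    MaxResults=1,
)

for price_item in response["PriceList"]:
    product = json.loads(price_item)  # each entry is a JSON string
    print(json.dumps(product["terms"]["OnDemand"], indent=2))
```

Swapping in a different `ServiceCode` and filters lets the same architecture track other managed compute services.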
The following diagram illustrates the solution architecture.
The solution showcases an AWS Batch on AWS Fargate environment where researchers send compute jobs. The Batch Fargate environment carries a cost allocation tag so that AWS Budgets and Cost Explorer can track its cost.
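For context, a cost allocation tag must be activated before Budgets and Cost Explorer can filter on it. The following is a minimal sketch, assuming a recent boto3 version and an illustrative tag key (`project`) rather than the repository's actual key:

```python
import boto3

ce = boto3.client("ce", region_name="us-east-1")

# Activate the tag key so AWS Budgets and Cost Explorer can filter on it.
# "project" is an illustrative key; the tag must already appear in your
# billing data before it can be activated.
ce.update_cost_allocation_tags_status(
    CostAllocationTagsStatus=[{"TagKey": "project", "Status": "Active"}]
)
```

You can also activate cost allocation tags from the Billing console.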
The architecture begins with the ingestion of the AWS CUR, which signals that the most up-to-date billing data is available. The CUR is delivered to an S3 bucket, which triggers an AWS Lambda function we named the Batch Spend Checker. Once triggered, this Lambda function performs two actions:
- Check allocated budget and current spend in AWS Budgets
- Run a customized AWS Cost Explorer query to verify budget spend
If both checks return a budget spend below a configurable threshold, say 80%, Amazon Simple Notification Service (Amazon SNS) sends an email notification stating how much has been spent and how much remains according to AWS Budgets and Cost Explorer.
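Here is a minimal sketch of the Batch Spend Checker's decision logic, assuming the budget name, SNS topic, and state machine ARN arrive via environment variables; these names are illustrative, not the repository's actual identifiers:

```python
import os
import boto3

budgets = boto3.client("budgets")
sns = boto3.client("sns")
sfn = boto3.client("stepfunctions")

THRESHOLD = float(os.environ.get("SPEND_THRESHOLD", "0.8"))  # 80% by default

def lambda_handler(event, context):
    account_id = context.invoked_function_arn.split(":")[4]

    # Check 1: actual spend vs. limit, straight from AWS Budgets.
    budget = budgets.describe_budget(
        AccountId=account_id, BudgetName=os.environ["BUDGET_NAME"]
    )["Budget"]
    spent = float(budget["CalculatedSpend"]["ActualSpend"]["Amount"])
    limit = float(budget["BudgetLimit"]["Amount"])

    # Check 2: verify with a tag-filtered Cost Explorer query
    # (see the earlier get_cost_and_usage sketch), omitted here for brevity.

    ratio = spent / limit
    if ratio < THRESHOLD:
        # Under threshold: report what has been spent and what remains.
        sns.publish(
            TopicArn=os.environ["SNS_TOPIC_ARN"],
            Subject="Batch spend check",
            Message=(f"Spent ${spent:.2f} of ${limit:.2f} ({ratio:.0%}); "
                     f"${limit - spent:.2f} remaining."),
        )
    else:
        # Over threshold: hand off to the Step Functions workflow (next).
        sfn.start_execution(stateMachineArn=os.environ["STATE_MACHINE_ARN"])
```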
If either check comes back above the configurable threshold (e.g., 80%), the architecture kicks off an AWS Step Functions workflow. Step Functions orchestrates stopping new Batch jobs from entering the queue and keeps a running cost tally at a near real-time, configurable polling frequency. Users can set the polling frequency to be as short as a few seconds; any value under 6 hours already provides finer granularity than the existing AWS Cloud Financial Management solutions.
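One hedged way to express that configurable polling loop, using the CDK in Python (the toolchain this post's prerequisites assume); the construct names and the `$.pollIntervalSeconds` input field are illustrative:

```python
from aws_cdk import aws_stepfunctions as sfn
from aws_cdk import aws_stepfunctions_tasks as tasks

# Inside a Stack, with cost_checker_fn and stop_jobs_fn Lambda functions
# already defined (names are illustrative):
poll = tasks.LambdaInvoke(
    self, "HighVelocityPollingBatchCostChecker",
    lambda_function=cost_checker_fn,
    output_path="$.Payload",
)

# Read the polling interval from the execution input so operators can tune
# it to anything below the ~6-hour refresh of the native billing tools.
wait = sfn.Wait(
    self, "WaitForNextPoll",
    time=sfn.WaitTime.seconds_path("$.pollIntervalSeconds"),
)

stop = tasks.LambdaInvoke(self, "StopRunningBatchJobs",
                          lambda_function=stop_jobs_fn)

definition = poll.next(
    sfn.Choice(self, "BudgetExhausted?")
    .when(sfn.Condition.number_greater_than_equals("$.spendRatio", 1.0), stop)
    .otherwise(wait.next(poll))
)
```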
The Step Functions workflow leverages Amazon DynamoDB tables as the data store for both the currently running, individual Batch jobs and the aggregated job data. In parallel, Amazon EventBridge kicks off an event-driven process that deletes completed Batch jobs from the DynamoDB table. Another Lambda function keeps a running cost tally in the aggregate job DynamoDB table. Once the AWS Budgets threshold has been reached, the Step Functions workflow queries the currently running, individual Batch jobs from DynamoDB and stops them to avoid incurring further costs.
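To make the running tally concrete, here is a minimal sketch of the Fargate cost arithmetic behind each poll. The rates are illustrative placeholders; in practice they would come from the AWS Price List API as shown earlier:

```python
# Illustrative placeholder rates (USD); fetch real values from the
# AWS Price List API for your Region.
VCPU_HOUR_RATE = 0.04048  # per vCPU per hour
GB_HOUR_RATE = 0.004445   # per GB of memory per hour

def incremental_cost(total_vcpus: float, total_memory_gb: float,
                     poll_interval_seconds: float) -> float:
    """Cost accrued by all running Fargate tasks during one polling interval."""
    hours = poll_interval_seconds / 3600.0
    return (total_vcpus * VCPU_HOUR_RATE + total_memory_gb * GB_HOUR_RATE) * hours

# Example: 40 vCPUs and 80 GB of memory polled every 30 seconds accrue
# roughly $0.016 per poll, which is added to the aggregate tally.
print(incremental_cost(40, 80, 30))
```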
AWS Step Functions and Event-Driven Workflow
Here is a more detailed view of the architecture once the 80% threshold is crossed. The figure illustrates the Step Functions workflow along with the parallel event-driven process.
The Step Functions workflow steps are as follows:
- After receiving the >80% spend trigger from the Batch Spend Checker, the state machine is executed.
- The workflow immediately invokes Stop New Batch Jobs to prevent new batch jobs from entering the queue, because the budget threshold is almost met (see the sketch after this list).
- Record Batch Tasks records the currently running Batch jobs in the Running Batch Tasks DynamoDB table. This is done once; the table is then kept up to date by the event-driven processes.
- The High Velocity Polling Batch Cost Checker is invoked continuously to poll the Aggregate Batch Task DynamoDB table and update the running cost of the existing Batch jobs.
- Lastly, once the budget threshold is 100% met, Stop Running Batch Jobs terminates all remaining Batch jobs (also sketched below).
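For concreteness, here is a minimal sketch of the two "stop" steps using the AWS Batch API, assuming the job queue name and DynamoDB table name arrive via environment variables (the names are illustrative, not the repository's):

```python
import os
import boto3

batch = boto3.client("batch")
table = boto3.resource("dynamodb").Table(os.environ["RUNNING_TABLE"])

def stop_new_batch_jobs(event, context):
    # Disable the queue so newly submitted jobs stop entering the pipeline.
    batch.update_job_queue(jobQueue=os.environ["JOB_QUEUE"], state="DISABLED")

def stop_running_batch_jobs(event, context):
    # Pull the currently running jobs recorded in DynamoDB and terminate them.
    for job in table.scan()["Items"]:
        batch.terminate_job(
            jobId=job["jobId"],  # assumes items are keyed by Batch job ID
            reason="Budget threshold reached",
        )
```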
In parallel, these event-driven processes run in the background:
- If one of the currently running Batch jobs finishes, EventBridge invokes the Delete Task Lambda function to delete the finished job from the Running Batch Tasks DynamoDB table.
- DynamoDB Streams captures any changes to the Running Batch Tasks DynamoDB table and invokes Update Aggregate Batch Task, which aggregates the vCPU and memory of all currently running Batch jobs and updates the Aggregate Batch Task DynamoDB table (a sketch of this handler follows).
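A minimal sketch of that stream handler, assuming the running-jobs table stores `vcpus` and `memoryGb` attributes per job and the aggregate table keeps a single summary item; the table, key, and attribute names are illustrative:

```python
import os
import boto3

dynamodb = boto3.resource("dynamodb")
running_table = dynamodb.Table(os.environ["RUNNING_TABLE"])
aggregate_table = dynamodb.Table(os.environ["AGGREGATE_TABLE"])

def lambda_handler(event, context):
    # Any insert or delete on the running-jobs table triggers re-aggregation.
    # A scan is acceptable here because the table only holds currently
    # running jobs; an atomic UpdateItem ADD counter would also work.
    jobs = running_table.scan()["Items"]
    total_vcpus = sum(float(job["vcpus"]) for job in jobs)
    total_memory = sum(float(job["memoryGb"]) for job in jobs)

    aggregate_table.put_item(Item={
        "id": "aggregate",                  # single summary item
        "totalVcpus": str(total_vcpus),     # stored as strings to avoid
        "totalMemoryGb": str(total_memory), # float/Decimal friction
    })
```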
Prerequisites
Before getting started, you must have the following prerequisites:
- An AWS Account
- The AWS Cloud Development Kit (AWS CDK) and Python (3.6+) installed on a local machine or cloud IDE of your choice.
- Create an AWS CUR along with an accompanying S3 bucket by following the instructions in the AWS documentation. (NOTE: this can take up to 24 hours to create.) Use the following settings (a scripted alternative follows this list):
- Set the report path prefix as “reports”
- Set granularity to “Hourly”
- Set report versioning to “create new report version”
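If you prefer to script this prerequisite, here is a hedged sketch using the Cost and Usage Reports API with the settings above; the report and bucket names are placeholders:

```python
import boto3

# The Cost and Usage Reports API is only available in us-east-1.
cur = boto3.client("cur", region_name="us-east-1")

cur.put_report_definition(
    ReportDefinition={
        "ReportName": "batch-cost-report",       # placeholder name
        "TimeUnit": "HOURLY",                    # hourly granularity
        "Format": "textORcsv",
        "Compression": "GZIP",
        "AdditionalSchemaElements": ["RESOURCES"],
        "S3Bucket": "my-cur-bucket",             # placeholder bucket
        "S3Prefix": "reports",                   # report path prefix
        "S3Region": "us-east-1",
        "RefreshClosedReports": True,
        "ReportVersioning": "CREATE_NEW_REPORT", # create new report version
    }
)
```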
Deploy the Solution
To deploy the solution using the CDK, complete the following steps to bootstrap your environment…
Reminder: You can learn a lot from AWS HPC engineers by subscribing to the HPC Tech Short YouTube channel, and following the AWS HPC Blog channel.