Reduce the cost of your unused AWS DynamoDB tables by switching them to “on-demand” mode

Jagadeesh Dachepalli
Mar 28, 2021

I am a Senior Software Engineer with around 5 years of IT experience, working predominantly with AWS, Python, and DynamoDB.

This post was inspired by an AWS blog post by Jongnam Lee and Masudur Rahaman Sayem that I came across recently while reading DynamoDB blogs: https://aws.amazon.com/blogs/database/safely-reduce-the-cost-of-your-unused-amazon-dynamodb-tables-using-on-demand-mode/

The above blog explains how to safely reduce the cost of DynamoDB tables that are not currently used but might be needed by some application in the future. We cannot delete them unless we get approval from the respective application owners, so instead we change their I/O mode to “On-Demand”, which means unused tables no longer incur provisioned-capacity charges. I highly recommend reading the AWS blog before this one.

I followed the same approach described in the AWS blog (a Node.js solution), but implemented it in Python. I also added a few other features: the Serverless Framework for easy deployment of the Lambda function, a cron schedule, and SNS to send statistics about the updated DynamoDB tables to subscribers of an SNS topic (an email address in my case).

In the code, we define a DynamoDB table as unused if the table hasn’t had any read or write activity in the last days_to_search_from days, where days_to_search_from is an environment variable we set up through the Serverless Framework. (Please find the GitHub repo link below to explore the code at your convenience.)

Here are some snippets of my code

Function to get a single DynamoDB table’s info
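A minimal sketch of what this could look like with boto3 (the function name and return shape are my assumptions; the repo has the actual code):

```python
import boto3

dynamodb = boto3.client("dynamodb")

def get_table_info(table_name):
    """Describe a single DynamoDB table and return its description,
    which includes BillingModeSummary and ProvisionedThroughput."""
    response = dynamodb.describe_table(TableName=table_name)
    return response["Table"]
```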
Function to get a DynamoDB table’s metrics statistics
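A hedged sketch of the metrics lookup using CloudWatch’s GetMetricStatistics API (the function name, the daily period, and the use of the Sum statistic are assumptions on my part):

```python
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

def get_table_metric_statistics(table_name, metric_name, days_to_search_from):
    """Return the total of a DynamoDB CloudWatch metric
    (e.g. ConsumedReadCapacityUnits) over the last days_to_search_from days."""
    end_time = datetime.utcnow()
    start_time = end_time - timedelta(days=int(days_to_search_from))
    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/DynamoDB",
        MetricName=metric_name,
        Dimensions=[{"Name": "TableName", "Value": table_name}],
        StartTime=start_time,
        EndTime=end_time,
        Period=86400,          # one data point per day
        Statistics=["Sum"],
    )
    # A table with no activity returns an empty Datapoints list, which sums to 0
    return sum(point["Sum"] for point in response["Datapoints"])
```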
Function to update a DynamoDB table’s I/O mode to On-Demand
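Switching a table to on-demand is a single UpdateTable call; roughly (function name assumed):

```python
import boto3

dynamodb = boto3.client("dynamodb")

def update_table_to_on_demand(table_name):
    """Change the table's billing mode from provisioned capacity to on-demand."""
    return dynamodb.update_table(
        TableName=table_name,
        BillingMode="PAY_PER_REQUEST",
    )
```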
Lambda handler: the entry-point function for AWS Lambda
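And a sketch of how the handler could tie these together, assuming the helper functions sketched above and an SNS topic ARN passed in through the environment (the environment variable names other than days_to_search_from are hypothetical; see the repo for the actual implementation):

```python
import os
import boto3

dynamodb = boto3.client("dynamodb")
sns = boto3.client("sns")

def lambda_handler(event, context):
    """Find provisioned tables with no reads/writes in the lookback window,
    switch them to on-demand, and email the result via SNS."""
    days_to_search_from = int(os.environ["days_to_search_from"])
    sns_topic_arn = os.environ["sns_topic_arn"]  # hypothetical variable name
    updated_tables = []

    paginator = dynamodb.get_paginator("list_tables")
    for page in paginator.paginate():
        for table_name in page["TableNames"]:
            table = get_table_info(table_name)
            billing = table.get("BillingModeSummary", {}).get("BillingMode", "PROVISIONED")
            if billing == "PAY_PER_REQUEST":
                continue  # already on-demand, nothing to do
            reads = get_table_metric_statistics(
                table_name, "ConsumedReadCapacityUnits", days_to_search_from)
            writes = get_table_metric_statistics(
                table_name, "ConsumedWriteCapacityUnits", days_to_search_from)
            if reads == 0 and writes == 0:
                update_table_to_on_demand(table_name)
                updated_tables.append(table_name)

    if updated_tables:
        sns.publish(
            TopicArn=sns_topic_arn,
            Subject="Inactive DynamoDB tables switched to On-Demand",
            Message="Tables updated to On-Demand mode: " + ", ".join(updated_tables),
        )
    return {"updated_tables": updated_tables}
```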

Screenshots of my DynamoDB tables before and after the Lambda runs.

Snapshot of the DynamoDB tables before the Lambda execution
Snapshot of the DynamoDB tables after the Lambda execution

From the above snapshots, you can clearly see that two tables (test1 and test2) have been switched to “On-Demand” I/O mode, as they were the only tables found to be inactive based on our project configuration/settings.

Here’s the link to my GitHub repo: https://github.com/jagadish432/reduce-inactive-dynamodb-table

I have used the Serverless Framework here because it makes it easy to manage deployments and configurable data (such as the number of days to look back, the AWS region, multiple deployment regions/environments like dev, staging, and prod, and the email address for the SNS subscription endpoint). However, we are free to choose other frameworks like Amplify or Zappa; the choice mostly depends on whether we are starting a new isolated project or adding this to an existing Python project.

The current code deploys an AWS Lambda function and applies a cron rule to run it on the first day of every month; this schedule can be modified. Also, the Lambda doesn’t run immediately after stack creation or update, so if we need it to run right after deploying new changes, I believe we can again use SNS to invoke our Lambda function when a message is published. Before that, we would need to capture the stack creation/update event and then publish a message to the SNS topic. Please refer to this StackOverflow link to get some ideas.
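For reference, here is a minimal sketch of the relevant parts of a serverless.yml under these assumptions (the service name, handler path, runtime, and region default are placeholders, and the IAM permissions for DynamoDB, CloudWatch, and SNS are omitted; see the repo for the actual configuration):

```yaml
service: reduce-inactive-dynamodb-table

provider:
  name: aws
  runtime: python3.8
  region: ${opt:region, 'us-east-1'}
  environment:
    days_to_search_from: 31        # lookback window in days

functions:
  reduceInactiveTables:
    handler: handler.lambda_handler
    events:
      # AWS cron expression: 00:00 UTC on the first day of every month
      - schedule: cron(0 0 1 * ? *)
```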

This is the sample email message we receive from SNS when inactive DynamoDB tables are found and updated to On-Demand mode.

Email received from SNS when the Lambda found inactive tables and updated them to On-Demand mode

This is my first ever post/article. Please suggest any changes needed in the comments section, so that I can improve my writing further.

Thank you for reading; I hope this helped you.
