Updated: Jan 17, 2022
I love automating tasks whenever possible. The tedium of repetition blocks productivity and the ability to focus on bigger, more important projects.
I'm also a huge proponent of automated monitoring. This is essentially a hands-free approach to making sure everything is working as expected. As an SEO, this is critical because I've seen many unexpected and uncaught changes have a dramatic impact on SEO performance. Have you ever seen a site's homepage deindexed over the holidays, with nobody around to catch it until after Santa got back to the North Pole? I feel like every SEO has a story like this.
But the point is, manually checking for every unexpected event simply isn't feasible, nor is it effective. The approach I like to take is "automate and alert" - i.e., automate the process of checking and alert when there are anomalies. To help me with this process, I use AWS Lambda to run my code based on whatever event I decide is appropriate. In the context of "automate and alert," it's typically timed intervals.
There is, of course, a seemingly infinite number of applications for AWS Lambda beyond what I just described. Below, however, I'll outline my setup for the "automate and alert" approach.
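To make this concrete, here's a minimal sketch of the kind of check a Lambda function might run, using only the Python standard library. The `noindex` substring check and the helper names are placeholders I've made up for illustration, not a production-grade SEO audit:

```python
import urllib.request


def page_blocks_indexing(html):
    """Rough check: does the page contain a noindex robots directive?
    (A real check would parse the meta robots tag and the
    X-Robots-Tag response header rather than match a substring.)"""
    return "noindex" in html.lower()


def check_homepage(url):
    """Fetch a page and flag it if it appears to block indexing."""
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return page_blocks_indexing(html)
```

From there, "alert" can be whatever channel you prefer (SNS, email, Slack) when the check comes back true.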
But before we begin, one housekeeping item. This tutorial is for Macs only. Sorry Windows peeps!
Step 1: Create an AWS Account
To use the AWS Lambda service, you’ll first need to set up an AWS account. It’s free and you can set one up here.
Step 2: Create a Lambda Function
Once you're up and running with an AWS account, you'll want to go to the Lambda Service.
Then you're going to create a function.
There are a few options for creating your function, but I typically just choose "Author from scratch." Next, input the function name and select the language you'll be using, which in our case is Python 3.9. Once that's configured, you can create your function.
So now you have your Lambda function set up and you can input your script. You'll notice there's already a sample Python script within the environment.
If you're just using the Python standard library for your script, you can simply input your code directly into the lambda_function.py file that's already available. If that's the case for you, you can actually skip to Step 5. But if you're using any other dependencies in your script, you'll need to do a few extra steps that I'll cover next.
Step 3: Zip Your Python Script and Dependencies
Again, if you're using any dependencies outside of what's available within the Python standard library, you'll need to follow these next two steps.
Locally you'll need to put your script into a file called lambda_function.py.
Also, within your script, you'll need to define a handler function, which I recommend calling lambda_handler(), that takes two arguments: event and context. This function runs when your Lambda function is invoked. Without it, my script was throwing errors, so it must exist, even if its implementation is a no-op.
It looks like this:
def lambda_handler(event, context):
    pass  # the body can be a no-op; replace it with your own logic
There's more information on this handler function here.
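Putting that together, a minimal lambda_function.py might look like the sketch below. The body is a placeholder, assuming you just want to confirm invocation works; swap in your own checking logic:

```python
import json


def lambda_handler(event, context):
    # event: the payload from the trigger (a dict); for a scheduled
    # EventBridge rule it carries metadata such as the event time.
    # context: runtime info (function name, remaining time, etc.).
    result = {"status": "checks ran"}  # placeholder for real work
    return {
        "statusCode": 200,
        "body": json.dumps(result),
    }
```

Returning a small JSON-serializable response like this also makes the console's "Test" output easy to read later.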
Next you'll need to add the lambda_function.py file to a .zip file that you'll be uploading to AWS.
In the terminal, navigate to the directory where your lambda_function.py file is located and run the following command to add the file to a .zip file called my-deployment-package.zip.
zip my-deployment-package.zip lambda_function.py
This will generate a my-deployment-package.zip file within the directory.
Next, I'll assume you've been working locally within a virtual environment, so I'll explain how to zip up the dependencies installed there.
Within the terminal, change directories to where all of your dependencies are located. In my case, that's the /site-packages/ folder within my virtual environment, so the path might look something like venv/lib/python3.9/site-packages/, depending on the name of your virtual environment.
Now within that directory, you're going to zip all of those dependencies back into your my-deployment-package.zip file with the following command.
zip -r ../../../../my-deployment-package.zip .
So now your my-deployment-package.zip file should contain your lambda_function.py file as well as all of your dependencies, all located at the root of the .zip file.
The deployment package should now be ready to upload to AWS.
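If you'd rather script the packaging than retype the zip commands, the same result can be produced with Python's zipfile module. build_deployment_package is a hypothetical helper I'm sketching here, and the paths in the usage note are assumptions about your layout:

```python
import os
import zipfile


def build_deployment_package(script_path, site_packages_dir, zip_path):
    """Bundle the handler script and its dependencies into one .zip,
    with everything at the root of the archive, which is the layout
    Lambda expects."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        # The handler must sit at the archive root as lambda_function.py.
        zf.write(script_path, arcname="lambda_function.py")
        # Add each dependency file relative to site-packages so the
        # packages also land at the archive root.
        for root, _dirs, files in os.walk(site_packages_dir):
            for name in files:
                full_path = os.path.join(root, name)
                arcname = os.path.relpath(full_path, site_packages_dir)
                zf.write(full_path, arcname=arcname)
```

Usage would be something like build_deployment_package("lambda_function.py", "venv/lib/python3.9/site-packages", "my-deployment-package.zip"), adjusting the site-packages path to your virtual environment.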
Step 4: Upload to AWS Lambda Directly or S3
Depending on the size of your deployment package, you may just be able to upload your .zip file directly within your Lambda function. You can upload directly if your .zip file is less than 10 MB.
To do this, go back to your Lambda function and select the "Upload from" drop down. You'll need to select the .zip file option and then upload your my-deployment-package.zip file.
If your entire deployment package is greater than 10 MB you'll need to upload the package to an S3 bucket.
If you don't already have an S3 bucket set up, I'm not going to walk through all the steps here - it's pretty straightforward and well covered in the AWS documentation. Here is a link if you need any help with setting up a bucket.
Next, head over to your S3 bucket and upload your deployment package.
To be able to use this within your Lambda function, you'll need to copy the URL that references your deployment package.
To get the URL, you can tick the box next to your deployment package and then click the "Copy URL" button.
Now that you have the URL, head back to your Lambda function and click the "Upload from" drop down and select the option for "Amazon S3 location."
A modal will pop up and all you have to do is input the URL that you copied from S3 and click "Save."
Once your deployment package is uploaded and referenced within your Lambda function, you'll see a message saying that you can't do any inline code editing.
Now you're ready to start running your Lambda function!
Step 5: Run a Test
Next you're going to want to run a test just to make sure everything is working as intended. Click on the "Test" tab within your Lambda function to view your test configuration.
For my use cases, I've kept the defaults and haven't had to configure anything specifically. So just click the "Test" button and your script should be off and running.
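For reference, the default "hello-world" test event the console generates is just a small JSON payload, which your handler receives as the event argument (the exact keys may vary by console version):

```json
{
  "key1": "value1",
  "key2": "value2",
  "key3": "value3"
}
```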
If your script executed without any errors, you should see an "Execution result succeeded" message like the one below.
Step 6: Create a CloudWatch Event for Scheduling Lambda Script Execution
Now, running a script on AWS Lambda isn't much use if you can't automate the execution somehow. For most of my use cases, I've wanted to execute my scripts at some type of timed interval - e.g. every day, every hour, etc.
To set up an event like this that will trigger your Lambda function, click on the "Add trigger" button.
This will allow you to then select from many different types of triggers. For the timed interval approach, you'll want to select "EventBridge (CloudWatch Events)."
Next, configure the trigger to execute on whatever schedule you decide. Since you're creating a new rule, select "Create new rule" and give your rule any name you wish. You define the timed interval in the "Schedule expression" field; as the description says, you can use cron or rate expressions here. In the screenshot below, I'm specifying that the script should execute every hour.
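A few example schedule expressions for reference. Note that cron times are in UTC, and EventBridge cron expressions have six fields (minutes, hours, day-of-month, month, day-of-week, year):

```
rate(1 hour)           run once every hour
rate(5 minutes)        run every five minutes
cron(0 9 * * ? *)      run at 09:00 UTC every day
cron(0 12 ? * MON *)   run at 12:00 UTC every Monday
```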
After you click "Add," your trigger will be set up and your Lambda function will execute once every hour.
And that's all there is to it!
I hope you found this helpful and if you have any questions, feel free to drop me a comment. Also, I encourage you to share anything you've built using AWS Lambda.