AWS Shop example: unit tests



In the last six blogs [1], I showed you an application that uses AWS to process the sales from a cash register. This series continues with tests for this application. Some objects in our solution cannot be tested: the API gateway, the SNS topics and the DynamoDB tables are Amazon objects. What we can (and will) test are the three Lambda functions. I created a new directory in my repository to deploy the shop example with the tests: shop-2.

Differences to shop-1

The first thing you will notice when you play along is that installing these objects takes much longer. There are also long delays between the deployment of the first part (infrastructural objects, like IAM roles and policies), the second part (the shop objects) and the third part (the test objects). It takes more than 5 minutes to create all objects in AWS. The delays between the different deployment scripts are needed to make the solution more stable: without these delays, I would get 500 errors in the API gateway and Lambda errors because the Lambdas didn't have the right permissions yet.

I think this effect might have something to do with using a non-production account; I don't expect this to be necessary in a company account.

You will also see that the infra and shop deployments themselves contain more objects than in the previous version of the shop example.

Name and code changes

In the last few weeks, I saw a presentation by my colleague Lucas Jellema, and he recommended watching some videos from the Belgian Devoxx conference. One of them was about clean code [2]. Though I thought that my code was pretty nice, I made many adjustments after seeing that video. My code was not as well written as I thought…

One of the consequences is that the process function has been renamed to update_db, prompted by simple questions like "what do you process, and what is the purpose of the processing?"…

Calling lambdas directly

In the next chapters and blogs I will show you Lambda functions that test the code. But then the question arises: which Lambdas can you use for testing? Which functions are supporting Lambdas that are part of a test but shouldn't be called by an operator? And which functions are part of our production code and shouldn't be called directly by an operator either?

I used three solutions for this. The first one is the naming of the Lambda functions: the shop functions, the objects under test and the support functions shouldn't be called directly. The test functions and the get_stats function can be called directly:

The second way to warn you is a comment block at the beginning of every Lambda function that you shouldn't call from the GUI.

A third solution might help as well. Up to now, we didn't do a lot with tags. I tagged all Lambda functions with two tags. The first tag is type, which is either "prod" for the Lambdas that we use for our shop example, or an indication of the type of the function ("perftest", "unittest", "object_under_test", "smoketest"). I also gave each Lambda the tag "Execute via gui", which can be either "yes" or "no". When you click in the text box next to the search icon, you can see the names of the tags:
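Tagging like this can also be scripted. The sketch below attaches both tags to a few functions with boto3; the function names, region and the exact tag values per function are illustrative, only the tag keys and the possible values follow the convention described above.

```python
# Tag keys and values follow the convention described above; the function
# names and the region are illustrative, not taken from the repository.
TAGS_PER_FUNCTION = {
    "AMIS_shop_accept":          {"type": "prod",     "Execute via gui": "no"},
    "AMIS_unittest_test_accept": {"type": "unittest", "Execute via gui": "yes"},
    "AMIS_unittest_support_echo": {"type": "unittest", "Execute via gui": "no"},
}

def tag_all_functions(region="eu-west-1"):
    """Attach the 'type' and 'Execute via gui' tags to every Lambda function."""
    import boto3  # imported here so the sketch can be loaded without boto3
    client = boto3.client("lambda", region_name=region)
    for name, tags in TAGS_PER_FUNCTION.items():
        arn = client.get_function(FunctionName=name)["Configuration"]["FunctionArn"]
        client.tag_resource(Resource=arn, Tags=tags)
```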

When you click on “type”, you will see the different values that I used for this tag:

Lambdas to test lambdas

Up to now, we used a script called ./ to send one message to the AWS environment. In theory, we could use a Python script that we run from our virtual machine to test the objects. There are, however, two problems with this approach: 1) we cannot test all situations from outside AWS, and 2) we cannot use automation within AWS to test our solution.

One of the situations that cannot be tested from outside AWS is an unknown shop-id reaching the update_db Lambda: the decrypt function will not send such records to the update_db function. This is not a problem for our shop example, but if we were to re-use the update_db function for a website that is also hosted in AWS, we could not be sure that the update_db function is tested well enough.

The second problem with Python scripts running on the virtual machine is that we cannot use automation from AWS when we test the shop. This is a disadvantage if you want to use, for example, a CI/CD pipeline with AWS CodeBuild, AWS CodeDeploy and AWS CodePipeline to automate testing and deployment.

There are six types of Lambda functions; three of them are used for unit testing:

1) The Lambda functions that are part of our shop; these are renamed to AMIS_shop_<<name>>. In shop-2, the same connections are made as in shop-1. We will use these objects for smoke and performance testing; we will not touch them during the unit tests. When you use the ./ Python script from your VM, these three AMIS_shop_<<name>> objects are used to change the data in the AMIS-shops table in DynamoDB.

2) I created separate accept, decrypt and update_db functions for the unit tests. These are called AMIS_unittest_object_under_test_<<name>>; you can see them in the blue box. These Lambda functions have exactly the same code and exactly the same IAM roles and IAM policies as the original ones; only their environment variables point to different objects. I will explain how this works in the next paragraphs.
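The mechanism behind "same code, different objects" is that the handler reads its target from an environment variable instead of hard-coding it. A minimal sketch, assuming a variable name SNS_TOPIC_ARN (the real repository may use a different name):

```python
import json
import os

def get_topic_arn():
    # Hypothetical variable name; the deployment scripts decide where it points:
    # the shop deployment to the production topic, the unit-test deployment to
    # a test topic.
    return os.environ["SNS_TOPIC_ARN"]

def lambda_handler(event, context):
    import boto3  # imported lazily so the module loads without boto3 installed
    sns = boto3.client("sns")
    # Publish the incoming event to whichever topic this deployment points at.
    sns.publish(TopicArn=get_topic_arn(), Message=json.dumps(event))
    return {"statusCode": 200, "body": "OK"}
```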

3) The unit tests themselves are in the purple box. These are called AMIS_unittest_test_<<name>>.

4) The unit tests make use of supporting Lambda functions. These Lambda functions are called AMIS_unittest_support_<<name>> and these are in the orange box.

5) There is one Lambda function for the smoke test (I will talk about that in another blog).

6) I used two Lambda functions for the performance test (this will also be discussed in another blog).

Testset lambdas: accept

When we do the unit tests, we don't want to use the original objects: if we sent test events to the accept function, it would also send them to the SNS topic to_decrypt, and the decrypt function would act on them as well. I decided to create new test Lambda functions with exactly the same code, but with a different environment variable.

You might recall that we used environment variables in both the accept and the decrypt Lambda function to store the ARN (Amazon Resource Name) of the SNS topic. When we change this variable from the to_decrypt topic to a new SNS topic to_unittest_support_echo, and we use a very simple Lambda function that just sends the content of the event parameter to CloudWatch, we can check in the test Lambda which data is sent to the SNS topic and which data isn't.

This looks like [3]:
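Such an echo function can be as small as the sketch below: everything a Lambda function prints ends up in its CloudWatch log group, so printing the event is all the support function has to do.

```python
import json

def lambda_handler(event, context):
    # Anything printed here ends up in this function's CloudWatch log group,
    # which is what the test function will inspect.
    print(json.dumps(event))
    return {"statusCode": 200}
```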

There is, however, one disadvantage to this solution: it takes quite some time before the data that the unittest_support_echo function has written to CloudWatch is available to our test function: about three to four minutes. That's long. And it costs you three to four minutes of CPU usage in Lambda, where all you do is wait.

I decided to search for solutions where the CloudWatch data becomes available faster. This can be done by using log data with subscriptions [4]. When something is written to the log, the data is delivered to another service, for example an Amazon Kinesis stream, an Amazon Kinesis Data Firehose stream or an AWS Lambda function. This is nice: we now know within seconds, not minutes, which data is written by the unittest_support_echo function.
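A subscription filter can be created from code as well. The sketch below subscribes a Lambda function to a log group (the filter name is a made-up example); the subscriber then receives the log data gzipped and base64-encoded, so a small decode helper is useful:

```python
import base64
import gzip
import json

def subscribe_log_group(log_group_name, destination_lambda_arn, region="eu-west-1"):
    """Deliver every line written to a log group to a Lambda function."""
    import boto3  # lazy import: the sketch loads without boto3/credentials
    logs = boto3.client("logs", region_name=region)
    logs.put_subscription_filter(
        logGroupName=log_group_name,
        filterName="to-unittest",   # hypothetical name
        filterPattern="",           # empty pattern = match every log event
        destinationArn=destination_lambda_arn,
    )

def decode_subscription_event(event):
    """CloudWatch Logs delivers subscription data gzipped and base64-encoded."""
    compressed = base64.b64decode(event["awslogs"]["data"])
    return json.loads(gzip.decompress(compressed))
```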

The next step is to get this data back to the test Lambda function: I'd like to run the tests and check their results in the same code. This is done by creating an SQS queue (SQS = Simple Queue Service).

When using SQS, you have the choice between two queue types. In a FIFO queue (First In, First Out), the messages are delivered to the receiver in the same order in which they were put on the queue, and each message is delivered exactly once. The other option is a standard (non-FIFO) queue, where the order of the messages is not guaranteed and you may receive a message once, or multiple times.

Our code doesn't rely on the order in which messages are received and can deal with messages being received more than once, so we use a standard (non-FIFO) SQS queue. There are many options for SQS, but in our case there is just one receiver on the queue, so most options are not relevant to us.
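Reading the queue from the test Lambda can then be sketched as follows (the queue URL is whatever the deployment created; the deduplication helper shows how little tolerance for standard-queue behavior the tests actually need):

```python
def drain_queue(queue_url, wait_seconds=5):
    """Read and delete everything currently on the queue."""
    import boto3  # lazy import: the sketch loads without boto3/credentials
    sqs = boto3.client("sqs")
    bodies = []
    while True:
        response = sqs.receive_message(
            QueueUrl=queue_url,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=wait_seconds,  # long polling
        )
        batch = response.get("Messages", [])
        if not batch:
            return bodies
        for message in batch:
            bodies.append(message["Body"])
            sqs.delete_message(QueueUrl=queue_url,
                               ReceiptHandle=message["ReceiptHandle"])

def deduplicate(bodies):
    # A standard (non-FIFO) queue may deliver a message more than once; for
    # the tests we only care which distinct messages arrived, not how often.
    return sorted(set(bodies))
```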

The new architecture for this solution is:

It is possible for the testset Lambda to get the log of the Lambda function that is called (in our case: unittest_object_under_test_accept) directly. This log is gzipped and encoded with base64. Other values that come back from calling the Lambda function are StatusCode, FunctionError and Payload. My first impression was that these would contain the values that I put in the return statement of the accept function to the API Gateway: 200 or 500 for the StatusCode and the corresponding text ("OK" for 200, "NotOK: retry later, admins: see cloudwatch logs for error" for 500) in the FunctionError. This turned out not to be the case: you will always get 200 back, and you can see in the Payload attribute why the Lambda function crashed (if it crashed).
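Invoking the object under test and inspecting those attributes can be sketched like this; the decode helper handles the encoded log described above (base64, with a gzip fallback for payloads that are compressed as well):

```python
import base64
import gzip

def decode_log(log_result):
    """Decode a base64-encoded (and possibly gzipped) log payload."""
    raw = base64.b64decode(log_result)
    try:
        raw = gzip.decompress(raw)  # decompress when the payload is gzipped
    except OSError:
        pass                        # plain base64, nothing to decompress
    return raw.decode()

def invoke_and_inspect(function_name):
    import boto3  # lazy import: the sketch loads without boto3/credentials
    client = boto3.client("lambda")
    response = client.invoke(
        FunctionName=function_name,
        Payload=b"{}",
        LogType="Tail",  # ask Lambda to return the tail of the log as well
    )
    # StatusCode is 200 whenever the invocation itself worked, even when the
    # function crashed; a crash shows up in FunctionError and in the Payload.
    return {
        "status": response["StatusCode"],
        "crashed": "FunctionError" in response,
        "payload": response["Payload"].read().decode(),
        "log": decode_log(response["LogResult"]),
    }
```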

When you look at the log of the test Lambda, you will see that there are four OKs for two test cases. First, each test case is called and a check is done to see whether the object_under_test has sent the expected output to the CloudWatch log. When this is the case, one point is earned. After testing this for both test cases, the SQS queue is read and it is checked whether the object_under_test has sent the correct test cases to SNS. A second point can be earned when the correct test cases are sent to the SNS topic (and no data is sent to the SNS topic when this isn't expected).

When you want to play along, you can start this Lambda function yourself. Just use the default Hello World parameters: none of the tests use the events that are passed to the Lambda functions. When you need more help starting these functions, look at the second blog in this series about Lambda functions [1].


In the test function for accept there are only two tests: one where the Lambda function succeeds in sending the event data to the SNS topic, and one where it doesn't.

The decrypt test function looks very much like the accept test function. The only difference is that there are more tests: eleven in total. The decrypt unit test uses the same SNS topic and the same support functions to do its job. The disadvantage is that the unit tests for accept and decrypt cannot run at the same time.


The unit test for update_db uses a different table: AMIS-unittest-shops. I added a record for each unit test that should change a database record:

This unit test is pretty straightforward:
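The same environment-variable trick as with the SNS topics makes this possible: the object under test reads its table name from the environment, so the unit-test deployment can point it at AMIS-unittest-shops while production uses AMIS-shops. A sketch, in which the variable name and the key schema are assumptions:

```python
import os

def get_table_name():
    # Hypothetical variable name: the production deployment sets it to
    # "AMIS-shops", the unit-test deployment to "AMIS-unittest-shops".
    return os.environ.get("SHOP_TABLE_NAME", "AMIS-shops")

def add_sales(shop_id, amount):
    """Add an amount to the sales figure of one shop."""
    import boto3  # lazy import: the sketch loads without boto3/credentials
    table = boto3.resource("dynamodb").Table(get_table_name())
    table.update_item(
        Key={"shop_id": shop_id},            # hypothetical key schema
        UpdateExpression="ADD sales :amount",
        ExpressionAttributeValues={":amount": amount},
    )
```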


You might recall that I talked about IAM in the Lambda blog before [1]. I told you there that I'd like to implement least-privilege access rights. For the unit tests, however, I used one IAM role and one IAM policy for all the Lambda functions. I did this because the Lambda functions don't exist for long: they will be part of a CI/CD pipeline and be gone afterwards. Because of this, it is possible to create the pipeline without test objects. The only test object that is left is the smoke test, because you will use the smoke test in production as well.

Play along

When you want to play along, you can use the same repository as before, but use shop-2 instead of shop-1. When you still have objects running from shop-1, please destroy these objects before installing the objects from shop-2.


[1] Previous blogs:

– Introduction:

– Lambda and IAM:

– SNS:

– DynamoDB:

– API Gateway (1):

– API Gateway (2):

[2] Clean code Devoxx conference:

[3] eyes symbol:

[4] CloudWatch Logs subscriptions:


About Post Author

Frederique Retsema

Frederique Retsema has been active in IT since 1993. She loves to automate everything; her main interests are currently the serverless solutions in the AWS and Azure clouds. Since September 2021 she has worked at Xforce. Xforce and AMIS are both part of the Conclusion holding. AMIS and Xforce work very well together!
