An Event-Driven Architecture for a Pizza Delivery Events Flow
TL;DR
- Event-driven architectures enable low-cost serverless approaches to problems in which processing is only needed in response to events;
- We present the design of an event-driven architecture for a Pizza Delivery flow, from order registration to delivery;
- Infrastructure as Code is used to create and destroy the infrastructure, using both Terraform and the Serverless Framework (the architecture diagram is shown in Figure 1 below);
- Lambda functions are written in Python, and we use CloudWatch log messages to verify that the architecture works by simulating some events.
Introduction and Context
Imagine the following scenario: you are the owner of a Pizzeria. The number of daily orders has become so large that you need a scalable architecture in the Cloud to handle part of your events flow. You are going to build this architecture on AWS, and it will be based on events. Here is what you need to delegate to the Cloud environment (in terms of key responsibilities and functional requirements):
- All the orders will be directed to an Event Bus inside EventBridge, for handling the incoming events;
- All events should be stored in a DynamoDB table, using a Lambda function. The information stored for each event is: 1) Order ID, 2) Status (for instance “in the oven” or “ready”), 3) Client’s name and 4) Time of the request (an example event is sketched right after this list);
- For incoming events with a status of “ready”, we should both store the event in the DynamoDB table (step above) and send it to an SQS queue, using a Lambda function as the intermediary handler;
- There should be a final Lambda function responsible for handling the SQS messages put in the queue in the previous step. This last Lambda function should just say something like “Hey, the Pizza was delivered!”.
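For reference, here is what one such incoming event roughly looks like once it reaches a Lambda handler as a Python dictionary. This is a hypothetical example with made-up values; only the fields used later in this post (time, plus the order, status and client keys inside detail) are shown:
# Hypothetical EventBridge event as delivered to a Lambda target (values made up)
sample_event = {
    "version": "0",
    "source": "com.pizza.status",
    "detail-type": "Change in Pizza",
    "time": "2024-01-01T12:00:00Z",
    "detail": {
        "order": 1,
        "status": "ready",
        "client": "maria"
    }
}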
Before continuing to the more technical aspects of the solution and the actual code part, let us illustrate the structure of the project:
base_folder
|--pizzas-service
| |--allPizzasLambda.py
| |--queueHandlingLambda.py
| |--readyPizzasLambda.py
| |--requirements.txt
| |--serverless.yml
|--terraform
| |--main.tf
| |--dynamo.tf
| |--eventbridge.tf
| |--sqs.tf
The base_folder in the schema above can have any name you prefer. The structure of the inner layers, however, is more of a recommendation to facilitate the deployment process. The Terraform infrastructure definition files are kept separate because deploying them is the first step we will perform, and because that infra deployment has nothing to do with the code used inside the Lambda functions.
Creating the supporting Infrastructure
For the application described in the previous sections, we adopt two tools for the Infrastructure as Code (IaC) part: Terraform and the Serverless Framework. If you want to follow along with the code from now on, we recommend checking out the corresponding installation guides in each provider’s documentation: Terraform and Serverless. Let us present the Terraform code files for creating:
- The back-end state file (stored inside an S3 bucket);
- The DynamoDB table for storing the events coming from the EventBridge event bus;
- The EventBridge event bus itself;
- The SQS queue.
# main.tf
terraform {
  required_version = ">=1.6.0"

  backend "s3" {
    bucket = "test-pizzaria-bucket-3728937219"
    key    = "terraform/pizzaria.tfstate"
    region = "us-east-1"
  }
}

provider "aws" {
  region = "us-east-1"
}

data "aws_caller_identity" "current" {}

# eventbridge.tf
resource "aws_cloudwatch_event_bus" "pizzaria-event-bus" {
  name = "pizzaria"
}
# dynamo.tf
resource "aws_dynamodb_table" "pizzaria-events-db" {
  attribute {
    name = "order"
    type = "S"
  }

  attribute {
    name = "status"
    type = "S"
  }

  billing_mode                = "PROVISIONED"
  deletion_protection_enabled = false
  hash_key                    = "order"
  name                        = "pizzaria-events"

  point_in_time_recovery {
    enabled = false
  }

  range_key      = "status"
  read_capacity  = 5
  stream_enabled = false
  table_class    = "STANDARD"
  write_capacity = 5
}
# sqs.tf
resource "aws_sqs_queue" "waiting-delivery-queue" {
  content_based_deduplication       = false
  delay_seconds                     = 0
  fifo_queue                        = false
  kms_data_key_reuse_period_seconds = 300
  max_message_size                  = 262144
  message_retention_seconds         = 345600
  name                              = "waiting-delivery"

  policy = <<POLICY
{
  "Id": "__default_policy_ID",
  "Statement": [
    {
      "Action": "SQS:*",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<account_id>:root"
      },
      "Resource": "arn:aws:sqs:us-east-1:<account_id>:waiting-delivery",
      "Sid": "__owner_statement"
    }
  ],
  "Version": "2012-10-17"
}
POLICY

  receive_wait_time_seconds  = 0
  sqs_managed_sse_enabled    = true
  visibility_timeout_seconds = 30
}
Note that, in the main.tf file above, we used a bucket named “test-pizzaria-bucket-3728937219”. We recommend using something like <project-name>-<AWS-Account-Id>, since the chance of that name already being taken by another S3 bucket is pretty small. Also note that some of the resources are defined for a specific region (like the SQS queue) and need specific data about your AWS account (see the Resource and Principal fields inside sqs.tf, where you need to put your AWS Account Id in place of <account_id> to correctly set the statements of the policy attached to the SQS queue).
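If you prefer not to hardcode the account ID, one alternative (a sketch only, not required to follow the rest of the post) is to build the policy with the aws_caller_identity data source already declared in main.tf, for example using jsonencode:
# Alternative sketch for the policy argument inside sqs.tf, taking the
# account ID from data.aws_caller_identity.current instead of a hardcoded value
policy = jsonencode({
  Id      = "__default_policy_ID"
  Version = "2012-10-17"
  Statement = [
    {
      Sid       = "__owner_statement"
      Action    = "SQS:*"
      Effect    = "Allow"
      Principal = { AWS = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root" }
      Resource  = "arn:aws:sqs:us-east-1:${data.aws_caller_identity.current.account_id}:waiting-delivery"
    }
  ]
})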
Supposing you have correctly installed the Terraform CLI in your environment, and that you created the Terraform state S3 bucket using ClickOps (in our example, the bucket named test-pizzaria-bucket-3728937219), you can already deploy the first part of the infrastructure with the following commands (run them from inside the terraform folder):
terraform init
terraform plan -out pizzaria.tfplan
terraform apply pizzaria.tfplan
The first line initializes the back-end of the Terraform project and also fetches the AWS provider for Terraform. The terraform plan command examines the four files inside the terraform folder and produces a plan, written to the output file pizzaria.tfplan. You should then see, in the terminal output, that 3 resources will be created: one DynamoDB table, one SQS queue, and one Event Bus in EventBridge. By running terraform apply with the tfplan file as parameter, you deploy exactly the resources shown by the plan command (the tfplan file keeps the plan and apply steps consistent).
NOTE: Don’t forget that, to be able to deploy the infrastructure with Terraform, you need valid AWS credentials with the proper permissions to create resources in the given AWS account. This is easiest if you have a personal or sandbox account, in which case you should have a set of admin credentials you can put inside your ~/.aws/credentials file and use to deploy the infrastructure.
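Once terraform apply finishes, a quick way to confirm that the three resources exist is the boto3 sketch below (it assumes the same region and resource names used above, and the same credentials mentioned in the note):
# Optional sanity check of the Terraform-created resources
import boto3

dynamodb = boto3.client('dynamodb', region_name='us-east-1')
sqs = boto3.client('sqs', region_name='us-east-1')
events = boto3.client('events', region_name='us-east-1')

print(dynamodb.describe_table(TableName='pizzaria-events')['Table']['TableStatus'])
print(sqs.get_queue_url(QueueName='waiting-delivery')['QueueUrl'])
print(events.describe_event_bus(Name='pizzaria')['Arn'])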
Creating the Lambda handlers and deploying the functions
Now that we have created the “base” infrastructure we will need, let us create the Lambda functions, which will be the intermediary resources for 1) storing events in DynamoDB, 2) sending ready orders to the SQS queue, and 3) processing the final ready pizzas from the SQS queue.
To create the remaining part of the infrastructure, use the following code inside the pizzas-service/serverless.yml file:
service: pizzas-service
frameworkVersion: '3'

plugins:
  - serverless-pseudo-parameters

package:
  exclude:
    - layer/**

layers:
  LayerDependencies:
    path: layer
    description: "Learning layer"

provider:
  name: aws
  runtime: python3.9
  lambdaHashingVersion: 20201221
  iam:
    role: !Sub arn:aws:iam::${AWS::AccountId}:role/LabRole
  eventBridge:
    useCloudFormation: true

functions:
  all-pizza-events:
    handler: allPizzasLambda.allPizzaEventsHandler
    layers:
      - {Ref: LayerDependenciesLambdaLayer}
    events:
      - eventBridge:
          eventBus: arn:aws:events:${aws:region}:${aws:accountId}:event-bus/pizzaria
          pattern:
            source:
              - com.pizza.status
  ready-pizza-events:
    handler: readyPizzasLambda.onlyReadyPizzaHandler
    layers:
      - {Ref: LayerDependenciesLambdaLayer}
    events:
      - eventBridge:
          eventBus: arn:aws:events:${aws:region}:${aws:accountId}:event-bus/pizzaria
          pattern:
            source:
              - com.pizza.status
            detail:
              status:
                - ready
  sqs-pizza-handler:
    handler: queueHandlingLambda.handler
    events:
      - sqs:
          arn: arn:aws:sqs:${aws:region}:${aws:accountId}:waiting-delivery
          batchSize: 1
          enabled: true
The file above defines exactly our three Lambda functions. Note that we also define the Lambda triggers in the YAML file: each Lambda is tied to a different trigger (each trigger corresponds to an “events” key in the YAML file). Additionally, you may now understand why we left this Serverless definition as the second step, after the Terraform infrastructure deployment: we need to reference the resources created with Terraform to define the triggers of the Lambda functions. Doing the Serverless deployment first would lead to errors, since those resources did not exist before the terraform apply command we ran above.
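To make the filtering explicit: the ready-pizza-events trigger above roughly translates into an EventBridge rule with the pattern sketched below (shown here as a Python dictionary), so only events whose detail.status is “ready” are routed to that function, while the all-pizza-events rule matches every event coming from the com.pizza.status source:
# Approximate event pattern generated for the ready-pizza-events rule
ready_pizza_pattern = {
    "source": ["com.pizza.status"],
    "detail": {
        "status": ["ready"]
    }
}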
Before actually deploying the Lambda functions, we still need to provide the Python code for the Lambda handlers. The code is given below.
# allPizzasLambda.py
import boto3

def allPizzaEventsHandler(event, context):
    # Store every incoming pizza event in the DynamoDB table
    dynamodb = boto3.resource('dynamodb')
    table = dynamodb.Table('pizzaria-events')
    dynamodb_item = {
        "order": str(event["detail"]["order"]),
        "status": event["detail"]["status"],
        "client": event["detail"]["client"],
        "time": event["time"]
    }
    table.put_item(Item=dynamodb_item)
    print(f"Order {dynamodb_item['order']} was stored successfully in the DynamoDB Table.")
    return True
# readyPizzasLambda.py
import json
import boto3

def onlyReadyPizzaHandler(event, context):
    # Forward "ready" events to the waiting-delivery SQS queue
    sqs = boto3.resource('sqs')
    queue = sqs.get_queue_by_name(QueueName='waiting-delivery')
    response = queue.send_message(MessageBody=json.dumps(event))
    print(f"Order {str(event['detail']['order'])} is ready for delivery.")
    return True
# queueHandlingLambda.py
import json

def handler(event, context):
    # Process the messages delivered by the SQS trigger
    for record in event["Records"]:
        print(f"Quantity of records being sent each time: {len(event['Records'])}")
        payload = json.loads(record["body"])
        print(json.dumps(payload, indent=4))
    return event
The first code block above stores all incoming events from the Event Bus in the DynamoDB table. Note that we rearrange the incoming data to store only the four important properties we discussed in the first section of this article.
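If you want to exercise this handler locally before deploying, a minimal sketch is shown below. It assumes valid AWS credentials, that the pizzaria-events table already exists (i.e., terraform apply has run), and uses a made-up event similar to the one sketched in the introduction:
# Local smoke test for allPizzasLambda (hypothetical event; context is not used)
from allPizzasLambda import allPizzaEventsHandler

fake_event = {
    "time": "2024-01-01T12:00:00Z",
    "detail": {"order": 999, "status": "in the oven", "client": "maria"}
}
allPizzaEventsHandler(fake_event, None)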
The second code block handles the part of sending the ready pizzas to the SQS queue.
The third code block processes the messages inserted in the SQS queue and writes a message to the CloudWatch log group of the function, saying “Quantity of records being sent each time: 1”, as well as another message displaying the payload of each event coming from the SQS queue. Note that we only know the number of records per invocation equals 1 because we set batchSize to 1 in the events definition of the YAML configuration.
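For reference, the event this SQS-triggered handler receives looks roughly like the sketch below (a hypothetical example; most record fields are omitted, and only body, the key the handler actually reads, is shown):
# Approximate shape of an SQS-triggered Lambda event (values made up)
sqs_event = {
    "Records": [
        {
            "messageId": "00000000-0000-0000-0000-000000000000",
            # body is the JSON string sent by readyPizzasLambda (the full EventBridge event)
            "body": '{"detail": {"order": 1, "status": "ready", "client": "maria"}}'
        }
    ]
}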
Finally, the last step before deploying the stack is creating the “layer” folder, which will contain the Python dependencies of the Lambda functions. Create a requirements.txt file in the appropriate place (review the folder structure shown at the beginning of this post) and put a single line in it: boto3. Then navigate (in the Terminal) to the base folder of the project and execute the following command:
pip install -r ./pizzas-service/requirements.txt -t ./pizzas-service/layer
The command above installs the packages listed in requirements.txt into a folder called layer, located inside the pizzas-service folder. Note that this folder is referenced in the serverless.yml file (the layers section): the layer folder is where we place the Lambda layer that will be deployed along with the Lambda functions themselves.
To deploy the Lambda functions using serverless, navigate to the pizzas-service folder and execute the following command in the Terminal:
sls deploy --verbose
If any required package or plugin is missing during deployment, the Serverless Framework will point out the missing requirements and how to install them, so this is a pretty straightforward procedure.
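After the deployment finishes, an optional way to confirm that the EventBridge triggers were created on the pizzaria bus is the boto3 sketch below (the exact rule names are generated by the framework, so they will differ from stack to stack):
# Optional check: list the rules attached to the pizzaria event bus
import boto3

events = boto3.client('events', region_name='us-east-1')
for rule in events.list_rules(EventBusName='pizzaria')['Rules']:
    print(rule['Name'], rule.get('EventPattern'))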
Time to Test!
If you followed along until this section, congratulations! You now have a nice event-driven architecture to handle your incoming Pizzeria order requests. To test whether everything is working, you can send a sequence of events to the EventBridge Event Bus to simulate the working environment of your Pizzeria. Create a file named putEventsPizzeria.py and copy the code below into it:
import boto3
import json
import datetime
import random

clients = ['rafael', 'maria', 'teresa', 'tatiane', 'murilo']
possible_states = ['order done', 'preparing', 'in the oven', 'left the oven', 'packing for delivery', 'ready']
peeker = random.SystemRandom()
eventBridge = boto3.client('events')

def put_events(eventBus, source, detailType, detail):
    # Send a single event to the given EventBridge event bus
    response = eventBridge.put_events(
        Entries=[
            {
                'Time': datetime.datetime.now(),
                'Source': source,
                'DetailType': detailType,
                'Detail': json.dumps(detail),
                'EventBusName': eventBus,
            }
        ]
    )
    print("EventBridge Response: {}".format(json.dumps(response)))

def makeEvent(status, order_number, client):
    eventBus = "pizzaria"
    source = "com.pizza.status"
    detailType = "Change in Pizza"
    detail = {
        "status": status,
        "order": order_number,
        "client": client
    }
    put_events(eventBus, source, detailType, detail)

# Simulate 100 orders, sending one event per status for each order
for i in range(100):
    client = peeker.choice(clients)
    for status in possible_states:
        makeEvent(status, i, client)
The file above simulates 600 events: for each of the 100 values of “i” (the order numbers), we send one event per each of the 6 possible status values, with a randomly chosen client name, to the Event Bus, which gives 100 × 6 = 600 events. Feel free to change this number. Just execute the Python file and wait until all the events have been sent.
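Assuming valid AWS credentials in your environment (the same ones used for the deployment steps), you can run the simulation from the folder where you saved the file with:
python putEventsPizzeria.py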
Now comes the fun part: How can we know if this architecture is working properly?
- The most obvious thing we can do is a quick search in the DynamoDB table named “pizzaria-events”. A quick examination should show all 600 items in the table, with 5 possible client names and 6 possible status values (a small verification script is sketched right after this list).
- After verifying the DynamoDB items were inserted properly, we can look at the CloudWatch logs of the Lambda function handling the ready pizza orders. Those logs should show messages like: “Order 1 is ready for delivery.” If you see this kind of message inside the CloudWatch log group of the Lambda function named “ready-pizza-events”, it means the message was successfully sent to the SQS queue.
- Finally, the last check would be to look at the CloudWatch log messages of the “sqs-pizza-handler” Lambda function. Those should show that all 100 “ready” events (one per order) were processed after being sent to the SQS queue.
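For the first check, instead of browsing the Console you can count the items programmatically. Below is a minimal boto3 sketch; it assumes the whole table fits in a single Scan page, which holds for 600 small items:
# Rough verification of the stored events (assumes one Scan page is enough)
import boto3

table = boto3.resource('dynamodb').Table('pizzaria-events')
items = table.scan()['Items']
ready = [item for item in items if item['status'] == 'ready']
print(f"Total events stored: {len(items)}")          # expected: 600
print(f"Events with status 'ready': {len(ready)}")   # expected: 100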
Removing the resources
Last but not least, you can clean up the resources you created while following this post. One of the nicest parts of defining all the infrastructure with IaC is that we can destroy things as easily as we can re-create them. First, navigate back to the pizzas-service folder and execute
sls remove
Wait until the resources are deleted. Then go back to the terraform folder and execute
terraform destroy
After executing those two commands, your environment should be fully cleaned up. You can check that ALL the resources related to the stack were deleted by navigating in the AWS Console to the CloudWatch log groups of the three Lambda functions we created: they should no longer exist. The DynamoDB table, EventBridge Event Bus and SQS queue should also have vanished from the resource lists.
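If you prefer to verify the cleanup programmatically rather than through the Console, here is a small hedged sketch with boto3: after terraform destroy, a call like the one below should fail with a ResourceNotFoundException.
# After cleanup, describing the table should raise a "not found" error
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client('dynamodb', region_name='us-east-1')
try:
    dynamodb.describe_table(TableName='pizzaria-events')
    print("The table still exists!")
except ClientError as error:
    print(f"As expected, the table is gone: {error.response['Error']['Code']}")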