Serverless - REST API with DynamoDB
So far, we have learned several concepts related to serverless lambda deployments. Now it is time to look at some examples. In this chapter, we will look at one of the examples officially provided by Serverless. We will be creating, as the name suggests, a REST API. All our lambda functions, as you would have guessed, will be triggered by an API Gateway. Our lambda functions will interface with a DynamoDB table, which is essentially a to-do list, and the user will be able to perform several operations, like creating a new item, fetching existing items, deleting items, etc., using the endpoints that will be exposed post the deployment. If you are not familiar with REST APIs, then you can read up more about them.
Code Walkthrough
The code can be found on GitHub.
We will have a look at the project structure, discuss some new concepts that we haven't seen so far, and then perform the walkthrough of the serverless.yml file. Walking through all the function handlers would be redundant; therefore, we will walk through just one function handler. You can take up understanding the other functions as an exercise.
Project Structure
Now, if you look at the project structure, the lambda function handlers are all within separate .py files in the todos folder. The serverless.yml file specifies the todos folder in the path of each function handler. There are no external dependencies, and therefore, no requirements.txt file.
New Concepts
Now, there are a couple of terms that you may be seeing for the first time. Let's scan through these quickly −
DynamoDB − This is a NoSQL (Not only SQL) database provided by AWS. While not exactly accurate, broadly speaking, NoSQL is to SQL what Word is to Excel. You can read more about NoSQL. There are 4 types of NoSQL databases − document databases, key-value databases, wide-column stores, and graph databases. DynamoDB is a key-value database, meaning that you can keep inserting key-value pairs into the database, similar to a Redis cache. You can retrieve a value by referencing its key.
boto3 − This is the AWS SDK for Python. If you need to configure, manage, call, or create any AWS service (EC2, DynamoDB, S3, etc.) within the lambda function, you need the boto3 SDK. You can read up more about boto3.
Apart from these, there are some concepts that we will encounter during the walkthrough of the serverless.yml file and the handler function. We will discuss them there.
serverless.yml Walkthrough
The serverless.yml file begins with the definition of the service.
service: serverless-rest-api-with-dynamodb
That is followed by the declaration of the framework version range through the following line −
frameworkVersion: ">=1.1.0 <=2.1.1"
This acts like a check. If your serverless version doesn't lie in this range, it will throw an error. This helps when you are sharing code and want everyone using this serverless.yml file to use the same serverless version range to avoid problems.
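To make the idea concrete, the range check above can be sketched in a few lines of Python. This is purely illustrative − `parse_version` and `version_in_range` are hypothetical helpers, not the framework's actual implementation, and they assume simple dotted versions like ">=1.1.0 <=2.1.1".

```python
def parse_version(v):
    """Turn a dotted version string like '2.1.1' into a comparable tuple (2, 1, 1)."""
    return tuple(int(part) for part in v.split("."))

def version_in_range(version, minimum, maximum):
    """Check that version lies within [minimum, maximum], as frameworkVersion does."""
    return parse_version(minimum) <= parse_version(version) <= parse_version(maximum)

# A CLI version inside the declared range passes; one outside it would be rejected.
print(version_in_range("1.83.3", "1.1.0", "2.1.1"))  # True
print(version_in_range("3.2.0", "1.1.0", "2.1.1"))   # False
```

Tuple comparison gives the correct ordering here because Python compares version tuples element by element, so "1.83.3" correctly sorts below "2.1.1".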
Next, within the provider, we see two extra fields that we haven't encountered so far − environment and iamRoleStatements.
provider:
  name: aws
  runtime: python3.8
  environment:
    DYNAMODB_TABLE: ${self:service}-${opt:stage, self:provider.stage}
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource: "arn:aws:dynamodb:${opt:region, self:provider.region}:*:table/${self:provider.environment.DYNAMODB_TABLE}"
Environment, as you would have guessed, is used to define environment variables. All the functions defined within this serverless.yml file can fetch these environment variables. We will see an example in the function handler walkthrough below. Over here, we are defining the DynamoDB table name as an environment variable.
The $ sign signifies a variable. The self keyword refers to the serverless.yml file itself, while opt refers to an option that we can provide during sls deploy. Thus, the table name will be the service name, followed by a hyphen, followed by the first stage parameter that the file finds: either one available from options during serverless deploy, or the provider stage, which is dev by default. Thus, in this case, if you don't provide any option during serverless deploy, the DynamoDB table name will be serverless-rest-api-with-dynamodb-dev. You can read more about serverless variables.
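The fallback behaviour of ${opt:stage, self:provider.stage} can be mimicked with a tiny Python sketch. This is an illustration only − `resolve_table_name` is a made-up helper, and the real variable resolver in the Serverless Framework is far more general.

```python
def resolve_table_name(service, opt_stage=None, provider_stage="dev"):
    """Mimics ${self:service}-${opt:stage, self:provider.stage}:
    use the CLI-provided stage if present, otherwise fall back to the provider stage."""
    stage = opt_stage if opt_stage is not None else provider_stage
    return f"{service}-{stage}"

# No --stage option during deploy → provider default "dev" is used.
print(resolve_table_name("serverless-rest-api-with-dynamodb"))
# serverless-rest-api-with-dynamodb-dev

# With an explicit stage option, that value wins.
print(resolve_table_name("serverless-rest-api-with-dynamodb", opt_stage="prod"))
# serverless-rest-api-with-dynamodb-prod
```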
iamRoleStatements define the permissions provided to the functions. In this case, we are allowing the functions to perform the following operations on the DynamoDB table − Query, Scan, GetItem, PutItem, UpdateItem, and DeleteItem. The Resource name specifies the exact table on which these operations are allowed. If you had entered "*" in place of the resource name, you would have allowed these operations on all tables. However, here, we want to allow these operations on just one table, and therefore, the arn (Amazon Resource Name) of this table is provided in the Resource field, using the standard arn format. Here again, the first of either the option region (specified during serverless deploy) or the region mentioned in provider (us-east-1 by default) is used.
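The standard arn format for a DynamoDB table is just a colon-separated string, so it is easy to assemble by hand. Below is an illustrative sketch; `table_arn` is a hypothetical helper, and the "*" account id mirrors the wildcard used in the serverless.yml above.

```python
def table_arn(region, account_id, table_name):
    """Assemble a DynamoDB table ARN in the standard format:
    arn:aws:dynamodb:<region>:<account-id>:table/<table-name>"""
    return f"arn:aws:dynamodb:{region}:{account_id}:table/{table_name}"

# The resolved Resource line for the default stage and region:
print(table_arn("us-east-1", "*", "serverless-rest-api-with-dynamodb-dev"))
# arn:aws:dynamodb:us-east-1:*:table/serverless-rest-api-with-dynamodb-dev
```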
In the functions section, the functions are defined as per the standard format. Notice that get, update, delete all have the same path, with id as the path parameter. However, the method is different for each.
functions:
  create:
    handler: todos/create.create
    events:
      - http:
          path: todos
          method: post
          cors: true
  list:
    handler: todos/list.list
    events:
      - http:
          path: todos
          method: get
          cors: true
  get:
    handler: todos/get.get
    events:
      - http:
          path: todos/{id}
          method: get
          cors: true
  update:
    handler: todos/update.update
    events:
      - http:
          path: todos/{id}
          method: put
          cors: true
  delete:
    handler: todos/delete.delete
    events:
      - http:
          path: todos/{id}
          method: delete
          cors: true
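Conceptually, the functions section above is a routing table: API Gateway maps each (method, path) pair to exactly one handler. The sketch below shows that mapping as a plain Python dict − an illustration only, not how the framework stores routes (the leading slashes are added here for readability).

```python
# Each (HTTP method, path) pair routes to one lambda handler, so the same
# path can appear multiple times as long as the method differs.
ROUTES = {
    ("POST", "/todos"): "todos/create.create",
    ("GET", "/todos"): "todos/list.list",
    ("GET", "/todos/{id}"): "todos/get.get",
    ("PUT", "/todos/{id}"): "todos/update.update",
    ("DELETE", "/todos/{id}"): "todos/delete.delete",
}

# GET /todos/{id} and PUT /todos/{id} share a path but hit different handlers.
print(ROUTES[("GET", "/todos/{id}")])  # todos/get.get
print(ROUTES[("PUT", "/todos/{id}")])  # todos/update.update
```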
Later on, we come across another block that we haven't seen before, the resources block. This block basically helps you specify the resources that you will need to create, in a CloudFormation template, for the functions to work. In this case, we need to create a DynamoDB table for the functions to work. So far, we have specified the name of the table, and even referenced its ARN. But we haven't created the table. Specifying the characteristics of the table in the resources block will create that table for us.
resources:
  Resources:
    TodosDynamoDbTable:
      Type: AWS::DynamoDB::Table
      DeletionPolicy: Retain
      Properties:
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
        TableName: ${self:provider.environment.DYNAMODB_TABLE}
There are a lot of configurations being defined here, most of them specific to DynamoDB. Briefly, we are asking serverless to create a TodosDynamoDbTable, of type DynamoDB Table, with TableName (mentioned right at the bottom) equal to the one defined in the environment variables in provider. We are setting its deletion policy to Retain, which means that if the stack is deleted, the resource is retained. We are saying that the table will have an attribute named id, and its type will be String. We are also specifying that the id attribute will be a HASH key, or partition key. You can read up more about KeySchemas in DynamoDB tables. Finally, we are specifying the read capacity and write capacity of the table.
That's it! Our serverless.yml file is now ready. Now, since all the function handlers are more or less similar, we will walk through just one handler, that of the create function.
Walkthrough of the create function handler
We begin with a couple of import statements −
import json
import logging
import os
import time
import uuid
Next, we import boto3, which, as described above, is the AWS SDK for Python. We need boto3 to interface with DynamoDB from within the lambda function.
import boto3

dynamodb = boto3.resource('dynamodb')
Next, in the actual function handler, we first check the contents of the events payload (the create API uses the POST method). If its body doesn't contain a text key, we haven't received a valid item to be added to the to-do list. Therefore, we raise an exception.
def create(event, context):
    data = json.loads(event['body'])
    if 'text' not in data:
        logging.error("Validation Failed")
        raise Exception("Couldn't create the todo item.")
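This validation step can be exercised locally without any AWS setup. The snippet below pulls it out into a standalone function − `validate_body` is a hypothetical name introduced for illustration, and the event dict is a minimal stand-in for a real API Gateway event.

```python
import json
import logging

def validate_body(event):
    """Standalone version of the create handler's validation step:
    parse the JSON body and require a 'text' key."""
    data = json.loads(event["body"])
    if "text" not in data:
        logging.error("Validation Failed")
        raise Exception("Couldn't create the todo item.")
    return data

# A well-formed event (note: the body is a JSON *string*, as API Gateway sends it).
print(validate_body({"body": '{"text": "buy milk"}'}))  # {'text': 'buy milk'}
```

Note that event["body"] arrives as a string, which is why json.loads is needed before the key check.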
Considering that we got the text key as expected, we make preparations for adding it to the DynamoDB table. We fetch the current timestamp, and connect to the DynamoDB table. Notice how the environment variable defined in serverless.yml is fetched (using os.environ) −
    timestamp = str(time.time())
    table = dynamodb.Table(os.environ['DYNAMODB_TABLE'])
Next, we create the item to be added to the table: we generate a random uuid using the uuid package, use the received data as text, set createdAt and updatedAt to the current timestamp, and set the field checked to False. checked is another field which you can update, apart from text, using the update operation.
    item = {
        'id': str(uuid.uuid1()),
        'text': data['text'],
        'checked': False,
        'createdAt': timestamp,
        'updatedAt': timestamp,
    }
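The item-building step is also easy to test in isolation. Below, `build_item` is a hypothetical helper that mirrors the dictionary the create handler assembles, so you can inspect the shape of the record before any DynamoDB call is involved.

```python
import time
import uuid

def build_item(text):
    """Assemble a to-do record the way the create handler does:
    random uuid1 id, the user's text, checked=False, matching timestamps."""
    timestamp = str(time.time())
    return {
        "id": str(uuid.uuid1()),
        "text": text,
        "checked": False,
        "createdAt": timestamp,
        "updatedAt": timestamp,
    }

item = build_item("buy milk")
print(sorted(item))  # ['checked', 'createdAt', 'id', 'text', 'updatedAt']
```

On creation, createdAt and updatedAt are identical; the update handler later changes only updatedAt.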
Finally, we add the item to the DynamoDB table and return the created item to the user.
    # write the todo to the database
    table.put_item(Item=item)

    # create a response
    response = {
        "statusCode": 200,
        "body": json.dumps(item)
    }

    return response
With this walkthrough, I think the other function handlers will be self-explanatory. In some functions, you may see this statement − "body": json.dumps(result['Item'], cls=decimalencoder.DecimalEncoder). This is a workaround for a known limitation in json.dumps − json.dumps can't handle decimal numbers by default, and therefore, the decimalencoder file has been created to contain the DecimalEncoder class which handles this.
Congratulations on understanding your first comprehensive project created using serverless. The creator of the project has also shared the endpoints of his deployment and the ways to test these functions in the project's README file. Have a look. Head on to the next chapter to see another example.
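As a closing aside, the decimalencoder workaround mentioned earlier can be sketched as follows. This is a minimal guess at what such a class looks like, not the project's exact file; it assumes the common pattern of subclassing json.JSONEncoder to convert the Decimal values DynamoDB returns into plain numbers.

```python
import json
from decimal import Decimal

class DecimalEncoder(json.JSONEncoder):
    """Convert Decimal values (which DynamoDB returns for numbers)
    into ints/floats that json.dumps can serialize."""

    def default(self, obj):
        if isinstance(obj, Decimal):
            # Keep whole numbers as ints, everything else as floats.
            return int(obj) if obj % 1 == 0 else float(obj)
        return super().default(obj)

item = {"id": "abc", "checked": False, "rating": Decimal("4.5"), "count": Decimal("3")}
print(json.dumps(item, cls=DecimalEncoder, sort_keys=True))
# {"checked": false, "count": 3, "id": "abc", "rating": 4.5}
```

Without the cls argument, the same json.dumps call would raise a TypeError on the Decimal values.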