Coding is fun, but deployment can take the fun out of it!
Before cloud architecture gained traction, owning or renting servers was the common strategy. Over time, managing your own servers became a real burden. Many people dreamt of building an application and hosting it online, but investing in machines felt like too big a commitment.
Thanks to the rise of cloud technologies like AWS, you now have someone else to provision servers and databases for you. All you have to do is code and deploy.
Isn’t that easy? Not always. You still have to deal with external machines, and they are not cheap. If you don’t need them running 24/7, paying for them doesn’t make financial sense.
As humans evolve, we tend to find more and more shortcuts. Enter serverless architecture!
This new technology made it faster and easier to deploy your applications.
But the name serverless is a misnomer. It doesn’t mean that you don’t need servers to run your code. You still need them, but you don’t have to manage the servers.
It might seem like serverless architecture doesn’t suit big applications, but I know of multi-million-dollar companies that run their entire application on serverless computing.
Why would you want to go serverless?
- If you are prototyping your application and don’t want to pay for a 24/7 server, serverless architecture is your solution. When you are starting out, your app’s traffic is sparse, so it’s better to pay only for the processing power you actually use.
- You don’t have to worry about scaling; the provider takes care of it.
- Developers can focus on app development and worry less about maintaining infrastructure and handling problems associated with it.
- It’s ideal for fast-growing and rapidly evolving applications. This is because it’s easy to introduce new services using a serverless architecture.
Building our serverless application
When my company tasked me with building a serverless application on AWS, I had no idea how to do it. After a lot of googling and Stack Overflowing, I figured it out.
Our app was called Truemailer. Its purpose was simple: given an email ID, you get the ID owner’s name and photo.
Let me brief you on the stack we used and how we went about implementing it!
1. Amazon Cognito
The first thing to sort out in an application is user management. You need to decide where and how you will store your user information. For our purposes, we chose Amazon Cognito.
Amazon Cognito handles user authentication, social sign-ups, SAML, OpenID Connect, you name it.
Implementing these from scratch is pretty tedious and can slow down application development. Handing the reins to a trustworthy third party can make it easier.
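Just to give you a feel for it, here is a minimal sketch of how a backend could talk to Cognito through boto3. The client ID and region are placeholders, and it assumes the USER_PASSWORD_AUTH flow is enabled on the app client (not necessarily how Truemailer wired it up).

```python
import boto3

# Placeholder app client ID and region; yours will differ.
CLIENT_ID = "your-user-pool-app-client-id"
cognito = boto3.client("cognito-idp", region_name="us-east-1")

def sign_up_user(email: str, password: str) -> None:
    """Register a new user in the Cognito user pool."""
    cognito.sign_up(
        ClientId=CLIENT_ID,
        Username=email,
        Password=password,
        UserAttributes=[{"Name": "email", "Value": email}],
    )

def log_in_user(email: str, password: str) -> str:
    """Authenticate a user and return their ID token (a JWT).

    Assumes USER_PASSWORD_AUTH is enabled on the app client.
    """
    response = cognito.initiate_auth(
        ClientId=CLIENT_ID,
        AuthFlow="USER_PASSWORD_AUTH",
        AuthParameters={"USERNAME": email, "PASSWORD": password},
    )
    return response["AuthenticationResult"]["IdToken"]
```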
2. AWS Lambda
This is where the core logic of the app was executed. You can bundle your application code into a zip and upload it to AWS Lambda as Lambda functions. Whenever a Lambda function is triggered, it runs your application code.
Our Lambda functions handled a bunch of things (there’s a rough handler sketch after the list):
- Receive requests for email ID queries and process them
- Handle payment from users
- Store data in DynamoDB
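To give you an idea of what such a function can look like, here is a minimal handler sketch for the email query case. The table and field names are made up for illustration; this is not Truemailer’s actual code.

```python
import json
import boto3

# Hypothetical table name, used here only for illustration.
profiles = boto3.resource("dynamodb").Table("truemailer-profiles")

def lambda_handler(event, context):
    """Look up the profile stored for a queried email ID."""
    body = json.loads(event.get("body") or "{}")
    email = body.get("email")
    if not email:
        return {"statusCode": 400, "body": json.dumps({"error": "email is required"})}

    item = profiles.get_item(Key={"email": email}).get("Item")
    if not item:
        return {"statusCode": 404, "body": json.dumps({"error": "profile not found"})}

    return {
        "statusCode": 200,
        "body": json.dumps({"name": item.get("name"), "photo_url": item.get("photo_url")}),
    }
```

You zip code like this up together with its dependencies, upload it, and point the function’s handler setting at `lambda_handler`.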
3. API Gateway
Now that the application logic and user management are taken care of, you will need something to route your users’ requests to your app and trigger the right Lambda function.
That’s where API Gateway comes in.
Using API Gateway, you can define the routes and assign which Lambda function to trigger for each route.
To be honest, API Gateway was a bit tricky to work with since nested routes can cause some confusion.
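One common pattern with API Gateway’s Lambda proxy integration is to let a single function dispatch on the method and path it receives in the event. A rough sketch, with routes made up for illustration:

```python
import json

def lambda_handler(event, context):
    """Dispatch an API Gateway (Lambda proxy) request by method and path."""
    method = event.get("httpMethod")
    path = event.get("path", "")

    if method == "POST" and path == "/query":
        return respond(200, {"message": "handle an email ID query here"})
    if method == "POST" and path == "/payments":
        return respond(200, {"message": "handle a payment here"})

    # Nested routes such as /users/{id}/payments also land here,
    # which is where the confusion mentioned above usually starts.
    return respond(404, {"error": f"no handler for {method} {path}"})

def respond(status, body):
    return {
        "statusCode": status,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```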
4. DynamoDB
DynamoDB is a solid candidate for managing data in a serverless architecture.
When it comes to databases, the complexities are not limited to storing information. You need to make sure that
- your indexing is correct
- you replicate data enough to keep it highly available
- your storage is optimised for the kind of data processing you are doing
For our use case, DynamoDB fit well.
We used it to store users’ payment information, profile information for the email IDs queried, and so on. Because DynamoDB stores items as schemaless, JSON-like documents, it was quite easy to deal with, since our data had a dynamic schema.
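Here is a small sketch of what that dynamic schema looks like in practice with boto3. The table and attribute names are made up; only the partition key ("email" here) has to be fixed, the rest of each item can vary.

```python
import boto3

# Hypothetical table; only the key schema is fixed, the other
# attributes can differ from item to item.
table = boto3.resource("dynamodb").Table("truemailer-profiles")

def save_profile(email: str, **attributes) -> None:
    """Store whatever attributes we managed to collect for this email ID."""
    table.put_item(Item={"email": email, **attributes})

def load_profile(email: str):
    """Fetch a profile document, or None if we have never seen this email."""
    return table.get_item(Key={"email": email}).get("Item")

# Two items with different shapes can live in the same table:
save_profile("alice@example.com", name="Alice", photo_url="https://example.com/a.jpg")
save_profile("bob@example.com", name="Bob", company="Example Inc", verified=True)
```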
To learn more about various database types and where to use them, check out my previous issue.
5. Others
There were a few other components that deserve a mention (rough usage sketches follow the list):
- An S3 bucket is the de facto choice for dumping large files if you are using AWS, so it was a no-brainer for storing the application’s invoices and images.
- For queuing-related tasks, we used Amazon SQS.
- For sending out emails to our customers, we used Amazon SES.
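To give a feel for how these are typically called from a Lambda through boto3, here is a rough sketch. Every bucket name, queue URL and email address below is made up.

```python
import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")
ses = boto3.client("ses")

# Dump an invoice into S3 (bucket and key are placeholders).
s3.put_object(
    Bucket="truemailer-invoices",
    Key="invoices/order-123.pdf",
    Body=b"...pdf bytes...",
)

# Queue a background task for another worker to pick up.
sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/truemailer-tasks",
    MessageBody='{"task": "refresh-profile", "email": "alice@example.com"}',
)

# Send a transactional email (the sender address must be verified in SES).
ses.send_email(
    Source="billing@truemailer.example",
    Destination={"ToAddresses": ["alice@example.com"]},
    Message={
        "Subject": {"Data": "Your invoice"},
        "Body": {"Text": {"Data": "Thanks for your purchase!"}},
    },
)
```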
Deploying your serverless architecture
Once you have figured out what services to use to effectively run your app, it’s time to have an efficient way of deploying it. After all, you cannot individually manage each service.
So how do we deploy this? The Serverless Framework to the rescue. Using this framework, you can describe your services and associated resources in a single file and, with one command, deploy them to your cloud account. Poof!
This is quite similar to Dockerfiles!
Wrapping up
I hope that this issue has given you enough understanding to start your first project using serverless architecture. My aim is to give you a general idea of technologies and tech stacks, so that if you are interested in building your application with them, you can do a deep dive.
There are lots of tutorials on YouTube that can give you a detailed understanding of how to write Serverless configuration files and how to configure your AWS account to accept those deployments.
Thanks for reading 😃
I write about Software Engineering and how to scale your applications in my weekly newsletter. To get such stories directly in your inbox, subscribe to it! 😃
If you like my content and want to support me to keep me going, consider buying me a coffee ☕️ ☕️