SERVERLESS — Attractive Engineering Practice
What is Serverless?
Serverless is a software design approach that enables developers to build and run services without managing the underlying infrastructure. Developers write and deploy code, while a cloud provider provisions the servers that run their applications, databases, and storage systems at any scale.
Why is Serverless attractive?
When we say Serverless, we mean that there is no server or container management, no idle capacity, and that scaling is flexible and highly available.
This makes app development and operations dramatically faster, cheaper, and easier, while also cutting infrastructure costs.
What is Serverless Architecture?
Serverless Architecture differs from other cloud computing models in that the cloud provider manages both the cloud infrastructure and app scaling.
Users pre-purchase capacity units in a standard Infrastructure-as-a-Service (IaaS) cloud computing model, which means you pay a public cloud provider for always-on server components to run your apps. Increasing server capacity during times of high demand and decreasing it when it is no longer required is the user’s responsibility. Even when an app isn’t being used, the cloud infrastructure required to run it is active.
In contrast, with Serverless architecture, apps are launched only as needed. When an event causes app code to run, the public cloud provider dynamically allocates resources for that code.
When the code finishes executing, charges stop. Beyond these cost and efficiency benefits, Serverless frees developers from the routine and menial tasks associated with app scaling and server provisioning.
Routine tasks such as operating system and file system management, security patches, load balancing, capacity management, scaling, logging, and monitoring are all offloaded to a cloud services provider with Serverless.
It is possible to create an entirely Serverless app or an app that is partially Serverless and partially traditional microservices.
Why Does Serverless Encourage Useful Engineering Practices?
Serverless encourages components that do ONE thing
It is advantageous to design individual software components that are only responsible for one thing. Examples of these advantages include:
They are easier to modify — Making software easy to change is a de facto principle for an IT professional. Using functional programming with pure (ideally idempotent) functions, you always know what to expect as input and output. As a result, changing your code is simple. Serverless functions, when written correctly, encourage code that is easy to change and stateless.
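As a minimal sketch of this idea, here is a pure, stateless Lambda-style handler in Python. The event shape and function name are illustrative assumptions, not a real API contract; the point is that the same input always produces the same output, with no shared state:

```python
# A pure, idempotent handler: output depends only on the input event,
# so it is safe to retry and easy to test and change.
def handler(event, context=None):
    items = event.get("items", [])
    total = sum(item["price"] * item["quantity"] for item in items)
    return {"statusCode": 200, "total": total}
```

Because the function touches no global state, calling it twice with the same event yields the same result, which is exactly the property that makes retries and redeployments safe.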
They are simpler to deploy — redeploying a single function or container should not disrupt other parts of your architecture if the changes you made to an individual service do not affect other components. That is one reason why many people switch from a “monorepo” to one repository per service.
It enforces self-contained execution environments
Serverless doesn’t only force you to make your components small, but it also requires that you define all resources needed for the execution of your function or container. That means that you cannot rely on any pre-configured state — you need to specify all package dependencies, environment variables, and any configuration you need to run your application.
Whether you use FaaS or a Serverless container, your environment must remain self-contained since code can be executed on an entirely different server any time you run it.
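One common way to keep a function self-contained is to make every piece of configuration explicit, typically via environment variables declared in your deployment definition. A small sketch (the variable names `TABLE_NAME` and `LOG_LEVEL` are example assumptions):

```python
import os

def load_config():
    """Read all runtime configuration explicitly from the environment,
    relying on no pre-configured server state."""
    return {
        "table_name": os.environ["TABLE_NAME"],            # fail fast if missing
        "log_level": os.environ.get("LOG_LEVEL", "INFO"),  # explicit default
    }
```

Failing fast on a missing required variable is deliberate: since your code may run on a different server every invocation, a misconfigured environment should surface immediately rather than at some later point.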
More frequent deployments are encouraged
Nothing prevents you from deploying more frequently if your components are small, self-contained, and can be executed independently of one another. You still need to coordinate functionality across individual components (especially around shared data), but individual deployments become more autonomous.
It promotes the security principle of least privilege
In principle, your Serverless components could still run under an admin user with permission to access and do everything. However, Serverless compute platforms, such as AWS Lambda, encourage you to grant a function permissions only to the services strictly needed for its execution, effectively applying the principle of least privilege.
On top of that, by using IAM roles you can avoid hard-coding credentials or relying on secrets stored in external services or environment variables. Small Serverless components encourage you to grant permissions on a per-service or even per-function level.
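To make the idea concrete, here is a hypothetical least-privilege policy for a function that only ever reads one DynamoDB table, expressed as a Python dict in the standard IAM JSON policy shape. The table ARN and account ID are placeholders:

```python
# Hypothetical least-privilege policy: only read actions, only one table.
# The ARN below is a placeholder, not a real resource.
READ_ORDERS_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders",
        }
    ],
}
```

Contrast this with an admin policy granting `"Action": "*"` on `"Resource": "*"`: if the function is ever compromised, the blast radius here is limited to reading a single table.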
It allows you to achieve high availability and fault tolerance easily
Most Serverless components are designed to offer high availability (HA). For instance, by default, AWS Lambda is deployed across multiple availability zones and retries twice if an asynchronous invocation fails. Achieving the same with non-serverless resources is possible but far from trivial.
Similarly, your containerized ECS tasks, your DynamoDB tables, and your S3 objects are, or can easily be, deployed across multiple availability zones (or subnets) for resilience. Most DevOps engineers who leverage the "Infrastructure as Code" paradigm will appreciate that.
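The retry behavior described above is applied by the platform itself, but its logic can be sketched in a few lines. This is an illustrative stand-in, not Lambda's actual implementation; the parameter names are assumptions:

```python
import time

def invoke_with_retries(fn, event, max_retries=2, base_delay=0.0):
    """Call fn(event), retrying up to max_retries times on failure,
    with exponential backoff between attempts (similar in spirit to
    Lambda's default two retries for failed asynchronous invocations)."""
    attempts = 0
    while True:
        try:
            return fn(event)
        except Exception:
            attempts += 1
            if attempts > max_retries:
                raise
            time.sleep(base_delay * (2 ** (attempts - 1)))
```

Note that platform-level retries are only safe if your handlers are idempotent, which is another reason the "components that do one thing" principle matters.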
Serverless Infrastructure as Code
You’ve probably experienced this at some point in your IT career: you meticulously installed everything on a machine and built all resources so that this server (your “pet”) was configured perfectly. Then, one day you notice that your server is down. You don’t have any backup, and the code you used to configure the system was never stored. It also turns out that some environment variables were responsible for defining user access to various resources. Now everything is gone, and you need to start entirely from scratch.
It isn’t necessary to look only at such extreme failure scenarios to see the danger in treating servers like pets. Imagine having a copy of the same server and resource configuration to create a development or user-acceptance-test environment. Perhaps you want to create a new instance of the same server for scale or provide high availability.
With a manual configuration, you always risk that the environments can end up being different.
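Infrastructure as Code removes that risk by generating every environment from the same definition. A minimal sketch, assuming a CloudFormation-style template shape with an illustrative queue resource (real projects would use CloudFormation, Terraform, or CDK directly):

```python
def make_template(env: str) -> dict:
    """Generate an environment's infrastructure definition from code,
    so dev, test, and prod differ only in the parameters passed in."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "OrdersQueue": {
                "Type": "AWS::SQS::Queue",
                "Properties": {"QueueName": f"orders-{env}"},
            }
        },
    }

# The same function produces structurally identical environments:
dev, prod = make_template("dev"), make_template("prod")
```

Because both environments come from one function, they cannot drift apart the way hand-configured "pet" servers do, and rebuilding a lost environment is a single deploy rather than a forensic exercise.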
Encourages using existing battle-tested components
If you decide to build a Serverless architecture, it’s pretty unlikely that you will end up building your own message queuing system or notification service. Instead, you’ll rely on common, well-known services offered by your cloud provider. Some examples based on AWS:
- Do you need a message queue? Use SQS.
- Do you need to send notifications? Use SNS.
- Do you need to handle secrets? Use Secrets Manager.
- Do you need to build a REST API? Use API Gateway.
- Do you need to manage permissions or user access? Use IAM or Cognito.
- Do you need to store some key-value pairs or data objects? Use DynamoDB or simply dump data to S3.
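The SQS case above can be sketched as a thin wrapper around the managed queue. The client is injected, so a real boto3 SQS client or an in-memory stub can be used interchangeably; the function and queue names are illustrative assumptions:

```python
def enqueue_order(sqs_client, queue_url: str, order_id: str):
    """Delegate queueing to the managed service instead of a homegrown one.
    With boto3, sqs_client would be boto3.client("sqs")."""
    return sqs_client.send_message(QueueUrl=queue_url, MessageBody=order_id)

class StubSQS:
    """In-memory stand-in for the SQS client, useful for local tests."""
    def __init__(self):
        self.messages = []

    def send_message(self, QueueUrl, MessageBody):
        self.messages.append((QueueUrl, MessageBody))
        return {"MessageId": str(len(self.messages))}
```

Keeping the managed service behind one small function means the rest of your code never needs to know how message delivery, retries, or durability are implemented.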
Why is that useful? Many software engineering tasks are no longer challenging for developers, particularly for very experienced programmers who have tackled similar problems many times before. Since software engineers are smart and talented people, they often start building their own, sometimes overly complicated and hard-to-maintain, solutions when they become bored. Leaning on battle-tested managed services keeps that temptation in check.
In this article, we investigated some of the reasons why serverless platforms encourage useful engineering practices. Among them, we saw that Serverless encourages small, self-contained components deployed independently of each other. We noticed that it also helps with security and with the high availability of the overall infrastructure. Finally, we looked at different serverless building blocks that allow us to build resilient and cost-effective architectures.