
In this article by Diego Zanon, author of the book Building Serverless Web Applications, we give you an introduction to the Serverless model and the pros and cons that you should consider before building a serverless application.


Introduction to the Serverless model

Serverless can be a model, a kind of architecture, a pattern, or anything else you prefer to call it. For me, serverless is an adjective, a word that qualifies a way of thinking. It’s a way to abstract how the code that you write will be executed. Thinking serverless is not thinking in servers. You code, you test, you deploy, and that’s (almost) enough.

Serverless is a buzzword. You still need servers to run your applications, but you should not worry about them that much. Maintaining a server is none of your business. The focus is on development and writing code, not on operations.

DevOps is still necessary, although with a smaller role. You need to automate deployment and have at least minimal monitoring of how your application is performing and what it costs, but you won’t start or stop machines to match usage, and you’ll neither replace failed instances nor apply security patches to the operating system.

Thinking serverless

A serverless solution is entirely request-driven. Every time that a user requests some information, a trigger will notify your cloud vendor to pick your code and execute it to retrieve the answer. In contrast, a traditional solution also works to answer requests, but the code is always up and running, consuming machine resources that were reserved specifically for you, even when no one is using your website.

In a serverless architecture, it’s not necessary to load the entire codebase into a running machine to process a single request. To make the loading step faster, only the code that is necessary to answer the request is selected to run. This small piece of the solution is referred to as a function. So we only run functions on demand.

Although we simply call it a function, it’s usually a zipped package that contains a piece of code that runs as an entry point, along with all the modules that this code depends on to execute.

What makes the Serverless model so interesting is that you are only billed for the time that was needed to execute your function, which is usually measured in fractions of seconds, not hours of use. If no one is using your service, you pay nothing.
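The billing model can be sketched as request fee plus execution time. The rates below are made-up round numbers for illustration, not any vendor’s real prices:

```javascript
// Illustrative serverless billing: you pay per request plus per unit of
// execution time, so cost tracks actual usage and is zero when idle.
// Both prices are invented for this sketch.
const PRICE_PER_MILLION_REQUESTS = 0.20; // currency units
const PRICE_PER_GB_SECOND = 0.0000167;

function monthlyCost(requests, avgDurationMs, memoryGb) {
  const requestCost = (requests / 1e6) * PRICE_PER_MILLION_REQUESTS;
  // Compute time is billed as memory size times execution seconds.
  const gbSeconds = requests * (avgDurationMs / 1000) * memoryGb;
  return requestCost + gbSeconds * PRICE_PER_GB_SECOND;
}

// 1 million requests at 200 ms each with 128 MB of RAM costs well under
// one currency unit, and zero requests cost exactly zero: no idle charge.
```

Compare this with a traditional server, which bills the same amount per hour whether it handled a million requests or none.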

Also, if you have a sudden peak of users accessing your app, the cloud service will spin up additional instances to handle all simultaneous requests. If one of those cloud machines fails, another one will be made available automatically, without your interference.

Serverless and PaaS

Serverless is often confused with Platform as a Service (PaaS). PaaS is a kind of cloud computing model that allows developers to launch applications without worrying about the infrastructure. By this definition, they share the same objective, which is true. Serverless is like a rebranding of PaaS, or you can call it the next generation of PaaS.

The main difference between PaaS and Serverless is that in PaaS you don’t manage machines, but you know that they exist, and you are billed for provisioning them, even if no user is actively browsing your website. In PaaS, your code is always running and waiting for new requests. In Serverless, there is a service listening for requests that will trigger your code to run only when necessary. This is reflected in your bill: you pay only for the fractions of seconds that your code was executed and for the number of requests made to this listener.

IaaS and on-premises

Besides PaaS, Serverless is frequently compared to Infrastructure as a Service (IaaS) and on-premises solutions to expose its advantages. IaaS is another strategy for deploying cloud solutions, where you hire virtual machines and are allowed to connect to them to configure everything that you need in the guest operating system. It gives you greater flexibility, but comes with more responsibilities. You need to apply security patches, handle occasional failures, and set up new servers to handle usage peaks. Also, you pay the same per hour whether you are using 5% or 100% of the machine’s CPU.

On-premises is the traditional kind of solution, where you buy the physical computers and run them inside your company. You get total flexibility and control with this approach. Hosting your own solution can be cheaper, but only when your traffic is extremely stable. Over or under provisioning computers is so frequent that it’s hard to see real gains with this approach, especially when you add the risks and costs of hiring a team to manage those machines. Cloud providers may look expensive, but several detailed use cases show that the return on investment (ROI) is higher running in the cloud than on-premises. When using the cloud, you benefit from the economy of scale of many gigantic data centers. Running on your own exposes your business to a wide range of risks and costs that you’ll never be able to anticipate.

The main goals of serverless

To define a service as serverless, it must have at least the following features:

  • Scale as you need: There is no under or over provisioning
  • Highly available: Fault-tolerant and always online
  • Cost-efficient: Never pay for idle servers

Scalability

With IaaS, you can achieve virtually infinite scalability with any cloud service. You just need to hire new machines as your usage grows. You can also automate the process of starting and stopping servers as your demand changes. But this is not a fast way to scale. When you start a new machine, you usually need around 5 minutes before it is usable to process new requests. Also, as starting and stopping machines is costly, you only do this after you are certain that you need it. So, your automated process will wait for some minutes to confirm that your demand has changed before taking any action.

IaaS is able to handle well-behaved usage changes, but can’t handle the unexpected high peaks that happen after announcements or marketing campaigns. With serverless, your scalability is measured in seconds, not minutes. Besides scaling far, it scales fast, and it scales per request, without the need to provision capacity.

When usage changes frequently on a scale of minutes, IaaS struggles to provide the needed capacity, while serverless meets even higher demand in less time.

In the following figure, the left graph shows how scalability occurs with IaaS. The right graph shows how well the demand can be satisfied using a serverless solution:

With an on-premises approach, this is an even bigger problem. As usage grows, new machines must be bought and prepared. However, increasing the infrastructure requires purchase orders to be created and approved, delays waiting for the new servers to arrive, and time for the team to configure and test them. It can take weeks to grow, or even months if the company is very big and requires many steps and procedures to be followed.

Availability

A highly available solution is one that is fault-tolerant to hardware failures. If one machine goes out, you can keep running with a satisfactory performance. However, if you lose an entire data center due to a power outage, you need machines in another data center to keep the service online. It generally means duplicating your entire infrastructure by placing each half in a different data center.

Highly available solutions are usually very expensive in IaaS and on-premises. If you have multiple machines to handle your workload, placing them in different physical locations and running a load balancing service can be enough. If one data center goes out, you shift the traffic to the remaining machines and scale to compensate. However, there are cases where you pay extra for machines you aren’t using.

For example, if you have a huge relational database that is scaled vertically, you will end up paying for another expensive machine just as a replica to maintain availability. Even for NoSQL databases, if you set up a MongoDB replica set in a consistent model, you pay for instances that act only as secondaries, without serving read requests to alleviate load.

Instead of running idle machines, you can keep them in a cold state, meaning that the machine is prepared but kept off to reduce costs. However, if you run a website that sells products or services, you can lose customers even during small downtimes, and recovering from a cold state can take a few minutes for web servers and even longer for databases.

Considering these scenarios, you get high availability for free in serverless. The cost is already considered in what you pay to use.

Another aspect of availability is how to handle Distributed Denial of Service (DDoS) attacks. When you receive a huge load of requests in a very short time, how do you handle it? There are tools and techniques that help mitigate the problem, for example, blacklisting IPs that go over a specific request rate, but before those tools start to work, you need to scale the solution, and it needs to scale really fast to prevent availability from being compromised. Here, again, serverless has the best scaling speed.

Cost efficiency

It’s impossible to exactly match traffic with what you have provisioned. With IaaS or on-premises, as a rule of thumb, CPU and RAM usage must stay below 90% for a machine to be considered healthy, and it is desirable for the CPU to use less than 20% of capacity under normal traffic. In that case, you are paying for the 80% of capacity that sits idle. Paying for computing resources that you don’t use is not efficient.

Many cloud providers advertise that you pay only for what you use, but they usually offer significant discounts when you commit to 24/7 usage over the long term (one year or more). This means that you pay for machines that keep running even during very low traffic hours. Also, even if you want to shut down machines during those hours, you need to keep at least a minimal infrastructure running 24/7 to keep your web server and databases always online. For high availability, you need extra machines to add redundancy. Again, it’s a waste of resources.

Another efficiency problem is related to databases, especially relational ones. Scaling vertically is a very troublesome task, so relational databases are usually provisioned for peak load. This means that you pay for an expensive machine even though most of the time you don’t need it.

In serverless, you shouldn’t worry about provisioning or idle times. You pay exactly for the CPU and RAM time that is used, measured in fractions of seconds, not hours. A serverless database still needs to store data permanently, so storage represents a cost even if no one is using your system. However, storage is usually very cheap. The higher cost, related to the database engine that runs queries and manipulates data, is billed only for the time used, without counting idle time.

Running a serverless system continuously for one hour has a much higher cost than one hour of a traditional system. The difference is that a real workload will almost never keep one machine at 100% utilization for one hour straight. The cost efficiency of serverless is clearest for websites with varying traffic, not for those with flat traffic.

Pros

We can list the following strengths for the Serverless model:

  • Fast scalability
  • High availability
  • Efficient usage of resources
  • Reduced operational costs
  • Focus on business, not on infrastructure
  • System security is outsourced
  • Continuous delivery
  • Microservices friendly
  • Cost model is startup friendly

Let’s skip the first three benefits, since they were already covered in the previous section, and take a look at the others.

Reduced operational costs

As the infrastructure is fully managed by the cloud vendor, operational costs are reduced: you don’t need to worry about hardware failures, applying security patches to the operating system, or fixing network issues. It effectively means that you need fewer sysadmin hours to keep your application running.

It also helps reduce risk. If you make an investment to deploy a new service and it ends up failing, you don’t need to worry about selling machines or disposing of a data center you have built.

Focus on business

Lean software development advocates that you spend time on what adds value to the final product. In a serverless project, the focus is on business. Infrastructure is a second-class citizen.

Configuring a large infrastructure is a costly and time-consuming task. If you want to validate an idea through a Minimum Viable Product (MVP) without losing time to market, consider using serverless to save time. There are tools that automate deployment, such as the Serverless Framework, which helps developers launch a prototype with minimal effort. If the idea fails, infrastructure costs are minimized, since there are no payments in advance.

System security

The cloud vendor is responsible for managing the security of the operating system, runtime, physical access, networking, and all related technologies that enable the platform to operate. The developer still needs to handle authentication, authorization, and code vulnerabilities, but the rest is outsourced to the cloud provider. This is a positive feature if you consider that a large team of specialists is focused on implementing security best practices and applying bug fixes as soon as possible for hundreds of customers. That’s economy of scale.

Continuous delivery

Serverless is based on breaking a big project into dozens of packages, each one represented by a top-level function that handles requests. Deploying a new version of a function means uploading a ZIP package to replace the previous one and updating the endpoint configuration that specifies how this function can be triggered.

Executing this task manually, for dozens of functions, is exhausting. Automation is a must-have when working on a serverless project. For this task, we can use the Serverless Framework, which helps developers manage and organize solutions, making deployment as simple as executing a one-line command. With automation, continuous delivery follows as a consequence, bringing benefits such as short development cycles and easier rollbacks at any time.

Another related benefit of automated deployment is the creation of different environments. You can create a new test environment, an exact duplicate of the development environment, using simple commands. The ability to replicate environments is very important for acceptance testing and deployment to production.

Microservices friendly

A microservices architecture is encouraged in a serverless project. As your functions are single units of deployment, you can have different teams working concurrently on different use cases. You can also use different programming languages in the same project and take advantage of emerging technologies or team skills.

Cost model

Suppose that you have built a serverless store. The average user will make some requests to see a few products and a few more requests to decide whether to buy something. In serverless, a single unit of code has a predictable execution time for a given input. After collecting some data, you can predict how much a single user costs on average, and this unit cost will stay almost constant as your application grows in usage.

Knowing how much a single user costs, and keeping this number fixed, is very important for a startup. It helps you decide how much you need to charge for a service, or earn through ads or sales, to make a profit.

In a traditional infrastructure, you need to make payments in advance, and scaling your application means increasing your capacity in steps. So, calculating the unit cost of a user is more difficult and it’s a variable number.
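The unit-cost reasoning can be sketched with illustrative numbers (every price below is invented for the example):

```javascript
// With per-request billing, the cost of serving one average user is the
// sum of the costs of the requests that user makes. That unit cost stays
// roughly constant whether you serve 100 or 100,000 users.
const COST_PER_REQUEST = 0.0000002;         // illustrative request fee
const COST_PER_MS_OF_COMPUTE = 0.000000002; // illustrative duration fee

function costPerUser(requestsPerVisit, avgDurationMs) {
  return (
    requestsPerVisit *
    (COST_PER_REQUEST + avgDurationMs * COST_PER_MS_OF_COMPUTE)
  );
}

// 10 requests per visit at 150 ms each gives a tiny, fixed unit cost.
// Total cost then scales linearly with the user count, which makes
// revenue-per-user targets straightforward to reason about.
const unit = costPerUser(10, 150);
const totalFor100kUsers = unit * 100000;
```

In a traditional infrastructure, the same calculation would have to divide a step-shaped provisioning cost by a varying user count, so the per-user number moves constantly.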

The following chart makes this difference easier to understand:

Cons

Serverless is great, but no technology is a silver bullet. You should be aware of the following issues:

  • Higher latency
  • Constraints
  • Hidden inefficiencies
  • Vendor dependency
  • Debugging
  • Atomic deploys
  • Uncertainties

We will discuss these drawbacks in detail now.

Higher latency

Serverless is request-driven and your code is not running all the time. When a request is made, it triggers a service that finds your function, unzips the package, loads it into a container, and makes it available to be executed. The problem is that these steps take time, up to a few hundred milliseconds. This issue is called cold start delay, and it is the trade-off between the cost-effective serverless model and the lower latency of traditional hosting.

There are some ways to reduce this performance problem. For example, you can configure your function to reserve more RAM, which gives a faster start and better overall performance. The programming language also matters: Java has a much higher cold start time than JavaScript (Node.js).

Another solution is to benefit from the fact that the cloud provider may cache the loaded code, which means that the first execution will have a delay, but subsequent requests will benefit from lower latency. You can optimize a serverless function by aggregating a large number of functionalities into a single function: that package will be executed more frequently and will often skip the cold start issue. The downside is that a big package takes more time to load, provoking a longer first start.
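A related pattern follows from the fact that providers typically reuse a warm container between invocations: work done outside the handler runs once per cold start and is shared by later calls. A minimal sketch, where `expensiveInit` stands in for loading configuration, opening database connections, or parsing large files:

```javascript
// Code at module level runs once, when the container cold starts.
// Warm invocations of the same container reuse whatever it produced.
let initCount = 0;

function expensiveInit() {
  initCount += 1; // counts how many times we actually paid the cost
  return { connectedAt: Date.now() };
}

const shared = expensiveInit(); // cold start: executed a single time

const handler = async () => {
  // Warm invocations skip straight here and reuse `shared`.
  return { statusCode: 200, body: JSON.stringify({ inits: initCount }) };
};
```

However many times the warm container invokes `handler`, the initialization cost is paid only once, so keeping heavyweight setup outside the handler shrinks the per-request latency.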

As a last resort, you can schedule another service to ping your functions periodically, say once every couple of minutes, to prevent them from being put to sleep. This adds cost, but removes the cold start problem.
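One way to implement such a keep-warm ping is to have the scheduled event carry a marker that the handler recognizes and returns on immediately, so the ping does almost no billable work. The `warmup` field name is just a convention invented for this sketch:

```javascript
// A timed trigger invokes the function every few minutes with a marker
// in the event. The handler short-circuits on those pings, keeping the
// container warm at minimal cost, and does real work otherwise.
const handler = async (event) => {
  if (event && event.warmup) {
    return { warmed: true }; // no real work for keep-warm pings
  }
  // ... normal request handling would go here ...
  return { statusCode: 200, body: 'real work done' };
};
```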

There is also a concept of serverless databases, referring to services where the database is fully managed by the vendor, who charges only for storage and for the time the database engine executes. These solutions are wonderful, but they add even more delay to your requests compared with traditional databases. Proceed with caution when selecting them.

Constraints

If you go serverless, you need to know what the vendor constraints are. For example, on Amazon Web Services (AWS), you can’t run a Lambda function for more than 5 minutes. It makes sense because if you’re doing this, you are using it wrong. Serverless was designed to be cost efficient in short bursts. For constant and predictable processing, it will be expensive.

Another constraint on AWS Lambda is the number of concurrent executions across all functions within a given region, which Amazon limits to 100 by default. Suppose that your functions need 100 milliseconds on average to execute. In this scenario, you can handle up to 1,000 users per second. The reasoning behind this restriction is to avoid excessive costs due to programming errors that may create runaway processes or recursive invocations.
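The arithmetic behind that throughput bound is simple: each concurrent "slot" can serve 1000 ms divided by the average duration requests per second, multiplied by the concurrency limit:

```javascript
// Throughput ceiling implied by a concurrent-execution limit:
// each slot serves (1000 / avgDurationMs) requests per second.
function maxRequestsPerSecond(concurrencyLimit, avgDurationMs) {
  return concurrencyLimit * (1000 / avgDurationMs);
}

// 100 concurrent executions at 100 ms each: 100 * 10 = 1000 req/s.
// If the functions slow to 500 ms, the same limit caps you at 200 req/s.
```

Note how the ceiling falls as functions get slower, which is another reason to watch average execution time closely.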

AWS Lambda has a default limit of 100 concurrent executions. However, you can file a case with the AWS Support Center to raise this limit. If you state that your application is ready for production and that you understand the risks, they will happily increase this value.

When monitoring your Lambda functions using Amazon CloudWatch, there is an option called Throttles. Each invocation that exceeds the safety limit of concurrent calls is counted as one throttle.

Hidden inefficiencies

Some people call serverless a NoOps solution. That’s not true. DevOps is still necessary. You don’t need to worry much about servers, because they are second-class citizens and the focus is on your business. However, adding metrics and monitoring your applications is always good practice. Scaling is so easy that a function with poor performance, taking much more time than necessary, may be deployed and remain unnoticed forever, because no one is monitoring the operation.

Over or under provisioning is also possible (on a smaller scale), since you need to configure the amount of RAM that your function reserves and the threshold at which its execution times out. It’s a very different scale of provisioning, but you need to keep it in mind to avoid mistakes.

Vendor dependency

When you build a serverless solution, you trust your business to a third-party vendor. You should be aware that companies fail, and you can suffer downtimes, security breaches, and performance issues. Also, the vendor may change the billing model, increase costs, introduce bugs into their services, have poor documentation, modify an API forcing you to upgrade, or terminate services. A whole bunch of bad things may happen.

What you need to weigh is whether it’s worth trusting another company or making a big investment to build everything yourself. You can mitigate these problems by doing market research before selecting a vendor. However, you still need to count on luck. For example, Parse was a vendor that offered managed services with really nice features. It was bought by Facebook in 2013, which suggested more reliability, since it was backed by a big company. Unfortunately, Facebook decided to shut down all servers in 2016, giving customers one year of notice to migrate to other vendors.

Vendor lock-in is another big issue. When you use cloud services, it’s very likely that one vendor’s implementation of a specific service is completely different from another’s, resulting in two different APIs, so you need to rewrite code if you decide to migrate. This is already a common problem: if you use a managed service to send e-mails, you need to rewrite part of your code before migrating to another vendor. What raises a red flag here is that a serverless solution is based entirely on one vendor, and migrating the entire codebase can be much more troublesome.

To mitigate this problem, some tools like the Serverless Framework are moving to include multivendor support. Currently, only AWS is supported but they expect to include Microsoft, Google, and IBM clouds in the future, without requiring code rewrites to migrate. Multivendor support represents safety for your business and gives power to competitiveness.

Debugging

Unit testing a serverless solution is fairly simple, because any code that your functions rely on can be separated into modules and unit tested. Integration tests are a bit more complicated, because you need to be online to test against external services. You can build stubs, but you lose some fidelity in your tests.

When it comes to debugging, to test a feature or fix an error, it’s a whole different problem. You can’t hook into an external service and step slowly through the processing to see how your code behaves. Also, those serverless APIs are not open source, so you can’t run them in-house for testing. All you have is the ability to log steps, which is a slow debugging approach, or you can extract the code, adapt it to run on your own servers, and make local calls.

Atomic deploys

Deploying a new version of a serverless function is easy. You update the code, and the next time a trigger requests this function, your newly deployed code will be selected to run. This means that, for a brief moment, two instances of the same function can execute concurrently with different implementations. Usually, that’s not a problem, but when you deal with persistent storage and databases, you should be aware that a new piece of code can insert data in a format that an old version can’t understand.
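One defensive pattern for this window is to tag persisted records with a schema version, so readers accept both formats while old and new function versions run side by side. The field names and formats below are invented for the sketch:

```javascript
// During a rolling deploy, old and new code versions coexist briefly.
// Versioned records let a reader handle data written by either version
// instead of breaking on the format it doesn't expect.
function writeRecord(name, price) {
  // New format (version 2): store the price as integer cents.
  return { schemaVersion: 2, name, priceCents: Math.round(price * 100) };
}

function readPrice(record) {
  if (record.schemaVersion === 2) {
    return record.priceCents / 100; // new format: integer cents
  }
  return record.price; // old format: floating-point price field
}
```

Once every running instance understands version 2, the fallback branch for the old format can be retired in a later deploy.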

Also, if you want to deploy a function that relies on a new implementation of another function, you need to be careful about the order in which you deploy those functions. Ordering is often not guaranteed by the tools that automate the deployment process.

The problem here is that current serverless implementations consider that deployment is an atomic process for each function. You can’t batch deploy a group of functions atomically. You can mitigate this issue by disabling the event source while you deploy a specific group, but that means introducing a downtime into the deployment process, or you can use a monolith approach instead of a microservices architecture for serverless applications.

Uncertainties

Serverless is still a pretty new concept. Early adopters are braving this field, testing what works and which kinds of patterns and technologies can be used. Emerging tools are defining the development process. Vendors are releasing and improving new services. There are high expectations for the future, but the future hasn’t come yet. Some uncertainties still worry developers when it comes to building large applications. Being a pioneer can be rewarding, but risky.

Technical debt is a concept that compares software development with finances. The easiest solution in the short run is not always the best overall solution. When you make a bad decision in the beginning, you pay later with extra hours to fix it. Software is not perfect. Every single architecture has pros and cons that add technical debt in the long run. The question is: how much technical debt does serverless add to the software development process? Is it more, less, or equivalent to the kind of architecture that you are using today?

Summary

In this article, you learned about the Serverless model and how it differs from other traditional approaches. You now know the main benefits and advantages it may offer for your next application. You are also aware that no technology is a silver bullet: you know what kinds of problems you may have with serverless and how to mitigate some of them.

Resources for article

Refer to Building Serverless Architectures, available at https://www.packtpub.com/application-development/building-serverless-architectures for further resources on this subject.
