Why Serverless/FaaS is not there yet

Dragos
Founder, robot with feelings. From planet Aiur.

Serverless offerings from different cloud service providers

It all started in 2014, when AWS launched Lambda and promised developers a better, more hands-off way of running their systems.

Google Cloud and Azure followed with Cloud Functions and Azure Functions respectively.

The promised benefits are tremendous. Developers would get closer to event-driven, microservice architectures and worry less about production environments going down under intense or spiky traffic, all while paying a lot less money for it.

But in the developer world, things haven't taken off. The push was extraordinary; the pull, not so much. If the benefits are so clear for both sides, why has this happened?


Runtimes are proprietary

Developers and their companies fear closed-source runtimes. AWS, GCP and Azure each have their own separate internal implementation of the runtime, and no one outside really knows what it's doing.

Some cloud providers, like IBM, opted to run on Apache OpenWhisk, which is open source. Others, like Oracle Cloud, built their own runtime and open-sourced it: it's called Fn - https://github.com/fnproject/fn. Let's applaud both IBM and Oracle for doing this from the beginning.

In 2018 AWS open-sourced Firecracker, the microVM technology underneath Lambda, and introduced custom runtimes, which made it possible to support any language as long as somebody builds an adapter for it.
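To make the adapter idea concrete, here is a minimal sketch of the loop such a custom runtime has to run against Lambda's Runtime API. The endpoint paths follow AWS's published Runtime API; the echo "handler" is a hypothetical placeholder for whatever language or framework the adapter is wrapping.

```typescript
// Minimal sketch of a custom-runtime adapter loop (assumes Node 18+ for global fetch).
// AWS_LAMBDA_RUNTIME_API is provided by Lambda; the echo "handler" is a placeholder.
const api = process.env.AWS_LAMBDA_RUNTIME_API;

async function nextInvocation() {
  // Long-poll the Runtime API for the next event to process.
  const res = await fetch(`http://${api}/2018-06-01/runtime/invocation/next`);
  const requestId = res.headers.get("lambda-runtime-aws-request-id");
  return { requestId: requestId ?? "", event: await res.json() };
}

async function respond(requestId: string, body: unknown) {
  // Post the handler's result back for this specific invocation.
  await fetch(`http://${api}/2018-06-01/runtime/invocation/${requestId}/response`, {
    method: "POST",
    body: JSON.stringify(body),
  });
}

async function main() {
  while (true) {
    const { requestId, event } = await nextInvocation();
    const result = { echoed: event }; // your language of choice would do the real work here
    await respond(requestId, result);
  }
}

main();
```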

In 2019 Google Cloud launched Cloud Run, which lets us package our functions as Docker containers and therefore pick and choose our own runtimes.
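The contract Cloud Run asks for is intentionally small: the container just has to serve HTTP on the port passed in the PORT environment variable. A minimal sketch using Node's built-in HTTP server:

```typescript
// Minimal sketch of a Cloud Run-style container entrypoint: listen on $PORT
// (8080 by default) and answer plain HTTP; anything that can do this can be deployed.
import { createServer } from "node:http";

const port = Number(process.env.PORT ?? 8080);

createServer((req, res) => {
  // Your "function" is simply whatever handles the request here.
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ path: req.url }));
}).listen(port, () => console.log(`listening on ${port}`));
```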

Great moves from Google and AWS, but the other issues outlined below are still there.

Runtime Fragmentation

Because nothing is standardized, runtimes differ greatly. When you build a serverless function you get an object containing the event's properties, but those properties are different for each cloud vendor.

This makes it close to impossible to write your code once and deploy it wherever you wish.
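To illustrate, here is the same trivial "read a query parameter" logic written against two different event shapes. The field names follow the AWS API Gateway proxy event and the Express-style request that Google Cloud Functions passes in; treat them as indicative rather than exhaustive.

```typescript
// The same logic, written twice because the event shapes differ per vendor.

// AWS Lambda behind API Gateway: data arrives on an event object.
export const awsHandler = async (event: { queryStringParameters?: Record<string, string> }) => {
  const name = event.queryStringParameters?.name ?? "world";
  return { statusCode: 200, body: `Hello, ${name}` };
};

// Google Cloud Functions (HTTP): data arrives on an Express-like request/response pair.
export const gcpHandler = (
  req: { query: Record<string, unknown> },
  res: { send: (body: string) => void }
) => {
  const name = (req.query.name as string) ?? "world";
  res.send(`Hello, ${name}`);
};
```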

Costs are unpredictable

Providers bill you on call count and on function duration. That might seem straightforward, but most providers don't let your function be called over HTTP directly.

Take AWS Lambda, for example: to call your function from the outside, you must go through API Gateway, which is difficult to set up programmatically and comes with an additional cost.

It feels like a freemium model with a VERY expensive premium tier. If you have an app or API with low to medium traffic you could be paying nothing; if you have very high traffic, you'll be charged an arm and a leg.
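A back-of-the-envelope sketch of why the bill is hard to predict. The unit prices are illustrative (roughly Lambda's published per-request and per-GB-second rates plus an API Gateway per-request fee at the time of writing), so always check current pricing before relying on them.

```typescript
// Rough monthly cost estimate for a moderately busy API behind API Gateway + Lambda.
const requestsPerMonth = 50_000_000;  // 50M calls
const avgDurationSeconds = 0.3;       // 300 ms per call
const memoryGb = 0.5;                 // 512 MB function

const lambdaRequests = (requestsPerMonth / 1_000_000) * 0.20;                            // $ per 1M requests
const lambdaCompute  = requestsPerMonth * avgDurationSeconds * memoryGb * 0.0000166667;  // $ per GB-second
const apiGateway     = (requestsPerMonth / 1_000_000) * 3.50;                            // $ per 1M gateway requests

console.log({
  lambdaRequests: lambdaRequests.toFixed(2), // ~$10
  lambdaCompute: lambdaCompute.toFixed(2),   // ~$125
  apiGateway: apiGateway.toFixed(2),         // ~$175, more than the functions themselves
});
```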

The development experience is lacking

As FaaS is young, most of the developer tooling is rudimentary and the DX is close to horrible. Things are improving, but still: you have to run a separate runtime just to execute your code locally, and breakpoint debugging is really hard to do, sometimes impossible.
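One common ad-hoc workaround is to wrap the handler in a tiny local HTTP server so you can hit it with curl and attach a normal Node debugger. This assumes the hypothetical awsHandler from the fragmentation sketch above lives in ./handler, and it fakes just enough of the event to get by, which is exactly the fragility complained about here.

```typescript
// Local dev harness: run the Lambda-style handler behind a plain HTTP server.
import { createServer } from "node:http";
import { awsHandler } from "./handler"; // hypothetical module from the earlier sketch

createServer(async (req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost");
  // Fake just enough of the API Gateway event for this handler to run.
  const event = { queryStringParameters: Object.fromEntries(url.searchParams) };
  const result = await awsHandler(event);
  res.writeHead(result.statusCode, { "Content-Type": "text/plain" });
  res.end(result.body);
}).listen(3000, () => console.log("local harness on http://localhost:3000"));
```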

Language fragmentation

Not all providers can run your favorite programming language.

AWS runs Node.js, Java, Python, .NET, Go and a bunch of other languages people have built adapters for.

GCP runs everything with their new Cloud Run product.

Azure runs .NET (C#, F#), Node.js and Java, and is experimenting with Python, PHP, TypeScript, Batch, Bash and PowerShell.

IBM Functions runs Node.js, Swift, Java, Go, PHP and Python, plus Docker, which is a huge win because we can run anything.

Oracle Fn claims to support any programming language. That's huge.

In 2019 it seems language fragmentation is no longer an issue, unless you're using a very niche programming language.

WebSockets don't work

Because serverless is stateless, stateful things like WebSockets don't work, and there's no promise they will in the future.

There are alternatives like AWS API Gateway's WebSocket APIs, which provide WebSocket functionality by keeping connection state in a DynamoDB table, but that looks rather expensive and is still proprietary.
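Roughly, that workaround looks like the sketch below: connection IDs get stored in DynamoDB on $connect, and you push messages to them later through the management API. The endpoint and table name are placeholders, and the aws-sdk v2 calls are shown only as an illustration of the pattern.

```typescript
// Sketch of "WebSockets on serverless": fan out a message to previously saved connections.
import { ApiGatewayManagementApi, DynamoDB } from "aws-sdk";

const ddb = new DynamoDB.DocumentClient();
const mgmt = new ApiGatewayManagementApi({
  endpoint: "https://<api-id>.execute-api.<region>.amazonaws.com/<stage>", // placeholder
});

export async function broadcast(message: string) {
  // Read every connection ID previously saved by the $connect handler.
  const { Items = [] } = await ddb.scan({ TableName: "connections" }).promise(); // placeholder table
  await Promise.all(
    Items.map((item) =>
      mgmt.postToConnection({ ConnectionId: item.connectionId, Data: message }).promise()
    )
  );
}
```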

Other solutions involve bringing in a third-party realtime provider like Pusher or PubNub, but the same problems arise there.

Startup times

This is actually the biggest downside. Because your functions are not always on, the system will sometimes respond more slowly, since it first has to spin up an instance with your code on the fly.

There are workarounds, like keeping some instances warm by pinging them, but when traffic arrives beyond those warm instances, it will still be slower for those users.
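The warm-up trick usually amounts to a scheduled rule (for example CloudWatch Events / EventBridge firing every few minutes) invoking the function with a marker payload that the handler short-circuits on. The warmup field below is just a convention, not part of any provider API.

```typescript
// Handler that recognizes scheduled warm-up pings and skips the real work for them.
export const handler = async (event: { warmup?: boolean }) => {
  if (event.warmup) {
    return { statusCode: 200, body: "staying warm" }; // ping costs a few milliseconds
  }
  // ...normal request handling goes here...
  return { statusCode: 200, body: "real response" };
};
```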

While still OK in some situations, doing this isn't really ethical, because cloud providers can only offer low prices if not many instances are on all the time. As soon as everybody keeps their instances warm around the clock, providers will inevitably raise prices.

Startup times can be anything from 500ms for Go to 6s for Node.js and Python, so your users' experience will definitely suffer. That makes FaaS a big no-no for most companies looking to run their APIs on it.

The future

Runtimes are getting better and better, so we might see improved startup times. Some cloud providers already support any language, and are doing it in the open. Hopefully, in the next two to five years, we'll be able to run everything serverless, as promised.

For now, FaaS is really only a good fit for systems without response-time restrictions: background jobs, batch processing, or internal data processing. That's not exactly where developers get the most value from FaaS, though.

The things that matter most to developers seem to be auto-scalability and reliability, both of which are promised, but the technology is just not there yet for most use cases.

How can Archbee help with a Serverless system?

Archbee Wiki provides an easy-to-adopt documentation solution specifically tailored to software development teams. We've studied many teams and their workflows, identified common use cases and coded them into our software's DNA. When your team picks Archbee, you are guaranteed to see upsides like team happiness, synergy and productivity within less than 3 months of using our product.

Frequently Asked Questions

When did AWS launch Lambda?
AWS launched Lambda in 2014.
What are some of the benefits of using serverless architectures?
Serverless architectures provide benefits such as reducing the worry about production environments going down under intense or spiky traffic, the shift towards event-driven and microservice architecture, and cost-effectiveness.
What are some of the issues with serverless runtimes?
Some of the issues with serverless runtimes include their proprietary nature, runtime fragmentation, unpredictable costs, a lacking development experience, language fragmentation and the inability to use WebSockets.
Why are startup times a downside in serverless architectures?
Startup times in serverless architectures can be slow due to the need to spin an instance with your code and respond on the fly, negatively affecting users' experiences. This makes it unattractive for companies looking to run their APIs on FaaS platforms.
What are the aspects where developers get most value from FaaS?
Developers get most value from FaaS in areas such as auto-scalability, reliability, and use cases that don't have response time restrictions like background jobs, batch processing, or internal data processing.
