Serverless Functions — Where to start? Part 1

Prashant M
3 min read · Jun 25, 2021

This article is “Foundational — Part 1” of a series on Serverless Functions.

Serverless Functions — Where to Start?

Microservice or function — which way to go? Certain use cases call for choosing the right implementation strategy during architectural design.

AWS #LambdaFunctions, GCP #CloudFunctions, #AzureFunctions — these are powerful tools that provide solid reasons to consider them over microservices. Some of those reasons are:

  • Low Learning Curve with your preferred language and runtime
  • Efficient processing in terms of time and cost
  • Minimal development and maintenance cost
  • Faster release cycle

Certainly, a managed serverless implementation has its own benefits over microservices that we build from the ground up, along with the ecosystem we must design to maintain them.
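As a taste of how low the learning curve is, here is a minimal sketch of an AWS Lambda-style handler in Python. The event shape and the `name` field are illustrative assumptions, not a fixed contract:

```python
import json

def lambda_handler(event, context):
    # The platform invokes this entry point with the trigger payload
    # (`event`) and runtime metadata (`context`) — no server code needed.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The same few lines of business logic, repackaged per each provider’s conventions, would serve as a GCP or Azure function as well.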

This article gives you first-hand information on:

  1. Comparison between different cloud providers
  2. Use Cases

The upcoming articles will explore in-depth implementation strategies.

Let’s get started…

1. Runtime Languages

The first thing to check is whether the required language is supported.

Language Support — Quick Reference

Custom Handler: If you prefer to create handlers in your own language, one that is not on the supported list, then consider this option.
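To make the custom handler idea concrete: the Functions host forwards the trigger payload to your process over HTTP as JSON, and your handler replies with a JSON body. The sketch below loosely follows Azure’s custom handler payload convention; the exact field names are an assumption here, so check your provider’s documentation:

```python
import json

def handle_invocation(request_body: bytes) -> bytes:
    # The host POSTs trigger data as JSON; this sketch pulls a query
    # parameter out of an HTTP-trigger payload and answers with the
    # {"Outputs": ...} shape a custom handler is expected to return.
    payload = json.loads(request_body or b"{}")
    query = payload.get("Data", {}).get("req", {}).get("Query", {})
    name = query.get("name", "world")
    response = {
        "Outputs": {
            "res": {"StatusCode": 200, "Body": f"Hello, {name}!"}
        }
    }
    return json.dumps(response).encode("utf-8")
```

In a real custom handler this function would sit behind a small HTTP server written in your language of choice, which the Functions host starts and calls on every trigger.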

2. Runtime Environment

Let’s look at the limiting factors of the Function from various aspects:

  • How much memory would the function need to process data?
  • How much CPU would the function need to compute results?
  • How long would the function need to finish processing?

Cloud providers have unique limiting specifications, which need to be considered while deciding whether to create a function rather than a microservice.

The details below are summarised for contextual understanding only. Refer to the respective cloud provider’s documentation for the latest updates. Always consider the future peak processing requirements of the API/service while designing within these constraints.
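Because providers bill compute roughly as memory × duration, a quick back-of-the-envelope check against these limits is easy to script. A minimal sketch — the GB-second here follows AWS Lambda’s published billing unit, and the numbers are illustrative:

```python
def gb_seconds(memory_mb: int, duration_ms: int) -> float:
    # Billable compute = allocated memory (in GB) x execution time (in s).
    return (memory_mb / 1024) * (duration_ms / 1000)

# A 512 MB function running for 2 seconds consumes 1 GB-second of compute.
```

Multiplying this by the expected invocation count gives a first estimate to compare against an always-on microservice.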

AWS Lambda

Memory/CPU allocation: 128 MB to 10 GB (one vCPU is allocated for every 1,769 MB)

Duration: 15 mins (Timeout)

Scalability: It varies per region. AWS also has more comprehensive concurrency management, which can be extended with a reserved pool. Please refer to the reference link below for more details.
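Since CPU scales with memory on Lambda, the effective vCPU share for a given memory setting can be estimated from the 1,769 MB-per-vCPU figure above — a sketch, not an official formula:

```python
def approx_vcpus(memory_mb: int) -> float:
    # Lambda allocates CPU in proportion to memory: one full vCPU at
    # 1,769 MB, scaling linearly across the 128 MB to 10,240 MB range.
    if not 128 <= memory_mb <= 10240:
        raise ValueError("Lambda memory must be between 128 MB and 10,240 MB")
    return memory_mb / 1769
```

This is why raising the memory setting is also the lever for getting more CPU out of a CPU-bound Lambda function.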

Cloud Functions

Memory/CPU allocation: allowed values are 128 MB, 256 MB (default), 512 MB, 1,024 MB, and 2,048 MB — CPU allocation happens dynamically.

Duration: 1 min (default), extendable up to 9 mins.

Scalability: Shorter functions scale more. In Google Cloud, functions are divided into two categories: background and HTTP. Background functions have certain invocation limits; HTTP functions, however, can scale as required with the selected memory allocation, with the corresponding CPU usage charged per compute time. Refer to the documentation for more details.
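For reference, an HTTP function on GCP is just a Python callable that receives a Flask-style request object. The sketch below assumes only that the request exposes a dict-like `args`, which also makes it easy to exercise locally:

```python
def hello_http(request):
    # GCP passes HTTP Cloud Functions a Flask `Request`; any object
    # exposing a dict-like `args` attribute works for local testing.
    name = request.args.get("name", "world")
    return f"Hello, {name}!"
```

Deploying it is a matter of pointing the `gcloud` tooling at this function as the entry point; no web server code is written by hand.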

Azure Functions

Memory/CPU allocation: This depends on two things: the hosting plan and the trigger events. Based on these, instances are added as function hosts, which then allocate memory and CPU. Per-instance limits range from 1 to 14 GB of memory and 1–2 CPU cores.

Duration: It is unbounded, except on the base Consumption plan, which has a maximum timeout of 10 mins.

Scalability: Again, it all depends on the plan: Consumption, Premium, Dedicated, ASE, or Kubernetes. Depending on the plan, cold starts can occur, or a few hosts can be kept running at all times, with scaling driven by the number of trigger events. Scaling is limited by the plan purchased and the number of instances added to the function hosts, which ranges from 10 to 200 at most.

3. Use Cases


Prashant M

Certified Enterprise and Multi-Cloud Solution Architect with expertise in Digital Transformation in Retail Ecommerce, Financial Services, and other domains.