Serverless Limitations with Lambda

Lambda is the AWS service that powers their serverless offering. Essentially, you provide a bit of code to run (a function!), and Lambda will run it.

Serverless is not serverless, of course. In reality, you're running code on someone else's server. The tooling and the developer experience make it feel like there's no server to worry about.

While we can thankfully avoid server management, there's still infrastructure running that code somewhere. That means there are limitations and caveats to know about!

Some limitations are imposed on us by AWS to ensure system stability. Other limitations are based on the nature of serverless and the technology running it.

If you're interested, the system running Lambda is called Firecracker. It's open source!

Let's dive into what limitations you may hit in Amazon's Lambda. We'll discuss some limitations placed on us by AWS, along with those common to serverless in general.

Payload Size

Lambda payloads (whether invoked by HTTP request or other means) have a limit of 6 MB. When running Lambda behind API Gateway or a load balancer, this means you need to make sure your HTTP requests stay under that limit.

For the most part, this means not managing file uploads in the application directly. Instead, you can upload files directly to an S3 bucket (and then send a request to your application that has the location of the uploaded file).
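
Here's a minimal sketch of that approach in Python with boto3, assuming a hypothetical bucket and key name. The Lambda function only hands back a short-lived presigned URL; the client then uploads the (potentially large) file straight to S3, never touching the 6 MB payload limit:

```python
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Generate a short-lived URL the browser can PUT the file to directly.
    # Bucket and key names here are placeholders.
    url = s3.generate_presigned_url(
        ClientMethod="put_object",
        Params={"Bucket": "my-upload-bucket", "Key": "uploads/report.pdf"},
        ExpiresIn=300,  # URL is valid for 5 minutes
    )
    return {"statusCode": 200, "body": url}
```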

If you're a Laravel Vapor user, the laravel-vapor npm package is available to help upload files to S3 directly.

Since many of us use Lambda in an HTTP context, it's important to know the limits of the two systems we can use to send HTTP requests to Lambda:

  1. API Gateway - API Gateway has an HTTP request limit of 10 MB (higher than Lambda's 6 MB)
  2. Application Load Balancers - These don't have a published HTTP request size limit, but they only allow request bodies of up to 1 MB when used to invoke Lambda functions (and only accept a 1 MB response from Lambda!)

Using an Application Load Balancer can save you money when invoking Lambda functions at scale. API Gateway becomes costly at scale, eventually exceeding the cost of using an ALB instead.

It's common to start using Lambda with API Gateway, and then do a cost-benefit analysis after seeing your traffic patterns. You can then better calculate which service will save you money.

Pricing for API Gateway here. Pricing for ALB here.

Time

Lambda invocations are currently restricted to 15 minutes of runtime per function call. This makes them suitable for quick tasks (they're billed per millisecond!).

Longer tasks can be broken up into multiple Lambdas, but a cost/benefit analysis should certainly be made for your situation. You may just want to spin up EC2 instances or run containers in ECS (perhaps using Fargate, which is the serverless flavor of ECS).
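
As a rough sketch of the "break it up" idea in Python (the process() helper and the event shape are hypothetical), the handler can watch the remaining time on the context object and return whatever it didn't finish so the caller can re-invoke it with the remainder:

```python
def process(item):
    # Placeholder for the real per-item work
    pass

def handler(event, context):
    items = event.get("items", [])
    processed = 0

    for item in items:
        # Leave a 30-second buffer before the 15-minute cap is reached.
        if context.get_remaining_time_in_millis() < 30_000:
            break
        process(item)
        processed += 1

    # The caller (Step Functions, another Lambda, etc.) can invoke the
    # function again with whatever is left over.
    return {"processed": processed, "remaining": items[processed:]}
```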

Laravel Vapor users run queue jobs using SQS, which invokes Lambda functions. That means that with Vapor, your queue jobs are limited to 15 minutes!

Read-Only

Lambda file systems are read-only, with the exception of the /tmp directory. That directory has a storage limit of 512 MB.

The file systems are ephemeral - you'll lose any filesystem-based changes (new files, changed files, etc.). You absolutely cannot expect files to exist between invocations of Lambda functions.

This means that any persistence needs to be handled outside of Lambda. Files can be uploaded to, or downloaded from, S3 as needed.

A file managed within a Lambda function can be stored in-memory for a program to work on, or saved to the /tmp directory. RAM limits and the /tmp directory size limit are the limiting factors in those scenarios.
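
For example, here's a small sketch (Python, boto3, with a placeholder bucket name) that writes a file to /tmp and persists it to S3 before the invocation ends:

```python
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # /tmp is the only writable path, it's limited in size, and it's wiped
    # whenever the execution environment is recycled.
    local_path = "/tmp/output.csv"
    with open(local_path, "w") as f:
        f.write("id,total\n1,42\n")

    # Persist the file outside of Lambda before the invocation ends.
    s3.upload_file(local_path, "my-results-bucket", "reports/output.csv")
    return {"statusCode": 200}
```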

You determine how much RAM is available to a Lambda function ahead of time (CPU is allocated in proportion to the memory setting). This is part of the pricing model for Lambda.
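
If you're curious what that looks like outside of the console, here's a sketch of adjusting memory and timeout with boto3 (the function name and values are examples):

```python
import boto3

client = boto3.client("lambda")

# More memory also means a bigger CPU share - and a higher per-millisecond price.
client.update_function_configuration(
    FunctionName="my-function",
    MemorySize=1024,  # MB
    Timeout=120,      # seconds, capped at 900 (15 minutes)
)
```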

Databases, caches, etc., will all need to be hosted elsewhere. Using a managed service such as DynamoDB, RDS, or ElastiCache is the most common way to accomplish this.

If you need to store files that never change in your Lambda function, you can use Layers to add that data into the Lambda function.

You can also attach an EFS file system to your Lambda function, allowing for persistent file storage.

Some options for persistence in Lambda functions are discussed here.

Concurrency

You may have heard about surprise AWS bills resulting from Lambda functions run amok. This is also a common way to kill your database - it's fairly easy for Lambda to overwhelm a database with too many connections.

It's possible to limit concurrency for your Lambda functions. This helps reduce that type of issue (in addition to being careful about how and when you make connections to a database).
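
Here's a sketch of capping a function's concurrency with boto3 (the function name and the cap are examples) so a traffic spike can't exhaust your database's connection pool:

```python
import boto3

client = boto3.client("lambda")

# Reserve (and cap) 25 concurrent executions for this function.
client.put_function_concurrency(
    FunctionName="my-function",
    ReservedConcurrentExecutions=25,
)
```

Keep in mind that reserved concurrency is carved out of the account's regional concurrency limit.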

However, for those running Lambda at scale, concurrency limits are potentially troublesome.

First and foremost, new AWS accounts have "lower" concurrency limits (typically 1,000 concurrent executions). You can request a higher limit by opening a support request.

However, there is also the notion of Burst Concurrency, which has a cap of 3,000 concurrent executions. This limit cannot be raised.

Concurrency limits are not per function - they apply across all functions in a region, per account.

Read the specifics on how Burst Concurrency works carefully to see if it may affect you.

Depending on your workload, this could become an issue to work around. How you work around that depends on what's causing the traffic spikes - there's no one answer. Perhaps specific endpoints are handled by EC2 instead of Lambda, or perhaps a CDN could be employed, or perhaps you can spread Lambda usage across multiple regions/AWS accounts.

Deployment Package Size

Building Lambda functions involves packaging your code, either as a .zip file or a container image.

If you're uploading a .zip file as your Lambda function, there's a limit of 50 MB for the zipped upload and 250 MB for the unzipped files (sorry, node_modules!).

Functions packaged as a container image have a much larger limit - 10 GB!

Layers may be a way around these limits in certain situations as well.

Container images are the way to build your Lambda functions if you're in danger of hitting size limits.

Cold Starts

Lambda functions have the concept of a Cold Start. What cold starts are (and how they affect your Lambda functions) is described here.

Essentially, AWS needs to download the Lambda function and start its execution environment. This takes time! That time isn't billed, but it certainly adds to the response time.

After a function is "warmed", its execution environment stays alive for a varying amount of time, ready to handle additional requests.

The potential delay due to cold start times is felt the most for HTTP requests, where a user might be waiting on a response in real time.

There are two ways to reduce the impact of cold starts:

  1. Warming Requests - EventBridge (formerly "CloudWatch Events") can be configured to send a request to your Lambda function once per minute to keep it warm. This isn't perfect - it increases the likelihood that an HTTP request won't hit a cold start rather than guaranteeing it (there's a sketch of handling a warming ping after this list).
  2. Provisioned Concurrency - Pay to have Lambda functions warmed and ready to accept new requests (more on that here)
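
Here's a sketch of the warming-request side of option 1 in Python. EventBridge scheduled events arrive with "source": "aws.events", so the handler can short-circuit those pings; the rest of the handler is hypothetical:

```python
def handler(event, context):
    # A scheduled EventBridge ping just keeps the environment warm -
    # return immediately so it doesn't run the real request logic.
    if event.get("source") == "aws.events":
        return {"warmed": True}

    # ...normal request handling would go here...
    return {"statusCode": 200, "body": "Hello from a warm function"}
```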

Connecting to Other Services

If your Lambda function needs to speak to other services, it may need to communicate over a VPC (a private network).

By default, Lambda functions don't run within a VPC. This means that connections to services you might normally reach over a private network - databases and caches inside a VPC, for example - must be made over a public network instead.

To communicate within a private network, you can run Lambda functions within a VPC. This incurs some extra warm-up time caused by attaching an ENI (Elastic Network Interface).

Unlike RDS, ElastiCache cannot be accessed publicly, and so you need to run Lambda within a VPC to use it.

Some AWS services run purely over their HTTP API. These are nicer to work with in Lambda, as you won't hit connection limits. The most common service to use with Lambda in this fashion is DynamoDB, a NoSQL database that can scale massively.
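
A minimal sketch of talking to DynamoDB from a Python Lambda function (the table name and item shape are examples) - every call is an HTTPS request through the SDK, so there's no connection pool to manage or exhaust:

```python
import boto3

# Created outside the handler so the resource is reused across warm invocations.
table = boto3.resource("dynamodb").Table("orders")

def handler(event, context):
    # put_item is a single signed HTTPS request - no persistent connection needed.
    table.put_item(Item={"pk": event["order_id"], "status": "received"})
    return {"statusCode": 200}
```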

Laravel Vapor: Static Assets

Laravel Vapor takes your static assets in your public directory and uploads them to an S3 bucket, which "sits behind" a CloudFront distribution.

A quirk of Vapor is that it needs to track a list of these assets. Vapor limits the public files to 500 assets per environment.

While 500 files in your public directory is rather high, it's definitely a limit I've seen people hit - keep an eye out for a growing public directory!
