Putting Rails on AWS Lambda, end to end
Running Rails on Lambda (with no black magic, please)
I have a Rails app I want to run on AWS Lambda. Why? Because traffic is bursty, the app spends most of its day doing nothing, and I'd rather pay $0 for idle time than $20/mo for an EC2 instance that's sitting there twiddling its thumbs. But I also don't want a magic deployment tool that hides what's actually happening — when something breaks at 2am, "magic" becomes "I have no idea where to look."
This is the setup I landed on. Two Lambda functions, one Docker image, one Postgres database that also scales to zero. Here's how the pieces fit and what each one is doing.
The shape of the thing
flowchart LR
    U[User] -->|HTTPS| F[Lambda Function URL]
    F --> R[Lambda: Rails server]
    D[You / CI] -.->|Manual invoke| M[Lambda: DB migrations]
    R --> N[(Neon Postgres<br/>scale-to-zero)]
    M --> N
    E[ECR: one Docker image] -.-> R
    E -.-> M
Two Lambdas, same image, different entrypoints:
- The Rails server lambda handles HTTP traffic via a Function URL.
- The migrations lambda runs rake db:migrate (and optionally db:drop / db:create / db:seed) when I invoke it manually.
Same Docker image because the dependencies are identical — only the entrypoint changes. No reason to maintain two builds.
The pieces
Lamby
Lamby is a small gem that translates Lambda invocation events into Rack requests Rails can handle. Add it to your Gemfile:
gem 'lamby'
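Under the hood, the translation Lamby performs is conceptually simple. Here's a simplified sketch of the event-to-Rack mapping — this is not Lamby's actual code (the real gem also handles base64 bodies, cookies, multi-value headers, and response conversion), just the core idea:

```ruby
require "stringio"

# Sketch: map a Lambda Function URL event (HTTP API v2 payload shape)
# onto the Rack env keys Rails expects.
def event_to_rack_env(event)
  http    = event.dig("requestContext", "http") || {}
  headers = event["headers"] || {}

  env = {
    "REQUEST_METHOD" => http["method"] || "GET",
    "PATH_INFO"      => http["path"] || "/",
    "QUERY_STRING"   => event["rawQueryString"].to_s,
    "SERVER_NAME"    => headers["host"] || "localhost",
    # Rack wants the body as an IO object, not a string
    "rack.input"     => StringIO.new(event["body"].to_s),
  }

  # HTTP headers become HTTP_* env keys, dashes turned into underscores
  headers.each { |k, v| env["HTTP_#{k.upcase.tr('-', '_')}"] = v }
  env
end
```

Lamby does this (and the reverse mapping for the response) so Rails never knows it's running inside a Lambda invocation rather than behind a normal web server.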
The Lambda Ruby runtime
When you ship Lambda as a container image, you need the Lambda runtime interface client (RIC) installed. In your Dockerfile:
RUN gem install aws_lambda_ric
This is the binary Lambda actually invokes. It boots, calls into your Rails app via Lamby's adapter, and waits for the next event.
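Put together, the tail end of the Dockerfile looks roughly like this (base image, Ruby version, and paths are illustrative — the ENTRYPOINT/CMD defaults match the container overrides set later in the Lambda console, so baking them in is optional but convenient):

```dockerfile
FROM ruby:3.2
WORKDIR /app

COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .

# The runtime interface client — the binary Lambda actually invokes
RUN gem install aws_lambda_ric

# Defaults; each Lambda function can override these per-image
ENTRYPOINT ["/usr/local/bundle/bin/aws_lambda_ric"]
CMD ["config/environment.Lamby.cmd"]
```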
A new Rails environment: lambda
Lambda has constraints regular Rails environments don't expect: a mostly read-only filesystem (so no log files, no schema.rb dump after migrations), no persistent memory between invocations (so in-memory session storage is pointless), and a hostname that's whatever ugly URL Lambda gives you (which Rails' host authorization will reject by default).
Create config/environments/lambda.rb:
# Based on production defaults
require Rails.root.join("config/environments/production")

Rails.application.configure do
  # Settings here override config/environments/production.rb

  # Whitelist the Lambda Function URL — Rails has never heard of it
  config.hosts << ENV.fetch("LAMBDA_FUNCTION_URL")

  # Lambda can't write to files; logs go to STDOUT and CloudWatch picks them up
  config.logger = ActiveSupport::Logger.new(STDOUT)

  # Don't dump schema.rb after migrations — read-only filesystem
  config.active_record.dump_schema_after_migration = false

  # Sessions stored in memory disappear when the lambda freezes; cookies persist.
  # Note: key is the cookie's name, not a secret (signing uses SECRET_KEY_BASE).
  config.session_store :cookie_store, key: "_app_session"
end
Each of those settings is fixing a real thing that breaks in Lambda. Drop any of them and you'll find out which.
database.yml
Mirror the new environment in config/database.yml so connections work:
default: &default
  adapter: postgresql
  encoding: unicode
  # For details on connection pooling, see the Rails configuration guide:
  # https://guides.rubyonrails.org/configuring.html#database-pooling
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  host: <%= ENV.fetch("DB_HOST") %>
  username: <%= ENV.fetch("DB_USER") %>
  password: <%= ENV.fetch("DB_PASS") %>

development:
  <<: *default
  database: <%= ENV['DB_NAME'] %>_development

test:
  <<: *default
  database: <%= ENV['DB_NAME'] %>_test

production:
  <<: *default
  database: <%= ENV['DB_NAME'] %>_production

lambda:
  <<: *default
  database: <%= ENV['DB_NAME'] %>_lambda
Migrations: two ways
Option 1 — a dedicated migrations Lambda
Add lambda-entrypoint-migrate.sh to your image and mark it executable with chmod +x:
#!/bin/bash
# Abort immediately (and surface a Lambda error) if any migration step fails
set -e

if [[ "$RESET_DATABASE" == "true" ]]; then
  rake db:drop db:create db:migrate db:seed
else
  rake db:migrate
fi

echo "Migration completed"

# Hand control to the Lamby runtime so the container exits cleanly with a 404
# instead of a non-zero exit code (which Lambda treats as a failure).
exec "/usr/local/bundle/bin/aws_lambda_ric" "config/environment.Lamby.cmd"
Build, push, deploy as a separate Lambda function (more on that below), and invoke it manually whenever you ship a migration.
Option 2 — just connect from your laptop
If your DB is on Neon (or any other database reachable from outside AWS), point a local docker compose at it with the production credentials and run rake db:migrate from your machine. Same trick works for rails console — handy for one-off debugging.
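A sketch of what that compose file might look like — service name and file name are made up, the DB_* values come from your Neon dashboard, and note RAILS_ENV is lambda so you hit the same _lambda database the deployed functions use:

```yaml
# docker-compose.migrate.yml (illustrative)
services:
  migrate:
    build: .
    entrypoint: ["bundle", "exec", "rake", "db:migrate"]
    environment:
      RAILS_ENV: lambda
      DB_HOST: ${DB_HOST}
      DB_USER: ${DB_USER}
      DB_PASS: ${DB_PASS}
      DB_NAME: ${DB_NAME}
      SECRET_KEY_BASE: ${SECRET_KEY_BASE}
      LAMBDA_FUNCTION_URL: localhost  # lambda.rb's ENV.fetch needs *something*
```

Then docker compose -f docker-compose.migrate.yml run --rm migrate runs the migration; swap the entrypoint for rails console to get the debugging console mentioned above.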
I use Option 2 for small stuff and Option 1 for anything CI-driven.
Deploying to AWS
1. ECR
Create a private repository. Build, tag, push your Docker image. No special configuration.
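For reference, the push sequence looks like this — region, repository name, and the account ID are placeholders, and the ECR console's "View push commands" button shows the exact values for your repo:

```shell
# Authenticate Docker against your ECR registry
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

docker build -t rails-on-lambda .
docker tag rails-on-lambda:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/rails-on-lambda:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/rails-on-lambda:latest
```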
2. Lambda for the Rails server
Create a new Lambda from a container image, pointing at the repo you just pushed.
[image: AWS "Create function" page with "Container image" selected and the function name filled in]
[image: ECR image picker dialog with the latest tag selected]
Then expand Container image overrides and set:
ENTRYPOINT: /usr/local/bundle/bin/aws_lambda_ric
CMD: config/environment.Lamby.cmd
[image: "Container image overrides" section with ENTRYPOINT and CMD filled in — heads up, this screenshot also shows the Container image URI which contains your AWS account ID; crop or blur it before publishing]
3. Memory and timeout
1024 MB RAM and a 60-second timeout is a reasonable starting point. Adjust based on actual usage — Lambda gives you CPU proportional to memory, so under-provisioning RAM also slows down boot.
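The same settings can be applied from the CLI (function name is a placeholder):

```shell
aws lambda update-function-configuration \
  --function-name rails-on-lambda \
  --memory-size 1024 \
  --timeout 60
```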
[image: Lambda "Basic settings" page with Memory: 1024 MB and Timeout: 1 min]
4. Function URL
Under Configuration → Function URL → Create function URL:
[image: Lambda Configuration tab with "Function URL" highlighted in the sidebar and the "Create function URL" button visible]
Set Auth type to NONE (Rails handles its own auth) and create it. Lambda gives you back a URL like xxxxxxxx.lambda-url.us-east-1.on.aws.
[image: Function URL config page with Auth type set to NONE — this screenshot also shows the resource ARN, which contains your AWS account ID; crop the ARN out before publishing]
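Once the URL exists, you can smoke-test it immediately — substitute your actual Function URL:

```shell
curl -i https://xxxxxxxx.lambda-url.us-east-1.on.aws/
```

Expect the first request after a deploy to be slow (cold start); don't panic until a warm request is also slow.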
5. Environment variables
Configure everything your app needs — DB credentials, secret key base, third-party API keys. Two are mandatory for the lambda environment to boot:
- RAILS_ENV=lambda
- LAMBDA_FUNCTION_URL — set to whatever Function URL Lambda gave you in the previous step (without the https:// prefix)
[image: Environment variables page showing the configured keys — note: the LAMBDA_FUNCTION_URL value is your actual public endpoint; redact it before publishing]
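If you prefer the CLI, the two mandatory variables can be set like so (function name is a placeholder). Careful: this call replaces the function's entire environment, so include every variable your app needs, not just these two:

```shell
aws lambda update-function-configuration \
  --function-name rails-on-lambda \
  --environment "Variables={RAILS_ENV=lambda,LAMBDA_FUNCTION_URL=xxxxxxxx.lambda-url.us-east-1.on.aws}"
```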
6. Migrations Lambda
Same ECR image, new Lambda function, with one difference: override ENTRYPOINT to /app/lambda-entrypoint-migrate.sh and leave CMD blank — the script will exec into the Lamby runtime when it's done migrating.
[image: Create function page for the migrations Lambda, ENTRYPOINT overridden to the migrate script path — also contains the account ID in the image URI; crop accordingly]
Same RAM, same timeout, same env vars. For the very first deploy, set RESET_DATABASE=true so it creates the schema from scratch — then unset it, otherwise every invocation will nuke your data.
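Invoking it is a one-liner (function name is a placeholder), and the migration output lands in the function's CloudWatch log group:

```shell
aws lambda invoke --function-name rails-on-lambda-migrate /tmp/migrate-out.json

# Follow the migration output live (AWS CLI v2)
aws logs tail /aws/lambda/rails-on-lambda-migrate --follow
```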
The DB: Neon for scale-to-zero
To keep the "everything scales to zero" property end-to-end, I use Neon for Postgres. It pauses the database after some idle time and resumes on the next connection. Combined with Lambda, this means the running cost when nobody's using the app is effectively zero.
The cold-start penalty when both Lambda and Neon are cold is real — the first request after a long idle can take a few seconds. For internal tools and side projects, fine. For anything user-facing in production you'd want to keep at least the DB warm.
When this is a good fit (and when it isn't)
Good fit: internal tools, admin dashboards, low-traffic apps, bursty workloads (webhooks, scheduled fan-outs), anything where you'd rather pay $0 for idle time than $20/mo for a small EC2.
Not a good fit: high-traffic public apps (at scale Lambda costs more than EC2/Fargate, and cold starts hurt UX), long-running requests (Lambda caps at 15 minutes and you're paying every second), apps that need persistent background workers like Sidekiq.
For the use case I built this for — a Rails admin tool used a few hours a day — it's a perfect match. The whole stack costs almost nothing when it's idle, which is most of the time.