Adam McCrea

Indie founder & developer at Judoscale (formerly Rails Autoscale).

2025-07-08

Visit our booth at #railsconf to get your kazoo! Watching us make fools of ourselves is just a bonus.

2025-07-07

Who’s ready to party? #railsconf #kazoos

2025-07-04

If it’s not at least a little bit cringe, we didn’t go far enough. #railsconf #kazoos #marketing

2025-07-03

Marketing technique #77: Humiliate yourself for attention. #railsconf #kazoos

2025-07-02

Y’all this is only the beginning. Buckle up! #railsconf #kazoos

2025-07-02

@soulcutter Autoscaling is not a silver bullet, for sure. There are good ways to autoscale and less good ways, so I'm not surprised you've gotten burned.

2025-07-01

That’s where proactive autoscaling comes in. By targeting, say, 60% utilization, you’re building in 40% headroom for unexpected spikes. You’re trading efficiency (and some cost) for reliability.
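The gist in code (a sketch of the generic target-utilization rule, not Judoscale's actual algorithm):

```python
import math

def desired_replicas(current: int, utilization: float, target: float = 0.60) -> int:
    """Classic target-utilization scaling rule (same idea as Kubernetes' HPA).
    Targeting 60% keeps ~40% headroom on every machine."""
    return max(1, math.ceil(current * utilization / target))

# 5 machines running at 90% utilization, targeting 60%:
print(desired_replicas(5, 0.90))  # -> 8: scale up *before* queues form
```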

🧠 More on the differences and trade-offs: judoscale.com/blog/introducing

2025-07-01

Most autoscalers are reactive—they respond to traffic after it spikes. That’s usually fine… until it’s not.

If your app gets hit with big surges of traffic all at once, even the fastest queue-time autoscaler might not spin up capacity in time to avoid slowdowns or timeouts.
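"Queue time" here = how long a request waits between the router and your app. A minimal way to measure it (assuming a router that stamps X-Request-Start as epoch milliseconds, like Heroku's does):

```python
import time

def queue_time_ms(x_request_start: str) -> float:
    """Milliseconds this request sat waiting before the app picked it up.
    Assumes X-Request-Start is epoch millis (true on Heroku; other
    routers use different formats, so check yours)."""
    return time.time() * 1000 - float(x_request_start)
```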

2025-07-01

One week until #railsconf! The whole Judoscale team will be there with... kazoos?

2025-06-12

@tekin @wj Yep, we'll autoscale your Sidekiq ECS cluster based on queue latency. LMK if you have questions!

2025-04-29

Python task queue pro-tip: Autoscale based on queue latency, not CPU!

Your task queue can back up without CPU spikes, leaving you in the dark.

Check out Jeff's full Celery & RQ comparison here:
judoscale.com/blog/choose-pyth
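Queue latency is just the age of the oldest waiting job. A sketch for RQ (hypothetical helper, run against your own Redis):

```python
from datetime import datetime, timezone
from redis import Redis
from rq import Queue

def rq_queue_latency(queue_name: str = "default") -> float:
    """Seconds the oldest waiting job has been sitting in the queue."""
    q = Queue(queue_name, connection=Redis())
    jobs = q.get_jobs(offset=0, length=1)  # FIFO, so job 0 has waited longest
    if not jobs:
        return 0.0
    enqueued = jobs[0].enqueued_at
    if enqueued.tzinfo is None:  # older RQ versions store naive UTC timestamps
        enqueued = enqueued.replace(tzinfo=timezone.utc)
    return (datetime.now(timezone.utc) - enqueued).total_seconds()
```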

2025-04-29

Choosing a Python task queue? Jeff Morhous compared Celery vs RQ:

🧠 Celery: Feature-rich but complex

🚀 RQ: Simple & easy to deploy

Jeff's advice: Most apps do fine with RQ until they need more horsepower. Then consider making the switch to Celery.
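The "simple" part in practice: a complete RQ setup is a plain function plus an enqueue call (a sketch; send_welcome_email is a stand-in for your own task):

```python
# tasks.py -- any plain Python function can be a job
def send_welcome_email(user_id: int) -> None:
    print(f"emailing user {user_id}")

# producer -- enqueue from your web app
from redis import Redis
from rq import Queue
from tasks import send_welcome_email

q = Queue(connection=Redis())
q.enqueue(send_welcome_email, 42)

# consumer -- run `rq worker` in a shell and you're done
```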

2025-04-28

🔌 Big news! Custom platform integrations coming soon to Judoscale! Create your own adapter, keep your infrastructure, get all the autoscaling magic.

Message me if you want early access.

What platform would you connect with?

2025-04-22

This behavior is confusing at first, but it's actually super cool once you wrangle it.

And with autoscaling in place, we don't really need to worry about it. New machines are created with fresh burst balances when needed, spreading the load and allowing balances to rebuild.

2025-04-22

But why do they perform so well at first, only to fall apart after a bit?

It's the bursting!

Shared machines have a "burst balance" that lets them temporarily use 100% of a CPU. Once the balance is depleted, CPU is throttled to 1/16.
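Back-of-the-napkin math (a sketch of the documented model, not Fly's exact accounting):

```python
BASELINE = 1 / 16  # a shared vCPU's guaranteed slice, ~6.25%

def balance_delta(cpu_usage: float, seconds: float) -> float:
    """Burst balance drains when usage exceeds the baseline
    and rebuilds when it drops below."""
    return (BASELINE - cpu_usage) * seconds

print(balance_delta(1.0, 60))  # -> -56.25: a minute at 100% CPU burns balance fast
print(balance_delta(0.0, 60))  # -> +3.75: an idle minute rebuilds only a trickle
```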

2025-04-22

I finally answered my questions about shared CPUs on Fly.io:

- How are they so cheap?
- Why do I need so many of them?
- Why does perf tank after 10 minutes?

Turns out it's well-documented: "shared" machines only get 1/16 of each CPU!

fly.io/docs/machines/cpu-perfo

2025-04-21

⚠️ It's a mistake to ignore these configs. The defaults are NOT what you want.

EXAMPLE: A 4-process, single-threaded web server should use a hard limit of 4 since that's the max concurrent requests. A soft limit of 1-2 would help route requests to less busy machines.
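That example as a fly.toml snippet (same numbers as above; tune soft_limit for your app):

```toml
[http_service.concurrency]
  type = "requests"   # count in-flight requests, not connections (NOT the default)
  soft_limit = 2      # Fly's proxy starts preferring other machines here
  hard_limit = 4      # the most concurrent requests 4 single-threaded processes can take
```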

2025-04-21

This is the exact config I landed on last week, and it's been a night-and-day difference with our 800 RPS app running on Fly. 🚀

Learn more: fly.io/docs/reference/configur

2025-04-21

Last week I dug into HTTP routing behavior on Fly.io, and it's so cool!

Unlike random routing on platforms like Heroku, Fly can intelligently route requests to machines based on load. Here are the configs you need to know...

2025-04-21

🧸 soft_limit: Traffic to a given machine is deprioritized when the soft limit is met.
🪨 hard_limit: Traffic to a given machine is STOPPED when the hard limit is met.
🔧 concurrency type: How concurrency is measured. Should be "requests" for web servers. NOT THE DEFAULT!
