At work, we have built up quite a bit of custom monitoring. It’s all driven by things failing in new and exciting ways.
Before that particular day, there had been other crash-loop events with that code, which mainly manifested as “the site is down” or “the site is being weird,” and which showed up in the metrics we were collecting as a high load average (loadavg). Generally, we’d get onto the machine—slowly, because it was busy looping—see the flood in the logs, and then ask Auto Scaling to terminate-and-replace the instance. It was the fastest way out of the mess.
This eventually led to the suggestion that the instance should monitor its own loadavg, and terminate itself (letting Auto Scaling replace it) if it got too high.
We didn’t end up doing that, though. What if we had legitimate high CPU usage? We’d stop the instance right in the middle of doing useful work.
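The rejected idea would have amounted to something like this—a sketch only, with an invented threshold, since we never actually built it:

```python
import os

# Invented threshold for illustration; we never picked a real number.
LOADAVG_LIMIT = 8.0

def should_self_terminate(one_minute_loadavg=None):
    """Return True if the 1-minute load average exceeds the limit.

    The flaw that killed the idea: a legitimately busy instance looks
    exactly like a crash loop from here.
    """
    if one_minute_loadavg is None:
        one_minute_loadavg = os.getloadavg()[0]
    return one_minute_loadavg > LOADAVG_LIMIT
```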
Instead, during that iteration, we built the exit_manager() function that would bring down the service from the inside (for systemd to replace) if that particular cause happened again.
…
Some other time, I accidentally poisoned php-fpm. The site appeared to run fine with the existing pages. However, requests involving the newly updated extension would somehow both generate a segfault, and tie up the request worker forevermore. FPM responded by starting up more workers, until it hit the limit, and then the entire site was abruptly wedged.
It became a whole Thing because the extension was responsible for the EOM reporting that the big brass were trying to run… after I had left for the evening. The brass tried to message the normally-responsible admin directly, which would have worked, but the admin was strictly unavailable that night, and they didn’t reach out to anyone else. I wouldn’t find out about the chaos in my wake until I read the company chat the next morning.
Consequently, we also have a daemon watching php-fpm for segfaults, so it can run systemctl restart from the outside if too many crashes are happening. It actually does have the capability to terminate the instance, if enough restarts fail to alleviate the problem.
I’m not actually certain whether that daemon is still useful, because we changed our update process: we now deploy new extension binaries by replacing the whole instance.
…
Having a daemon which can terminate the instance opens a new failure mode for PHP: if the new instance is also broken, we might end up rapidly cycling whole instances, rather than processes on a single instance.
Rapidly cycling through main production instances will be noticed and alerted on within 24 hours. It has been a long-standing goal of mine to bring that down to 15 minutes for any scaling group’s instances.
On the other hand, we haven’t had rapidly-cycling instances in a long time, and the cause was almost always crash-looping on startup due to loading unintended code, so expanding and improving the system isn’t much of a business priority.
It doesn’t have to be well-built; it just has to be well-built enough that it never, ever stops the flow of dollars. Apparently.