Wednesday, May 13, 2026

Kernel Config Must Get Easier

Not to be “old person on main,” but 25 years ago, it was possible to build my own Linux kernel and run it on my hardware.  As usual in development, it would take a few tries to get one that booted cleanly, but the process itself didn’t feel difficult.  It was pretty easy to select IDE, VIA chipset support, and the like, and receive a fairly streamlined kernel.

Fast forward to the 2020s, and the opposite has happened.  I’m reliant on distro kernels because trying to configure and build a modern Linux kernel is a huge, interconnected maze of options, with no visibility into why it doesn’t boot.  Options in section 3 may not be visible if required options in section 5 haven’t been selected yet, which makes it a nonlinear meta-adventure.  I have repeatedly failed to build a kernel for a VirtualBox guest, something where the hardware should be well known in advance.
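For a well-known target like a VirtualBox guest, one workable shortcut is to let the running distro kernel describe what’s needed.  A rough sketch, assuming a booted guest and a kernel source tree (the Kconfig symbols below are real, but verify them against your tree):

```shell
# From the kernel source tree, inside the running VirtualBox guest.
# localmodconfig starts from the running kernel's config and disables
# every module that isn't currently loaded -- a rough "only my hardware" cut.
make localmodconfig

# scripts/config can then prune subsystems you know you'll never use:
scripts/config --disable AF_RXRPC     # RxRPC
scripts/config --disable XFRM_USER    # part of the IPsec stack
make olddefconfig                     # re-resolve dependencies after pruning

make -j"$(nproc)"
sudo make modules_install install
```

This doesn’t solve the maze, but it at least starts from a configuration that is already known to boot on the hardware in question.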

However, if I had been able to build my own kernels successfully, Copy Fail 2 and Dirty Frag wouldn’t have been issues.  IPsec is pretty much dead to me, and I had never heard of RxRPC until now, so these things would have naturally been configured out of my own kernel build.

There’s a distinct conflict here between convenience and security: if we autoload the kitchen sink, then nobody needs to build an enable-feature ipsec command and a GUI for it, and nobody needs to recompile their kernel for it.  On the other hand, if there’s a bug anywhere in the autoload surface, it can be reachable by any user.

And honestly, the whole reason I fell out of building kernels—other than the journeys into FreeBSD and Windows—is that it was terrible to try to keep up with the influx of kernel updates.  Maybe userspace had a compatibility guarantee, but config did not.  I was soon unwilling to spend so much energy (mine and my PC’s) on rebuilding a kernel so frequently.  Most of the commits would be irrelevant, but trying to filter and judge them all was an even worse proposition.

I know my distro doesn’t want to maintain dozens of packages like linux-modules-ipsec for people (a logical equivalent of enable-feature ipsec) but it’s also difficult to rebuild a distro kernel according to my own config.  Leaving aside the problems with choosing/generating a config, the distro kernel has a lot of features, which makes it take a while to compile.  The cycle to even try a new kernel build is much, much longer than it was when I was building my own, and I think it crosses the threshold between “tolerable” and “too much to bear.”

Sunday, May 10, 2026

Blogroll Updates

I updated my blogroll project on GitHub.  The real problem is that my feed reader at work is broken, so I got NetNewsWire at home and imported the feeds from my blogroll project.  Consequently:

mjg59’s URL has been updated to follow the blog’s move.

nullprogram was removed for becoming an AI-only feed.  I was reading it to expand my technical horizons, and it is no longer technical.  Something to ponder: if an LLM is so good that “no code” is practical, why would anyone even need a CMake debugger, instead of asking their LLM to go fix it?

The AWS CodeDeploy Agent was removed from the releases because we no longer depend on it.  The agent was always the last to support a new Ubuntu LTS, often delaying many months when everyone else was ready within weeks.  The whole time, there would be either no communication, or “we’ll get it this month” replies that proved wildly inaccurate.  This dissatisfaction limited CodeDeploy to only one system, which is now mothballed, and if it ever comes back, it will be on ECS/Fargate.

jquery-color was also removed because we no longer depend on it… or if we do, it should not be hard to make a pure CSS replacement.  jquery itself lives on in “old-foo”, which I believe is showing just a couple of admin-only pages under the otherwise-React-based Foo2 intranet site.

The actual problem with the feed reader at work is that Akregator from Flathub has decided that it can’t access its feeds in ~/.var/app/..., which seems to imply some sort of packaging error.  But it’s been going on for enough weeks that I finally took action.  I had chosen Flatpak because the in-repo version regularly deleted its feeds, so going back isn’t an option.  Ubuntu effectively never updates software once the release has been shipped.

Wednesday, May 6, 2026

phpseclib's host key verification includes the algorithm

At work, one of our partners updated their SFTP server to support SHA-2 with the RSA host key exchange.  It started failing to validate in our scheduled job using phpseclib.  A quick test showed that OpenSSH still writes a known_hosts entry for the key as type ssh-rsa, which is the format I used when validating with phpseclib.

The problem is that OpenSSH stores information about the key itself: it is an RSA key no matter which signature algorithm was used during the key exchange, so OpenSSH always records it with the “ssh-rsa” type.  However, phpseclib passes the exact host key algorithm that was negotiated to the verifier code.  My naïve comparison of the stored ssh-rsa Base64 reference string against the rsa-sha2-256 Base64 string from the library began failing, in spite of both containing the same RSA key.
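To see the failure mode concretely, here is a sketch with a placeholder blob (the AAAAexamplekeyblob value is fake, standing in for the real Base64-encoded key):

```shell
# Fake placeholder blob; both lines carry the same "key", and only the
# algorithm name differs -- exactly the situation described above.
stored="ssh-rsa AAAAexamplekeyblob"         # what OpenSSH wrote to known_hosts
offered="rsa-sha2-256 AAAAexamplekeyblob"   # what phpseclib hands the verifier

# Naive whole-string comparison fails:
[ "$stored" = "$offered" ] || echo "mismatch"          # prints "mismatch"

# Comparing only the key blob (field 2) matches:
[ "$(echo "$stored" | awk '{print $2}')" = \
  "$(echo "$offered" | awk '{print $2}')" ] && echo "same key"   # prints "same key"
```

Blob-only comparison ignores the algorithm name entirely, which carries its own downgrade tradeoff.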

For my purposes, I just updated the expected algorithm, so that it will fail if someone manages to downgrade back to SHA-1.  (Of course, it’s also possible that I’m holding phpseclib wrong, and they have better verification baked in somewhere.)

Sunday, May 3, 2026

A Few Notes on Nginx in Debian/Ubuntu

The maintainer of the PPA I had been getting nginx (stable) from deleted the whole PPA, with no announcement on Patreon first.  I only found out when unattended-upgrades started sending email that the PPA’s Release file was missing.  I have canceled my support.  However, I learned some things in the process.

nginx-light Is Obsolete

First, the nginx-light package in Debian 13 (Trixie) and Ubuntu 24.04 LTS is a transitional package.  The modern approach is to install nginx and whatever libnginx-mod-* packages are desired.  If you’re making this change on such a system, nginx is already installed, and should be marked manual before removing nginx-light:

$ sudo apt-mark manual nginx
$ sudo apt remove nginx-light

I tried using the nginx apt repository (as published in extrepo because nobody should have to run gpg ever again) but it is packaged against Debian 13, and thus doesn’t work with Ubuntu 24.04’s older OpenSSL.

Getting HTTP/2 Back

I switched back to the version included in the Ubuntu 24.04 repositories, but this downgraded from 1.28 to 1.24, which is before the introduction of a separate http2 on; configuration directive in 1.25.1.  I initially turned it off and proceeded, which meant my server didn’t provide HTTP/2 for a bit.

The solution is that older nginx takes an http2 parameter on the listen directive:

listen 443 ssl http2 default_server;

This “http2” should appear in all “listen 443” directives; certbot renew will leave it alone if it happens to be present.

Getting Brotli Compression Back

The other issue I had was that the servers were configured to support brotli compression.  I got into a state where the Ubuntu nginx couldn’t finish installing itself, because it didn’t recognize the brotli configuration.  Meanwhile, its failure stopped the process of setting up other packages, including the one that would let nginx support brotli, libnginx-mod-http-brotli-filter.

Breaking that logjam required commenting out the brotli configurations, then finishing the package setup, before re-enabling brotli.
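The comment-out step can be scripted with sed; this demo runs on a scratch copy (in practice, substitute whatever path under /etc/nginx/ actually holds your brotli directives):

```shell
# Demo on a scratch file; substitute your real config path under /etc/nginx/.
conf=$(mktemp)
printf 'brotli on;\nbrotli_comp_level 6;\n' > "$conf"

# Comment out every brotli directive so nginx's config parses again:
sed -i.bak -E 's/^[[:space:]]*brotli/#&/' "$conf"

cat "$conf"
# #brotli on;
# #brotli_comp_level 6;
```

The .bak copy makes it trivial to restore the directives once libnginx-mod-http-brotli-filter has finally been configured.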

$ sudo dpkg --configure -a

This restarts the setup process, and the updated configuration lets it complete.

Looking Forward

I left myself good comments in the nginx configuration, because in a few months, Ubuntu 26.04.1 will be released, and my VPS will be eligible for upgrade.  At that point, I’ll want to know all this again.

Sunday, April 26, 2026

Fixing Weblish/SSH Lish showing nothing

I had a fire drill: my SSH host key certificate expired, predictably enough, and I wanted to see if I could get in without simply answering yes at the unknown-host prompt.  The answer was no, but now it’s yes.  What changed?

# systemctl enable --now getty@ttyS0.service

Weblish, and the Lish SSH gateways, use the system’s serial console to provide their service.  If nothing is ‘listening’ on the console, then having access to the console is meaningless.  All I had to do was actually turn on the getty process for that serial console.  Everything worked for me out of the box, without needing me to specify baud/parity/stop bits anywhere.

The ‘problem’ with using Glish instead was that it doesn’t paste; it just prints ^[. on the console when trying to paste.  There is no way that I’m hand-copying 500 bytes of Base64 text into the system, except in a true and dire emergency.  Hence, I accepted the host key, updated the system, and deleted the host keys later.

Reminder: test the serial console/recovery path, before it is needed.

An additional cautionary tale: our EC2 instances at work have serial consoles running, but we don’t have user passwords configured, so they still cannot be logged into.  Fortunately, that problem was curable via reboot, and I didn’t have to restore an EBS backup.

Sunday, April 19, 2026

A Few More Words about LLMs for Coding

One of the clear and present dangers to LLM coding is to produce a codebase which can only be operated on by LLMs. Shall we be so eager to have the corporations make us an offer we can’t refuse?

Sunday, April 12, 2026

Server Management: We Should Be Able to Recreate Pets, Too

For anyone not familiar with the analogy, there are two types of server or private instance: “cattle” exist en masse and have names like atl-prod3-24.  If one has problems, the first solution is replacement.  “Pets,” on the other hand, are unique instances that matter a lot, often with whimsical names.  Problems with pets are approached by working to cure what ails them.

Once upon a time, we had a single pet server which was our bastion host, cron job runner, and SFTP file-exchange point with third parties.  We split those roles apart; the cron jobs get access to the SFTP files (for input or output) by sharing an EFS mount.  The SFTP machine everyone uses can be firewalled off from the rest of our codebase and AWS resources.  (Not that I expected 0days, but there was a notable close call a couple of years after this split was made.)

After a couple of Debian upgrades, the SFTP host had some network issues.  Investigation found a post blaming “the image builder” and telling the person asking (the same question as mine, unfortunately) that, because of that, they were on their own.

It seems likely, then, that we’ll want to rebuild this pet from scratch at some point.  I fixed everything this time, but who knows what might happen on the next upgrade?  It seems like it would be far more stable if we could re-customize a clean base image, instead of doing a series of in-place upgrades.  This is especially true if each Debian upgrade isn’t prepared for anything that is specific to the Debian AWS cloud images.

A long aside about the networking problems I faced follows.