Sunday, December 21, 2025

Adventures with my old iPod Touch

The iPod Touch (4th Gen) in the car could no longer be detected, so we pulled it out of the console to find it in DFU mode.  Yikes.

I had extremely little hope, but I took it and its USB-A to 30-pin dock cable (the only extant cable of this type in my collection) inside, plugged it into a USB-C to USB-A adapter, and plugged it into the Mac.  It… er… worked. Sure, it appeared to open in Finder and not iTunes (rip), but it actually worked (on the second try; the advice for “unknown error 9” is to try again.  What are we doing as a profession.)  I was able to restore iOS 6.1.6 to it, although I did not have the option to keep my data.

But I never moved my music onto the Mac.  I figured that with a hardware device fifteen years old, and back in factory state, surely Linux should be able to sync to it?

The first problem was even getting it connected, because Amarok threw an error from ifuse.  Copying the command out of the error message and running it in a terminal worked totally fine.  (I didn’t think of this at the time, but… Amarok logs in the systemd journal.  Maybe its permissions have been stripped down too far.)

Once that was up and running, I restarted Amarok a couple of times before I found out where it had hidden the iPod: it’s under “Local Collection.”

I then waited a long time for things to sync.  I waited so long that I wandered off and forgot to set “Don’t Sleep”, so the computer suspended.  The iPod made its ancient, discordant glissando when the computer woke up, and then Amarok—and any process trying to stat() the ifuse mount point—froze.  ifuse sat there burning 100% CPU for a couple of minutes, and then I restarted.

(Apparently the sleep interval was fifteen minutes, the longest time that doesn’t make KDE System Settings complain about “using more energy.”  Well… I paid for it, one way or the other.)

I got it going again.  Amarok carefully loaded gigabytes of tracks onto the iPod Touch, then started complaining about checksum errors for the database.  The database is the part that makes the tracks usable; without it, the iPod shows “No content” and a button for the iTunes Store.  That ended up being the final boss that I couldn’t beat.  The tracks are still there, apparently, showing up as “Other” data on the Mac.

Yeah.

I plugged the backup drive into the Mac, imported everything, and exported it to the iPod Touch.  The double copy was orders of magnitude faster than Amarok’s unidirectional efforts.  I should never have been so lazy.

  • Free Software: 0
  • Proprietary OS: 2

I don’t know how we got here.

Sunday, December 14, 2025

Three Zsh Tips

To expand environment variables in the prompt, make sure setopt prompt_subst is in effect.  This is the default in sh/ksh emulation modes, but not in native zsh mode.
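As a minimal sketch (the VIRTUAL_ENV variable here is just an example; substitute whatever variable you actually want displayed):

```shell
# ~/.zshrc — sketch of prompt_subst in action.
setopt prompt_subst

# Single quotes are essential: they defer expansion until the prompt is
# drawn, so the current value of VIRTUAL_ENV is re-read before every prompt.
# With double quotes, the value would be frozen at assignment time.
PS1='${VIRTUAL_ENV:+(${VIRTUAL_ENV:t}) }%~ %# '
```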

To automatically report the time a command took if it consumes more than a certain amount of CPU time, set REPORTTIME to the limit in seconds.  That is, REPORTTIME=1 (I am impatient) will act as if I had originally run the command under time, whenever it consumes more than a second of CPU.

There’s a similar REPORTMEMORY variable to show the same (!) stats for processes that use more than the configured amount of memory during execution.  (Technically, RSS, the Resident Set Size.)  The value is in kilobytes, so REPORTMEMORY=10240 will print time statistics for processes larger than 10 MiB.  Relatedly, one should configure TIMEFMT to include “max RSS %M” in order to actually show the value that made the stats print.

Note that REPORTTIME and REPORTMEMORY do not have to be exported, as they’re only relevant to the executing shell.

# in ~/.zshrc
REPORTTIME=3
REPORTMEMORY=40960
TIMEFMT='%J  %U user %S system %P cpu %*E total; max RSS %M'

Sources: REPORTTIME and REPORTMEMORY are documented in the zshparam man page.  Prompt expansion is described in zshmisc, and the prompt_subst option is in zshoptions.

Sunday, December 7, 2025

Notes on an ECS Deployment

In order to try FrankenPHP and increase service isolation, we decided to split our API service off of our monolithic EC2 instances.  (The instances carry several applications side-by-side with PHP-FPM, and use Apache to route to the applications based on the Host header.  Each app is not supposed to meddle in the neighbor’s affairs, but there’s no technical barrier there.)

I finally got a working deployment, and I learned a lot along the way.  The documentation was a bit scattered, and searching for the error messages nearly useless, so I wanted to pull all of the things that tripped me up together into a single post.  It’s the Swiss Cheese Model, except that everything has to line up for the process to succeed, rather than fail.

  1. Networking problems
  2. ‘Force Redeployment’ is the normal course of operation
  3. The health check is not optional
  4. Logs are obscured by default
  5. The ports have to be correct (Podman vs. build args)
  6. The VPC Endpoint for an API Gateway “Private API” is not optional
  7. There are many moving parts

Let’s take a deeper look.

Sunday, November 30, 2025

Container Friction

I find it inconvenient that a number of container settings are immutable.  Forget a volume?  Forget a port mapping?  Want to bring up a container, make a change, and then mark the root filesystem as read-only?  Start with a read-only root, and then realize it should have been read-write after all?

Too bad.  Go set up all of the other settings again, along with the desired change, and don’t miss anything or get any settings wrong this time.  If it’s “change something before going read-only,” it’s time to create a Containerfile and build a new image, too.  No matter what, don’t forget to delete the original container first, to free its name for reuse.

There’s no way to create a container based on an existing container’s settings: no direct option, and not even an indirect option to export a container’s settings and use them as a template during container creation later.
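The closest thing to a workaround I know of is dumping the old container’s configuration before deleting it, and reading the flags back out by hand.  A hedged sketch, assuming podman and jq are available (the container and image names are hypothetical):

```shell
# Save the old container's full configuration before it gets deleted.
podman inspect myapp > myapp-settings.json

# Pick out the pieces that have to be re-specified on the new command line.
jq '.[0].HostConfig.PortBindings' myapp-settings.json   # port mappings
jq '.[0].Mounts'                  myapp-settings.json   # volumes
jq '.[0].Config.Env'              myapp-settings.json   # environment

# Then delete and recreate with the desired change, e.g. a read-only root.
podman rm myapp
podman run -d --name myapp --read-only -p 8080:80 -v data:/var/lib/app myimage
```

It’s still a manual transcription job, which is exactly the friction being complained about, but at least nothing silently gets lost.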

It’s worse in Podman Desktop, which has no memory.  It doesn’t even have separate “last used directory” memory for building from a Containerfile versus choosing a volume to mount.  Why not make everyone go back and forth through the filesystem every time?

At least in the CLI, the previous commands could be in the shell history.  “Could be,” because I might have been distracted for two months, and the commands got pushed out of the history in the meantime.  (Speaking of, “the container” has its Containerfile baked in, but doesn’t remember where on the filesystem that Containerfile came from.  Not even locally.  Maybe it’s just me, but it’s always a treasure hunt to resume a project.)

I can see the logic of “not allowing settings drift” in production container environments, but it seems like Podman Desktop should be optimized for the developer experience and experimentation instead.

Sunday, November 23, 2025

Firefox’s new Profile Manager: a lightning review

Back in Firefox 138, the new profile UI made it to the stable channel.  If “Profiles” isn’t already showing near the top of the main menu, interested users can go in through about:config and flip the browser.profiles.enabled option to true.

I created a new Shopping profile to check it out.  (Formerly, my shopping has happened in a mix of dedicated containers for commonly-shopped stores, and temporary containers for less-common stores, in my core profile.  However, that profile frequently breaks checkout with its high level of privacy settings and extensions.)

First off, the good: this is far more convenient to access than about:profiles, and much prettier.  What’s more, it adds the profile badge to the Firefox icon in the Dock and Cmd+Tab list (macOS)!  There’s no more guessing about which identical Firefox corresponds to which profile.

Passkeys, since they are stored in the system’s keyring, are available across profiles.  Signing in at the new profile didn’t require any password management.  Finally, as a particularly geeky note, these are just like old Profiles, with independent extensions, themes, settings, bookmarks, and history.  The meaning of the “profile” name hasn’t been changed by this.

That leaves the one thing that could be improved.  This UI is completely separate from the traditional about:profiles.  Existing profiles do not import into the new UI, and profiles created under the new UI aren’t visible in the old one.  If my existing profiles had seamlessly imported, that would have been amazing.

Incidentally, if anyone needs to know, the new profile UI is at the URL about:profilemanager.

On the whole, the new system is a no-brainer.  It’s love at first sight.  I will probably retire Containers from my core profile, only retaining them in Shopping or AWS profiles to keep sites/accounts separate within those domains.  I do know that AWS has multi-session support now, but I’m used to the containers for that.

Sunday, November 16, 2025

HTTPS in a Caddyfile, and also Not Doing That

I ended up putting together a couple of Caddyfiles for the Caddy server.  The project really wants users to choose automatic HTTPS, but that wasn’t a good fit for use with a load balancer and auto-scaling.  Surprisingly, it is willing to contact the production Let’s Encrypt API even without an email address.

Anyway, I have a few pieces here to cover:

  • The easiest way to ignore HTTPS, and how to return a fixed error message on HTTP
  • Mysterious error messages relating to the way Caddy chooses certificates and configuration blocks
  • Using the tls directive to get self-signed certificates for a public name, so Caddy can speak HTTPS to a load balancer that does not validate certificates
  • How to use trusted_proxies—and a bit more trickery—so that, when running HTTP behind a load balancer, PHP knows the client’s connection is HTTPS
  • How to compose multiple URL mappings, and using a front controller for PHP apps
  • One small note on FrankenPHP (added: 2025-11-26)
  • Closing Notes

Aside from the above links, Caddy provides more conceptual documentation at the Automatic HTTPS article.
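To make the tls and trusted_proxies bullets concrete, here is a hedged Caddyfile sketch.  The hostname, subnet, document root, and FastCGI address are all assumptions, not values from my actual deployment:

```Caddyfile
# Global options: trust the load balancer's subnet, so the
# X-Forwarded-For / X-Forwarded-Proto headers it sends are honored.
{
	servers {
		trusted_proxies static 10.0.0.0/8
	}
}

# Serve the public name with a locally-signed ("self-signed") certificate;
# the load balancer is assumed not to validate it.
api.example.com {
	tls internal
	root * /srv/app/public
	php_fastcgi 127.0.0.1:9000
}
```

Note that trusted_proxies lives in the global options block, not in the site block — one of those things that’s easy to get wrong on a first read of the docs.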

My end goal is to get FrankenPHP running, but as it is built on Caddy, I wanted to understand that part before moving ahead.

Sunday, November 9, 2025

The System Is the Authority

This is the first post for Decoded Node that has been written with league/commonmark instead of erusev/parsedown.  This technically aligns the Markdown implementation between this blog and my personal website (where CommonMark was readily integrated with Twig).  Which is fancy talk for:

  • Footnote-style link syntax works on both sites now
  • Smart quotes are automated, instead of using platform-dependent input methods, or UniCycle

UniCycle “works,” but it’s not maintained (AFAICS), and the implementation strikes me as a bit of a hack.  It works fine in insert mode, but r' won’t replace a character with a smart quote.  It’s a shame, because it has such a great name.

(Also along the way, I decided that .md won, so this processor no longer accepts anything like .mkd or .mdown.  Makes things easier than asking, “does this file match /\.ma?r?k?do?w?n$/?”)

Didn’t the Title Mention… Authority?

What strikes me about this is that “the format which I author in” is only incompletely described as “Markdown.”  This particular move was feasible because league/commonmark is effectively a superset of erusev/parsedown.  If I decided I wanted to go back, I would have to edit all the new features out of newer posts.  Actually, if I want to update an old post and I happened to miss a smart quote in it, the non-smart quotes would also be changed by using the new parser.

What actually defines the output is not only the input, but the exact tool used to create it.

(I’m thrashing around in the direction of “the purpose of a system is what it does” and Hyrum’s Law, but neither of those things fit.)

That wouldn’t be much of an observation if I were saying, “An XSLT processor doesn’t turn my Markdown into HTML,” but I didn’t really expect the choice of Markdown tool to matter so much to processing Markdown input.

Maybe it could be phrased as, “Markdown processors are not bug-for-bug compatible.”

Version Note

I use this code to write posts, not to mess around with code, so it’s not always clear whether the Markdown parser is up-to-date.  However, it seems I was using Parsedown 1.7.4 which is, at the time of writing this post, the latest stable release.  1.8.0 is in beta, and 2.0.0 seems to be barely started.