Tuesday, December 17, 2024

What I Learned Trying to Install Kubuntu (alongside Pop!_OS)

First and foremost, once again, this is clearly not a supported configuration.  I'm sure that if I had wiped the drive and started afresh, things would have gone much better.  I just… wanted to push the envelope a bit.

Pop!_OS installs (with encryption) with the physical partition as a LUKS container holding an LVM volume group; the root filesystem is on a logical volume within.  The plan was hatched:

  • Create a logical volume for /home and move those files over to it
  • Create a logical volume for Kubuntu’s root filesystem
  • Install Kubuntu into the new volume, and share /home for easy switching (either direction)
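As a sketch, the LVM half of that plan comes down to a few commands.  The volume-group name data and the sizes here are assumptions (check the real name with sudo vgs first):

```shell
# Assumes the LUKS container is already unlocked and the VG is named "data".
sudo lvcreate -L 100G -n home data        # new logical volume for /home
sudo mkfs.ext4 /dev/data/home
sudo mount /dev/data/home /mnt
sudo rsync -aHAX /home/ /mnt/             # copy files, preserving attributes
# ...then point /home at /dev/data/home in /etc/fstab
sudo lvcreate -L 40G -n kubuntu-root data # root volume for Kubuntu
```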

Things immediately got weird.  The Kubuntu installer (Calamares) knows how to install into a logical volume, but it doesn’t know how to open the LUKS container.  I quit the installer, unlocked the container by hand, and restarted the installer.  This let the installation proceed, up to the point where it failed to install GRUB.
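Unlocking the container by hand is one cryptsetup call.  The partition node here is a placeholder (find the real crypto_LUKS partition with lsblk -f):

```shell
# Open the LUKS container so the LVM volumes inside become visible.
sudo cryptsetup open /dev/nvme0n1p3 cryptdata
sudo vgchange -ay    # activate the volume group it holds
```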

Although that problem can be fixed, the whole installation ended up being irretrievably broken, all because booting Linux is clearly not important enough to get standardized. Oh well!

Sunday, December 8, 2024

Side Note: Firefox’s Primary Password is Local

When signing into Firefox Sync to set up a new computer, the primary password does not come along; it is local to each profile.  I usually forget this, and it takes a couple of runs for me to remember to set it up.

That’s not enough for a post, so here are some additional things about it:

The primary password protects all passwords, but not other data.  If someone can access Firefox data, bookmarks and history are effectively stored in the clear.

The primary password is intended to prevent reading credentials… and the Sync password is one of those credentials.  That’s why a profile with both Sync and a primary password wants that password as soon as Firefox starts; it wants to check for new data.

The same limitation of protections applies to Thunderbird.  If someone has access to the profile, they can read all historic/cached email, but they will not be able to connect and download newly received email without the primary password.

The primary password never times out.  As such, it creates a “before/after first unlock” distinction.  After first unlock, the password is in RAM somewhere, and the Passwords UI asking for it again is merely re-authentication.  Firefox obviously has the password saved already, because it can fill form data.

Some time ago, the hash that turns the primary password into an actual encryption key was strengthened somewhat.  I believe it is now a 10,000-iteration thing, and not just one SHA-1 invocation.  The problem with upgrading it further is that the crypto is always applied; “no password” is effectively a blank password, and the encryption key still needs to be derived from it to access the storage.  Mozilla understandably doesn’t want to introduce a noticeable startup delay for people who did not set a password.

Very recently (2024-10-17), the separate Firefox Sync authentication was upgraded.  Users need to log into Firefox Sync with their password again in order to take advantage of the change.

Sunday, December 1, 2024

Unplugging the Network

I ended up finding a use case for removing the network from something.  It goes like this:

I have a virtual machine (guest) set up with Node.js and npm installed, along with @redocly/cli for generating some documentation from an OpenAPI specification.  This machine has two NICs: one in the default NAT configuration, and one attached to a host-only network with a static IP.  The files I want to build are shared via NFS on the host-only network, and I connect over the host-only network to issue the build command.

Meaning, removing the default NIC (the one configured for NAT) loses no functionality, but it does cut npm off from the internet.  That’s an immediate UX improvement: npm can no longer complain that it is out of date!  Furthermore, if the software I installed happened to be compromised and running a Bitcoin miner, it has been cut off from its C2 server and can’t make anyone money.
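Detaching the NAT NIC is a one-liner on most hypervisors.  Assuming VirtualBox and a hypothetical VM named "docs-builder", it might look like:

```shell
# The VM must be powered off; adapter 1 is the NAT adapter in this setup.
VBoxManage modifyvm "docs-builder" --nic1 none
```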

An interesting side benefit is that it also cuts off everyone’s telemetry, in passing.

I can’t update the OS packages, but I’m not sure that is an actual problem.  If the code installed doesn’t have an exploit payload already, there’s no way to get one later.  The vulnerability remains, but nothing is there to go after it.

Level Up

(Updated 2024-12-19: this section was a P.S. hypothetical on the original post. Later sections are added.)

It is actually possible to deactivate both NICs.  The network was used for only two things: logging in to run commands, and using the NFS share to get the files.

Getting the files is easy: they can be shared using the hypervisor’s shared-folders system.  Logging in to run commands can be done on the hypervisor’s graphical console.  As a bonus, if the machine has a snapshot when starting, it can be shut down by closing the hypervisor’s window and reverting to snapshot.
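Again assuming VirtualBox (and the same hypothetical VM name), the shared folder and the revert-on-shutdown workflow might look like:

```shell
# Share the source tree into the guest via the hypervisor, not NFS.
VBoxManage sharedfolder add "docs-builder" --name docs \
    --hostpath "$HOME/docs" --automount
# After a session, throw away all guest state:
VBoxManage controlvm "docs-builder" poweroff
VBoxManage snapshot "docs-builder" restorecurrent
```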

Now, we really have a network-less (and stateless) appliance.

Reconfigure

Before I made that first snapshot, I configured the console to boot with the Dvorak layout, because the default of Qwerty is pretty much why I use SSH when available for virtual machines.  But then, after a while, I got tired of being told that the list of packages was more than a week old, so I set out to de-configure some other things.

I cleared out things that would just waste energy on a system that would revert to snapshot: services like rsyslog, cron, and logrotate.  Then I trawled through systemctl list-units --all and cleared a number of timers, such as ones associated with “ua”, apt, dpkg, man-db, and update-notifier.  Any work these tasks do will simply be thrown away every time.
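The trawl itself is just systemctl; the unit names below are illustrative and vary by release:

```shell
# See everything that fires on a schedule:
systemctl list-units --all --type=timer
# Stop and disable services whose work is discarded on every revert:
sudo systemctl disable --now rsyslog cron logrotate
# Mask timers so nothing re-enables them:
sudo systemctl mask apt-daily.timer apt-daily-upgrade.timer \
    man-db.timer update-notifier-download.timer
```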

I took the pam_motd modules out of /etc/pam.d/login, too.  If Canonical doesn't want me to clear out the dynamic motd entirely, the next best thing is to completely ignore it.
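I edited the file by hand; commenting the lines out with sed works just as well and keeps a backup:

```shell
# Comment out every pam_motd line in /etc/pam.d/login (backup kept as .orig).
sudo sed -i.orig '/pam_motd/ s/^/# /' /etc/pam.d/login
```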

After a reboot, I went through systemd-analyze critical-chain and its friend, systemd-analyze blame, and turned off more things, like ufw and apport.

With all that out of the way, I rebooted and checked how much memory my actual task consumed; it was apparently a hundred megabytes, so I pared the machine’s memory allocation down from 2,048 MiB to 512 MiB.  The guest runs with neither swap nor earlyoom, so I didn’t want to push it much farther, but 384 MiB is theoretically possible.

NFS

A small note: besides cutting off the Internet as a whole, sharing files from the hypervisor instead of over NFS adds another small bit of security.  The NFS export is a few directories up, and the host uses no_subtree_check to improve performance for the other guest that the mount is actually meant for.

Super theoretically, if the guest turned evil, it could possibly look around the entire host filesystem, or at least the entire export.  When using the hypervisor’s file sharing, only the intended directory is accessible to the guest kernel.
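For illustration, the shape of the export in question; the path and guest network are made up:

```shell
# /etc/exports on the host: the export root sits above the directory this
# guest actually needs, and no_subtree_check lets a client roam the export.
/srv/projects  192.168.56.0/24(rw,no_subtree_check)
```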

Sunday, November 24, 2024

Mac Mini (M4/2024) First Impressions

I bought an M4 Mac Mini (2024) to replace my Ivy Bridge (2012) PC.

It was difficult to choose a configuration, because of the need to see a decade into the future, and the cost of upgrades.  It is hard to believe that an additional half terabyte (internal) would cost more than a whole terabyte external drive (with board, USB electronics/port, case, cable, and retail box.)

It feels pretty fast.  Apps open unexpectedly quickly.  Which is to say, on par with native apps on my 12C/16T Alder Lake work laptop.  Apparently, my expectations have been lowered by heavy use of Flatpaks.

It is quiet.  When I ejected the old USB drive I was using for file transfer, it spun down, and that was the noise I had been hearing all along.  The Mac itself is generally too quiet to hear.

It is efficient.  I have a power strip that detects when the main device is on, and powers an extra set of outlets for other devices.  Even with the strip moved from “PC” to “Netbook,” the Mini does not normally draw enough power to keep the other outlets on.  (I put the power strip on the desk and plugged it into the desk power, then turned off the Mac’s wake-on-sleep feature.  Now I can unplug the whole strip when not in use.)

It has been weird getting used to the Mac keyboard shortcuts again.  For two years, I haven’t needed to think about which computer I’m in front of; Windows and Linux share the basic structure for app shortcuts and cursor movement.  I don’t know how many times I have pressed Ctrl+T in Firefox on the Mac and waited a beat for the tab to open, before pressing Cmd+T instead.

It is extremely weird to me that the PC Home/End keys do nothing by default on the Mac.  It’s not like they do something better, or even different; they just don’t do anything. Why?

I also had to search the web to find out why an NTFS external drive couldn’t put things in the trash after I had copied them onto the Mac.  It seems the whole volume is read-only; macOS doesn’t have built-in support for writing to NTFS.  Meanwhile, I didn’t notice anything in the UI to suggest that the volume is read-only; some operations just don’t work (quietly, in the case of keyboard shortcuts.)

There was one time where I tried to wake the Mac up, and it didn’t want to talk to the keyboard.  I unplugged and replugged the USB connections (both the keyboard from the C-to-A adapter, and the adapter from the Mac) and tried a different keyboard, but to no avail.  I couldn’t find any way to open an on-screen keyboard with the trackpad alone.  I had to hard power off, but it has been fine ever since.

I guess that’s about it!  It doesn’t feel like “coming home” or anything, it just feels like a new computer to be set up.

Sunday, November 17, 2024

Fixing a Random ALB Alarm Failure

tl;dr: if an Auto Scaling Group’s capacity is updated on a schedule, the max instance lifetime is an exact number of days, and instances take a while to reach healthy state after launching… Auto Scaling can terminate running-but-healthy instances before new instances are ready to replace them.

I pushed our max instance lifetime 2 hours further out, so that the max-lifetime terminations happen well after scheduled launches.
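For reference, that change is a single CLI call.  The group name is a placeholder, the 30-day lifetime is assumed for illustration, and --max-instance-lifetime is given in seconds:

```shell
# 30 days + 2 hours = (30*24 + 2) * 3600 = 2,599,200 seconds
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name my-asg \
    --max-instance-lifetime 2599200
```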

Sunday, November 10, 2024

Ubuntu 24.10 First Impressions

I hit the button to upgrade Ubuntu Studio 24.04 to 24.10.  First impressions:

  1. The upgrade process was seriously broken.  Not sure if my fault.
  2. Sticky Keys is still not correct on Wayland.
  3. Orchis has major problems on Ubuntu Studio.

Sunday, October 6, 2024

Pulling at threads: File Capabilities

For unimportant reasons, on my Ubuntu 24.04 installation, I went looking for things that set file capabilities in /usr/bin and /usr/sbin.  There were three:

  • ping: cap_net_raw=ep
  • mtr-packet: cap_net_raw=ep
  • kwin_wayland: cap_sys_resource=ep
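The search itself is one command:

```shell
# Recursively list any file capabilities under both directories.
getcap -r /usr/bin /usr/sbin
```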

The =ep notation means that only the listed capabilities are set to “effective” and “permitted”, but not “inheritable.”  Processes can and do receive the capability, but cannot pass it to child processes.

ping and mtr-packet are “as expected.”  They want to send unusual network packets, so they need that right.  (This is the sort of thing I would also expect to see on nmap, if it were installed.)

kwin_wayland was a bit more surprising to see.  Why does it want that?  Through reading capabilities(7) and running strings /usr/bin/kwin_wayland, my best guess is that kwin needs to raise its RLIMIT_NOFILE (the max number of open files.)
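The limit in question is easy to inspect from any shell; raising the hard value above its current setting is the part that needs cap_sys_resource:

```shell
ulimit -Sn    # soft limit on open file descriptors (a process may raise
              # this up to the hard limit on its own)
ulimit -Hn    # hard limit; raising it further requires CAP_SYS_RESOURCE
```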

There’s a separate kwin_wayland_wrapper file.  A quick check showed that it was not a shell script (a common form of wrapper), but an actual executable.  Could it have had the capability, set the limits, and launched the main process?  For that matter, could this whole startup sequence have been structured through systemd, so that none of kwin’s components needed elevated capabilities?

The latter question is easily answered: no.  This clearly isn’t a system service, and if it were run from the user instance, that never had any elevated privileges.  (The goal, as I understand it, is that a systemd user-session bug doesn’t permit privilege escalation, and “not having those privileges” is the surest way to avoid malicious use of them.)

If kwin actually adjusts the limit dynamically, in response to the actual number of clients seen, then the former answer would also be “no.”  To exercise the capability at any time, kwin itself must retain it.

I haven’t read the code to confirm any of this.  Really, this situation seems like exactly what capabilities are for: allowing limited actions like raising resource limits, without giving away broad access to the entire system.  Even if I were to engineer a less-privileged alternative, it doesn’t seem like it would measurably improve the “security,” especially not for cap_sys_resource.  It was just a fun little thought experiment.