I wrote a while ago about DynamoDB in the trenches, but that was in 2012 and now 2015 is nigh.
The real pain point of DynamoDB is that everything is a hash key. It's not sorted, at all. Ever.
A local secondary index isn't the answer; it's a generalization of range keys, and as such, is still subordinate to the hash key of the table. An LSI does not cover items with different hash keys.
A global secondary index isn't helpful in this regard, either. It is essentially building and maintaining a second table for you, where its hash key is the attribute the index is on, and then it stores the key of the target element (and optionally, any other attributes from that element) from the 'parent' table. But as a hash key, it doesn't support ordering... The only thing that can be done with the index is a Scan operation.
DynamoDB still offers nothing else.
Some ideas that would seem, on their surface, to be perfect for DynamoDB actually don't work out well, because there's no efficient query for ordered metadata. Most frequently, this bites me for data that expires, like sessions. Sessions! The use case that's so blindingly obvious, the PHP SDK includes a DynamoDB session handler.
Update (2023-01-28): DynamoDB has support for an 'expiration time' attribute these days. We are using it for sessions now. Due to encoding issues, our sessions are stored as binary instead of string type, but otherwise, it's compatible with the PHP SDK's data format. Our programming languages actually use memcached but the data is stored in DynamoDB. End of update.
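As an aside for anyone flipping that switch themselves: enabling TTL is a one-time call against the table. This is a hedged sketch using the current PHP SDK; the table name and the 'expires' attribute are placeholders, not necessarily what we use.
<?php
require 'vendor/autoload.php';
use Aws\DynamoDb\DynamoDbClient;
$client = new DynamoDbClient(['region' => 'us-east-1', 'version' => 'latest']);
// Point DynamoDB at the numeric attribute that holds the expiration time
// (seconds since the epoch); items past that time become eligible for deletion.
$client->updateTimeToLive([
    'TableName' => 'sessions',
    'TimeToLiveSpecification' => [
        'Enabled' => true,
        'AttributeName' => 'expires',
    ],
]);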
We store SES message data in there to correlate bounces with the responsible party. We have a nightly job that expires the old junk, and just live with it sucking up a ton of read capacity when it has to do that. On the bright side, unlike sessions, the website won't grind to a halt (or log people out erroneously) if the table gets wedged.
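Without an ordered view of the expiration attribute, a job like that can't be much smarter than a filtered Scan, and the filter only applies after the items have been read, which is exactly why it eats read capacity. Roughly this shape, with invented table, key, and attribute names rather than our real schema:
<?php
require 'vendor/autoload.php';
use Aws\DynamoDb\DynamoDbClient;
$client = new DynamoDbClient(['region' => 'us-east-1', 'version' => 'latest']);
// Scan pages through the whole table; expired items are filtered out
// only after they have been read (and paid for).
$pages = $client->getPaginator('Scan', [
    'TableName'                 => 'ses-messages',
    'ProjectionExpression'      => 'id',
    'FilterExpression'          => '#exp < :now',
    'ExpressionAttributeNames'  => ['#exp' => 'expires'],
    'ExpressionAttributeValues' => [':now' => ['N' => (string) time()]],
]);
foreach ($pages as $page) {
    foreach ($page['Items'] as $item) {
        // A real job would batch these; one at a time keeps the sketch short.
        $client->deleteItem([
            'TableName' => 'ses-messages',
            'Key'       => ['id' => $item['id']],
        ]);
    }
}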
I'd like to put some rarely-changing configuration data in there (it's in SimpleDB and cached in memcache because SimpleDB had erratic query performance), but I wouldn't be able to efficiently search over it when I wanted to look up an entry for editing in the admin interface. And really, I still want to put that in LDAP or something.
Honestly, if all I get with DynamoDB is a blob of stuff and a hash key, why not just store it as a file in S3? How is DynamoDB even a database if it doesn't actually do anything with the data, and doesn't let you have any metadata? Am I the crazy one, or is that the NoSQL crowd?
Related: Tim Gross on Falling In And Out Of Love with DynamoDB.
Friday, December 12, 2014
Wednesday, December 3, 2014
Common Sense vs. Fatal Deprecations
In Perl, use common::sense 3.73 or newer. Older versions, including Ubuntu Trusty’s libcommon-sense-perl package at 3.72, enable fatal deprecations and will unexpectedly kill your code if you transitively depend on a module using common::sense that throws a deprecation warning.
Say, one whose maintainer received a patch and is apparently letting it rot in their issue tracker forever.
Thursday, November 27, 2014
Versioning
Dan Tao asks, “In all seriousness, what’s the point of the MINOR version in semver? If the goal is dependency stability it should just be MAJOR and PATCH.”
I think I finally remember the answer. It’s set up to flag no compatibility, source compatibility, and binary compatibility. The “thing” that dictates bumping the minor shouldn’t be “features” so much as binary-incompatible changes. For example, GTK 2: if code was written against 2.4.0 using the shiny new file chooser, it would still compile against 2.12. But, once it had been compiled against 2.12, the resulting binary wouldn’t run if 2.4 was in the library path. A binary compiled against 2.4.2 would run on 2.4.0, though, because every 2.4.x release was strictly ABI compatible.
IIRC, they had a policy of forward compatibility for their ABI, so that a binary compiled against 2.4 would run on 2.12, but I don’t remember if that’s actually necessary for SemVer. Another way to look at this is, “If we bump the minor, you’ll need to recompile your own software.” Where that could be programs using a library, or plugins for a program.
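To make that concrete, the rule those GTK releases were following fits in a few lines; a sketch, with a function name I made up for illustration:
<?php
// Can a binary built against $built run with $installed in the library path?
// Same major required; the installed minor must be at least the built minor.
function binaryCompatible($built, $installed) {
    list($bMajor, $bMinor) = array_map('intval', explode('.', $built));
    list($iMajor, $iMinor) = array_map('intval', explode('.', $installed));
    return $bMajor === $iMajor && $iMinor >= $bMinor;
}
var_dump(binaryCompatible('2.4.2', '2.4.0'));  // true: every 2.4.x is ABI compatible
var_dump(binaryCompatible('2.4.0', '2.12.0')); // true: forward compatible
var_dump(binaryCompatible('2.12.0', '2.4.0')); // false: needs a 2.12 runtime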
I believe that’s the motivation for SemVer to include minor, but upon reflection, it doesn’t really make sense in a non-binary world. If there is no binary with baked-in assumptions about its environment, then that layer isn’t meaningful.
Also upon reflection, most of my currently-tracked projects don’t use SemVer. The AWS SDK for PHP is (in 1.x/2.x) organized as “paradigm.major.minor”, where 2.7 indicates a breaking change vs. 2.6 (DynamoDB got a new data model) but e.g. 2.6.2 added support for loading credentials from ~/.aws/credentials. PHP itself has done things like add charset to the PDO MySQL DSN in 5.3.6. When PHP added the DateTime class, it wasn’t a compatible change, but it didn’t kick the version to 6.0.0. (They were going to add it as Date, but many, many people had classes named that in the wild. They changed to DateTime so there would be less, but not zero, breakage.)
So I’ve actually come to like the AWS 2.x line, where the paradigm represents a major update to fundamental dependencies (like moving from straight cURL and calling new on global classes to Guzzle 3, namespaces, and factory methods) and the major/minor conveys actual, useful levels of information. It makes me a bit disappointed to know they’re switching to SemVer for 3.x, now that I’ve come to understand their existing versioning scheme. If they follow on exactly as before, we’ll have SDK 4 before we know it, and the patch level is probably going to be useless.
I think for systems level code, SemVer is a useful goal to strive for. But the meta point is that a project’s version should always be useful; if minor doesn’t make sense in a language where the engine parses all the source every time it starts, then maybe that level should be dropped.
At the same time, the people that SemVer might be most helpful for don’t really use it. It doesn’t matter that libcool 1.3.18 is binary compatible with libcool 1.3.12 that shipped with the distro, because the average distro (as defined by popular usage) won’t ship the newer libcool; they’ll backport security patches that affect their active platform/configuration. Even if that means they have effectively published 1.3.18, it’ll still be named something like 1.3.12-4ubuntu3.3 in the package manager. Even a high-impact bug fix like “makes pressure sensitivity work again in all KDE/Qt apps” won’t get backported.
Distros don’t roll new updates or releases based on versions, they snapshot the ecosystem as a whole and then smash the bugs out of whatever they got. They don’t seem to use versions to fast-track “minor” updates, nor to schedule in major merges.
One last bit of versioning awkwardness, and then I’m done: versions tend to be kind of fuzzy as it is. Although Net::Amazon::DynamoDB’s git repo has some heavy updates (notably, blob type support) since its last CPAN release, the repo and the CPAN release have the same version number stored in them. When considering development packages in an open-source world, “a version” becomes a whole list of possible builds, all carrying that version number, and each potentially subject to local changes.
Given that, there’s little hope for a One True Versioning scheme, even if everyone would follow it when it was done. I suspect there’s some popularity around SemVer simply because it’s there, and it’s not so obviously broken/inappropriate that developers reject it at first glance. It probably helps that three-part versions are quite common, from the Linux kernel all the way up to packages for interpreted languages (gems or equivalents).
Wednesday, November 26, 2014
AppArmor and the problems of LSM
AppArmor is a pretty excellent framework. It clearly works according to its design. There are only two major problems, one of which is: apps can't give out useful error messages when an AppArmor check fails.
Since LSMs insert themselves inside the kernel, the only thing they can really do if an access check fails is force the system call to return EPERM, "permission denied." An app can't actually tell whether it has been denied access because of filesystem permissions, LSM interference, or the phase of the moon, because the return code can't carry any metadata like that. Besides which, it's considered bad practice to give an attacker detail about a failing security check, thus the ubiquitous and uninformative "We emailed you a password reset link if you gave us a valid email" message in password reset flows.
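To see it from the application's side, here is a tiny PHP illustration (the path is made up); whatever actually said no, the program gets the same one-line answer:
<?php
$fh = @fopen('/var/log/tcpdump/capture.pcap.gz', 'w');
if ($fh === false) {
    // The message is something like "failed to open stream: Permission denied",
    // identical whether the cause was file modes, an LSM profile, or anything else.
    $err = error_get_last();
    error_log($err ? $err['message'] : 'open failed');
}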
Thus, the hapless administrator does something perfectly reasonable, AppArmor denies it, and it becomes an adventure to find the real error message. AppArmor tends to hide its messages away in a special log file, so that the normal channels and the app's log (if different) only show a useless EPERM message for a file that the app would seem to have access to upon inspection of the filesystem.
Adding more trickiness, AppArmor itself doesn't always apply to an application, so testing a command from a root shell will generally work without issue. Those root shells are typically "unconfined" and don't apply the profiles to commands run from them.
The other main problem is that it requires profile developers to be near-omniscient. It's nice that tcpdump has a generic mechanism to run a command on log rotation, but the default AppArmor profile only sets up access for gzip or bzip2... and even then, if it's a *.pcap file outside of $HOME, it can't be compressed because the AppArmor profile doesn't support creating a file with the compressed extension. (That can be fixed.)
It's nice that charon (part of strongswan) has a mechanism to include /etc/ipsec.*.secrets so that I could store everyone's password in /etc/ipsec.$USER.secrets ... but the profile doesn't let charon list what those files are even though it grants access to the files in question. So using the include command straight out of the example/documentation will, by default, allow the strongswan service to start... but it won't be able to handle any connections.
I had SELinux issues in the past (which share all the drawbacks of AppArmor since it's just another LSM) when I put an Apache DocumentRoot outside of /var/www. In that case, though, I disabled SELinux entirely instead of trying to learn about it.
tl;dr: it's pretty much a recipe for Guide Dang It.
Friday, November 21, 2014
Add permissions to an AppArmor profile
Background: I tested something in the shell as root, then deployed it as an upstart script, and it failed. Checking the logs told me that AppArmor had denied something perfectly reasonable, because the developers of the profile didn't cover my use case.
Fortunately, AppArmor comes with a local override facility. Let's use it to fix this error (which may be in /var/log/syslog or /var/log/audit.log depending on how the system is set up):
kernel: [ 7226.358630] type=1400 audit(1416403573.247:17): apparmor="DENIED" operation="mknod" profile="/usr/sbin/tcpdump" name="/var/log/tcpdump/memcache-20141119-072608.pcap.gz" pid=2438 comm="gzip" requested_mask="c" denied_mask="c" fsuid=0 ouid=0
What happened? requested_mask="c" means the command (comm="gzip") tried to create a file (given by the name value) and AppArmor denied it. The profile tells us, indirectly, which AppArmor file caused the denial; substitute the first '/' with the AppArmor config dir (/etc/apparmor.d/) and the rest with dots. Thus, if we look in /etc/apparmor.d/usr.sbin.tcpdump, we find the system policy. It has granted access to *.pcap case-insensitively anywhere in the system, with the rule /**.[pP][cC][aA][pP] rw. I used that rule to choose my filename outside of $HOME, but now the -z /bin/gzip parameter I used isn't able to do its work because it can't create the *.gz version there.
Incidentally, the command differs from the profile in this case, because tcpdump executed gzip. It's allowed to do that by the system profile, which uses ix permissions—that stands for inherit-execute, and means that the gzip command is run under the current AppArmor profile. All the permissions defined for tcpdump continue to affect the gzip command.
Anyway, back to that fix I promised. AppArmor provides /etc/apparmor.d/local/ for rules to add to the main ones. (Although this can't be used to override an explicit deny like tcpdump's ban on using files in $HOME/bin.) We just need to add a rule for the *.gz, and while we're there, why not the *.bz2 version as well?
/**.[pP][cC][aA][pP].[gG][zZ] rw,
/**.[pP][cC][aA][pP].[bB][zZ]2 rw,
The trailing comma does not seem to be an issue for me. Note also that we don't need to specify the binary and braces, since the #include line in the system profile is already inside the braces.
Ubuntu ships some files in the local directory already; we should be able to run
sudo -e /etc/apparmor.d/local/usr.sbin.tcpdump
and add the lines above to the existing file. Once the file is ready, we need to reload that profile to the kernel. Note that we use the system profile here, not the one we just edited:
sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.tcpdump
I'm not clear enough on how AppArmor works to know whether we need to restart the service now (I'm purposely leaving my tcpdump service file/discussion out of this post), so I restarted mine just to be safe.
Wednesday, November 19, 2014
Unison fatal error: lost connection to server
My unison sync started dying this week with an abrupt loss of connection. It turns out, when I scrolled back up past the status spam, that there was an additional error message:
Uncaught exception Failure("input_value: bad bigarray kind")
This turns out to be caused by a change of serializer between ocaml 4.01 and 4.02. Unison compiled with 4.02 can't talk to pre-4.02 unisons successfully, and the reason mine failed this week is because I did some reinstalls following an upgrade to OS X Yosemite. So I had homebrew's unison compiled with 4.02+ and Ubuntu's unison on my server, compiled with 4.01.
Even if I updated the server to Ubuntu utopic unicorn, it wouldn't solve the problem. ocaml 4.02 didn't make it in because it wasn't released soon enough.
So, time to build unison from source against ocaml built from source!
One problem: ocaml 4.01 doesn't build against clang... and on Yosemite, that's all there is. I ended up hacking the ./configure script to delete all occurrences of the troublesome -fno-defer-pop flag, then redoing the install. Per the linked bug, it was added in the 01990s to avoid tickling a code generation bug in gcc-1.x. Newer clangs won't complain about this particular flag, and newer ocamls won't use it, but I chose to stick with the old (semi-current) versions of both.
Happily, that seemed to be the only problem, and the rest of the unison build (namely, the text UI; I didn't want a heap of dependencies) went fine.
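If you'd rather not edit configure by hand, a one-liner along these lines (GNU or BSD sed; it keeps a backup copy) does the same thing:
sed -i.bak 's/-fno-defer-pop//g' ./configure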
(And this, friends, is why you should define a real file format, instead of just dumping your internal state out through your native serializer. For things you control, you can have much broader cross-compatibility.)
Wednesday, November 12, 2014
Web project repository layout
I used to stuff all my web files under the root of the project repository when I was developing "a web site", and include lots of tricky protections (is random constant defined and equal known value?) to 'protect' my includes from being directly accessed over the web. I was inspired at the time by a lot of open-source projects that seemed to work this way, including my blog software at the time (serendipity); when one downloaded the project, all their files were meant to be FTP'd to your server's document root.
However, these days I'm mostly developing for servers my company or I administer, so I don't need to make the assumption that there is "no place outside the DocumentRoot." Instead, I designate the top level of the repo as a free-for-all, and specify one folder within as the DocumentRoot.
This means that all sorts of housekeeping can go on up in the repo root, like having composer files and its vendor tree. Like having the .git folder and files safely tucked away. Like having a build script that pre-compresses a bunch of JavaScript/CSS... or a Makefile. Or an extra document root with a static "Closed for maintenance" page. The possibilities are almost endless.
Some of our sites rely on RewriteRules, and that configuration also lands outside the DocumentRoot, to be included from the system Apache configuration. This lets us perform rewrites at the more-efficient server/URL level instead of filesystem/directory level, while allowing all the code to be updated in the repo. When we change rewriting rules, that goes right into the history with everything else.
To give a concrete example, a website could look like this on the server:
- public/
  - index.php
  - login.php
  - pdf-builder.php
  - pdf-loader.php
  - css/
  - js/
  - img/
- tcpdf/
  - (the TCPDF library code)
- built-pdf/
  - 20141011/
    - 1f0acb7623a40cfa.pdf
- cron/
  - expire-built-pdfs.php
- conf/
  - rewrite.conf
- composer.lock
- composer.json
- vendor/
  - aws/
    - aws-sdk-php/
  - guzzle/
  - ...
Again, this doesn't really work for letting other people publish on a shared host (rewrite.conf could conceivably do anything allowable in VirtualHost context, not just rewrite) but we own the whole host.
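For the sake of completeness, the system-side Apache configuration for that layout is only a few lines. This is an illustrative sketch; the path and hostname are invented, not our real ones:
<VirtualHost *:80>
    ServerName example.com
    # Serve only the repo's public/ folder
    DocumentRoot /srv/example/public
    # The rewrite rules are versioned with the code, outside the DocumentRoot
    Include /srv/example/conf/rewrite.conf
</VirtualHost>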
Saturday, October 25, 2014
MySQL 5.6 TIMESTAMP changes
So you upgraded to MySQL 5.6 and there's a crazy warning in your error log about TIMESTAMP columns doing stuff, or not, when explicit_defaults_for_timestamp is enabled, or not?
It's actually pretty simple: TIMESTAMP columns without any DEFAULT nor ON UPDATE clause are going to change behavior in the future, and MySQL 5.6 has an option to allow choosing whether to opt-in to that future behavior at present.
MySQL 5.6 without the explicit_defaults_for_timestamp option set, which is default, will continue treating a column defined as simply TIMESTAMP (possibly also NOT NULL) as if it were defined TIMESTAMP [NOT NULL] DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP.
MySQL 5.6 with the explicit_defaults_for_timestamp option set will behave like the future planned versions of MySQL, where TIMESTAMP will be treated as TIMESTAMP DEFAULT NULL and TIMESTAMP NOT NULL will behave as TIMESTAMP NOT NULL DEFAULT 0. Implied automatic updates will be no more.
That's all.
All of the other historic TIMESTAMP behaviors, such as assigning NULL to a column declared with NOT NULL actually assigning CURRENT_TIMESTAMP, remain unchanged by this update. There are some brand-new capabilities such as fractional seconds, applying default/update clauses to DATETIME columns, and setting those clauses on more than one column in a table. However, those features aren't a change of meaning for existing definitions, so they're unaffected by the option.
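If you'd rather not lean on either implicit behavior, spelling the clauses out makes a definition mean the same thing under both settings. A small example (table and column names invented); note that having the clauses on two columns is itself one of the new 5.6 capabilities:
CREATE TABLE example (
  id INT NOT NULL PRIMARY KEY,
  created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);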
Friday, October 24, 2014
PHP-FPM error logging (where FastCGI errors go)
Let's start with the catchy summary graphic:
There are a lot of streams in the worker because historically, CGI directly connected a worker's standard output and error streams (file descriptors 1 and 2 on Unix) to the web server. FastCGI kept the notions, but established new, separate FastCGI output and error streams for FastCGI-aware applications to use. It also specified that the standard output/error streams should be discarded.
So by default, that's exactly what the FPM master does: it arranges for workers' standard streams to be discarded. That's actually fine with PHP because the error logs go through the SAPI layer, and that layer writes them to the FastCGI error stream when FastCGI is active.
By default, that stream goes through to the web server, where Apache logs it to whatever error log is in effect. (Typically, I've worked on sites that have per-virtualhost logs. Especially for error logs, it's a lot easier to find the relevant messages when it's known that Host X is down.)
PHP-FPM, however, also has the option of setting per-pool php.ini values in the pool configuration. If a value is provided there (e.g. /etc/php-fpm-5.5.d/www.conf) for the error_log setting, then that pool will log to the specified destination instead of the Web server's error log. In other words, the php_value[error_log] set in the pool configuration overrides the default behavior of logging to the FastCGI error stream.
These options turn out to be entirely independent of the master FastCGI configuration setting named catch_workers_output. This option, in the php-fpm configuration (e.g. /etc/php-fpm-5.5.conf) controls whether the standard streams are discarded as specified, or if they will appear as lines in the master process's error log. Unless the workers are dying unexpectedly, this probably doesn't contain anything interesting.
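To make the distinction concrete, the two knobs live in different files; a sketch, with an invented log path and the same example file names as above:
; in the pool file, e.g. /etc/php-fpm-5.5.d/www.conf: this pool's PHP errors
; go to their own file instead of riding the FastCGI error stream to Apache.
php_value[error_log] = /var/log/php-fpm/www.error.log
; in the master config, e.g. /etc/php-fpm-5.5.conf: keep the workers'
; stdout/stderr and write them to the master's log instead of discarding them.
catch_workers_output = yes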
Monday, October 20, 2014
jQuery event handlers and Content-Security-Policy
I wrote a while back about the efficiency angle of using .on() to set up event handlers, but I recently discovered a surprising additional feature: when an AJAX request to the server returns HTML, jQuery (or bootstrap?) carefully lifts out the embedded <script> elements, and runs their content via eval(). This doesn't work with a Content-Security-Policy in place, because eval is forbidden.
In my case, the script just wanted to bind a handler to all the newly-added elements that were also provided in the response.
Switching to binding the handler to the modal itself and filtering the interesting children meant that not only did I get the memory and CPU improvements of binding a single handler one time, but it eliminated the script—and the need to eval it—from the response.
Before, event handlers were bound with this code in the response, via jQuery secretly calling eval():
<script type="text/javascript">
$('#clientlist a').click(activateClient);
</script>
Afterwards, the response contained no script, and the event handlers were bound natively in my page initialization code:
$('#clientmodal').on('click', '#clientlist a', activateClient);
The content of #clientmodal was replaced whenever the modal was opened, but the modal itself (and its event handler) remained in place.
Sunday, September 28, 2014
An AWS VPN: Part III, Routing
The last two posts in this series (part I, IPSec and part II, L2TP) covered enough to get an L2TP/IPSec VPN connection up, to the point where arbitrary traffic can be exchanged between the client and the server. But, there's a missing feature yet. Remember this picture from part II?
If we assigned a third network for the VPN, how does the client know that the protected network is even back there to send traffic to? It's not the remote end of the VPN link (172.16.0.1) so it'll get routed via the gateway and fail at some point.
The answer is that someone conveniently threw in an extra hack for us: the VPN client sends a DHCPINFORM message over the L2TP connection, and the server just has to respond to it with a few vital options.
Saturday, September 27, 2014
An AWS VPN: Part II, L2TP/PPP
In part I of this series, we covered the IPSec layer. With IPSec up and running, we can move on to configuring L2TP; routing will be covered in part III.
In the "usual" VPN setup, we'd cut a hole in the existing DHCP pool in the protected network to reserve as addresses to be given to VPN clients. In a /24 subnet, network gear might get addresses up to 30, the DHCP pool could run from 31-150, and other static IPs (hi, printers!) might live at 200-254, with 151-199 open for VPN access.
Unfortunately, my protected network in this scenario is AWS, which I can't actually carve random IPs out of to assign to connecting clients. I have to assign one in a different netblock and hope none of the three (the client, AWS, and my new L2TP-only address pool) conflict.
Luckily, there are three private network spaces defined, and everyone always forgets about the weird one. Let's set up the connection something like this:
Of course I didn't actually use 172.16.0.x myself, that's asking for almost as much trouble as 192.168.0.x. (Also, I guess I could have colored the NAT-to-server link green as well, because that's the same network space in my AWS setup. Too late, we're going!)
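For reference, that address plan lands in xl2tpd's configuration in just a few lines. Treat this as a sketch: the option names are the standard xl2tpd ones, but the pool boundaries and the ppp options path are illustrative, and I'm sticking with the 172.16.0.x numbers from the picture rather than my real ones:
[lns default]
local ip = 172.16.0.1
ip range = 172.16.0.10-172.16.0.50
require chap = yes
pppoptfile = /etc/ppp/options.xl2tpd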
Friday, September 26, 2014
An AWS VPN: Part I, IPSec
I recently set up an IPSec/L2TP VPN. This post is about how I did it, and how I debugged each individual part. In honor of how long it took me, this will be a three-part series, one for each day of work I spent on it. Part II will cover L2TP/PPP, and part III will get into routing and DNS.
First things first: IP addresses have been changed. Some details will apply to Linux, OpenSWAN, xl2tpd, pppd, iproute2, dnsmasq, or road-warrior configurations in particular, but the theory is applicable cross-platform. I want to cover both, because the theory is invaluable for debugging the practice.
This also covers a few details about putting the VPN server on an AWS instance in EC2 Classic with an elastic IP. But first, let's take a look at the general structure we have to work with:
A client connecting from a network we don't control, over networks that are possibly evil (hence the need for a VPN), to a server that provides access to vague "other resources" in non-globally-routable space behind that server. We don't know where the client is going to be, and/or the corporate net is sufficiently large, that we can't use dirty NAT tricks to move each network "into" the other. (But, spoiler, we will take some inspiration from there.)
Without further ado, here's part I: IPSec. Part II and III are forthcoming.
Thursday, September 11, 2014
OpenSSL cipher cargo culting
From the new version of my dovecot conf that dpkg installed in the trusty upgrade:
ssl_cipher_list = ALL:!LOW:!SSLv2:ALL:!aNULL:!ADH:!eNULL:!EXP:RC4+RSA:+HIGH:+MEDIUM
oh God my eyes, why so much bleeding!?
Friday, August 15, 2014
Debian/Ubuntu init breakpoints (How to run zerofree and power off correctly afterwards)
Editor's Note [2020-11-21]: If you just want to run zerofree, see my updated post: Using zerofree to reduce Ubuntu/Debian OVA file size.
Most of the Debian information mentions break=init and then says “see the /init source for more options.” Fortunately, I can use break=init and then dig through /init myself.
Here’s the possible list for Ubuntu 14.04 Trusty Tahr:
- top: after parsing commandline and setting up most variables, prior to the /scripts/init-top script
- modules: before loading modules and possibly setting up netconsole
- premount (default for break without a parameter): prior to the /scripts/init-premount script
- mount: prior to /scripts/${BOOT} (which is set to "local" in my VM but comments indicate it can be otherwise under casper)
- mountroot: prior to actually mounting the root filesystem
- bottom: prior to running /scripts/init-bottom and mounting virtual filesystems within the root
- init: prior to unsetting all initramfs-related variables and invoking the real init
break=init drops to a busybox shell with the root device filesystem mounted at $rootmnt, which is /root on my system. The virtual filesystems like /proc are also mounted under there, in the device’s root directory, not the initramfs’ root. This is actually the state I want to be in, and digging out that list of break options was entirely irrelevant. My apologies for the unintended shaggy-dog story.
From here, to run zerofree /dev/sda1 (your device name may vary) and shut down the system correctly afterwards:
- Boot with break=init
- chroot $rootmnt /bin/sh
- zerofree /dev/sda1
- exit
- sync
- poweroff
chroot into the disk first, so that zerofree can read the mount table from /proc/mounts (it doesn’t run on a writable filesystem.) Then to clean up, I return to the initramfs where I can use its working poweroff command.
I used to use init=/bin/sh to get a shell inside the root mount, but then I didn't have a way to shut down the system cleanly. In the image, shutdown wants to talk to init via dbus, neither of which are actually running, and it simply delivers a cryptic message like “shutdown: Unable to shutdown system”. Recovery or single-user modes didn't work because the filesystem was already mounted read-write and enough services were up to prevent remounting it as read-only.
CVE-2014-0224
Apparently in spite of automatically updating OpenSSL to the newest version, my server was just sitting around vulnerable for a while. unattended-upgrades, you had one job, to get security updates applied without me pushing buttons manually.
I guess I've got to write a cron job to reload all my network-facing services daily, in case some dependencies are ever updated again. Because for all its strong points, the package system doesn't care if your packages are actually in use.
Thursday, August 14, 2014
On the Windows XP EOL
I discovered that some problems people had connecting to our shiny new SHA-256 certificate in the wake of Heartbleed were not caused by “Windows XP” per se, but by the lack of Service Pack 3 on those systems.
SP3 itself was released in 2008, meaning that SP2 had a two-year “wind down” until it stopped receiving support in 2010. That means everyone who had problems with our certificate was:
- Using an OS that has been obsoleted by three further OS versions if you include Windows 8.
- Using an OS that had reached its actual end-of-life after ample warning and extensions from Microsoft.
- Using a version of that OS which had been unpatched for nearly four years.
Combine the latter two, and you have people running an OS who never installed the SP3 update during its entire six-year support lifetime, which is longer than Windows 7 had been available.
In light of this, I can see why businesses haven’t been too worried about the end-of-life for Windows XP. It’s clear that those affected are not running SP3 on those systems, meaning they were already four years into their own unpatched period.
And if they “just happen” to get viruses and need cleanup, that just seems to be part of “having computers in the business.” Even if the machines were up-to-date, there would still be a few 0-days and plenty of user-initiated malware afflicting them. There’s little observable benefit to upgrading in that case… so little, in fact, that the business has opted not to take any steps toward it in half a decade.
Tuesday, August 5, 2014
GCC options on EC2
AWS second-generation instances (such as my favorite, m3.medium) claim to launch with Xeon E5-2670 v2 processors, but sometimes come up with the v1 variant instead. These chips are Ivy Bridge and Sandy Bridge, respectively. AFAICT, this means that AMIs with code compiled for Sandy Bridge should run on any second-generation instance, but may not necessarily run on previous-generation instances. What little I can find online about previous-generation instances shows a mix of 90nm Opteron parts (having only up to SSE3) and Intel parts from Penryn-based Harpertown up through Westmere.
GCC changed their -march options in the recently-released version 4.9. GCC 4.8, which notably shipped in Ubuntu 14.04 (Trusty Tahr), used some inscrutable aliases; the equivalent to 4.9's sandybridge is corei7-avx, for instance. Here's a table of the subset that's most interesting to me:
GCC 4.8      | GCC 4.9
-------------|------------
corei7       | nehalem
–            | westmere
corei7-avx   | sandybridge
corei7-avx-i | ivybridge
corei7-avx2  | haswell
Broadwell and Westmere are not explicitly supported in the older release. Based on its definition in the gcc-4.9 sources, I believe the equivalent set of flags for Westmere in gcc 4.8 would be -march=corei7 -maes -mpclmul. And naturally, the corei7-avx2 option for Haswell would be the best for targeting Broadwell.
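If you want to double-check what a given instance actually is before choosing a flag, two quick checks from a shell (nothing AWS-specific about them):
# Ask GCC what -march=native would resolve to on this machine
gcc -march=native -Q --help=target | grep -- '-march='
# Ask the CPU what it calls itself
grep -m1 'model name' /proc/cpuinfo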
Sunday, June 29, 2014
Fake Speed with Bitmaps
It seems that Apple has been working very hard on “hiding latency” by showing a picture of an app’s last state while the app reloads. Apps may have to opt-in to this behavior, but it seems fairly prevalent on iOS and now OS X. Specifically, I’ve noticed it on iOS in some password-locked apps, in Music Studio, and on OS X, in the App Store.
After a firmware update, logging in again “re-opens” the App Store in precisely the state it was left after reboot. Even the update being installed is still shown as available, with its “RESTART” button grayed out. That perfectly represents how the app looked… before rebooting. But it rather quickly reverts to “Cannot contact the App Store; an Internet connection is required” since the wireless isn’t up yet.
Why it needs to talk to the App Store when it could have cached the data less than five minutes ago is beyond me, but never mind that.
The point is, while OS X is busy displaying a stale screenshot, any UI interaction will be lost. Because it’s not a UI at all, it’s a highly accurate view of what it could look like. OS X remains awfully confident about the number of updates it has, even though it can’t connect to the Internet and isn’t even loaded.
This is especially noticeable on iOS where the screen doesn’t visibly change state between the picture being shown and the real UI replacing it. I have a password-protected app that always looks like an “Enter your password” screen, complete with buttons in pixel-perfect position, but if I interact with it immediately after switching to it, my touches are dropped. Instead of 1234, it only registers 234 (or even 34). Then I wait for it to check the PIN and do the unlock animation, then realize that it won’t, and finally delete each digit by mashing Delete before entering the PIN for a second time.
1234 is not my real PIN, of course; that’s the PIN an idiot would have on his luggage. Or, you know, every Bluetooth device ever with a baked-in PIN code. (sigh)
It’s also really annoying when an app takes a while to (re)load because it’s legitimately large, like Music Studio and its collection of instruments for the active tracks. The UI is completely unresponsive for several seconds, until suddenly everything happens at once, badly. The Keyboard screen often gets a key stuck down, making me tap it again to get it to release. Or knowing that, I avoid touching it for a while and have to guess at when it will be responsive, not knowing how much time I’ve wasted in over-shooting that point.
In the end, using static pictures of an app for latency hiding seems like a poor user interface—because the end of the latency period is also hidden, it encourages users to try interacting early, when the result is guaranteed not to work. But instead of showing the user that, it silently fails. I’d much prefer the older “rough outlines” splash screens than the literal screenshots of late; the “ready” transition is obvious with those, when the real UI shows up.
It actually surprises me that Apple would even release UI like this, because it’s kind of frustrating to clearly have an app on-screen that’s not reacting to me. Then again, with the train wreck that was QuickTime 4.0 Player, perhaps I shouldn’t be too surprised. (Yes, that was 15 years ago or something. No, the internet will never forget.)
Saturday, June 7, 2014
Musing on Iterator Design
PHP’s standard library (not the standard set of extensions, but the thing known as the “SPL”) includes a set of iterators with a kind of curious structure.
At the root is Traversable, which is technically an empty interface, except that there’s some C-level magic to enable C-level things to declare themselves Traversable. This enables them to be used in foreach loops even though they don’t support any Iterator method.
Iterator exists as the basic interface for regular PHP code to implement iteration, internally. Notably, while Traversable is a once-only iterator, Iterator is not. rewind() is part of its interface.
For classes that wrap a collection, there’s an IteratorAggregate interface, which lets them return any Traversable to be used in their place. And who could forget ArrayObject, since a plain array (in spite of being iterable in the engine) is not actually a valid return from getIterator()?
There are some interesting compositions, like AppendIterator, which can take multiple Iterators and return each of their results in sequence. AppendIterator is a concrete class that implements OuterIterator, meaning that it not only provides regular iteration but the world can ask it for its inner iterator.
There’s the curiously named IteratorIterator, which I never appreciated until today, when I tried to pass a Traversable to AppendIterator. For no apparent reason, AppendIterator can only append Iterators, not Traversables. But IteratorIterator implements Iterator and can be constructed on Traversable, thus “upgrading” them.
Long ago discovered and just now remembered, PHP provides DirectoryIterator and RecursiveDirectoryIterator—but to make the latter actually recurse if used in a foreach loop, it needs to be wrapped in RecursiveIteratorIterator, because the caller of a plain RecursiveIterator must decide whether to recurse, or not, at an element with children.
The designer in me thinks this is all a bit weird. Because PHP defined the basic Iterator as rewind-able, a bunch of stuff that isn’t rewind-able (including IteratorAggregate) can’t be used in places that expect a base Iterator. Like AppendIterator. That in turn exposes a curious lack of lazy iterators. When I wrap an IteratorAggregate in an IteratorIterator, getIterator() is called right then, before the result can even be added to an AppendIterator.
So if I want to fake a SQL union operation by executing multiple database queries in sequence, and I don’t want to call the database before the AppendIterator reaches that point, there’s no simple way to do it with SPL’s built-in tools.
I can build a full Iterator that doesn’t query until current() is actually called, by implementing a handful of functions, each of which must carefully check the current iteration state before returning a value. These would satisfy the native AppendIterator, but be lazy themselves.
Or, I can build a full LazyAppendIterator that accepts Traversable and doesn’t try to resolve anything until the outer iteration actually reaches them. Then I can add IteratorAggregates that perform the query in getIterator(), since I won’t be calling that right away.
Those two choices amount to the exact same amount of work. In each case, I need to build a lazy iterator that tracks the “current iteration state” and launches the expensive call only when starting a new iterable/iteration sequence.
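For the record, the first option is only a screenful of code. Here is a rough sketch (the class name and the callable contract are mine, error handling omitted) of an Iterator that doesn’t touch the database until iteration actually reaches it; each one can be handed to the native AppendIterator, and its query fires the first time valid() or current() is called:
<?php
class LazyQueryIterator implements Iterator
{
    private $factory;     // callable that runs the query and returns a Traversable (e.g. a PDOStatement)
    private $rows = null; // materialized rows, fetched on first use
    private $pos = 0;
    public function __construct(callable $factory)
    {
        $this->factory = $factory;
    }
    private function load()
    {
        if ($this->rows === null) {
            // The expensive call happens here, not at construction time.
            $this->rows = iterator_to_array(call_user_func($this->factory), false);
        }
    }
    public function rewind()  { $this->pos = 0; }
    public function valid()   { $this->load(); return array_key_exists($this->pos, $this->rows); }
    public function current() { $this->load(); return $this->rows[$this->pos]; }
    public function key()     { return $this->pos; }
    public function next()    { $this->pos++; }
}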
In other languages I’ve used, the basic one-pass/forward-only iterator is the type accepted by all the compositing types, and everyone must live with the idea that the iterators in use may not be seekable/replayable. In PHP, I have to live with the idea that not all iterable things may work with all Iterators, which seems supremely odd.
P.S. They didn't pack any RegexFilterIterator in the box, either?
Tuesday, May 13, 2014
Hidden Interfaces
I fly around my world with lots of keyboard shortcuts. The hot-corner UI stuff in Windows 8 doesn't really bother me, except for that part where it's hard to hit in a windowed virtual machine... but they provided some new shortcuts like Win+C to get there, or I can run it full screen.
But there's something interesting about this: my parents have a much worse time using Windows 8.x than I do, and the gap is greater than it was in the XP/7 days. And it occurs to me that I use a lot of 'secret' interfaces that are invisible to them.
Thursday, April 10, 2014
IO::Scalar
Roaming the dusty halls of some legacy system tools, I found some code that takes IO-Stringy as a dependency. Only to use it for IO::Scalar... only to support sending to non-filehandles... which is an unused feature of the system. There are no callers of the carefully-built "change output stream" method.
As the icing on the cake, Perl has had native support for doing this since 5.8, by passing a scalar reference instead of a file name:
open(my $fh, '>>', \$string_out);   # appends to the scalar $string_out instead of a file
This legacy code has only ever run on Perl 5.10.0 and newer. The dependency has always been unnecessary, unused, and overbuilt.
Monday, March 24, 2014
VPN Terminology: PPTP and L2TP/IPSEC
A little reminder before I get started:
Do not use PPTP
Seriously, do not use it. It’s insecure and it occasionally has terrible performance due to its underlying design. Don’t use it.
Just don’t.
With that aside…
Remembering PPP
Back in the days of dialup, people used physical circuits like the telephone network to connect to the Internet. On that last hop between the home and the ISP, the protocol that ferried IP directly was PPP, the Point-to-Point Protocol.
PPP was designed for an ISP to serve dial-in lines, so it had a bit more to worry about than just ferrying IP packets. It had to assign addresses between the endpoints (IPCP) as well as ensure that Internet service was being given to a paying user by authenticating them (PAP or CHAP).
In those simpler times, one physical machine at the ISP’s end would connect to both the Internet and the telephone network.
Some people, though, needed so much internet that they wanted to dial in with multiple lines, and have packets routed down both of them. This was known as “multilink PPP” and it had a problem. What if the ISP doesn’t have just one machine handling dial-up connections?
It could be the case that a client would attempt to set up a multilink PPP session, but their calls were on different machines, and The Internet couldn’t really route to both of them. So, faced with different PPP servers, the client had little choice but to hang up, dial in again, and hope that this time, they hit the same servers.
Tunneling
PPTP was largely motivated by the desire to solve this problem, by splitting Big Machines into two parts: one (or more) to handle dial-in lines, maintaining connections to one (or more?) machines which were actually on the Internet.
Handling calls from the clients were the PPTP Access Concentrators, or PACs, which would then deliver traffic to the PPTP Network Server, or PNS. The “tunnel” between these two parts was a single channel/circuit (I presume) that carried the PPTP control traffic and tunneled packets between PAC and PNS for de/multiplexing.
In this scenario, the client-to-PAC connection is still regular PPP; a portion of the PPP exchange (setting up multilink) is forwarded to the PNS, who can then see all client links, even across PACs. Thus, it can route effectively. From the point of view of the Internet, it really is routing all the traffic to one machine, the PNS.
The PNS must also handle the IPCP portion of PPP setup, since it’s the server with IP connectivity.
Much of the terminology in the PPTP RFC is defined with reference to this situation, and says nothing about VPNs themselves, because the protocol itself was not originally defined as a VPN protocol.
VPN puts a network in your networks
However, if the tunnel is encrypted then we can convert it into a VPN technology by moving the tunnel off ISP equipment and onto the public Internet.
In this case, the VPN client is playing the role of PAC and the server plays PNS again; the connection between them is a TCP channel over which all network packets are forwarded. Like regular PPP, there are control messages, and also data packets riding on the single established channel.
This is where performance trouble turns out to be lurking: if a TCP connection is forwarded, it gets forwarded reliably. It’s never going to be lost on the journey over the PPTP VPN. That inner TCP connection can’t detect the path MTU by packet loss, and this does not play well with systems that self-control their network rates based on packet loss.
(Ironically, recent bufferbloat mitigations that try to measure RTT and slow down when RTT increases are much better suited to this scenario.)
Layer 2 Tunneling (L2TP)
Someone decided to fix that, so an L2TP connection is carried over UDP packets instead. They implement their own reliability protocol for control messages, and rely on the tunnelled connections to handle any packet loss. That fixes the performance issues facing PPTP.
L2TP is also simplified a bit from PPTP, in that it doesn’t offer any encryption of its own. That’s why it has to be run over IPSEC to form a VPN; but otherwise, it’s conceptually near identical to PPTP.
(Of course, all those “P” abbreviations also become “L,” but actually crossing them all out cluttered the image too much.)
With encryption provided by the IPSEC layer, any weakness of L2TP becomes irrelevant. But it seems like a horrible hack, because IPSEC isn’t concerned with hosts; it’s concerned with networks and routing.
Like PPTP, the L2TP RFC is not that worried about defining all this stuff in terms of how an L2TP VPN would be set up in practice. Rather, it reads like a minimal number of changes were made to the PPTP RFC in order to make the required implementation changes obvious to people already writing PPTP VPN software.
IP Security (IPSEC)
IPSEC itself establishes an encrypted channel with integrity checking; it’s just anonymous. L2TP provides a point to get a specific username/password pair after the IPSEC key exchange (IKE) is done.
If you’re using a pre-shared key (PSK), this oddly means that you need two passwords to connect up the VPN: one site-wide password to establish the IPSEC channel, and a separate per-user password to set up the L2TP tunnel inside.
Sediment
There you have it: VPNs today are generally complicated and awful because people have been adapting other tools to the job, and the layers have just been accumulating.
It should be noted that, as long as this post is, it does not cover other approaches to VPNs: it remains laser-focused on L2TP. OpenVPN and sshuttle are out of scope.
Monday, March 3, 2014
Bitcoin
I used to regret, just a little bit, not getting into Bitcoin as soon as I'd heard about it (2012?). But I probably would have bought via Mt. Gox, given their size and fame at the time.
- Gox is missing 850,000 BTC, of which 750,000 belong to customers. Per bankruptcy filing.
- Gox is also missing 2.8 billion yen, about 27.6 million USD today. Per bankruptcy filing.
- Hackers claim to have scans of passports that were submitted to Gox. Per anonymous group that has at least two bits of inside info (call recording, source code).
Thursday, January 30, 2014
vmconnect
I wrote a script called _vmconnect that either simply logs into a VirtualBox guest, or starts it headlessly first and then logs in. With _vmconnect set as the command for an iTerm2 profile, I can open the session without caring whether the machine is running or not.
The full script became a bit unwieldy for a blog post (and I didn't want to maintain it forever within Blogger), so I published it to github. You'll find it in my bin repository: https://github.com/sapphirecat/bin
Enjoy!
Update: _vmconnect version 1.0.3 fixes a possible data loss: if the VM was booted by _vmconnect, terminating the ssh connection it logged in with would also abort the VM. You SHOULD pull if you are using _vmconnect 1.0.2 or prior (check your _vmconnect --version output!)
Friday, January 17, 2014
Brief notes on Packer
packer.io says it builds identical images for multiple platforms such as EC2, VirtualBox, VMware, and others (with even more on the way via plugins). However, it's more about customizing a machine that's already running than about fabricating a new machine out of whole cloth and distributing the result to disparate services. That is, the AMI builder starts from another AMI; the VirtualBox builder starts from an installer ISO (with commands to make the install unattended) or an OVF/OVA export, and customizes that to produce an output OVF/OVA export. This is, as I understand it, in contrast to Vagrant or Docker images, which build one image that runs on many systems.
Packer is about customizing machines that already exist in some form, then building the result as a new image for the same system. If you're really interested in taking "one base image" to many environments, then Docker is probably the better choice there. It's built on Linux Containers, where the virtualization happens inside the host kernel instead of a dedicated hypervisor, which makes it possible to run a bit-for-bit identical container both inside EC2 (even PV) and inside VirtualBox.
In another contrast to vagrant, packer supports a shell provisioner that lets you upload a shell script and run wild. Last I knew, which is probably woefully out of date, vagrant supported puppet by default and chef-solo as an option. Since I happen to have an existing build system constructed around shell scripts for provisioning (with a dash of perl or php for the trickier parts), albeit specifically for EC2 instances, packer might suit my codebase and philosophy a little better.
Put another way, and beating the dead horse a bit more, Vagrant and Docker are about creating One True Image and later instantiating that single image on different underlying services (including EC2 and VirtualBox). Packer is about coordinating the construction of native images for those services; it's about making predictable, repeatable changes to a running instance in the service and exporting the result for later use.
Unrelated to the above, packer is written in go 1.2 and distributes binaries for various platforms. Docker is also written in go (but I'm not sure of the version target), leaving Vagrant in Ruby as the odd man out. Having been subjected to rvm to install octopress at one point, I'm much more amenable to smaller dependency footprints... like static binaries produced by Go. (Then again, I heard later that everyone hates rvm.)
Update 14 Feb 2014: I thought of one more useful contrast among the projects. Docker and Packer are aimed toward building immutable, self-contained images. Vagrant is focused on setting up (repeatable) development environments--particularly, in how it includes only the VirtualBox provider by default, and uses shared folders to persist changes as you develop.
Monday, January 13, 2014
Event Handler Optimization
I've been coding with jQuery for a long time. I'm pretty sure 1.3 was new when I started.
One of the things that had always been "good enough" for me was to attach event handlers en masse directly to the elements involved:
$('a.history').click(popupHistory);
It so happens that if the page is a table of 50 customers in the search result, each with a history link, then 50 handlers get bound. jQuery takes care of all the magic, but it can't really optimize your intent. Newer versions of jQuery—that is, 1.7 and up—support a more intelligent event-binding syntax which is a beautiful combination of two things:
- A single handler on an element containing all of the interesting ones, invoked as the event bubbles.
- A selector for the handler to filter interesting children. The handler is only called if the event occurs within a child matching that selector.
Together, these make up the delegated form of .on(). For an idea how this works:
<div id="box"></div>
<script type="text/javascript">
$('#box').on('click', 'a.history', function (e) {
popupHistory($(e.target).closest('[data-customer]').attr('data-customer'));
});
// populate the #box with HTML somehow, e.g.:
$('#box').html('<div data-customer="3">Customer! <a class="history" href="#">View Order History</a></div>');
</script>
Not only does this attach one event handler under the hood, but that event handler will be run for any present or future children. The parent must be there for .on() to bind the handler to (otherwise, you'd have an empty jQuery set, and nothing would happen), but children will be detected by that handler when the event fires.
In the presence of dynamically added/removed elements, then, it's a double optimization: not only does only one handler get bound in the first place, but the handlers don't have to be changed on a per-element basis when children are added or removed.
Thursday, January 9, 2014
Smart Clients and Dumb Pipes for Web Apps
After something like 8 years of coding for the web, both in new and legacy projects, across four companies and some personal projects, I've come to a philosophy on web application layout:
If you require that the client has JavaScript, then you may as well embrace the intelligence of your client-side code. Let your AJAX calls return simple messages, to be converted to a visible result for the user by code that shipped alongside that UI.
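As a tiny sketch of that split (the endpoint name, fields, and canned data here are all made up for illustration), the server side can stay this dumb:
<?php
// customer_history.php (hypothetical endpoint): return plain data, not markup.
// The JavaScript that shipped with the page decides how to present it.
header('Content-Type: application/json');

$customerId = isset($_GET['customer']) ? (int) $_GET['customer'] : 0;

// Imagine a real lookup here; canned data keeps the sketch self-contained.
$orders = array(
    array('id' => 1001, 'total' => '19.99'),
    array('id' => 1002, 'total' => '4.50'),
);

echo json_encode(array('customer' => $customerId, 'orders' => $orders));
The client-side handler then turns that JSON into whatever markup the page needs, so the wire format stays small and the presentation logic lives in exactly one place.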