The haze is back…

… And of course this post will not be about the haze because I’ve written enough about it. (Actually, one last thing I want to mention regarding the haze was that my office people were nice enough to give me an N95 mask during last year’s haze season because I had an asthmatic history. I ended up not using the mask because it was kinda very uncomfortable and my asthma hadn’t acted up in about a decade, but it was cool nonetheless.)

The topic of today is smoke. Like cigarette smoke.

I generally have a very dim opinion of smokers because most of them are oblivious to the sensibilities of the people around them. During national service I had close contact with a lot of people, a significant portion of whom were very clearly smokers. Now, I worked in an air-conditioned office (where smoking is obviously forbidden) and the electronic biodata I had access to never included such “trivial” information as whether someone was a smoker. (See footnote.) I also never hung out around the smoking area. So how did I manage to conclude that the smokers were in fact smokers?

The answer is obvious to anyone who’s been near a smoker: the smell of tobacco. It’s an unmistakeable stench that announces to the whole world that you just had a smoke. And for logistical reasons, people tended to look for me during their breaks, which, for the smokers (I presume) included smoke breaks either before or after the consultation.

The worst part of all this is that it’s not even difficult to mask the tobacco stench: just get a bunch of mints! I don’t have a particularly good sense of smell and the masking provided by typical mints is enough to fool me, so this may very well be selection bias at work, but a lot of people don’t even bother with this basic courtesy. There were some people (I would name names if I remembered them, but alas!) who kept looking for me right after their high sessions—I don’t recall them ever smelling not like smoke—and it really, really pissed me off. (Of course, there were also people who I didn’t even realise smoked for quite a while because they were polite enough to do something about their breath.)

Did I say something before about polytechnic graduates and the military? (The answer is “yes”, but I probably nuked the blog post.) Apparently, being a poly grad (versus JC) is also somewhat correlated with having a smoking habit, though I don’t have much data to support this and it’s probably some combination of selection/confirmation bias hard at work in my brain.

Another thing that used to bug me last year was that I used to take a bus from camp to AMK Hub then walk the remaining distance (about 2 km) home. Stepping out of AMK Hub was horrible—I’d almost immediately encounter hordes of smokers hanging around and smoking without any care in the world. I don’t know why they congregate just outside an area with high human traffic instead of having their smoke in the void decks of nearby HDB flats, which, on the other hand, are practically devoid of smokers. (These observations were made in the early evenings.)

And speaking of smoking in public areas, a couple of weeks ago I was idly waiting at a bus stop, when a bus (not the one I was waiting for) came by, and some asshole who was smoking just threw her cigarette butt onto the grass to board the bus. Yo, what the hell, the ashtray is just a couple of metres away! I had the urge to just shout “hey bitch, pick up that cigarette” but I didn’t have the courage. I should work on that.

For that matter, just looking around at the ground in Singapore reveals a somewhat startling fact: a huge proportion of litter is made of cigarette butts. (And actually this turns out to be true in a huge part of the world, which goes to show how uninformed I am.) The natural solution to this is to simply outlaw cigarettes, except this won’t be done because presumably the government makes boatloads of dosh from the vice tax. (Put another way: when I do get a job (as if), would I be willing to pay extra in taxes to make up for the loss of income from the vice tax? I’m not sure.)

Footnote: Actually, I probably did have access to the paper records for such biodata, but not the electronic records, and I avoided checking paper records as far as I could for two reasons: first was that I had no business checking their records, and second was that accessing the paper records would’ve required walking a couple of metres.

Memoising recursive functions in JS

Gosh JS is such a horrible programming language. An identifier can, depending on context, refer to different objects in the same scope without being reassigned between references. But I still use JS anyway because I’m too lazy to learn GUI programming for anything else, and the HTML+CSS+JS combo is actually pretty useful because it’s so heavily riced. (Have I mentioned how cool CSS transitions look? Of course I have, and I’ll mention it again: they’re cool.) Anyway. I’ve actually written on memoising JS functions before on the old blog, but let’s pretend that never happened, shall we?

So you have a memoisation function that takes a function, and returns a memoised version of that function. This is rather straightforward, though it didn’t immediately occur to me to use JSON.stringify or [].slice because I learnt JS in a time where neither JSON nor Function.prototype.apply existed.

function memoise(f) {
	var l = {};
	return function () {
		var argstr = JSON.stringify([].slice.apply(arguments));
		// f could possibly return undefined so don't just do (l[argstr] !== undefined) and instead check for membership
		if (!(argstr in l)) l[argstr] = f.apply(this,arguments);
		return l[argstr];
	};
}

Because we like clichéd examples, suppose we want a function to calculate the nth Fibonacci number for nonnegative n using recursion to implement the recurrence relation and bootstrapping from F_0=0;~F_1=1.

function F(n) {
	if (n < 2) return n;
	else return F(n-1) + F(n-2);
}

This gives a nonmemoised Fibonacci calculator, which takes \Theta(F_n) time to calculate F_n. (I’m ignoring the fact that it’s not really valid to use big-o notation here because the integers with representable predecessors only go up to 2^{53}, so n>2^{53} just leads to the function never halting.) This is terrible, because in the process of calculating F_{n-1} the value of F_{n-2} is needed, which is also used later in the computation, but the calculation has to be redone because the value is not cached.

F = memoise(F);

It’s memoised, right? Of course it is. So now the first invocation of F(n) will still take an exponential amount of time, but all subsequent calls will be fast. Yay!

… Not. This is still terrible, because calling F from outside the original F gives you the “memoised” version, while calling F from within the original F still reached the nonmemoised one. (The underlying distinction: inside a named function expression, the function’s name is an immutable binding to the function itself, so recursive calls through that name can never see the memoised wrapper; the name of a function declared with a function statement is, per spec, an ordinary mutable variable, so reassigning it ought to redirect the inner calls as well, though that’s not what I observed here. tl;dr it doesn’t work as expected.) The following is not a fix, and exhibits the same behaviour.

var F1 = memoise(function F1 (n) {if (n < 2) return n; else return F1(n-1) + F1(n-2);});

The reason is that this uses a function expression (as opposed to a function statement, as used for F), so the identifier can only refer to the function itself unless it’s shadowed by a variable with the same identifier within the function. On the other hand, the following does work.

var F2 = memoise(function (n) {if (n < 2) return n; else return F2(n-1) + F2(n-2);});

This gives a \Theta(n) Fibonacci calculator, and the reason this works is that F2 can only possibly refer to the memoised function so there’s nowhere for things to go wrong.

F2 has \Theta(n) time complexity, but also \Theta(n) space complexity because all previous values are cached. (Again, calling attention to the fact that big-o notation is invalid because while real integers use \Omega(\log n) space, JS numerical types have only a finite range and thus can fit in constant space.) In fact, a solution is to just not use recursion at all. Memoisation then becomes optional when you just want one specific Fibonacci number, though the usual benefits of memoisation for nonrecursive functions (more commonly just known as “caching”) apply if you need to call the function multiple times. In practice, you also have to make sure that accessing the cache is not any slower than just recomputing the value.

function F3(n) {
	if (n < 2) return n;
	var a = 0, b = 1, c;
	while (n --> 1) {
		c = a + b;
		a = b;
		b = c;
	}
	return b;
}

Still \Theta(n) time, but only constant space complexity. Can we do better? Of course we can. The powers of \begin{pmatrix}1&1\\1&0\end{pmatrix} have the Fibonacci numbers as entries, so performing exponentiation by repeated squaring gives \Theta(\log n) time, and something similar is used in GMP. This is the best that can be achieved with only integer arithmetic; allowing floating-point arithmetic lets you use Binet’s formula, for a constant-time constant-space algorithm, though you’ll run into accuracy/precision issues.
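A sketch of the repeated-squaring idea in JS (function names are mine, and as with everything above, the answers are only exact while F_n stays below 2^53):

```javascript
// Multiply two 2x2 matrices, each represented row-major as [a, b, c, d].
function matmul(m, n) {
	return [
		m[0]*n[0] + m[1]*n[2], m[0]*n[1] + m[1]*n[3],
		m[2]*n[0] + m[3]*n[2], m[2]*n[1] + m[3]*n[3]
	];
}

// [[1,1],[1,0]]^n = [[F_{n+1}, F_n], [F_n, F_{n-1}]], so exponentiation by
// squaring computes F_n in Θ(log n) matrix multiplications.
function F4(n) {
	var result = [1, 0, 0, 1];      // identity matrix
	var base = [1, 1, 1, 0];
	while (n > 0) {
		if (n % 2 === 1) result = matmul(result, base);
		base = matmul(base, base);
		n = Math.floor(n / 2);
	}
	return result[1];               // the (0,1) entry is F_n
}
```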

Anyhow, the above memoisation function is incapable of taking an arbitrary recursive function and making it auto-cache, unless the standard trick of passing the function as one of its own arguments is used. Except that no one does this in JS so memoise can never provide a direct drop-in replacement, in which case you might as well just hard-code the caching in each function.
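One way to make that trick concrete (memoiseRec and the self-as-first-argument convention are my own, not any standard API): the function to be memoised receives the memoised version of itself as its first argument, so recursive calls go through the cache.

```javascript
function memoiseRec(f) {
	var cache = {};
	function memoised() {
		var argstr = JSON.stringify([].slice.apply(arguments));
		if (!(argstr in cache)) {
			// Prepend the memoised wrapper itself, so that recursive
			// calls made through the first argument hit the cache.
			var args = [memoised].concat([].slice.apply(arguments));
			cache[argstr] = f.apply(this, args);
		}
		return cache[argstr];
	}
	return memoised;
}

var fib = memoiseRec(function (self, n) {
	return n < 2 ? n : self(n-1) + self(n-2);
});
```

The catch, as noted, is that the function has to be written for this convention in the first place, so it’s still not a drop-in replacement.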

Python’s functools.lru_cache decorator is a nice example of a memoiser, and it’s somewhat unfortunate that it’s impossible to have this same niceness in JS.

CSS transform interpolation

For rotate, I initially used the rotate() notation to represent a rotation. This came with the caveat that (taking the 4×4 as an example, so there are four orientations) the orientation can’t be reduced mod 4, because transitioning from rotate(270deg) to rotate(360deg) is not the same as transitioning to rotate(0deg): the former does a rotation by 90 degrees clockwise (which is desired), the latter by 270 degrees anticlockwise (which is not desired).

CSS transforms do allow arbitrary affine transforms through specifying a matrix, but I was worried that browsers would linearly interpolate between two matrices for the purposes of displaying a transition. Suppose that the initial and end matrices are A and B. Even if both A and B have determinant 1, \alpha A+(1-\alpha)B could have a different determinant. Having the area change mid-rotation is clearly undesirable.
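The worry is easy to illustrate numerically; a quick sketch (rot returns the 2×2 rotation matrix in row-major order, and the function names are mine):

```javascript
// Rotation matrix for angle t, row-major: [cos t, -sin t, sin t, cos t].
function rot(t) {
	return [Math.cos(t), -Math.sin(t), Math.sin(t), Math.cos(t)];
}

function det(m) { return m[0]*m[3] - m[1]*m[2]; }

// Naive entrywise linear interpolation between two matrices.
function lerp(a, b, alpha) {
	return a.map(function (x, i) { return (1-alpha)*x + alpha*b[i]; });
}

// Halfway between a 0-degree and a 90-degree rotation, the determinant
// drops to 0.5: the naive midpoint shrinks areas by half.
var halfway = lerp(rot(0), rot(Math.PI/2), 0.5);
```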

It turns out that the CSS Transforms spec does mention how to interpolate transforms, with another section for the case where direct linear interpolation cannot be used. Both Firefox 34 and Chrome 37 get this right (yay) and IE 11 gets this only partially correct (lol IE). The spec-compliant thing to do, in the case where both the initial and end matrices are rotations, is to simply do a rotation in the direction where the angle is not greater than π. (IE 11 apparently gets the matrix decomposition right, but it doesn’t seem to care about choosing the smaller rotation angle.) There is an obvious discontinuity at handling a rotation by exactly π, but the algorithm in the spec does deterministically choose which direction to rotate so I’m not too bothered by that.

I just want to say that when I was trying to search for information on using transitions with transforms, a whole bunch of not really informative websites came up near the top of the Google results. Among them included the infamous W3Schools, which is a really horrible resource if you want to learn how to do things the right way. (And not just, say, doing things in a way that works currently but could potentially break in the future… except that the WHATWG is too obsessed with being backwards compatible so things probably will never really break, which makes my point moot.)

Update: Apparently I was wrong and Firefox Nightly never handled this correctly, instead doing the same thing as IE 11. I checked ten builds from 2014-05-01 to 2014-09-07 (today’s) and none of them handled the rotation interpolation correctly. I’m not really inclined to release something that’d only work on Chrome, since that’s not even my main browser.

Even more of that rotation thing

I finally decided to do something useful with my GitHub account, so here: rotate. (I spent a few hours wondering why GitHub Pages wasn’t working for me, only to find out that it required a verified email address and I’d never bothered verifying mine in the two whole years since I made the account.) Note that Firefox has relatively bad performance on this compared to, say, Chrome or (gasp) IE 11. Still playable, but the fancy CSS transitions aren’t as smooth and the font rendering is especially bad on Windows Firefox (though fine on Linux).

This now has multiple sizes available (4×4 and up), and the pieces now actually do rotate instead of just translating. This added orientation aspect also makes it significantly harder to solve.

The orientation may be considered to be an integer modulo 4(\mathrm{size}-3) (and this is pretty much how it’s implemented). Each move moves 4(\mathrm{size}-3) pieces and increases the orientation value of each of the moved pieces by ±1 (depending on the rotation direction), so we have the basic fact that the sum of the orientation of all pieces is an invariant.

The pieces may be divided according to their parity, and the piece positions likewise. A piece in a position of the same parity must have even orientation while a piece in a position of opposite parity must have odd orientation. In other words, a piece has only 2(\mathrm{size}-3) valid orientations for any particular position. (This is very similar to what I described for the 45°-face-turn Rubik’s Cube, where even though I describe the orientation as an integer modulo 12, only six orientations are valid.)

It’s possible and not too difficult to describe the puzzle in such a way that the orientation is modulo 2(\mathrm{size}-3) instead of 4(\mathrm{size}-3), at the cost of slightly complicating the description of a move. This would likely be ideal when implementing a solver, though, since it reduces the amount of information to be stored in a puzzle state.

Speaking of solving, I’ve managed to manually solve the 4×4 with orientation and the 5×5 without orientation, though the 6×6 has me stumped. Caring about orientation makes the 5×5 considerably more annoying to solve (without orientation, it’s actually easier than the 4×4), and while I’ve figured out a commutator to rotate a pair of pieces, said commutator is about a hundred moves and repeatedly applying that is very unfun. Likewise for the 6×6, in theory I have all that’s necessary to solve it (bringing any three pieces to any three positions, a three-piece cycle, and a rotation of two pieces), but actually executing a solve from start to end is way too tedious for me.

Getting back to the 4×4, I’ve covered the non-oriented puzzle before, and came to the conclusion that it’s solvable in at most 35 quarter turns. I briefly mentioned the algorithm I use when solving the non-oriented puzzle manually on the old blog, but since it’s dead now, I’ll just go through it again. I do the top two rows similar to how the 15 puzzle is usually solved (put a and b in, put c in d’s position and d next to c, then rotate c and d into the correct positions together). Next the left and right columns are solved, leaving j, k, n and o; the possible cases are that it’s solved (1/6 probability), two adjacent pieces are swapped (2/3), or two non-adjacent pieces are swapped (1/6). I used to apply a Rubik’s Cube algorithm at this stage, but I found (or, I wrote a program that found) a five-move sequence to swap any two adjacent pieces which is significantly more efficient.

The orientation is actually not that big of a deal and the steps can be modified slightly to account for the orientation, except that after the final permutation step above, the orientation for the four pieces still needs to be handled. The five-move sequence messed up the orientation of more pieces than I’d’ve liked it to, so I went back to using Niklas for swapping two pieces. Coincidentally, the Sune analogue also functions as an orientation changer here, which results in a fairly simple solution.

I wonder what the diameter of the corresponding group for the 4×4 puzzle with orientation is.

Previously, previously, previously.

I have no idea how to write emails

No, I don’t mean like writing a program that handles emails or even a regexp for checking whether an email address is valid. I certainly don’t know either of those and it wouldn’t be blogworthy to mention something that’s both true for most people and not expected of me.

I literally mean the composition of email messages.

You see, I was stuck in a not-quite-a-job for two years, a fairly large chunk of which I wasn’t working at all, and a fairly large chunk of the remaining actually-working bits was reading emails, and a small chunk of that remainder was writing emails. Which was still a considerable amount of emails compared to the quantity I sent throughout six years of high school (which was probably in the ballpark of 40 to 50 messages).

However, I had pretty much zero respect for the recipients of the messages I was composing. I don’t know how many of my rants regarding this topic survived my purge of embarrassing articles, but if you read my previous posts on the topic, you know what I mean. I’ve straight-out insulted people of vastly higher rank or standing in my messages. For that matter, there were requirements on how email messages were supposed to be formatted, but as far as I can tell there was exactly one (read: 1) office I communicated with which followed the guidelines, and they also happened to be located in HQ MINDEF so I guess it’d be pretty hypocritical if they didn’t follow their own guidelines. I don’t remember the arcane rituals one must follow in order to craft the electronic mail message of greatest dignity but I’m pretty sure I violated them with the very first word of almost every single message I sent because I addressed most people with this two-letter word: “Hi”. I also addressed most officers by their given name instead of surname, which probably also violates a dozen other protocols.

I mostly didn’t give a shit because I didn’t believe in authority. The funny thing about that job was that it was compulsory for almost every male of this country, and generally people let you get away with doing whatever because, firstly, they don’t want to further antagonise someone working in a job they don’t love and secondly, unless you’re really causing a lot of trouble for them with your nonsense, the red tape involved in official punishment is simply not worth it. I figured that that was part of how I managed to avoid punishment with my outright trolling messages.

I don’t have these excuses to fall back on now.

A couple of years ago—I don’t remember whether it was on this blog, the old blog, or Twitter, or wherever the heck I stashed away dumb ideas—I flippantly mentioned that email signatures like “Best regards,\n[my name]” are absolutely worthless if they’re automatically added, because then it just becomes a bunch of meaningless words you send with every message. I did use automatic email signatures during the job stint, except I forwent the “yours sincerely” bullshit and left actually useful information like my name and job scope. Now that I’m a student, I don’t exactly have a job scope so I assume I could just leave my name. Except that email messages have a name field, so I could just omit the signature altogether. Is this a good idea? I don’t know!

All I know is that the style of email writing I’d acquired in the last two years is probably not going to be suitable, and this might be a slight problem.

Orthonormal integer-translate partitions of unity

I was thinking about this for a bit.

What are the finitely supported functions that satisfy the following three criteria?

\displaystyle\sup\,\{x\,|\,f(x)\ne0\}-\inf\,\{x\,|\,f(x)\ne0\}<\infty\quad\text{(finite support)}\\  \forall x\in\mathbb R\quad\sum_{n\in\mathbb Z}f(x+n)=1\quad\text{(partition of unity)}\\  \forall n\in\mathbb Z\quad\int_{\mathbb R}f(x)f(x+n)dx=[n=0]\quad\text{(orthonormality under integer translation)}

For instance, we have the boxcar function x\mapsto[-1/2<x<1/2].

And then I can’t really think of anything else. It seems somewhat nontrivial coming up with such functions with larger support. I’ve seen an example somewhere, so it ought to be possible, but I can’t remember it.

There’s also the uninteresting example of partitioning the unit interval [0,1] into a finite number of smaller intervals A_k,\,k\in\mathbb Z, whence the function x\mapsto[x-\lfloor x\rfloor\in A_{\lfloor x\rfloor}] is constructed. This allows arbitrarily large supports (as defined by the range) but also seems too much like a hack and isn’t quite what I’m looking for.

A “trivial” result is that if we assume that the support is an interval (open or closed doesn’t matter) and that the function is positive throughout said interval, then this interval must have length at most 1 (otherwise \int_{\mathbb R}f(x)f(x+1)dx>0) and therefore has to be the boxcar function or a translated version thereof.
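When experimenting with candidate functions, the three conditions are at least easy to check numerically; a rough sketch for the boxcar (the grid sizes and tolerances here are arbitrary, and the function names are mine):

```javascript
// The boxcar function: 1 on the open interval (-1/2, 1/2), 0 elsewhere.
function f(x) { return (x > -0.5 && x < 0.5) ? 1 : 0; }

// Partition of unity: sum of f over integer translates at a point x.
// The range of n only needs to cover the (finite) support.
function translateSum(x) {
	var s = 0;
	for (var n = -3; n <= 3; n++) s += f(x + n);
	return s;
}

// Orthonormality: a crude Riemann sum for the integral of f(x) f(x+n),
// which should be 1 for n = 0 and 0 for any other integer n.
function inner(n) {
	var h = 1e-4, s = 0;
	for (var x = -2; x < 2; x += h) s += f(x) * f(x + n) * h;
	return s;
}
```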

On doge

I am (still) being awfully upset over the doge meme which exploded in popularity around the start of the year. This one internet fad has ruined the words “so”, “much” and “wow” for me.

Thankfully, like most memes, it died out after a while and I haven’t been seeing so much of it lately, though the damage has long since been done.


So, today is Pi Approximation Day, and unfortunately I have nothing very closely related to π to present today.

Anyway. Let’s have a look at the Rubik’s Cube. There are effectively six moves available, which we may take to be {U,D,L,R,F,B}, as usual. (It is, of course, possible to instead use {U,u,R,r,F,f} or {U,R,F,E,M,S} as a “basis” instead, but using the standard moves fixes the centre facelets.)

It is obvious that with a normal Rubik’s Cube, you’re only allowed to turn the faces in multiples of 90°, for the very simple fact that edge cubies and corner cubies are shaped differently and you can’t just interchange them.

But what if we could? What if we turn the faces by only 45°?

It’s probably not possible to make a good real implementation of this, since, projecting the cubies onto a sphere, edge cubies and corner cubies have different latitudes, so the turns will inevitably have some bumpiness. (The path of motion is also most likely forced to be nondifferentiable.) I presume using magnets would be the most straightforward way of creating this idea IRL, and this has probably already been attempted by puzzle-modding enthusiasts.

There is also the Mixup Cube, where faces are still restricted to 90° turns, but slices can turn by 45°; corners remain corners, but edge and centre cubies are now equals. This is conceptually similar to Mozaika and a few other puzzles, except that edge cubies on the Mixup Cube have orientation. (Depending on stickering, the centre cubies might not have visible orientation, in which case the Mixup Cube’s states don’t form a group.)

Getting back to the idea of a 45°-face-turn cube, centre cubies are still fixed and edge/corners can freely mix. It’s not hard to prove that arbitrary permutations (not just even permutations) can be obtained due to the existence of an eight-cycle. As on the usual Rubik’s Cube, there is an orientation invariant; if we choose an orientation reference as pointing towards the direction of increasing latitude, U/D don’t change the orientation, while any of the other four face turns changes the orientation of four pieces by 30° and the other four pieces by 60°, resulting in zero total orientation change.

The non-centre cubies have six possible orientations. On a normal Rubik’s Cube edge orientation corresponds to a cyclic group of order 2 and corner orientation corresponds to a cyclic group of order 3; since edges and corners can mix here, this means they should have at least six orientations. While it’s mentioned above that a non-horizontal face turn would change the orientation of four cubies by a twelfth of a revolution, it turns out that that’s just an artifact of how we decided to label our reference orientation. Any originally E-slice edge cubie in a non-E-slice position necessarily has orientation an odd multiple of 30°, and likewise for any originally non-E-slice cubie in an E-slice position.

The centre cubies have eight orientations. Unlike the non-centre cubies, the total orientation is not an invariant in either the 45°-face-turn cube or the Rubik’s Cube. Instead, the parity of the total orientation equals the permutation parity of the non-centre cubies for the former. This can be seen to be because every turn changes both the permutation parity and the centre orientation parity. Like the Rubik’s Cube and unlike the Mixup Cube, ignoring the centre orientation still results in a group.

The 45°-face-turn cube would have a total of 20!\cdot6^{20}\approx8.9\cdot10^{33} states if centre orientation is entirely ignored, 20!\cdot6^{20}\cdot2^6/2\approx2.8\cdot10^{35} if the parity of the centre orientation is used, and 20!\cdot6^{20}\cdot8^6/2\approx1.2\cdot10^{39} if all eight orientations are distinguishable. That said, it’s not really the number of states which determines a puzzle’s difficulty—the Megaminx has many more states than a Rubik’s Cube but can be solved in pretty much the same way.
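The counts above are quick to reproduce (variable names are mine; the three cases ignore centre orientation, track only its parity, and track it fully, respectively):

```javascript
function factorial(n) {
	var r = 1;
	for (var i = 2; i <= n; i++) r *= i;
	return r;
}

// 20 non-centre cubies: any permutation, 6 orientations each.
var base = factorial(20) * Math.pow(6, 20);           // ≈ 8.9e33

// Centre orientation parity tied to permutation parity: an extra 2^6/2.
var withParity = base * Math.pow(2, 6) / 2;           // ≈ 2.8e35

// All 8 centre orientations visible, with the same parity constraint.
var withFullCentres = base * Math.pow(8, 6) / 2;      // ≈ 1.2e39
```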

For that matter, the availability of a 45° face turn in our described puzzle would probably make it easier to solve than a normal Rubik’s Cube. At least the permutation aspect reduces into an almost-trivial puzzle that can be solved by spamming commutators/conjugates, after which the orientation could be solved with the same technique or the usual Rubik’s Cube algorithms. Using commutators is a particularly easy “first principles” way of solving puzzles that behave sufficiently like groups, but generally interesting puzzles don’t have simple commutators to exploit. Huge parts of my two gigaminx solves were done through conjugated commutator spam, for example, and considering that they both took more than an hour, this also highlights how incredibly inefficient solving with commutators/conjugates can be when they involve more than just a few moves. In this case, taking just two faces of the 45°-face-turn cube and ignoring orientation reduces to a (5,3,5) rotational puzzle, which is easy to solve with simple commutators.

Now, suppose we also add 45° turns to slices, like the Mixup Cube. This monstrosity can mix every cubie together and in all likelihood can never be manufactured. Every piece has 24 possible orientations and every permutation of the 26 cubies is possible. I’m not sure about what the orientation subgroup is like, but I guess there are either three or six orbits since each move changes the total orientation by a multiple of 45°. Assuming that there are three orbits and that the centre cubie orientations are distinguishable, this would have 26!\cdot24^{26}/3\approx1.2\cdot10^{60} positions.

Update: I borked the HTML which caused half the post to vanish when viewed from the front page. Also expanded on some stuff.

Something else


Games played: 498
Game time: 146.6 hrs


Games played: 271
Game time: 106.3 hrs


Games played: 227
Game time: 40.3 hrs

I’ve probably spent too much time playing on Tenhou. Also, as mentioned in the last post about mahjong, I dropped from 2dan to shodan in both four-player and three-player, though through some amazing amount of luck I managed to grind myself to 3dan in three-player. Three-player (with Tenhou’s rules) involves a lot of luck, anyway—a string of repeated dealerships can readily give one player a huge lead over the other two (this happens with greater frequency than in four-player simply because there are fewer players), and hand values are greatly inflated compared to four-player due to a larger proportion of tiles being yakuhai or dora and hon’itsu/chin’itsu being much easier to get. Yakuman and kazoe yakuman are also much more common; I got a kokushi musou and a suuankou tanki some time ago (along with a few more non-tanki suuankous and daisangens).

Anyway, I was thinking about this: suppose we generalise mahjong to a hand with arbitrarily many tiles (possibly restricted to 3n+2 as in normal mahjong); what is the computational complexity of determining whether a given hand can be interpreted as a group of melds and a pair? This doesn’t seem to be entirely trivial (figuring out the waits for single-suit hands with only 13/14 tiles still confuses me occasionally), but on the other hand it almost seems like it obviously should be in P.
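For concreteness, here is the standard backtracking check for the single-suit core of the problem (the function names are mine; the key observation is that any meld covering the lowest occupied rank must be either a triplet of that rank or a run starting there). For ordinary hands, where each count is at most 4, this is fast; whether something like it stays polynomial for arbitrarily large hands is exactly the question above.

```javascript
// Single-suit hand: counts[i] = number of tiles of rank i+1 (ranks 1..9).
// Checks whether all remaining tiles split into runs and triplets.
function decomposesMelds(counts, i) {
	while (i < counts.length && counts[i] === 0) i++;
	if (i === counts.length) return true;
	// Try a triplet of the lowest occupied rank.
	if (counts[i] >= 3) {
		counts[i] -= 3;
		if (decomposesMelds(counts, i)) { counts[i] += 3; return true; }
		counts[i] += 3;
	}
	// Try a run i, i+1, i+2 (runs covering rank i must start at i).
	if (i + 2 < counts.length && counts[i+1] > 0 && counts[i+2] > 0) {
		counts[i]--; counts[i+1]--; counts[i+2]--;
		if (decomposesMelds(counts, i)) { counts[i]++; counts[i+1]++; counts[i+2]++; return true; }
		counts[i]++; counts[i+1]++; counts[i+2]++;
	}
	return false;
}

// Try every possible pair, then decompose the rest into melds.
function isWinningHand(counts) {
	for (var p = 0; p < counts.length; p++) {
		if (counts[p] >= 2) {
			counts[p] -= 2;
			var ok = decomposesMelds(counts, 0);
			counts[p] += 2;
			if (ok) return true;
		}
	}
	return false;
}
```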

I once again blog about how bad Windows is

Two evenings ago I set the computer to sleep, with the battery about half full. My limited experience tells me the battery would theoretically take about two weeks to go from fully charged to fully discharged when suspended, so just a day of suspension should be all right.

About an hour ago I switched it on, only to be greeted by “Resuming Windows”, which indicates that for some reason it decided to hibernate instead of sleep. Whatever, it’s not the first time this happened. Now, for some retarded reason I splurged on RAM upgrades (seriously what the hell do I need 16 GiB RAM for on a laptop) so hibernating takes some nontrivial amount of time, and so does resuming from it. But okay, whatever.

Then right after it finished starting up, Windows hurfed a durf about configuring updates and promptly—by which I mean without a prompt—restarted on its own.


Unrelatedly, I had the QET two days ago. I actually tried to do it seriously for a while before they announced that there was only fifteen minutes left on the test, at which point I just gave up because it’d’ve been impossible to finish the rest of what I had in mind with that little time. Guess I’ll have to take extra English modules.

RIP uptime

> 12:14:59 up 63 days, 18:52, 4 users, load average: 0.70, 0.61, 0.50

Guess I’ll take the old computer off the grid for a while.

Windows a shit, part 3

I just remembered I already wrote two posts on this topic, so I’m just going to call this part 3.

IE 11 does keep keyboard layout within a window consistent (IE 8 doesn’t)

J/k, it doesn’t. The location bar apparently keeps track of keyboard layout separately. Yes, that is as baffling as it sounds.

Anyway, I have a couple of things to say regarding package managers and Windows 7’s lack of an inbuilt one. Package managers are freakin’ awesome and make installing and updating things a lot less annoying. Instead, what I’m doing is I’m searching on Google for an executable to download (and 90% of the first page of results is inevitably shady) and then running the installer (quite often without “setup” or “install” in the name, because who needs to know whether that’s the installer or the program itself?), only to realise the retarded installer didn’t add the program to PATH (because having to type out the full path to a CLI tool every time is very user-friendly).

Speaking of PATH.

[screenshot: the Windows environment variable editor]

The “official” way of editing PATH is to open up this dialogue box and edit whatever is in that text field. Notice how it can’t display one full path from a typical Windows setup; it’s not even resizeable. Do I need to say any more regarding how dumb this is?

[screenshot: also the Windows environment variable editor]

Oh, and the dialogue from which the previous dialogue is launched is this one. It’s so long that it has a scrollbar, but you’re not allowed to use the scrollwheel on your mouse to scroll it. Every time I want to edit PATH (which is depressingly often) I have to use my mouse to drag the scrollbar. You know, that action that leads to a lot of jeering and mockery when people who know about scrollwheels see you doing that.

Now, about ramdisks. I tried ImDisk and it didn’t work. So now I’m using this “SoftPerfect RAM Disk” thing, which does work, but doesn’t exactly do what I want. Ideally the ramdisk shouldn’t actually be using RAM if it isn’t filled, but SoftPerfect RAM Disk allocates it all at once because it needs to be formatted as FAT/NTFS/something else Windows can read. (This also means that it takes a nontrivial amount of time to initialise if the ramdisk is a couple of gigabytes in size.) By the way, Linux has inbuilt ramdisk support and treats ramfs/tmpfs as first-class citizens among filesystem types. (Memory usage for ramfs/tmpfs is also dynamic, of course. Linux is actually sane.)

Because I had the brilliant idea of formatting both of my external hard disks as ext4, I can’t use them on Windows directly. So I plug them into the old computer, set up win-sshfs on the new one, and access files over SSHFS like any sane person wouldn’t. This turns out to be awful for a couple of reasons. One, the router/modem SingTel provided is absolute horseshit and can’t handle speeds beyond around 6 to 7 megabits per second, not even within the home network. Two, win-sshfs itself is not exactly the best SSHFS software ever.

It has some documented quirks like not recognising symlinks, and that’s okay because at least those are documented. It also has some undocumented quirks, like crashing if you happen to select multiple files and access them simultaneously at 4 o’clock on a Sunday afternoon. That, for example, is not okay. Furthermore, unlike the *nix SSHFS, it apparently doesn’t cache at all, so everything is always slow. (It has an approximately 14-second delay on opening previously-unaccessed files, but my experience with SSHFS on Linux hasn’t been much better. That said, that previous experience involved servers located on another continent, which contributed at least a little to the delay.)

Also, re: gdipp; I have that disabled now. I can’t stand the broken font rendering anymore. I mean, whatever Windows is doing is obviously also broken, but it’s less so. It’s 2014 already, why don’t we have 200 ppi displays on the desktop? Instead we get phones with absurdly and unnecessarily high resolutions.

Anyway, the last couple of paragraphs don’t really relate directly to Windows. One of the other things I was going to hurf durfs about (and forgot when I initially published this post) was that Windows does not provide workspaces, a common Linux DE feature I use a lot: everything on the main workspace, IRC on a secondary workspace. Now, I usually use this Windows laptop with an external monitor attached. It turns out that the stand that came with the U2412M isn’t very tall at all, so a dual-monitor setup forces me to look at the laptop monitor at a fairly extreme angle (which makes the colours look bad because the laptop panel is TN), and that is already somewhat unpleasant to use. I did, however, experiment with it for a while.

My major gripe is that Windows does not, by default, use a colour scheme that allows easy differentiation of active and inactive windows. (I don’t know if this is related to the whole laptop-uses-TN-so-colours-all-look-like-junk thing, but hey.) So what ends up happening is that sometimes I’m looking at the wrong monitor and typing things into the wrong window. This is frustrating and basically a thing that shouldn’t be happening. In theory using two monitors should provide a better workflow than using two workspaces, but this turns out not to be the case with Windows. (Do people with multiple-monitor setups actually use multiple monitors concurrently? I find it hard to focus on what I’m doing if I have too many other things open.)


Tomorrow I will stop procrastinating things and get the things I need to get done done.


We’ll see when the time comes.

(By “tomorrow” I naturally mean when I wake up, not literally tomorrow.)

Anyway, I have to go take this “Qualifying English Test” thing because I couldn’t be bothered with English classes in school before. I had a look at the sample question and the topic seemed awfully dreary. Failure is pretty much guaranteed at this point since I suck at writing (despite having produced over a thousand blog posts). Or maybe it’s that I suck at writing about things I don’t care about.

Yeah, let’s go with that.

Something about functional equations

\displaystyle f:\mathbb R\to\mathbb R\\  f(x)=f(1/x)\\  f(0)=0\\  f(1)=1\\  f'(0)=1\\  f(-x)=-f(x)

Assuming f is smooth, solve for f.

A nonsolution is f(x)=\exp(-(\log|x|)^2)=|x|^{-\log|x|} with the removable singularity at x=0 removed. It’s smooth, but f'(0)=0. (In fact, every derivative is zero at x=0. It’s also even, so strictly it needs a factor of \mathrm{sgn}\,x to satisfy the oddness condition; that doesn’t affect the smoothness.) Another nonsolution is f(x)=[|x|\le1]x+[|x|>1]/x, which satisfies every condition other than smoothness.
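The claims about the first nonsolution can be sanity-checked with sympy. This is just a sketch, working on x > 0 to avoid dealing with the absolute value:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = sp.exp(-sp.log(x)**2)  # the first nonsolution, restricted to x > 0

# Symmetric under x -> 1/x
assert sp.simplify(f - f.subs(x, 1/x)) == 0

# The first derivative vanishes as x -> 0+, so f'(0) = 0 rather than 1
print(sp.limit(sp.diff(f, x), x, 0, '+'))  # 0
```

The second check works because \exp(-(\log x)^2) decays faster than any power of x as x\to0^+, which also kills every higher derivative there.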

What I tried was to expand a Taylor series around x=1 then solve for the coefficients using only the constraint of f(x)=f(1/x), but this was fruitless. Let f(x)=\sum_{k\ge0}a_k(x-1)^k. Then:

a_1=0\\  a_2+a_3=0\\  a_3+2a_4+a_5=0\\  a_4+3a_5+3a_6+a_7=0\\  a_5+4a_6+6a_7+4a_8+a_9=0\\  \vdots

Any finite truncation of this list of equations has more unknowns than equations and hence is underdetermined. From another point of view, this means there are infinitely many degrees of freedom to exploit in order to make f satisfy all the other conditions, and I’m not sure what exactly to do here. (Incidentally, this led to the previous post about symmetric polynomials, where the relation to polynomials stems from treating the LHSs as generating functions.)
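These relations can also be machine-generated instead of ground out by hand; here’s a sketch with sympy (variable names mine). The raw coefficients it prints are linear combinations of the rows listed above (e.g. the coefficient of t^5 is row 1 plus 4 times row 2 plus 2 times row 3), so they cut out the same solution set:

```python
import sympy as sp

t = sp.symbols('t')
a = sp.symbols('a1:10')  # a_1 .. a_9; a_0 cancels in f(x) - f(1/x)

def f(z):
    # Truncated Taylor expansion of f around x = 1
    return sum(a[k - 1] * (z - 1)**k for k in range(1, 10))

# Substitute x = 1 + t and expand f(x) - f(1/x) as a series in t;
# every coefficient must vanish identically
diff = sp.expand(sp.series(f(1 + t) - f(1/(1 + t)), t, 0, 10).removeO())
for k in range(1, 10):
    print(f"t^{k}:", diff.coeff(t, k), "= 0")
```

Truncating f at degree 9 and the series at order 10 is arbitrary; pushing both higher yields further relations in the same pattern.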

One sort-of-obvious thing to note is that we somehow have to choose the coefficients such that they don’t grow too quickly, so that we get a nonzero radius of convergence. For instance, fixing a_k=0 for all even k\ge4 and a_2=1, the odd coefficients end up being:

a_3=-1\\  a_5=1\\  a_7=-3\\  a_9=17\\  a_{11}=-155\\  a_{13}=2073\\  \vdots

That’s A001469, which grows superexponentially and therefore gives a zero radius of convergence. Nope, useless. On the other hand, fixing a_2=a_4=\cdots and a_3=a_5=\cdots and adding the oddness constraint results in a_2=1/2, which leads to f(x)=\frac12(x+x^{-1}), which has a pole at x=0. Durr.
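For the record, those coefficients can be cranked out mechanically. The sketch below assumes the binomial pattern visible in the relations above, i.e. that row m reads \sum_j\binom{m-1}{j}a_{m+j}=0; since the last term of row m is a_{2m-1} with coefficient 1, each odd coefficient can be solved for in turn:

```python
from math import comb

# Pin a_2 = 1 and the even coefficients a_4, a_6, ... to zero
a = {k: 0 for k in range(4, 20, 2)}
a[2] = 1

# Row m (assumed form): sum_j C(m-1, j) * a_{m+j} = 0; solve for a_{2m-1}
for m in range(2, 9):
    a[2*m - 1] = -sum(comb(m - 1, j) * a[m + j] for j in range(m - 1))

print([a[k] for k in range(3, 15, 2)])  # [-1, 1, -3, 17, -155, 2073]
```

The output matches the list above (and, up to sign, A001469).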

Anything involving \log x is automatically unnice due to the singularity at x=0, even though converting it to a function of (\log x)^2 seems like the obvious course of action.

Update: Clearly no analytic function is going to make the cut, since it’d have to be symmetric in x and 1/x and hence expressible as a Taylor series in terms of x+1/x, which necessarily causes a discontinuity at x=0. That said, explicitly constructing a function satisfying all the requirements is not difficult through the use of bump functions and the like; what I wanted was a simple function that can be defined without resorting to such hacks.

Something about symmetric polynomials

In the middle of researching something I discovered that (1+x)^{2n+1}-x^{2n+1} can be expressed as a polynomial in x(1+x) for certain small values of n. (This is reminiscent of and probably also related to Faulhaber’s formula and the Bernoulli numbers, incidentally.)

The very first thing to note is that this is not a symmetric formula in 1+x and x. On the other hand, it is symmetric in 1+x and -x, whence we may apply the fundamental theorem of symmetric polynomials which implies that (1+x)^{2n+1}-x^{2n+1} may be expressed in terms of (1+x)+(-x) and (1+x)(-x). Obviously, since the former term is just 1, this also means that (1+x)^{2n+1}-x^{2n+1} is a polynomial function of -x(1+x) and hence also of x(1+x).

I’m ashamed to admit that noticing this took me about an hour (interrupted by a shower, but still). The previous approach I had was—ugh—induction, which didn’t work out because I copied a sign wrongly somewhere. Having fixed that, a proof by induction is simple, and also suggests the more general statement that (1+x)^n+(-x)^n is a polynomial in x(1+x)… and this general statement makes the symmetry extremely obvious.
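Both the original observation and the general statement can be checked with sympy. The sketch below reduces modulo x^2+x-y with x as the main variable; if the input really is a polynomial in x(1+x), the remainder is free of x and is exactly that polynomial in y (the names `y` and `in_terms_of_y` are mine):

```python
import sympy as sp

x, y = sp.symbols('x y')

def in_terms_of_y(p):
    # Polynomial division by x^2 + x - y, treating x as the main variable.
    # Uniqueness of the remainder (degree < 2 in x) means that if p is a
    # polynomial in x(1+x), the remainder has no x term left.
    _, r = sp.div(sp.expand(p), x**2 + x - y, x)
    r = sp.expand(r)
    assert not r.has(x)
    return r

# The original observation: (1+x)^(2n+1) - x^(2n+1)
for n in range(1, 6):
    print(2*n + 1, in_terms_of_y((1 + x)**(2*n + 1) - x**(2*n + 1)))

# The general statement: (1+x)^n + (-x)^n
for n in range(1, 8):
    print(n, in_terms_of_y((1 + x)**n + (-x)**n))
```

For instance, (1+x)^3-x^3=1+3x+3x^2 comes out as 1+3y.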

(Among other goofs I made today, I wrote 26 for 30−2 somewhere. That was dumb, and what made it even dumber was how long it took me to notice. I might be getting just slightly out of touch with basic arithmetic.)