WordPress, what the hell

I swear the HTML in half my posts gets broken in some way every time I try to edit them in the new editor.

Thanks, but no thanks; I want to stick to the old editor, where what I see is what is served to the browser.

Well, other than double line breaks being converted to paragraphs; that I can live with. It’s less painful than the alternative of typing <p> every time. (LaTeX support too!) But see, what I want is well-defined semantics I can easily replicate in my head, not something I have to delve into WordPress’s source code to divine.

This is why I don’t like “smart quotes” either; quite inevitably, the algorithm gets them wrong the moment you deviate even slightly from typical English usage. Unfortunately, WordPress.com doesn’t offer an option to disable them, so guess what, you’ll have to live with them. In fact, most of the “smart” autoreplacements WP tries to do have backfired at some point.

Did you know that, if you substitute numerals for a and b, “axb” gets rendered with a multiplication sign instead of the Latin letter ‘x’? Of course, this also fails the moment you type hexadecimal literals that happen to not have digits above 9. Oh, and this can’t be disabled, because why would anyone ever write about hexadecimal literals?

Chairs, part 2

Seriously, do you even think about the chairs you sit on every day? I do.

Recap:

[The] designs of some chairs (most?) are terrible.

Some make my butt ache after a while. (This usually applies to uncushioned chairs.)

Some don’t fit the table they’re meant to be used with.

Some get stained too easily. (By sweat, dirt, etc.)

Some wear too readily.

I got a new chair in my room some time last year. It fails the very first point, and very badly so. The feet also have sharp edges and I initially got many scratches on my heels because that’s just what sharp edges do. Those sly bastards.

But disregard that. I want to talk about my butthurt.

Since it’s the school break, I haven’t been stepping out of the house very often. For that matter, I don’t even step out of my room often either. It’s probably a safe estimate that I spend around fourteen hours a day seated on this chair. (That probably explains the butthurt aspect of it pretty well.)

Sometimes I adjust my sitting position to deal with the aching, and then within ten minutes I’m not even sitting on the chair anymore. I could be squatting on the floor, or half-lying on the chair, or even standing up. Anything but sitting. This is not really a solution.

Sometimes I sit on my hands, but then I can’t type when my hands are between my ass and the chair. Also, I’m kind of heavy and my hands don’t take too well to having over 50 kgf resting on them. So I don’t really do this often either.

Sometimes I sit on my feet. I feel that this is the least bad option, and guess what I’m doing now? This is still fairly uncomfortable, but it does relieve my butt of some pressure. The downside is that the skin on the outside of my ankles is now in very bad condition from constantly being pressed against the fabric of the chair, so presumably this is not a good thing to do over extended periods of time.

It’s not like I’m fat or anything. I’m unfit but not even close to overweight, and I haven’t lost or gained a significant amount of weight since about 2011. It can’t possibly be that I’m exerting that much more pressure on my butt than a normal person sitting on this chair would be.

I have no ideas and I must scream

[screenshot: the blog draft list]

There’re some repeats because WordPress is terrible at autosaving. Anyway, stream of consciousness rambling incoming.


Foreword (added 30th June): I am probably going to regret not making this post private, but at the current moment the blog has had exactly zero page hits (from people other than myself) since about three posts ago, so this is not a worry I’m going to have right now.

Does a foreword added after the fact still count as a foreword?


Being off IRC made me feel lonely. Those people constituted over 90% of my human interactions.

Because, y’know, this happened: (slightly edited for anonymisation purposes)

<A> I want to keep people like you two as my friends, but I also want to keep as friends the people who are directly targeted by transphobia.

The “you two” was referring to me and another fansubber. For context, this was regarding a transphobic software maintainer whom people demanded be taken off the project, in spite of his being one of the main contributors.

I don’t know how to deal with a threat of “I don’t want you as a friend anymore”.

Was it even a threat? Seemed like one.


Despite the fact that I’d previously chosen to remain mostly silent on IRC regarding such matters, I decided to butt in.

I like the idea of meritocracy. If you’re good at something, you’re good at it, even if you’re otherwise a terrible human being. There are certain crackpots in the fansubbing scene I have no qualms working with; I may think they are batshit insane, but damn if they’re not awesome at what they do. (I mostly keep my opinions to myself, because (i) I don’t want to stop working with them and (ii) I’m a gigantic hypocrite.)

This is why I took offence at the calls to have the maintainer removed from his project. I don’t agree with transphobia, but what does that have to do with a Ruby-to-JavaScript compiler?

(Well, that sounded a lot like “I’m not a racist, but”, which is the usual excuse racists come up with for their racism. But I swear I’m not!)


Most of my immediate family members are, to a non-negligible extent, homophobic, transphobic, and racist, so if I wanted to pursue a career in social justice activism, the only logically consistent place to start would be at home, with my family. Do I want to make dinnertime conversations 9001 times more awkward by talking about how gays and Indians aren’t actually ruining everything in the world?

I mean, I hardly even talk to anyone other than my sister, and even she seems to have this “social justice = loltumblr” link in her head. I don’t feel like bothering to explain all the subtle biases in language that may cause these minority groups to feel slighted.


That said, I did attempt that once with my sister on a bus ride recently.

Some guy wearing an SAO shirt (lol, SAO) sitting in front of us turned around to get a quick glance at us(?). I’m not sure if that was prompted by what we were talking about, but it didn’t look like he was just stretching.


One thing A-san pointed out was that there were only two factions, “status quo” and “improvement”, and my attitude clearly does not put me in the latter.

Having thought over this for a while, I don’t agree with this assessment. Maybe it’s just my gut feel telling me “I don’t want to be associated with all the homophobes/transphobes/etc. because they’re awful people”, but then I’m too socially apathetic to actually want to improve things. Besides, are they really all awful people?

Does a single character flaw make a person undeserving of respect? Can’t we respect a person for all his contributions to society while simultaneously condemning his societal transgressions?


I think social justice activism has its worth in terms of raising awareness. If this sounds suspiciously similar to slacktivism, that’s probably because slacktivism is all about raising awareness and then doing nothing else. But social justice is a problem that’s solved exactly through awareness. If everybody is aware of the discrimination minority groups are facing, the law will naturally follow and the discrimination will stop.

It’s not like, say, climate conservation “awareness” where everyone continues to use electricity-guzzling smartphones and whatnot. That’s truly slacktivism. (Yeah, I still remember a schoolmate wrote a blog post about Earth Hour slacktivism in 2009. He deleted his blog last year, unfortunately.)


Or, okay, that’s maybe too optimistic a view. I think there are roughly three categories of people.

People who know about discrimination, and (try to) act towards resolving it.

People who know about discrimination, and participate in it.

People who don’t know about discrimination, but participate in it.

People who don’t know or don’t care about discrimination.

Well, four is roughly three, to an order-of-magnitude approximation. I think I’m apathetic enough to qualify for the fourth category, but then there’s this here blog post which probably steers me slightly towards the first.

Raising awareness does nothing for the second group—they already do it knowingly!

Or maybe I’m not thinking of the right type of awareness here. What they need to be educated on is perhaps the fact that homosexuals, transgender people, etc. are people in their own right and that their sexual orientations or gender identities should not affect any other aspect of their lives.

It’s the flip side of the meritocracy coin.


I guess the last thing I want to bring up is that there’s this famous quote that, on the internet, nobody knows you’re a dog. Yes, it even has its own Wikipedia article.

I don’t mean to trivialise the experiences of the targets of discrimination by comparing them to dogs, which unfortunately is probably the immediate implication from bringing up the quote in this context. The point is that the Internet allows people to be judged by their words rather than by their identity, and this is a concept that appeals to me strongly.

But this does still have a disparity in that people in the conventionally discriminated-against groups are not free to reveal their membership in those groups if they want to remain free of discrimination, while everyone else is. To be clear, this is not limited to cases with a clean majority/minority dichotomy; even today females (who, in case you forgot, constitute about half the human race) are discriminated against in online discourse.


As a cis white male—

Wait, sorry, I’m not white. I’m Chinese. Still the majority race around here, anyway.

So, as I was saying, as a cis yellow male, taking the stance of social apathy is possibly not quite as neutral as it might initially appear. I already have all these privileges heaped onto me, so proclaiming that it’s all not really a big deal is somewhat weird.

Is the privilege real? I don’t know, but I recall a significant gender imbalance back in high school. It’s one of those things where the immediate conclusion is that “girls aren’t as into science (and maths)” but why the heck should that be true? Isn’t it really just the case that societal pressure pushes them away from science?


Anyway, I feel like my pondering over such issues is not going to make a significant impact on the world either way, and I have other things to think about that interest me a lot more. Again, this might be post hoc rationalisation, but I believe I can better effect a betterment of society through all the research I do than by poorly trying to raise awareness as a social justice activist. There’re enough activists already, and I’ll leave them to figure out how best to do what they have to do.

(But my thesis is that I don’t care about society, so the betterment of society is very awkward to write or even think about. Is there a way to reconcile this with my views?)


Addendum (2015-06-30):

Because I tend to be in this emotionally charged state when I start writing blog posts, it often happens that I forget what I want to write about and then end up talking about irrelevant things that, at a particular moment, are of more interest to me, but also neither support nor oppose my thesis.

Does having a transphobic figurehead prevent potential transgendered coders from joining the project for personal reasons? I think the answer to this question is an undebatable “yes”, but what the heck, people probably debate (have debated?) it anyway. If a person with considerable power claims your identity is a lie, you’d feel uncomfortable working with them. I have personal experiences I believe are close enough to this to validly extrapolate.

On the other hand, from a purely pragmatic point of view, the vast majority of people are neither transgendered nor transsexual, which is to say that even if you exclude them, you’re not really missing a lot. It feels wrong saying this, but at the same time the reasoning doesn’t immediately jump out as being wrong.

On the other other hand, if you keep excluding minorities like that, it might turn out that very few people are simultaneously in all the majority classes (exponential decay in the number of classes you consider?), so you end up not working with anyone else at all. Or in another direction, if you try to discriminate too hard, you end up being a minority yourself. So you could just take a principled stance from the very beginning and refuse to exclude anyone at all, but that’s a lot easier said than done.

Do you want to exclude trans people? No? Cool. (If you answered yes, skip to section g64.)

Now here’s the hard question. Do you want to exclude transphobes?

Yes? Cool; you’d be popular with the social justice crowd.

No? Cool; you’d be popular with the not-social-justice crowd.

Just kidding; of course that’s a gross oversimplification of the current situation. What happens if you don’t exclude transphobes is that they continue to poison your community and indirectly exclude trans people from joining, which goes against the answer to the previous question. This was not something I had in consideration at the start of the discussion with A-san, and I’m glad he brought it up.

It’s reminiscent of the trolley problem. You could, through not excluding transphobes, end up excluding other people, but at least you wouldn’t be the one doing that directly.

“It’s not my fault!”

“It’s just an artifact of how society functions!”


The fundamental problem here looks a bit like the failure to project the aforementioned principled stance onto everyone. If everyone could accept trans people, we wouldn’t even be having a discussion about transphobia! Maybe being this principled is wrong, but my mathematical intuition says it’s not.

There’s a utility cost in choosing not to exclude certain groups of minorities. This is obvious: there are more transphobic people than there are trans people, so purely utility-wise, you should opt to keep the transphobes. But there’re only so many minority groups you can reasonably discriminate against, at which point further discrimination reduces utility. By symmetry, either you exclude every group or no group at all, and the latter has a larger utility overall, so the natural conclusion is that no minority group should be excluded.

There’s a bit of a false dichotomy and “by symmetry” is not technically an argument that works here, so I’m not really sure why I was initially convinced by this line of thought. It’s still an appealing ideal, but it’s also plagued with an implementation impossibility.

Maybe I’ll add more to this post as my grey matter continues to process all this information. (Or, no, I want to stop thinking about this and get back to hyperbolic geometry, which is cool and infinite and infinitely mindbending, as opposed to thinking about people, which is just infinitely mindbending and not very cool at all.)

H

I’m looking at the WordPress post editor right now (duh, that’s how I write almost all my blog posts) and the date is 28th June, which incidentally is also the expiry date on a loaf of bread in the kitchen downstairs. But what the heck, we’ll probably keep it for a few days longer anyway.

Consider the Platonic solids. There’re exactly five of them, and it’s not hard to prove that. I mean, we of course need some non-self-referential definition of the Platonic solids, which could be just “a convex polyhedron that is flag-transitive”, and then we have to define what a flag is, but y’know, Wikipedia exists.

There aren’t a whole lot of possible vertex figures. If we had m n-gons around each vertex, the sum of angles around a vertex would be…

Quick refresher: for a regular n-gon, the sum of internal angles is (n−2)π, which follows immediately from triangulating it from a single vertex; each individual internal angle would thus be (1−2/n)π.

m(1−2/n)π, and convexity forces this to be a positive value strictly smaller than 2π.

m (1 − 2/n) < 2
1 − 2/n < 2/m
1/m + 1/n > 1/2

Whaddya know, it turns out to be symmetric in m and n! (But of course—the roles of m and n are switched when we take the dual, and flag-transitivity is preserved under duality.)

If m=2 or n=2, then we get either a dihedron or a hosohedron. These are infinite families of degenerate polyhedra (in Euclidean space).

There are exactly five other pairs of values of m and n satisfying the given constraint: m=n=3 (tetrahedron); m=3, n=4 (cube); m=3, n=5 (dodecahedron); m=4, n=3 (octahedron); and m=5, n=3 (icosahedron). Of course, we don’t have a priori knowledge that this constraint is sufficient to guarantee that a polyhedron exists, so the question here is this: is there a way to prove that these do give the Platonic solids, without considering each of them individually?

It’s a bit like how you can have a vertex configuration of 5.5.10 in the Euclidean plane (i.e. two regular pentagons and a regular decagon around a vertex), but that refuses to extend into a tessellation of the plane; what is it about regular polyhedra that allows us to conclude that the aforementioned vertex figures give a tessellation of the spherical plane?

What if we look at it from a graph theory perspective? Let V, E, F be the number of vertices, edges, and faces of a (supposed) convex polyhedron with vertex configuration n^m. We have that 2E=mV and 2E=nF, and since a polyhedral graph is necessarily planar, V−E+F=2.

E (2/m + 2/n − 1) = 2
E = 2 / (2/m + 2/n − 1)

And then it just so happens that, for all five pairs of values for (m,n), E works out to be an integer. I mean, of course it does; the five Platonic solids exist. But what is the underlying reason? Is there even one? I’m not being enlightened here.
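Not that it proves anything, but here’s the sort of throwaway brute force I’d use to double-check the arithmetic (a quick sketch; note that it still considers each pair individually, which is exactly what I wanted to avoid):

#include <stdio.h>

int main(void) {
    /* Enumerate vertex configurations n^m with m, n >= 3; if either of
       m, n exceeds 5, the constraint forces the other below 3, so small
       loop bounds suffice. 1/m + 1/n > 1/2 is equivalent to
       2m + 2n > mn, which avoids floating point entirely. */
    for (int m = 3; m <= 6; m++)
        for (int n = 3; n <= 6; n++) {
            if (2*m + 2*n <= m*n)
                continue;
            /* V - E + F = 2 with mV = 2E = nF gives
               E = 2/(2/m + 2/n - 1) = 2mn/(2m + 2n - mn). */
            int E = 2*m*n / (2*m + 2*n - m*n);
            printf("m=%d n=%d: V=%d E=%d F=%d\n", m, n, 2*E/m, E, 2*E/n);
        }
    return 0;
}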

On another note, I’d been pondering what polyhedra would be like in hyperbolic space. Like, in Euclidean space, you can’t have a vertex configuration of 6⁴ and have the figure close up (in fact, as a hyperbolic tiling, the number of tiles grows exponentially with distance), but in hyperbolic space, provided that the hexagons are large enough, the sum of angles around a vertex would be smaller than 2π. (In fact, arbitrarily small.) But does it close up, or does it still behave like a hyperbolic tiling?

Ninja edit: Naturally, because of the above formula for the number of edges in terms of m and n, even if it does close up, it cannot be topologically equivalent to a sphere, but then what could it be? One question resolved, and another one appears.

Ninja edit the second: I am slightly ashamed that I didn’t make the connection that 6.28 is an approximation of τ and that it was Tau Day when I started writing this post. But no matter, we have, uh… ∛250 Day tomorrow! (That joke doesn’t really work more than once, does it?)

Hype

Shortly after I wrote my two blog posts about hyperbolic geometry, HyperRogue, a video game featuring 2D hyperbolic geometry, was featured on Hacker News.

You know, making a video game out of it was what I was thinking of too, but I guess it’s just natural that I wasn’t the first to think of it. Not that anything would’ve come out of my ideas on what to program; judging by how many of them were actually completed, I’d say roughly 95% of my ideas never bear fruit. (Mostly due to laziness, hurhur.)

One interesting fact I learnt from the HyperRogue blog is that a circle’s area grows exponentially in its radius, not just quadratically. One semi-intuitive explanation for this is that we can have things like hyperbolic tilings, where the branching factor tends to some constant greater than one. There’s a lot of space far from the origin.

Also, a circle of radius r (measured in the hyperbolic metric, not in the Poincaré disk model) would have circumference 2\pi\sinh r, which agrees with Euclidean geometry when r is small (since \sinh r\sim r+\Theta(r^3)). In other words, if you were to somehow transplant an observer used to Euclidean geometry into hyperbolic geometry, where things scale as \tfrac1{\sinh d} rather than \tfrac1d (taking d to be the distance), he’d conclude that things are further away from him than they really are. I think having such an “expansion” visual effect would be cool for visualising hyperbolic geometry; looking top-down at the Poincaré disk feels like cheating, because that’s not what it looks like for an observer living in the model itself.

Well, at least that’s how it’d work for a first-person point of view.

Performing such a projection would involve five steps: translate the observer to the origin, convert to polar coordinates, convert the radial coordinate to the hyperbolic distance (r\mapsto2\tanh^{-1}r), apply sinh scaling (r\mapsto\sinh r), and lastly convert back to Cartesian coordinates. This doesn’t sound too hard.
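In C99 complex arithmetic, that pipeline is only a few lines (an untested sketch; hyper_project and the parameter names are made up):

#include <complex.h>
#include <math.h>

/* The five-step projection sketched above. z and obs are points in the
   Poincaré disk (|z| < 1, |obs| < 1); obs is the observer. */
double complex hyper_project(double complex z, double complex obs) {
    /* 1. Translate the observer to the origin: x -> (x - p)/(1 - conj(p) x). */
    double complex w = (z - obs) / (1.0 - conj(obs) * z);
    /* 2. Convert to polar coordinates. */
    double r = cabs(w), theta = carg(w);
    /* 3. Convert the radial coordinate to the hyperbolic distance. */
    double d = 2.0 * atanh(r);
    /* 4. Apply the sinh scaling, and 5. convert back to Cartesian. */
    return sinh(d) * cexp(I * theta);
}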

Weaving deltahedra

Today Yesterday Two days ago, I disconnected myself from IRC and closed the two tabs of attention whoring that were Twitter and Ask.fm. I didn’t do this in an attempt to be productive (long story short, I got tired of internet arguments about Opal, transphobia, etc.), and I guess you can tell from the title of this post what I ended up doing.

This time I divided an A4 sheet into six strips parallel to the long side, which left space for 13 triangles. I was thinking of doing a kistetrahedron, which only needed three strips of 12 triangles, so this left a surplus of three strips. As for the kistetrahedron, I did “successfully” fold it, but it didn’t hold together very nicely and there’s a (relatively) huge opening opposite the initial apex I started to weave from. Trying to slide the last strip of triangles in also proved to be extremely problematic; so much so that I had to resort to using tweezers to pull it through, thoroughly crumpling the paper in the process (though most of the damage is on the inside, where it’s mostly invisible). The dihedral angle of a triangular pyramidal apex is much smaller than that of a square pyramidal apex (70.5 degrees and 109.5 degrees, respectively), and paper doesn’t like being subjected to such abrupt changes in curvature.
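For reference, those two angles are arccos(1/3) and arccos(−1/3) respectively (conveniently, they’re supplementary):

\displaystyle\arccos\tfrac13\approx70.53^\circ,\qquad\arccos\left(-\tfrac13\right)\approx109.47^\circ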

So that turned out pretty badly, but part of the reason was that I started with using only two triangles for the initial apex, so the last strip had one more triangle than strictly necessary. I think this would be much less of a problem if I start with three triangles for the initial apex, though there’s one subtlety here to be taken into account: one direction of weaving will lead to the loose ends dangling on the outside, rather than being invisibly hidden inside. The whole reason I started with two triangles for the initial apex was to avoid thinking about this problem, but that backfired.

Nota bene: The strips close up after making a round around the polyhedron, so we may also think of them as rings that we temporarily cut open in order to weave them. (Except the cutting is not really temporary, because I don’t have the ability to glue the ends together when they’re inside the finished model, and because they weren’t originally rings anyway. But this is pedantry, and you might as well complain about the use of a parenthetical note within a parenthetical paragraph.)

It’s tempting to suppose, since this is a structure made of three rings and the Borromean rings are also threefold, that the three rings actually do have a Borromean structure, and it turns out that they do! Well, probably, assuming my diagrams of the line graph of the tetrahedral graph are correct. Taking any two strips, one strip always weaves over the other, so in effect it’s always “on the outside”. For extra fun, this contradicts another one of my diagrams showing that, of the three triangle locations in an apex, two triangles from the first strip are over the second strip and the reverse holds for the third triangle. I don’t understand lol

Nota bene (because I love italicising Latin-origin loanwords): I define an “apex” as a pyramid with equilateral triangular faces (except for the base) augmented onto a base polyhedron. So for a kistetrahedron (or kisoctahedron, or kisicosahedron), all the apexes are tetrahedra, and for a kiscube, the apexes are square pyramids (i.e. half of a regular octahedron).

As I mentioned before, if we treat the regular octahedron as a kis-(square dihedron), we can get a weave of it out of four strips of six triangles. I used two of the three leftover strips from making the kistetrahedron for this, and I did in fact succeed at weaving an octahedron. Except that it didn’t have chiral octahedral symmetry, just square dihedral symmetry (i.e. the dihedral group of order 8), because there was indeed a parity problem as I had suspected.

An octahedron is effectively six apexes that share triangles, and for the resulting model to have chiral octahedral symmetry (pretending that we magically glue up the ends of the strips when we’re done), all apexes must have the same orientation; i.e. they must either weave all clockwise or all anticlockwise. However, weaving one apex clockwise forces its four adjacent apexes to be woven in the opposite direction. Furthermore, the octahedral graph is not bipartite, so we don’t even get something like chiral tetrahedral symmetry out of this if we start with one apex woven in a specific direction. If we fix two antipodal apexes and pretend the other four apexes aren’t “true apexes”, then we get only square dihedral symmetry. (Actually, if you weave one apex clockwise and the opposite one anticlockwise, you might expect the resulting model to still have square dihedral symmetry, just of a different kind: one is a subgroup of O(2), the other a subgroup of SO(3). But it really has only cyclic symmetry, because the strips cannot close up to form rings.)

However, it kind of looks like it’s possible to weave the strips into an octahedron with chiral tetrahedral symmetry if we somehow weave them “halfway”. I don’t even know if that makes sense at all, but I guess I’ll figure out when I try folding it.

Now that I think about it, the same parity problem would prevent a weave of an icosahedron using six strips of ten triangles: two adjacent pentagonal pyramids would be forced to have opposite orientation, but the icosahedron graph is not bipartite. Bummer.

Speaking of which, I have an origami icosahedron too, though it’s obviously not made from weaving strips of paper. This one has chiral tetrahedral symmetry and is made from 8 main pieces (all congruent, but divided into two groups of 4) and 6 hidden pieces. The hidden pieces aren’t strictly necessary, but the model has annoying visible gaps otherwise. The main pieces are shaped like large triangles and are made from irregular hexagons where the interior angles are all 120 degrees, and the side lengths alternate between 1 and 2; the hidden pieces are strips of four triangles with the two end triangles folded inwards.

I find that “three-dimensional” symmetry groups are more visually appealing than “two-dimensional” symmetry groups. I mean, even when looking at the cyclic groups, C2 feels a lot more simple/boring than C3 and above, because C2 can also be treated as a one-dimensional symmetry group. Naturally, this has important consequences throughout mathematics, some of which I think I’ve briefly mentioned before, but that’s not the point here. Like, I probably could easily make a model of an icosahedron with C5 symmetry, but what the hell, that’s totally two-dimensional. If I wanted pretty two-dimensional symmetries I’d draw them, not make 3D models. (One potential title for this post was “My symmetry group is more symmetrical than yours”, which I scrapped because… I forgot about it until just now. Also, that’d be more appropriate for talking about group automorphisms, which is a really interesting topic I don’t understand well yet.)

PS: For some reason, since I last updated it, Firefox has been rendering lots of things with the fallback sans serif font, which for me is DejaVu Sans. I’m surprised at how much more readable this has made the WordPress post editor.

Addendum: This post previously referred to chiral tetrahedral symmetry as pyritohedral symmetry, which is wrong. The former is the subgroup of the latter restricted to proper rotations.

Hyperbole

[screenshot]

I kinda got the Poincaré disk model code working, I think. It’s still not completely correct (the “straight lines” as seen above should be circular arcs), but let’s leave that as a TODO.

I worked out the equations for constructing a Möbius transformation from its values at 1, ω, and ω², and though it wasn’t immediately apparent to me, applying a discrete Fourier transform to the three equations given by those values makes them easier to handle.

Let f(x)=(ax+b)/(cx+d), where we want to solve for the values of a,b,c,d\in\mathbb C given that f(1)=\alpha,f(\omega)=\beta,f(\omega^2)=\gamma.

Expanding everything gives

\displaystyle\begin{array}{rcccrcc}a&+&b&=&\alpha c&+&\alpha d\\  \omega a&+&b&=&\omega\beta c&+&\beta d\\  \omega^2a&+&b&=&\omega^2\gamma c&+&\gamma d\end{array}

and then applying a DFT to these equations gives the following:

\displaystyle\delta:=\alpha+\beta+\gamma\\  \epsilon:=\alpha+\omega\beta+\omega^2\gamma\\  \zeta:=\alpha+\omega^2\beta+\omega\gamma\\  \epsilon c+\delta d=3b\\  \zeta c+\epsilon d=0\\  \delta c+\zeta d=3a

There are three equations in four unknowns, so this is an underdetermined system. We may start with \zeta c+\epsilon d=0; since we don’t know a priori whether \epsilon or \zeta is nonzero (though we know they can’t both be zero), we can choose to pick a solution that doesn’t involve dividing by either value, such as c=-\epsilon and d=\zeta. (We can also branch on whichever of \epsilon and \zeta has the larger absolute value, but why bother when we already have a branchless solution?) We can substitute these into the other two equations to get a=(\zeta^2-\delta\epsilon)/3 and b=(\delta\zeta-\epsilon^2)/3.

We could also have used c=\epsilon and d=-\zeta, but that makes the identity case return the coefficients a=d=-1,b=c=0 rather than the simpler a=d=1,b=c=0.
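Spelled out in C99 complex arithmetic, the whole reconstruction is only a handful of operations (a sketch; the function name is mine, and the coefficients are determined only up to a common scalar anyway):

#include <complex.h>
#include <math.h>

/* Recover f(x) = (ax+b)/(cx+d) from alpha = f(1), beta = f(omega),
   gamma = f(omega^2), using the DFT'd equations above. */
void mobius_from_samples(double complex alpha, double complex beta,
                         double complex gamma, double complex coef[4]) {
    const double complex omega = cexp(2.0 * acos(-1.0) * I / 3.0);
    double complex delta = alpha + beta + gamma;
    double complex eps   = alpha + omega*beta + omega*omega*gamma;
    double complex zeta  = alpha + omega*omega*beta + omega*gamma;
    coef[0] = (zeta*zeta - delta*eps) / 3.0;  /* a */
    coef[1] = (delta*zeta - eps*eps) / 3.0;   /* b */
    coef[2] = -eps;                           /* c */
    coef[3] = zeta;                           /* d */
}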

So that’s all well and good, but it turns out that this three-point sampling/reconstruction was unnecessary in the first place! Of course, the reason you still see the above equations getting included at all is that I’d already done them, coded them, then realised their uselessness.

What I discovered was that a Möbius transformation that preserves the unit disk does, in fact, have a simple special form that reduces the number of degrees of freedom, provided that the determinant ad-bc is real and positive. Specifically, we have that d=\overline a and c=\overline b. Such transformations are closed under composition as well. The total number of degrees of freedom is four, with the extra d.o.f. corresponding to the determinant. Enforcing that the determinant ad-bc=|a|^2-|b|^2 equals 1 removes this extra d.o.f.
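One nice consequence is that, storing such a transformation as just the pair (a,b), composition is two complex multiply-adds (a sketch):

#include <complex.h>

/* A disk-preserving Möbius transformation stored as (a, b), standing for
   z -> (a z + b)/(conj(b) z + conj(a)). Composing f after g yields
   another pair of the same form, per the closure property above. */
void disk_compose(const double complex f[2], const double complex g[2],
                  double complex out[2]) {
    out[0] = f[0]*g[0] + f[1]*conj(g[1]);  /* a = a1 a2 + b1 conj(b2) */
    out[1] = f[0]*g[1] + f[1]*conj(g[0]);  /* b = a1 b2 + b1 conj(a2) */
}

(Determinants multiply under composition, so if both inputs are normalised to |a|^2-|b|^2=1, the output is too.)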

Written this way, it looks exactly like split-quaternions, and that linked Wikipedia article itself links to an article titled “Hyperbolic motion”, with a “Disk model motions” section.

Well.

The two solutions given by the constraint |a|^2-|b|^2=1 are negatives of each other but give equivalent rigid motions, and in this sense it’s clearly an analogue to the usage of quaternions in representing a rigid motion on spherical 2-space.

Given that we can “lift” spherical 2-space into (the rotation group of) Euclidean 3-space, where we can use the nice properties of the quaternions to do things, is there something similar for hyperbolic 2-space and split-quaternions?

Rigid motion

So I’m trying to visualise 2D hyperbolic geometry, but I’m running into a gigantic problem with how to even represent a rigid motion.

My requirements are that:

  • a rigid motion should be continuous in its parameters everywhere;
  • there should be a finite number of real parameters;
  • the composition of two rigid motions is simple;
  • if there are more parameters than there are degrees of freedom, it should be easy to somehow “project” an illegal rigid motion onto the space of rigid motions (and this projection should be continuous almost everywhere);
  • and simple motions should have easily-defined parameters.

I have no idea how to go about this. For 2D spherical geometry, the group of rigid motions is isomorphic to the orthogonal group O(3), or if we’re considering only proper rotations, that’s isomorphic to SO(3). Quaternions provide a ridiculously nice way of representing elements of SO(3), with the one snag that it’s really a double cover of SO(3) so every element of SO(3) has two representations as quaternions. Other than that, it satisfies all the requirements above.

The group of unit quaternions has three degrees of freedom, which is exactly the same number as SO(3). Without restricting to unit quaternions, there’d be four degrees of freedom, and projecting a quaternion onto the set of unit quaternions is simply a matter of normalisation. It is, quite literally, projecting through the origin. Not that this normalisation is even necessary: applying a rotation to a point is done by conjugating the point (converted into a quaternion) with the rotation quaternion, so the absolute value drops out of the equation entirely.
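For a sense of how little code this is, here’s a sketch (my own, nothing clever) of rotating a point p by a not-necessarily-unit quaternion q via conjugation; the norm drops out as described:

typedef struct { double w, x, y, z; } quat;

/* Hamilton product of two quaternions. */
static quat qmul(quat a, quat b) {
    return (quat){
        a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
        a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
        a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
        a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w,
    };
}

/* Rotate the point p (as a pure quaternion) by q, computing q p q^{-1};
   dividing by |q|^2 instead of normalising q up front. */
static quat qrotate(quat q, quat p) {
    quat qc = {q.w, -q.x, -q.y, -q.z};  /* conjugate of q */
    double n2 = q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z;
    quat r = qmul(qmul(q, p), qc);
    return (quat){r.w/n2, r.x/n2, r.y/n2, r.z/n2};
}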

Using a rotation matrix is not so nice. It satisfies all properties except the fourth, sort of. Projecting a matrix to orthogonality is “easy”, but it involves potentially numerically unstable steps with a very huge space of pathological cases (~vague weasel words~) because it has too many extraneous degrees of freedom.

Using Euler angles is extremely bad and you should feel bad for even thinking about that. (Gimbal lock, wtf?)

Anyhow, the aforementioned representations are for proper rotations in Euclidean 3-space / rigid motions in spherical 2-space. Hyperbolic 2-space does not have a similarly nice embedding into Euclidean space that I know of.

Actually, what about Euclidean 2-space? In other words, the Euclidean plane. Using rotation matrices for 2D rotations is actually “ideal” because it has trivial reductions in d.o.f. (the diagonal elements must be equal and the antidiagonal elements must add to zero), which makes it identical to using complex numbers. We can then represent any rigid motion with a unit complex number representing the rotation, and another representing translation, id est we get a representation as a complex affine function x\mapsto \hat{r}x+t. Four real parameters (so one extra d.o.f.), but it’s easily normalised.
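Composing two such motions stays just as simple (a sketch; the pair (r,t) stands for x\mapsto rx+t as above):

#include <complex.h>

/* Rigid motions of the Euclidean plane as pairs (r, t) with |r| = 1.
   Composing f after g: x -> r1 (r2 x + t2) + t1. */
void euclid_compose(const double complex f[2], const double complex g[2],
                    double complex out[2]) {
    out[0] = f[0] * g[0];         /* rotation parts multiply */
    out[1] = f[0] * g[1] + f[1];  /* r1 t2 + t1 */
}

(And projecting away the extra d.o.f. is just dividing r by its absolute value.)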

With hyperbolic 2-space, using the Poincaré disk model and treating the points as complex numbers, a proper rigid motion is exactly a Möbius transformation that preserves the unit disk. (The improper rigid motions, i.e. reflections, are anticonformal and so cannot be represented as Möbius transformations.) One problem is that this constraint on the Möbius transformations is not easy to formulate directly in terms of the coefficients, and there’s also the problem of projecting an illegal Möbius transformation into a legal one.

The shortest translation taking any point p\in\mathbb C,\,|p|<1 to the origin is given by

\displaystyle\left(x\mapsto\frac{x-p}{-\overline px+1}\right)

which is isomorphic to

\displaystyle\begin{pmatrix}1&-p\\-\overline p&1\end{pmatrix}.

Matrices of this form are not closed under multiplication (even up to normalisation), so we get as an immediate conclusion that hyperbolic translations do not commute. This should not be surprising, given that spherical translations don’t commute either.

And then I don’t know where to go from here.

It’s known that a Möbius transformation can be uniquely defined by where it sends three distinct points, so we could pick three ideal points and use where they map to as a representation. Under the assumption that the transformation is nondegenerate, the three destination points are pairwise distinct. Because rigid motions preserve the disk, the destination points are guaranteed to also be ideal points and hence provide one d.o.f. each, for a total of three d.o.f. We also have that three distinct points uniquely define a generalised circle (i.e. a circle or a line), so the destination points’ being ideal and distinct also guarantees that the unit circle is preserved.

If the three destination points have the same winding order (as in clockwise versus anticlockwise) as the source points, then this maps the unit disk onto itself; otherwise, it maps the unit disk to the complement of itself in the complex plane (union the boundary). There’s no nice way of projecting the latter case onto the former case, so this idea fails the fourth requirement. It also behaves pathologically if any two of the destination points are very close to each other, but I suspect pathological behaviour is both theoretically impossible to avoid and practically unlikely to happen. (Sounds paradoxical, but we get the same “pathology” in Euclidean 2-space if we’re translating by so large an amount that some other information is lost to rounding error. It’s not a problem on the 2-sphere because the quaternion representation is bounded and finite.)

Additionally, it’s not clear how to compose two transformations specified in such a manner, short of converting them to linear fractional transformations then converting back.

This still seems like an idea worth pursuing, though. What choices of ideal points can we use? Maybe the three cube roots of unity, or {1,i,−i}?


On another note, considering the Poincaré disk model as a projection of a hyperboloid sheet onto two dimensions, notice that the other sheet projects onto the outside of the unit disk, and we could use that as an alternative model with almost exactly the same properties.

^3

I got a 21.63-second solve on my Rubik’s cube yesterday afternoon, which I think is my current record. Things somehow just turned (har) out to be so that the last layer was solved just by doing a half turn. Have I ever mentioned that, in the six or so years since I switched to OCLL-PLL last layer, I’ve never bothered memorising the four G permutation algorithms? I still haven’t, and given that these come up 2/9 of the time, it hurts my average time considerably. Not that my average is particularly good anyway (36.0 seconds for mean of fifty), though it’s somewhat better than before. (Note: 36 seconds is awful for someone who’s been “speedcubing” for as long as I have.)

My method is pretty much the same as before: (1) a “partial cross” of three edges, (2) two corner-edge pairs (i.e. a 3×2×2 block), (3) edge orientation (a la Petrus), (4) F2L, (5) corner orientation, (6) PLL. Sometimes at the first step I find that the fourth edge is either in place or almost in place, at which point I switch to “normal” F2L. This actually tends to cost me extra time rather than saving anything for three reasons.

The first is that, with four edge slots rather than only two, there’s a much higher chance that at least one of the middle layer edges is already in the middle layer, but at the wrong location. The second is that I have extra algorithms to insert an edge if the corner is already in, but they can’t be used if I need to keep the whole cross intact. The third is that my corner/edge pairing method involves a fixed edge orientation, which I normally assume to already have been done at step 3, so it hurts my inspection speed to determine which orientation the edge is in simply because I’m not really used to it.

I do have my own variation of the cage method, but I’m rather slow with it and the opportunity to switch to the cage method doesn’t exactly come up often. Basically, unless the whole first-layer-except-one-edge was already done (which I’d suppose is extremely rare), I wouldn’t switch to using the cage method. I’ve occasionally done line+EO, a bastardisation of EO+line that’s really not a good idea at all. (It’s essentially the same as Petrus, except I start fixing bad edges after getting two first-layer edges done. The number of bad edges follows a parity-conditioned binomial distribution with mean 5, so there are usually quite a few bad edges to fix, making this far less efficient than Petrus.)

At some point (maybe around 2009?) I discovered one neat trick for F2L, which is what I still use. By putting the corner piece in with the wrong orientation, the corresponding edge piece is easily manoeuvred into a position where, by correcting the orientation of the corner piece, both pieces are solved at once. There are two bad cases here, though. If both pieces are already in the correct locations, but the corner is misoriented, it takes me a while to recognise which direction to turn to fix it, and if the corner is already correct but the edge is still on the last layer, I use a nine-move algorithm which requires repositioning the cube. These aren’t problems inherent to the method, but to how I apply it, so it’s something I’m working on.

As for PLL, other than the G permutations, I’ve recently started training myself to not do an AUF (a preliminary U face turn) while recognising which PLL it is. (This is essentially recognising left cosets versus right cosets, and PLL is the exact example I gave before.) I’m not sure if this has been an actual boost in speed yet, though. I’m still mixing up the A (three-corner cycle) and V (diagonal corner swap) permutations, and R permutation is annoying to recognise.

Let’s see. I try to group the pieces into 1×2 (or larger) blocks and see where the blocks are. The edge-only PLLs are kind of easy because they’d been part of my repertoire since the moment I learnt how to solve the cube; we’ll disregard those.

If there are no blocks at all, it’s the E permutation. (Or H or Z, but these are edge-only permutations.)

If there’s a 2×2 block, it’s either A or V. If any of the faces “opposite” of the block have matching colours on the corner cubies, it’s A, and which of the two cases of the A permutation it is depends on which of the two faces has matching colours. (I also use this recognition method in my cage variant for large cubes, so this is not new to me.) Otherwise, it’s a V.

If there are two parallel 1×2 blocks, it’s a T.

If there are two perpendicular non-touching 1×2 blocks, it’s a Y.

If there are two perpendicular 1×2 blocks and a P pentomino (or equivalently, an L tetromino), it’s a J.

If there are four 1×2 blocks, it’s an N. Also characterised by the colours being either correct or opposite (i.e. white vs yellow, red vs orange, blue vs green); the only other permutation with this property is the edge-only H permutation.

If there’s a 1×3 block, it’s an F. (Or a three-edge cycle.)

The rest of the permutations need “broken” shapes to recognise under this scheme, where two adjacent corners have matching colours but the edge in between doesn’t match either corner. Without this addition, both the R and G permutations are just recognised as “one 1×2 block”. And actually, distinguishing A and V permutations can also be considered under broken blocks.

If there’s a broken L tromino (e.g. 127 on a numpad), it’s an R.

If there’s a 1×2 block and a broken 1×2 block, it’s a G; which of the four cases can be determined by the relative positions and directions of the blocks, though I don’t know any of those four algorithms so it doesn’t really matter.

And just for completeness’s sake, if there’s a 3×3 block, we’re already done!

I have two different three-edge cycle algorithms memorised, one of which is the optimal algorithm that I learnt as the beginner’s method (F2 U/U’ M’ U2 M U/U’ F2), and the other by replacing F2 with M2 (M2 U/U’ M’ U2 M U/U’ M2). They affect different edges, so this sometimes lets me avoid having to turn the whole cube. Also, since I do the G permutation in two steps, the first half sometimes ends in a setup turn for one of the three-edge cycle algorithms already. I have yet another version of the three-edge cycle using only M and U with even more moves, but my hands are slow enough that the benefit from not having to readjust to the F face doesn’t make up for the extra moves needed.

Actually, speaking of which, I usually tend to rotate the whole cube rather than only the last layer, even though the latter is finger-trickable. Maybe this is a bad habit I need to fix, along with all the other bad habits like using a method for block building without actually building blocks and not making use of inspection time at all.

PS: I was going to herp a derp a bit about a slightly more theoretical aspect to cube solving but I guess you get to live with this real-life cube solving bullshit because it’s late and I’m going to bed.

No d30

I don’t think I’d ever mentioned this, but the origami rhombic triacontahedron I made 1.5 years ago fell apart after a couple of days. Or maybe it was weeks. But whatever, the important thing was that it came apart and I never bothered to put it back together. I mentioned that I planned for 32 pieces, but I obviously don’t keep scrap paper around so the paper meant for the two extra modules was just discarded. Fast forward to some time last month, when I just took the scattered pieces out on a whim and tried putting it together.

Then I found out there were only 24 pieces left.

Then it clicked that 24 is twice 12 and I could make two deformed rhombic dodecahedra out of those. Which I did. Amusingly enough, those are holding up much better than the proper rhombic dodecahedron I have sitting on my stationery holder, because a shape with triangular faces, like a nonconvex tetrakis cube, is naturally more rigid than one with rhombic faces. I would’ve made one into a triakis octahedron, but the obtuse faces deny me.


I spent a huge chunk of yesterday morning (as in the 4 am kind of morning) funsubbing stuff, so I was mostly burnt out by the afternoon, and for :reasons: I couldn’t take a nap in my room, so I took a sheet of clean A4 paper (to fold into stuff) and one half-filled with scribbles relating to PRNG nonsense (to work out formulae if necessary).

I thought I’d try weaving strips of equilateral triangles into some kis-Platonic solid, something Herng Yi once showed me many years ago. (I presume it can be done for any “kis”ed edge-transitive convex polyhedron.) Back during NS, I’d attempted a stella octangula, but manipulating the darned strips was extremely troublesome. I don’t remember exactly why they were annoying to work with, but it’s probably a combination of: each strip had 18 triangles and so had to be correspondingly thin if I were to make them out of A4-size paper; the “final” shape of the strips could not, individually, hold up by itself under gravity; the intermediate shape with multiple strips woven didn’t like gravity either; and weaving three strips at a corner is kind of difficult.

Using A4 paper is actually rather limiting because of the first reason above. If you want strips with more triangles flush with the long edge of the paper, obviously they have to be thinner, and small triangles are not nice to play with. Wouldn’t make for a nice model either. Suppose we normalise the length of the short edge (nominally 210 mm) to be 1, so the long edge (nominally 297 mm) is √2. If we divide it into n strips along the long edge, each strip would be able to accommodate ⌊√6n⌋−1 equilateral triangles. Because the only way of dividing without leaving extraneous marks on the paper is to divide by a power of two, this gives the following kinds of strips:

  • n = 1: 1 triangle.
  • n = 2: 3 triangles.
  • n = 4: 8 triangles.
  • n = 8: 18 triangles.
  • n = 16: 38 triangles.
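(Those values are just ⌊√6n⌋−1 evaluated at successive halvings; a throwaway check, for what it’s worth:)

#include <stdio.h>
#include <math.h>

int main(void) {
    /* Width-1/n strips have triangles of side 2/(sqrt(3)*n), so a strip
       of length sqrt(2) fits floor(sqrt(6)*n) - 1 of them. */
    for (int n = 1; n <= 16; n *= 2)
        printf("n = %2d: %2d triangles\n", n, (int)(sqrt(6.0) * n) - 1);
    return 0;
}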

Actually, now that I’ve worked it out, maybe the first reason was not quite right. I mean, I remember that I used sixteenth strips, but my visual memory is hardly reliable so it’s fairly likely that I’m full of shit or I’m confusing it with something else that needed triangular strips, like my hexaoctaflexagons. But anyway. Eighth-width strips are sort of a sweet spot because the triangles are neither too large nor too tiny and the Platonic solids made from them would be just about desktop-decoration size… whatever that means.

I decided to go with making a tetrakis cube (6 strips of 12 triangles), and then I spent a whole afternoon messing with the strips of paper, trying to weave them together. What I found out was that if you try to weave the strips one at a time, it will be extremely difficult to even get the third strip in at all. I also initially thought there was a parity problem after the first failed attempt of weaving the strips together, but it turned out that that was because I’d drawn multiple diagrams of different handedness to figure out how to weave the strips, and I was referring to the wrong diagrams.

I was rather sleep deprived, so I accepted my impossibility “proof” and momentarily gave up on trying to weave it into a cube, instead seeing if I could weave it into a lattice. Not that six strips were enough to make much of a lattice (a fact which I didn’t notice(!) until I had the fourth strip woven in), but it did quickly lead me to realise that my faux impossibility proof was wrong, at which point I resumed trying to get it into the shape of a tetrakis cube.

I resorted to using some tape to hold the apexes of the pyramidal caps together while weaving the strips, but the structure remained rigid even after the removal of the tape so this isn’t really cheating! Getting the last strip into the last apex involved sliding it through the gap it was supposed to pass through, which I guess is really the only way to get it in the structure at all once the other five apexes were taped up. (“Sliding” here involves disregarding the crease patterns already made on the strip.) I suppose that if I were working in a zero-gravity environment it could be possible to just somehow arrange the strips appropriately then push them together to form the final structure directly, but well, I’m on Earth.

A tetrakis tetrahedron would need 3 strips of 12 triangles, where each strip itself already covers all 16 faces of the final shape.

A stella octangula would need 4 strips of 18 triangles. (The total number of triangles is the same as for the tetrakis cube as the cube and octahedron are duals.)

A pentakis dodecahedron would need 10 strips of 18 triangles, and a triakis icosahedron would need 6 strips of 30 triangles. The former seems like an interesting idea, and the latter is probably not practical due to strip size reasons as well as having lots of tetrahedral spikes to take care of.

I mentioned kis’d edge-transitive polyhedra above, and the regular octahedron can be considered to be a kis square dihedron; I’ve tried (a long time ago) to weave 4 strips of 6 triangles into an octahedron before without success, but maybe I was just doing it wrong, because it seems like it shouldn’t be too different from the tetrakis cube, just with four fewer apexes. (Or maybe a parity problem will come up.) On the other hand, triangular and pentagonal bipyramids are out, because a strip would go through all the faces multiple times before returning to its original position and direction. The exact conditions for an edge-transitive polyhedron such that its kis form has a “strip weave” seem hard to concisely state directly, but there aren’t a lot of edge-transitive polyhedra anyway.

The edge-transitive polyhedra (restricting ourselves to those topologically equivalent to a sphere) consist of the regular polyhedra, the cuboctahedron, the icosidodecahedron, the rhombic dodecahedron, the rhombic triacontahedron, the dihedra and the hosohedra. The kis-(digonal dihedron) is also a hosohedron, and the hosohedra cannot be woven because the apexes have fewer than three triangles around them. kis-(n-gonal dihedra) are not embeddable in Euclidean space if n>6 (assuming equilateral faces), and n∈{3,5,6} can be ruled out.

I’m not sure whether the kis-(rhombic dodecahedron) and the kis-(rhombic triacontahedron) can be embedded in Euclidean space with equilateral faces at all. Gut feel says no, but gut feel is also wrong pretty often when it comes to geometry. Other than the Platonic solids, this leaves the kis-cuboctahedron and the kis-icosidodecahedron to consider. It seems that the former should be weaveable with 8 strips of 18 triangles and the latter with 20 strips of 18 triangles. 8 strips of 18 triangles is just about what can be cut from an A4-size sheet, so maybe I’ll try that next.

On PCG

A couple of posts back, when I was hurfdurfing about using a mul-xor function as a PRNG, I did some post hoc research and I was quite surprised to see a preprint posted just this year about a new construction of PRNGs called PCG. Most of the literature I’ve come across tended to be from multiple decades ago, so the recency was quite refreshing.

The paper is kind of long (46 pages) and the first twenty pages offer pretty much nothing new. Good if you’re new to the subject, boring otherwise, but I’m not one of ACM’s reviewers anyway so who cares.

To get to the construction, we must first explain the concepts of the state transition function and the output function. A deterministic PRNG must have some state x\in X, which is updated on every iteration with the state transition function f:X\to X, and some output function o:X\to Y which spits out a pseudorandom number given the PRNG state.

The output set Y is determined by what the PRNG is being used for. For a PRNG where X=[0,2^{64}-1]\cap\mathbb Z, if a random real number in the range [0,1) is desired, one may be obtained by setting o:x\mapsto x/2^{64}, or if one desires a 32-bit integer output, o:x\mapsto\lfloor x/2^{32}\rfloor could be used. Et cetera, et cetera.

What we used for IRP (circa 2009) was a PRNG based on rule 45 with a state of 251+8 bits (251 bits for the cellular automaton, 8 bits to maintain a counter), where the output function was to select the one of the 251 cells pointed at by the counter. This was really wasteful because it took something like thirty 32-bit operations for just a single bit of output. Needless to say, it was very slow, and in retrospect, this was not a good idea. Rule 45 is not reversible, and making certain handwavey assumptions, the expected period length would be around the square root of the cardinality of the state space, which meant that roughly half of the 251 bits were useless right from the start! (Actually, the reason for using 251 cells instead of 256 was that there are fewer not-uniquely-reversible states when the CA is on a grid of prime size, which is related to the fact that a cyclic group has no nontrivial subgroups when it has a prime number of elements (and 251 is prime). Maybe I can dig out some older writing on this topic and expand that into another blog post.)
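Reconstructed from the description above, the generator would have looked something like this (a guess at the structure from memory, not the original code; the counter handling in particular is an assumption):

#include <stdint.h>

#define CELLS 251

static uint8_t cell[CELLS];  /* one CA cell per byte, each 0 or 1 */
static uint8_t counter;      /* the 8-bit counter */

/* Advance the rule-45 CA one step and emit one output bit. */
static int next_bit(void) {
    uint8_t next[CELLS];
    for (int i = 0; i < CELLS; i++) {
        int l = cell[(i + CELLS - 1) % CELLS];
        int c = cell[i];
        int r = cell[(i + 1) % CELLS];
        /* Rule 45 = 0b00101101: bit (l,c,r) of 45 is the new cell state. */
        next[i] = (45 >> ((l << 2) | (c << 1) | r)) & 1;
    }
    for (int i = 0; i < CELLS; i++)
        cell[i] = next[i];
    counter++;                     /* wraps modulo 256 */
    return cell[counter % CELLS];  /* the cell the counter points at */
}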

With polynomial congruential generators where the modulus is a power of two, the output function is typically of the form x\mapsto x\gg k for some integer value k, where the doubled less/greater than signs refer to a bit shift operation. The case where k=0 is equivalent to returning the whole state at every operation. The reason the high bits are kept instead of the low bits is that the low bits have a smaller period (cf. congruential generators blog spam from the last few weeks) and also that, under reasonable assumptions, there is a \sim2^{-n/2} discrepancy in two-dimensional equidistribution in the nth-least-significant bit.
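To make the f/o split concrete, the usual shape of a truncated LCG looks like this (a generic sketch; the constants are Knuth’s MMIX ones, not anything specific to the paper):

#include <stdint.h>

/* State space X = [0, 2^64 - 1]. */
static uint64_t state;

/* State transition function f: X -> X, an LCG modulo 2^64. */
static void step(void) {
    state = state * 6364136223846793005ULL + 1442695040888963407ULL;
}

/* Output function o: X -> Y = [0, 2^32 - 1]: keep the high 32 bits
   (x >> 32), since the low bits have much shorter periods. */
static uint32_t output(void) {
    return (uint32_t)(state >> 32);
}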

Side note: the PCG paper uses \triangleleft and \triangleright for representing bit shifting, which serves to make the paper harder to read because no one else uses that notation. Seriously, don’t invent new notation for things that have already had an existing notation for multiple decades unless your new notation is somehow a thousand times better.

More specifically, we can prove by a parity argument that, other than the second-least-significant bit, no bit can be exactly equidistributed in two dimensions; the second-least-significant bit necessarily follows a 0-0-1-1 cycle if the congruential generator has full period, and this is a de Bruijn sequence. For the set of all full-period congruential generators of m\ge1 bits (recall the definition of a “congruential generator” given in a previous post), looking specifically at the most significant bit, it must flip an odd number of times over half the period, but for two-dimensional equidistribution to hold, a necessary (but not sufficient) condition is that it must flip 2^{m-2} times over half the period. This value is odd iff m=2. There are only two distinct two-bit full-period congruential generators (given by x_{n+1}=(x_n\pm1)\,\bmod\,4), both of which have the high bit following a 0-0-1-1 cycle. QED.

Still focusing on just half the period, the number of high-bit flips over all full-period congruential generators is a parity-conditioned binomial random variable. Conditioning a binomial distribution B(l,1/2) on parity doesn’t affect the kth central moment if l>k (proof omitted), so the variance of the number of high-bit flips is 2^{m-3}; since the second half of the period mirrors the first half’s flips exactly, the whole-period flip count is exactly double, which gives the standard deviation over the whole period as 2^{(m-1)/2}. Waving our hands around a bit, we may assume that a similar result holds even when we consider certain more restricted classes of full-period congruential generators, such as those defined by an affine recurrence (LCG) or more generally, a polynomial recurrence.

In light of the above, we have as a corollary that no congruential generator can have exact n-dimensional equidistribution for any n greater than one, though this is generally not a practical concern for an LCG with well-chosen parameters. The first novel thing the PCG paper proposes is a method to chain multiple LCGs in almost-lockstep so that, by outputting their states in succession, multidimensional equidistribution is achieved (section 4.3.4). However, this proposed method only works up to two dimensions. The author wrongly claims that for n chained LCGs each with period 2^m, the result has period 2^{mn} and is n-dimensionally equidistributed, but using the given construction, it’s true only up to n=2. Larger values of n do not increase the period beyond 2^{2m} or improve equidistribution properties.

It’s not a fatal flaw by any means, and in fact the paper proposes an alternative later on, but we can fix this at the cost of making it slightly more complicated. Instead of:

// mult is a constant congruent to 1 mod 4
// inc0/1/2/3 are constants congruent to 1 mod 2
state[0] = mult*state[0] + inc0;
state[1] = mult*state[1] + inc1*(1+(state[0]==0));
state[2] = mult*state[2] + inc2*(1+(state[1]==0));
state[3] = mult*state[3] + inc3*(1+(state[2]==0));

We keep track of the “carry” bit in a separate variable and make sure we carry for a particular word only if the previous word also had a carry:

state[0] = mult*state[0] + inc0;
carry = state[0]==0;
state[1] = mult*state[1] + inc1*(1+carry);
carry &= state[1]==0;
state[2] = mult*state[2] + inc2*(1+carry);
carry &= state[2]==0;
state[3] = mult*state[3] + inc3*(1+carry);

With the former scheme, the second word (in the paper’s terminology) breaks lockstep once every 2^m iterations and so has a period of 2^{2m}, but it takes on the zero value 2^m times over that period, which also means that the third word breaks lockstep that many times in 2^{2m} iterations and so has the same period; by induction, every subsequent word also has the same period. (I handwaved the precise argument about breaking lockstep, but you get the point.) The latter scheme reduces the frequency with which lockstep is broken accordingly, which also means that the third word and onwards will stay “synchronised” with the previous word for much longer stretches on average.

The author suggests using the same multiplier for all the base generators. My gut feel tells me there’s something very wrong with this suggestion, because merely changing the increment amounts to conjugating by a constant addition. To wit: (x\mapsto ax+b)=(x\mapsto x-\tfrac{b-1}{a-1})\circ(x\mapsto ax+1)\circ(x\mapsto x+\tfrac{b-1}{a-1}). Actually, I guess this means I’m wrong too: I’ve made that claim before, but a-1 is an even number, so dividing by it is not a valid operation with an even modulus. Oops! Nevertheless, if the 2-adic order of a-1 is small (and it should be—LCGs suffer from severe correlation defects if it’s too large), there will still be many increments which are conjugates under constant addition, and if the base generators happen to line up nicely, they will stay lined up (in that state[i]-state[0] remains constant) for roughly 2^m iterations before diverging again. See: sample with modulus 16 (scroll to around line 543).
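
As a quick sanity check with modulus 16: take a=5 and b=5, so that \tfrac{b-1}{a-1}=1 happens to be exact, and x\mapsto5x+5 really is the conjugate of x\mapsto5x+1 by x\mapsto x+1:

for (let x = 0; x < 16; x++)
{
	let conj = ((5*((x + 1) & 15) + 1) - 1) & 15; // (x ↦ x-1) ∘ (x ↦ 5x+1) ∘ (x ↦ x+1)
	if (conj !== ((5*x + 5) & 15)) throw 'not conjugate';
}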

The obvious way of remedying this is to use increments that aren’t “conjugates” of each other (though this may still result in a similar set of weak states), or simply to use different multipliers. The author does mention in a footnote that “this scheme […] can add additional statistical flaws due to the uniformity of the underlying generators […]”, though this explanation unfortunately misses the point.

Section 5 explores different output functions when applied to an LCG. The notation and terminology are unconventional and confusing, so this was really hard to digest. Here, have a screenshot:

[screenshot: join asterisk split offset boxed triangle wtf]

Yes, I was very bothered by how the boxed triangle didn’t have the box exactly aligned with the triangle. There isn’t much else to say about this section on the whole, really. Interesting ideas here and there (if you tear through the derp notation) and statistical tests to compare the output functions.

Section 7.1 is where a new method of extending a single base generator (with an internal state of one or two words) is introduced, still for the purposes of exact equidistribution and period lengthening. The proposed method is to maintain a separate chunk of state of maybe a few dozen words which is updated with some transition function once every M iterations (where M is the period of the base generator), effectively serving as a counter; the output function then combines the values of the base generator’s state with this counter to produce one pseudorandom word.
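
Very roughly, the shape of the idea is as follows. This is my own toy rendition rather than the paper’s exact construction; the constants and the xor-combine are made up:

let base = 0;
let ext = new Uint32Array(8); // the extra state, acting as a 256-bit counter
function step()
{
	base = (Math.imul(base, 0xadb4a92d) + 1) >>> 0; // full-period 32-bit LCG: a ≡ 1 mod 4, c odd
	if (base === 0) // happens exactly once every 2^32 iterations (M = 2^32 here)
		for (let i = 0; i < ext.length && ++ext[i] === 0; i++); // ripple-carry increment
	return (base ^ ext[0]) >>> 0; // combine base state and counter into one output word
}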

You may or may not have noticed that, regardless of how much extra state is added, only one word is output at every iteration, so this scheme doesn’t even provably provide exact multidimensional equidistribution. Indeed, it’s not difficult to explicitly construct a generator with this scheme that does not have two-dimensional equidistribution. Furthermore, as also pointed out in the paper, if M is large (say, 2^{64}, or even for M=2^{32}), the counter effectively never gets updated and over periods not much longer than M, the statistical properties of this extended generator would be very similar to the base generator alone. In other words, it would provide practically no additional entropy.

An alternative (which is also suggested in the paper) is to apply some sort of update function, whose period is a factor of M, to the extra state, and to apply another update function every M iterations, as in the section 4.3.4 construction described earlier in this post. This would work, but then you run into the problem of having lots of state to update on every iteration just to produce one random word, which is, in short, slow and inefficient.

Side note: a “word” refers to the basic integer type in use, which might be a 32-bit integer, a 64-bit integer or even a 128-bit integer, depending on context. I kinda forgot to define this earlier.

Quite bizarrely, the paper references cryptographic security multiple times, while not even having a single proof of immunity against common stream cipher attacks. Or a proof of anything at all; the whole paper is entirely empirical. Theoretical attacks on PRNGs are incredibly difficult—on one hand you don’t want “nice math” and on the other hand it’s hard to prove things without “nice math”—so the lack of proofs is understandable, but it’s still weird to see crypto mentioned more often than as just a single concluding remark.

Experiments in embarrassing myself

(This post is a continuation of the previous.)

I’m not very familiar with lambda calculus (in the sense that I feel really uncomfortable working with Church numerals etc.), so I initially dismissed using a functional Turing tarpit as a route to proving that JS is Turing-complete. But Iota and Jot exist, and assuming that they remain Turing-complete under eager evaluation, we sort of trivially have an immediate result. (The problem here is that I don’t know whether the Turing-completeness of Iota/Jot/SKI crucially depends on evaluation style; I recall something about the Y combinator needing lazy evaluation to work, though I’m not entirely sure about this.)
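
(For what it’s worth, the usual workaround is the eta-expanded variant of the Y combinator, commonly called the Z combinator, which does work under eager evaluation. A quick demonstration:)

let Z = f => (x => f(y => x(x)(y)))(x => f(y => x(x)(y)));
let fact = Z(self => n => n <= 1 ? 1 : n * self(n - 1));
fact(5); // 120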

I translated the code on the linked page into more modern JS; naturally, I don’t hold the copyright on this, insofar as it’s copyrightable at all.

let K = x => y => x; // the K combinator
let S = x => y => z => x(z)(y(z)); // the S combinator

let basis = c => c(S)(K); // Iota’s ι: λc. cSK
let geach = c => fn => arg => c(fn(arg)); // Jot’s “1”: λw. λxy. w(xy)

function iota(string)
{
	function process(position)
	{
		if (position >= string.length) throw 'Invalid Iota string';
		if (string[position] !== '*')
			return {position: position + 1, value: basis};
		let fn = process(position + 1);
		let arg = process(fn.position);
		return {position: arg.position, value: fn.value(arg.value)};
	}
	return process(0).value;
}

function jot(string)
{
	let value = x => x;
	for (let position = 0; position < string.length; position++)
	{
		if (string[position] === '0') value = basis(value);
		else value = geach(value);
	}
	return value;
}

Iota has a slight problem because it uses the string length as part of the state, so rewriting it to use a doubly-linked list is somewhat tricky. If you parse it backwards though, the asterisk operator changes from a prefix operator into a postfix one, which can be handled with a simple counter (I think). On the other hand, Jot simply iterates over each character, which makes it much more straightforward to convert.

function jot2(ll)
{
	let value = x => x;
	while (true)
	{
		value = [basis,geach][+ll.val](value);
		if (ll.next) ll = ll.next;
		else break;
	}
	return value;
}

Not knowing much of lambda calculus, I’ve not adequately tested any of the code presented here, and while it should all be correct, I make no guarantees. Anyway, since this doesn’t scan backwards at any point, we can even go further and accept any iterator as an input, rather than a linked list, without too much effort. This is left as an exercise for the reader.
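
(Fine, here’s a sketch of one possible answer, reusing basis and geach from above, and held to the same rigorous testing standards as everything else in this post, which is to say none:)

function jot3(iter)
{ // accepts anything iterable over '0'/'1' characters: a string, an array, a generator
	let value = x => x;
	for (let bit of iter)
		value = bit === '0' ? basis(value) : geach(value);
	return value;
}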

Barf

I started a blog post on the other computer, got sleepy, suspended it, went to brush my teeth, stopped being sleepy, read TV Tropes for a couple of hours, took a shower, and now here I am writing a new post because I can’t be arsed to check if the autosaved draft is up-to-date.

Suppose you want to write a Turing-complete interpreter for brainfuck in JavaScript. This is actually a rather nontrivial task. (I’ll be going by the current ES6 draft spec, because no other version of the spec is relevant right now.)

The first problem you encounter is that the only numeric type in JS is not a bigint type, which means you’ll have problems if you want to index into an array or string whose size is not explicitly bounded independent of input. Actually, the situation is much more dire: strings have an explicit length limit of 2^{53}-1 and arrays have an explicit length limit of 2^{32}-1. So there’s really no problem indexing, but that’s only because the problem has moved to getting storage of unbounded size!

Note that in ES5.1, the latest version of the spec as of now, there’s no length limit on strings. It merely specifies that a string is “a finite ordered sequence of zero or more 16-bit unsigned integer” (section 4.3.16, String value), “zero or more characters” (section 7.8.4, String Literals), and “[an] ordered finite sequence of zero or more 16-bit unsigned integer values” (section 8.4, The String Type).

One idea that’s been floating in my head for at least a few years was to use a doubly-linked list to implement the tape, rather than an array. After a few minutes of furious keyboard slamming, I got this far.

function interpret(bf)
{
	let left = [], right = [], stack = [];
	for (let i = 0; i < bf.length; i++)
	{
		switch (bf[i])
		{
			case '[': stack.push(i); break;
			case ']': if (stack.length === 0) throw 'unmatched brackets'; left.push(stack.pop()); right.push(i); break;
		}
	}
	if (stack.length > 0) throw 'unmatched brackets';
	let cell = {val:0};
	let ip = 0;
	while (ip < bf.length)
	{
		switch (bf[ip])
		{
			case '+': cell.val = (cell.val + 1) & 255; break;
			case '-': cell.val = (cell.val - 1) & 255; break;
			case '<':
				if (!cell.prev) cell.prev = {val:0, next:cell};
				cell = cell.prev;
				break;
			case '>':
				if (!cell.next) cell.next = {val:0, prev:cell};
				cell = cell.next;
				break;
			case '[': if (cell.val === 0) ip = right[left.indexOf(ip)]; break;
			case ']': if (cell.val !== 0) ip = left[right.indexOf(ip)]; break;
			case '.': putstr(String.fromCharCode(cell.val)); break; // putstr is a SpiderMonkey shell builtin: print without a trailing newline
		}
		ip++;
	}
}

Worked on the first try. Here’s the output of a script I wrote by hand to print a triangle of asterisks, which I think I posted on /g/ once upon a time. (The timestamp on triangle.b says 15th February 2013, so it’s probably around then.)

js> interpret('++++++[->+++++>+++++++>+++<<<]>++>>++[->+>+<<]>[-[->>+>+<<<]>>>[\
-<<<+>>>]<<[->>+>+<<<]>>>[-<<<+>>>]<<[-<<<<<.>>>>>>-<]>[->+>+<<]>[-<<<<<<.>>>>>\
>]>-[-<<<<<<<.>>>>>>>]++++++++++.[-]<<<<<]')
                   *
                  ***
                 *****
                *******
               *********
              ***********
             *************
            ***************
           *****************
          *******************
         *********************
        ***********************
       *************************
      ***************************
     *****************************
    *******************************
   *********************************
  ***********************************
 *************************************
***************************************

Hell if I remember how exactly it works, and since that’s not even the point of this post, I won’t go into that.

And then you might (reasonably) object: I’m storing the indices of the square brackets in arrays, so this code can never allow more than 2^{32}-1 pairs of brackets. Well, that can be done away with; we could just scan the input string for matching brackets, and since strings have length bounded by 2^{53}-1, we can guarantee that our index and bracket matching counter won’t overflow.

Wait.

String lengths are bounded! That means this can only possibly accept brainfuck programs with at most 2^{53}-1 characters. The number of such programs is doubly-exponentially large, but still finite, which rules out Turing-completeness. Well, shit. (Note: this was a thing I noticed after I started writing this post.) Guess that means we have to use a linked list for the input too! We’ll need some kind of counter for matching brackets that is guaranteed to not overflow, but that doesn’t seem too hard.

function makell(l)
{ // make a doubly-linked list out of an array-like object and return the first element
	if (l.length < 1) return;
	let first = {val:l[0]};
	let el = first;
	for (let i = 1; i < l.length; i++)
	{
		el.next = {val:l[i], prev:el};
		el = el.next;
	}
	return first;
}

function interpret2(bfll)
{
	let cell = {val:0};
	while (true)
	{
		switch (bfll.val)
		{
			case '+': cell.val = (cell.val + 1) & 255; break;
			case '-': cell.val = (cell.val - 1) & 255; break;
			case '<':
				if (!cell.prev) cell.prev = {val:0, next:cell};
				cell = cell.prev;
				break;
			case '>':
				if (!cell.next) cell.next = {val:0, prev:cell};
				cell = cell.next;
				break;
			case '[':
				if (cell.val === 0)
				{
					let match = [false]; // nested single-element arrays serve as an unbounded unary counter of bracket depth
					while (match)
					{
						if (!bfll.next) throw 'unmatched brackets';
						bfll = bfll.next;
						if (bfll.val === '[') match = [match];
						if (bfll.val === ']') [match] = match;
					}
				}
				break;
			case ']':
				if (cell.val !== 0)
				{
					let match = [false]; // the same unary-counter trick, scanning backwards this time
					while (match)
					{
						if (!bfll.prev) throw 'unmatched brackets';
						bfll = bfll.prev;
						if (bfll.val === ']') match = [match];
						if (bfll.val === '[') [match] = match;
					}
				}
				break;
			case '.': putstr(String.fromCharCode(cell.val)); break;
		}
		if (bfll.next) bfll = bfll.next;
		else break;
	}
}

The makell helper function is just to convert a string into a linked list, because typing out a linked list literal by hand would be way too cumbersome. It fails to convert array-like objects with a length property exceeding 2^{53}, but that doesn’t matter because we’re assuming our input is already a linked list. The interpret2 function takes as input the first element of a linked list containing the individual characters of a brainfuck program, which may exceed 2^{53}-1 in length. As for the promised overflow-proof bracket counter, that’s the match variable: nesting depth is represented by nesting single-element arrays, so it can never overflow.

js> triangle = makell('++++++[->+++++>+++++++>+++<<<]>++>>++[->+>+<<]>[-[->>+>+\
<<<]>>>[-<<<+>>>]<<[->>+>+<<<]>>>[-<<<+>>>]<<[-<<<<<.>>>>>>-<]>[->+>+<<]>[-<<<<\
<<.>>>>>>]>-[-<<<<<<<.>>>>>>>]++++++++++.[-]<<<<<]')
# output omitted
js> interpret2(triangle)
                   *
                  ***
                 *****
                *******
               *********
              ***********
             *************
            ***************
           *****************
          *******************
         *********************
        ***********************
       *************************
      ***************************
     *****************************
    *******************************
   *********************************
  ***********************************
 *************************************
***************************************

Yup, it works. There we have it: a theoretically Turing-complete brainfuck interpreter written in JavaScript. As a corollary, JS is Turing-complete.

Even more scatter plots

Inversive congruential generators modulo 2^{17}, with all 2^{16} odd-valued points plotted. The first two are given by x^{-1}+2 and the other two by x^{-1}+45678, where the first/third plots have lag 1 and the second/fourth plots have lag 2. (Edit: actually, I just noticed I forgot to comment out x ^= x>>>1, which provides a little bit of mixing, so these plots are not exactly of the ICG, but of the ICG conjugated with this xor-shift.)

ICGs have, in some sense, an optimal lack of lattice structure when the modulus is prime, but obviously that doesn’t carry over to prime power moduli. This statement apparently applies to both recursive ICGs (defined by a transition function x_{n+1}=f(x_n)=ax_n^{-1}+b) and to explicit ICGs (defined by x_n=(an+b)^{-1}).

We can prove that, for a recursive ICG modulo 2^m\ge8 with the transition function f(x)=ax^{-1}+b, it attains the maximum period of \phi(2^m)=2^{m-1} if and only if a\equiv1\pmod4 and b\equiv2\pmod4. The “only if” part is trivial. The rest will be proven by induction (boilerplate: “suppose this holds for some m blah blah blah”).

Lemma: 1 has exactly four square roots modulo 2^{m+1}, namely, \pm1 and 2^m\pm1.
Proof: That the four given values are square roots is easily verified. Let x be some square root of 1 not congruent to \pm1, and write x=a2^b\pm1 with a an odd integer and b\ge2. Then x^2-1=a2^{b+1}(a2^{b-1}\pm1)\equiv0, where the parenthesised factor is odd, so 2^{b+1}\equiv0\pmod{2^{m+1}}, which together with b\le m gives b=m.

Since reciprocation gives a perfect matching among the 2^m-4 odd non-unity-square-roots, consider the 2^{m-1}-2 odd numbers between 3 and 2^m-3 (inclusive): those that map within the same set (i.e. they don’t toggle the high bit) pair up with their inverses, so they are even in quantity, and since 2^{m-1}-2 is itself even, the remaining numbers (which toggle the high bit) must be even in quantity as well. By the bijectivity of reciprocation, we similarly have that an even number of odd numbers with the high bit set toggle the high bit under reciprocation.

Using the same trick as before of summing the overflow over the period of m bits, we note that the overflow from reciprocation cancels itself out, as does the overflow from the high bit of b and the overflow from multiplying by a. This leaves the overflow from adding b, but we already know that that’s 2^{m-1}\cdot2\pmod{2^{m-1}\cdot4}, or equivalently, 2^m\pmod{2^{m+1}}. That is exactly the same as saying that the high bit is toggled every 2^{m-1} iterations, which shows that the maximum period of 2^m is attained under the given conditions.
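
To make this concrete, here’s a sketch of the first generator plotted above (x^{-1}+2 modulo 2^{17}), with the inverse computed by Newton iteration; starting from any odd seed, iterating icgStep visits all 2^{16} odd residues:

function inv17(x)
{ // multiplicative inverse of odd x modulo 2^17
	let y = x; // three correct bits for free, since x*x ≡ 1 mod 8
	y = (y * (2 - x*y)) & 0x1ffff; // 6 bits
	y = (y * (2 - x*y)) & 0x1ffff; // 12 bits
	y = (y * (2 - x*y)) & 0x1ffff; // 24 ≥ 17 bits
	return y;
}

function icgStep(x)
{ // f(x) = x^{-1} + 2, with a = 1 ≡ 1 mod 4 and b = 2 ≡ 2 mod 4 as required
	return (inv17(x) + 2) & 0x1ffff;
}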


With “small” values of b, the ICG seems to have bad structure, which might be explainable with the help of Laurent series. For instance, we have 1/(x+2)=1/x(1-2/x+4/x^2-8/x^3+\cdots), which is actually a finite series because 2 is nilpotent. Or maybe it makes more sense to defer to the fact that the transition function is a fractional linear transformation and do something about that.

Reciprocation

Let’s say you want to find the multiplicative inverse of an odd number modulo some power of two. We’ve covered this before in the post about summing 2-adic series for π.

Start with a 1, then apply Newton’s method for division. It’s fast; it works.

But we can micro-optimise it. If we start with a 1 (as I did in the code snippet in the earlier blog post), the first iterate gives 2−x, and we can check that this guarantees only two correct bits. (Id est it is correct modulo 4, but not modulo 8.) So we’re using one subtraction to get to two bits of precision, but that turns out to be one more subtraction than necessary.

Behold: x alone already has three correct bits, because an odd square is always 1 mod 8. If we use this, we save on one subtraction and roughly 1.5 iterations (the exact value being \log_2(3)), for a net saving of 2.5 subtractions and 3 multiplications over starting with 1.

In the event that the number of bits is fixed and known in advance, we can hardcode the number of Newton’s method iterations necessary, and since the number of bits is usually itself a power of two, we save only one iteration rather than two with the above optimisation. For a 32-bit inverse (where the least significant bit is 1), four iterations are needed, which gives a total cost of 8 multiplications and 4 subtractions. Without the optimisation, five iterations would be needed, and assuming we also hardcode the first iterate, it’d be a total cost of 8 multiplications and 5 subtractions.
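
A minimal sketch of the hardcoded 32-bit case, using Math.imul for the wrapping multiplications; it spends exactly the eight multiplications and four subtractions counted above. (Sanity check: inverse32(3) gives 2863311531, and 3\cdot2863311531=2^{33}+1\equiv1\pmod{2^{32}}.)

function inverse32(x)
{ // multiplicative inverse of odd x modulo 2^32
	let y = x; // three correct bits for free, since x*x ≡ 1 mod 8
	y = Math.imul(y, 2 - Math.imul(x, y)); // 6 bits
	y = Math.imul(y, 2 - Math.imul(x, y)); // 12 bits
	y = Math.imul(y, 2 - Math.imul(x, y)); // 24 bits
	y = Math.imul(y, 2 - Math.imul(x, y)); // 48 ≥ 32 bits
	return y >>> 0;
}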

Well, that’s not very impressive, but that’s why it’s called a micro-optimisation, right?

I did a brute-force search over cubic polynomials with small coefficients providing more than six bits of precision, and found just (16n-6)x+(7-16n)x^3, where n is any integer. This gives seven bits, but going from six bits to seven is not enough of a win to justify having an extra multiply.
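
(A quick exhaustive check of the n=0 case, just to make sure I’m not embarrassing myself again:)

// verify that (7x^3 - 6x) * x ≡ 1 mod 128 for every odd x
for (let x = 1; x < 128; x += 2)
	if ((((7*x*x*x - 6*x) * x) & 127) !== 1) throw 'not an inverse mod 128';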