Addiction

I didn’t want to admit this, but I got addicted to cubing once again, and for just about the whole of July I’ve done roughly nothing of value because I spent so much time playing with this cube-shaped toy. So much so that my nails are breaking and chipping.

I never was much of a speedcuber, and while getting a better cube has improved my time by a few seconds, I still haven’t managed to break the 30-second barrier consistently. I’ve been assuming that this could be attributed to my fingers being slower than the “average” person’s; after all, I also use the Dvorak keyboard layout and I’m only just on par with the average Qwerty user in the same age group. (I’m a lot faster than people who don’t need to type fast, but that’s not something to brag about.) I don’t know, but that’s not really the point!

My right thumb nail cracked, which was when I switched to playing one-handed, achieving a completely unremarkable average of ~80 seconds. And then my left pinky nail cracked, but I continued anyway. This probably forced bad form, because I was actively trying to avoid using my pinky, and the weird posture I adopted is already burnt into my muscle memory.

There’s something ridiculously appealing about solving cubes. With the standard 3×3×3 cube, it takes less than 20 seconds to scramble and I can solve it in under 40 seconds, so a solve-scramble cycle pretty much always takes around a minute. A single solve is not a huge time investment, and it’s easy to fall into the trap of “one more won’t take long, so I might as well go for it”. It’s easy enough to squeeze in a minute here and there, and so for the past few weeks I was carrying the cube with me practically everywhere throughout the house. (Other than the washroom. That’d’ve been unhygienic.) Even if I had to use my right hand for whatever, that still left one hand free to cube.

Cubing, however, is not an activity that only involves the hands. It involves attention too, and that meant that I was constantly being distracted from everything else I should’ve been doing. That sucked.

I have a megaminx too, and my most recent timings average about 400 seconds. I don’t remember if that was better or worse than my timings years ago, but again, that’s not the point! Doing a megaminx solve or a 4×4×4 solve is a considerably larger time investment, not just because the solving takes five to ten times longer, but also because the scrambling takes that much longer too. The difficulty of fairly scrambling a megaminx without a computerised random number generator puts me off, which is a bit sad: I feel like I’m done with cubing and won’t improve any further, so I want to have fun with dodecahedroning instead (still with the knowledge I won’t even get near world record speeds!), but it’s harder to sink time into it.

I don’t know what to make of this. So I’m sick of cubing, but it’s a “fast” time killer so I do it anyway and I accidentally killed too much time.

2GLL ∩ ZZLL

Is this a thing? Probably not. It doesn’t seem very useful, because the preceding step would be simultaneously phasing and permuting corners, and I don’t think anyone’s ever bothered generating algs for that. That also corresponds to a coset space of 18 elements, so there wouldn’t be too many cases.

If we fix the corner permutation, 2GLL ∩ ZZLL is a subgroup of order 108. Thanks to symmetry there are a lot fewer cases. (m/n means there are n cases and I know m of them. It’s not a probability, because not all cases are equiprobable.)

Oriented corners (1/27): 3/3 cases.
Sune (8/27): 4/4 cases.
U (4/27): 3/3 cases.
T (4/27): 3/3 cases. (One case is shared with U.)
L (4/27): 2/4 cases.
Double Sune (2/27): 2/3 cases.
Bruno (4/27): 1/3 cases.

So that’s a total of 21 cases (not counting the trivial case), 16 of which I know, and the probability of hitting a known case is 43/54 ≈ 80%. (Update: now I’m at 19 cases out of 21, with the only two I haven’t bothered learning being the only-all-corners-misoriented cases, which also happen to have the longest algs.)

If we look at all the 2GLL cases, the breakdown is more like:

Oriented corners: 4/4 cases.
Sune: 8/12 cases.
U: 4/7 cases. (p = 1/2)
T: 4/7 cases. (Again, one case shared with U; p = 1/2)
L: 2/8 cases. (p = 1/6)
Double Sune: 4/5 cases. (p = 11/12)
Bruno: 2/7 cases. (p = 1/4)

26 out of 48 nontrivial cases covered, with probability 83/162 ≈ 51% of hitting a known case.

Man, I think it would be pretty fun to tally up all the algs I know. I recently picked up a bunch of COLLs too, but like I said before, some of them are plain useless. Only two out of six of the Sune COLL cases have short and very fast algs, so it’s on average better to just use Sune/Niklas and do PLL. (The diagonal-corner-swap PLLs can always be avoided too, which saves about a third of a move.) I’m not particularly intent on learning the whole set of COLLs because memorising things is hard; of the 33 nontrivial cases, I know 21 of them. Since CP/2GLL and COLL/EPLL are two different ways of splitting up the same edge-oriented last layer, there’s some overlap between the list of COLLs and the list of 2GLLs above.

Counting the PLLs and non-edge-preserving OLLs too, that puts the total number of last layer algs I know at about 60, and most of them are not what I’d use in a speedsolve in the first place. I’m kinda just learning COLL and 2GLL “for fun”, so there’s that, though COLL is sorta useful on larger cubes with the sandwich method.


But anyway, I wanted to mention something about antisymmetry. I think antisymmetry’s pretty cool.

Take the Rune algorithm, for instance. The effect of the move sequence is antisymmetric, but the move sequence itself (R U R’ U R’ U’ R2 U’ R2 U2 R) is not, which is kind of lame. On the other hand, consider this Sune + H perm alg: (R U R’ U’ (R’ U’ R) U (R U’ R’) U R U’ R’). Here, not just the effect is antisymmetric; the move sequence is too!

There’re some other examples, like U M’ U2 M U being the x-axis mirror of its inverse. This is an edge 3-cycle; by conjugating this with F2 or M2, we get an edge 3-cycle in one layer. There’s also Bruno (R U2 R2 U’ R2 U’ R2 U2 R), which is the z-axis mirror of its inverse.

R U2 R2 U2 R’ U2 R U2 R’ U2 R2 U2 R is another example that’s the z-axis mirror of its inverse.

R U R U R U’ R’ U’ R’ U’, an edge 3-cycle, is the (x−y)-axis mirror of its inverse. Incidentally, conjugating this with R U’ gives the standard 2-gen U perm alg, but the resulting move sequence is not antisymmetric.

Niklas (L’ U R U’ L U R’) is the x-axis mirror of its inverse.
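
(To check this sort of claim without a cube in hand, here’s a tiny Python sketch; it’s not a cube simulator, just string manipulation on move sequences, using the convention that an x-axis mirror swaps L and R and flips every turn direction. The helper names are mine.)

```python
def invert(alg):
    # reverse the sequence and invert each move
    def inv(m):
        if m.endswith("2"):
            return m
        return m[:-1] if m.endswith("'") else m + "'"
    return [inv(m) for m in reversed(alg)]

def mirror_x(alg):
    # x-axis mirror: swap the L and R faces and flip turn directions (face turns only)
    swap = {"L": "R", "R": "L"}
    def mir(m):
        face = swap.get(m[0], m[0])
        if m.endswith("2"):
            return face + "2"
        return face if m.endswith("'") else face + "'"
    return [mir(m) for m in alg]

niklas = "L' U R U' L U R'".split()
print(mirror_x(invert(niklas)) == niklas)  # True: Niklas is the x-axis mirror of its inverse
```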

F R’ U2 R F’ R’ F U2 F’ R (an L3C/COLL alg) is the (x−z)-axis mirror of its inverse.

This antisymmetry seems to make memorising the alg easier, because it effectively halves the amount of new information to memorise.


I’m apparently really bad at counting cases so some of the values I’ve reported before might be wrong. I don’t count mirrors separately (I just mentally swap the roles of my hands) but most websites do, so it’s hard to corroborate my values with external sources. Grain of salt, etc.


So there’s this double Sune with H perm edges alg I found with Cube Explorer that’s pretty cool. It goes (r U2 r’ U’ r2 R’ U R’ U’ r2 R’ U’ R’), and if you replace all the wide turns with single-layer turns, then this becomes just a double anti-Sune. Alternatively, we can rewrite it as a double anti-Sune with inserted M-slice turns (M’ R U2 M R’ U’ M2 R U R’ U’ M2 R U’ R’).

Executing this alg doesn’t require any regrips at all, which is extra cool. (Same holds true for the usual anti-Sune, actually, and this extends to applying the anti-Sune twice.) We can keep the left hand holding the LF and LB edges, and start with the right thumb on DR, index on UFR, middle on UR. Its inverse, which was what Cube Explorer actually found, does require regrips; this disparity comes from how fingertricking a U’ turn is harder than a U turn with the right index finger.

It could be done with just R and U turns, but that’d need 15 moves instead of only 13.


I am pleased to announce that since I last posted about cubing, my left ring fingernail snapped, my right index fingernail chipped off, and both my left pinky nail and my right thumb nail have cracks along their sides.

Actually, no, I’m not pleased at all. The moral of the story is that speedcubing, stickerless cubes, and long nails do not go together. I’m assuming the presence of stickers actually provides a little bit of cushioning for the nails impacting ABS at high speed, because this is the first time my nails are getting worn down and broken so rapidly, but it could just be that I’ve been spending a lot more time on cubing than I ever have before.

PAD, 2015 edition

Sorry if this seems a bit uninspired: I put it off for a whole week, only started this morning, and ended up with nothing I hadn’t already come up with a few months ago.

Remember when I asked about noncontrived series for π with an initial term of 22/7? Well, here’s a not-quite-example.

\displaystyle\pi=\frac{22}7-4\sum_{k=0}^\infty(-1)^k\frac{4k+1}{(2k+3)(2k+5)(4k+7)(4k+11)}

Obtained from forcing a splitting ratio in transforming the Gregory-Leibniz series so that the first term is 11/14 (22/7 divided by four) and choosing the free parameter so that the quadratic coefficient in the numerator vanishes. It’s certainly contrived! However, it’s rather coincidental that the first two terms under the summation actually cancel each other out (1/(3⋅5⋅7⋅11) = 5/(5⋅7⋅11⋅15)), so we could equally well rewrite the above as

\displaystyle\pi=\frac{22}7-4\sum_{k=0}^\infty(-1)^k\frac{4k+9}{(2k+7)(2k+9)(4k+15)(4k+19)}.
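
(Here’s a quick numerical sanity check of both forms, as a Python sketch; 500 terms is plenty, since the summands decay roughly like 1/k³.)

```python
from math import pi

def pi_series(shift=0, terms=500):
    # shift=0 gives the first form; shift=2 gives the re-indexed second form
    total = 22 / 7
    for k in range(terms):
        j = k + shift
        total -= 4 * (-1) ** k * (4 * j + 1) / ((2 * j + 3) * (2 * j + 5) * (4 * j + 7) * (4 * j + 11))
    return total

print(pi_series(0) - pi, pi_series(2) - pi)  # both differences are on the order of 1e-9
```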

PLL timings, more last layer notes, and some more FMC

Preface: Maybe you’re sick of reading about all this cubing stuff already, but I’m not, so you’ll have to bear with it. Usual programming will resume eventually.


The G perms are not included because I still can’t immediately recognise them. For the rest, I did ((alg) y)×12 or something similar, to avoid “optimisations” when the first and last moves are the same. Times given are for doing the algorithm 12 times (along with 12 cube rotations). I have no idea how large the standard deviation is, so maybe take that to be around 1.5 seconds. (I could do more runs to get the sample standard deviation, but all this speedcubing has been filing my right index fingernail down faster than it grows.)

35.60 U perm, cw, ftm/stm-optimal (F2 U M’ U2 M U F2)
35.18 U perm, ccw, ftm/stm-optimal (F2 U’ M’ U2 M U’ F2)
33.08 U perm, cw, stm-optimal, 〈M,U〉 2-gen (M2 U’ M’ U2 M U’ M2)
35.47 U perm, ccw, stm-optimal, 〈M,U〉 2-gen (M2 U M’ U2 M U M2)
38.77 H perm, stm-optimal, 〈M,U〉 2-gen (M2 U M2 U2 M2 U M2)
43.10 H perm, ftm-optimal including AUF (L R U2 L’ R’ y L’ R’ U2 L R)
45.70 H perm, ftm/stm-optimal excluding AUF (R2 S2 R2 U R2 S2 R2)
44.25 Z perm, qtm-optimal (L F U’ R …)
40.77 Z perm, qtm-optimal (F R U R’ …)
50.63 Z perm, ftm/stm-optimal (M S2 M’ D’ M2 u M2)
41.41 A perm, ccw, ftm/stm-optimal (R2 B2 …)
41.39 A perm, cw, ftm/stm-optimal (R’ F R’ …)
35.16 A perm, ccw, ftm/stm-optimal (l’ R’ D2 …)
36.98 A perm, cw, ftm/stm-optimal (l’ U R’ …)
49.27 E perm (l U’ R’ D …)
45.32 F perm (R’ U R U’ R2 …)
35.84 T perm, ftm-optimal, right-handed, 〈U,R,F〉 3-gen (R U R’ U’ …)
39.03 T perm, ftm-optimal, left-handed, 〈U,L,F〉 3-gen (L’ U’ L U …)
35.81 J perm, unflipped, qtm-optimal, 〈L,U,R〉 3-gen (L’ U R …)
37.95 J perm, flipped, qtm-optimal, 〈L,U,R〉 3-gen (R U’ L’ …)
42.91 J perm, unflipped, qtm/ftm-optimal (B2 L U L’ …)
41.57 R perm, ftm-optimal, UBL/UL block (R U2 R’ U2 …)
40.96 R perm, ftm-optimal, UFL/UL block (R’ U2 R U2 …)
51.40 N perm, right block, ftm-optimal, 〈L,U,R〉 3-gen (R U’ L …)
51.35 N perm, left block, ftm-optimal, 〈L,U,R〉 3-gen (L’ U R’ …)
39.00 V perm, ftm-optimal, (R’ U R’ d’ …)
40.24 Y perm (F R U’ R’ …)

Writing CSS is hard, so deal with this stupid layout. I presume this is quite slow compared to “real” speedcubers, and there are some aberrations like my U perm being the fastest even though it’s not an 〈R,U〉 2-gen algorithm, and all the serious speedcubers seem to swear by 2-gen algorithms. (Note: “2-gen” means it’s in a subgroup generated by two moves.) Cases I know multiple algorithms for have been listed multiple times (U, H, Z, A, T, J); there are slight surprises like the standard A perm being slower to execute than my rotated version, but that’s probably because the rotation conjugated with the y-axis rotation I applied after every algorithm simplified to a single z-axis rotation. The net effect is that the average amount of cube rotating in this speed test is less than what a real solve would entail.

(Friendly reminder that in cubing notation, up/down is the y-axis and front/back is the z-axis, just in case Rubik’s cubes weren’t confusing enough.)

Sorting these by speed, it goes like this: U perm (both types), A perm (anticlockwise cycle), J perm (right-handed), T perm, A perm (clockwise cycle), J perm (left-handed), H perm, V perm, Y perm, Z perm, R perm (both types), F perm, E perm, N perm (both types), and finally G perm (all four types).

Well, G perm is only slow now because I still suck at recognising it (and using dumb mnemonics…), but given that it’s the most common PLL (with a 2/9 chance of coming up), I’ll probably get better over time.


The very tiny amount of corner control I exert during corner orientation (by keeping a 2×2 block if it exists) restricts the PLL in special cases to A/J/V, all of which are among the faster ones execution-wise for me, and V also happens to be the rarest. (Probabilities of occurrence are 1/3, 1/3 and 1/6, respectively.) It probably doesn’t make sense to learn all the cases because it’s pretty rare to begin with. Unless I switch to a 2-look/3-look last layer that starts by solving a 2×2 block…

How would that work? I’ve been pondering a bit since I blogged about it. Let’s assume edge orientation is done, as usual. If there are no misoriented corners, go straight to PLL, which can be done in two looks with a bare minimum of {A,U,J,V}, where A, J, V are already needed later on anyway. Otherwise, there is at least one misoriented corner. Picking any such corner as a reference, the two edges that should be adjacent to it can be in 12 possible (relative) locations. This gives a total of 24 cases to learn, or 12 if mirrors aren’t included. Not too big, and I bet most of the cases can be handled with a single algorithm applied in different locations.

The next step is a combination of L3C and “L3C plus permutation parity”. The former is well-studied and has 22 cases. As for the latter, the two swapped edges give us a fixed orientation in space, so there are 3²⋅3 = 27 cases. If you exclude mirrors, L3C has 12 cases and L3C+parity has 15 cases. (None of the L3C+parity states are symmetrical, except for the V perm and V perm + diagonally misoriented corners; similarly, in L3C, only the two-corner pure rotation states have reflection symmetry.)

To elaborate a bit on splitting PLL into CPLL and EPLL, H/Z can be done with two U perms (apply any U perm, and there will be three wrong edges left). All the diagonal-corner-swap cases can be done with V followed by U; for the Y perm, the only thing to “watch out for” is that this can end with an H perm instead of a U perm if you swap the wrong pair of corners. The adjacent-corner-swap cases can be classified by edge permutation, where F and T have Z-permuted edges (apply any J perm to fix corners), and the rest have U-permuted edges (apply any A perm to fix corners) or correctly-permuted edges (exactly the A perm). (Actually, all the cases in the U-permuted edges category can be handled with the J perm, but the previous sentence gives a one-liner summary that completely avoids special-casing.)

This has a net total of 40 algorithms (not counting mirrors) for a purely 2-look (edge-oriented) last layer. For comparison, OCLL/PLL has a net total of 20 algorithms, also not counting mirrors. Another problem here is probably that recognition of the first half involves finding the two edges that should be adjacent to the reference corner, and since the edges are oriented, the colours you need to see are on the side faces rather than the U face.


[excised into new post for organisation and readability purposes]


On another note, I had a go at FMC again. (I got sick of failing at BLD.)

Scramble: F2 L2 F2 L2 U B2 D F2 L2 U2 R2 D’ F2 L D L’ D2 L R2 F’ D’ U2
(actually Cube Explorer generated a different scramble, but I screwed up while applying it, so I manually scrambled it some more, then fed it to Cube Explorer to get this)

First attempt (44):
B’ D R2 B’ U2 D2 L2 D (8; block)
U’ L U D’ B’ D (6; edge orientation)
B2 L’ B2 L2 (4; one corner-edge pair)
D B D’ B D B2 D’ (7; edge permutation using Sune)
U’ B’ L2 B’ R’ B L2 B’ R B2 (10; corner 3-cycle)
D’ B2 D U L R B2 L’ R’ (9; H perm)

Second attempt (not counting intermediate attempts that used more moves) (41):
B’ D R2 B’ U2 D2 L2 D (8; block)
U’ L U D’ B’ D (6; edge orientation)
B2 L’ B2 L2 (4; one corner-edge pair)
R B R’ B R B’ R’ B R B2 R’ (11; corner orientation using double Sune)
(D L’ D’ L)×3 (12; corner swaps)

Third attempt (31):
R B2 (2; 2×2×2 block with U2 pre-move)
F’ D2 R F’ (4; 2×2×3 block)
R D2 R’ D’ R2 D2 R2 U2 (8; first two layers and undoing premove)
F L2 B’ L’ B L’ F’ (7; edge orientation using wide anti-Sune)
L’ D’ L D’ L D L2 D L2 D2 L’ (11; 2GLL using Rune)

This was actually an unusually good run, though I also exceeded the one-hour limit by quite a lot. I’m running a search for the optimal F2L and Cube Explorer’s getting stuck at depth 14, so my 14-move F2L is actually optimal. I didn’t bother trying to fix the bad edges after making the block, because the F2L pieces happened to magically line up and, well, an optimal F2L happened. The last layer part is coincidentally also 14 moves optimal, so there’s a wastage of four moves in my solution. That said, I exhausted all the two-edge-flipping OLLs I knew and the right wide anti-Sune was the only one that left me with a last layer I already knew how to solve optimally.

I don’t have a “very huge” repertoire of last layer algorithms (the edge 3-cycles, some corner 3-cycles, the PLLs, random short OLLs, random 2GLLs, random COLLs, etc.) so this “brute-force through algorithms until something works” technique tends to fail more often than it works. Petrus’s site has a list of forty short-ish ZBLL algorithms such that all edge-oriented last layer cases can be solved with at most two of these algorithms. Sounds kinda useful, but >memorising 40 algorithms. It took me seven years to get around to memorising all of OCLL and PLL, and that’s only 28 algorithms!

Oh, and while I was trying to practise the Roux method, all the blocks just lined up, I switched back to Petrus, and I got a 30-move solve without even backtracking. That probably wasn’t a good scramble.

BLD

And now that I’ve played with FMC, it’s time to move on to blindfolded solving. (Except without the blindfold. I just close my eyes.)

I’ve been trying to learn the 3OP method, where the orientation of all the pieces is solved first, then conjugated 3-cycles are used to permute all the pieces to their correct locations. (Caveats: it can be the case that the edge permutation parity and corner permutation parity are both odd, in which case an extra algorithm is needed to swap two corners and two edges; also, while the two-pairs-of-swapped-corners (or of swapped edges) case can be handled with two 3-cycles, the 3OP method prefers to solve them directly.)

Why 3OP? Because it doesn’t require learning any new algorithms! I’ve long since known how to create a conjugated commutator to flip any two edges, with an additional four-edge-flip algorithm I picked up while trying to learn Roux. Corner rotation is a bit more annoying (and where I mess up a lot), but doable. The permutation 3-cycles as well as the parity algorithm are all from PLL.

Anyway, so far I’ve managed to (not very reliably) memorise and execute the solve up to permuting the corners, but not the edges. Well, considering the amount of information involved, it goes something like this:

Edge orientation: 11 bits (exactly)
Corner orientation: 7 log2 3 ≈ 11.09 bits
Corner permutation: log2 8! ≈ 15.30 bits
Edge permutation: log2(12!/2) ≈ 27.84 bits

So yeah, edge permutation is more entropic than the whole cube’s orientation (which I already have some trouble doing), and contributes over 40% of the memorisation load. Or it would, if not for the fact that humans tend to be inefficient at memorising raw data. There’s already a lot of “data expansion” anyway.
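
(The arithmetic behind those numbers, as a short sanity check; nothing here beyond what’s listed above.)

```python
from math import log2, factorial

eo, co, cp, ep = 11, 7 * log2(3), log2(factorial(8)), log2(factorial(12) // 2)
print(eo + co + cp + ep, ep / (eo + co + cp + ep))  # ≈ 65.2 bits in total; edge permutation is ≈ 43% of it
```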

My memorisation system is exactly the same as in the 3OP guide, except I tag all the edges with integers from 0 to 11 and the corners from 0 to 7; the guide uses one-based indices (and a different corner ordering). Edge orientation is memorised as a sequence of which edges need to be flipped, and corner orientation is done by grouping corners into ccw/cw pairs and cw/cw/cw or ccw/ccw/ccw triplets. Permutation (for both corners and edges) is done by memorising the cycles.

Memorising as cycles is inefficient if you judge solely by the number of bits needed, but it’s very easy to determine which pieces to apply a 3-cycle to and to update the permutation mentally after applying said 3-cycle. In case the first three pieces in the cycle are inconveniently located, one can also rotate the whole cycle mentally (e.g. from (0,3,4,1) to (1,0,3,4)) without affecting the actual permutation. Applying a 3-cycle to the first three pieces in the cycle drops the second and third numbers from the cycle, so the amount of stuff to memorise reduces as the solve progresses. (The pattern continues for larger cycles, but larger cycles need more setup moves (more setup moves ⇒ more memorisation) and I haven’t memorised any 5-edge-cycle algorithms anyway.)
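
(Here’s a toy sketch of that bookkeeping in Python, with a made-up example permutation and my own conventions: perm[i] is the slot where the piece currently in slot i belongs, and a 3-cycle (a, b, c) sends the piece in slot a to b, b to c, and c to a.)

```python
def apply_3cycle(perm, a, b, c):
    # move the piece in slot a to b, b to c, c to a
    perm[a], perm[b], perm[c] = perm[c], perm[a], perm[b]

def cycles(perm):
    # read off the permutation in cycle notation, skipping solved pieces
    seen, out = set(), []
    for start in range(len(perm)):
        if start in seen or perm[start] == start:
            continue
        cyc, i = [], start
        while i not in seen:
            seen.add(i)
            cyc.append(i)
            i = perm[i]
        out.append(tuple(cyc))
    return out

perm = [3, 0, 2, 4, 1, 5, 6, 7]   # one 4-cycle, everything else solved
print(cycles(perm))               # [(0, 3, 4, 1)]
apply_3cycle(perm, 0, 3, 4)       # cycle the first three slots of the memorised cycle
print(cycles(perm))               # [(0, 1)] -- two numbers dropped, two pieces solved
```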

The way I do this is to visualise where the three pieces of a cycle are by tapping them, applying and memorising a bunch of setup moves, applying the algorithm proper, then undoing the setup moves. Sometimes I forget the setup moves when trying to undo them (especially if they involve whole-cube rotations) and then the whole attempt is screwed.

One gigantic problem with BLD is that I can’t tap into my visual memory to apply algorithms, which is something I’ve been relying on since I learnt how to solve a cube. I can’t even tell if I’m doing something right or wrong until the very end, when I look at the cube and go “dammit, I screwed up again”. (This happens about two-thirds of the time. Actually, more like 90%.)

Oh, and my stickerless GuHong still has the DaYan logo sticker on the white centre (it’s not completely stickerless), so that’s one face I can identify by touch, which is actually cheating. Not that it helps much; I can’t rely on it to get the cube back to the correct spatial orientation when I forget which rotations I used as setup.

It’s only been two days since I started, so maybe I’ll be able to actually execute a full solve eventually. Possibly even under ten minutes!

FMC

Scramble: R’ F U D L’ B U’ D L2 B U R2 D2 R2 U2 L’ D2 R’ D2 L2 F2 L R’ D R U D L’ F2 R F’ D’ R’ F2 R’ F2 R U2 F2 U2 D2 R F2 R’ B R F B R L D R L’ U’ D2 F’ U2 B U2 B’ R2 B R2 L2 D2 F’

(That’s three two-phase scrambles concatenated, just so you know I wasn’t cheating or anything.)

Solution (45 moves):
B’ U’ B’ L’ F U L2 U’ R D2 F D U F U’ (block)
B’ D B F’ R’ F (edge orientation)
R D’ R’ D2 R D2 R D R2 D R2 D2 R’ D2 R (first two layers + edges; I actually had no idea what was going on here)
L2 U L’ D2 L U’ L’ D2 L’ (last three corners)

I tried. For what it’s worth, building the block can be done in at most 9 moves; say, R’ L U R2 U’ F U’ B2 L2. (I only checked the same block I used in Cube Explorer, so presumably among the other 11 possible block choices, some of them will use fewer moves.)

There was already a 2×2 block when I was done with edge orientation and I tried both ways to extend it into F2L, but this gave the shortest solution. I also tried the inverse state, but that didn’t help either.

And then I hit the (self-imposed) one-hour time limit.

FMC is hard.

Things

I’ll be taking Algebra I this coming semester, and maybe I’ll start to actually understand group theory then.

Though now that I’m looking at the timetable, wtf, MA2202S has two three-hour tutorial+lecture sessions and MA2214 Combinatorics and Graphs I has 6-8 pm lecture sessions. This is awful, but maybe I’ll pull through somehow. The other maths modules up for consideration this semester are MA3205 Set Theory and MA3220 Ordinary Differential Equations, though there isn’t really a very wide choice because most of the interesting stuff depends on Algebra I or Analysis II; other than the four aforementioned modules, the remaining options are MA3236 Non-Linear Programming and MA3264 Mathematical Modelling.

For that matter, I’m actually done with the maths major requirements at the 2000 level, except for being short of Algebra I. Anyhow, I’ll have to start filling in all the non-maths requirements too, so I guess I’ll be taking some chemistry and computer science modules to fill up the science-but-not-maths area, and the rest will be… bleh. I kinda don’t really want to think about those, so maybe let me just postpone that for two more weeks, when module bidding actually starts.

Looking at the chem modules, it looks like the level-1000 ones all have five hours of lectures per week. And if I’m going to take Inorganic Chemistry 1, that’ll give me a gigantic seven-hour block of non-stop lectures. Maybe I should take Organic Chem 1 next semester instead. Carbon’s a pretty cool guy and it doesn’t afraid of anything.

There’s CS1010S and CS1101S, which are both called “Programming Methodology”, but according to the descriptions, the latter has a focus on functional programming, while the former is for imperative programming. To be honest, I don’t really get the distinction; apparently things like map and filter belong to functional programming, but I use them a lot in what I believe is an imperative style. But anyway, these two modules conflict with each other, and the higher-level CS modules seem to mostly depend on the former. You know that xkcd strip on dependencies? That’s pretty much what I’m seeing.

(Maybe if apt can solve sudoku, it can solve module dependencies too! Or is that being too optimistic?)

At any rate, that’s four modules out of five covered, assuming bidding goes well and all that stuff. The fifth will probably be some “general education” or “Singapore studies” module, which I’ll decide later.

Next semester (the one starting next year) will have MA3110S Analysis II and lolenglish, so that’s two modules out of five set in stone. (Yeah I’m kinda predicating this on my passing of English this semester, but that shouldn’t be a problem, now that I know what it’s like!) Conditioned on being able to get CS1010S this semester, I’ll be taking CS1020 Data Structures and Algorithms I next semester. Or there’s the six-MC module CS2020 Data Structures and Algorithms Accelerated, which depends on an A- grade on CS1010*. Either way, that’ll fulfil the faculty requirements of 16 MCs in at least three different categories. (I already took Physics I, so that’s 4 MCs and one category down.)

The remaining two modules to take would be MA3233 Combinatorics and Graphs II along with some other random general education module, maybe. (MA3201 Algebra II is an option too.) I don’t really like this.

I have six non-science modules to take and I don’t really want to take more than one per semester, but there’re only four semesters left. Six semesters if I’m going the Honours route, which I probably will be, but I think applying for Honours depends on having cleared all the non-science requirements as well. This semester will see me taking two, which still leaves four modules over three semesters. Thanks to the pigeonhole principle, either I’m wrong about the Honours thing and I can take one in the fourth year, or there must be some subsequent semester where I’ll still have to take at least two non-science modules.

If I’m not taking Set Theory or ODEs this semester, which is most likely the case, I’ll be taking them two semesters later. Plus MA3265 Introduction to Number Theory and CS4232 Theory of Computation. (CS4232 actually only depends on any MA2xxx module, so I could take it this semester if I wanted, but I’ll wait and see.) Plus one random module. Or actually, maybe I’d rather fit MA3209 Analysis III here. See two paragraphs below.

And then the semester after that (i.e. the sixth semester) would be whichever of MA3201 and MA3233 I didn’t take earlier, along with MA3111S Complex Analysis I. At this point I’d be done with the level-3000 major requirements, and then I have to care about SPM completion requirements. I missed out on MA2101S, so I’ll most likely have to take either MA5203 Graduate Algebra I or MA5205 Graduate Analysis I (or even both) for SPM. The former depends on Algebra II, and the latter depends on MA4262 Measure and Integration which depends on Analysis III.

I could streamline Graduate Algebra I into the fifth semester if I take Algebra II in the fourth, but then that’d push Combinatorics and Graphs II out to the sixth. If not, it’ll have to be in the seventh semester if I take it at all. As for Graduate Analysis I, the earliest possible scheduling would be to take Analysis III in the fifth semester, and Measure and Integration in the sixth, and then the module itself in the seventh. If I don’t take Analysis III in the fifth semester, that’ll force Graduate Analysis I beyond the first four years of study.

tl;dr:

Third semester: MA2214, MA2202S, CS1010S, CM1111, lolenglish, wildcard.
Fourth semester: MA3110S, CS1020, MA3201 or MA3233, lolenglish, wildcard. (Possibly CM1121 Organic Chemistry 1.)
Fifth semester: MA3205, MA3220, MA3265, CS4232 or MA3209, wildcard.
Sixth semester: MA3111S, MA3233 or MA3201, MA4262 (if possible), undecided, wildcard.
Seventh, eighth semesters: ?????

Funny how I was going to write a post about the modular group, and then it suddenly went into modules to take in the coming semesters.

Observations on posting on reddit

I’ve had a reddit account for about half a year now and I semiregularly post in /r/askscience, because, y’know, they’re cool folk. Mostly. I’m not going to out my user name here, but it probably wouldn’t be too hard to find it. (Filter threads by flair, scan through a bunch of posts and click through the posters, corroborate with information written in the rest of this post, etc.)

My speciality is mathematics, and it goes without saying that most of my posts are in that area. I do have some amount of domain-specific knowledge in computer-y stuff, so the majority of the rest of my posts are under the Computing label. Off the top of my head I answered a particle physics question once, and that was just bullshitted from some general knowledge I have of basic particle physics.

I prefer to not weigh in on topics I’m not an expert in and I’d rather not try to present an opinion or statement unless I’m reasonably sure it’s valid. (Which makes my post on particle physics really suspect, because I think it’s the kind of post an AU version of myself would get mad at for being all pop science if I’d chosen to do physics instead of maths.) There’re all kinds of people who read /r/askscience and it’d be horrible if I wrote something incorrect and people actually believed me, because then I’d be the one contributing to how pop science is killing actual science.

So, on to the main topic.

reddit’s Markdown variant is horrible for posting maths stuff in. Almost every poster who writes equations gets tripped up by the superscripting syntax, which is not very well thought out. You can use brackets to signify a bunch of arbitrary characters to be superscripted… as long as said characters don’t contain any right brackets. (Pop quiz: what’s a common notation for high-order derivatives of a function?) And of course, that’s only for the minority of people who’ve thought as far as putting brackets at all; very often the equations just show up as superscripting half of the whole equation rather than just the exponent, because reddit treats everything up to the next whitespace character as part of the superscript, unless the first character after the caret is a left bracket. I think these people either never look at what their post looks like or have given up on trying to get reddit to render things meaningfully.

There’s a partial workaround in using Unicode lookalikes as bracket substitutes, but this makes me feel dirty inside because it’s just wrong. You can’t escape the brackets in the superscript syntax, even though Markdown usually supports escaping characters everywhere else by prepending a backslash.

Proposed solution: TeX in some form, like maybe MathJax. But 99.999% of the rest of reddit have no use for TeX. Is catering to niches worthwhile? Or heck, just let us write HTML. I still can’t remember the order of square brackets and parentheses to use for writing a link in Markdown, but I sure remember <a href>. “Lightweight” rich-text formats like Markdown ironically have more cognitive load because you have to keep the parser state in your mind while writing in it.

/r/askscience is sort of special because it’s (allegedly) one of the more heavily moderated subreddits in reddit, and other than submitted questions sometimes being an abomination of unintelligible not-quite-English words, I think the post quality is generally decent. Some time last year (?) it became one of the “default front page subreddits”, and now that highly-voted threads receive a lot of visibility, those threads also have a much higher proportion of inane comments than the average /r/askscience thread. I don’t remember if the situation was as dire before it became a default subreddit, but it probably wasn’t.

I have no idea how comment moderation works in /r/askscience, or for that matter, anywhere else in reddit. I used to assume that all the visible moderation was done by users upvoting and downvoting comments along with the occasional [deleted] comment, but there quite often are “invisible” comments that contribute to the comment count but are completely hidden. (Not just the “click to expand comment” kind of hidden usually seen with heavily-downvoted comments, but really completely hidden.)

So, I have 39 posts at the moment, and my comment karma is at a staggeringly high… 41. As I understand it, comment karma is a thing that goes up when people upvote you. This also means I’ve received barely more than one upvote per comment, which is pretty off-putting because I see posts not-quite-answering the question with a one-liner response receiving more upvotes simply because they were earlier. So maybe getting karma is a bit like a race on who hits F5 fast enough to get new questions first. It also penalises people who spend time on writing comments or are slow writers (e.g. myself), though this is a lot less harsh than on a certain fast board of 4chan. (Or, I guess part of it is that I refuse to post an answer if I feel that someone else has already adequately answered it.)

Another thing to note is that not as many people are into maths as they are into, say, physics or astronomy. Despite being on the extreme end of the “hard sciences”, mathematics seems too hard for most people to digest. So that’s another reason my posts don’t have much visibility, and hence the low upvote count.

But I think the main contributing factor to gathering upvotes in /r/askscience is whether you’re a panelist, which shows up as a colour-coded rectangle containing your field of research in it, next to your user name in every post. It shows up every time, so even if you’re an engineer who’s not really qualified to talk about (for example) number theory, people who’re /r/askscience regulars will automatically mentally connect your flair (that’s what the coloured rectangle is called) to your post and assume you know what you’re talking about, and the upvotes just come flowing in.

The tl;dr of it is that I’m not optimising my posts for upvotes and consequently, I’m not getting a lot of upvotes. And then for some reason I’m kinda upset about this, even though it’s really my own fault.

On the comment layout, I think it’s too linear, which leads to horrible things happening when people optimise their posting behaviour for visibility (because duh, that gives you upvotes and thus karma), like replying to the top-level comment with the highest score instead of making a new top-level comment, because you can ride on its coattails for instant visibility. And then the top-level comment has over 9000 children because almost everyone else came to the same conclusion, so the net effect is that if you write a top-level comment, you’re not just contending with other top-level comments for visibility. You’re competing with the whole chain of comments which will continue to appear above your new post, and since readers have to scroll through all that junk to even see your comment, your post has low visibility and will not receive any significant number of upvotes.

(Not a personal experience, because most of the time the superpopular threads are about things I can’t or don’t want to comment on, but I suspect this issue turns off some people. And fwiw, I usually lose interest and close the tab by the time I scroll halfway through a gigantic tree of replies to a single top-level comment.)

A tree-like layout might be cool, but that’d most definitely require horizontal scrolling to function reasonably, and horizontal scrolling is a gigantic no in user interfaces.

4chan’s threads are completely linear (there isn’t even a concept of parent or child posts as in reddit), but at the same time threads don’t go beyond 600 posts very often and most posts are short, so inline post expansion makes them plenty readable. Someone tried to make a 4chan archive that rendered threads in a tree-like manner and I think he’s completely mistaken as to how people post on 4chan, because threads tend to be more like DAGs than trees, and trying to force a tree model onto a DAG doesn’t go well. There’s the occasional loop, especially during GET seasons, but I haven’t seen much of them lately. This paragraph doesn’t really have anything to do with reddit but I already wrote it, so you have no choice but to read it.

^3, part 3

Will I ever finish writing up part two? Who knows. It probably isn’t worth reading anyway; it was just an excursion into trying to use the Roux method and failing terribly in terms of both move count and speed.

I saw that the latest WCA regulations had introduced a 60-move limit on the fewest moves challenge (FMC), because some guy kept doing 69-move solves and some other people were doing plain solves without even optimising for the number of moves. (CFOP optimised for finger tricks probably comes to around 80 moves per solve.) I’ve got a tiny amount of cubing theory in my head, so I figured I’d try this out. I manage to get within 60 moves around 80% of the time without backtracking or making notes on paper, so the 60-move limit is kinda lax, really.

I filled out my algorithm repertoire with a bunch of COLL (orient and permute corners, preserving edge orientation) algorithms, along with the last OCLL (orient corners, preserving EO) algorithm I hadn’t memorised a short version for. The 13 PLLs (or equivalently, the 72 cosets of the U subgroup with respect to the PLL subgroup) can be classified into the following three categories:

All corners positioned correctly (1/6): solved (1/12), U perm (2/3; 9f*), H perm (1/12; 10f*), Z perm (1/6; 12f*). Average 8.83333f*. (Edit: H perm is nine moves optimal when combined with a U quarter turn, so this should be average 8.75f*.)

Adjacent corner swap (2/3): A perm (1/6; 9f*), J perm (1/6; 10f*), R perm (1/6; 13f*), G perm (1/3; 12f*), F perm (1/12; 14f), T perm (1/12; 14f*). Average 11.666667f, 11.583333f*.

Diagonal corner swap (1/6): E perm (1/6; 15f), N perm (1/6; 14f*), V perm (1/3; 14f*), Y perm (1/3; 17f). Average 14.833333f, 13.666667f*.

(Those marked with an asterisk are the cases I know FTM-optimal algorithms for, though for the H, Z, and J cases I usually use longer versions for speed purposes.)

I haven’t entirely familiarised myself with the G permutation algorithms yet, but even counting those, the only category for which there’s zero chance of me falling into a case where I don’t know the optimal algorithm is the first category, which also has the lowest mean move count. Two-thirds chance of hitting the shortest nontrivial PLL, and both H and Z perm can cancel any non-U/D quarter turn. (This happens with asymmetrical algorithms for symmetrical states: you can choose which direction to apply the algorithm from without affecting the result.) This makes permuting the corners ahead of time useful, but it’s also of limited use because this can lengthen the COLL algorithm by more than it’d be expected to save.

For example, Eve (R U2 L’ U’ R’ U L R U2 R’) takes two more moves than the shortest corresponding OCLL (and using it would reduce to an adjacent-corner-swap PLL, for a total saving of ~5 moves which wastes ~0.8 moves), and Dy (R U2 R2 L’ U2 L U L’ U2 R2 U’ L R’) takes four more moves than the shortest corresponding OCLL (headlights; you get an adjacent-corner-swap PLL rather than EPLL, but this still saves ~1.5 moves). I’m not sure why I ever bothered learning Dy, now that I’ve got numbers crunched showing that it’s useless most of the time.

Sometimes when I’m stuck I just throw Niklas (L’ U R U’ L U R’) at it from all eight directions and see if I get something I have an algorithm for. This tends to be a waste of time because I don’t have a lot of 2GLL algorithms in my head.

Oh, I also memorised a few 2GLL algorithms: Benny (+mirror, +inverse), Rune (+inverse). I still can’t immediately recognise them, though I’ve had to come up with bizarre mnemonics like “Benny is Sune but with that edge next to the pair of a side colour” and “likewise for its inverse, except this time it’s anti-Sune that has the edge next to the pair with a side colour”. It’s uh, not very fast pulling these mnemonics out of memory, not to mention that I don’t have any muscle memory for these algorithms. (Rune is sort of a hybrid between Sune and Bruno so if I let my muscle memory run wild, my fingers end up executing the wrong algorithm.)

I’m bad at memorising things, so I think I’ve long since plateaued in cubing speed; for FMC where speed matters a bit less, I can rely on my weird mnemonics more. It seems very roughly that if you reduce the number of looks/steps, the number of algorithms to memorise grows exponentially, and the 7 OCLLs + 21 PLLs for a two-look last layer are all my brain can handle. Well, 17 PLLs, not counting the four G perm algorithms that haven’t been committed to muscle memory yet. Or only 5 OCLLs and 12/13 PLLs, not counting mirrors and inverses.

The full set of 57 OLL algorithms (edge orientation and corner orientation) is too large for me to remember and quickly recognise. My main method does edge orientation up front, but in case I miss a pair of bad edges in the last layer (yes, this happens to me sometimes), I use a two-look OLL instead. I do remember a few OLL cases (#20, #28, #33, #37, #43/44, #45, #46, #57), but it’s hardly something to rely on. Other than #43 and #44 being mirrors of each other, the rest of those cases are symmetrical and thus show up less often than the asymmetrical cases. My rationale for learning them is roughly this: #20, #28 and #57 are ELL algorithms as well (they don’t affect corners); #43/44 and #45 are six-move sequences I learnt way back for EOLL; #33 and #37 are part of the T perm algorithm; and (the inverse of) #46 is part of the Z perm algorithm I use. These are all fewer than ten moves (other than #57) and the symmetry makes recognition easier. (There’s no “this is the left-handed case and for this turn R clockwise first” mnemonic bullshit involved!)

Anyway. Even for FMC I’m probably plateauing too, because I refuse to learn whole sets of hundreds of algorithms just to shave off a few moves in a competition format I’ll likely not be participating in, ever. At the same time, I’m not good with heavily intuition-based methods like Roux, and even that requires a whole bunch of algorithms for the last four corners.

On another note, there’s a pretty big problem with memorising large sets of algorithms. OCLL and PLL (roughly) correspond to coset spaces of cardinalities 27 and 72 respectively, both of which are relatively tiny. This means that within about 72 log 72 ~ 310 solves, all PLLs will be covered with reasonable probability. (Naturally, this goes for the OCLLs too.)

This does not scale well.

COLL corresponds to a coset space of cardinality 162, so covering this would take ~820 solves. 2GLL corresponds to a coset space of cardinality 324, which would take ~1900 solves to cover. The whole ZBLL is a coset space of 1944 elements, and covering that would take ~15000 solves.
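
(Those figures are just the n log n coupon-collector estimate; the exact expectation, if we pretend all cases are equally likely, is n times the n-th harmonic number, which comes out a bit higher: roughly 350, 920, 2100 and 16000 solves respectively. A quick check:)

```python
from math import log

for name, n in [("PLL", 72), ("COLL", 162), ("2GLL", 324), ("ZBLL", 1944)]:
    exact = n * sum(1 / k for k in range(1, n + 1))   # n * H_n, the exact coupon-collector expectation
    print(name, round(n * log(n)), round(exact))
```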

If your preferred route of rote memorisation is through repeated practice solves, you will have no choice but to dedicate multiple hours of your life every day to solving cubes just so “unused” algorithms don’t get forgotten.

Is there a more even split between the coset space sizes than OCLL/PLL? The only option is for sizes of 36 and 54, but I’m not sure how that’d come about. Maybe edges + one corner, followed by L3C? But wait, that’s also a 72-27 split.

Addendum (2015-07-07):

So if we want a 36-54 split, the obvious way is to take a 72-27 split and somehow double the size of the second coset space. So maybe instead of L3C, we could have “last three corners and permuting the UR, UB edges”, where the first step is solving a 2×2 block on the last layer. There’s a reasonable amount of freedom in choosing which corner to build a block off (four choices there!), and I suspect that’d reduce the algorithm count involved, but then these coset spaces inherently have less symmetry and so that’d increase the algorithm count.

Like, PLL is 72 cosets, sure, but thanks to lots of symmetry it works out to be only 13 algorithms. (The symmetries involved are rotation, reciprocation, and reflection.) Stages like L3C (27 cosets) disable rotation (almost) entirely, and has one less line of reflection to use. Actually, I’m not sure what the point I’m trying to make is, because L3C has only eight algorithms. The Speedsolving.com wiki page just looks long because it lists inverses and mirrors too.

Or maybe that’s my point. Most of the PLL cases are either self-inverse (all but U, A, G), have mirror symmetry (H, Z, F, T, E, V, Y), or have mirror antisymmetry (all but G, J, R, N). The only oddball is G, which is also why it’s usually considered as four cases. Under reduction with only rotational symmetry, there are 22 L3C cases, compared to 21 PLL cases. (These numbers don’t count the solved case.) I don’t know what to make of this.

Anyway, I sometimes end up in this “2×2 block last layer” state with the other three corners not all oriented. Inspecting the positions of the corners is hard for me (how do people claim that COLL is easier to inspect than PLL?) so I just use some L3C to orient the corners, though not necessarily the correct L3C algorithm. In the case of three misoriented corners, that’s the 7-move Niklas; otherwise it’s one of the OCLL algorithms I normally use anyway. This leaves PLL restricted to the following cases: solved, A, J, V. This is easier to inspect and avoids the annoying G perm, which, incidentally, I’m still fumbling over a lot. The real advantage is that, despite my inability to recognise L3C cases, this allows me to skip PLL with a positive probability, whereas if I stuck with my usual Sune + PLL strategy, the chance of a PLL skip would be exactly zero.

Getting back to the symmetry bit, there’s another alternative for two-look last layer (with edges oriented) called NMLL, which focuses on a specific pair of side colours (e.g. red/orange stickers if the last layer is yellow), separates them to the correct side faces (i.e. the Kociemba subgroup 〈U2, D2, F2, B2, L, R〉 intersect the last layer subgroup, where the red/orange facelets are on the red/orange faces), then permutes the pieces accordingly. This has plenty of cases (41 for separation and 15 for permutation) and it’s rather daunting. It’s still a two-look system, so why does it involve so many more cases than OCLL/PLL?

NMLL was designed for fast recognition in cases common with blockbuilding methods like Petrus or ZZ, where sometimes you try to get the cube into a state where it’s one L or R face turn from being solved, because the blocks were just already there and it’d be a waste to break them up. The “last layer” pieces no longer all have the same last layer colour, which makes recognition harder. This issue is sidestepped if, instead of using the U face colour to recognise, we use the L/R face colours. NMLL goes one step further and, rather than just having a fast recognition system while using the old algorithms (that exists too, and is called NMCLL), uses new algorithms “tailor-made” for the recognition system.

Not sure what purpose that last paragraph of exposition served, but anyway. The edge-oriented last layer subgroup has 7776 elements. (That this is 6⁵ is coincidental, but also a cool fact.) NMLL’s separation step is equivalent to orienting the corners a certain way (27) and permuting the UL/UR edges into the UL/UR slots (\tbinom42=6), so this has 27⋅6 = 162 cosets.

The permutation step has 4!⋅2! = 48 elements as a (sub)group, and since 162⋅48 = 7776, I probably didn’t make a mistake. We can also consider cosets of the 〈U2〉 subgroup, of which there are 24. Of the 24 cosets, we could use Burnside’s lemma to calculate how many are distinct up to rotation/reflection/reciprocation, but I’m lazy so I’ll trust the enumeration done by whoever came up with NMLL, and that works out to fifteen unsolved cases plus the solved case.

Also, I lied. I have no idea how this ties in to symmetry and case counts.

Knapsack encryption

I think I wrote an implementation of the Merkle-Hellman knapsack cryptosystem once upon a time because it was simple and it was mentioned in a number theory book I had. (Which is also currently not in my possession.) Maybe I did it in TI-BASIC; who knows.

The gist of it is that the underlying difficult problem is the NP-complete subset sum problem. A superincreasing sequence, for which the subset sum problem is solvable in polynomial time, is generated, and then scrambled by modular multiplication with a constant. The resulting sequence is (in general) not superincreasing, so one would hope it takes superpolynomial time to solve subset sum for that. The scrambled sequence is published and used to encrypt messages; a message can be decrypted by multiplying both the message and the scrambled sequence with the multiplicative inverse of the original constant, converting it into an easy case of the subset sum problem. Adversaries without knowledge of this trapdoor cannot decrypt messages in this direct manner.

The details are all in the linked Wikipedia article, so go read that.

This cryptosystem was broken over three decades ago, just in case you get funny ideas about how it’d be cool to implement it.
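
(For concreteness, here’s a toy Python sketch of the mechanics described above, with laughably small parameters and none of the practical details; the function names are mine, and per the previous paragraph, this is strictly for illustration.)

```python
import random
from math import gcd

def keygen(n=8):
    w, total = [], 0
    for _ in range(n):                         # superincreasing private sequence
        x = random.randint(total + 1, 2 * total + 2)
        w.append(x)
        total += x
    q = random.randint(total + 1, 2 * total)   # modulus exceeding the sequence's sum
    r = random.randrange(2, q)
    while gcd(r, q) != 1:                      # multiplier must be invertible mod q
        r = random.randrange(2, q)
    b = [(r * x) % q for x in w]               # public, scrambled (no longer superincreasing) sequence
    return b, (w, q, r)

def encrypt(bits, b):
    return sum(bit * bi for bit, bi in zip(bits, b))

def decrypt(c, key):
    w, q, r = key
    c = c * pow(r, -1, q) % q                  # undo the modular multiplication
    bits = []
    for x in reversed(w):                      # greedy subset sum: easy because w is superincreasing
        if c >= x:
            bits.append(1)
            c -= x
        else:
            bits.append(0)
    return bits[::-1]

random.seed(0)
public, private = keygen()
message = [1, 0, 1, 1, 0, 0, 1, 0]
assert decrypt(encrypt(message, public), private) == message
```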

Superincreasing sequences are very special, in that among the space of all possible strictly increasing sequences (of a given length within a given bound), they are very sparsely distributed. (We need only consider strictly increasing sequences because ordering doesn’t matter.) That this cryptosystem is insecure amounts to saying that multiplying a superincreasing sequence with a single constant is insufficient to mask the specialness of the sequence; they’re still special enough that there exists a poly-time algorithm which does not work for the subset sum problem in general.

Adi Shamir’s approach is, interestingly enough, not to find the inverse of the original constant and the modulus used; instead, he proposes a method to find some (multiplier, modulus) pair such that the scrambled sequence becomes superincreasing once again, without necessarily recovering the original superincreasing sequence.

You could just go read his paper on how he broke that, really. So go read that too. (Oh, you don’t have an IEEE subscription?)

He mentions that the original publication by Merkle and Hellman used a construction where the superincreasing sequence was of a 100-bit integer, a 101-bit integer, all the way up to a 199-bit integer. I haven’t actually bothered reading that paper, so let’s see if I can come up with a reasonable way of constructing superincreasing sequences like that.

Note that not every such sequence is superincreasing; for instance, if we start with (2^{99},2^{101}-1,2^{101}), the sequence fails to be superincreasing. Sequences with such local superincreasingness failures don’t look particularly pathological, so picking arbitrary integers of the right bit lengths is not a reasonable way to construct superincreasing sequences. And if you construct the sequence one element at a time, you might end up with the later elements bunched up near powers of two.

Here, have a lemma: if you pointwise multiply a strictly increasing sequence of positive real numbers with successive powers of 2, the resulting sequence is superincreasing. (Quick proof: \sum_{i<n}a_i2^i<a_n\sum_{i<n}2^i=a_n(2^n-1)<a_n2^n.)

This suggests the following algorithm. Let \mathrm{rand} be a random number function returning a value in the half-open interval [0,1). Generate a hundred uniformly distributed integers in the interval [2^{99},2^{100})\cap\mathbb Z at random and sort them in increasing order; let these be a_i for i\in[0,99]\cap\mathbb Z. Generate new random numbers for duplicates, if any, so that every integer is distinct.

Let b_i:=\lfloor(a_i+\mathrm{rand}())2^i\rfloor. In other words, let the top 100 bits of b_i be a_i, and fill the rest of the bits randomly. By the above lemma, b is a superincreasing sequence.
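
(A Python sketch of this construction, with my own helper name; the assert at the end is the lemma doing its job.)

```python
import random

def superincreasing_sequence(n=100, bits=100):
    # n distinct `bits`-bit integers, sorted; the i-th becomes the top bits of a (bits+i)-bit value
    a = sorted(random.sample(range(2 ** (bits - 1), 2 ** bits), n))
    return [(x << i) | (random.getrandbits(i) if i else 0) for i, x in enumerate(a)]

b = superincreasing_sequence()
assert all(b[i] > sum(b[:i]) for i in range(len(b)))   # superincreasing, as the lemma promises
```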

Mind you, I’m not convinced that this is a particularly good way of generating superincreasing sequences for knapsack encryption. I mean, it’s not very relevant since it’s already broken anyway, so maybe the question is more of “does such a choice of sequences enable attacks significantly simpler than Shamir’s with high probability?”.

Rotations

I’ll keep this post short because I’m actually procrastinating something else to write this. Also, I don’t know group theory.

For starters, you should know that a rotation about the origin in two dimensions can be represented with a single complex number with unit magnitude. Applying a rotation to a vector is just converting the vector to a complex number, and then multiplying it with the corresponding rotation complex number, then reinterpreting it as a vector. Easy.

Since for two-dimensional rotations about the origin we can use complex numbers, maybe for four-dimensional rotations we can use unit quaternions? And indeed we can, but with quite a different construction. It’s not as simple as taking a vector interpreted as a quaternion and then multiplying it with some quaternion.

The quaternions, unlike complex numbers, do not commute under multiplication. We could come up with an explicit four-dimensional example using two elements of SO(4), but it’s probably easier to consider the subgroup SO(3): take two orthogonal rotation axes, and rotating around them by incommensurable angles gives noncommutativity directly. But no, (lack of) commutativity is not really the point, though it does highlight one flaw in the above naïveté. It matters whether we choose to left-multiply or to right-multiply, but there’s no reason to prefer one over the other, is there?

The point is that elements of SO(4) can be treated as 4×4 special orthogonal matrices, and Gram-Schmidt tells us that this has six degrees of freedom. Unit quaternions provide only three.

It turns out that remedying this is actually pretty easy. SO(4) has this special property where rotations can be factorised as a left-isoclinic rotation composed with a right-isoclinic rotation, where left-isoclinic and right-isoclinic rotations always commute. And it turns out that the isoclinic rotations can be represented as quaternions, and applying a rotation is then left-multiplying by the left-isoclinic rotation and right-multiplying by the right-isoclinic rotation.

To steal an equation from Wikipedia, P' = Q_\text{L} P Q_\text{R}.

This actually gives a double cover of SO(4), because you could multiply the quaternions on both sides by -1 without changing the overall rotation; the factorisation into isoclinic rotations is unique modulo that.
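
(A small numerical sketch of the above, with a hand-rolled Hamilton product; the function names are mine. It checks that the map preserves lengths, and that negating both quaternions gives the same rotation.)

```python
import math, random

def qmul(a, b):
    # Hamilton product of quaternions represented as (w, x, y, z) tuples
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def random_unit_quaternion():
    q = [random.gauss(0, 1) for _ in range(4)]
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

norm = lambda v: math.sqrt(sum(c * c for c in v))
neg = lambda q: tuple(-c for c in q)

ql, qr = random_unit_quaternion(), random_unit_quaternion()
p = (0.3, -1.2, 0.5, 2.0)                    # an arbitrary point in R^4, read as a quaternion
p_rot = qmul(qmul(ql, p), qr)                # P' = Q_L P Q_R
print(abs(norm(p) - norm(p_rot)) < 1e-12)    # True: the map is length-preserving
print(all(abs(x - y) < 1e-12
          for x, y in zip(qmul(qmul(neg(ql), p), neg(qr)), p_rot)))  # True: the double cover in action
```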

I don’t get why double covers appear all the time when talking about rotations. It’s a bit like the concept of half spin, which shouldn’t make sense, but somehow vaguely does.

Dropping one dimension, we’ve already covered the fact that a single quaternion suffices to represent an element of SO(3), and again this is a double cover. If we take generators of SO(3) for, say, the chiral icosahedral subgroup (which is isomorphic to the alternating group A5), translate them to quaternions, then consider the group generated by that (union {−1} if that’s not already included) under quaternion multiplication, we get the binary icosahedral group. Note that while the binary icosahedral group and the (achiral) icosahedral group both have 120 elements, they are not isomorphic to each other, nor is either of them isomorphic to the symmetric group S5, which also has 120 elements.

We can do this for any subgroup of SO(3).

One “uninteresting” example is the cyclic group, where applying this process gives the binary cyclic group, which is isomorphic to the cyclic group of twice as many elements. The binary dihedral group (aka dicyclic group), on the other hand, is not so trivially related to the dihedral group. It’d be nice if we could somehow visualise quaternions because I don’t feel like I have an intuitive grasp of what’s going on here. Wikipedia has a whole bunch of commutative diagrams but who can read that abstract nonsense?

Further reading: Wikipedia: Rotations in 4-dimensional Euclidean space.

WordPress, what the hell

I swear the HTML in half my posts is getting broken in some way every time I try to edit them in the new editor.

Thanks, but no thanks; I want to stick to the old editor, where what I see is what is served to the browser.

Other than double line breaks being converted to paragraphs, that is, but I can live with that. It’s less painful than the alternative of typing <p> every time. (LaTeX support too!) But see, what I want is well-defined semantics I can easily replicate in my head, not something I have to delve into WordPress’s source code to divine.

This is why I don’t like “smart quotes” either; quite inevitably, the algorithm gets them wrong the moment you deviate even slightly from typical English usage. Unfortunately, WordPress.com doesn’t offer an option to disable them, so guess what, you’ll have to live with them. In fact, most of the “smart” autoreplacements WP tries to do have backfired at some point.

Did you know that, if you substitute numerals for a and b, “axb” gets rendered with a multiplication sign instead of the Latin letter ‘x’? Of course, this also fails the moment you type hexadecimal literals that happen to not have digits above 9. Oh, and this can’t be disabled, because why would anyone ever write about hexadecimal literals?

Chairs, part 2

Seriously, do you even think about the chairs you sit on every day? I do.

Recap:

[The] designs of some chairs (most?) are terrible.

Some make my butt ache after a while. (This usually applies to uncushioned chairs.)

Some don’t fit the table they’re meant to be used with.

Some get stained too easily. (By sweat, dirt, etc.)

Some wear too readily.

I got a new chair in my room some time last year. It fails the very first point, and very badly so. The feet also have sharp edges and I initially got many scratches on my heels because that’s just what sharp edges do. Those sly bastards.

But disregard that. I want to talk about my butthurt.

Since it’s the school break, I haven’t been stepping out of the house very often. For that matter, I don’t even step out of my room often either. It’s probably a safe estimate that I spend around fourteen hours a day seated on this chair. (That probably explains the butthurt aspect of it pretty well.)

Sometimes I adjust my sitting position to deal with the aching, and then within ten minutes I’m not even sitting on the chair anymore. I could be squatting on the floor, or half-lying on the chair, or even standing up. Anything but sitting. This is not really a solution.

Sometimes I sit on my hands, but then I can’t type when my hands are between my ass and the chair. Also, I’m kind of heavy and my hands don’t take too well to having over 50 kgf resting on them. So I don’t really do this often either.

Sometimes I sit on my feet. I feel that this is the least bad option, and guess what I’m doing now? This is still fairly uncomfortable, but it does relieve my butt of some pressure. The downside is that the skin on the outside of my ankles is now in very bad condition from constantly being pressed against the fabric of the chair, so presumably this is not a good thing to do over extended periods of time.

It’s not like I’m fat or anything. I’m unfit but not even close to overweight, and I haven’t lost or gained a significant amount of weight since about 2011. It can’t possibly be that I’m exerting that much more pressure on my butt than a normal person sitting on this chair would be.

H

I’m looking at the WordPress post editor right now (duh, that’s how I write almost all my blog posts) and the date is 28th June, which incidentally is also the expiry date on a loaf of bread in the kitchen downstairs. But what the heck, we’ll probably keep it for a few days longer anyway.

Consider the Platonic solids. There’re exactly five of them, and it’s not hard to prove that. I mean, we of course need some non-self-referential definition of the Platonic solids, which could be just “a convex polyhedron that is flag-transitive”, and then we have to define what a flag is, but y’know, Wikipedia exists.

There aren’t a whole lot of possible vertex figures. If we had m n-gons around each vertex, the sum of angles around a vertex would be…

Quick refresher: for a regular n-gon, the sum of its internal angles is (n−2)π, which follows immediately from triangulating it from a single vertex; each individual internal angle would thus be (1−2/n)π.

m(1−2/n)π, and convexity forces this to be a positive value strictly smaller than 2π.

m (1 − 2/n) < 2
1 − 2/n < 2/m
1/m + 1/n > 1/2

Whaddya know, it turns out to be symmetric in m and n! (But of course—the roles of m and n are switched when we take the dual, and flag-transitivity is preserved under duality.)

If m=2 or n=2, then we get either a dihedron or a hosohedron. These are infinite families of degenerate polyhedra (in Euclidean space).

There are exactly five other pairs of values of m and n satisfying the given constraint: m=n=3 (tetrahedron); m=3, n=4 (cube); m=3, n=5 (dodecahedron); m=4, n=3 (octahedron); and m=5, n=3 (icosahedron). Of course, we don’t have a priori knowledge that this constraint is sufficient to guarantee that a polyhedron exists, so the question here is this: is there a way to prove that these do give the Platonic solids, without considering each of them individually?
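(The enumeration itself is a mechanical check; here’s a quick Python search of my own, which of course says nothing about whether the corresponding polyhedra actually exist.)

    # 1/m + 1/n > 1/2 with m, n >= 3 forces 1/m > 1/2 - 1/3 = 1/6, so m <= 5
    # (and likewise n <= 5); the search below is therefore exhaustive.
    pairs = [(m, n) for m in range(3, 6) for n in range(3, 6)
             if 1 / m + 1 / n > 1 / 2]
    print(pairs)  # [(3, 3), (3, 4), (3, 5), (4, 3), (5, 3)]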

It’s a bit like how you can have a vertex configuration of 5.5.10 in the Euclidean plane (i.e. two regular pentagons and a regular decagon around a vertex), but that refuses to extend into a tessellation of the plane; what is it about regular polyhedra that allows us to conclude that the aforementioned vertex figures give a tessellation of the spherical plane?

What if we look at it from a graph theory perspective? Let V, E, F be the number of vertices, edges, and faces of a (supposed) convex polyhedron with vertex configuration n^m. We have that 2E=mV and 2E=nF, and since a polyhedral graph is necessarily planar, V−E+F=2.

E (2/m + 2/n − 1) = 2
E = 2 / (2/m + 2/n − 1)

And then it just so happens that, for all five pairs of values for (m,n), E works out to be an integer. I mean, of course it does; the five Platonic solids exist. But what is the underlying reason? Is there even one? I’m not being enlightened here.
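To make that concrete, here’s a quick Python check of my own, using 2E=mV, 2E=nF, and the formula for E above:

    from fractions import Fraction

    for m, n in [(3, 3), (3, 4), (3, 5), (4, 3), (5, 3)]:
        E = Fraction(2) / (Fraction(2, m) + Fraction(2, n) - 1)
        V, F = 2 * E / m, 2 * E / n
        print(f"m={m}, n={n}: V={V}, E={E}, F={F}")
    # (V, E, F) = (4, 6, 4), (8, 12, 6), (20, 30, 12), (6, 12, 8), (12, 30, 20)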

On another note, I’d been pondering what polyhedra would be like in hyperbolic space. Like, in Euclidean space, you can’t have a vertex configuration of 6^4 and have the figure close up (in fact, as a hyperbolic tiling, the number of tiles grows exponentially with distance), but in hyperbolic space, provided that the hexagons are large enough, the sum of angles around a vertex would be smaller than 2π. (In fact, arbitrarily small.) But does it close up, or does it still behave like a hyperbolic tiling?

Ninja edit: Naturally, because of the above formula for the number of edges in terms of m and n, even if it does close up, it cannot be topologically equivalent to a sphere, but then what could it be? One question resolved, and another one appears.

Ninja edit the second: I am slightly ashamed that I didn’t make the connection that 6.28 is an approximation of τ and that it was Tau Day when I started writing this post. But no matter, we have, uh… ∛250 Day tomorrow! (That joke doesn’t really work more than once, does it?)

Hype

Shortly after I wrote my two blog posts about hyperbolic geometry, HyperRogue, a video game featuring 2D hyperbolic geometry, was featured on Hacker News.

You know, making a video game out of it was what I was thinking of too, but I guess it’s just natural that I wasn’t the first to think of it. Not that anything would’ve come out of my ideas on what to program; judging by how many of them were actually completed, I’d say roughly 95% of my ideas never bear fruit. (Mostly due to laziness, hurhur.)

One interesting fact I learnt from the HyperRogue blog is that a circle’s area grows exponentially in its radius, not just quadratically. One semi-intuitive explanation for this is that we can have things like hyperbolic tilings, where the branching factor tends to some constant greater than one. There’s a lot of space far from the origin.

Also, a circle of radius r (measured in the hyperbolic metric, not in the Poincaré disk model) would have circumference 2\pi\sinh r, which agrees with Euclidean geometry when r is small (since \sinh r\sim r+\Theta(r^3)). In other words, if you were to somehow transplant an observer used to Euclidean geometry into hyperbolic geometry, where things scale as \tfrac1{\sinh d} rather than \tfrac1d (taking d to be the distance), he’d conclude that things are further away from him than they really are. I think having such an “expansion” visual effect would be cool for visualising hyperbolic geometry; looking top-down at the Poincaré disk feels like cheating, because that’s not what it looks like for an observer living in the model itself.

Well, at least that’s how it’d work for a first-person point of view.

Performing such a projection would involve five steps: translate the observer to the origin, convert to polar coordinates, convert the radial coordinate to the hyperbolic distance (r\mapsto2\tanh^{-1}r), apply sinh scaling (r\mapsto\sinh r), and lastly convert back to Cartesian coordinates. This doesn’t sound too hard.
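Something like this Python sketch, treating points in the Poincaré disk as complex numbers (the Möbius translation and all the names here are my own filling-in):

    import cmath, math

    def sinh_view(z, observer):
        # 1. Disk automorphism (Mobius translation) taking the observer to 0.
        w = (z - observer) / (1 - observer.conjugate() * z)
        # 2. Polar coordinates.
        r, theta = abs(w), cmath.phase(w)
        # 3. Disk radius -> hyperbolic distance.
        d = 2 * math.atanh(r)
        # 4. sinh scaling, then 5. back to Cartesian.
        return cmath.rect(math.sinh(d), theta)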