Minimising the size of an algorithm set for a two-alg ZBLL

Some preliminary information: ZBLL is the state of the cube where only the U layer is left to be solved and the edges are oriented (i.e. there’s a cross formed). This can be considered to be a subgroup of 4!²⋅3³/2 = 7776 elements, but a common convention is to somehow quotient out the effect of U turns (aka “adjust U face”, AUF), which can be done by considering the subgroup of order 7776/4 = 1944 with the ULF corner fixed (but possibly misoriented). There are other schemes to reduce the number of states to consider, but this is the simplest that still preserves the group structure. (Note: I don’t think there’s a surjective homomorphism from the whole ZBLL group to this subgroup, but we don’t need that anyway.) The number of “looks” is defined as the number of times the solver has to inspect the cube state to decide what algorithm to use, but note that this is not a very precisely defined term as, technically, blindfold solves are “one-look” but involve very many substeps and possibly visualising the cube state mentally.
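(As a quick sanity check on the arithmetic, a throwaway Python snippet; the halving is because the corner and edge permutation parities have to agree.)

    from math import factorial

    # corner perm x edge perm x corner orientation, halved because the
    # corner and edge permutation parities must match
    states = factorial(4)**2 * 3**3 // 2
    assert states == 7776
    assert states // 4 == 1944  # quotienting out the four AUFs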

The ZBLL group is itself a subgroup of the whole last layer group of index 8, as there are eight possible edge orientations. An earlier search by forum member Tempus led to a set of eighteen algorithms (up to AUF, mirroring and inversion) such that every state in the LL group is solved either by a single alg from the set or by a composition of two.

Such a one-look two-alg system provides a reasonable tradeoff between a fully one-look system and a fully two-look system. It retains the relatively low algorithm count needed for a two-look system (a full one-look LL needs over 1200 algs, compared to only 44 for the two-look system CLL/ELL) while allowing for optimisations in choosing different pairs of algs for the same result. Furthermore, it may even require fewer algs than a two-look system for comparable efficiency. However, memorising the optimal choice of alg for any given state is still necessary (so it’s still 1200+ cases to remember), which may be why two-alg systems have not gained much traction.

ZBLL is a much smaller group in comparison, and unsurprisingly also a lot more tractable. About two weeks ago, I discovered a set of 12 algs such that the set of all two-alg combinations (i.e. the square of the set) is exactly the whole ZBLL group. This was inspired by my being too lazy to learn all the COLLs and all the 2GLLs, so I just combined the Sune cases from either (5 from COLL, 8 from 2GLL, one alg shared between the two) and realised this was enough to span ZBLL. This combination apparently had the lowest alg count of any two-alg ZBLL system known at the time (at least, I did my part in reading all the thousand-odd posts in the concept/method/substep thread on the forums), and more importantly, it doesn’t even require memorising over a hundred cases to be applied; it’s more like only six cases in addition to all the algs. (I have a post written elaborating on this system, which I’ll clean up and publish when I have the time.)

Yes, had the lowest alg count. Last weekend, I wrote a program to test the ZBLL coverage of any given input alg set, and after some experimentation, I found a set of just ten: six of them were heuristically chosen “seed” algs that I already knew (Sune, another 2GLL, G perm, Buffy, L3C commutator, Bob), and the remaining four were found with (human) trial and error. Is ten minimal? Probably not, but it took me quite a few hours to even improve on an 11-alg set I’d found earlier. (I think an 8-alg set might be possible, in the absence of counting arguments demonstrating otherwise.) This, of course, loses the low case count of the “Sunes” method of the previous paragraph, and three of the four non-seed algs are kind of nasty to execute IRL, so it’s probably useless for speedcubing purposes, but it’d be interesting to know what the theoretical limit is.
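(The coverage test itself is nothing fancy; the whole idea fits in a few lines. Here’s a minimal Python sketch, where compose, aufs and the enumeration of all 1944 ZBLL states are placeholders, and a real version would also want to close the alg set under mirrors and inversion.)

    from itertools import product

    def coverage(algs, zbll_states, compose, aufs):
        # close the alg set under AUFs before and after each alg
        one = {compose(u1, compose(a, u2))
               for a in algs for u1, u2 in product(aufs, repeat=2)}
        # everything reachable with at most two algs
        reachable = one | {compose(x, y) for x, y in product(one, repeat=2)}
        return len(reachable & set(zbll_states)) / len(zbll_states)

A candidate set passes exactly when this returns 1.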


Everything’s going south again. Totally couldn’t’ve seen that one coming.

The arithmetic derivative

The arithmetic derivative is defined over the nonnegative integers by the following relations:

\displaystyle\forall p\text{ prime}\quad p'=1\\  \forall a,b\in\mathbb Z_{\ge0}\quad(ab)'=a'b+ab'

The second one is effectively the product rule. We can actually assign any value to the derivative at the primes and then use the product rule to extend it over all the nonnegative integers, but having p'=1 for all prime p leads to what looks like the nicest arithmetic derivative. One justification is that the product rule gives (p^n)'=np^{n-1}p' (see also: chain rule), so by taking p'=1, we get a simple expression for this special case and the resulting function x\mapsto x' is nontrivial. (Another choice is p'=0, which makes the derivative map every value to zero. There’s also the option of p'=p, which we might investigate in a later post.)

The arithmetic derivative, unlike the usual differential operator, is not linear: 3'+4'=5\ne1=7'. Or, well, that’s obvious since a linear map on the nonnegative integers must have the form n\mapsto cn for some c\in\mathbb Z_{\ge0}.

The product rule can also be written in terms of the “log derivative”: \tfrac{(ab)'}{ab}=\tfrac{a'}a+\tfrac{b'}b. Which is cool until you realise it’s nothing more than just dividing the product rule equation by ab.

For any positive integer n, we have that n=\prod_pp^{\nu_p(n)}. From the product rule, we derive n'=n\sum_p\nu_p(n)/p. (Pun intended.) Suppose we want to know what the growth rate of the arithmetic derivative is like, but obviously this depends a lot on the number of prime factors and their multiplicities, so we’ll smooth out the fluctuation in values by considering \sum_{k\le n}k'/k instead.
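(In code, the formula above makes the arithmetic derivative a one-liner; note that \nu_p(n)\cdot n/p is always an integer. This sketch leans on sympy for the factorisation.)

    from sympy import factorint

    def d(n):
        # n' = n * sum(e/p over prime powers p^e exactly dividing n); 0' = 1' = 0
        return sum(n // p * e for p, e in factorint(n).items()) if n > 1 else 0

    assert d(7) == 1     # prime
    assert d(12) == 16   # 12' = 12*(2/2 + 1/3) = 16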

But first, a simple lemma: \sum_p1/(p\log p) converges. This can be proven by pulling in the PNT, but that’s way overkill. It’s been quite a while since I did any nontrivial number theory, so let’s see how far I get before I give up. The sum over integers \sum_k1/(k\log k) diverges, so all we need to show is that the primes are sparse enough.

Consider the binomial coefficient \binom{2n}n for n>1, which is singly divisible by all the primes between n and 2n (exclusive). We have that \binom{2n}n<2^{2n}, so \sum_{n<p<2n}\log p<(\log4)n. The LHS can further be bounded below by (\pi(2n)-\pi(n))\log n, so we get \pi(2n)-\pi(n)<(\log4)\tfrac n{\log n}. This is actually only a constant factor worse than what the PNT gives us, so that’s cool.
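(Spot-checking the bound numerically, with sympy doing the prime counting:)

    from math import log
    from sympy import primepi

    for n in (10, 100, 10**4, 10**6):
        assert primepi(2*n) - primepi(n) < log(4) * n / log(n)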

\displaystyle\sum_p\frac1{p\log p}=\sum_{k\ge0}\sum_{2^k<p\le2^{k+1}}\frac1{p\log p}\\  =\frac1{2\log2}+\sum_{k\ge1}\sum_{2^k<p<2^{k+1}}\frac1{p\log p}\\  <\frac1{2\log2}+(\log4)\sum_{k\ge1}\frac{2^k}{k\log2}\cdot\frac1{2^k(k\log2)}\\  =\frac1{2\log2}+\frac2{\log2}\sum_{k\ge1}\frac1{k^2}\\  =\frac1{\log2}\left(\frac12+\frac{\pi^2}3\right)\\  <\infty


Getting back to the arithmetic derivative:

\displaystyle\sum_{k\le n}\frac{k'}k=\sum_{k\le n}\sum_{p|k}\frac{\nu_p(k)}p\\  =\sum_p\frac1p\sum_{k\le n,~p|k}\nu_p(k)\\  =\sum_p\frac1p\sum_{1\le j}\left\lfloor\frac n{p^j}\right\rfloor\\  =\sum_p\frac1p\sum_{1\le j\le\log_pn}\left(\frac n{p^j}+O(1)\right)\\  =\sum_p\frac1p\left(\sum_{1\le j}\frac n{p^j}+O(\log_pn)\right)\\  =\sum_p\frac1p\left(\frac n{p-1}+\frac{O(\log n)}{\log p}\right)\\  =\sum_p\frac n{p(p-1)}+O(\log n)\sum_p\frac1{p\log p}\\  =T_0n+O(\log n)

where T_0\approx0.773 is a constant defined as \displaystyle T_0=\sum_p\frac1{p(p-1)}. In other words, roughly speaking, the derivative is T_0 times the input if we somehow smooth out all the fluctuations. This error bound is probably tight (in that it’s not o(\log n)) because it comes from approximating the floor function with the identity. Notice that the estimate given on the Wikipedia article (which comes from E. J. Barbeau) is the unnecessarily loose T_0n+O(\log n\log\log n), which was brought up on the talk page as well. Funny how, because Barbeau wrote the first major paper, everyone just cites his results, including his inaccurate approximation of T_0.

Anyway, Wikipedia also has \sum_{k\le n}k'=T_0n^2/2+O(n^{1+\epsilon}), which again is unnecessarily loose. We could follow a derivation similar to the one above (which was what I initially did), or we could just shortcut it by using summation by parts, which is a Cool Technique for quickly evaluating this type of related sum when the hard work has already been done for one case.

\displaystyle\sum_{k\le n}k'k^a=\sum_{k\le n}\frac{k'}kk^{a+1}=n^{a+1}(T_0n+O(\log n))-(a+1)\sum_{k\le n}(T_0k\cdot k^a)=T_0\frac{n^{a+2}}{a+2}+O(n^{a+1}\log n)

I skipped some steps but you get the idea. The above is valid for all nonnegative real a, and in fact the derivation goes through for all a>-2; at a=-2 the main term degenerates and the partial sums grow like T_0\log n, while for a<-2 the series converges, so there’s no asymptotic growth to speak of. Plug in a=0 and we get \sum_{k\le n}k'=T_0n^2/2+O(n\log n) with barely any additional effort.
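(Both estimates check out numerically; d is the arithmetic derivative snippet from earlier, and truncating the T_0 series at 10^6 is plenty, since the tail is roughly O(1/(X\log X)).)

    from sympy import factorint, primerange

    def d(n):  # arithmetic derivative, as before
        return sum(n // p * e for p, e in factorint(n).items()) if n > 1 else 0

    T0 = sum(1 / (p * (p - 1)) for p in primerange(2, 10**6))  # 0.7731...
    N = 10**5
    print(sum(d(k) / k for k in range(1, N + 1)) / N)        # close to T0
    print(sum(d(k) for k in range(1, N + 1)) / (N * N / 2))  # also close to T0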

That is not how you cite things from the Internet

Here we are, in the glorious twenty-first century, where nearly everyone has internet access yet seemingly no one knows what the world wide web is or how it works.

Let’s get this clear: if you want to cite user-generated content, you don’t cite the website hosting it; you cite the bloody user. The user is the publisher, not the host!

Yet almost every time I flip open the newspapers I get to see marvels like “Source: YouTube” along with a shitty screenshot of an overcompressed video upscaled a thousand times. Or sometimes there’s the equally useless “Source: Twitter” caption.

Maybe I should stop reading MyPaper.


I must’ve missed the memo saying that CM1111 Inorganic Chemistry 1 was only for chemistry majors or something, because CORS didn’t offer it at all. The other chemistry module I could take was CM1131 Physical Chemistry 1, but the minimum bid for that was higher than all the bid points I’d been given over three semesters. The line-up ended up being Algebra I, Combinatorics and Graphs I, Ordinary Differential Equations, Programming Methodology, and Japanese 1.

Funny thing though, the Physical Chem 1 class is right before ODE in the same lecture theatre, and I was quite surprised when I saw some NUS High students seated in the LT when I got there. (They’re taking level-3000 modules already?!) Until they all stood up and left as the ODE class started. Uh huh.

I don’t think they’d recognise me as “that strange guy who loitered around hidden corners of the school” because, well, hidden corners are hidden, and it’s also been about four years since I graduated.

Anyhow, this semester’s scheduling is pretty awful, though I guess it could’ve been worse. The Japanese class is all the way over at FASS, which is about twenty minutes away on foot. (My mnemonic says my walking speed is about 100 metres per minute, so that’s 2 km.) It’s not much further from FoS than UTown is, anyway, so what the heck, I’ll walk.

SoC, however, is even further. Screw that. I mean, I already skipped one recitation session for Programming Methodology because I couldn’t find my way to the freaking seminar room until I was already 25 minutes late, and while I did attend the first tutorial, I’m strongly inclined to skip that too, because the only way to commute to/from SoC is to take the shuttle bus, and oh god, the shuttle bus. The buses are almost always packed like sardines. I hate that. On one hand, the tutorial and recitation sessions are surrounded by one-hour breaks; on the other, do I really want to spend all that time walking? No thanks. I’ll probably skip the recitation tomorrow as well, just for consistency, because, y’know, I’m already acing the homework section. I’m sure I can absorb losing a few percentage points in the attendance grade by maxing out everything else. There’s actually a leaderboard and I’m number 8 out of about 190 right now, so yeah, I’m really not worrying about it.

English classes are right in the morning at 8 am, and I can’t skip them, so RIP sanity. Or, I could skip them, and then everything would be a repeat of last semester. Sounds like an extremely bad idea. Anyhow, I’m finding myself zoning out during the classes because of insufficient sleep. But poorer life decisions have been made.

Shitposting IRL

I’m taking ES1102 again, and this time I’m going to… idk.

I don’t really care anymore.

Whatever motivation I had in scoring well already died out a few weeks into the previous semester, so now it’s time for the spiral of depression to start again.

Have I mentioned that it’s almost 3 am, the English class is at 8 am, and I have eight hours of classes in total tomorrow? I have no idea how I’m going to pull through that with just three hours of sleep. I probably won’t. Caffeine doesn’t really do much for me, at least not in the form of coffee. All drinking coffee does is make my urine smell like coffee.

If tutorial registration plans go too far south, Tuesdays could end up being packed with eleven hours of classes from 8 am to 8 pm. Unfortunately, I missed the memo on CS1010S’s tutorial timeslots when picking the modules, so things were already going south.


It’s fairly trivial that the product of the gcd and the lcm of two positive integers is equal to the product of those integers, and the easiest way to prove this is to use the unique prime factorisation given by the fundamental theorem of arithmetic.

How can we prove that without resorting to prime factorisation? Preferably, without invoking the notion of primes at all.

For starters, \tfrac{ab}{\gcd(a,b)} is a common multiple of a,b (since \tfrac{ab}{\gcd(a,b)}=a\cdot\tfrac b{\gcd(a,b)}, and symmetrically for b), and so we have \mathrm{lcm}(a,b)|\tfrac{ab}{\gcd(a,b)}. Multiplying by the gcd, we get \gcd(a,b)\cdot\mathrm{lcm}(a,b)|ab.

On the other hand, \tfrac{ab}{\mathrm{lcm}(a,b)} is a common divisor of a,b (since \tfrac{ab}{\mathrm{lcm}(a,b)}=a/\tfrac{\mathrm{lcm}(a,b)}b, where \tfrac{\mathrm{lcm}(a,b)}b is an integer, and symmetrically for b), so \tfrac{ab}{\mathrm{lcm}(a,b)}|\gcd(a,b). Multiplying by the lcm, we get ab|\gcd(a,b)\cdot\mathrm{lcm}(a,b).

Since a,b were assumed to be positive, the above divisibility statements imply ab=\gcd(a,b)\cdot\mathrm{lcm}(a,b). QED.

Two lemmas were implicitly used above. The first is that the least common multiple divides every common multiple, and the second is that the greatest common divisor is a multiple of every common divisor. These are also very straightforward to prove without prime factorisation: the first through the division algorithm (the remainder of a common multiple divided by the lcm is a smaller common multiple, hence zero), and the second through Bézout’s identity, i.e. the extended Euclidean algorithm.
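(The identity is also easy to stress-test, defining the lcm by brute-force search straight from its definition, with no prime factorisation in sight:)

    from itertools import count
    from math import gcd

    def lcm(a, b):
        # smallest positive common multiple, found by exhaustive search
        return next(m for m in count(max(a, b)) if m % a == 0 and m % b == 0)

    assert all(gcd(a, b) * lcm(a, b) == a * b
               for a in range(1, 50) for b in range(1, 50))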


There’re two sets of lights in my room, which we’ll call the left and right lights, with the reference frame being me facing my old laptop’s monitor. As the old laptop had this reflective casing (wtf?), and having the lights reflected directly into my eyeballs is kind of annoying, I pretty much never used the left lights. Of all the people I live with, my maid is the sole person to have realised that I use the right lights 99% of the time.

Also, since the monitor was glossy (wtf?!), I often used the computer in darkness anyway. And then every so often my mother would come in, blabber a bit about how “it’s bad for your eyes” and whatever, and then switch on the left lights before leaving. Every time this happened, I got extremely pissed, because (i) it’s patently false that having unnecessarily bright lights reflected directly into my eyes, after they were accustomed to the darkness, is any better than using the computer in darkness, and (ii) oftentimes, I was about to go to sleep, so I had to go switch off the lights anyway. But hey, if she wants to believe she’s doing something useful for her son, whatever; I’ll play along.

Last year I got a new monitor, but some hurfdurfery about Linux graphics drivers left me unable to use it with the old computer, so the situation didn’t really change much. (Unless I used VGA, which does work perfectly with Linux, but for some reason that has lots of ugly horizontal ringing.)

Anyhow, the left lights are roughly neutral, but the right lights were recently changed to be warm/orangey. Needless to say, this also frustrates me plenty. I could use the left lights and be annoyed by reflections, or I could use the right lights and be annoyed by the whole room being orange. The net effect is that I’m unhappy whenever I’m in my room in the evening and not sleeping, and then I spend time thinking about how unhappy I am instead of doing whatever I should be doing.

It’s all rather depressing.

PS: Oh, and apparently I lost the water bottle I got as a souvenir from national service on Friday. Not really sure how that happened since I usually check my surroundings before leaving, but okay. It was a shitty bottle, though still better than not having one at all.


I didn’t want to admit this, but I got addicted to cubing once again, and for just about the whole of July I’ve done roughly nothing of value because I spent a lot of time playing with this cube-shaped toy. So much so that my nails are breaking and chipping.

I never was much of a speedcuber, and while getting a better cube has improved my time by a few seconds, I still haven’t managed to break the 30-second barrier consistently. I’ve been assuming that this could be attributed to my fingers being slower than the “average” person’s; after all, I also use the Dvorak keyboard layout and I’m only just on par with the average Qwerty user in the same age group. (I’m a lot faster than people who don’t need to type fast, but that’s not something to brag about.) I don’t know, but that’s not really the point!

My right thumb nail cracked, which was when I switched to playing one-handed, achieving a completely unremarkable average of ~80 seconds. And then my left pinky nail cracked, but I continued anyway. This probably forced bad form, because I was actively trying to avoid using my pinky, and the weird posture I adopted is already burnt into my muscle memory.

There’s something ridiculously appealing about solving cubes. With the standard 3×3×3 cube, it takes less than 20 seconds to scramble and I can solve it in under 40 seconds, so a solve-scramble cycle pretty much always takes around a minute. A single solve is not a huge time investment, and it’s easy to fall into the trap of “one more won’t take long, so I might as well go for it”. It’s easy enough to squeeze in a minute here and there, and then for the past few weeks I was carrying the cube with me practically everywhere throughout the house all the time. (Other than the washroom. That’d’ve been unhygienic.) Even if I had to use my right hand for whatever, that left one hand free to cube.

Cubing, however, is not an activity that only involves the hands. It involves attention too, and that meant that I was constantly being distracted from everything else I should’ve been doing. That sucked.

I have a megaminx too, and my most recent timings averaged about 400 seconds. I don’t remember if that was better or worse than my timings years ago, but again, that’s not the point! Doing a megaminx solve or a 4×4×4 solve is a considerably larger time investment, not just because the solving takes five to ten times longer, but also because the scrambling takes that much longer too. The difficulty of fairly scrambling a megaminx without a computerised random number generator puts me off, which is a bit sad: I feel like I’m done with cubing and won’t improve any further, so I want to have fun with dodecahedroning instead (knowing full well I won’t get anywhere near world record speeds!), but it’s harder to sink time into it.

I don’t know what to make of this. So I’m sick of cubing, but it’s a “fast” time killer so I do it anyway and I accidentally killed too much time.


Is this a thing? Probably not. It doesn’t seem very useful, because the preceding step would be simultaneously phasing and permuting corners, and I don’t think anyone’s ever bothered generating algs for that. That also corresponds to a coset space of 18 elements, so there wouldn’t be too many cases.

If we fix the corner permutation, 2GLL ∩ ZZLL is a subgroup of order 108. Thanks to symmetry there are a lot fewer cases. (m/n means there are n cases and I know m of them. It’s not a probability, because not all cases are equiprobable.)

Oriented corners (1/27): 3/3 cases.
Sune (8/27): 4/4 cases.
U (4/27): 3/3 cases.
T (4/27): 3/3 cases. (One case is shared with U.)
L (4/27): 2/4 cases.
Double Sune (2/27): 2/3 cases.
Bruno (4/27): 1/3 cases.

So that’s a total of 21 cases (not counting the trivial case), 16 of which I know, and the probability of hitting a known case is 43/54 ≈ 80%. (Update: now I’m at 19 cases out of 21, with the two I haven’t bothered learning being the all-corners-misoriented cases, which also happen to have the longest algs.)

If we look at all the 2GLL cases, the breakdown is more like:

Oriented corners: 4/4 cases.
Sune: 8/12 cases.
U: 4/7 cases. (p = 1/2)
T: 4/7 cases. (Again, one case shared with U; p = 1/2)
L: 2/8 cases. (p = 1/6)
Double Sune: 4/5 cases. (p = 11/12)
Bruno: 2/7 cases. (p = 1/4)

26 out of 48 nontrivial cases covered, with probability 83/162 ≈ 51% of hitting a known case.

Man, I think it would be pretty fun to tally up all the algs I know. I recently picked up a bunch of COLLs too, but like I said before, some of them are plain useless. Only two out of six of the Sune COLL cases have short and very fast algs, so it’s on average better to just use Sune/Niklas and do PLL. (The diagonal-corner-swap PLLs can always be avoided too, which saves about a third of a move.) I’m not particularly intent on learning the whole set of COLLs because memorising things is hard; of the 33 nontrivial cases, I know 21 of them. Since CP/2GLL and COLL/EPLL split the last layer at different points, there’s some overlap between the list of COLLs and the list of 2GLLs above.

Counting the PLLs and non-edge-preserving OLLs too, that puts the total number of last layer algs I know at about 60, and most of them are not what I’d use in a speedsolve in the first place. I’m kinda just learning COLL and 2GLL “for fun”, so there’s that, though COLL is sorta useful on larger cubes with the sandwich method.

But anyway, I wanted to mention something about antisymmetry. I think antisymmetry’s pretty cool.

Take the Rune algorithm, for instance. The effect of the move sequence is antisymmetric (that is, its inverse is a mirror image of itself), but the move sequence itself (R U R’ U R’ U’ R2 U’ R2 U2 R) is not, which is kind of lame. On the other hand, consider this Sune + H perm alg: (R U R’ U’ (R’ U’ R) U (R U’ R’) U R U’ R’). Here, not just the effect is antisymmetric; the move sequence is too!

There’re some other examples, like U M’ U2 M U being the x-axis mirror of its inverse. This is an edge 3-cycle; by conjugating this with F2 or M2, we get an edge 3-cycle in one layer. There’s also Bruno (R U2 R2 U’ R2 U’ R2 U2 R), which is the z-axis mirror of its inverse.

R U2 R2 U2 R’ U2 R U2 R’ U2 R2 U2 R is another example that’s the z-axis mirror of its inverse.

R U R U R U’ R’ U’ R’ U’, an edge 3-cycle, is the (x−y)-axis mirror of its inverse. Incidentally, conjugating this with R U’ gives the standard 2-gen U perm alg, but the resulting move sequence is not antisymmetric.

Niklas (L’ U R U’ L U R’) is the x-axis mirror of its inverse.

F R’ U2 R F’ R’ F U2 F’ R (an L3C/COLL alg) is the (x−z)-axis mirror of its inverse.

This antisymmetry seems to make memorising the alg easier, because it effectively compresses the alg length by half.
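These mirror-of-inverse claims are mechanical enough to verify with a few lines of code. A sketch for plain outer-layer turns (slice and wide turns would need a little more care):

    def invert(alg):
        # inverse of a move sequence: reverse it and flip turn directions
        flip = {"": "'", "'": "", "2": "2"}
        return " ".join(m[0] + flip[m[1:]] for m in reversed(alg.split()))

    def mirror_x(alg):
        # reflect through the plane perpendicular to the x (R-L) axis:
        # R and L swap, quarter turns reverse direction, half turns stay
        swap = {"R": "L", "L": "R"}
        flip = {"": "'", "'": "", "2": "2"}
        return " ".join(swap.get(m[0], m[0]) + flip[m[1:]] for m in alg.split())

    niklas = "L' U R U' L U R'"
    assert mirror_x(invert(niklas)) == niklas  # antisymmetric, as claimed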

I’m apparently really bad at counting cases so some of the values I’ve reported before might be wrong. I don’t count mirrors separately (I just mentally swap the roles of my hands) but most websites do, so it’s hard to corroborate my values with external sources. Grain of salt, etc.

So there’s this double Sune with H perm edges alg I found with Cube Explorer that’s pretty cool. It goes (r U2 r’ U’ r2 R’ U R’ U’ r2 R’ U’ R’), and if you replace all the wide turns with single-layer turns, then this becomes just a double anti-Sune. Alternatively, we can rewrite it as a double anti-Sune with inserted M-slice turns (M’ R U2 M R’ U’ M2 R U R’ U’ M2 R U’ R’).

Executing this alg doesn’t require any regrips at all, which is extra cool. (Same holds true for the usual anti-Sune, actually, and this extends to applying the anti-Sune twice.) We can keep the left hand holding the LF and LB edges, and start with the right thumb on DR, index on UFR, middle on UR. Its inverse, which was what Cube Explorer actually found, does require regrips; this disparity comes from how fingertricking a U’ turn is harder than a U turn with the right index finger.

It could be done with just R and U turns, but that’d need 15 moves instead of only 13.

I am pleased to announce that since I last posted about cubing, my left ring fingernail snapped, my right index fingernail chipped off, and both my left pinky nail and my right thumb nail have cracks along their sides.

Actually, no, I’m not pleased at all. The moral of the story is that speedcubing, stickerless cubes, and long nails do not go together. I’m assuming the presence of stickers actually provides a little bit of cushioning for the nails impacting ABS at high speed because this is the first time my nails are getting worn down and broken so rapidly, but it could just be that I’ve been spending a lot more time on cubing than I’ve ever had before.

PAD, 2015 edition

Sorry if this seems a bit uninspired: I put it off for a whole week and only started this morning, and ended up with nothing I hadn’t already come up with a few months ago.

Remember when I asked about noncontrived series for π with an initial term of 22/7? Well, here’s a not-quite-example.


Obtained from forcing a splitting ratio in transforming the Gregory-Leibniz series so that the first term is 11/14 (22/7 divided by four) and choosing the free parameter so that the quadratic coefficient in the numerator vanishes. It’s certainly contrived! However, it’s rather coincidental that the first two terms under the summation actually cancel each other out (1/(3⋅5⋅7⋅11) = 5/(5⋅7⋅11⋅15)), so we could equally well rewrite the above as


PLL timings, more last layer notes, and some more FMC

Preface: Maybe you’re sick of reading about all this cubing stuff already, but I’m not, so you’ll have to bear with it. Usual programming will resume eventually.

The G perms are not included because I still can’t immediately recognise them. For the rest, I did ((alg) y)¹² or something similar, to avoid “optimisations” when the first and last moves are the same. Times given are for doing the algorithm 12 times (along with 12 cube rotations). I have no idea how large the standard deviation is, so maybe take it to be around 1.5 seconds. (I could do more runs to get the sample standard deviation, but all this speedcubing has been filing my right index fingernail down faster than it grows.)

35.60 U perm, cw, ftm/stm-optimal (F2 U M’ U2 M U F2)
35.18 U perm, ccw, ftm/stm-optimal (F2 U’ M’ U2 M U’ F2)
33.08 U perm, cw, stm-optimal, 〈M,U〉 2-gen (M2 U’ M’ U2 M U’ M2)
35.47 U perm, ccw, stm-optimal, 〈M,U〉 2-gen (M2 U M’ U2 M U M2)
38.77 H perm, stm-optimal, 〈M,U〉 2-gen (M2 U M2 U2 M2 U M2)
43.10 H perm, ftm-optimal including AUF (L R U2 L’ R’ y L’ R’ U2 L R)
45.70 H perm, ftm/stm-optimal excluding AUF (R2 S2 R2 U R2 S2 R2)
44.25 Z perm, qtm-optimal (L F U’ R …)
40.77 Z perm, qtm-optimal (F R U R’ …)
50.63 Z perm, ftm/stm-optimal (M S2 M’ D’ M2 u M2)
41.41 A perm, ccw, ftm/stm-optimal (R2 B2 …)
41.39 A perm, cw, ftm/stm-optimal (R’ F R’ …)
35.16 A perm, ccw, ftm/stm-optimal (l’ R’ D2 …)
36.98 A perm, cw, ftm/stm-optimal (l’ U R’ …)
49.27 E perm (l U’ R’ D …)
45.32 F perm (R’ U R U’ R2 …)
35.84 T perm, ftm-optimal, right-handed, 〈U,R,F〉 3-gen (R U R’ U’ …)
39.03 T perm, ftm-optimal, left-handed, 〈U,L,F〉 3-gen (L’ U’ L U …)
35.81 J perm, unflipped, qtm-optimal, 〈L,U,R〉 3-gen (L’ U R …)
37.95 J perm, flipped, qtm-optimal, 〈L,U,R〉 3-gen (R U’ L’ …)
42.91 J perm, unflipped, qtm/ftm-optimal (B2 L U L’ …)
41.57 R perm, ftm-optimal, UBL/UL block (R U2 R’ U2 …)
40.96 R perm, ftm-optimal, UFL/UL block (R’ U2 R U2 …)
51.40 N perm, right block, ftm-optimal, 〈L,U,R〉 3-gen (R U’ L …)
51.35 N perm, left block, ftm-optimal, 〈L,U,R〉 3-gen (L’ U R’ …)
39.00 V perm, ftm-optimal, (R’ U R’ d’ …)
40.24 Y perm (F R U’ R’ …)

Writing CSS is hard, so deal with this stupid layout. I presume this is quite slow compared to “real” speedcubers, and there are some aberrations, like my U perm being the fastest even though it’s not an 〈R,U〉 2-gen algorithm, and all the serious speedcubers seem to swear by 2-gen algorithms. (Note: “2-gen” means it’s in a subgroup generated by two moves.) Cases I know multiple algorithms for have been listed multiple times (U, H, Z, A, T, J); there are slight surprises like the standard A perm being slower to execute than my rotated version, but that’s probably because the rotation, conjugated with the y-axis rotation I applied after every algorithm, simplified to a single z-axis rotation. The net effect is that the average amount of cube rotating in this speed test is less than what a real solve would entail.

(Friendly reminder that in cubing notation, up/down is the y-axis and front/back is the z-axis, just in case Rubik’s cubes weren’t confusing enough.)

Sorting these by speed, it goes like this: U perm (both types), A perm (anticlockwise cycle), J perm (right-handed), T perm, A perm (clockwise cycle), J perm (left-handed), H perm, V perm, Y perm, Z perm, R perm (both types), F perm, E perm, N perm (both types), and finally G perm (all four types).

Well, G perm is only slow now because I still suck at recognising it (and using dumb mnemonics…), but given that it’s the most common PLL (with a 2/9 chance of coming up), I’ll probably get better over time.

The very tiny amount of corner control I exert during corner orientation (by keeping a 2×2 block if it exists) restricts the PLL in these special cases to A/J/V, all of which are among the faster ones execution-wise for me, and V also happens to be the rarest. (Probabilities of occurrence are 1/3, 1/3 and 1/6, respectively.) It probably doesn’t make sense to learn algs specifically for all these cases, because having a 2×2 block in the first place is pretty rare. Unless I switch to a 2-look/3-look last layer that starts by solving a 2×2 block…

How would that work? I’ve been pondering a bit since I blogged about it. Let’s assume edge orientation is done, as usual. If there are no misoriented corners, go straight to PLL, which can be done in two looks with a bare minimum of {A,U,J,V}, where A, J, V are already needed later on anyway. Otherwise, there is at least one misoriented corner. Picking any such corner as a reference, the two edges that should be adjacent to it can be in 12 possible (relative) locations. This gives a total of 24 cases to learn, or 12 if mirrors aren’t included. Not too big, and I bet most of the cases can be dealt with using a single algorithm applied in different locations.

The next step is a combination of L3C and “L3C plus permutation parity”. The former is well-studied and has 22 cases. As for the latter, the two swapped edges give us a fixed orientation in space, so there are 3²⋅3 = 27 cases. If you exclude mirrors, L3C has 12 cases and L3C+parity has 15 cases. (None of the L3C+parity states are symmetrical, except for the V perm and V perm + diagonally misoriented corners; similarly, in L3C, only the two-corner pure rotation states have reflection symmetry.)

To elaborate a bit on splitting PLL into CPLL and EPLL, H/Z can be done with two U perms (apply any U perm, and there will be three wrong edges left). All the diagonal-corner-swap cases can be done with V followed by U; for the Y perm, the only thing to “watch out for” is that this can end with an H perm instead of a U perm if you swap the wrong pair of corners. The adjacent-corner-swap cases can be classified by edge permutation, where F and T have Z-permuted edges (apply any J perm to fix corners), and the rest have U-permuted edges (apply any A perm to fix corners) or correctly-permuted edges (exactly the A perm). (Actually, all the cases in the U-permuted edges category can be handled with the J perm, but the previous sentence gives a one-liner summary that completely avoids special-casing.)

This has net 40 algorithms (not counting mirrors) for a purely 2-look (edge-oriented) last layer. For comparison, OCLL/PLL has net 20 algorithms, also not counting mirrors. Another problem here is probably that recognition of the first half involves finding the two adjacent edges, and since the edges are oriented, their colours won’t be on the U face.

[excised into new post for organisation and readability purposes]

On another note, I had a go at FMC again. (I got sick of failing at BLD.)

Scramble: F2 L2 F2 L2 U B2 D F2 L2 U2 R2 D’ F2 L D L’ D2 L R2 F’ D’ U2
(actually Cube Explorer generated a different scramble, but I screwed up while applying it, so I manually scrambled it some more, then fed it to Cube Explorer to get this)

First attempt (44):
B’ D R2 B’ U2 D2 L2 D (8; block)
U’ L U D’ B’ D (6; edge orientation)
B2 L’ B2 L2 (4; one corner-edge pair)
D B D’ B D B2 D’ (7; edge permutation using Sune)
U’ B’ L2 B’ R’ B L2 B’ R B2 (10; corner 3-cycle)
D’ B2 D U L R B2 L’ R’ (9; H perm)

Second attempt (not counting intermediate attempts that used more moves) (41):
B’ D R2 B’ U2 D2 L2 D (8; block)
U’ L U D’ B’ D (6; edge orientation)
B2 L’ B2 L2 (4; one corner-edge pair)
R B R’ B R B’ R’ B R B2 R’ (11; corner orientation using double Sune)
(D L’ D’ L)³ (12; corner swaps)

Third attempt (32):
R B2 (2; 2×2×2 block with U2 pre-move)
F’ D2 R F’ (4; 2×2×3 block)
R D2 R’ D’ R2 D2 R2 U2 (8; first two layers and undoing premove)
F L2 B’ L’ B L’ F’ (7; edge orientation using wide anti-Sune)
L’ D’ L D’ L D L2 D L2 D2 L’ (11; 2GLL using Rune)

This was actually an unusually good run, though I also exceeded the one-hour limit by quite a lot. I ran a search for the optimal F2L and Cube Explorer got stuck at depth 14, so my 14-move F2L is actually optimal. I didn’t bother trying to fix the bad edges after making the block, because the F2L pieces happened to magically line up and, well, an optimal F2L happened. The last layer part is coincidentally also 14 moves optimal, so there’s a wastage of four moves in my solution. That said, I exhausted all the two-edge-flipping OLLs I knew, and the right wide anti-Sune was the only one that left me with a last layer I already knew how to solve optimally.

I don’t have a “very huge” repertoire of last layer algorithms (the edge 3-cycles, some corner 3-cycles, the PLLs, random short OLLs, random 2GLLs, random COLLs, etc.) so this “brute-force through algorithms until something works” technique tends to fail more often than it works. Petrus’s site has a list of forty short-ish ZBLL algorithms such that all edge-oriented last layer cases can be solved with at most two of these algorithms. Sounds kinda useful, but >memorising 40 algorithms. It took me seven years to get around to memorising all of OCLL and PLL, and that’s only 28 algorithms!

Oh, and while I was trying to practise the Roux method, all the blocks just lined up, I switched back to Petrus, and I got a 30-move solve without even backtracking. That probably wasn’t a good scramble.


And now that I’ve played with FMC, it’s time to move on to blindfolded solving. (Except without the blindfold. I just close my eyes.)

I’ve been trying to learn the 3OP method, where the orientation of all the pieces is solved first, then conjugated 3-cycles are used to permute all the pieces to their correct locations. (Caveats: it can be the case that the edge permutation parity and corner permutation parity are both odd, in which case an extra algorithm is needed to swap two corners and two edges; also, while the two-pairs-of-swapped-corners (or of swapped edges) case can be handled with two 3-cycles, the 3OP method prefers to solve them directly.)

Why 3OP? Because it doesn’t require learning any new algorithms! I’ve long since known how to create a conjugated commutator to flip any two edges, with an additional four-edge-flip algorithm I picked up while trying to learn Roux. Corner rotation is a bit more annoying (and where I mess up a lot), but doable. The permutation 3-cycles as well as the parity algorithm are all from PLL.

Anyway, so far I’ve managed to (not very reliably) memorise and execute the solve up to permuting the corners, but not the edges. Well, considering the amount of information involved, it goes something like this:

Edge orientation: 11 bits (exactly)
Corner orientation: 7 log₂ 3 ≈ 11.09 bits
Corner permutation: log₂ 8! ≈ 15.30 bits
Edge permutation: log₂(12!/2) ≈ 27.84 bits

So yeah, edge permutation is more entropic than the whole cube’s orientation (which I already have some trouble doing), and contributes over 40% of the memorisation load. Or it would, if not for the fact that humans tend to be inefficient at memorising raw data. There’s already a lot of “data expansion” anyway.
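(For concreteness, the shares work out like so:)

    from math import log2, factorial

    bits = {
        "edge orientation":   11.0,                     # 2^11 states
        "corner orientation": 7 * log2(3),              # 3^7 states
        "corner permutation": log2(factorial(8)),       # 8! states
        "edge permutation":   log2(factorial(12) / 2),  # 12!/2 states
    }
    total = sum(bits.values())                # about 65.2 bits in all
    print(bits["edge permutation"] / total)   # about 0.43, i.e. over 40%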

My memorisation system is exactly the same as in the 3OP guide, except I tag all the edges with integers from 0 to 11 and the corners from 0 to 7; the guide uses one-based indices (and a different corner ordering). Edge orientation is memorised as a sequence of which edges need to be flipped, and corner orientation is done by grouping corners into ccw/cw pairs and cw/cw/cw or ccw/ccw/ccw triplets. Permutation (for both corners and edges) is done by memorising the cycles.

Memorising as cycles is inefficient if you judge solely by the number of bits needed, but it’s very easy to determine which pieces to apply a 3-cycle to and to update the permutation mentally after applying said 3-cycle. In case the first three pieces in the cycle are inconveniently located, one can also rotate the whole cycle mentally (e.g. from (0,3,4,1) to (1,0,3,4)) without affecting the actual permutation. Applying a 3-cycle to the first three pieces in the cycle drops the second and third numbers from the cycle, so the amount of stuff to memorise reduces as the solve progresses. (The pattern continues for larger cycles, but larger cycles need more setup moves (more setup moves ⇒ more memorisation) and I haven’t memorised any 5-edge-cycle algorithms anyway.)
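(The bookkeeping is simple enough to put in code; a toy sketch of the mental updates, not of the actual solving:)

    def rotate(cycle, k=1):
        # rotating the notation doesn't change the permutation:
        # (0, 3, 4, 1) and (1, 0, 3, 4) describe the same cycle
        return cycle[-k:] + cycle[:-k]

    def reduce_cycle(cycle):
        # after a 3-cycle on the first three pieces, the second and third
        # entries are solved and drop out of the memorised cycle
        return cycle[:1] + cycle[3:] if len(cycle) > 3 else ()

    assert rotate((0, 3, 4, 1)) == (1, 0, 3, 4)
    assert reduce_cycle((0, 3, 4, 1, 5)) == (0, 1, 5)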

The way I do this is to visualise where the three pieces of a cycle are by tapping them, applying and memorising a bunch of setup moves, applying the algorithm proper, then undoing the setup moves. Sometimes I forget the setup moves when trying to undo them (especially if they involve whole-cube rotations) and then the whole attempt is screwed.

One gigantic problem with BLD is that I can’t tap into my visual memory to apply algorithms, which is something I’ve been relying on since I learnt how to solve a cube. I can’t even know if I’m doing something right or wrong until the very end, when I look at the cube and go “dammit, I screwed up again”. (This happens about two-thirds… no, 90% of the time.)

Oh, and my stickerless GuHong still has the DaYan logo sticker on the white centre (it’s not completely stickerless), so that’s one face I can identify by touch, which is actually cheating. Not that it helps much; I can’t rely on it to get the cube back to the correct spatial orientation when I forget which rotations I used as setup.

It’s only been two days since I started, so maybe I’ll be able to actually execute a full solve eventually. Possibly even under ten minutes!


Scramble: R’ F U D L’ B U’ D L2 B U R2 D2 R2 U2 L’ D2 R’ D2 L2 F2 L R’ D R U D L’ F2 R F’ D’ R’ F2 R’ F2 R U2 F2 U2 D2 R F2 R’ B R F B R L D R L’ U’ D2 F’ U2 B U2 B’ R2 B R2 L2 D2 F’

(That’s three two-phase scrambles concatenated, just so you know I wasn’t cheating or anything.)

Solution (45 moves):
B’ U’ B’ L’ F U L2 U’ R D2 F D U F U’ (block)
B’ D B F’ R’ F (edge orientation)
R D’ R’ D2 R D2 R D R2 D R2 D2 R’ D2 R (first two layers + edges; I actually had no idea what was going on here)
L2 U L’ D2 L U’ L’ D2 L’ (last three corners)

I tried. For what it’s worth, building the block can be done in at most 9 moves; say, R’ L U R2 U’ F U’ B2 L2. (I only checked the same block I used in Cube Explorer, so presumably among the other 11 possible block choices, some of them will use fewer moves.)

There was already a 2×2 block when I was done with edge orientation and I tried both ways to extend it into F2L, but this gave the shortest solution. I also tried the inverse state, but that didn’t help either.

And then I hit the (self-imposed) one-hour time limit.

FMC is hard.


I’ll be taking Algebra I this coming semester, and maybe I’ll start to actually understand group theory then.

Though now that I’m looking at the timetable, wtf, MA2202S has two three-hour tutorial+lecture sessions and MA2214 Combinatorics and Graphs I has 6-8 pm lecture sessions. This is awful, but maybe I’ll pull through somehow. The other maths modules up for consideration this semester are MA3205 Set Theory and MA3220 Ordinary Differential Equations, though there isn’t really a very wide choice because most of the interesting stuff depends on Algebra I or Analysis II; other than the four aforementioned modules, the remaining options are MA3236 Non-Linear Programming and MA3264 Mathematical Modelling.

For that matter, I’m actually done with the maths major requirements at the 2000 level, except for being short of Algebra I. Anyhow, I’ll have to start filling in all the non-maths requirements too, so I guess I’ll be taking some chemistry and computer science modules to fill up the science-but-not-maths area, and the rest will be… bleh. I kinda don’t really want to think about those, so maybe let me just postpone that for two more weeks, when module bidding actually starts.

Looking at the chem modules, it looks like the level-1000 ones all have five hours of lectures per week. And if I’m going to take Inorganic Chemistry 1, that’ll give me a gigantic seven-hour block of non-stop lectures. Maybe I should take Organic Chem 1 next semester instead. Carbon’s a pretty cool guy and it doesn’t afraid of anything.

There’s CS1010S and CS1101S, which are both called “Programming Methodology”, but according to the descriptions, the latter has a focus on functional programming, while the former is for imperative programming. To be honest, I don’t really get the distinction; apparently things like map and filter belong to functional programming, but I use them a lot in what I believe is an imperative style. But anyway, these two modules conflict with each other, and the higher-level CS modules seem to mostly depend on the former. You know that xkcd strip on dependencies? That’s pretty much what I’m seeing.

(Maybe if apt can solve sudoku, it can solve module dependencies too! Or is that being too optimistic?)

At any rate, that’s four modules out of five covered, assuming bidding goes well and all that stuff. The fifth will probably be some “general education” or “Singapore studies” module, which I’ll decide later.

Next semester (the one starting next year) will have MA3110S Analysis II and lolenglish, so that’s two modules out of five set in stone. (Yeah I’m kinda predicating this on my passing of English this semester, but that shouldn’t be a problem, now that I know what it’s like!) Conditioned on being able to get CS1010S this semester, I’ll be taking CS1020 Data Structures and Algorithms I next semester. Or there’s the six-MC module CS2020 Data Structures and Algorithms Accelerated, which depends on an A- grade on CS1010*. Either way, that’ll fulfil the faculty requirements of 16 MCs in at least three different categories. (I already took Physics I, so that’s 4 MCs and one category down.)

The remaining two modules to take would be MA3233 Combinatorics and Graphs II along with some other random general education module, maybe. (MA3201 Algebra II is an option too.) I don’t really like this.

I have six non-science modules to take and I don’t really want to take more than one per semester, but there’re only four semesters left. Six semesters if I’m going the Honours route, which I probably will be, but I think applying for Honours depends on having cleared all the non-science requirements as well. This semester will see me taking two, which still leaves four modules over three semesters. Thanks to the pigeonhole principle, either I’m wrong about the Honours thing and I can take one in the fourth year, or there must be some subsequent semester where I’ll still have to take at least two non-science modules.

If I’m not taking Set Theory or ODEs this semester, which is most likely the case, I’ll be taking them two semesters later. Plus MA3265 Introduction to Number Theory and CS4232 Theory of Computation. (CS4232 actually only depends on any MA2xxx module, so I could take it this semester if I wanted, but I’ll wait and see.) Plus one random module. Or actually, maybe I’d rather fit MA3209 Analysis III here. See two paragraphs below.

And then the semester after that (i.e. the sixth semester) would be whichever of MA3201 and MA3233 I didn’t take earlier, along with MA3111S Complex Analysis I. At this point I’d be done with the level-3000 major requirements, and then I have to care about SPM completion requirements. I missed out on MA2101S, so I’ll most likely have to take either MA5203 Graduate Algebra I or MA5205 Graduate Analysis I (or even both) for SPM. The former depends on Algebra II, and the latter depends on MA4262 Measure and Integration which depends on Analysis III.

I could streamline Graduate Algebra I into the fifth semester if I take Algebra II in the fourth, but then that’d push Combinatorics and Graphs II out to the sixth. If not, it’ll have to be in the seventh semester if I take it at all. As for Graduate Analysis I, the earliest possible scheduling would be to take Analysis III in the fifth semester, and Measure and Integration in the sixth, and then the module itself in the seventh. If I don’t take Analysis III in the fifth semester, that’ll force Graduate Analysis I beyond the first four years of study.


Third semester: MA2214, MA2202S, CS1010S, CM1111, lolenglish, wildcard.
Fourth semester: MA3110S, CS1020, MA3201 or MA3233, lolenglish, wildcard. (Possibly CM1121 Organic Chemistry 1.)
Fifth semester: MA3205, MA3220, MA3265, CS4232 or MA3209, wildcard.
Sixth semester: MA3111S, MA3233 or MA3201, MA4262 (if possible), undecided, wildcard.
Seventh, eighth semesters: ?????

Funny how I was going to write a post about the modular group, and then it suddenly went into modules to take in the coming semesters.