## π/2 is even

Præface: Okay, having somehow survived without proper Unicode input (Ctrl-Shift-U) for five(?) months, I still feel like killing myself every time I have to type in Greek stuff or mathematical operators, or whatever. (Also, “everytime” is wrong, and “every time” is definitely spelt as two words, yet I constantly see some people using it. I’m not really interested in derailing every conversation by being a grammar nazi, but gosh.)

Let’s start with the factorial-over-double-odd-factorial series for π/2. You know, the one that Euler guy came up with centuries ago.

$\displaystyle\frac\pi2=\sum_{k=0}^\infty\frac{k!}{(2k+1)!!}$

Now let’s work modulo 2. Notice that the terms from $k=2$ onwards have even numerator and odd denominator, so they’re congruent to 0.

$\displaystyle\frac\pi2\equiv\frac11+\frac13\equiv1+1\equiv0$.

So there.

Now, it seems that we can even use this series to calculate the 2-adic representation of π/2 (and hence π or any other rational multiple thereof), because not only does it converge under the usual metric, it also converges under the 2-adic metric, as the number of times 2 divides into the numerator increases by at least one every two terms, while the denominator stays odd forever.
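Since every denominator is odd and hence invertible modulo $2^N$, the 2-adic partial sums can be computed directly with modular arithmetic. A minimal sketch (my code, not from the original post; `pi_half_2adic` is a made-up name):

```python
from math import factorial

def odd_double_factorial(k):
    '''(2k+1)!! = 1 * 3 * 5 * ... * (2k+1).'''
    result = 1
    for i in range(3, 2 * k + 2, 2):
        result *= i
    return result

def pi_half_2adic(bits, terms):
    '''Sum k!/(2k+1)!! modulo 2**bits, term by term.

    The denominators are odd, hence invertible modulo 2**bits, and the
    2-adic valuation of k! keeps growing, so the sum stabilises.'''
    mod = 1 << bits
    total = 0
    for k in range(terms):
        inv_den = pow(odd_double_factorial(k), -1, mod)
        total = (total + factorial(k) * inv_den) % mod
    return total
```

Since the $k$th term is divisible by $2^{v_2(k!)}$, a few dozen terms suffice to pin down sixteen bits.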

Just as an aside, there’s one pretty cool algorithm for calculating the inverse of a nonzero number modulo a prime, which is kinda related to the Euclidean algorithm if you stare at it enough. See pages 131-132 of The Book of Numbers. To wit, in code:

```python
def modinv(a, p):
    '''Calculate the multiplicative inverse of a modulo p.'''
    inv = 1
    a %= p
    while a != 1:
        q = (p + a - 1) // a  # ceil(p/a)
        a = a * q - p
        inv = inv * q % p
    return inv
```


Because the modulus is prime (and by the loop condition, $a>1$), $1\le a\lceil p/a\rceil-p\le a-1$, so the algorithm is guaranteed to terminate. By instead rounding $p/a$ to the nearest integer rather than taking the ceiling, $|a|$ is guaranteed to at least halve every iteration; in other words, it converges a lot faster (on average). Now, this algorithm doesn’t directly work with composite moduli, but we can easily tweak it to work for prime power moduli by simply choosing the closest integer coprime to the modulus instead of just the floor/ceiling. (This modification does not necessarily work for moduli that aren’t prime powers, because the closest coprime number might be too far away.)
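To illustrate the nearest-integer variant (my sketch, not from the book; the sign bookkeeping is needed because $a$ can go negative after a round-down):

```python
def modinv_rounded(a, p):
    '''Multiplicative inverse of a modulo prime p, rounding p/a to nearest.'''
    inv = 1
    a %= p
    while a != 1:
        q = (2 * p + a) // (2 * a)  # round(p/a) for positive a
        a = a * q - p               # now |a| <= old a / 2
        if a < 0:                   # keep a positive; fold the sign into inv
            a, inv = -a, -inv
        inv = inv * q % p
    return inv % p
```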

Anyway.

Working out the first sixteen terms of the series, we get $\pi/2=\ldots1011111000101110_2$. What an utterly fascinating result. Er, wait: after fixing something wrong with my calculation, we get $\pi/2=\ldots000000000000000_2$. Does it actually tend to all zeroes?

Not every series for π converges under the 2-adic metric, though (obviously). The Gregory–Leibniz series reduces to Grandi’s series modulo 2, which can hardly be called convergent. On the other hand, there (probably?) are other series with only odd denominators that converge under the 2-adic metric and converge to $\pi$ in the usual Euclidean metric, but do they necessarily converge to the same value in the 2-adic metric?

Stealth edit: fixed the value of $\pi$ above.

Edit #2: tried using the Euler-transformed series for arctan(1/2)+arctan(1/3), which also seemed to give a few thousand binary digits of 0s. Now, why exactly would it converge to zero, if it actually does?

## Another one

Lol.

Also, re: Page Down. Those polynomials turn out to be normalised versions of Legendre polynomials. Surprise, surprise! I noticed this quite a while ago, but didn’t bother making a post for it.

## I really want to sleep

Frequency plots; the x-axis is frequency relative to the Nyquist frequency (because that was easier to plot). These are of course normalised, but I didn’t put that in the legend because you have enough common sense for that.

Define an interpolation that uses $n$ samples around the one we want to interpolate as an $n$th-order interpolation scheme.

The first-order polynomial interpolation interpolates polynomials up to degree 1 correctly, and has zero first derivative at 0 frequency. In general, the $n$th-order polynomial interpolation interpolates polynomials up to degree $2n-1$ exactly, with zero first, second, …, $(2n-1)$th derivatives at 0 frequency. (Of course, the number of zero derivatives is exactly the same as the degree of polynomial interpolated.)

If you take the limit of $f_n[k]$ as $n\to\infty$, you get $\lim_{n\to\infty}f_n[k]=(-1)^{k+1}$ for nonzero $k$. Which is yet another appearance of Grandi’s series, but I digress.

Going back to the problem of trying to optimise the kernel over the coefficients, we may, as before, work in the time domain instead of the frequency domain. This turns out to reduce the problem to choosing coefficients $g_n[k]$ such that their sum is 1 and the sum of their squares is minimised. Do we know how to solve that? Of course we do: you just give every coefficient equal value.
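To spell out that last claim (a standard Cauchy–Schwarz argument, not in the original post): for $m$ coefficients $c_1,\dots,c_m$ with $\sum_kc_k=1$,

$\displaystyle1=\Bigl(\sum_{k=1}^mc_k\Bigr)^2\le m\sum_{k=1}^mc_k^2,$

so $\sum_kc_k^2\ge\frac1m$, with equality exactly when every $c_k=\frac1m$.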

Well. The frequency response here converges to a sinc, which is hardly ideal; it’s “closer” to flat than polynomial interpolation, but it feels wrong. Hmm.

## SaaS

I think SaaS is a really bad idea when the service provided is shit.

Let’s see. I have to work with exactly one such SaaS system at work. Well, maybe not exactly one. Probably more than just one, but there’s one I really dislike.

The HTML interface for SAP GUI is pathologically bad, and violates just about any website design guideline you can think of (except for not choosing garish colours). The freaking page is rendered server-side instead of client-side (well, a mix of both; more on this), so merely resizing the window or changing the zoom level or even scrolling in a table causes the whole thing to refresh. Now, this is really inconsistent, but thankfully so, because if it were just some kind of thin client a la VNC, I’d rather just kill myself.

Anyway. The interface uses some special derivative of Arial, which very expectedly, looks completely eyerapeful at typical zoom levels (100%, for example). Unless you have ClearType on, in which case, on Windows XP it makes everything bold and still eyerapeful, but on Windows 7 it looks okay. Still doesn’t excuse using a shitty font though. Anyanyway. Said derivative is monospaced, and SAP GUI (or whatever configuration of it is being used) does its very hardest to exploit this fact.

I once had to generate a list of everyone in the compound, with various information like IC, name, appointment, etc. etc. (basically everything in the system), and it gave me a wall of ASCII-art table with all the fields padded with spaces to the longest entry. Except no, that was a simplification. The lines would’ve been longer than 1024 characters that way, and for whatever reason, line breaks were forcefully inserted where the lines would otherwise have exceeded 1024 characters. Really? Like, reaaaaally? Why didn’t they just use an HTML table and avoid all this retardation?

Now, that thing I used had an export function, and it could export to XLS. Except of course, that wasn’t really Excel format. It was CSV. Except of course, it still had wtf-ery line breaks at 1024 characters width. Even though the CSV file itself had tabs. What the—

Note that I’m going by memory and the 1024 value might not be correct, and neither might the rest of the story. It’s been months since that happened. It’s also not directly relevant to the title, what with being anecdotal evidence. If there’s one thing English class taught me, it’s that anecdotal evidence is worthless when speaking about things.

SaaS is bad because it takes control away from the user. Architecturally speaking, implementations also tend to be bad if they’re implemented over HTTP/HTML, because that’s not what the protocol and languages are designed for. There’s an insane amount of overhead if you want to do things the easy way. On the other hand, the hard way is hard, so yeah.

I think I’m still feeling oddly lethargic and also I don’t think I can write very well about this topic, so just refer to Stallman’s writings about it or something.

## I want to sleep

So let’s say you have a discrete signal (because I don’t deal with finicky continuous things), and you want to interpolate one sample (say, the sample at $t=0$) from all the other samples. What do you do?

If the signal is bandlimited at half Nyquist, sinc interpolation from all the odd samples would give exactly that. But the bandlimitation isn’t really an assumption we’re free to make. As we’ve seen in the past, using a binomial window on a sinc filter corresponds exactly to (unwindowed) polynomial interpolation. Let’s try that. Also note that windowing polynomial interpolation doesn’t really make sense; you could reduce the weights of the points near the minimum/maximum, but that makes the interpolation not very interpolate-y.

I don’t particularly feel like typing out equations after equations while half-asleep, so yeah. Given the $2n$ samples at $\{k:1\le|k|\le n,k\in\mathbb Z\}$, we can form some equations, which gives us the weights $f_n[k]=(-1)^{k+1}\frac{n!^2}{(n-k)!(n+k)!}$ for nonzero $k$ (and $f_n[0]=0$), where we’re approximating our sample $X[0]$ with $\sum_{k\in\mathbb Z}X[k]f_n[-k]$, which is really just a convolution. Note that we can rewrite the kernel, again for nonzero $k$, as $f_n[k]=(-1)^{k+1}\binom{2n}{n+k}/\binom{2n}n$, where we can factor out the $\binom{2n}n$ denominator if we want, but the main highlight is that such a kernel would be exactly a row of Pascal’s triangle with alternating sign and the middle term set to zero (then normalised to unit sum).

What can we say about the characteristics of this filter? For one, by the polynomial interpolation construction, it necessarily has lots of zero derivatives in the frequency domain at zero frequency. So, more or less the same drawback as our binomial-window sinc: really flat frequency response for low frequencies, terrible behaviour at high frequencies relative to window width. But how terrible is terrible? Our binomial-window sinc isn’t pathologically bad, at least; the frequency response (of the binomial-window sinc) doesn’t converge uniformly, but it is bounded. It turns out that our missing-middle-sample polynomial interpolation leads to unbounded frequency responses at high frequencies, as $n\to\infty$. Why?

Turns out that the binomial coefficients with alternating sign (i.e. without setting the central term to zero) form a high-pass filter, the same way binomial coefficients without alternating sign form a low-pass filter. I don’t want to elaborate on this because I want to sleep, but I can’t. (Chances are, if you didn’t already have knowledge of DSP, you wouldn’t understand me, yet if you already did, you don’t need to read this.) Anyway, if you subtract out the central term, it doesn’t cause the frequencies to go out of whack yet. What does that is the normalisation to unit sum.

Er, I forgot how I was going to continue.

Let’s consider the energy of such a filter. We don’t have any model to use here, so let’s just assume we’re filtering white noise (or anything else with a flat power spectrum) and treat every frequency equally. The energy can be defined as the integral of the square of the frequency response (multiplied by an optional weighting function), or, appealing to Parseval’s theorem, since our weighting function is constant, we can just take the sum of squares of the kernel directly. That gives something like $\binom{4n}{2n}/\binom{2n}n^2$ iirc, which grows as $\Theta(\sqrt n)$. Which is unbounded, yeah. So it’s not just bad, it gets worse for large $n$.
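The $\Theta(\sqrt n)$ growth is easy to check numerically (my sketch; the signs square away, and by Vandermonde the exact sum of squares is $\bigl(\binom{4n}{2n}-\binom{2n}n^2\bigr)/\binom{2n}n^2$):

```python
from math import comb

def kernel_energy(n):
    '''Sum of squares of the unit-sum kernel with the middle term zeroed:
    (comb(4n, 2n) - comb(2n, n)**2) / comb(2n, n)**2.'''
    c = comb(2 * n, n)
    return (comb(4 * n, 2 * n) - c * c) / (c * c)
```

Quadrupling $n$ roughly doubles the energy, consistent with $\Theta(\sqrt n)$.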

I haven’t worked out the arithmetic for alternatives (like minimising the integrated square difference of the frequency response from the ideal), so I’ll cut this short. Also, I want to sleep. Anyway, direct minimisation might not be the best option; placing certain restrictions, like the kernel summing to unity, is equivalent to placing a zero at 1 in the pole-zero plot, which also reduces the number of degrees of freedom by one. Bah, whatever.

## A new kind of life

```
num_states=4
num_neighbors=8
num_nodes=32
1 0 2 1 3
2 0 0 0 0
1 0 3 1 2
2 0 2 0 2
3 1 3 1 3
1 1 3 0 2
2 2 5 2 5
3 3 6 3 6
4 4 7 4 7
2 5 0 5 0
3 6 9 6 9
4 7 10 7 10
5 8 11 8 11
3 9 1 9 1
4 10 13 10 13
5 11 14 11 14
6 12 15 12 15
3 1 1 1 1
4 13 17 13 17
5 14 18 14 18
6 15 19 15 19
7 16 20 16 20
4 17 17 17 17
5 18 22 18 22
6 19 23 19 23
7 20 24 20 24
8 21 25 21 25
5 22 22 22 22
6 23 27 23 27
7 24 28 24 28
8 25 29 25 29
9 26 30 26 30
```

Right, I forget about things sometimes. I copied the tree file I generated for the second-order variant of Life. Copy into text editor of choice, save as ReversibleLife.tree (or whatever name you want), then put it in Golly’s Rules directory (which should be ~/.golly/Rules/). Launch Golly, go to Control>Set Rule…, and type in ReversibleLife (or whatever you named it).

The post title is a pun on Wolfram’s NKS. I haven’t read that book in years and actually I don’t even have the book; the last and only time I read it was in the NUS science library in 2009. Or was it 2008? Who cares.

Anyway, the structure of second-order cellular automata is immediately reminiscent of Feistel networks. In fact, that is what it is, if you take out the key, and let the round function be an iteration of the CA.

To run the rule in reverse (which you can, because this is a reversible automaton), just swap states 1 and 2. Note that the rule tree compresses two planes with one bit per cell (one plane for the current generation, one plane for the previous generation) into one plane with two bits per cell. Swapping the bits corresponds to swapping states 1 and 2, so yeah. States 0 and 3 remain the same for obvious reasons.
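The Feistel structure and its reversal can be sketched abstractly; here the two bit planes are packed into integers and the neighbourhood rule is abstracted as an arbitrary function `f` (my toy framing, not Golly’s):

```python
def forward(state, f):
    '''One second-order step: XOR the rule's output into the previous plane.'''
    curr, prev = state
    return (f(curr) ^ prev, curr)

def backward(state, f):
    '''Exact inverse of forward, for any rule f whatsoever.'''
    curr, prev = state
    return (prev, f(prev) ^ curr)
```

`backward(forward(s, f), f) == s` holds for any `f`, which is precisely why the automaton is reversible regardless of the rule.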

The smallest explosive structure seems to be the T-tetromino. In fact, almost everything seems to explode into a mess. Conversely, patterns that don’t grow unboundedly must necessarily be gliders or periodic, though I haven’t noticed the existence of any gliders. (The glider-or-periodic dichotomy applies to all reversible cellular automata, btw.)

## And you should know that

I still have a half-written post about the Thue-Morse sequence in the draft queue, but I’ll set that aside for a moment to ramble about my work life. (It’s nothing particularly interesting or new compared to the MathWorld article, anyway.)

On IRC, I like to use quotes when referring to my day job. I also like to call it data entry, which is a slight misrepresentation and oversimplification of what I really do. What I really do is— Well, I can’t really reveal “RESTRICTED” information, can I? But whatever, I will. Anyway, as I was saying, my main job scope involves this thing known as “leave”.

You know, where people get to take a day off from the horribly mind-numbing environment. Funnily enough, “off” is a distinct concept, also known as OIL or “off in lieu”, which isn’t as strictly regulated and is really easily abused.

So, let’s see. GOM 404-06-02 is a really long (was it thirty pages?) document describing the leave system in place. And all its intricacies. No, I lied. It only describes some of the intricacies. You have to figure out the rest on your own. You also have to read all the thousand circulars and directives to figure out if there’s anything else relevant, because publishing errata or addenda directly with the main general order is simply too sane. Oh, and the only way to receive updates is to either check yourself or through word of mouth.

In an interesting twist of events, from what was two leave clerks in the camp I’m in, now it’s just me. (For the short period when I was newly posted to the camp, there were three, but one of them didn’t know a thing about the leave system. That one was me, of course. Now I’m a freakin’ expert at this.) This puts my irreplaceability at approximately infinity.

Now, you see, I’m a very lazy person. If there was one thing I constantly vowed to change about myself throughout my high school life, it’d be my laziness. That one aspect of myself hasn’t changed in the almost-two-decades I’ve existed, and it doesn’t look like it’d be changing any time soon. Most of my time in office is unsurprisingly not spent on work. As I’ve mentioned, there’re intranet fora, but those move so goshdarn slowly sometimes. Sometimes I dig out old threads from 2008 or even earlier, but when the average post involves some kind of massacre of the English language, reading them sometimes turns out to be worse for my state of mind than just being bored. The rest of the time I spend writing things on paper like I’ve been doing since I learnt how to use a pencil. This involves a lot of paper, as you may expect. I’m also morally bankrupt, so I use office paper for this.

Hey taxpayers, that’s where your money’s going. I also occasionally do origami with that. Guess where the paper for my initial trioctaflexagon prototypes came from.

I also do some programming. Now, you see, unlike writing things on paper, typing furiously on a keyboard could ostensibly look like I’m actually doing work. When I first came to camp, the leave calculator provided was basically a spreadsheet with some formulae in it. It was inefficient. But I lived with it for months, before finally deciding to do something about it. I tried reverse engineering it, but this readily proved futile because, unlike normal code where you can sometimes hope for comments to exist, spreadsheets don’t exactly have a place for you to put comments, and the formulae are all necessarily unreadable one-liners. So I redid it in JavaScript. I also scrapped floating point arithmetic, and used only rational arithmetic for my version. In short, I did it “the right way”. I also did a speed dial kind of thing with a bunch of things that I had to use really often, but other than these I’ve exerted pretty much zero of my programming expertise on work-related matters. Heck, the speed dial thing was more of a fancy bookmarks page actually; there’s a bunch of JS, but it could be rewritten in HTML alone.

The rest of the time coding was mostly spent on ad hoc calculations, with the remainder used for fun things like Conway’s Game of Life.

<t> so today at work
<t> I coded a game of life thing
<t> and spent 80% of my time playing with it
<t> I almost forgot to leave in time for the bus :v
<X> i did that in uni
<X> it was fun
<t> that supreme feeling of having control over the gliders’ lives and the ability to smash them together to create chaos
<X> technically the gliders don’t really have “lives”
<X> in fact the organisms that compose them die repeatedly as they move
<X> since life apparently doesn’t include evolving into an ambulatory form

Anyway, if I were to break down the time spent in the office, it’d probably go like this. Half the time is spent writing things on paper. Half of the remaining time is spent on the fora. Half of that remaining time is spent sleeping. Half of this remaining time is spent on work, and the other half is spent on more sleep.

Of course, I get to do all the nonsense I want because I’m in this privileged position where they can’t find anyone to replace me. There was a newly-posted guy who I was supposedly going to split the leave workload with, but then he got swapped with another guy because this other guy thought his job was too stressful and wanted out. Just a few weeks back, I composed an email full of snark and sarcasm, written in a highly condescending tone, to a probably-forty-year-old chief clerk of some other unit. Someone probably wanted to stir up some drama and included the head in the recipient list when replying. Understandably, nothing happened, because snark and sarcasm aside, nothing I wrote in the email was wrong. Have I ever mentioned how much I love being correct? My officer-in-charge probably got a rude shock out of it when she saw, but, well, I’m not really the type to care about others’ feelings.

The really important thing to take away from this story is that you can get away with whatever bullshit you pull if people can’t just get rid of you.

PS: I also made a variant of Life with reversible rules, and interestingly there’re quite a few oscillators with small prime periods (5, 7, 11, 13, 17). Anything that’s a still Life lifeform becomes a period-3 oscillator (so period 3 isn’t really interesting); among the configurations that aren’t still Life lifeforms, it seems that beyond a certain size some explosive behaviour happens. (With a finite grid size though, the reversibility does imply eventual periodicity, possibly longer than the age of the universe in femtoseconds.) I wonder if Golly could support that.

## Thingses

At some point of time, I received a commendation letter and a nomination to be granted the title of “outstanding”. Apparently, slacking as much as I did in school grants accolades in the chair force. (Amusingly enough, within this camp, there’s been another ex-NUSH student who’s also received the nomination, a few years ago. I noticed this while going through past title recipients.) For obvious reasons (the not-quite-humility I have), I’ve tried to reject both, but they happened anyway. At least the latter is still only a nomination as yet. I hope I don’t get it for real.

I think a part of that may be attributed to actual competence. I mean, when I actually do work (which is rarely ever, but it still sometimes happens), I do it the best I can. Perfectionism?

Anyway. I still post on a certain intranet forum, more specifically the mathematics thread. I’ve occasionally posted in the others, but I guess there’s nothing much of note there. While the question on conditional convergence has not been fully resolved (I posted the closed forms for $(a,b)$ equalling $(1,1)$, $(2,1)$ and $(1,2)$, but not the general solution), I posted a new one about linear recurrences. I know, old topic. I covered that before somewhere here or on zznq. Maybe. I can’t remember and don’t feel like checking.

The first question related to the general form of $u_n=u_{n-1}-u_{n-2}$. You can actually work this out with brute force and notice that it has a period of six terms. This one got answered pretty fast, so I followed it up with something a bit less trivial. Define our recurrence by $u_n=au_{n-1}+bu_{n-2}$. For what values of $a$ and $b$ is this recurrence periodic, regardless of initial values?

If you know how to solve linear recurrences, this isn’t hard. Our characteristic polynomial here is $x^2-ax-b$, and we have the condition that the two zeros must satisfy $x^n=1$ for some integer $n$. If we restrict ourselves to real $a,b$, that forces the two zeros to be reciprocals (and conjugates) of each other, and letting one of them be $\exp(2\pi i\frac pq)$, we get $a=2\cos(2\pi\frac pq)$ and $b=-1$.

One catch, though. This misses one solution, and includes two that aren’t solutions at all. The case we left out above is where a zero is $\pm1$: we can either have a doubled zero at $\pm1$, or one zero at $1$ and one at $-1$. The latter gives the characteristic polynomial $x^2-1$, or equivalently the recurrence $u_n=u_{n-2}$. Yup, periodic with period 2, no doubt. What about the former? Turns out that having doubled zeros makes them non-solutions; those give characteristic polynomials $x^2\mp2x+1$, or equivalently the recurrence $u_n=\pm2u_{n-1}-u_{n-2}$. In the $+$ case, this is equivalent to linear extrapolation, which obviously isn’t periodic; in the $-$ case, it’s linear extrapolation with signs flipping every term, which also isn’t periodic.

So there you have it. In particular, if you want rational $(a,b)$, applying Niven’s Theorem, the only choices you get are $(0,\pm1)$ and $(\pm1,-1)$. All the other solutions have $b=-1$ and irrational $a$.
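The classification is easy to sanity-check by brute force (a hypothetical helper of mine, not from the forum thread):

```python
def period(a, b, u0=0, u1=1, limit=1000):
    '''Period of u_n = a*u_{n-1} + b*u_{n-2} starting from (u0, u1),
    or None if the state never returns within the limit.'''
    start = (u0, u1)
    state = start
    for n in range(1, limit + 1):
        state = (state[1], a * state[1] + b * state[0])
        if state == start:
            return n
    return None
```

`period(1, -1)` gives 6 (the first question), `period(0, 1)` gives 2, and `period(2, -1)` never returns to its start, matching the doubled-zero analysis.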


## Apparently I’m a retard

Re: weighted medians.

Applying a median filter (or more generally, a rank filter) on a binary image is no different from blurring and thresholding. So, what I did manage to show was that for binary images, a Gaussian-like blur looks nicer than a box blur.

I nuked the code on Pastebin and reupped it on GitHub with some changes. Only took about an hour to get everything set up! Said changes include adding two more implementations of the (normal) median filter using different algorithms (quickselect (which is actually the slowest, ironically) and the constant-time one). And of course they produce identical outputs.

Copyright? Eh. I don’t feel like putting my name on this.

Also, a whole lot of non-research image algorithms in non-professional software tend to be outright buggy or doing the wrong thing. GIMP’s resizers were broken the last time I checked (around 2011), for example. So are PIL’s. Resizers seem like a thing that should be trivial, but somehow people get them wrong anyway. I’ve written about this. GEGL has got an expert on the team recently though (Robidoux), so presumably it’s less wrong nowadays. (He has some ideas about resampling which I don’t fully agree with, like performing contrast expansion before resampling and reversing it after. But hey, he’s the expert.)

Oh right, I’ll take this bit to mention that some time in the last few months, Firefox/Nightly implemented high-quality image downscaling. It’s not instant though, and before it’s computed, the old low-quality scaling is used. Which then becomes really jarring because there’s a sudden shift in the image, meaning that at least one of the two scalers is wrong. Embarrassing.

Anyway, up next in imagefilters is probably some blur code. That shouldn’t be too hard. Probably a box blur and general convolution sometime. I might clean up the old rescaling code and add it, but as it stands it’s really messy and embarrassing to look at.

## Bonus feature

Let’s say you have the alternating harmonic series.

You should already know that it’s conditionally convergent, and that by rearranging terms you can get any value you want. For instance, if you repeatedly take $a$ positive terms followed by $b$ negative terms ad infinitum (with the terms of each sign added in order of decreasing magnitude), the value is $\frac12\log\frac{4a}b$.

In the case where $(a,b)$ is $(1,1)$, this is just the normal alternating harmonic series. Where it’s $(1,2)$, we get half the value instead, which is a commonly used example of the pitfalls of carelessly handling conditionally convergent series.

As a side note, I posted that question to a certain intranet forum, and it seems like that one other guy who posts in the maths thread isn’t too familiar with conditionally convergent series. Oh well.

Anyway, an interesting observation that somehow eluded me for the past seven or so years since I learnt about the alternating harmonic series is that if you take one positive term, then four negative terms, etc., the sum works out to be zero exactly.

$\displaystyle\frac11-\frac12-\frac14-\frac16-\frac18+\frac13-\frac1{10}-\frac1{12}-\frac1{14}-\frac1{16}+\cdots=0$

This converges slowly.

Define:

$\displaystyle f(k):=\frac1{2k+1}-\frac1{8k+2}-\frac1{8k+4}-\frac1{8k+6}-\frac1{8k+8}$

Then we have, as a restatement of the above:

$\displaystyle\sum_{k=0}^\infty f(k)=0$

This still converges slowly. Of course it does; we haven’t changed anything yet. $f(k)=\Theta(k^{-2})$, so the partial sums have $\Theta(k^{-1})$ error. Let’s improve it by one order. Observe that the sequence $(2k+\frac12,2k+1,2k+\frac32,2k+2)$ (the negative terms’ denominators, divided by 4) is centred on $2k+\frac54$, which isn’t quite the same value as the (one-valued) sequence $(2k+1)$. We can shift the positive terms by one-eighth of a term to compensate.

No need for anything too fancy.

$\displaystyle g(k):=\frac78\cdot\frac1{2k+1}+\frac18\cdot\frac1{2k+3}-\frac1{8k+2}-\frac1{8k+4}-\frac1{8k+6}-\frac1{8k+8}$.

$\displaystyle\frac18+\sum_{k=0}^\infty g(k)=\sum_{k=1}^\infty g(k)=0$

What’s the order of $g$? We should expect it to be higher. And it is, by exactly one order. $g(k)=\Theta(k^{-3})$, so the sum has $\Theta(k^{-2})$ error.
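The two convergence rates are easy to eyeball numerically (my sketch; floating point is fine at this precision):

```python
def f(k):
    return 1/(2*k+1) - 1/(8*k+2) - 1/(8*k+4) - 1/(8*k+6) - 1/(8*k+8)

def g(k):
    return (7/8/(2*k+1) + 1/8/(2*k+3)
            - 1/(8*k+2) - 1/(8*k+4) - 1/(8*k+6) - 1/(8*k+8))

# Partial-sum errors: f's shrinks like 1/N, g's like 1/N**2.
err_f = abs(sum(f(k) for k in range(1000)))
err_g = abs(1/8 + sum(g(k) for k in range(1000)))
```

After a thousand terms, `err_f` is still around $10^{-4}$ while `err_g` is smaller by several orders of magnitude.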

Now, this is all hogwash because we already know the sum is zero, but it was a fun exercise anyway!