Some fixed-point theorem

Let I be a closed, bounded interval and f:I\to I be a function satisfying |f(x)-f(y)|<|x-y| for all distinct x,y\in I. What we want to show is that, for every x\in I, the sequence f^n(x) converges (as n\to\infty) to the unique fixed point of f.

Proofs for the existence and uniqueness of the fixed point are standard. Additionally, since |f\circ f(x)-f\circ f(y)|\le|f(x)-f(y)|<|x-y| for distinct x,y\in I, f\circ f also has a unique fixed point. We also need the “trivial” result that if \lim_{n\to\infty}f^n(x) converges, it converges to the unique fixed point, and likewise for f\circ f; what’s left is to show that this limit exists regardless of starting point.

For any x_0\in I, let x_n:=f^n(x_0). Also let y be the fixed point of f. Suppose to the contrary that the sequence does not converge; in particular, x_n\ne y for every n (otherwise it would be eventually constant at y). Consider the sequence n\to|x_n-y|, which is then positive and strictly decreasing, and thus convergent. If it converges to 0, then \lim_{n\to\infty}x_n=y, a contradiction. Therefore z:=\lim_{n\to\infty}|x_n-y| must be positive, and there exists some N\ge0 such that for all n\ge N, z<|x_n-y|<2z.

If n\to x_n-y is eventually always positive or eventually always negative, then x_n-y=\pm|x_n-y| eventually, so \lim_{n\to\infty}(x_n-y)=\pm z exists and hence so does \lim_{n\to\infty}x_n, which is a contradiction. If n\to(x_n-y)(x_{n+1}-y) is eventually always negative, then n\to(x_{2n}-y) is eventually always positive or eventually always negative, and since x_{2n}=(f\circ f)^n(x_0), the same argument applied to f\circ f again gives a contradiction.

Consequently, there exist some i,j\ge N such that

(x_i>y\wedge x_{i+1}>y\wedge x_j>y\wedge x_{j+1}<y)\text{ or}\\  (x_i<y\wedge x_{i+1}<y\wedge x_j<y\wedge x_{j+1}>y).

Then we have

|x_{i+1}-x_{j+1}|=|(x_{i+1}-y)+(y-x_{j+1})|=|x_{i+1}-y|+|y-x_{j+1}|>2z,

but also, since x_i\ne x_j (otherwise x_{i+1} and x_{j+1} would coincide rather than lie on opposite sides of y),

|x_{i+1}-x_{j+1}|<|x_i-x_j|=|(x_i-y)-(x_j-y)|<z<2z,

a contradiction. Therefore the initial supposition that the sequence does not converge is false, and the proposition is proven.
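
As a concrete (and not particularly fast) illustration that this is genuinely stronger than the Banach fixed-point theorem, here's a throwaway JavaScript sketch (the example is mine, not part of the proof; print as in a JS shell): \sin maps [0,\pi/2] into itself and satisfies |\sin x-\sin y|<|x-y| for distinct x,y, yet has no contraction constant c<1, and its iterates still crawl towards the unique fixed point 0.

// f = sin on [0, pi/2]: |f(x)-f(y)| < |x-y| for x != y, but sup|f'| = 1,
// so Banach's theorem doesn't apply directly; the iterates still converge to 0 (roughly like sqrt(3/n))
var x = 1.5;
for (var n = 0; n <= 1000000; n++)
{
	if (n % 200000 === 0) print(n + '\t' + x);
	x = Math.sin(x);
}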

There’s probably a better way of proving this. This one relies on the order structure of the real numbers even though the result should still hold when generalised to compact metric spaces. The caveats there are that we can’t use the intermediate value theorem to show the existence of a fixed point, and that an ordering need not exist at all, so splitting into the positive/negative/alternating/none-of-the-above cases is an invalid operation.

Last-minute hurfdurfery

MA2108S, Tutorial 10, let’s go. 16th April, 23:34. All the files are on my home server but it’s not like it’s public anyway, so maybe I’ll upload them to Mediafire some time. I lost focus somewhere around the last part of question 6, so the times are kinda meaningless.

1

Claim: f is continuous on (a,b).

Let x\in(a,b) and \varepsilon:=\min\{x-a,b-x\}/2. Then x\in(a+\varepsilon,b-\varepsilon) is an interior point of [a+\varepsilon,b-\varepsilon], and since f is continuous over that interval, f is continuous at x. Therefore f is continuous on (a,b).

Claim: f is not necessarily continuous on [a,b].

Consider f:[0,1]\to\mathbb R with f(0):=0 and f(x):=1/x otherwise; this is continuous on [\varepsilon,1-\varepsilon] for every \varepsilon>0 but clearly discontinuous at 0.

(23:38)

2

For any x\in[a,b], the truncations of the (canonical) binary expansion of x (i.e. \lfloor2^nx\rfloor/2^n) form a rational sequence converging to x that eventually lies in [a,b] (for x=a, use \lceil2^na\rceil/2^n instead), and since f is stated to be continuous,

\displaystyle f(x)=f\left(\lim_{n\to\infty}\frac{\left\lfloor2^nx\right\rfloor}{2^n}\right)=\lim_{n\to\infty}f\left(\frac{\left\lfloor2^nx\right\rfloor}{2^n}\right)=\lim_{n\to\infty}0=0.

(23:44)

3

Um.

\min\{f(x),g(x)\}=\tfrac12(f(x)+g(x)-|f(x)-g(x)|)\\  \max\{f(x),g(x)\}=\tfrac12(f(x)+g(x)+|f(x)-g(x)|)

Since the set of continuous functions is closed under addition, scalar multiplication, and taking the absolute value, the set is also closed under \min and \max.

(23:53)

4

Assumed above, but I guess this actually needs to be proven? (If we assume 3, we can use |f(x)|=\max\{0,f(x)\}+\max\{0,-f(x)\}, but circular arguments are circular.)

Let a be some element of f's domain and assume that it is a cluster point too. (If it’s not a cluster point, continuity is trivial.)

\lim_{x\to a}g(x)=\lim_{x\to a}|f(x)|=|f(a)|=g(a)

(00:00)

5(a)

Consider x':=x\mod\pi. If x'\in[0,\pi/6)\cup(5\pi/6,\pi), then f(x)=x. If x'\in\{\pi/6,5\pi/6\}, then f(x)=x/2. Otherwise, |2\sin x|>1, so f(x)=0.

f is continuous over \mathbb R\backslash\pi(\{1/6,5/6\}+\mathbb Z) and discontinuous elsewhere.

5(b)

Continuous over the irrationals, discontinuous over the rationals. Gone through in class etc.

5(c)

Continuous at 0, discontinuous elsewhere. Given any nonzero x, for any \delta, consider the \min\{\delta,|x|/2\}-neighbourhood of x. There exist both a rational and an irrational in this neighbourhood; let these be x_r,x_i respectively. (WLOG assume x,x_r,x_i are all positive.)

x_r>x/2\implies g(x_r)=x_r>x/2\\  g(x_i)=0

Therefore g is discontinuous at every nonzero value.

5(d)

This is clearly discontinuous at the reciprocals of the square roots of nonsquare integers, and continuous between the reciprocals of the square roots of integers. This leaves the cases |x|>1 and x=1/n for some nonzero n.

In the former case, h(x)=0 is constant and hence continuous at x.

In the latter case, suppose WLOG that n>0. We have that h(x)=0 while h(x-)=(n^2+1)(-1)^n\ne0, so h is not continuous at such x.

To summarise, h is discontinuous on \{1/\sqrt n|n\in\mathbb N^+\}\cup\{-1/\sqrt n|n\in\mathbb N^+\} and continuous elsewhere.

(00:45) (I got distracted, okay.)

6(a)

\displaystyle{\left(\frac{1+x2^x}{1+x3^x}-1\right)/x^2=\frac{2^x-3^x}x\frac1{1+x3^x}=\left(\frac{2^x-1}x-\frac{3^x-1}x\right)\frac1{1+x3^x}\to\left(\log2-\log3\right)1=\log\frac23}

\displaystyle{\left(\frac{1+x2^x}{1+x3^x}\right)\uparrow x^{-2}=\left(\left(\frac{1+x2^x}{1+x3^x}\right)\uparrow\left(\frac{1+x2^x}{1+x3^x}-1\right)^{-1}\right)\uparrow\left(\left(\frac{1+x2^x}{1+x3^x}-1\right)/x^2\right)\to\exp\log\frac23=\frac23}

6(b)

\displaystyle\frac{(x+e^x)-1}x=1+\frac{e^x-1}x\to2

\displaystyle(x+e^x)^{1/x}=\left((x+e^x)^{1/(x+e^x-1)}\right)^{(x+e^x-1)/x}\to e^2

6(c)

\displaystyle x^{\sin x}=\exp((\log x)(\sin x))=\exp\left((\log x)x\left(\frac{\sin x}x\right)\right)\to\exp(0\cdot1)=1

6(d)

Apparently, for this kind of question you’re supposed to use cheating methods like Taylor series to get a guess of what the behaviour is, then rigorously prove it without Taylor series! Seems kinda backwards.

\displaystyle\log(x+\sqrt{1-x^2})=\log\left(1+x-\frac{x^2}2+O(x^3)\right)=x-x^2+O(x^3)\\  \log(nx+\sqrt{1-n^2x^2})=\log\left(1+nx-\frac{(nx)^2}2+O(x^3)\right)=nx-(nx)^2+O(x^3)

Hmm. Not quite the right approach.

\displaystyle(x+\sqrt{1-x^2})^{1/x}=\left((x+\sqrt{1-x^2})^{1/(x+\sqrt{1-x^2}-1)}\right)^{(x+\sqrt{1-x^2}-1)/x}\to\exp1=e

\displaystyle{\lim_{x\to0}\frac{\log(nx+\sqrt{1-(nx)^2})}{\log(x+\sqrt{1-x^2})}=\lim_{x\to0}n\frac{\log(nx+\sqrt{1-(nx)^2})^{1/(nx)}}{\log(x+\sqrt{1-x^2})^{1/x}}=n\lim_{x\to0}\frac{\log(nx+\sqrt{1-(nx)^2})^{1/(nx)}}{\log(x+\sqrt{1-x^2})^{1/x}}=n\frac{\log e}{\log e}=n}

6(e)

\displaystyle0\le\sin\log\sqrt{1+\frac1x}<\log\sqrt{1+\frac1x}\to0\implies\sin\log\sqrt{1+\frac1x}\to0\\  \sin\log(x+1)-\sin\log x=2\left(\sin\log\sqrt{\frac{x+1}x}\right)\cos\log\sqrt{x(x+1)}\to0

6(f)

Second derivative?

\displaystyle\frac{\log(x+h)+\log(x-h)-2\log x}{h^2}=\frac{\log(1+h/x)+\log(1-h/x)}{h^2}=\frac{\log(1-h^2/x^2)}{h^2}=x^{-2}\log(1-h^2/x^2)^{x^2/h^2}\to-x^{-2}
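
(A throwaway numerical check, not part of the answer, agrees with the -x^{-2}: at x=2 the quotient should head towards -0.25.)

// second symmetric difference of log at x = 2; should approach -1/x^2 = -0.25
var x = 2;
for (var h = 0.1; h >= 1e-4; h /= 10)
	print(h + '\t' + (Math.log(x + h) + Math.log(x - h) - 2*Math.log(x))/(h*h));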

(01:57)

7

Let f:x\mapsto[x=0] and consider behaviour around x=0.

\displaystyle\lim_{h\to0}|f(x+h)-f(x-h)|=0\\  \lim_{h\to0}|f(x+h)-f(x)|=1

(02:25)

Actually, I might as well do tutorial 11 too.

1

Follows from intermediate value theorem and induction; in this case it would be easier to prove the more general statement for any convex combination, not just the mean.

2

Again, intermediate value theorem. For sufficiently large and sufficiently small x, f(x)>0, and since f(0)=-1<0, IVT gives us the existence of at least two distinct roots.

3

Bi-Lipschitz blah blah. f(x) is well defined (cf. tutorial 9 question 1), so all that remains is to show that f is continuous.

|x-x_0|=|y-y_0-\varepsilon(\sin y-\sin y_0)|\ge|y-y_0|-\varepsilon|\sin y-\sin y_0|\ge|y-y_0|(1-\varepsilon)\\  |y-y_0|\le|x-x_0|\frac1{1-\varepsilon}

f is Lipschitz continuous and therefore also continuous. (A special case of a more general theorem on how a small Lipschitz perturbation of the identity function gives a bi-Lipschitz result.)

4

\forall n\in\mathbb N\quad g(x_n)-f(x_n)=f(x_{n+1})-f(x_n)

If n\mapsto x_n is either nonincreasing or nondecreasing, then it is a convergent sequence; letting x_*:=\lim_{n\to\infty}x_n and taking limits, f(x_*)=g(x_*) by continuity.

Otherwise, there exists some n for which (x_{n+1}-x_n)(x_{n+2}-x_{n+1})\le0.

\mathrm{sgn}((f(x_{n+1})-f(x_n))(f(x_{n+2})-f(x_{n+1})))=\mathrm{sgn}((x_{n+1}-x_n)(x_{n+2}-x_{n+1}))\le0

But also

\mathrm{sgn}((f(x_{n+1})-f(x_n))(f(x_{n+2})-f(x_{n+1})))=\mathrm{sgn}((g(x_n)-f(x_n))(g(x_{n+1})-f(x_{n+1}))),

where, by IVT, there exists some value of x between x_n and x_{n+1} such that f(x)=g(x).

5

This is a homework question.

6

Suppose for contradiction that f is continuous. Then it attains a minimum value a at two points x_1\ne x_2, and likewise a maximum value b for two other points x_3\ne x_4. WLOG let x_1<x_2 and x_3<x_4.

If x_1\ne0, let \epsilon:=\min\{(x_1)/2,(x_2-x_1)/3\}; if x_2\ne1, let \epsilon:=\min\{(x_2-x_1)/2,1-x_2\}. Let a':=\min f(\{x_1-\epsilon,x_1+\epsilon,x_2-\epsilon,x_2+\epsilon\}\cap[0,1]). If a=a', we have a contradiction. Suppose a\ne a'. By IVT, at least three of these statements are true (with the fourth possibly failing due to asserting the existence of an element in an empty set):

\displaystyle\exists x\in(x_1-\epsilon,x_1)\quad f(x)=(a+a')/2\\  \exists x\in(x_1,x_1+\epsilon)\quad f(x)=(a+a')/2\\  \exists x\in(x_2-\epsilon,x_2)\quad f(x)=(a+a')/2\\  \exists x\in(x_2,x_2+\epsilon)\quad f(x)=(a+a')/2

As the open intervals are all disjoint, this implies there are at least three distinct values of x such that f(x)=(a+a')/2, a contradiction.

Therefore x_1=0 and x_2=1. By the same argument, x_3=0 and x_4=1. However, this means that \max_{0\le x\le1}f(x)=f(x_3)=f(0)=f(x_1)=\min_{0\le x\le1}f(x); in other words, f is a constant function. This is also a contradiction.

Consequently, the initial assumption that f is continuous is untenable.

7

“A continuous endomorphism has a fixed point.”

If f(0)=0 or f(1)=1 we are done. Suppose otherwise, and let g(x):=f(x)-x.

g(0)>0 and g(1)<0, so by IVT there exists some x\in(0,1) such that f(x)=x.

8

This is exactly what I said for question 1.

9

Definition quoting time!

Suppose that f is uniformly continuous and that \lim_{n\to\infty}(x_n-y_n)=0. For any \varepsilon>0, there exists some \delta>0 such that |x-y|<\delta\implies|f(x)-f(y)|<\varepsilon, and there exists some N such that for all n\ge N, |x_n-y_n|<\delta. Putting two and two together, \lim_{n\to\infty}(f(x_n)-f(y_n))=0.

In the other direction, suppose that f is not uniformly continuous. Then there exists some \varepsilon>0 such that for all \delta>0 there exist some x(\delta),y(\delta) such that |x(\delta)-y(\delta)|<\delta and |f(x(\delta))-f(y(\delta))|\ge\varepsilon. Then construct sequences x_n:=x(1/n) and y_n:=y(1/n), where \lim_{n\to\infty}(x_n-y_n)=0 but f(x_n)-f(y_n)\not\to0.

10

Oh god it’s 4 am I can’t think any more.

Counterexamples

It turns out that my mathematical intuition is the visual sort, and yet, for some reason, I’m terrible at reasoning about geometrical objects. Is it merely due to lack of experience? Probably. But no, that’s not the point of this post.

MA2108S, Tutorial 8, question 4.

Suppose f(x)\ge M for some constant M>0 and all x\in[a,\infty). Suppose that \lim_{x\to\infty}f(x+1)/f(x)=\alpha exists. Show that \lim_{x\to\infty}\sqrt[x]{f(x)}=\alpha.

Sounds like this could be true, right? After all, if you consider f(x) as just a discrete sequence f(a),f(a+1),\dots this follows immediately from Stolz’s theorem. In fact, if f is bounded over every interval [a',a'+1] where a'\ge a, this statement is true. If we impose the requirement that f is continuous, this implies boundedness over every such interval and thus the proposition holds. (Proof of this is omitted, because hey, look at the post title. But basically the idea is to prove that f(a+n+x)^{1/(a+n+x)} cannot differ too much from f(a+n)^{1/(a+n)} when n\in\mathbb N is large enough and 0\le x\le 1.)

However, finding examples in analysis is all about pathological functions too hard to graph and comprehend. A bounded function won’t do, so we need an unbounded function. But we can’t have a function that has singularities either (like x\mapsto1/x). If we can’t have a function that’s nicely defined, we might as well make it super-pathological.

\begin{array}{rl}f:[0,\infty)\to&[1,\infty)\\  x\mapsto&\begin{cases}1\text{ if }x\notin\mathbb Q\\n\text{ if }x\in\mathbb Q\text{ and }x=m/n,~\gcd(m,n)=1,~n>0\end{cases}  \end{array}

This is sort of like the reciprocal of Thomae’s function. Just happened to be the first thing that came to mind because we covered that in class a while ago, though I can’t think of a simpler and equally suitable function. Anyway.

\displaystyle\forall x\in\mathbb R^+\quad f(x+1)=f(x)\\  \implies f(x+1)/f(x)=1\\  \implies\lim_{x\to\infty}f(x+1)/f(x)=1

Kinda obvious, yes? The problem is that now we can also show that \limsup_{x\to\infty}\sqrt[x]{f(x)}=+\infty, and we know that 1 is not infinite. Probably. Come to think about it, we never actually proved the finiteness of 1. Let’s assume that.

For any positive integer N, let \displaystyle x:=N+\frac1{N^{N+1}}>N.

\displaystyle f(x)=N^{N+1}\\  \sqrt[x]{f(x)}=N^{(N+1)/\left(N+\frac1{N^{N+1}}\right)}\ge N

Thus, along these values of x, \sqrt[x]{f(x)}\ge N\to\infty, so \limsup_{x\to\infty}\sqrt[x]{f(x)}=+\infty, and consequently \lim_{x\to\infty}\sqrt[x]{f(x)} does not exist (let alone equal 1).

Actually, the crux of this proof is that for every a'\ge a, f is unbounded over the interval [a'+n,a'+n+1] for some nonnegative integer n, and put that way, it’s a lot easier to come up with other counterexamples.

MA2108S Tutorial 9

Let’s see. It’s 11:32 pm as I write this sentence. How long will it take me to finish this?

Order in which questions were done: 3, 5, 10, 11, 2, 12, 1, 6, 4, 8, 9, (todo) 7.

1

Non-kosher solution: the derivative of f:x\mapsto m+\varepsilon\sin x is bounded in absolute value by \varepsilon<1, so fixed-point iteration converges to the unique fixed point. But we don’t know about derivatives yet!

Side note: Actually, this reminds me. In the Numerical Analysis I module, the lecture notes state that if |f'(x)|\le c<1 for some c then fixed-point iteration converges, which is a stricter condition than necessary. This can be relaxed in two different ways: either by using Lipschitz continuity (c-Lipschitz where c<1) or by imposing |f'(x)|<1. Or to generalise slightly further, it’s enough to have |f(x_1)-f(x_0)|/|x_1-x_0|<1 for all x_0\ne x_1 as the condition. This forces every interval to shrink by a nonzero amount on applying the function, which in turn forces convergence and uniqueness of the fixed point. (Actually, wait, I’m having a hard time trying to convince myself that this is actually true now. I’ll leave this as a [citation needed] and get back to the tutorial question. Actually actually, the Wikipedia article on the Banach fixed-point theorem says that this is true if we assume f to be an endomorphism on a closed interval, which was the case I was thinking of anyway.)

How would we approach this question? The solution of x=m+\varepsilon\sin x must lie somewhere in (m-1,m+1), since |\varepsilon\sin x|\le\varepsilon<1. But wait, we just assumed that the root exists. Instead, we should use IVT and show both existence and the bound in one step.

The standard high-school-level proof of fixed-point iteration converging involves MVT, which we can’t use because we don’t have differentiation yet. What we can do is to weaken it to Lipschitz continuity, which is pretty much the statement of the MVT anyway. We’ll take as a lemma that \sin is 1-Lipschitz; i.e. f is \varepsilon-Lipschitz. Now we can prove uniqueness of the root and that iteration of f generates a contractive sequence (thereby implying convergence) if the seed is in (m-1,m+1).

Suppose there exist two distinct solutions x,x' (i.e. f(x)=x and f(x')=x'). Then |x-x'|=|f(x)-f(x')|\implies|f(x)-f(x')|/|x-x'|=1>\varepsilon, a contradiction. Therefore the fixed point is unique. Henceforth let x be the unique solution.

\displaystyle|x_{k+1}-x|=|f(x_k)-f(x)|\le\varepsilon|x_k-x|\\  \implies |x_n-x|\le\varepsilon^n|x_0-x|\\  \implies\lim_{n\to\infty}x_n=x
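
As a numerical aside (a sketch with made-up m and \varepsilon, taking the equation to be x=m+\varepsilon\sin x; print as in a JS shell), the iteration converges very quickly:

// fixed-point iteration for x = m + eps*sin(x); m = 2, eps = 0.5 are arbitrary sample values
var m = 2, eps = 0.5, x = m; // any seed in (m-1, m+1) works
for (var k = 0; k < 40; k++) x = m + eps*Math.sin(x);
print(x + '\t' + (m + eps*Math.sin(x) - x)); // the root and its residual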

(01:12)

2

This is easily bounded in both directions thus, assuming x>1.

\displaystyle\prod_{k=1}^n(1+x^k)<\prod_{k=1}^n(x^k+x^k)=2^nx^{n(n+1)/2}\\  \prod_{k=1}^n(1+x^k)>\prod_{k=1}^nx^k=x^{n(n+1)/2}

Therefore p=n(n+1)/2. To find A, which is obviously 1, consider the following limit of a finite product.

\displaystyle\lim_{x\to\infty}x^{-p}\prod_{k=1}^n(1+x^k)=\lim_{x\to\infty}\prod_{k=1}^n\frac{1+x^k}{x^k}=\lim_{x\to\infty}\prod_{k=1}^n1=1

Therefore \prod_{k=1}^n(1+x^k)\sim x^{n(n+1)/2}.

(00:04)

3

Suppose x>2\cdot10^5. Then

\displaystyle10^{-5}x^3>2x^2=x^2+x^2>x^2+200000x>x^2+11x>x^2+10x+100.

(11:36)

4(i)

\displaystyle{\frac{\tan x-\sin x}{x^3}=\frac{\sin x(\tfrac1{\cos x}-1)}{x^3}\\  =\frac{\sin x}x\frac{1-\cos x}{x^2\cos x}=\frac{\sin x}x\frac{2\sin(\tfrac x2)^2}{x^2}\sec x\to1\cdot\frac12\cdot1=\frac12}\\  \therefore\tan x-\sin x\sim\frac12x^3

4(ii)

Trying to rationalise this would lead to a fifth-degree monstrosity. Probably not desirable.

\displaystyle\frac{\sqrt{1-2x}-\sqrt[3]{1-3x}}{x^2}=\frac{1-2x-(1-3x)^{2/3}}{x^2(\sqrt{1-2x}+\sqrt[3]{1-3x})}\\  \to\frac{1-2x-(1-3x)^{2/3}}{2x^2}=\frac{(1-2x)^3-(1-3x)^2}{2x^2(\text{tl;dw})}\\  \to\frac{(1-2x)^3-(1-3x)^2}{6x^2}=\frac{1-6x+12x^2-8x^3-1+6x-9x^2}{6x^2}\\  \to\frac12\\  \therefore\sqrt{1-2x}-\sqrt[3]{1-3x}\sim\frac12x^2

(01:35)

5

Let f\in O(x^m) and g\in O(x^n). Assume x\ge0 to avoid dealing with complex numbers if m,n\notin\mathbb Z. (Why does he always use n,m instead of m,n?)

By definition, there exist positive \alpha,\beta such that |f(x)|<\alpha x^m and |g(x)|<\beta x^n for all (or all sufficiently small) positive x.

Then |f(x)g(x)|<(\alpha\beta)x^{m+n} for the same x, so fg\in O(x^{m+n}).

(11:40)

6

This one we went through during lecture already, but I don’t pay much attention or take notes, so.

\displaystyle{\frac{\sqrt{x+\sqrt{x+\sqrt x}}}{x^{1/8}}=\sqrt{\frac{x+\sqrt{x+\sqrt x}}{x^{1/4}}}=\sqrt{x^{3/4}+\sqrt{\frac{x+\sqrt x}{x^{1/2}}}}=\sqrt{x^{3/4}+\sqrt{1+\sqrt x}}\to\sqrt{\sqrt1}=1}

(01:19)

7

I am completely lost here.

8

Take \displaystyle\phi:x\mapsto\sqrt[3]{1+x}-1,\ \psi:x\mapsto x/3,\ \alpha_{m,n}:=\frac m{n^2}.

\displaystyle\alpha_{m,n}\le\frac n{n^2}=\frac1n\to0

\displaystyle\lim_{x\to0}\frac{\phi(x)}{\psi(x)}=\lim_{x\to0}\frac{\sqrt[3]{1+x}-1}{x/3}=\lim_{x\to0}\frac{(1+x)-1}{3x/3}=1

Applying 7, we get the following.

\displaystyle\lim_{n\to\infty}\sum_{k=1}^n\left(\sqrt[3]{1+\frac k{n^2}}-1\right)=\lim_{n\to\infty}\sum_{k=1}^n\frac k{3n^2}=\lim_{n\to\infty}\frac{n(n+1)}2\frac1{3n^2}=\frac16

(02:03)

9

Convert the product into a sum by taking the logarithm first.

Take \displaystyle\phi:x\mapsto-\log\cos x,\ \psi:x\mapsto x^2/2,\ \alpha_{m,n}:=\frac{6m}{n^{3/2}}.

\displaystyle\alpha_{m,n}\le\frac{6n}{n^{3/2}}=6n^{-1/2}\to0

\displaystyle\frac{\log\cos x}{\cos x-1}=\log(1+(\cos x-1))^{1/(\cos x-1)}\to\log e=1\\  \lim_{x\to0}\frac{\phi(x)}{\psi(x)}=\lim_{x\to0}\frac{\phi(x)}{1-\cos x}\frac{1-\cos x}{\psi(x)}=1\cdot 1=1

Again, applying 7, we get the following.

\displaystyle\lim_{n\to\infty}\prod_{k=1}^n\cos\alpha_{k,n}=\exp\lim_{n\to\infty}\sum_{k=1}^n\log\cos\alpha_{k,n}=\exp\left(-\lim_{n\to\infty}\sum_{k=1}^n\frac{\alpha_{k,n}^2}2\right)=\exp\left(-\lim_{n\to\infty}\sum_{k=1}^n\frac{18k^2}{n^3}\right)=\exp(-6)=e^{-6}

(02:23)

10

Assume n\ge2 and also e>2. Define the following.

\displaystyle a:=n!e\mod1=\sum_{k=n+1}^\infty\frac1{(n+1)(n+2)\cdots(k-1)k}\in\left(\frac1{n+1},\frac1n\right)

The upper bound comes from bounding with a geometric series. (This was something I figured out in like year 3, when I then realised that trying to transform the e series was pointless, given its already superlinear convergence.)

\displaystyle\lim_{n\to\infty}n\sin(2\pi n!e)=\lim_{n\to\infty}n\sin(2\pi a)

Since we know that \sin(2\pi a) is increasing in a if 0\le a<1/4, we need to further assume n>4, at which point we can squeeze the result out. Literally.

\displaystyle\lim_{n\to\infty}n\sin\left(2\pi\frac1{n+1}\right)=\lim_{n\to\infty}(n+1)\sin\left(2\pi\frac1{n+1}\right)=2\pi\\  \lim_{n\to\infty}n\sin\left(2\pi\frac1n\right)=2\pi\\  \therefore\lim_{n\to\infty}n\sin(2\pi a)=2\pi

(11:54)

11

Are the square brackets supposed to mean the floor function? Why don’t people use \lfloor\rfloor for the floor function, though?

\displaystyle\lim_{x\to0}(x+\lfloor x^2\rfloor)=0+0=0

\displaystyle\lim_{x\to0+}\frac{|x|-x}x=\lim_{x\to0+}\frac0x=0\\  \lim_{x\to0-}\frac{|x|-x}x=\lim_{x\to0-}\frac{-2x}x=-2

(11:57)

12

\displaystyle\lim_{x\to0+}f(x)=\lim_{x\to0+}x=0\\  \lim_{x\to0-}f(x)=\lim_{x\to0-}(1-x)^{-1}=1

(00:06)

Other bounds

I stumbled upon some geometry problem on math.se, where the solution involved knowing the bound \pi<16/5=3.2. (My own solution involved using similar triangles to determine the radius of the circle given the square’s side length.)

Well, yeah, that bound is obvious, but how would you prove it?

We have this integral:

\displaystyle\int_0^1\frac{x^4(1-x)^4}{1+x^2}\,dx=\frac{22}7-\pi

which is positive (as the integrand is positive), so \pi<22/7<16/5. This one is simple and elegant, but it feels too easy. Besides, this integral was only discovered in the twentieth century. So suppose you’re a nineteenth-century mathematician and want to come up with a simple proof of the given bound.

The first idea I had was to see if a few terms of the Gregory-Leibniz series would suffice, but that turned out to need 19 terms. So that’s a nope.

What if we take \zeta(2)=\pi^2/6 instead? We’ve covered transforming this series before, and while transforming the whole series is overkill (and justifying that the transformation doesn’t change the sum is more effort than it’s worth), we can make simpler changes to the series.

\displaystyle\pi^2=6\left(1+\frac1{2^2}+\frac1{3^2}+\frac1{4^2}+\cdots\right)\\  <6+6\left(\frac1{(3/2)(5/2)}+\frac1{(5/2)(7/2)}+\frac1{(7/2)(9/2)}+\cdots\right)\\  =6+6\left(\frac1{3/2}\right)\\  =10

Which gives us \pi<\sqrt{10}, and since √10 Day is on March 16th, we have \pi<\sqrt{10}=3.16\dots<3.2. (Less whimsically, this also follows from (16/5)^2=256/25>10.)

Obviously, if we’re not counting the above integral, we probably shouldn’t count this as a reasonable solution either.

The Euler-transformed Gregory-Leibniz series has positive terms, but by noting that the term ratio is always strictly less than half, we can easily bound it anyway.

\displaystyle\frac\pi2=\frac11+\frac13+\frac2{15}+\frac6{105}+\cdots\\  <\frac43+\frac2{15}\left(1+\frac12+\frac14+\frac18+\cdots\right)\\  =\frac43+\frac4{15}  =\frac85
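
For the sceptical, a throwaway JS sketch (print as in a JS shell): the factor (k+1)/(2k+3) below is just this series' term ratio, and the second line prints the geometric overestimate, which is exactly 16/5.

// partial sums of the Euler-transformed Gregory-Leibniz series for pi/2,
// plus the geometric-series overestimate used above
var t = 1, s = 0;
for (var k = 0; k < 40; k++)
{
	s += t;
	t *= (k + 1)/(2*k + 3); // term ratio, always below 1/2
}
print(2*s);                            // ~ 3.141592653589793
print(2*(1 + 1/3 + (2/15)/(1 - 1/2))); // = 16/5 = 3.2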

Exactly the bound we wanted, but again, this is sorta nontrivial.

Similarly, using the squared arcsine series:

\displaystyle\frac{\pi^2}{18}=\sum_{k\ge1}\frac{(k-1)!^2}{(2k)!}<\frac12+\frac1{24}\left(1+\frac14+\frac1{16}+\cdots\right)=\frac59

which shows \pi<\sqrt{10}.

And for yet another series-related approach, we can use three terms of the series for \arctan(1/\sqrt3) to get \pi^2<6724/675<10.

Maybe the real question should be: is there a simple geometric proof that \pi<16/5? Using a weighted average of the areas of the inscribed and circumscribed hexagons, I managed to get the bound \pi<11\sqrt3/6=3.175\dots, but proving that this weighted average gives an overestimate for \pi needed some calculus, which doesn’t seem all too kosher.

Worrying about things I shouldn’t really even be worrying about

Yet another thinking-out-loud post here.

Because of reasons I missed out on two assignments for that English module that contributed to 25% of the total grade, or a shade over a third of the non-exam portion of the grade. Which kind of sucks, I guess, but hey, it already happened.

Over the past three weeks I was having this gigantic dilemma over whether I should just give up and drop the module (it doesn’t affect CAP anyway) or whether I should persevere, which is kind of a difficult decision.

Pros of dropping it:

  • I don’t have to worry about whether my chances of passing are reasonably high. This is a rather significant problem, because, as it is, the only way of passing is to get decent marks on the remaining CA part of the grade and score incredibly well on the final exam. On the other hand, it’s not too unthinkable that I might actually do well on the exam. (Ergo, dilemma.)
  • I don’t have to spend more hours this semester researching things I couldn’t care less about. Seriously, how can I be expected to care about social media when I hardly use it?
  • I don’t have to come up with excuses for why I forfeited my grades for the two aforementioned assignments. This is the most immediate concern since the next class is in about ten hours.

Cons:

  • I’ll have to retake it next year.
  • In which case, I’d probably be the only second-year taking it. Might be embarrassing?
  • With the exception of Geometry, all the other non-English modules I’m taking this semester are easymode. Next semester will most likely be different (including non-maths modules), and if I actually have to study things, I will have even less time to work on English assignments. One option is to take only 17 MC of modules next semester, but that sounds kinda suboptimal.
  • Sunk cost makes me feel bad even if it’s a complete fallacy and I know it. Which makes me feel even worse. (Though if I fail to pass this time round, the amount of sunk cost will be even higher.)
  • Even if social media is an awful topic to write about, I’m not sure next semester’s would be any better. Judging from the past semesters, almost every topic relates to real people in some way (last semester’s was about globalisation). I said I didn’t care about social media, but I care about globalisation even less, and chances aren’t high I’ll get a topic I can blog about anyhow. The plus side of social media as the topic is that at least it relates to the Internet and (I like to believe) I know the Internet somewhat well.

Hmm.

All signs point to persevering as the better option. This really is starting to look like a “no matter what happens, I lose” situation though.

Rational approximations to the binary logarithm

Sorta a follow-up to the post on rational approximations for binary exponentiation.

Maybe your first instinct would be to try to invert the rational functions in the earlier post. How well does that work?

For the first-order approximation f_1(x)=(3+x)/(3-x), we have the inverse

\displaystyle f_1^{-1}(x)=\frac{3x-3}{x+1}.

For the second-order approximation f_2(x)=(26+9x+x^2)/(26-9x+x^2), we have the inverse

\displaystyle f_2^{-1}(x)=\frac{9(x+1)-\sqrt{-23x^2+370x-23}}{2(x-1)}.

Oh, whoops, this isn’t a rational function. In general, rational functions are not injective, and it can also be shown that the only rational functions that have rational inverses are the nondegenerate Möbius transformations, so we can’t just proceed with inverting our earlier functions. Or if you decide to proceed anyway…

[screenshot: 378+131x+18x^2+x^3=y*(378-131x+18x^2-x^3)]

Yeah, uh, I get the feeling this is not going in quite the right direction.

Let’s scrap the above and start with rational functions, interpolating over exact powers of 2, where the numerator and denominator have equal degree. We want analogues of the criteria we had for the exponential approximation; for the exponential approximation we had f(x)f(-x)=1, so for our log approximation we should have g(x)+g(1/x)=0. This forces some sort of symmetry over inversion.

It also forces symmetry in the coefficients of the rational interpolant, though in this case the symmetry is not between the numerator and denominator, but within the coefficients of each. The above forces one of the numerator and denominator to be palindromic and the other to be antipalindromic, and since we have g(1)=0, the numerator must be antipalindromic (and therefore the denominator must be palindromic). This cuts down the number of coefficients we have to solve for by a factor of two, which is always cool.

For the (1,1)-order interpolant, which has three degrees of freedom, we choose (\frac12,-1),(1,0),(2,1) as the points to interpolate, and by the above optimisation, we only need to solve for (2,1).

\displaystyle g_1(x):=\frac{ax-a}{x+1}\\  g_1(2)=\frac a3=1\implies a=3\\  g_1(x)=3\cdot\frac{x-1}{x+1}

Unsurprisingly, this is exactly the same as f_1^{-1}.

The (2,2)-order interpolant has five degrees of freedom, so we extend our choice of points to include (\frac14,-2),(4,2), though again we need only consider (2,1) and (4,2).

\displaystyle g_2(x):=\frac{ax^2-a}{x^2+bx+1}\\  g_2(2)=1\implies 3a-2b=5\\  g_2(4)=2\implies 15a-8b=34\\  a=\frac{14}3,\,b=\frac92\\  g_2(x)=\frac{28}3\cdot\frac{x^2-1}{2x^2+9x+2}

The (3,3)-order interpolant may be derived likewise.

\displaystyle g_3(x):=\frac{ax^3+bx^2-bx-a}{x^3+cx^2+cx+1}\\  g_3(2)=1,\,g_3(4)=2,\,g_3(8)=3\implies a=\frac{125}{21},\,b=\frac{245}{12},\,c=\frac{49}4\\  g_3(x)=\frac5{21}\cdot\frac{100x^3+343x^2-343x-100}{4x^3+49x^2+49x+4}
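
For a rough idea of how these behave between the nodes, here's a small JS sketch (the names g1, g2, g3 and the log2 helper are mine; print as in a JS shell); it prints each interpolant's error against the true binary logarithm on [1,2]:

// errors of the three interpolants against the true binary logarithm on [1, 2]
function log2(x) { return Math.log(x)/Math.LN2; }
function g1(x) { return 3*(x - 1)/(x + 1); }
function g2(x) { return 28/3*(x*x - 1)/(2*x*x + 9*x + 2); }
function g3(x)
{
	var x2 = x*x, x3 = x2*x;
	return 5/21*(100*x3 + 343*x2 - 343*x - 100)/(4*x3 + 49*x2 + 49*x + 4);
}
for (var x = 1; x <= 2.001; x += 0.125)
	print(x + '\t' + (g1(x) - log2(x)).toExponential(2)
	        + '\t' + (g2(x) - log2(x)).toExponential(2)
	        + '\t' + (g3(x) - log2(x)).toExponential(2));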

Once again, rational interpolants provide a much better fit than polynomials, especially in this case where there is a logarithmic singularity at x=0. A polynomial would go wild trying to fit that, and the only polynomial satisfying the functional equation p(x)+p(1/x)=0 is the zero polynomial, which is not very interesting.

I may or may not expand on this post at a later date. Consider this a WIP.

MA2108S Tutorial 7

1 (comparison)

\displaystyle0<\frac{a_n}{1+na_n^2}<\frac1{n^2}\\  \implies\sum_{n\ge1}\frac{a_n}{1+na_n^2}\text{ converges}

2 (comparison/condensation)

Suppose n\mapsto x_n is bounded. As a positive bounded increasing sequence, it has a positive limit x. Consequently, there exists some K such that \forall n\ge K\quad x_n>x/2. Suppose m\ge K.

\displaystyle1-\frac{x_m}{x_{m+1}}=\frac{x_{m+1}-x_m}{x_{m+1}}<\frac{x_{m+1}-x_m}{x/2}=\frac2x\cdot(x_{m+1}-x_m)

Then telescoping series etc. blah blah blah.

Suppose n\mapsto x_n is unbounded. We basically want to use something like Cauchy’s condensation test here, but we don’t know exactly how quickly or slowly this sequence diverges. (The “worst case” for divergence would be something like the harmonic series, which the condensation test handles readily, but we can’t assume our sequence diverges logarithmically.)

Construct a subsequence recursively from indices i:k\mapsto i_k where i_1=1 and x_{i_{k+1}}>2x_{i_k}.

\displaystyle\sum_{n=i_k}^{i_{k+1}-1}\left(1-\frac{x_n}{x_{n+1}}\right)=\sum_{n=i_k}^{i_{k+1}-1}\frac{x_{n+1}-x_n}{x_{n+1}}>\sum_{n=i_k}^{i_{k+1}-1}\frac{x_{n+1}-x_n}{x_{i_{k+1}}}=\frac{x_{i_{k+1}}-x_{i_k}}{x_{i_{k+1}}}>\frac12

Therefore divergence.

3 (Stolz’s)

\displaystyle\frac1n\sum_{k\le n}ku_k=\frac1n\sum_{k\le n}(k-n)u_k+\sum_{k\le n}u_k

By Stolz’s theorem, we have the following:

\displaystyle\lim_{n\to\infty}\frac1n\sum_{k\le n}(k-n)u_k=-\lim_{n\to\infty}\sum_{k\le n}u_k=-\sum_ku_k

Therefore,

\displaystyle\lim_{n\to\infty}\frac1n\sum_{k\le n}ku_k=-\sum_ku_k+\sum_ku_k=0

4 (AM-GM inequality; comparison)

Cf. Tutorial 6.

5 (Raabe’s)

Let l':=\tfrac{1+l}2.

Suppose l>1. For sufficiently large n (we also need n>(l-l')/(ll')),

\displaystyle n\log\frac{u_n}{u_{n+1}}>l'\frac n{n-l'}\\  \frac{u_n}{u_{n+1}}>e^{l'/(n-l')}>1+\frac{l'}{n-l'}\\  \frac{u_{n+1}}{u_n}<1-\frac{l'}n.

This implies convergence of \sum u_n.

Suppose l<1. For sufficiently large n (we also need n>|l'|),

\displaystyle n\log\frac{u_n}{u_{n+1}}<l'\\  \frac{u_n}{u_{n+1}}<e^{l'/n}\le\frac1{1-\frac{l'}n}\quad\left(\text{if }\left|\frac{l'}n\right|<1\right)\\  \frac{u_{n+1}}{u_n}>1-\frac{l'}n.

This implies divergence of \sum u_n.

6 (telescoping)

\displaystyle a_n\prod_{k\le n}\frac1{1+a_k}=\prod_{k\le n-1}\frac1{1+a_k}-\prod_{k\le n}\frac1{1+a_k}
\displaystyle\sum_{n=1}^\infty a_n\prod_{k\le n}\frac1{1+a_k}=1-\prod_{k=1}^\infty\frac1{1+a_k}

As the series is increasing and bounded above by 1, it converges.

7 (application of 6)

Let \alpha:=\liminf_{n\to\infty}(\frac{u_na_n}{u_{n+1}}-a_{n+1}). For sufficiently large n\ge n_0, \frac{u_na_n}{u_{n+1}}-a_{n+1}>\alpha/2. WLOG let n_0:=1.

\displaystyle\frac{u_na_n}{u_{n+1}}>a_{n+1}+\frac\alpha2\\  \frac{u_n}{u_{n+1}}>\frac{2a_{n+1}+\alpha}{2a_n}\\  \frac{u_{n+1}}{u_n}<\frac{2a_n}{2a_{n+1}+\alpha}\\  u_n<u_1\frac{2a_1}{2a_2+\alpha}\cdots\frac{2a_{n-1}}{2a_n+\alpha}\\  =2u_1a_1\frac{2a_2}{2a_2+\alpha}\cdots\frac{2a_{n-1}}{2a_{n-1}+\alpha}\frac1{2a_n+\alpha}\\  =\frac{2u_1a_1}\alpha\frac1{1+\alpha/(2a_2)}\cdots\frac1{1+\alpha/(2a_{n-1})}\frac{\alpha/(2a_n)}{1+\alpha/(2a_n)}

Then apply 6.

8 (harmonic series comparison)

This is very similar to Raabe’s test, but has somewhat weaker conditions. Let \alpha:=\liminf_{n\to\infty}n(\frac{b_n}{b_{n+1}}-1). (The question uses the limit, but we can weaken this to just the inferior limit.)

\displaystyle\exists K\ge\alpha\quad\forall n\ge K\quad n\left(\frac{b_n}{b_{n+1}}-1\right)>\frac\alpha2\\  \frac{b_n}{b_{n+1}}>1+\frac\alpha{2n}\\  \frac{b_{n+1}}{b_n}<1-\frac\alpha{2n+\alpha}\le1-\frac\alpha{3n}\le\exp\left(-\frac\alpha{3n}\right)\\  b_n<b_K\exp\left(-\frac\alpha3\left(\frac1K+\cdots+\frac1{n-1}\right)\right)\to0

By the squeeze theorem, b_n\to0, and so \sum_{n\ge1}(-1)^nb_n converges.

Note: this is faulty reasoning.

9 (root)

Root test: for sufficiently large n, a_n^{1/n}<\frac{1+q}2<1.

10 (p-series)

\displaystyle-\log a_n\ge (1+\alpha)\log n\implies a_n\le\frac1{n^{1+\alpha}}\implies\sum_{n\ge1}a_n\text{ converges}

Chase

Remember four years ago when I floated the idea of negative masses?

There’re some “obvious” things that can be said about a system with two point objects of nonzero masses m and -m. As previously mentioned, if the system is initially stationary, the two objects will start moving in tandem, accelerating in the direction of the positive-mass object. The reason for this is that the gravitational fields of the two objects are equal in magnitude but opposite in sign; the negative-mass object falls towards the positive-mass object with exactly the same acceleration as the positive-mass object’s falling away from the negative-mass object.

What is perhaps slightly less obvious is that the objects will experience the same acceleration even if the objects do not initially have the same velocity; in other words, there is a (noninertial) reference frame where both objects move in straight lines at constant velocity. There is no way for such a system to have a closed orbit in any nonrotating reference frame.

On the other hand, in general, if we have two masses m_1,m_2 where m_1+m_2\ne0, there will be a circular orbit where both objects revolve around their barycentre. As the barycentre’s location involves dividing by m_1+m_2, this gives another reason as to why in our earlier system (where m_1+m_2=0) there cannot be a circular orbit.

MA2108S Tutorial 6

1: Apply Stolz’s theorem.

\displaystyle\lim_{n\to\infty}{a_1+2a_2+\cdots+na_n\over n^2}=\lim_{n\to\infty}{(n+1)a_{n+1}\over2n+1}=\lim_{n\to\infty}{n+1\over2n+1}a_{n+1}={1\over2}\alpha

2: WLOG suppose a:=0 and b:=1. Then let \Sigma:=\{(-1/2,1),(0,1/2)\}. (I’m not sure there’s much more to this question.)

3: It’s obvious that the series of the finite differences is bounded termwise by q^n and hence is absolutely convergent.

4: Take the logarithm (justified since every term is positive) to get the equivalent problem (where a_n=\log s_n\in\mathbb R and the corresponding limits are in \mathbb R\cup\{\pm\infty\}):

\displaystyle\liminf_{n\to\infty}(a_{n+1}-a_n)\le\liminf_{n\to\infty}\frac{a_n}n\le\limsup_{n\to\infty}\frac{a_n}n\le\limsup_{n\to\infty}(a_{n+1}-a_n)

The middle inequality is trivial, and the first and third inequalities are conjugates under sign change, so it will suffice to prove just the third. If \limsup_{n\to\infty}(a_{n+1}-a_n)=\infty we are done. Otherwise, there is a finite largest limit point of (a_{n+1}-a_n). Let \alpha:=\limsup_{n\to\infty}(a_{n+1}-a_n)<\infty. Then given arbitrary \epsilon>0, there exists K\in\mathbb N such that for n\ge K, a_{n+1}-a_n<\alpha+\epsilon.

For n\ge2K, we have {a_n=(a_n-a_{n-1})+\cdots+(a_{K+1}-a_K)+a_K<(n-K)\alpha+(n-K)\epsilon+a_K<n\alpha+n\epsilon+O(1)}, so a_n/n<\alpha+\epsilon+O(1/n) and thus \limsup_{n\to\infty}a_n/n\le\alpha=\limsup_{n\to\infty}(a_{n+1}-a_n).

5: Let b\in[0,1]. ?? Holy crap, this is so obvious but how the heck do you prove this. Contradiction?

Suppose b is not a cluster point; then there is an \varepsilon-neighbourhood and K\in\mathbb N such that for all n\ge K a_n\notin V_\varepsilon(b).

Actually it seems easier to prove equidistribution and have this as an immediate consequence. Or not!

We can construct a decreasing subsequence of n\mapsto a_n recursively (like the gcd algorithm presented before), although as it is we still have the problem of showing that this construction leads to a subsequence that converges to zero. If we cheat and use rounding instead of flooring, we get a contractive sequence with factor 1/2, which obviously converges; there can be a negative sign in the terms, so this is not necessarily a subsequence, but it can be made into one. (The reason this will never be zero is because every term is irrational, by definition.)

After which, we can just do quotient-remainder where the remainder is bounded by arbitrarily small values. QED.

6: \displaystyle\tan\left(\frac\pi4+\frac1n\right)=\frac{\left(\tan\frac\pi4+\tan\frac1n\right)}{1-\left(\tan\frac\pi4\right)\left(\tan\frac1n\right)}=\frac{1+\tan\frac1n}{1-\tan\frac1n}

\displaystyle\left(1\pm\tan\frac1n\right)^n=\left(1\pm\tan\frac1n\right)^{\cot\frac1n\cdot n\tan\frac1n}\to \exp\left(\pm\lim_{n\to\infty}{n\tan\frac1n}\right)=e^{\pm1}

\displaystyle\lim_{n\to\infty}\left(\frac{1+\tan\frac1n}{1-\tan\frac1n}\right)^n=\frac{e}{1/e}=e^2

7: Compare with \zeta(3/2). (P-series + comparison test.)

8: First one follows from AM-GM inequality and the second one follows from the first. Third: take b_n:=1/n and apply the first to show that it is absolutely convergent (and hence convergent).

9: False. Counterexample: harmonic series.

10: a_n=(n-1)\alpha+a_1

\displaystyle\frac{1/a_{n+1}}{1/a_n}=1-\frac\alpha{n\alpha+a_1}=1-\frac1n+\frac{a_1}{n(n\alpha+a_1)}

\displaystyle\frac{a_1}{n(n\alpha+a_1)}n^2\to\frac{a_1}\alpha\implies\frac{a_1}{n(n\alpha+a_1)}n^2\text{ bounded}

By Gauss’s test, this series diverges.

√10

Um, hello, why is everyone celebrating Pi Day instead of a more culturally significant number like, you know, 10?

Or its square root, at least. That’s gotta be half as important, and considering how important 10 is, √10 must rank pretty high up too. In the last post, we showed that nonnegative numbers have square roots, and 10 is clearly nonnegative, so √10 really exists. It would be rather embarrassing were we to start deifying some imaginary entity!

We’ll be slightly sacrilegious and approximate 10 with 9, because 9 is a nice number. Why is 9 a nice number? Well, duh, of all the numbers, its English spelling is the closest to the word “nice”.

But that aside, we also have 3² = 9. Thus:

\displaystyle\sqrt{10}=\sqrt9\sqrt{10\over9}=3\left(9\over10\right)^{-1/2}=3\left(1-{1\over10}\right)^{-1/2}=3\sum_{k=0}^\infty\binom{2k}{k}{1\over40^k}

This is a series where the terms (and consequently, the partial sums) all happen to be terminating decimals and it converges at just about one digit per term, though on average each term has two more decimal places when written out in full. On the other hand, we can also write it as follows.

\displaystyle\sqrt{10}=\sum_{k=0}^\infty{(2k-1)!!\over(2k)!!}{3\over10^k}

Which would allow us to convert this into a spigot algorithm without even having to use a bignum library.

var A = []; // output digits (base 100000)
var B = [0]; // input digits (variadic base)
var n = 1e4; // number of decimal digits to generate
var i, j, q;
for (i = 1; i <= n; i++) B[i] = 3*(2*i-1); // initialisation
for (i = 0; i < n/5; i++)
{
	for (j = n, q = 0; j > 0; j--) // q: carry from next digit
	{
		B[j] = B[j]*1e5 + q*(2*j-1); // multiply digit and add carry
		q = (B[j] / ((2*j)*10))|0; // calculate carry
		B[j] -= q * ((2*j)*10); // subtract carried amount
	}
	A[i] = q; // carry for last digit ~ integer part (most of the time)
}
while (i --> 0)
{
	if (A[i] >= 1e5) // overflow in the extracted digits (happens very rarely)
	{
		A[i] -= 1e5; // subtract
		A[i-1]++; // and carry
	}
	A[i] = (1e5 + A[i] + '').substring(1); // convert to 0-padded string
}
print('3.' + A.join('')); // concatenate and prepend a "3."

Or if you like it compact (but still entirely valid strict JavaScript with no implicit semicolons):

for(var A=[],B=[0],n=1e4,i=0,j,q,s='';i++<n;)B[i]=6*i-3;for(i=0;i<n/5;A[i++]=q)for(j=n,q=0;j;B[j]-=20*(q=B[j]/20/j|0)*j--)B[j]=B[j]*1e5+2*q*j-q;for(;i-->0;s=(1e5+A[i]+'').slice(1)+s)A[i-1]+=A[i]>=1e5;print('3.'+s);

Previously.

Addendum:

How rarely is “very rarely”? I didn’t bother doing the calculation before, but I got bored one day and worked out the corresponding series. The overflow happens with “probability” \sqrt{10}/3+3/\sqrt{10}-2\approx0.0028.

While the above code is based on Rabinowitz and Wagon’s algorithm, there is a difference in where the denominators are handled. Theirs does it in the carry, mine does it in the digit. (I only recently realised I interpreted their algorithm wrong, after having implemented my version quite a few times in the past few years.) Of course, as the underlying hypergeometric series is the same, convergence isn’t affected much. It’s just that, if I’d implemented the above with Rabinowitz and Wagon’s version, the overflow would happen with the considerably higher “probability” \sqrt{10}/27\approx0.12.

Why the scare quotes? Because this calculation assumes that, when \sqrt{10} is multiplied by powers of 100000 and the result is written in our variadic base, every legal digit is equally likely, which is a pretty strong statement that we have good reason to believe is true, but have no proof of. (We don’t even have a proof that such a digital equidistribution property holds in a usual integer base, which is a weaker form of normality.)

Nonnegative numbers have square roots

Claim 0: 0 has a nonnegative square root.
Proof: 0² = 0.

Claim 1: 1 has a nonnegative square root.
Proof: 1² = 1.

Claim 2: A nonnegative number a with a nonnegative (or nonpositive) square root x has a nonpositive (or nonnegative, resp.) square root −x.
Proof: (−x)² = x² = a; x ≥ 0 ⇒ −x ≤ 0; and x ≤ 0 ⇒ −x ≥ 0.

Claim 3: A nonnegative number a with a nonnegative square root x has a unique nonnegative square root, a unique nonpositive square root, and no other square roots.
Proof: We already have x and, by the previous claim, −x as square roots. Suppose there is a distinct root y, where y − x ≠ 0 and y + x ≠ 0. Then (y − x)(y + x) = y² − x² = y² − a ≠ 0 ⇒ y² ≠ a, a contradiction.

Claim 4: Squaring preserves order on two nonnegative numbers.
Proof: x ≤ y ⇒ x² ≤ xy ≤ y²; also x < y ⇒ x² ≤ xy < y².

Claim 5: Square-rooting (taking the nonnegative square root) preserves order on two nonnegative numbers with square roots.
Proof: If there were two nonnegative numbers with square roots in a different order, squaring them would contradict the previous claim.

Claim 6: A number a greater than 1 has a positive square root.
Proof: Let a be a real number greater than 1 whose square root we will be interested in. Let S := {x ∈ ℝ | 0 < x² ≤ a}. As 1 ∈ S, S is nonempty; it is also bounded above (by a, since x > a would give x² > a² > a), and thus it has a least upper bound.

Let x := sup S and y := (ax+a)/(x+a). If x² = a, we are done. Henceforth assume x² ≠ a.

a>1\implies a^2>a\implies a^2-a>0
1\in S\implies x\ge1>0\implies (x+a)^2>a^2>a^2-a>0
\displaystyle0<\frac{a^2-a}{(x+a)^2}<1

\displaystyle y^2-a=\frac{(ax+a)^2}{(x+a)^2}-a=(x^2-a)\cdot\frac{a^2-a}{(x+a)^2}

If x^2>a, then

x^2-a>0\\  y^2-a<x^2-a\\  y^2<x^2\\  y<x,

but also

y^2-a>0\\  y^2>a\ge z^2\quad\forall z\in S\\  y>z\quad\forall z\in S\\  y\ge\sup S=x,

a contradiction.

If x^2<a, then

x^2-a<0\\  y^2-a>x^2-a\\  y^2>x^2\\  y>x,

but also

y^2-a<0\\  y^2<a\\  y\in S\\  y\le\sup S=x,

again a contradiction.

Therefore x is a positive square root of a.
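
(Incidentally, though the proof only uses y = (ax+a)/(x+a) to manufacture a contradiction, iterating that same map numerically converges to the square root. A quick JS sketch, with a = 10 purely as an example; print as in a JS shell:)

// iterating the map from the proof of Claim 6; a = 10 is just an example value
var a = 10, x = 1;
for (var k = 0; k < 60; k++) x = (a*x + a)/(x + a);
print(x + '\t' + x*x); // ~ 3.1622776601683795 and ~ 10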

Claim 7: A positive number a less than 1 has a positive square root.
Proof: By the above, 1/a has a positive square root x. 1/x > 0 and (1/x)² = 1/x² = 1/(1/a) = a.

Claim 8: A nonnegative number has a nonnegative square root.
Proof: Apply claims 0, 1, 6 or 7 as necessary.

Claim 9: A nonnegative number a has a unique nonnegative square root, a unique nonpositive square root, and no other square roots.
Proof: By claim 8, a has a nonnegative square root, and by claim 3, it has a unique nonnegative square root, a unique nonpositive square root, and no other square roots.

“Buzzword” is a buzzword

So uh, “e-learning week” in NUS, and I figure many students will be using this week as a second term break.

Me, I’m going through the Linear Algebra II webcasts for the lectures I missed, and I missed quite a few classes because of sleep deprivation due to staying up to do English shit. Except once I missed it because I needed time to rewrite/redo my Mathematical Analysis I assignment, and damn am I glad I did that because I completely screwed up the definition of a sequence’s limit in the first two questions.

I have a few things to say about the webcasts. The most annoying thing is that the aspect ratio is wrong. It’s 2015, how do people still manage to get something as simple as aspect ratio wrong? Secondly, I don’t know if this is because I don’t have Silverlight, but I can only view the exact same thing as what’s presented on the computer monitor. The LA2 lecturer uses the visualiser a lot so this makes the webcasts kind of useless, hurr.


Tomorrow I’ll head out and go buy a new hard drive. Someone’s been bugging me to do that (because I can’t be productive with so little free space on my current hard drives and he needs me to be productive).

MA2108S Tutorial 4 question 2 (original)

Idk why Prof Xu decided to “correct” the question to make it easier. It’s perfectly solvable as it is.

Let x:n\mapsto x_n be a sequence such that for any sequence y:n\mapsto y_n, at least one of these equalities holds:
\displaystyle\limsup_{n\to\infty}(x_n+y_n)=\limsup_{n\to\infty}x_n+\limsup_{n\to\infty}y_n\quad\text{(additivity)}
\displaystyle\limsup_{n\to\infty}x_ny_n=\left(\limsup_{n\to\infty}x_n\right)\left(\limsup_{n\to\infty}y_n\right)\quad\text{(multiplicativity)}

Show that x is convergent.

Take y_n:=-x_n. If additivity holds, then
\displaystyle\mathrm{LHS}=\limsup_{n\to\infty}0=0
\displaystyle\mathrm{RHS}=\limsup_{n\to\infty}x_n-\liminf_{n\to\infty}x_n
\displaystyle\implies\limsup_{n\to\infty}x_n=\liminf_{n\to\infty}x_n\\  \implies\lim_{n\to\infty}x_n=\limsup_{n\to\infty}x_n=\liminf_{n\to\infty}x_n.

If multiplicativity holds, then
\displaystyle\mathrm{LHS}=\limsup_{n\to\infty}(-x_n^2)=-\liminf_{n\to\infty}x_n^2=-\left(\liminf_{n\to\infty}|x_n|\right)^2\le0
\displaystyle\mathrm{RHS}=-\left(\limsup_{n\to\infty}x_n\right)\left(\liminf_{n\to\infty}x_n\right).
As the LHS is nonpositive, so is the RHS, which implies
\displaystyle\left(\limsup_{n\to\infty}x_n\right)\left(\liminf_{n\to\infty}x_n\right)\ge0,
which implies that the superior and inferior limits are either both nonpositive or both nonnegative.

If they’re both nonpositive, then \limsup x_n=-\liminf |x_n| and \liminf x_n=-\limsup |x_n|, and if they’re both nonnegative then \limsup x_n=\limsup |x_n| and \liminf x_n=\liminf |x_n|. Either way,
\displaystyle\left(\liminf_{n\to\infty}x_n\right)\left(\limsup_{n\to\infty}x_n\right)=\left(\liminf_{n\to\infty}|x_n|\right)\left(\limsup_{n\to\infty}|x_n|\right).

Thus the superior and inferior limits of |x_n| carry all the information we need, regardless of the signs of the individual terms x_n.

\displaystyle\left(\liminf_{n\to\infty}|x_n|\right)^2=\left(\liminf_{n\to\infty}|x_n|\right)\left(\limsup_{n\to\infty}|x_n|\right)\\  \implies\liminf_{n\to\infty}|x_n|=\limsup_{n\to\infty}|x_n|\\  \implies\liminf_{n\to\infty}x_n=\limsup_{n\to\infty}x_n\\  \implies\lim_{n\to\infty}x_n=\liminf_{n\to\infty}x_n=\limsup_{n\to\infty}x_n

QED.

The corrected version of the question merely asks to show that a sequence satisfying the additivity condition is convergent and that a sequence satisfying the multiplicativity condition is convergent, which is considerably easier. Almost too easy compared to all the other questions in this module, even!

Shit’s broken

Some time ago I adopted this mentality where I should prioritise things that’re due/overdue over all those things that are not overdue. Sounds great? Yeah, I thought so too.

What this resulted in was that I shunted aside all these “lower priority” tasks to “focus” on the shit that’s due, and ended up procrastinating by doing other things. (Said procrastination stems from doing lots of things that individually don’t take up a lot of time and hence get fast-tracked by my brain’s scheduling algorithm.) So I’ve done none of the high-priority tasks, and none of the low-priority tasks either! This is arguably a worse outcome than doing none of the high-priority tasks but actually finishing some low-priority ones.

Also, I have an 800-word essay due Monday (i.e. two days ago) that I haven’t started on. Lawd, English classes. But somehow I managed to get the top score in my group for the last essay submission. Not that that’s much of an accomplishment since half the class is ESL, though, and it was only a B anyway.

I don’t know what to do anymore. I don’t think I can keep up with this, but at the same time, I really want to be rid of this nonsense. I guess it helps that I don’t really have to spend much time studying for the four midterm tests I have this Friday? Not that I’d spend that time on the essay.

Like, here I am, rambling on my personal blag instead of doing this rambling on the blag for the English module. (Yes, I now have yet another blog.) What the hell am I doing?

Update (fifteen hours later):

So, I’ve made “some” progress on the researching front. I even woke up early to do that, except I ended up browsing HN and such because ???. Derp. Have I mentioned that another aspect of the English module grading is peer review, and that means I have to read what other people write?

I have a rather high standard for correctness of grammar in what I read. Unfortunately, and this shouldn’t really have been a surprise, these people can’t into grammar. I guess I’m like an outlier in grammar knowledge in the whole class; I’m only in it because I can’t string words to form coherent sentences, maybe? Funnily enough, sometimes it also seems that the tutor is desperate to correct sentences which aren’t even wrong because she misparsed them initially.

It’s easy to do that with English because nouns don’t have ten million forms depending on context, like Latin does. I don’t even remember what all these forms are called. Ergo, Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo. So anyway, sometimes a sentence will have an uncommon structure but have a low edit distance to a common structure, and people will misinterpret it as being a typo of the common structure rather than treating it for what it is.

Also, I am so, so, so screwed. I literally have to finish writing up the eight hundred words by tonight, and I also have that Mathematical Analysis tutorial to go through. Hurrrr.