This problem set is an original endeavor, meaning I haven’t purposely copied the problems from an existing book or paper. However, the more straightforward exercises are just “natural” extensions of existing formulas, and I am sure somebody else proposed them first. For example, exercise 4 appears under different forms in various problems you can find online, and exercise 5 is a variation of an existing exercise from the classical Romanian “culegere de probleme” Nastasescu & Nita. I found exercise 8 solved on YouTube with a slight twist: \(a=10, b=e\).
Note: Nobody other than me has reviewed the problems. If you see any issues with them (they might be totally wrong for some reason), don’t hesitate to contact me.
^{[very easy]} If \(a,b,c \in (0, \infty) \setminus \{1\}\), \(abc \neq 1\), \(x=\log_ba\), and \(y=\log_ca\), then compute \(\log_{abc}a\). ^{solution}
^{[easy]} If \(a,b \in (0, \infty) \setminus \{1\}\), \(a^x=(ab)^n\) and \(b^y=(ab)^n\) prove that \(\frac{1}{x}+\frac{1}{y}=\frac{1}{n}\). ^{solution}
^{[easy]} Prove that \(\sum_{i=2}^{n}(\frac{1}{\log_i n}) = \prod_{i=n+1}^{n!}(\log_{i-1}i)\), where \(n \gt 2\) and \(n \in \mathbb{N}\). ^{solution}
^{[easy]} If \(a^{\log_m x} * x^{\log_m b} = ab\), prove that \(x=m\). ^{solution}
^{[medium]} If \(2\log_b x = \log_c x - \log_a x\), prove that \(c^2=(\frac{a}{c})^{\log_a b}\). ^{solution}
^{[hard]} If \(\lvert \log_a x \log_b x \rvert=\lvert \log_b x \log_c x + \log_a x \log_c x + \log_a x \log_b x\rvert\), prove that \(c=(abc)^{\log_c(abc)}\). ^{solution}
^{[easy]} Find \(x\) if \(a,b,c \in (0,1) \cup (1, \infty)\), and \(a^{\ln\frac{b}{c}}*b^{\ln\frac{c}{a}}*c^{\ln\frac{a}{x}}=1\).^{solution}
^{[hard]} Solve \((\log_ax)^{\log_bx} = (\log_bx)^{\log_ax}\).^{solution}
^{[easy]} If \(a,b,m \in (0,1)\) or \(a,b,m \in (1, \infty)\), prove that \(\frac{\log_{ab}m}{\sqrt{\log_am*\log_bm}} \le 1\). ^{solution}
^{[easy]} If \(a,b,m \in (0,1)\) or \(a,b,m \in (1, \infty)\), prove \(\frac{1}{\log_{(a+b)}m} \ge \frac{1}{\log_2m} + \frac{1}{2}(\frac{1}{\log_am}+\frac{1}{\log_bm})\). ^{solution}
^{[medium]} If \(x_i \in (0,1)\) or \(x_i \in (1, \infty)\), prove \(\sum_{i=1}^{n-1}\log_{x_{i}}{x_{i+1}} \ge n - \log_{x_n}x_1\), \(\forall n \in \mathbb{N}, n \ge 2\).^{solution}
^{[medium]} If \(x=\log_b c\), \(y=\log_a c\), and \(m=\log_{abc} c\), prove that \((\frac{1}{m \sqrt{3}}-1)(\frac{1}{m \sqrt{3}}+1) \le \frac{1}{x^2} + \frac{1}{y^2}\).^{solution}
^{[medium]} If \(a,b,m \in (0,1)\) or \(a,b,m \in (1, \infty)\), prove \(\sum_{i=2}^{n-1}\log_i(i+1)\ge \sum_{i=2}^{n-1}\log_i2+\frac{n-2}{2}\).^{solution}
\(\log_{abc}a=\frac{1}{\log_a(abc)}=\frac{1}{\log_aa+\log_ab+\log_ac} = \frac{1}{1 + \frac{1}{x}+\frac{1}{y}}=\frac{xy}{xy+x+y}\).
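As a sanity check, here is a quick numeric verification of the closed form above (the values of \(a\), \(b\), \(c\) are arbitrary, and Python is used purely for illustration):

```python
import math

# Arbitrary sample values; any a, b, c in (0, inf) \ {1} with abc != 1 would do
a, b, c = 5.0, 2.0, 3.0
x = math.log(a, b)   # x = log_b(a)
y = math.log(a, c)   # y = log_c(a)

lhs = math.log(a, a * b * c)      # log_{abc}(a), computed directly
rhs = x * y / (x * y + x + y)     # the closed form derived above

assert math.isclose(lhs, rhs)
```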
From \(a^x=(ab)^n\) it follows that \(x = \log_{a}(a^nb^n) = n + n\log_ab = n(1+\log_ab)\).
In a similar fashion, from \(b^y=(ab)^n\) it follows that \(y = \log_{b}(a^nb^n) = n + n\log_ba = n(1+\log_ba) = \frac{n(\log_ab+1)}{\log_ab}\).
With this in mind, \(\frac{1}{x}+\frac{1}{y}=\frac{1}{n(1+\log_ab)} + \frac{\log_ab}{n(1+\log_ab)} = \frac{1+\log_ab}{n(1+\log_ab)}=\frac{1}{n}\).
We start with the sum:
\[\sum_{i=2}^{n}(\frac{1}{\log_i n}) = \frac{1}{\log_2 n} + \frac{1}{\log_3 n} + ... + \frac{1}{\log_n n} \Leftrightarrow \\ \sum_{i=2}^{n}(\frac{1}{\log_i n}) = \log_n 2 + \log_n 3 + ... + \log_n n \Leftrightarrow \\ \sum_{i=2}^{n}(\frac{1}{\log_i n}) = \log_n n!\]Then we compute the product:
\[\prod_{i=n+1}^{n!}[\log_{i-1}i] = \underbrace{\log_n(n+1) * \log_{n+1}(n+2)}_{=\log_n(n+2)} * ... * \log_{n!-1}(n!) \Leftrightarrow \\ \prod_{i=n+1}^{n!}[\log_{i-1}i] = \log_n(n!)\]As we can see, the sum and the product are equal.
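If you want to convince yourself numerically before (or after) following the telescoping argument, a small script can check both sides for a concrete \(n\) (the value below is arbitrary; Python for illustration):

```python
import math

n = 5  # any natural number greater than 2
lhs = sum(1 / math.log(n, i) for i in range(2, n + 1))  # 1/log_i(n) = log_n(i)
rhs = math.prod(math.log(i, i - 1) for i in range(n + 1, math.factorial(n) + 1))

# both sides collapse to log_n(n!)
assert math.isclose(lhs, rhs)
assert math.isclose(lhs, math.log(math.factorial(n), n))
```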
We know that: \(a^{\log_m x} * x^{\log_m b} = ab\).
Let’s write \(x^{\log_m b}=k\).
This would be equivalent to:
\[\log_m(x^{\log_m b})=\log_m k \Leftrightarrow \\ \log_m b * \log_m x =\log_m k \Leftrightarrow \\ \log_m x * \log_m b =\log_m k \Leftrightarrow \\ \log_m (b^{\log_m x})=\log_m k \Leftrightarrow \\ b^{\log_m x} = k\]So we can re-write our initial relationship as:
\[a^{\log_m x} * b^{\log_m x} = ab \Leftrightarrow \\ (ab)^{\log_m x} = (ab)^1 \Rightarrow \\ \log_m x = 1 \Rightarrow \\ x = m\]\(2\log_b x = \log_c x - \log_a x\) is equivalent to:
\[\frac{2}{\log_x b}=\frac{\log_x a - \log_x c}{\log_x a \log_x c}\]We re-write everything as:
\[2*\log_x c = \frac{\log_x b}{\log_x a}*(\log_x a - \log_x c)\]The latest is equivalent to:
\[\log_x c^2 = \log_a b * \log_x \frac{a}{c} \Leftrightarrow \\ \log_x c^2 = \log_x (\frac{a}{c})^{\log_a b}\]Finally, we have proven:
\[c^2=(\frac{a}{c})^{\log_a b}\]The modulus is there as a hint that you can square both sides.
\[|\log_a x \log_b x|=|\log_b x \log_c x + \log_a x \log_c x + \log_a x \log_b x| \Leftrightarrow \\ \\ (\log_a x \log_b x)^2=(\log_b x \log_c x + \log_a x \log_c x + \log_a x \log_b x)^2 \Leftrightarrow \\ \\ \frac{\log_c x}{\log_c x} (\log_a x \log_b x)^2 = (\log_b x \log_c x + \log_a x \log_c x + \log_a x \log_b x)^2 \Leftrightarrow \\ \\ \frac{1}{\log_c x} = \frac{(\log_b x \log_c x + \log_a x \log_c x + \log_a x \log_b x)}{\log_a x \log_b x \log_c x} * \frac{(\log_b x \log_c x + \log_ax \log_cx + \log_ax \log_bx)}{\log_ax \log_bx}\Leftrightarrow \\ \\ \frac{1}{\log_c x}=(\frac{1}{\log_a x} + \frac{1}{\log_b x} + \frac{1}{\log_c x})(\frac{\log_c x}{\log_a x} + \frac{\log_c x}{\log_b x} + 1) \Leftrightarrow \\ \\ \log_x c = (\log_x a + \log_x b + \log_x c)(\frac{\log_x a}{\log_x c} + \frac{\log_x b}{\log_x c} + \frac{\log_x c}{\log_x c}) \Leftrightarrow \\ \\ \log_x c = (\log_x a + \log_x b + \log_x c)(\log_c a + \log_c b + \log_c c) \Leftrightarrow \\ \\ \log_x c = \log_x(abc) * \log_c(abc) \Leftrightarrow \\ \\ c = (abc)^{\log_c(abc)}\]For the next exercise, take the natural logarithm of both sides and expand the products; most terms cancel. After reducing the terms further:
\[\ln c(\ln b - \ln x) = 0 \Rightarrow x = b\]Not all the math questions have nice answers.
\[(\log_ax)^{\log_bx} = (\log_bx)^{\log_ax} \Leftrightarrow \\ \log_b[(\log_ax)^{\log_bx}] = \log_b[(\log_bx)^{\log_ax}] \Leftrightarrow \\ (\log_bx)*\log_b(\log_ax) = \log_ax*\log_b(\log_bx) \Leftrightarrow \\ (\log_bx)*\log_b(\log_ax) = \frac{\log_bx}{\log_ba}*\log_b(\log_bx) \Leftrightarrow \\ \log_b(\frac{\log_bx}{\log_ba}) = \frac{\log_b(\log_bx)}{\log_ba} \Leftrightarrow \\ \log_b(\log_bx)-\log_b(\log_ba) = \frac{\log_b(\log_bx)}{\log_ba} \Leftrightarrow \\ \log_b(\log_bx) - \frac{\log_b(\log_bx)}{\log_ba} = \log_b(\log_ba) \Leftrightarrow \\ \log_b(\log_bx)(1-\frac{1}{\log_ba}) = \log_b(\log_ba) \Leftrightarrow \\ \log_b(\log_bx) = \frac{\log_b(\log_ba) \log_ba}{\log_ba-1} \Leftrightarrow \\ x = b^{b^{\frac{\log_b(\log_ba) \log_ba}{\log_ba-1}}}\]From the QM-AM-GM inequality:
\[\frac{\sum_{i=1}^{n-1}\log_{x_{i}}{x_{i+1}} + \log_{x_n}x_1}{n} \ge (\log_{x_n}x_1*\underbrace{\prod_{i=1}^{n-1}\log_{x_{i}}{x_{i+1}}}_{=\log_{x_{1}}x_n})^{\frac{1}{n}} \Leftrightarrow \\ \frac{\sum_{i=1}^{n-1}\log_{x_{i}}{x_{i+1}} + \log_{x_n}x_1}{n} \ge (\underbrace{\log_{x_n}x_1 * \log_{x_1}x_n}_{=1})^{\frac{1}{n}}\Leftrightarrow \\ \sum_{i=1}^{n-1}\log_{x_{i}}{x_{i+1}} + \log_{x_n}x_1 \ge n \Leftrightarrow \\ \sum_{i=1}^{n-1}\log_{x_{i}}{x_{i+1}} \ge n - \log_{x_n}x_1\]Since \(x=\log_b c\), \(y=\log_a c\), and \(m=\log_{abc} c\), we have \(\frac{1}{m}=\log_c(abc)=\log_c a+\log_c b+1=\frac{1}{x} + \frac{1}{y} + 1\).
Applying the Cauchy-Schwarz inequality:
\[(\frac{1}{x}*1 + \frac{1}{y}*1 + 1*1)^2 \le (\frac{1}{x^2} + \frac{1}{y^2}+1)(1^2 + 1^2 + 1^2)\]Leads to:
\[\frac{1}{m^2} \le 3(\frac{1}{x^2} + \frac{1}{y^2}+1) \Leftrightarrow \\ (\frac{1}{m \sqrt{3}}-1)(\frac{1}{m \sqrt{3}}+1) \le \frac{1}{x^2} + \frac{1}{y^2}\]The solution is left to the reader.
The article has yet to be thoroughly reviewed by anyone other than me, so I put it online, hoping to get some feedback before bringing it to a final state.
In this series, we will start with a brief recap of some of the math concepts related to the circle, including trigonometric functions like sine and cosine. We’ll also discuss Euler’s identity, introduce the concept of a sinusoid (and complex sinusoid), and finally, we’ll introduce the concept of Fourier Series.
The animations used in this series are original, although I drew inspiration from some existing materials found on the web. Please keep in mind that this is not a comprehensive course on the topic, so if you’re really interested in learning more, I recommend taking a full course on the subject.
to be continued
It all starts with The Circle:
The Circle is a geometrical figure with a center \(P(a, b)\), and a radius \(r\).
It has the following associated equation:
\[(x-a)^{2} + (y-b)^{2} = r^2\]If \(a=0, b=0\) and \(r=1\), the circle becomes less generic, so we call it The Unit Circle:
The equation for The Unit Circle is:
\[x^2+y^2=1\]One could argue The Circle is the epitome of symmetry.
Pick one point, \(A\), then its reflection on the other side, \(A^{'}\), and start rotating:
What happens on The Circle, stays on The Circle.
We rarely see angles expressed in degrees; usually, we represent them in relation to the number \(\pi\): \(\pi\), \(\frac{\pi}{2}\), \(\frac{\pi}{3}\), \(\frac{\pi}{4}\), etc.
\(\pi\) is the ratio of a circle’s circumference to its diameter: \(\pi \approx 3.14\).
If \(r \neq 1\), the circle’s perimeter is \(P=2\pi r\), while the area is \(A=\pi r^2\). Both \(A\) and \(P\) depend on \(\pi\).
\(\pi\) is irrational and transcendental.
The radian (or rad) is the actual unit we use to measure angles. An intimate relationship exists between the radian and the number \(\pi\).
To transform an angle measured in degrees to radians, the algorithm is simple: we multiply the angle by \(\pi\), then divide the result by \(180\).
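As a tiny sketch of the conversion rule (Python’s standard library also ships `math.radians`, which does exactly this):

```python
import math

def deg_to_rad(degrees):
    # multiply by pi, then divide by 180
    return degrees * math.pi / 180

assert math.isclose(deg_to_rad(180), math.pi)
assert math.isclose(deg_to_rad(90), math.pi / 2)
assert math.isclose(deg_to_rad(45), math.radians(45))  # the stdlib equivalent
```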
We can define \(\sin\) and \(\cos\) in relationship to The Unit Circle:
\(\sin\) and \(\cos\) are periodic functions with the period \(2\pi\).
At this point, it would be unfair to \(\cos\) not to plot it on the same graph:
Putting \(\cos\) and \(\sin\) side by side, we observe that they are not that different:
\[\sin(x+\frac{\pi}{2})=\cos(x)\]We say that \(\cos\) leads \(\sin\) by \(\frac{\pi}{2}\):
The parity of mathematical functions generally refers to whether a function is even, odd, or neither.
The cosine is even, meaning \(\cos(x)=\cos(-x)\):
And the sine is odd, meaning \(\sin(-x)=-\sin(x)\), or \(\sin(x)=-\sin(-x)\):
In the Complex Plane, the points on the circle are defined by the following equation:
\[z=\cos(\theta)+i*\sin(\theta)\]Multiplying a complex number by the imaginary unit \(i\) is the equivalent of rotating the number counterclockwise by \(\frac{\pi}{2}\) on an “imaginary circle” with the radius \(r=\mid a + b*i \mid=\sqrt{a^2 + b^2}\).
For example, if we take \(z_{1} \in \mathbb{C}\) and multiply it three times by \(i\), we rotate it through all 4 quadrants of the circle:
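In code, the same rotation takes only a few lines (the starting value \(z_1 = 3 + 2i\) is arbitrary; Python’s built-in complex numbers are used for illustration):

```python
z1 = 3 + 2j
for _ in range(4):
    print(z1)
    z1 *= 1j   # each multiplication by i rotates z1 counterclockwise by pi/2

# printed: (3+2j), (-2+3j), (-3-2j), (2-3j) -- and z1 is back to (3+2j)
```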
The natural exponential function, often denoted as \(f(x)=e^{x}\), is a particular case of the exponential function where the base is \(e\), also known as Euler’s Number (\(e \approx 2.71828\)).
\(e\), just like \(\pi\), is irrational and transcendental.
We normally define \(e\) as:
\[e=\sum_{n=0}^{\infty}(\frac{1}{n!})=\frac{1}{0!} + \frac{1}{1!}+\frac{1}{2!}+...\] \[e=\lim_{x \to \infty}(1+\frac{1}{x})^x\] \[e=\lim_{x \to 0}(x+1)^{\frac{1}{x}}\]Even if not obvious, there’s a strong connection between \(e\) and the circle.
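Numerically, the three definitions agree; a small sketch can check them against each other (the series is truncated at 20 terms, and the limits are approximated with large/small finite values, all chosen arbitrarily):

```python
import math

series = sum(1 / math.factorial(n) for n in range(20))   # first definition, truncated
limit1 = (1 + 1 / 1_000_000) ** 1_000_000                # second definition, x = 10^6
limit2 = (1 + 1e-8) ** (1 / 1e-8)                        # third definition, x = 10^-8

assert math.isclose(series, math.e)
assert math.isclose(limit1, math.e, rel_tol=1e-5)  # the limits converge slowly
assert math.isclose(limit2, math.e, rel_tol=1e-5)
```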
The natural exponential function is an eigenfunction for differentiation.
An eigenfunction in this context is a function that, when differentiated, yields a constant multiple of itself:
\[\frac{d}{dx} e^{ax} = a * e^{ax}\]If we substitute \(a \rightarrow i\), by repeatedly differentiating \(f(x) = e^{ix}\), we obtain:
\[\frac{d}{dx}f(x) = \frac{d}{dx} (e^{ix}) = i * e^{ix}\] \[\frac{d^2}{dx^2}f(x) = \frac{d^2}{dx^2} (e^{ix}) = \frac{d}{dx} (i * e^{ix}) = -e^{ix}\] \[\frac{d^3}{dx^3}f(x) = \frac{d^3}{dx^3} (e^{ix}) = \frac{d}{dx} (-e^{ix}) = -i * e^{ix}\] \[\frac{d^4}{dx^4}f(x) = \frac{d^4}{dx^4} (e^{ix}) = \frac{d}{dx} (-i*e^{ix}) = e^{ix} = f(x)\]In simple terms, after we differentiate \(f(x)\) four times (\(f'(x), f''(x), f'''(x), f''''(x)\)), our function does a full circle.
It’s the same pattern described in the previous section when we multiplied our \(z_{1}\) by \(i\).
Repeatedly differentiating \(e^{ix}\) is equivalent to repeatedly multiplying \(e^{ix}\) by \(i\). Multiplying a complex number by \(i\) means rotating that number in the Complex Plane by \(\frac{\pi}{2}\).
But what is a derivative of a function at a certain point?
It’s the rate of change of that function at that particular point. But we’ve just said that the derivative of \(e^{ix}\) is equivalent to a rotation.
So, the rate of change is rotational.
We can intuitively feel that the function \(e^{ix}\) describes an actual circle.
There’s no other possible solution. So we can boldly affirm that \(e^{ix} = \cos(x) + i*\sin(x)\) (which is the formula discovered by Euler).
Of course, this is not a rigorous proof. One can prove Euler’s formula using Taylor Series.
Euler’s formula is a thing of marvel:
\[e^{ix}=\cos(x) + i * \sin(x)\]Or, if we choose \(x=\pi\):
\[e^{i\pi}+1=0\]By substituting \(x \rightarrow -x\) into Euler’s formula, we obtain:
\[e^{-ix}=\cos(x)-i*\sin(x)\]If we add/subtract the two equalities, we will obtain the definition of the sine and cosine functions in their exponential form:
\[\cos(x) = \frac{e^{ix} + e^{-ix}}{2}\] \[\sin(x) = \frac{e^{ix} - e^{-ix}}{2*i}\]All points of the circle are determined by a function \(z(x)\), where:
\[z(x)=e^{ix}=\underbrace{\cos(x)}_{Re(x)}+\underbrace{i*\sin(x)}_{Im(x)}\]\(x\) represents the actual angle \(\theta \in \mathbb{R}\), so \(z(\theta)=e^{i\theta}=\cos(\theta)+i*\sin(\theta)\).
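The exponential forms of sine and cosine above are easy to spot-check numerically at an arbitrary angle (Python’s `cmath` is used for illustration):

```python
import cmath, math

x = 0.7  # an arbitrary angle in radians
cos_x = (cmath.exp(1j * x) + cmath.exp(-1j * x)) / 2
sin_x = (cmath.exp(1j * x) - cmath.exp(-1j * x)) / (2 * 1j)

# the imaginary parts vanish, and the real parts match cos(x) and sin(x)
assert abs(cos_x.imag) < 1e-12 and abs(sin_x.imag) < 1e-12
assert math.isclose(cos_x.real, math.cos(x))
assert math.isclose(sin_x.real, math.sin(x))
```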
You may have noticed that we interchange \(x\) with \(t\) and \(\theta\) throughout the article. Don’t let that confuse you: we are the ones to decide how to look at \(x\), as an angle or as time.
A sinusoid, or a sine wave, is a smooth, repetitive curve defined by the following function:
\[y(t) = A * \sin(2\pi ft + \varphi) = A * \sin(\omega t + \varphi)\]If we consider time to be the \(x\)-axis, and \(y(t)\) the \(y\)-axis, the sinusoid becomes:
\[y=f(x)=A * \sin(\omega x + \varphi)\]The \(\sin\) and \(\cos\) are just particular cases of sinusoids.
The following animation is interactive. You can choose the values of \(A\), \(\omega\), and \(\varphi\) to plot the sinusoid (if you pick \(\varphi=\frac{\pi}{2}\), a cosine is plotted):
Starting with the definition of a sinusoid:
\[y(t) = A * \sin(2\pi ft + \varphi) = A * \sin(\omega t + \varphi)\]If we multiply each side of Euler’s formula by \(A\):
\[A*e^{i\theta}=A*(\cos(\theta)+i*\sin(\theta))\]If we substitute \(\theta\) with \(\theta=\omega t + \varphi\) we obtain the complex sinusoid:
\[s(t)=A*e^{i(\omega t + \varphi)} = A * \cos(\underbrace{\omega t + \varphi}_{\theta}) + i * A * \sin(\underbrace{\omega t + \varphi}_{\theta})\]A complex sinusoid captures the behavior of two sinusoids (one cosine and one sine) on both its axes; on the real part, it behaves like a cosine, while on its imaginary part, it behaves like a sine.
The two are in sync as they both depend on the free variable \(\theta\), expressed as \(\theta=\omega t + \varphi\).
^{(Source code)}
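One sample of the complex sinusoid makes the “two sinusoids in one” behavior concrete (the amplitude, angular frequency, phase, and time below are arbitrary; Python for illustration):

```python
import cmath, math

A, omega, phi = 2.0, 3.0, 0.5   # arbitrary amplitude, angular frequency, phase
t = 1.2
s = A * cmath.exp(1j * (omega * t + phi))

assert math.isclose(s.real, A * math.cos(omega * t + phi))  # the real part is a cosine
assert math.isclose(s.imag, A * math.sin(omega * t + phi))  # the imaginary part is a sine
```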
Two interesting observations:
Two sinusoids in phase and sharing the same amplitude but with opposite frequencies cancel each other out.
Let’s plot two arbitrarily selected sinusoids \(y_{1}(x)\) and \(y_{2}(x)\), where:
The sum \(y(x)=y_{1}(x) + y_{2}(x)\) shows an interesting pattern:
Adding more sinusoids to an existing one (\(A=1\), \(\omega=1\), \(\varphi=1\)) generates more complex patterns:
Not so long ago, I re-imagined the game of Tetris.
It’s now possible to play Tetris with Sinusoids:
If we carefully choose the sinusoids, we can create predictable patterns:
Let’s, for example, use the following formula:
\[y(x) = \frac{4}{\pi}\sum_{k=1}^{\infty}\frac{\sin(2\pi(2k-1)fx)}{2k-1}\]Through expansion, the formula becomes:
\[y(x) = \underbrace{\frac{4}{\pi}\sin(\omega x)}_{y_{1}(x)} + \underbrace{\frac{4}{3\pi}\sin(3\omega x)}_{y_{2}(x)} + ... + \underbrace{\frac{4}{(2k-1)\pi}{\sin((2k-1)\omega x)}}_{y_k(x)} + ...\]\(y_1(x), y_2(x), y_3(x), ..., y_k(x), ...\) are all the individual sinusoids.
If we perform the sum, we will create a square wave. The more sinusoids we pick, the better the approximation.
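A sketch of that partial sum in code (the number of terms and the sample points are arbitrary; with 200 terms the sum already sits close to \(\pm 1\) away from the jumps):

```python
import math

def square_partial_sum(x, n, omega=1.0):
    """Sum the first n sinusoids of the square-wave series."""
    return (4 / math.pi) * sum(
        math.sin((2 * k - 1) * omega * x) / (2 * k - 1) for k in range(1, n + 1)
    )

assert abs(square_partial_sum(1.0, 200) - 1) < 0.05   # x in (0, pi)   -> approx +1
assert abs(square_partial_sum(4.0, 200) + 1) < 0.05   # x in (pi, 2pi) -> approx -1
```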
Please choose how many sinusoids you want to use (and the value of \(\omega\)), and let’s see what the functions look like:
One sinusoid has a corresponding circle that spins. So, the above animation can be re-imagined like this:
Pick the number of circles and the value of \(\omega\) to plot the circles and the corresponding \(y_k(x)\) functions:
There’s an intuitive proof to this: each epicycle corresponds to a specific sinusoid; when we talk about combining the sinusoids, we are talking about summing their positions at each point in time, and eventually, this operation reduces to subsequent vector additions.
Let’s take a simple example by introducing three arbitrarily picked sinusoids and their associated point vectors (or position vectors):
Their sum is \(y(x) = y_{1}(x) + y_{2}(x) + y_{3}(x) = 1.4\sin(x + 1) + 0.8\sin(2x) + 0.5\sin(3x)\).
A position vector represents the displacement from the origin \((0, 0)\) to a specific point in space. In our case, the vector \((x, y_{k}(x))\) represents the position or location of a point on the graph of the function(s) \(y_{k}(x)\) at a particular \(x\) value.
If we plot \(y(x)\) on the Cartesian grid, we obtain something like:
At each given point \(x\), we can say for certain that \(\vec{u} = \vec{u_{1}} + \vec{u_{2}} + \vec{u_{3}}\).
If we carefully pick the right sinusoids, the moving circles can describe (approximate) any shape we want.
Here is a flower for example:
Can you guess the individual sinusoids working together to draw the flower?
You probably can’t unless you apply methods from a branch of mathematics called Fourier Analysis.
Fourier Series is the mathematical process through which we expand a periodic function of period \(P\) as a sum of trigonometric functions.
Do you remember Pink Floyd’s album cover for The Dark Side of the Moon?
Imagine our function \(f(x)\) is the light itself, the prism is essentially Fourier Mathematics, and the spectral colors emanating from the prism are the sinusoids.
Following the analogy, the formula looks like this:
\[\underbrace{f(x)}_{\text{ light itself}}=\underbrace{A_{0} + \sum_{n=1}^{\infty} [A_{n} \cos(\frac{2\pi nx}{P}) + B_{n} \sin(\frac{2\pi nx}{P})]}_{\text{the spectral components}}\]where \(A_{n}\) and \(B_{n}\), called the Fourier Coefficients, are defined by the following integrals:
\[A_{0} = \frac{1}{P} \int_{- \frac{P}{2}}^{+\frac{P}{2}} f(x) dx\] \[A_{n} = \frac{2}{P} \int_{- \frac{P}{2}}^{+ \frac{P}{2}} f(x) * \cos(\frac{2\pi nx}{P}) dx\] \[B_{n} = \frac{2}{P} \int_{- \frac{P}{2}}^{+ \frac{P}{2}} f(x) * \sin(\frac{2\pi nx}{P}) dx\]With the help of Euler’s Formula and by changing the sine and cosine functions in their exponential forms, we can also express the Fourier Series of a function as a sum of Complex Sinusoids:
\[f(x) = \sum_{n=-N}^{N} C_{n} e ^ {i2\pi \frac{n}{P}x}\]Where:
\[C_{n} = \begin{cases} A_{0} & \text{if } n = 0 \\ \frac{1}{2} (A_{n} - i * B_{n}) & \text{if } n > 0 \\ \frac{1}{2} (A_{n} + i * B_{n}) & \text{if } n < 0 \\ \end{cases}\]If we do additional substitutions, the final form of \(C_{n}\) is:
\[C_{n} = \frac{1}{P} \int_{-\frac{P}{2}}^{\frac{P}{2}} e^{-i2\pi\frac{n}{P}x} f(x) dx\]Remember the Square Wave we’ve approximated with sinusoids in this section? At that point, we used the following formula to express the Square as a sum of sinusoidal components:
\[y(x) = \frac{4}{\pi}\sum_{k=1}^{\infty}\frac{\sin(2\pi(2k-1)fx)}{2k-1}\]Or, to keep things simpler, by substituting \(\omega=2\pi f\) (\(\omega\) is the angular frequency):
\[y(x) = \frac{4}{\pi}\sum_{k=1}^{\infty}\frac{\sin((2k-1)\omega x)}{2k-1}\]It’s time to understand how we’ve devised such an approximation.
In isolation, the Square Wave, \(f(x)\) looks like this:
^{(Source code)}
Throughout the interval \([0, 2L]\), \(f(x)\) can be written as:
\[f(x) = H(x) - 2H(x - L)\]
Where \(H(x)\) is called the Heaviside Step Function and has the following definition:
\[H(x) = \begin{cases} 0 & \text{if } x \lt 0 \\ 1 & \text{if } x \ge 0 \\ \end{cases}\]First of all, let’s look at \(A_{0} = \frac{1}{2L} \int_{0}^{2L} f(x) dx\). Notice how we’ve changed the interval from \([-\frac{P}{2}, \frac{P}{2}]\) to \([0, 2L]\) to match our example. This will be reflected in the formulas.
Well, this coefficient (\(A_{0}\)) is a fancy way to express the average of \(f(x)\) over the interval (in our case \([0, 2L]\)). At the same time, \(A_{0}\) is the area determined by \(f(x)\) over \([0, 2L]\), divided by \(2L\). But if you look at the plot again, you will see that the net area is \(0\), because the green area (S1) cancels the red area (S2), regardless of \(L\).
^{(Source code)}
Secondly, let’s compute the \(A_{n} = \frac{1}{L} \int_{0}^{2L} f(x) * \cos(\frac{\pi nx}{L}) dx\) coefficients. An important observation is that \(f(x)\) is odd, and its average value on the interval is \(0\); we can safely say all the coefficients \(A_{n}\) also vanish.
Visually speaking, regardless of how you pick \(n\) or \(L\), the net area determined by the \(A_{n}\) integral will always be zero. It’s visually obvious if we plot \(A_n\). For example, plotting \(A_{1}\), \(A_{2}\), \(A_{3}\), and \(A_{4}\) looks like this:
^{(Source code)}
Similar symmetrical patterns will emerge if you increase the \(n\) in \(A_{n}\) and plot them.
Thirdly, we need to compute:
\[B_{n} = \frac{1}{L} \int_{0}^{2L} f(x) * \sin(\frac{\pi nx}{L}) dx\]If we plot a \(B_{1}\), \(B_{2}\), \(B_{3}\) and, let’s say, \(B_{4}\) we can intuitively feel what’s happening with \(B_{n}\):
^{(Source code)}
If you have a keen eye for geometrical representations, you will notice that every even \(B_{n}\) is also 0. The red and green areas nullify, so the net area described by the integral is \(0\). The odd terms will be \(2 * \text{something}\), so let’s calculate that \(\text{something}\).
We will need to split the integral into two sub-intervals, \([0, L]\) and \([L, 2L]\) (there’s a chasm at \(L\)), but given the fact that \(f(x)\) and \(\sin(x)\) are odd, \(B_{n}\) can be written as:
\[B_{n} = 2 * [\frac{1}{L} \int_{0}^{L} f(x) * \sin(\frac{\pi nx}{L}) dx]\]We can now perform u-substitution, so we can write:
\[B_{n} = \frac{2}{L} \int_{0}^{nL\pi} \frac{\sin(\frac{u}{L})}{n\pi}du\]After we take the constant out, we compute the integral, use the intervals, and take into consideration the periodicity of cosine:
\[B_{n} = \frac{2}{n\pi}(1-(-1)^n)\]And now we see it: \(B_{n}\) is exactly \(0\) if \(n\) is even, and \(B_{n}=2 * \frac{2}{n\pi}\) if \(n\) is odd.
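A crude midpoint Riemann sum can confirm the closed form for the first few \(n\) (the step count and \(L\) are picked arbitrarily; this is a numeric sketch, not the analytic computation above):

```python
import math

L, steps = 1.0, 100_000
dx = 2 * L / steps

def f(x):
    # the square wave on [0, 2L]: +1 on [0, L), -1 on [L, 2L)
    return 1.0 if x < L else -1.0

for n in range(1, 6):
    integral = sum(
        f((i + 0.5) * dx) * math.sin(math.pi * n * (i + 0.5) * dx / L) * dx
        for i in range(steps)
    )
    b_n = integral / L
    closed_form = (2 / (n * math.pi)) * (1 - (-1) ** n)
    assert math.isclose(b_n, closed_form, abs_tol=1e-3)
```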
Putting all back into the master formula of the Fourier Series:
\[f(x)=\underbrace{A_{0}}_{0} + \sum_{n=1}^{\infty} [\underbrace{A_{n} \cos(\frac{\pi nx}{L})}_{0} + B_{n} \sin(\frac{\pi nx}{L})]\]Things become:
\[f(x)=\frac{4}{\pi} \sum_{n=1,3,5...}^{+\infty} (\frac{1}{n} * \sin(\frac{\pi nx}{L}))\]If we substitute \(n \rightarrow 2k-1\), we obtain:
\[f(x)=\frac{4}{\pi} \sum_{k=1}^{+\infty} (\frac{\sin(\frac{\pi (2k-1)x}{L})}{(2k-1)})\]To obtain the initial formula, we substitute \(L \rightarrow \frac{1}{2f}\), and \(2\pi f \rightarrow \omega\), basically we create an interdependence between \(L\) (half of the interval) and \(\omega\), \(L=\frac{\pi}{\omega}\):
\[f(x) = \frac{4}{\pi}\sum_{k=1}^{\infty}\frac{\sin((2k-1)\omega x)}{2k-1}\]Unfortunately, there’s no way we can go to \(+\infty\), so let’s consider \(s(x)\) as an approximation of \(f(x)\) that depends on \(n\).
\[s(x) = \frac{4}{\pi}\sum_{k=1}^{n}\frac{\sin((2k-1)\omega x)}{2k-1} \approx f(x)\]In the next animation, you will see that by increasing \(n\), the accuracy of our approximation gets better and better, and the gaps are slowly closed:
^{(Source code)}
To understand how adding more terms improves the approximation, let’s look back again at a few of them, \(s_{1}(x)\), \(s_{2}(x)\), \(s_{3}(x)\), \(s_{4}(x)\) and \(s_{5}(x)\) (we will pick \(\omega=\frac{\pi}{2}\), so that \(L=2\)):
\[s_{1}(x) = \frac{4}{\pi} \sin(\frac{\pi x}{2})\] \[s_{2}(x) = \frac{4}{3\pi} \sin(\frac{3\pi x}{2})\] \[s_{3}(x) = \frac{4}{5\pi} \sin(\frac{5\pi x}{2})\] \[s_{4}(x) = \frac{4}{7\pi} \sin(\frac{7\pi x}{2})\] \[s_{5}(x) = \frac{4}{9\pi} \sin(\frac{9\pi x}{2})\]Each of the 5 terms is a sinusoid, with amplitudes \(\frac{4}{\pi}\), \(\frac{4}{3\pi}\), etc., and angular frequencies \(\frac{\pi}{2}\), \(\frac{3\pi}{2}\), etc.
So, if we were to approximate a Square Wave with its fifth partial sum (the red dot), we would obtain something like this:
^{(Source code)}
Notice how obsessed the red dot is with the blue dot (the actual function) and how closely it follows it.
We can always add more terms to the partial sum to help the red dot in its holy mission, improving the approximation until nobody cares anymore.
Without dealing with all the associated math, the Fourier Series decomposition for a triangle-wave is:
\[s(x)=\frac{8}{\pi^2}\sum_{k=1}^{N}\frac{(-1)^{k-1}}{(2k-1)^2}*\sin((2k-1)x)\]Plotting the function \(s(x)\), we will see that things converge smoothly and fast. As soon as \(n\) approaches \(6\), we can already observe the triangle:
^{(Source code)}
Let’s compute the first six terms of the \(\sum\), so that:
\[s(x) \approx s_1(x) + s_2(x) + s_3(x) + s_4(x) + s_5(x) + s_6(x)\]Where:
A keen eye will observe that \(s_2(x)\), \(s_4(x)\), \(s_6(x)\), and all the even terms are negative.
A negative amplitude doesn’t make much sense, at least not in a physical sense. What are we going to do with the minus sign?
We have two options:
In practice, we will go with option 2, as it’s more practical and gives us more control; the two options are equivalent, so we can write \(s_2(x)\) in both ways:
\[s_2(x)=-\frac{8}{3^2\pi^2}*\sin(3x)\] \[s_2(x)=\frac{8}{3^2\pi^2}*\sin(3x + \pi)\]Visually speaking, the results will not be surprising if we plot \(\sin(-x)\) and \(\sin(x+\pi)\) side by side; the two are equivalent:
^{(Source code)}
If we consider this, the even terms of \(s(x)\) will become:
Shamelessly skipping the math demonstration, a reverse-sawtooth function has the following form:
\[s(x)=\frac{2}{\pi}\sum_{k=1}^{N}(-1)^k*\frac{\sin(kx)}{k}\]Plotting \(s(x)\), while increasing \(n\), things look like this:
^{(Source code)}
To better visualize what’s happening, let’s look at a Fourier Series Machinery and how the circles move to create beautiful practical patterns.
You can pick the shape of the desired signal from here, and the sketch will change accordingly.
^{(Source code)}
…A bunch of spinning circles on a stick, where each circle corresponds to exactly one term of the series - this is the marvelous Fourier Series Machinery.
If we run our signal through the Fourier Series Machinery, we will obtain The Amplitude (\(A\)) and The Phase (\(\varphi\)) for each Frequency (\(\omega\)) from \(1\) to \(N\). Isn’t it amazing? And it all comes down to a bunch of spinning circles… on a stick.
to be continued
That said, some developers are more interested in exploring creative fields beyond the traditional job market. For these individuals, programming is more of a hobby than a means of earning a living. If you fall into this category, you may find the following project suggestions more appealing.
You can also check other articles on this topic:
For example, Austin Z. Henley recommends writing your own Topological Sort, Recursive Descent Parsing, Bloom Filter, Piece Table, and Splay Tree implementations.
The truth is that many Computer Science curricula have been diluted. In fact, some schools only teach the basics, such as Dynamic Arrays, Linked Lists, Queues, Stacks, and Hash Tables. However, there are many other Data Structures and Algorithms that are worth exploring beyond these fundamental concepts.
Personally, I would also go for:
If you already know how to write your own Hash Table, building a Distributed Hash Table won’t be an impossible task. Although it might seem like a complicated project, it doesn’t necessarily have to be production-ready or become the next Redis.
By the end of the project, you will likely become more comfortable with network programming and managing concurrency issues.
Bonus: Thoroughly testing your DHT will be a journey in itself.
^{Don’t get demotivated if you get stuck or the result is terrible!}
This is a relatively easy project, but it’s worth witnessing the power of the Stack data structure. You can do this by learning how to evaluate RPN expressions and implementing the Shunting Yard algorithm. As you work on this project, challenge yourself to learn a new GUI library, one that you haven’t touched before.
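As a taste of how central the stack is here, a minimal RPN evaluator fits in a dozen lines (no error handling, no operator extensibility; a sketch of the idea, not a full calculator):

```python
def eval_rpn(tokens):
    """Evaluate a Reverse Polish Notation expression using a stack."""
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b, a = stack.pop(), stack.pop()   # right operand is popped first
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

print(eval_rpn("3 4 2 * +".split()))   # 3 + 4 * 2 -> 11.0
```

The Shunting Yard algorithm is then the bridge from infix input (“3 + 4 * 2”) to this kind of token stream.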
Once you have a working calculator, start exploring crazy ideas:
3 to the power of 2.27? It can also work in the browser.
^{First, start learning C if you haven’t already. Contrary to popular belief, learning C early in your career will make you a better programmer in the long run. I am more convinced of this now than I ever was before. However, this is not the time or the article to support my claim.}
When implementing the HTTP protocol, keep in mind that you don’t have to cover everything. The purpose of this exercise is not to write the fastest HTTP server out there. Instead, you do it to:
char* frequently. Of course, you can create an abstraction over char* to solve this problem, and while it may be buggy, it will be uniquely yours!
fork(), pthreads, and all the other low-level knowledge you don’t usually have to deal with, except in your Operating Systems course.
If mentioning C has offended you, you can always rewrite the project in Rust. That would be new.
^{An esoteric programming language (sometimes shortened to esolang) is a programming language designed to test the boundaries of computer programming language design, as a proof of concept, as software art, as a hacking interface to another language (particularly functional programming or procedural programming languages), or as a joke. (source)}
This is a side project that lets you unleash your creativity. Here are some tips to help you get started:
For more inspiration, check the esolangs wiki.
I’ve personally tackled this challenge (check this article). However, I regret not designing an original set of instructions and instead implementing the LC-3 instruction set.
Your project can be register-based, stack-based, or a hybrid. It can even have a JIT compiler if you are feeling brave.
Whatever you choose to do, the key is to be creative.
For instance, please look at uxn, which can run on multiple operating systems or devices, and has a small community of fans writing software for it. Even tsoding, one of my favourite Tech Youtubers, recently implemented Conway’s Game Of Life as an uxn program.
Firstly, understand how uxn works by reading the official documentation or by following this excellent tutorial.
Look at existing examples:
Come up with an original game idea.
Everyone seems to prefer Snake, Tetris, Pong, and Space Invaders. But there are other (now forgotten) games that deserve your attention.
Why don’t you try implementing something different:
TIC-80 is a fantasy computer for making, playing, and sharing tiny games.
If you don’t know what game to write, take inspiration from here.
This is more of an artistic project than a programming one, but still.
Write a markdown language that is not precisely markdown but something alien.
Extend an existing markdown implementation. You can get inspiration from LiaScript or R Markdown.
Yes, it’s boring, but something needs to use the newly invented markdown language.
You don’t have to be Arthur C. Clarke or a mathematician to appreciate the useless beauty of the Mandelbrot Set.
Have you ever considered building your own Mandelbrot Set Explorer using HTML Canvas? There are plenty of examples on the internet:
Add a creative touch! For example:
Start with optics; it might be easier, and there are tons of examples on the internet:
If you understand what you are doing, jump to other areas:
Start small and slowly become Bartosz Ciechanowski.
I suppose you are already familiar with Conway’s Game Of Life.
Make something creative out of it:
There’s so much you can do.
If you’ve studied computer graphics, you might’ve encountered the concept of Bezier Curves. Why don’t you start approximating reality with them?
Some fans of Pierre Bezier and Jamiroquai have already played this game:
Maybe now it’s your time to write a different renderer.
Why don’t you pick sines and cosines instead of polynomials? I hope you see where I am going.
^{To understand what I am referring to, check this question on StackOverflow.}
This is going to be a challenging project, but not as hard as you would imagine. Basically, you need to come up with something WolframAlpha is already capable of.
You will have to be able to parse mathematical expressions. Then, you will have to (recursively) apply specific differentiation rules (e.g., the chain and product rules). In the end, you will have to simplify the resulting expression.
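To make the recursive idea concrete, here is a minimal sketch in Python. The nested-tuple representation and the names `diff` and `evaluate` are my own illustrative choices, not a prescribed design; a real tool would also need parsing and simplification.

```python
# A toy symbolic differentiator over nested-tuple expression trees.
# Expressions: 'x', a number, ('+', a, b), or ('*', a, b).

def diff(e):
    """d/dx of expression e."""
    if e == 'x':
        return 1
    if isinstance(e, (int, float)):
        return 0
    op, a, b = e
    if op == '+':    # sum rule: (a + b)' = a' + b'
        return ('+', diff(a), diff(b))
    if op == '*':    # product rule: (a*b)' = a'*b + a*b'
        return ('+', ('*', diff(a), b), ('*', a, diff(b)))
    raise ValueError('unknown operator: %r' % (op,))

def evaluate(e, x):
    """Numerically evaluate an expression tree at x (handy for spot checks)."""
    if e == 'x':
        return x
    if isinstance(e, (int, float)):
        return e
    op, a, b = e
    if op == '+':
        return evaluate(a, x) + evaluate(b, x)
    return evaluate(a, x) * evaluate(b, x)

# d/dx (x*x + 3*x) = 2x + 3, so at x = 5 the derivative is 13.
expr = ('+', ('*', 'x', 'x'), ('*', 3, 'x'))
print(evaluate(diff(expr), 5))
```

Adding the chain rule, quotient rule, and a simplifier on top of this skeleton is where the real fun (and work) begins.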
You don’t have to implement everything.
You will have to remember the fundamentals of calculus.
It’s going to be fun.
… In the name of science.
I wouldn’t put too much effort into the graphics. It’s not like you want to contribute to people’s misfortune and addiction. But the mathematics behind a slot machine can be interesting, plus you can be creative:
^{This idea was suggested by @snej on lobste.rs}
Few of our generation have played text-based games, and that’s fine - we need to put our hardware to better use than rendering fonts in a terminal.
But there was a time when games like Zork or Colossal Cave were extremely popular.
So why don’t you build a game engine for text-based adventures?
Make the engine cross-platform - allow the game to work in the terminal, browser, or an SDL window.
Or leave it terminal only. There are beautiful TUI libraries nowadays, so you don’t have to stay cursed because you are stuck with ncurses:
Now that Wayland is almost here, I am sure there’s a lot of new room for creativity. Look at sway.
Ok, writing a Tiling Window Manager is not the most approachable project you can think of. But at the same time, you can keep things simple. For example, XMonad, when launched, had roughly 1000 lines of code.
This article is a continuation of my previous selection of non-trivial algebra problems from the Romanian Math Olympiad for high-school students. However, this time I have included a few harder problems from the National Phase that would definitely provide a challenge to any reader.
Personally, I found problems 5 and 9 to be the most difficult.
1. ^{easy} Find \(m, n \in \mathbb{Z}\) if:
\[9m^2+3n=n^2+8\]^{Hint & Answer}
2. ^{easy} Find \(x,y,z \in \mathbb{R_{+}^{*}}\) if:
\[\begin{cases} x^3y+3 \le 4z \\ y^3z+3 \le 4x \\ z^3x+3 \le 4y \end{cases}\]^{Hint & Answer}
3. ^{easy} Find \(x \notin \mathbb{Q}\) so that:
\[x^2+2x \in \mathbb{Q}\] \[x^3-6x \in \mathbb{Q}\]^{Hint & Answer}
4. ^{easy} If \(a, b \in \mathbb{C}\) prove:
\[\lvert 1 + ab \rvert + \lvert a+b \rvert \ge \sqrt{\lvert a^2 - 1 \rvert * \lvert b^2 - 1 \rvert}\]^{Hint & Answer}
5. ^{hard} Find \(a, b \in \mathbb{R}\), if \(a+b \in \mathbb{Z}\) and \(a^2+b^2=2\).
^{Hint & Answer}
6. ^{easy} Find all numbers \(n \in \mathbb{N}\) with the following property: \(\exists a,b \in \mathbb{Z}\) so that \(n^2=a+b\) and \(n^3=a^2+b^2\).
^{Hint & Answer}
7. ^{easy} Prove the following inequality:
\[\frac{a}{x}+\frac{b}{y} \ge \frac{4(ay+bx)}{(x+y)^2}\] \[\forall a,b,x,y \gt 0\]^{Hint & Answer}
8. ^{medium} Prove the following inequality:
\[\frac{a}{b+2c+d}+\frac{b}{c+2d+a}+\frac{c}{d+2a+b}+\frac{d}{a+2b+c} \ge 1\] \[\forall a,b,c,d \gt 0\]^{Hint & Answer}
9. ^{hard} If \(x_1, x_2, ..., x_n\) are strictly positive numbers, prove the following:
\[\frac{1}{1+x_1}+\frac{1}{1+x_1+x_2}+...+\frac{1}{1+x_1+x_2+...+x_n} \lt \sqrt{\frac{1}{x_1}+\frac{1}{x_2}+...+\frac{1}{x_n}}\]^{Hint & Answer}
10. ^{medium} If \(x,y \in \mathbb{N}^{*}\) and \(x \neq y\), prove that:
\[\frac{(x+y)^2}{x^3+xy^2-x^2y-y^3} \notin \mathbb{Z}\]^{Answer}
Try to find a way to write the expression as a product between two numbers.
Even if not obvious, the key to elegantly solving the problem is to use the QM-AM-GM-HM inequalities.
Try to express \(x^3-6x\) in terms of \(x^2+2x\).
Did you know that \(\lvert a + b \rvert \le \lvert a \rvert + \lvert b \rvert\) ?
Can you use the QM-AM-GM-HM inequalities?
Can you use the QM-AM-GM-HM inequalities?
Can you use the QM-AM-GM-HM inequalities?
Can you use problem 7?
The key to solving this problem is the CBS Inequality. Unfortunately the solution is much more subtle than anticipated, so don’t be upset if you don’t see it.
\(0\), \(1\), small and prime numbers are our friends, so if we can find an expression like
\[(m-\text{something})(n-\text{something}) = 0 \text{ (or a prime number)}\], we would already have put a powerful constraint on the nature of \(m\) and \(n\).
Our expression is:
\[9m^2+3n=n^2+8\]If we move all the terms involving \(m\) and \(n\) on the left side:
\[9m^2+3n-n^2=8\]Now let’s multiply each side by \(4\):
\[36m^2-4n^2+12n=32\]And then write the expression as a difference of two square numbers:
\[36m^2 - (2^2*n^2-2*2n*3+3^2)=23\]Our initial expression becomes:
\[(6m)^2 - (2n-3)^2=23 \Leftrightarrow \\ (6m-2n+3)(6m+2n-3)=1*23\]Since \(23\) is prime, there are only a few possibilities:
\[\begin{cases} 6m-2n+3=23 \\ 6m+2n-3=1 \end{cases}\]Or:
\[\begin{cases} 6m-2n+3=1 \\ 6m+2n-3=23 \end{cases}\]Or:
\[\begin{cases} 6m-2n+3=-1 \\ 6m+2n-3=-23 \end{cases}\]Or:
\[\begin{cases} 6m-2n+3=-23 \\ 6m+2n-3=-1 \end{cases}\]If we sum the two equations in each system, \(6m-2n+3+(6m+2n-3)=\pm 24 \Rightarrow m=\pm 2\).
If we substitute \(m=\pm 2\) back into the initial relationship, we will obtain the values of \(n\).
The solutions are: \((2,7), (-2,7), (2, -4), (-2, -4)\).
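As a quick sanity check, a brute-force scan in Python over a small window of integers (the window size is an arbitrary choice, just wide enough to catch everything the factorization predicts):

```python
# Scan a window of integers for solutions of 9m^2 + 3n = n^2 + 8.
solutions = sorted((m, n)
                   for m in range(-50, 51)
                   for n in range(-50, 51)
                   if 9*m*m + 3*n == n*n + 8)
print(solutions)
```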
This problem is beautiful; honestly, I had to look for some hints before solving it.
Intuitively, we observe the obvious \(x=y=z=1\) solution. But what if there are more?
\[\begin{cases} x^3y+3 \le 4z \\ y^3z+3 \le 4x \\ z^3x+3 \le 4y \end{cases}\]Becomes:
\[\begin{cases} x^3y \le 4z - 3 \\ y^3z \le 4x - 3\\ z^3x \le 4y - 3 \end{cases}\]Because \(x,y,z \in \mathbb{R_{+}^{*}}\) (a significant detail), we can multiply all our three inequalities to obtain:
\[x^4y^4z^4 \le (4x-3)(4y-3)(4z-3)\]And now, the subtle idea. According to QM-AM-GM-HM inequalities, it is obvious:
\[\frac{x^4+1^4}{2} \ge \sqrt{x^4*1^4} \Rightarrow \\ x^4+1 \ge 2x^2 \Rightarrow \\ x^4+3 \ge 2(x^2+1)\]By applying the QM-AM-GM-HM, it is known:
\[x^2+1 \ge 2*\sqrt{x^2*1^2} \Rightarrow \\ 2(x^2+1) \ge 4x\]So we can conclude that:
\[x^4+3 \ge 4x \Rightarrow x^4 \ge 4x-3\]The equality holds if \(x=1\).
Similarly, we obtain:
\[\begin{cases} x^4 \ge 4x - 3 \\ y^4 \ge 4y - 3\\ z^4 \ge 4z - 3 \end{cases}\]If we multiply all three inequalities, we will obtain:
\[x^4y^4z^4 \ge (4x-3)(4y-3)(4z-3)\]So now we have two inequalities:
\[\begin{cases} x^4y^4z^4 \le (4x-3)(4y-3)(4z-3) \\ x^4y^4z^4 \ge (4x-3)(4y-3)(4z-3) \end{cases}\]Because the two inequalities point in opposite directions, they can hold simultaneously only when equality holds in both.
So we can now conclude that \(x=y=z=1\).
Let’s introduce \(a\) and \(b\), so that \(a=x^2+2x \in \mathbb{Q}\) and \(b=x^3-6x \in \mathbb{Q}\).
Now we will try to express \(b\) in terms of \(a\) and \(x\):
\[b=x^3+2x^2-2x^2-4x-2x \Leftrightarrow \\ b=x(x^2+2x)-2(x^2+2x)-2x \Leftrightarrow \\ b=x*a-2a-2x \Leftrightarrow \\ b=x(a-2)-2a\]But \(b \in \mathbb{Q}\) and \(x \notin \mathbb{Q}\), so \(a-2=0\) (otherwise \(x=\frac{b+2a}{a-2} \in \mathbb{Q}\)), and \(b=-2a\).
If \(a-2=0 \Rightarrow a=2\), then \(x^2+2x-2=0\). We solve this equation, by finding \(\Delta=12\), with the final solutions for \(x\):
\[x=\frac{-2 \pm 2\sqrt{3}}{2} =-1 \pm \sqrt{3}\]For this, we need to apply the following fundamental inequality:
\[\lvert 1 + ab \rvert + \lvert a + b \rvert \ge \lvert 1 + ab + a + b \rvert \\ \lvert 1 + ab \rvert + \lvert a + b \rvert \ge \lvert 1 + ab - a - b \rvert \\\]If we multiply both inequalities, the problem is solved, since \(1+ab+a+b=(1+a)(1+b)\) and \(1+ab-a-b=(1-a)(1-b)\):
\[(\lvert 1 + ab \rvert + \lvert a + b \rvert)^2 \ge \lvert 1 + ab + a + b \rvert * \lvert 1 + ab - a - b \rvert \Leftrightarrow \\ \lvert 1 + ab \rvert + \lvert a + b \rvert \ge \sqrt{\lvert a^2 - 1 \rvert * \lvert b^2 - 1 \rvert}\]We need a way to link \(a+b\) and \(a^2+b^2\).
If we look at the QM-AM-GM-HM inequalities, we shall see that:
\[\frac{x_1+x_2+...+x_n}{n} \le \sqrt{\frac{x_1^2+x_2^2+...+x_n^2}{n}}\]For us, this means:
\[\frac{a+b}{2} \le \sqrt{\frac{a^2+b^2}{2}} \Leftrightarrow \\ (a+b)^2 \le 2(a^2+b^2) \Leftrightarrow \\ (a+b)^2 \le 4 \Leftrightarrow \\ \lvert a + b \rvert \le 2\]Luckily for us, \(a+b \in \mathbb{Z}\) so we can conclude that \(a+b \in \{-2,-1,0,1,2\}\).
Now, let’s try to create a second-degree equation in \(a\):
\[a^2+b^2=2 \Leftrightarrow \\ a^2+b^2+2ab-2ab+2a^2-2a^2=2 \Leftrightarrow \\ 2a^2 + (a+b)^2 - 2a(a+b)=2\]I think this trick is the hardest to think of. This type of intuition comes after lots of solved exercises. So don’t be upset if you haven’t spotted this.
If we substitute \(m=a+b\), our equation becomes:
\[2a^2 - 2am + m^2-2=0\]We need to find all the possible values of \(a\) for all the possible values of \(m\).
For example, if we pick the possible value \(m=-2\), our equation becomes \(2a^2 + 4a + 2 = 0\). The solution is \(a=-1 \Rightarrow b=-1\).
If we substitute all values of \(m\), we will find all of our solutions:
\[(-1,-1), (1,1), (\frac{1+\sqrt{3}}{2}, \frac{1-\sqrt{3}}{2}), \text{ ... and so on}\]Because of QM-AM-GM-HM inequalities, we know that:
\[2(a^2+b^2) \ge (a+b)^2\]This relationship leads to the simple conclusion:
\[2*n^3 \ge n^4\]But because \(n \in \mathbb{N}\) the only possible values for \(n\) are \(n \in \{0,1,2\}\).
If \(n=0\) then \(a=0\) and \(b=0\).
If \(n=1\) then \(a=1\) and \(b=0\).
If \(n=2\) then \(a=2\) and \(b=2\).
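A short exhaustive check in Python confirms the answer; the scan range for \(n\) is arbitrary, since the proof already bounds \(n \le 2\):

```python
# For each n, search integer pairs (a, b) with a + b = n^2 and a^2 + b^2 = n^3.
valid_n = []
for n in range(0, 20):
    if any(a*a + (n*n - a)**2 == n**3 for a in range(-n*n, n*n + 1)):
        valid_n.append(n)
print(valid_n)
```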
This one is simple:
\[\frac{a}{x} + \frac{b}{y} \ge \frac{4(ay+bx)}{(x+y)^2}\]Becomes:
\[\frac{ay+bx}{xy} \ge \frac{4(ay+bx)}{(x+y)^2} \Leftrightarrow \\ \frac{1}{xy} \ge \frac{4}{(x+y)^2} \Leftrightarrow \\ \frac{(x+y)^2}{2^2} \ge xy \Leftrightarrow \\ \frac{x+y}{2} \ge \sqrt{xy}\]This exercise would be much more laborious without using Problem 7.
But we already know from the previous answer that \(\frac{a}{x}+\frac{b}{y} \ge \frac{4(ay+bx)}{(x+y)^2}\), \(\forall a,b,x,y \gt 0\).
So, let’s re-group wisely our terms:
\[\frac{a}{\underbrace{b+2c+d}_{x}}+\frac{c}{\underbrace{d+2a+b}_{y}}+\frac{b}{c+2d+a}+\frac{d}{a+2b+c} \ge 1\]Because of the previous problem, we know that:
\[\frac{a}{\underbrace{b+2c+d}_{x}}+\frac{c}{\underbrace{d+2a+b}_{y}} \ge \frac{ 4(a(d+2a+b) + c(b+2c+d)) }{(2a+2b+2c+2d)^2}\]This can be further simplified to:
\[\frac{a}{\underbrace{b+2c+d}_{x}}+\frac{c}{\underbrace{d+2a+b}_{y}} \ge \frac{2a^2+2c^2+ab+bc+cd+da}{(a+b+c+d)^2} \text{ (*)}\]In a similar fashion:
\[\frac{b}{c+2d+a}+\frac{d}{a+2b+c} \ge \frac{2b^2+2d^2+ab+bc+cd+da}{(a+b+c+d)^2} \text{ (**)}\]If we sum \(\text{(*)}\) and \(\text{(**)}\):
\[\frac{a}{b+2c+d}+\frac{c}{d+2a+b}+\frac{b}{c+2d+a}+\frac{d}{a+2b+c} \ge \\ \ge \frac{2a^2+2c^2+ab+bc+cd+da+2b^2+2d^2+ab+bc+cd+da}{(a+b+c+d)^2}\]But we know that \((a+b+c+d)^2=a^2+b^2+c^2+d^2+2(ab+ac+ad+bc+bd+cd)\).
So, with enough patience, we can group things in the previous fraction as follows:
\[\frac{a}{b+2c+d}+\frac{c}{d+2a+b}+\frac{b}{c+2d+a}+\frac{d}{a+2b+c} \ge \frac{(a+b+c+d)^2+(a-c)^2+(b-d)^2}{(a+b+c+d)^2} \\\]Eventually, we would obtain something like this:
\[1+\frac{(a-c)^2+(b-d)^2}{(a+b+c+d)^2} \ge 1\]The last relationship is obviously true, \(\forall a,b,c,d \gt 0\).
This is by no means an easy problem. Of course, once you see the solution, it doesn’t look that hard, but coming up with the idea in the first place is more complicated.
We introduce the following notation:
\[\begin{cases} s_1=x_1 \\ s_2=x_2+x_1 \\ s_3=x_3+x_2+x_1 \\ \text{...} \\ s_n=x_n+x_{n-1}+...+x_1 \end{cases}\]Because \(x_1, x_2, ..., x_n\) are strictly positive we can conclude that \(s_1 \lt s_2 \lt s_3 ... \lt s_n\).
Another important aspect is the fact:
\(x_k=s_k-s_{k-1}\), \(\forall k \ge 2\), and \(x_1=s_1\).
With this in mind, we can re-write everything as:
\[\frac{1}{1+s_1}+\frac{1}{1+s_2}+...+\frac{1}{1+s_n} \lt \sqrt{\frac{1}{s_1}+\frac{1}{s_2-s_1}+...+\frac{1}{s_n-s_{n-1}}}\]The above relationship is the inequality we need to prove, written in the new notation.
Now, we can write each term from the left side, \(\frac{1}{1+s_k}\), as a product:
\[\frac{1}{1+s_1}=\frac{1}{\sqrt{s_1}}*\frac{\sqrt{s_1}}{1+s_1}\] \[\frac{1}{1+s_k}=\frac{1}{\sqrt{s_k-s_{k-1}}}*\frac{\sqrt{s_k-s_{k-1}}}{1+s_k}\]According to CBS inequality we know the following is true:
\[(\frac{1}{1+s_1}+...+\frac{1}{1+s_n})^2 \le (\frac{1}{s_1}+\frac{1}{s_2-s_1}+...+\frac{1}{s_n-s_{n-1}})(\frac{s_1}{(1+s_1)^2}+\frac{s_2-s_1}{(1+s_2)^2}+...+\frac{s_n-s_{n-1}}{(1+s_n)^2})\]We can safely take the square root of both sides to obtain:
\[\frac{1}{1+s_1}+...+\frac{1}{1+s_n} \le \sqrt{(\frac{1}{s_1}+\frac{1}{s_2-s_1}+...+\frac{1}{s_n-s_{n-1}})}*\sqrt{(\frac{s_1}{(1+s_1)^2}+\frac{s_2-s_1}{(1+s_2)^2}+...+\frac{s_n-s_{n-1}}{(1+s_n)^2})}\]We are almost there.
If we somehow manage to prove that: \(\frac{s_1}{(1+s_1)^2}+\frac{s_2-s_1}{(1+s_2)^2}+...+\frac{s_n-s_{n-1}}{(1+s_n)^2} \le 1\), the problem is solved.
But remember \(s_1 \lt s_2 \lt ... \lt s_n\), so we can safely say:
\[\frac{s_1}{(1+s_1)^2}+\frac{s_2-s_1}{(1+s_2)^2}+...+\frac{s_n-s_{n-1}}{(1+s_n)^2} \le \frac{s_1}{(1+s_1)} + \frac{s_2-s_1}{(1+s_1)(1+s_2)}+...+\frac{s_n-s_{n-1}}{(1+s_{n-1})(1+s_{n})}\]This becomes:
\[\frac{s_1}{(1+s_1)^2}+\frac{s_2-s_1}{(1+s_2)^2}+...+\frac{s_n-s_{n-1}}{(1+s_n)^2} \le 1 - \frac{1}{1+s_1} + \frac{1}{1+s_1} - \frac{1}{1+s_2} + ... - \frac{1}{1+s_n}\]Eventually, after the terms cancel one by one:
\[\frac{s_1}{(1+s_1)^2}+\frac{s_2-s_1}{(1+s_2)^2}+...+\frac{s_n-s_{n-1}}{(1+s_n)^2} \le 1 - \frac{1}{1+s_n} \lt 1\]At this point, everything is proven.
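For the skeptical, here is a randomized numeric spot-check of the inequality, sketched in Python; the sampling ranges and trial counts are arbitrary choices of mine:

```python
# Numerically spot-check the strict inequality on random positive inputs.
import math
import random

random.seed(0)

def sides(xs):
    """Return (LHS, RHS) of the inequality for strictly positive xs."""
    partial, lhs = 0.0, 0.0
    for x in xs:
        partial += x                  # s_k = x_1 + ... + x_k
        lhs += 1.0 / (1.0 + partial)  # accumulate 1/(1 + s_k)
    return lhs, math.sqrt(sum(1.0 / x for x in xs))

trials = [[random.uniform(0.01, 10.0) for _ in range(random.randint(1, 8))]
          for _ in range(1000)]
holds = all(l < r for l, r in map(sides, trials))
print(holds)
```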
We can write our relationship as:
\[m=\frac{(x+y)^2}{x^3+xy^2-x^2y-y^3} = \frac{(x+y)^2}{(x-y)(x^2+y^2)}.\]Let’s suppose the opposite and affirm that \(m \in \mathbb{Z}\).
If \(m \in \mathbb{Z}\), then \(m(x-y)=\frac{(x+y)^2}{x^2+y^2} \in \mathbb{Z}\).
But we can write the fraction as:
\[\frac{(x+y)^2}{x^2+y^2} = 1 + \frac{2xy}{x^2+y^2} \in \mathbb{Z}\]Then \(\frac{2xy}{x^2+y^2}\) must be an integer.
But we know for certain that \(x^2+y^2 \gt 2xy \Rightarrow 0 \lt \frac{2xy}{x^2+y^2} \lt 1\), but this is a contradiction.
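The contradiction can be verified exactly on a small (arbitrary) range with Python's `fractions` module:

```python
# 2xy/(x^2+y^2) is strictly between 0 and 1 when x != y, so the full
# fraction (x+y)^2 / ((x-y)(x^2+y^2)) is never an integer; check exactly.
from fractions import Fraction

never_integer = all(
    Fraction((x + y)**2, x**3 + x*y*y - x*x*y - y**3).denominator != 1
    for x in range(1, 30) for y in range(1, 30) if x != y)
print(never_integer)
```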
^{“In the game of Quantum Soccer, the aim is to shape the wave function of a quantum-mechanical “ball” so that the probability of it being inside one of the goals rises above a set threshold. This is achieved by using the motion of the players to alter the energy spectrum of the wave function: when a player moves across the field, the energy that this action provides (or absorbs) enables transitions between certain modes of the wave function. The pairs of modes involved depend on the player’s velocity; the exact rules are spelt out in the mathematical details, but it’s easy to experiment using trial and error.”}
^{You have to cancel that noise.}
^{Just if you want to try some operating systems in your browser. After all, there’s nothing more rewarding than playing a game of Hearts in Windows 95.}
The surreal experience of my first developer job
^{A fantastic story about a developer’s life working at Skill Buy}
^{A collection of retro PC magazines.}
Kalman Filter Explained Simply
^{As the title says.}
Finite volume solver for incompressible multiphase flows with surface tension.
^{A GitHub gem. Take a look at some examples.}
Reality as a vector in Hilbert Space
^{“By taking the prospect of emergence seriously, and acknowledging that our fondness for attributing metaphysical fundamentality to the spatial arena is more a matter of convenience and convention than one of principle, it is possible to see how the basic ingredients of the world might be boiled down to a list of energy eigenvalues and the components of a vector in Hilbert space. If it did succeed, this project would represent a triumph of unification and simplification, and is worth taking seriously for that reason alone.”}
^{Ok.}
^{The world wastes a minimum of $100M annually due to inefficient string operations.}
How Google helped destroy adoption of RSS feeds
^{“Although RSS feeds are alive and still heavily used today, their level of adoption has suffered because of how difficult a handful of popular technology companies have made it to use them. Google, especially, has relied on the open web RSS protocol to gain so much market share and influence, but continues to engage in behavior that exploits the open web at the expense of its users. As a result, Google has single-handedly contributed to the reason many users who once relied on RSS feeds have stopped using them.”}
The Projects of Daniel D. Johnson
^{The type of page I like to visit.}
^{ Because it’s not how we would expect.}
^{ It never ends. }
The problem was asked during the First Round of the Spanish Math Olympiad in 1988, and if you own the book, you can find it on page 9.
The integers \(1,2,3,...,n^2\) are arranged to form the \(n \times n\) matrix:
\[A=\begin{pmatrix} 1 & 2 & ... & n\\ n+1 & n+2 & ... & 2n \\ ... & ... & ... & ... \\ (n-1)n+1 & ... & ... & n^2 \end{pmatrix}\]A sum \(S_A\) is constructed as follows:
We pick an entry \(x_1\), then delete the row and the column containing it. From the remaining \((n-1) \times (n-1)\) matrix we pick an entry \(x_2\) and again delete its row and column, and so on, until a single entry \(x_n\) remains. Let \(S_A=x_1+x_2+...+x_n\). Prove that \(S_A\) is the same no matter which entries \(x_1, x_2, ...\) are taken.
Excellent problem, isn’t it?
The first thing I did was check and see how everything worked, so I started with a \(3 \times 3\) matrix:
\[A=\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \\ \end{pmatrix}\]I randomly selected \(3\), then removed its row and column. The matrix \(A\) becomes: \(\begin{pmatrix} 4 & 5 \\ 7 & 8 \end{pmatrix}\). Then I randomly selected \(7\), removed its row and column, so that \(A\) becomes \(A=\begin{pmatrix} 5 \end{pmatrix}\). On this run the sum \(S_A=3+7+5=15\).
I did try a second run \(S_A=5+1+9\), and a third run \(S_A=6+2+7\).
For a \(3 \times 3\) matrix, the sum is \(S_A=15\), no matter what we do.
At this point, it’s worth noting that if we examine things from a column perspective, each time we select a number from a different column. Similarly, if we look at it from a row perspective, each time we select a number from a different row than the previous ones.
So, the intuition says there’s something to do with the positions of the numbers and not the numbers themselves. That would be incorrect, but it led me in the right direction.
For example, if we pick a matrix \(B\) that’s slightly different than \(A\), and comes in the form:
\[B=\begin{pmatrix} 1 & 2 & 3 & ... & n \\ 1 & 2 & 3 & ... & n \\ ... & ... & ... & ... & ... \\ 1 & 2 & 3 & ... & n \\ \end{pmatrix}\]If we apply the same algorithm to \(B\), our sum will always be:
\[S_B=1+2+3+...+n=\frac{n(n+1)}{2}\]The reason is simple: before we exhaust matrix \(B\), we visit all the possible columns without repetition. Of course, the order can differ, but the sum remains the same (addition is commutative).
So, half of the problem is solved. We know that the sum is constant for a matrix \(B\). We need to find a relationship between the terms of \(B\) and \(A\) and show that something similar is happening for \(A\).
The relationship between the two matrices is the following: an entry of \(A\) on row \(i\) equals the corresponding entry of \(B\) plus \((i-1)n\).
We can generalize: \(a_{ij} = b_{ij} + (i-1)n = j + (i-1)n\).
So if we consider \(a_1, a_2, ..., a_n\) the chosen elements for \(S_A\) and \(b_1, b_2, ..., b_n\) the corresponding elements for \(S_B\), each \(a_k\) equals \(b_k\) plus a distinct multiple of \(n\) (each selection comes from a different row):
If we sum \(S_A = a_1 + a_2 + ... + a_n\) we obtain:
\[S_A = (b_1+0*n) + (b_2 + n) + (b_3 + 2*n) + ... + (b_n + (n-1)*n) \Leftrightarrow \\ S_A = S_B + n(1+2+3...+(n-1)) \Leftrightarrow \\ S_A = \frac{n(n+1)}{2} + n*\frac{n(n-1)}{2} \Leftrightarrow \\ S_A = \frac{n(n^2+1)}{2} \\\]So \(S_A\) is a constant that depends on \(n\).
This explains why the result was always 15 for the \(3 \times 3\) matrix we’ve picked.
If you give the problem some thought, you will see that it holds not only for consecutive numbers but for any numbers in arithmetic progression.
So, as long as the matrix \(A\) is in this form:
\[A=\begin{pmatrix} a_1 & a_2 & a_3 & ... & a_n \\ a_{n+1} & ... & ... & ... & a_{2n} \\ ... & ... & ... & ... & ... \\ ... & ... & ... & ... & a_{n^2} \\ \end{pmatrix}\]And \(a_1, a_2, ..., a_{n^2}\) are in arithmetic progression, so that \(a_k=a_{k-1}+d\), where \(d\) is the common difference, the sum \(S_A\) (as previously defined) is constant.
If we pick a \(3 \times 3\) example of a matrix \(A\) with numbers in arithmetic progression:
\[A=\begin{pmatrix} 1 & 3 & 5 \\ 7 & 9 & 11 \\ 13 & 15 & 17 \end{pmatrix}\]We observe that the sum, \(S_A\), is constant: \(S_A^1=1+9+17\), \(S_A^2=5+9+13\), etc. They are all 27.
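For readers who want to double-check numerically, here is a short brute-force sketch in Python. Removing a row and a column at every step means the chosen entries contain exactly one entry per row and per column, i.e. a permutation of the columns:

```python
# Enumerate every valid selection (one entry per row and per column) and
# collect the distinct sums; each example matrix should yield a single value.
from itertools import permutations

def all_sums(matrix):
    n = len(matrix)
    return {sum(matrix[r][c] for r, c in enumerate(cols))
            for cols in permutations(range(n))}

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]          # consecutive integers
B = [[1, 3, 5], [7, 9, 11], [13, 15, 17]]      # arithmetic progression, d=2
print(all_sums(A), all_sums(B))
```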
Proving this is an exercise left for the reader.
This is a “follow-up” to the previous article: “The math exams of my life”, as some readers were curious to see some examples of Math Olympiad exercises.
This is a selection of cute, non-trivial algebra problems (with a hint of number theory) compiled from the Romanian Math Olympiad (regional phase or faza judeteana) for 8^{th}, 9^{th}, and 10^{th} graders (13-15 years old).
The solutions are surprising and involve a good understanding of algebraic concepts, pattern spotting, or tricks that, in the long run, help students develop mathematical intuition.
Depending on your passion for mathematics (or competitive mathematics), the problems should pose enough difficulty to keep you entertained for a few hours. If you are stuck with one problem, try to read the hint instead of going straight to the answer.
In case you want to solve them by yourself, do a short recap on the following subjects:
The main topic of this problem set is: “Inequalities”.
There’s a follow-up article - here.
I have a few notebooks containing solutions for various Math problems I’ve solved over the years (recreational mathematics). If time allows, I will publish more lists covering more topics. Currently, I am in the process of grouping them into categories.
^{“Screenshot” from one of my notebooks.}
The problems are from the “Regional” Phase of the Romanian Math Olympiad. The truly difficult problems are usually the ones from the “National” phase. I am planning to publish a list of those as well.
Another important aspect is that I am not a mathematician, so if you see incorrect solutions or have better ones, please send me some feedback.
1. ^{easy} Compute \(S=1-4+9-16+...+99^2-100^2\). ^{Hint & Answer}
2. ^{easy} Determine the smallest element of the set \(\{ab \text{ } \lvert \text{ } a,b \in \mathbb{R} \text{ and } a^2 + 2b^2=1\}\). ^{Hint & Answer}
3. ^{easy} What is the cardinality of the following set: \(\{x \in \mathbb{R} \lvert [\frac{x+1}{5}]=\{\frac{x-1}{2}\} \}\)? \(\{ a \}\) is the fractional part of the number \(a \in \mathbb{R}\), and \([a]\) is the integer part of \(a \in \mathbb{R}\). ^{Hint & Answer}
4. ^{easy} Find all the elements of the set \(\{[\frac{3}{2x}] \lvert x \in \mathbb{R} \text{ and } \frac{1}{[x]} + \frac{1}{\{x\}}=2x \}\). \(\{ a \}\) is the fractional part of the number \(a \in \mathbb{R}\), and \([a]\) is the integer part of \(a \in \mathbb{R}\). ^{Hint & Answer}
5. ^{easy} Given \(a,b,c \in \mathbb{R}^{*}\), we know that \((a,b,c)\) are part of an arithmetic progression, \((ab, bc, ca)\) are part of a geometric progression, and \(a+b+c=ab+bc+ca\). The triplets in the set \(M=\{(a,b,c)\ \lvert a,b,c \in \mathbb{R}^{*}\}\) satisfy all the conditions mentioned before.
Compute \(\sum_{(a,b,c) \in M} (\lvert a \rvert + \lvert b \rvert + \lvert c \rvert)\). ^{Hint & Answer}
6. ^{easy} For the following sequence of numbers \((a_n)_{n \ge 1}\), \(a_1=2\), \(a_2=6\), and \(a_{n+1}=\frac{a_n}{a_{n-1}}\), \(n \ge 2\), compute \(a_{2021}\). ^{Hint & Answer}
7. ^{medium} Find all numbers \(k \in \mathbb{Z}\), so that \(a^4+b^4+c^4+d^4+k*abcd \ge 0\), \(\forall a,b,c,d \in \mathbb{R}^{*}\). ^{Hint & Answer}
8. ^{medium} If \(x,y,z,t \in \mathbb{R}\), and \((x-3y+6z-t)^2 \ge 2021\) and \(x^2+y^2+z^2+t^2 \le 43\), then what is the value of the expression: \(\lvert x + y + z + t \rvert\) ? ^{Hint & Answer}
9. ^{medium} Considering \(x^2 + (a+b+c)x + k(ab+bc+ca) = 0\), where \(a,b,c \in \mathbb{R}_{+}^{*}\) and \(k \in \mathbb{R}\), prove that \(\forall k \le \frac{3}{4}\) the equation has all its solutions in \(\mathbb{R}\). ^{Hint & Answer}
10. ^{easy} Prove that \([\frac{x+3}{6}] - [\frac{x+4}{6}] + [\frac{x+5}{6}] = [\frac{x+1}{2}] - [\frac{x+1}{3}]\) is true, \(\forall x \in \mathbb{R}\). ^{Hint & Answer}
11. ^{medium} Prove that if \(\sum_{k=1}^{n} a_k = \sum_{k=1}^{n} a_k^2\) then \(\sum_{k=1}^{n} a_k \le n\), with \(a_k \in \mathbb{R}_{+}\). ^{Hint & Answer}
12. ^{medium} If \(a^2+b^2+c^2=3\), prove \(\lvert a \rvert + \lvert b \rvert + \lvert c \rvert - abc \le 4\), where \(a,b,c \in \mathbb{R}\). ^{Hint & Answer}
13. ^{hard} Prove \(\frac{a+b}{c^2}+\frac{b+c}{a^2}+\frac{c+a}{b^2} \ge 2(\frac{1}{a}+\frac{1}{b}+\frac{1}{c})\) if \(a,b,c \in \mathbb{R}_{+}^{*}\). ^{Hint & Answer}
14. ^{easy} Prove that if \(x \in \mathbb{R}\) and \(x^2+x \in \mathbb{Q}\) and \(x^3+2x \in \mathbb{Q}\), then \(x \in \mathbb{Q} \subset \mathbb{R}\). ^{Hint & Answer}
15. ^{medium} Given \(a,b \in \mathbb{R}\), we know \(3^a+13^b=17^a\), and \(5^a+7^b=11^b\). Prove \(a \lt b\). ^{Hint & Answer}
16. ^{medium} For \(n \in \mathbb{N}, n \ge 2\), let \(u(n)\) be the biggest prime number \(\le n\) and \(v(n)\) be the smallest prime number \(\gt n\). Prove:
\[\frac{1}{u(2)*v(2)}+\frac{1}{u(3)*v(3)}+...+\frac{1}{u(2010)*v(2010)}=\frac{1}{2}-\frac{1}{2011}\]^{Hint & Answer}
17. ^{medium} Prove \(\frac{1}{x^2+yz}+\frac{1}{y^2+xz}+\frac{1}{z^2+xy} \le \frac{1}{2} (\frac{1}{xy}+\frac{1}{yz}+\frac{1}{xz})\), \(\forall x,y,z \in \mathbb{R}_{+}^*\). ^{Hint & Answer}
18. ^{medium} For \(a,b,c \in (0,1) \subset \mathbb{R}\), \(x,y,z \in (0, \infty) \subset \mathbb{R}\), if:
\[\begin{cases} a^x = bc \\ b^y = ca \\ c^z = ab \end{cases}\]Prove that:
\[\frac{1}{2+x} + \frac{1}{2+y} + \frac{1}{2+z} \le \frac{3}{4}\]^{Hint & Answer}
19. ^{easy} If \(x,y,z \in \mathbb{R}_{+}^{*}\), and \(xy=\frac{z-x+1}{y}=\frac{z+1}{2}\), prove that one of the numbers is the arithmetic mean of the other two. ^{Answer}
20. ^{medium} If \(a,b,c \in (1, \infty)\) or \(a,b,c \in (0,1)\), prove:
\[\log_a(bc) + \log_b(ca) + \log_c(ab) \ge 4(\log_{ab}(c) + \log_{bc}(a) + \log_{ca}(b))\]^{Hint & Answer}
Try playing with Faulhaber’s formula.
Find a way to introduce \(ab\) in the given equality \(a^2 + 2b^2=1\).
Try to get rid of the fractional part.
Try to do some substitutions based on the fact \(x=[x]+\{x\}\).
Use \(a\) to express \(b\) and \(c\).
Look for any patterns by computing the first few terms of the sequence.
Give meaningful values to \(a,b,c,d\) and see what’s happening.
Have you considered AM-GM inequality?
Can you use the Cauchy–Bunyakovsky–Schwarz inequality to solve the problem?
Can you use the Rearrangement inequality to solve the problem?
Can you use Hermite’s identity to solve the problem?
Can you use the Cauchy–Bunyakovsky–Schwarz inequality to solve the problem?
Can you use both CBS and AM-GM inequalities?
Can you prove first \(\frac{x}{y^2}+\frac{y}{x^2} \ge \frac{1}{x} + \frac{1}{y}\) ?
Try expressing \(x\) as a relationship between two rational numbers.
Think in terms of monotonically increasing and monotonically decreasing functions.
How many times does a term \(\frac{1}{u(n)*v(n)}\) appear in the sum?
Can you find a way to use the AM-GM inequality?
Work on the expressions involving logarithms. Consider changing the base of the logarithms to a common one.
This exercise is easy, so it doesn’t deserve a hint.
Consider changing the base of the logarithms to a common one.
We write our sum as:
\[S=(1^2-2^2)+(3^2-4^2)+...+(99^2-100^2)\]There is a formula for the difference of two square numbers:
\[a^2-b^2=(a-b)(a+b)\]Our \(S\) is a sum of differences between subsequent square numbers:
\[S=(1-2)(1+2)+(3-4)(3+4)+...+(99-100)(99+100) \Leftrightarrow\] \[S=-3-7-11-...-199\]If you have a keen eye, you will notice the numbers \(3,7,11,...,199\) have the form \(3+4k\), \(k=0..49\).
\[S=-(3+0*4)-(3+1*4)-(3+2*4)-...-(3+49*4) \Leftrightarrow\] \[S=-3*50-4(0+1+2+...+49) \Leftrightarrow\]The sum of the first \(n\) natural numbers is \(\sum_{k=1}^{n}k=\frac{n(n+1)}{2}\), so:
\[S=-150-4*\frac{49(49+1)}{2}\]The final answer \(S=-150-4900=-5050\).
If you are familiar with Faulhaber’s formula, you know that:
\[\sum_{k=1}^n k = \frac{n(n+1)}{2}\] \[\sum_{k=1}^n k^2 = \frac{n(n+1)(2n+1)}{6}\]We can cleverly use the two formulas and reimagine our \(S\) to be:
\[S=\underbrace{(1^2+2^2+3^2+...+100^2)}_{\sum_{k=1}^{100} k^2} - \underbrace{2*(2^2+4^2+6^2+..+100^2)}_{\sum_{k=1}^{50} (2k)^2} \Leftrightarrow\] \[S=\frac{100*101*201}{6}-2*(2^2*1^2+2^2*2^2+2^2*3^2+...+2^2*50^2) \Leftrightarrow\] \[S=\frac{100*101*201}{6}-8*\frac{50*51*101}{6}\]Final answer \(S=-5050\).
The intuition begs us to find a way to link our existing relationship to a term containing \(ab\). In this regard:
\[1=a^2+2b^2=a^2 + (b\sqrt{2})^2 \Leftrightarrow\] \[1=a^2 + \underbrace{2*\sqrt{2}ab}_{\text{we add this}} + (b\sqrt{2})^2 - \underbrace{2*\sqrt{2}ab}_{\text{to remove it later}} \Leftrightarrow\] \[1=(a+b\sqrt{2})^2 - 2\sqrt{2}ab \Leftrightarrow\] \[1+2\sqrt{2}ab=(a+b\sqrt{2})^2\]But we know that:
\[(a+b\sqrt{2})^2 \geq 0 \Rightarrow\] \[1+2\sqrt{2}ab \geq 0 \Rightarrow\] \[ab \geq \frac{-1}{2\sqrt{2}}\]But is it possible for \(ab=\frac{-1}{2\sqrt{2}}\). Yes, \(ab=\frac{-1}{2\sqrt{2}}\) when \((a+b\sqrt{2})^2=0\).
All in all, the smallest element of our set is \(\frac{-1}{2\sqrt{2}}\).
If \(a \in \mathbb{R}\), then:
\[a=[a]+\{a\} \Leftrightarrow\] \[\{a\}=a-[a]\]With this in mind, we want to get rid of the wild fractional part \(\{a\}\) from our relationship:
\[[\frac{x+1}{5}]=\frac{x-1}{2} - [\frac{x-1}{2}] \Leftrightarrow\] \[\underbrace{[\frac{x+1}{5}]}_{\in \mathbb{Z}}+\underbrace{[\frac{x-1}{2}]}_{\in \mathbb{Z}}=\frac{x-1}{2}\]We can safely say that \(\frac{x-1}{2}=k \in \mathbb{Z}\).
We substitute \(x=2k+1\) in the original relation:
\[[\frac{2*k+1+1}{5}] + [\frac{2*k+1-1}{2}] = k \Leftrightarrow\] \[[\frac{2k+2}{5}] + k = k \Leftrightarrow\] \[[\frac{2k+2}{5}] = 0\]Because \([\frac{2k+2}{5}] = 0\), then:
\[0 \leq \frac{2k+2}{5} \lt 1 \Leftrightarrow\] \[-1 \leq k \lt \frac{3}{2}\]There are \(3\) numbers \(k \in \mathbb{Z}\) that satisfy the relationship: \(\{-1,0,1\}\).
But, \(x=2k+1 \Rightarrow x \in \{-1, 1, 3\}\).
Testing our solutions:
\[x = -1 \Rightarrow [\frac{-1+1}{5}] = \{ \frac{-1-1}{2} \} \Leftrightarrow [0] = \{ -1 \} \text{ is true}\] \[x = 1 \Rightarrow [\frac{1+1}{5}] = \{ \frac{1-1}{2} \} \Leftrightarrow [\frac{2}{5}] = \{ 0 \} \text{ is true}\] \[x = 3 \Rightarrow [\frac{3+1}{5}] = \{ \frac{3-1}{2} \} \Leftrightarrow [\frac{4}{5}] = \{ 1 \} \text{ is true}\]So the cardinality is \(3\).
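The same check can be scripted in Python. Scanning only odd integers is justified by the derivation above, which shows \(\frac{x-1}{2}\) must be an integer; the window size is arbitrary:

```python
# Verify [(x+1)/5] = {(x-1)/2} over odd integers in a small window;
# floor() is the integer part, t - floor(t) the fractional part.
import math

def frac(t):
    return t - math.floor(t)

sols = [x for x in range(-21, 23, 2)
        if math.floor((x + 1) / 5) == frac((x - 1) / 2)]
print(sols)
```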
To avoid division by zero in \(\frac{3}{2x}\), we know that \(x \neq 0\).
Note: The fractional part operation (denoted by \(\{a\}\)) is not distributive over multiplication, meaning \(\{a*b\} \neq \{a\} * \{b\}\), so avoid making any further assumptions about \(\frac{1}{\{x\}}\).
Firstly, \(\frac{1}{[x]} + \frac{1}{\{x\}}=2x\) is equivalent to \(\frac{x}{[x]\{x\}}=2x\), is equivalent to \(x(2[x]\{x\}-1)=0\).
Because \(x \neq 0\), then \(2[x]\{x\}=1\), or \(\{x\}=\frac{1}{2[x]}\).
We substitute \([x]=n\), so that \(\{x\}=\frac{1}{2n}\), where \(n \in \mathbb{Z}\) and \(n \geq 1\).
\(x=[x]+\{x\}\) becomes \(x=n+\frac{1}{2n}=\frac{2n^2+1}{2n}\).
The term defining the set becomes \([\frac{3}{2x}] \rightarrow [\frac{3}{2*\frac{2n^2+1}{2n}}]=[\frac{3n}{2n^2+1}]\).
For \(n = 1\), \(\frac{3n}{2n^2+1}=\frac{3}{3}=1\), so \([\frac{3}{2x}]=1\).
For \(n \ge 2\), we have \(2n^2+1-3n=(2n-1)(n-1) \gt 0\), so \(0 \lt \frac{3n}{2n^2+1} \lt 1\) and \([\frac{3}{2x}]=0\).
So the set \(\{[\frac{3}{2x}] \mid x \in \mathbb{R} \text{ and } \frac{1}{[x]} + \frac{1}{\{x\}}=2x \} = \{0, 1\}\).
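The first couple hundred values of \(n\) can be checked with exact arithmetic (a sketch; the bound 200 is arbitrary):

```python
from fractions import Fraction
from math import floor

values = set()
for n in range(1, 201):            # [x] = n >= 1
    x = n + Fraction(1, 2 * n)     # x = (2n^2 + 1) / (2n)
    # sanity-check that x really satisfies 1/[x] + 1/{x} = 2x
    assert Fraction(1, floor(x)) + 1 / (x - floor(x)) == 2 * x
    values.add(floor(Fraction(3, 2) / x))
print(values)
```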
\((a,b,c)\) are in an arithmetic progression \(\Rightarrow b=\frac{a+c}{2}\).
\((ab, bc, ac)\) are in a geometric progression \(\Rightarrow (bc)^2 = ab*ac\).
With this in mind, we have the following relationships:
\[\begin{cases} b=\frac{a+c}{2} \\ (bc)^2 = ab*ac \\ a+b+c=ab+bc+ca \end{cases}\]It doesn’t look like it, but we have enough information to determine \((a,b,c)\)
\[\begin{cases} 2b=a+c \\ bc=a^2 \Leftrightarrow c=\frac{a^2}{b}\\ 3b=ab+bc+ac \end{cases}\]But we can also write \(2b=a+\frac{a^2}{b}\).
Putting all together in the last equation:
\[3b=ab+\underbrace{bc}_{a^2}+a\underbrace{c}_{\frac{a^2}{b}} \Leftrightarrow \\ 3b=ab+a^2+\frac{a^3}{b} \Leftrightarrow \\ 3b=a(\underbrace{a+\frac{a^2}{b}}_{2b})+ab \Leftrightarrow \\ 3b=2ab+ab\]But because \(b \in \mathbb{R}^{*}\) and \(3b=3ab\), we can divide by \(3b\) and conclude \(a=1\).
If \(a=1\), the relationship \(2b=a+c\) becomes \(2b=1+\frac{1}{b} \Leftrightarrow 2b^2-b-1=0\).
\(2b^2-b-1=0\) can also be written as \(2b^2+b-(2b+1)=0\) or \(b(2b+1)-(2b+1)=0 \Leftrightarrow (b-1)(2b+1)=0\), so \(b=1\) or \(b=-\frac{1}{2}\).
For \(b=1\), \(c=1\), and for \(b=-\frac{1}{2}\), \(c=-2\).
So our triplets are \((1,1,1)\) or \((1,-\frac{1}{2}, -2)\).
Computing the sum is trivial: \(\sum=\frac{13}{2}\).
The key to solving this exercise is to compute the first terms of the sequence:
\[\begin{cases} a_1=2 \\ a_2=6 \\ a_3=\frac{6}{2}=3 \\ a_4=\frac{3}{6}=\frac{1}{2} \\ a_5=\frac{\frac{1}{2}}{3}=\frac{1}{6} \\ a_6=\frac{\frac{1}{6}}{\frac{1}{2}}=\frac{1}{3} \\ a_7=\frac{\frac{1}{3}}{\frac{1}{6}}=2 \\ a_8=\frac{2}{\frac{1}{3}}=6 \\ \text{... and so on} \end{cases}\]We can see that after every 6 terms, the values repeat themselves. Using mathematical induction, we can prove that our sequence is:
\[S(n)= \begin{cases} 2 \text{ ,}\text{ if }n=6*k+1 \\ 6 \text{ ,}\text{ if }n=6*k+2 \\ 3 \text{ ,}\text{ if }n=6*k+3 \\ \frac{1}{2} \text{ ,}\text{ if }n=6*k+4 \\ \frac{1}{6} \text{ ,}\text{ if }n=6*k+5 \\ \frac{1}{3} \text{ ,}\text{ if }n=6*k \end{cases} \text{, where } k \in \mathbb{N}\]Final answer: since \(2021=6*336+5\), \(a_{2021} = \frac{1}{6}\).
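The recurrence (each term divided by the one before it, \(a_n = \frac{a_{n-1}}{a_{n-2}}\), consistent with the computations above) is easy to verify with exact arithmetic:

```python
from fractions import Fraction

a = [None, Fraction(2), Fraction(6)]   # 1-indexed: a[1] = 2, a[2] = 6
for _ in range(3, 2022):
    a.append(a[-1] / a[-2])            # a_n = a_{n-1} / a_{n-2}

print(a[1:7])    # one full period of length 6
print(a[2021])   # the requested term
```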
When you have problems like this, you need to consider giving specific values to \(a,b,c,d\) that can help you narrow down your search.
For example, we know the inequality should hold regardless of \(a, b, c, d\) (\(\forall\)), so let’s assume the following:
\[a=b=c=d=n\]Then:
\[4*n^4+k*n^4 \ge 0 \Leftrightarrow \\ k \ge -4\]This already tells us that we need to look somewhere in \(k \in [-4, \infty) \cap \mathbb{Z}\).
On my first try, I used \(a^2=b^2=m\) and \(c^2=d^2=n\), so I’ve got something like \((n\sqrt{2} - m\sqrt{2})^2 + mn*(k+4) \ge 0\).
Now, we need to find an upper bound for \(k\). For this, we need to apply some tricks to obtain:
\[\text{something positive} + (\text{an upper bound}-k)*(\text{something positive}) \ge 0\]So why don’t we pick:
\[\begin{cases} a=b=c=n \\ d=-n \end{cases}\]Then our relationship becomes:
\[4*n^4 - k*n^4 \ge 0 \Leftrightarrow \\ k \le 4\]So, at this point, we know that \(k \in [-4, 4] \cap \mathbb{Z}\). Some would stop here, and that would be wrong.
We need to come up with stronger proof. Our findings, \(k \in [-4, 4] \cap \mathbb{Z}\) were based on specific values for \(a,b,c,d\).
One famous inequality in mathematics is the inequality of arithmetic and geometric means:
\[(\frac{x_1^n+x_2^n+...+x_n^n}{n})^{\frac{1}{n}} \ge ... \ge \frac{x_1+x_2+...+x_n}{n} \ge (x_1 * x_2 * ... * x_n)^{\frac{1}{n}}\]If you are good at spotting patterns, you will see how it resembles our inequality.
\[\begin{cases} x_1 \rightarrow \lvert a \rvert \\ x_2 \rightarrow \lvert b \rvert \\ x_3 \rightarrow \lvert c \rvert \\ x_4 \rightarrow \lvert d \rvert \end{cases}\]So, if we consider this:
\[(\frac{\lvert a \rvert ^4 + \lvert b \rvert ^4 +\lvert c \rvert ^4+\lvert d \rvert ^4}{4})^{\frac{1}{4}} \ge (\lvert a \rvert * \lvert b \rvert * \lvert c \rvert * \lvert d\rvert)^{\frac{1}{4}} \Leftrightarrow \\ a^4+b^4+c^4+d^4 \ge 4 * \lvert abcd \rvert\]This tells us that the inequality works \(\forall k \in [-4, 4] \cap \mathbb{Z}\).
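The conclusion can be sanity-checked numerically. The sketch below tests each integer \(k\) against random samples plus the extremal \(\pm 1\) sign patterns (which is where the equality cases live):

```python
import itertools
import random

random.seed(1)

def holds(k):
    """Check a^4 + b^4 + c^4 + d^4 + k*abcd >= 0 over sample points."""
    pts = [tuple(random.uniform(-5, 5) for _ in range(4)) for _ in range(2000)]
    pts += list(itertools.product([1.0, -1.0], repeat=4))   # equality cases
    return all(a**4 + b**4 + c**4 + d**4 + k * a * b * c * d >= -1e-9
               for a, b, c, d in pts)

valid_k = [k for k in range(-8, 9) if holds(k)]
print(valid_k)  # the integers -4..4
```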
The first observation you should make is that \(2021=43*47\).
When we have problems like this, it’s worth checking if using the two fundamental inequalities, AM-GM inequality and Cauchy–Bunyakovsky–Schwarz helps us.
Even if not fully obvious in our case, the one that helps is the CBS:
\[(\sum_{i=1}^{n} a_i^2) * (\sum_{i=1}^{n}b_i^2) \ge (\sum_{i=1}^{n}a_i*b_i)^2 \\ \text{ equality holds if } \frac{a_i}{b_i}=k \text{, for } k \in \mathbb{R}\]For 3 pairs of numbers, \((a_1, b_1), (a_2, b_2), (a_3, b_3)\), CBS looks like this:
\[(a_1*b_1 + a_2*b_2 + a_3*b_3)^2 \le (a_1^2 + a_2^2 + a_3^2) * (b_1^2 + b_2^2 + b_3^2)\]Firstly, we know that: \(2021 \le (1*x+(-3)y+6z+(-1)*t)^2\)
So, thinking in terms of the CBS inequality, why don’t we consider the following:
\[\begin{cases} a_1 \rightarrow x \\ a_2 \rightarrow y \\ a_3 \rightarrow z \\ a_4 \rightarrow t \\ \end{cases}\]and
\[\begin{cases} b_1 \rightarrow 1 \\ b_2 \rightarrow -3 \\ b_3 \rightarrow 6 \\ b_4 \rightarrow -1 \\ \end{cases}\]In this regard, we can write things like this:
\[2021 \le (x-3y+6z-t)^2 \le \underbrace{(1^2+(-3)^2+6^2+(-1)^2)}_{47}*\underbrace{(x^2+y^2+z^2+t^2)}_{\le 43} \le 2021\]This means our expression, \((x-3y+6z-t)^2\), is squeezed between \(2021\) and \(2021\), so equality holds throughout, so:
\[\frac{x}{1}=\frac{y}{-3}=\frac{z}{6}=\frac{t}{-1}=k \text{ , } k \in \mathbb{R}\]Or:
\[\begin{cases} x=k \\ y=-3k \\ z=6k \\ t=-k \end{cases} \text{ , } k \in \mathbb{R}\]If we use this substitution:
\[x^2+y^2+z^2+t^2 \le 43 \Leftrightarrow \\ k^2 + 9k^2+36k^2+k^2 \le 43 \\ 47k^2 \le 43 \\ k^2 \le \frac{43}{47}\]And then again:
\[(x-3y+6z-t)^2 \ge 2021 \\ (k+9k+36k+k)^2 \ge 43*47 \\ 47^2*k^2 \ge 43*47 \\ k^2 \ge \frac{43}{47}\]So \(k^2=\frac{43}{47}\), meaning \(k=\pm\sqrt{\frac{43}{47}}\).
Now it’s easy to compute the expression: \(\lvert x+y+z+t \rvert = \lvert 3k \rvert = 3 * \sqrt{\frac{43}{47}}\).
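Assuming the problem’s constraints were \((x-3y+6z-t)^2 \ge 2021\) and \(x^2+y^2+z^2+t^2 \le 43\) (as used in the derivation), the equality case is easy to check numerically:

```python
from math import isclose, sqrt

k = sqrt(43 / 47)
x, y, z, t = k, -3 * k, 6 * k, -k   # the CBS equality case

sum_sq_ok = isclose(x * x + y * y + z * z + t * t, 43)
cbs_ok = isclose((x - 3 * y + 6 * z - t) ** 2, 2021)
answer_ok = isclose(abs(x + y + z + t), 3 * sqrt(43 / 47))
print(sum_sq_ok, cbs_ok, answer_ok)  # True True True
```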
For two triples of numbers \(x_1 \le x_2 \le x_3\) and \(y_1 \le y_2 \le y_3\), the Rearrangement inequality can be written as:
\[x_1*y_2 + x_2*y_3+x_3*y_1 \le x_1*y_1 + x_2*y_2 + x_3*y_3\]If we pick:
\[\begin{cases} x_1=y_1=a \\ x_2=y_2=b \\ x_3=y_3=c \\ \end{cases}\]We obtain:
\[ab+bc+ca \le a^2 + b^2 + c^2 (*)\]Then:
\[2(ab+bc+ca) + ab+bc+ca \le 2(ab+bc+ca) + a^2 + b^2 + c^2 \Leftrightarrow \\ 3(ab+bc+ca) \le (a+b+c)^2 (**)\]Now, let’s get to our problem. We will compute the \(\Delta\) for \(x^2 + (a+b+c)x + k(ab+bc+ca) = 0\):
\[\Delta=(a+b+c)^2-4k(ab+bc+ca)\]For our equation to have solutions in \(\mathbb{R}\), we need to find \(k\) so that \(\Delta \ge 0\).
\[\Delta = \underbrace{(a+b+c)^2}_{\ge 3(ab+bc+ca)} - 4k(ab+bc+ca) \ge 0 \\\]Using (**), we can write:
\[\Delta \ge 3(ab+bc+ca) - 4k(ab+bc+ca) = (3-4k)(ab+bc+ca)\]Assuming \(ab+bc+ca \gt 0\), it is enough to have \(3-4k \ge 0 \Rightarrow k \le \frac{3}{4}\).
Hermite’s Identity states that:
\[\sum_{k=0}^{n-1}[x+\frac{k}{n}] = [nx] \text{ , } \forall x \in \mathbb{R} \text{ , and } n \in \mathbb{N}^{*}\]One idea is to make our existing terms (e.g. \([\frac{x+3}{6}]\)) resemble the terms of Hermite’s identity (\([x+\frac{k}{n}]\)). In this regard, we need to find a way to “isolate” the \(x\) outside the fraction(s).
One idea is to perform the following substitution:
\[y=\frac{x+1}{6}\]So our identity becomes:
\[[y+\frac{1}{3}] - [y+\frac{1}{2}] + [y+\frac{2}{3}] = [3y] - [2y] \text{ (*)}\]But:
\[\begin{cases} [3y] = [y] + [y+\frac{1}{3}] + [y+\frac{2}{3}] \\ [2y] = [y] + [y+\frac{1}{2}] \end{cases} \text{ (**)}\]Subtracting the two identities in (**) gives exactly (*), which proves the identity.
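Hermite’s identity (and hence the two instances used in (**)) can be spot-checked exactly over rationals:

```python
from fractions import Fraction
from math import floor

def hermite(x, n):
    """Does sum_{k=0}^{n-1} [x + k/n] equal [n*x]?"""
    return sum(floor(x + Fraction(k, n)) for k in range(n)) == floor(n * x)

checks = [hermite(Fraction(p, 7), n)
          for p in range(-50, 51)
          for n in range(1, 6)]
print(all(checks))  # True
```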
Even if it’s not obvious, let’s start again with the CBS inequality:
\[(\sum_{i=1}^{n} a_i^2) * (\sum_{i=1}^{n}b_i^2) \ge (\sum_{i=1}^{n}a_i*b_i)^2 \\\]If we pick \(b_1=b_2=b_3=...=b_n=1\) the inequality becomes:
\[(\sum_{i=1}^{n} a_i^2) * n \ge (\sum_{i=1}^{n}a_i)^2\]Expanding the sums:
\[(a_1+a_2+...+a_n)^2 \le n*(a_1^2 + a_2^2 + ...+ a_n^2)\]\(a_k \in \mathbb{R}_{+}\) so we can conclude:
\[a_1+a_2+...+a_n \le n\]12. ^{easy} If \(a^2+b^2+c^2=3\) prove \(\lvert a \rvert + \lvert b \rvert + \lvert c \rvert - abc \le 4\), where \(a,b,c \in \mathbb{R}\). ^{Hint & Answer}
The following is true (\(\forall a,b,c \in \mathbb{R}\)):
\[a^2 + b^2 + c^2 = \lvert a \rvert^2 + \lvert b \rvert^2 + \lvert c \rvert^2 = 3\]Applying Cauchy–Bunyakovsky–Schwarz:
\[\underbrace{(a^2 + b^2 + c^2)}_{=3} * \underbrace{(1^2 + 1^2 + 1^2)}_{=3} \ge (\lvert a \rvert*1 + \lvert b \rvert*1 + \lvert c \rvert*1)^2\]From this \(\Rightarrow (\lvert a \rvert + \lvert b \rvert + \lvert c \rvert)^2 \le 9\), so \(\lvert a \rvert + \lvert b \rvert + \lvert c \rvert \le 3\) (*).
Applying AM-GM inequality:
\[\frac{a^2+b^2+c^2}{3} \ge ((abc)^2)^{\frac{1}{3}}\]So \((abc)^2 \le 1\), meaning \(\lvert abc \rvert \le 1\), and in particular \(-abc \le 1\) (**).
(*) and (**) \(\Rightarrow\):
\[\underbrace{\lvert a \rvert + \lvert b \rvert + \lvert c \rvert}_{\le 3} \underbrace{- abc}_{\le 1} \le 4 \text{ is TRUE}\]Looking at:
\[\frac{a+b}{c^2}+\frac{b+c}{a^2}+\frac{c+a}{b^2} \ge 2(\frac{1}{a}+\frac{1}{b}+\frac{1}{c})\]We can group things in the following manner:
\[\underbrace{\frac{a}{b^2}+\frac{b}{a^2}}_{*} + \underbrace{\frac{b}{c^2}+\frac{c}{b^2}}_{**} + \underbrace{\frac{a}{c^2} + \frac{c}{a^2}}_{***} \ge \underbrace{\frac{1}{a} + \frac{1}{b}}_{*} + \underbrace{\frac{1}{b} + \frac{1}{c}}_{**} + \underbrace{\frac{1}{a} + \frac{1}{c}}_{***}\]There’s a pattern here!! If we manage to prove this inequality: \(\frac{m}{n^2} + \frac{n}{m^2} \ge \frac{1}{n} + \frac{1}{m}\), \(m,n \in \mathbb{R}_{+}^*\), we will be able to solve our problem.
So let’s solve this:
\[\frac{m}{n^2} + \frac{n}{m^2} \ge \frac{1}{n} + \frac{1}{m} \Leftrightarrow \\\] \[m^3 + n^3 - mn(m+n) \ge 0 \Leftrightarrow \\\] \[m^2(m-n)-n^2(m-n) \ge 0 \Leftrightarrow \\\] \[(m-n)(m^2-n^2) \ge 0 \Leftrightarrow \\\] \[\underbrace{(m-n)^2}_{\ge 0} * \underbrace{(m+n)}_{\gt 0} \ge 0\]Now, we know the following:
\[\begin{cases} \frac{a}{b^2}+\frac{b}{a^2} \ge \frac{1}{a} + \frac{1}{b} \\ \frac{b}{c^2}+\frac{c}{b^2} \ge \frac{1}{b} + \frac{1}{c} \\ \frac{a}{c^2}+\frac{c}{a^2} \ge \frac{1}{a} + \frac{1}{c} \end{cases}\]If we sum all three inequalities, we’ve proven the original inequality.
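Both the two-variable lemma and the full inequality survive a quick random stress test (a numeric sketch; the small tolerance only absorbs floating-point rounding):

```python
import random

random.seed(0)

ok = True
for _ in range(10_000):
    a, b, c = (random.uniform(0.01, 100) for _ in range(3))
    # two-variable lemma with (m, n) = (a, b)
    ok &= a / b**2 + b / a**2 >= 1 / a + 1 / b - 1e-6
    # the full inequality
    lhs = (a + b) / c**2 + (b + c) / a**2 + (c + a) / b**2
    ok &= lhs >= 2 * (1 / a + 1 / b + 1 / c) - 1e-6
print(ok)  # True
```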
If \(x^2+x \in \mathbb{Q}\), then \(x^2+x=a \in \mathbb{Q}\).
If \(x^3+2x \in \mathbb{Q}\) , then \(x^3+2x=b \in \mathbb{Q}\).
Our purpose is to try expressing \(x\) using only \(a\) and \(b\) in a way we will prove \(x \in \mathbb{Q}\).
We start by doing some tricks with \(b=x^3+2x=x^3+\underbrace{(x^2-x^2)}_{=0}+\underbrace{(x-x)}_{=0}+2x\).
After regrouping terms, \(b\) becomes \(b=x\underbrace{(x^2+x)}_{=a}-\underbrace{(x^2+x)}_{=a} + x + 2x\).
So \(b=x(a+3)-a\), or \(a+b=x(a+3)\).
We are getting closer to the solution; the only thing remaining is to check whether \(a=-3\) is possible.
Let’s suppose \(a=-3\); then \(x^2+x+3=0\), but this is impossible because the solutions \(x_1,x_2 \notin \mathbb{R}\). So, we can safely assume \(a \neq -3\).
If \(a \neq -3\), then we can write \(x=\frac{a+b}{a+3}\). But both \(a+b \in \mathbb{Q}\) and \(a+3 \in \mathbb{Q}\), so for sure \(x \in \mathbb{Q}\).
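The recovered formula \(x=\frac{a+b}{a+3}\) is easy to check for any rational \(x\) (the value \(\frac{7}{3}\) below is an arbitrary pick):

```python
from fractions import Fraction

x = Fraction(7, 3)        # arbitrary rational
a = x * x + x             # x^2 + x
b = x**3 + 2 * x          # x^3 + 2x
print((a + b) / (a + 3) == x)  # True
```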
The fact that \(3,5,7,11,13\) are all prime numbers is just a coincidence, but congratulations if you spot that.
Sometimes, it is easier to disprove, than to prove something, so ad absurdum let’s suppose \(a \ge b\).
Firstly, for the function \(f(x)=a^x\): if \(a \gt 1\), the function is increasing, meaning the value of \(f(x)\) increases with \(x\); if \(0 \lt a \lt 1\), the function is decreasing, meaning the value of \(f(x)\) decreases while \(x\) increases.
That being said, if \(a \ge b\), then \(13^a \ge 13^b\) and \(5^a \ge 5^b\).
This means that \(3^a + 13^a \ge 17^a\), or \((\frac{3}{17})^a + (\frac{13}{17})^a \ge 1\).
Let \(g : \mathbb{R} \rightarrow \mathbb{R}\), \(g(x)=(\frac{3}{17})^x + (\frac{13}{17})^x\). \(g\) is strictly decreasing, and \(g(1)=\frac{16}{17} \lt 1\). But \(g(1) \lt g(a)\), thus \(a \lt 1\) (*).
If our initial supposition is correct, then \(5^b+7^b \ge 11^b\). Following the same principle, we define \(h(x)=(\frac{5}{11})^x+(\frac{7}{11})^x\), note that \(h(b) \ge 1\), and we eventually conclude that \(b \gt 1\) (**).
(*) and (**) \(\Rightarrow\) our supposition is false, so \(a \lt b\) is true.
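Assuming the hypotheses are the equalities \(3^a+13^a=17^a\) and \(5^b+7^b=11^b\) (the problem statement is not repeated here), the two exponents can be found by bisection, confirming \(a \lt 1 \lt b\):

```python
def bisect_root(f, lo, hi, iters=200):
    """Bisection for a strictly decreasing f with f(lo) > 0 > f(hi)."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

a = bisect_root(lambda x: (3 / 17) ** x + (13 / 17) ** x - 1, 0, 5)
b = bisect_root(lambda x: (5 / 11) ** x + (7 / 11) ** x - 1, 0, 5)
print(a < 1 < b, round(a, 4), round(b, 4))
```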
This problem is easier than it looks.
\[S=\underbrace{\frac{1}{u(2)*v(2)}}_{=\frac{1}{2*3}}+\underbrace{\frac{1}{u(3)*v(3)}}_{=\frac{1}{3*5}}+\underbrace{\frac{1}{u(4)*v(4)}}_{=\frac{1}{3*5}}+\underbrace{\frac{1}{u(5)*v(5)}}_{=\frac{1}{5*7}}+\underbrace{\frac{1}{u(6)*v(6)}}_{=\frac{1}{5*7}}+\underbrace{\frac{1}{u(7)*v(7)}}_{=\frac{1}{7*11}}...\]We see that the terms in our sum repeat themselves a number of times.
For example:
If \(p, q\) are consecutive prime numbers with \(q \lt p\), we define the set \(M=\{ n \in \mathbb{N} \mid q \le n \lt p \}\). The cardinality of \(M\) is exactly \(p-q\). It means that each term \(\frac{1}{qp}\) appears exactly \(p-q\) times in our sum.
With this in mind, we can re-write our sum as:
\[S=\frac{3-2}{2*3}+\frac{5-3}{3*5}+\frac{7-5}{5*7}+\frac{11-7}{7*11}+... \Leftrightarrow\] \[S=\frac{1}{2}-\frac{1}{3}+\frac{1}{3}-\frac{1}{5}+\frac{1}{5}-...+\frac{1}{2003}-\frac{1}{2011} \Leftrightarrow\] \[S=\frac{1}{2}-\frac{1}{2011}\]We use the AM-GM inequality for each of the following terms:
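Assuming the sum runs over \(n = 2, \dots, 2010\) (so the last pair of consecutive primes involved is \(2003, 2011\)), the telescoping result can be verified exactly:

```python
from fractions import Fraction

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))

def u(n):   # largest prime <= n
    while not is_prime(n):
        n -= 1
    return n

def v(n):   # smallest prime > n
    n += 1
    while not is_prime(n):
        n += 1
    return n

S = sum(Fraction(1, u(n) * v(n)) for n in range(2, 2011))
print(S == Fraction(1, 2) - Fraction(1, 2011))  # True
```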
\[x^2+yz \ge 2\sqrt{x^2yz} = 2x\sqrt{yz} \Leftrightarrow \\\] \[\frac{1}{x^2+yz} \le \frac{1}{2x\sqrt{yz}} = \frac{\sqrt{yz}}{2xyz} (*) \\\]But \(\sqrt{yz} \le \frac{y+z}{2} (**)\).
\((*), (**) \Rightarrow\):
\[\frac{1}{x^2+yz} \le \frac{\frac{y+z}{2}}{2xyz}\]Similarly, for the other terms:
\[\frac{1}{y^2+xz} \le \frac{\frac{x+z}{2}}{2xyz}\] \[\frac{1}{z^2+xy} \le \frac{\frac{x+y}{2}}{2xyz}\]If we sum everything up:
\[\frac{1}{x^2+yz} + \frac{1}{y^2+xz} + \frac{1}{z^2+xy} \le \frac{\frac{y+z}{2}}{2xyz} + \frac{\frac{x+z}{2}}{2xyz} + \frac{\frac{x+y}{2}}{2xyz} \Leftrightarrow \\\] \[\frac{1}{x^2+yz} + \frac{1}{y^2+xz} + \frac{1}{z^2+xy} \le \frac{1}{2}(\frac{x+y+z}{xyz}) \Leftrightarrow \\\] \[\frac{1}{x^2+yz} + \frac{1}{y^2+xz} + \frac{1}{z^2+xy} \le \frac{1}{2} (\frac{1}{xy} + \frac{1}{yz} + \frac{1}{xz})\]Let’s start by working on the expressions involving logarithms.
\[\begin{cases} a^x = bc \Leftrightarrow \log_a(a^x) = \log_a(bc) \Leftrightarrow x = \log_a(b) + \log_a(c) \\ b^y = ca \Leftrightarrow \log_b(b^y) = \log_b(ca) \Leftrightarrow y = \log_b(c) + \log_b(a) \\ c^z = ab \Leftrightarrow \log_c(c^z) = \log_c(ab) \Leftrightarrow z = \log_c(a) + \log_c(b) \\ \end{cases}\]Now, let’s change the base for our logarithms to a common number, \(m \in \mathbb{R}_{+}^{*} \setminus \{1\}\):
\[\begin{cases} x = \log_a(b) + \log_a(c) = \frac{\log_m(b)}{\log_m(a)} + \frac{\log_m(c)}{\log_m(a)} \\ y = \log_b(c) + \log_b(a) = \frac{\log_m(c)}{\log_m(b)} + \frac{\log_m(a)}{\log_m(b)} \\ z = \log_c(a) + \log_c(b) = \frac{\log_m(a)}{\log_m(c)} + \frac{\log_m(b)}{\log_m(c)} \\ \end{cases}\]Let’s define \(l_a = \log_m(a)\), \(l_b = \log_m(b)\), \(l_c = \log_m(c)\).
We observe that, \(x,y,z\) can be written as:
\[\begin{cases} x=\frac{l_b+l_c}{l_a} \\ y=\frac{l_c+l_a}{l_b} \\ z=\frac{l_a+l_b}{l_c} \end{cases}\]The expression required to be proven becomes:
\[\frac{1}{2+\frac{l_b+l_c}{l_a}}+\frac{1}{2+\frac{l_c+l_a}{l_b}}+\frac{1}{2+\frac{l_a+l_b}{l_c}} \le \frac{3}{4} \Leftrightarrow\] \[\frac{l_a}{l_a + \underbrace{(l_a + l_b + l_c)}_{s_l}} + \frac{l_b}{l_b+\underbrace{(l_a+l_b+l_c)}_{s_l}} + \frac{l_c}{l_c+\underbrace{(l_a+l_b+l_c)}_{s_l}} \le \frac{3}{4} \Leftrightarrow\] \[1-\frac{l_a}{l_a+s_l}+1-\frac{l_b}{l_b+s_l}+1-\frac{l_c}{l_c+s_l} \ge 3-\frac{3}{4} \Leftrightarrow\] \[\frac{s_l}{l_a+s_l} + \frac{s_l}{l_b+s_l} + \frac{s_l}{l_c+s_l} \ge \frac{9}{4} \Leftrightarrow\] \[4*s_l(\frac{1}{l_a+s_l} + \frac{1}{l_b+s_l} + \frac{1}{l_c+s_l}) \ge 9\]Let’s suppose \(l_a \le l_b \le l_c\), also \(l_a>0\), then:
Note that \((l_a+s_l)+(l_b+s_l)+(l_c+s_l)=4s_l\). Assuming \(l_a, l_b, l_c \gt 0\) (we can choose the base \(m\) so that this holds), the AM-HM inequality gives:
\[((l_a+s_l)+(l_b+s_l)+(l_c+s_l))*(\frac{1}{l_a+s_l} + \frac{1}{l_b+s_l} + \frac{1}{l_c+s_l}) \ge 9 \Leftrightarrow\] \[4*s_l(\frac{1}{l_a+s_l} + \frac{1}{l_b+s_l} + \frac{1}{l_c+s_l}) \ge 9\]which is exactly what we had to prove.
\(z=xy^2+x-1\), and \(z=2xy-1\).
We can write \(xy^2+x-1=2xy-1\). Eventually, this relationship becomes equivalent to \(x(y-1)^2=0\). But because \(x \neq 0 \Rightarrow y=1\).
If \(y=1\), then \(x=\frac{z+y}{2}\).
We pick a common base \(d\) for all the logarithms, basically proceeding as for Problem 18.
After we do the substitution, we need to prove:
\[\frac{y+z}{x} + \frac{z+x}{y} + \frac{x+y}{z} \ge \frac{4x}{y+z} + \frac{4y}{z+x} + \frac{4z}{x+y}.\]Then, for each term on the right-hand side, the AM-HM inequality gives \(\frac{4}{y+z} \le \frac{1}{y} + \frac{1}{z}\), so:
\[\frac{4x}{y+z} \le \frac{x}{y} + \frac{x}{z}\]Summing the three analogous bounds and regrouping the six resulting fractions yields exactly \(\frac{y+z}{x} + \frac{z+x}{y} + \frac{x+y}{z}\), which proves the inequality.
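A random stress test of the inequality (for positive \(x, y, z\); the tolerance only absorbs floating-point rounding):

```python
import random

random.seed(0)

ok = True
for _ in range(10_000):
    x, y, z = (random.uniform(0.1, 10) for _ in range(3))
    lhs = (y + z) / x + (z + x) / y + (x + y) / z
    rhs = 4 * x / (y + z) + 4 * y / (z + x) + 4 * z / (x + y)
    ok &= lhs >= rhs - 1e-9
print(ok)  # True
```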
With a rush of adrenaline, I made the daring decision to clone the repository and build that game of torture. And it worked. It’s called ./opensuplaplex now.
^{This is me beating the 3rd level.}
Free suggestions in the beginning. If you follow all of them, you win.
Turn-Based Mode (the sinusoid doesn’t drop automatically)
^{(Source code)}
Controls
s, x, a, z, q, w, p
To win the game, you need to reduce the signal as close to zero as possible. It’s hard but not impossible. There’s a current threshold of unit * 0.3. Surviving is not winning. The Path of the Alternating Phases is boredom.
You lose if the original signal spikes outside the game buffer (canvas).
A professional player turns off the suggestions, now enabled by default. If you are a savant, you can compute the Fourier Series Coefficients in your head. Cancel that noise!
To better understand what is happening, check out this first article of a series.
The game was developed using p5js.
The source code (here) is not something I am particularly proud of.
Some discussion from around the web:
^{This game is a joke I put together during a weekend. I’m sorry for the graphics.}
The newest family member started walking and talking, which is/was delightful. My second newest family member started to pose simple questions requiring complicated answers, which is always challenging.
Software Engineering-wise, I am not precisely a Professional Software Engineer anymore since I switched to management years ago. But for reasons I am too afraid to admit, I still practice a technical hobby called Recreational Programming^{1}. This means that I program for fun in my limited spare time (a few hours a week); it makes me feel nostalgic for the years when programming was not a job.
In 2023, I didn’t achieve much and blogged little, but I did a few things I am proud of:
From a cultural perspective, 2023 was thin. I haven’t read much, or seen impactful movies (with a few notable exceptions):
Music-wise, in 2023, I experimented with different genres I hadn’t touched before. I am more of a prog-rock guy, but 2023 was the year of alternative rock, punk, and the aesthetics of the 80s. My favorite tracks of 2023 (as reported by Spotify) are:
^{1}I first heard about Recreational Programming by watching @tsoding, and it made sense.