Commit 9e640f21 authored by Vincent Tam

Bug fix: adapted almost all articles to mmark

parent 45213bd2
Pipeline #58839437 passed with stages in 46 seconds
......@@ -8,7 +8,6 @@ categories:
tags:
- set theory
draft: false
markup: mmark
---
### Purpose
......
......@@ -25,13 +25,17 @@ I racked my brains for hours over the set that generates
the product σ-algebra. It isn't easy to spot the "finite-dimensional measurable
cylinders" in $\mathcal{C}_0$, nor in $\mathcal{C}_1$!
<p>
The definition in Meyre reminded me of the one for the product topology, which
makes it more pleasant to understand. At my level, it isn't obvious that
$$\Big\{\{f \in E^\Bbb{T} \mid f(t_i) \in B_i, i \in \{1,\dots,n\}\} \bigm|
t_j \in \Bbb{T}, B_j \in \Er \forall j \in \{1,\dots,n\}\Big\}$$
<div>
$$
\Big\{\{f \in E^\Bbb{T} \mid f(t_i) \in B_i, i \in \{1,\dots,n\}\} \bigm|
t_j \in \Bbb{T}, B_j \in \Er \forall j \in \{1,\dots,n\}\Big\}
$$
</div>
describes cylinders, without Meyre's book at hand.
</p>
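For instance (my own illustration, not from Meyre's book), with $n = 1$ a
single coordinate constraint already gives such a cylinder:
$$
\{f \in E^{\Bbb{T}} \mid f(t_1) \in B_1\} = \pi_{t_1}^{-1}(B_1),
\qquad t_1 \in \Bbb{T},\ B_1 \in \Er,
$$
where $\pi_{t_1} : E^{\Bbb{T}} \to E$ denotes evaluation at $t_1$ (a notation I
introduce only for this example).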
### Process
......
......@@ -14,10 +14,11 @@ markup: mmark
### Motivation
$\gdef\vois#1#2{\mathcal{V}_{#1}(#2)}$
$$ \gdef\vois#1#2{\mathcal{V}_{#1}(#2)} $$
Nets and filters are used for describing convergence in a non-metric space $X$.
Denote the collection of (open) neighbourhoods of $x \in X$ by $\vois{X}{x}$.
Denote the collection of (open) neighbourhoods of $x \in X$ by $$\vois{X}{x}$$.
### Definitions and examples
......@@ -62,7 +63,7 @@ A picture guides the reader through the direct verification. Note that all intersections
should be _nonempty_ in this context.
Convergence of a filter base to a point
: $\mathcal{F} \to x$ if $\vois{X}{x} \subseteq \mathcal{G}$, where
: $\mathcal{F} \to x$ if $$\vois{X}{x} \subseteq \mathcal{G}$$, where
$\mathcal{G}$ is the _filter generated by $\mathcal{F}$_.
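A concrete example (my own, not part of the original notes): in $\R$ with the
usual topology, the filter base of symmetric open intervals around $x$
converges to $x$, since every neighbourhood of $x$ contains one of them and
hence belongs to the generated filter:
$$
\mathcal{F} = \Big\{ \big(x - \tfrac1n, x + \tfrac1n\big) \Bigm| n \ge 1 \Big\}
\to x.
$$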
### Equivalent ways to define continuity
......
......@@ -39,17 +39,17 @@ set complement.)
The trick of taking preimages under the metric $d$ _wouldn't_ help because $A$
_isn't_ necessarily $\mu$-continuous (i.e. possibly $\mu(\partial A) > 0$).
<div>
$$F_\epsilon = \{x \in S \mid d(x,A^\complement) \ge \epsilon\}$$
</div>
$$
F_\epsilon = \{x \in S \mid d(x,A^\complement) \ge \epsilon\}
$$
In this case, _every_ outer open approximation $F_{1/n}^\complement$ would contain
$\partial A$, which we may not want.
<div>
$$\bigcap_{\epsilon>0} F_\epsilon^\complement =
\{x \in S \mid d(x,A^\complement) \le 0\} = \overline{A^\complement}$$
</div>
$$
\bigcap_{\epsilon>0} F_\epsilon^\complement =
\{x \in S \mid d(x,A^\complement) \le 0\} = \overline{A^\complement}
$$
By construction of $\mathcal{C}$, each member $A \in \mathcal{C}$ only has an outer
approximation. When we take the complement $A^\complement$, we're turning things
......@@ -72,8 +72,8 @@ a Polish space is, by definition, separable, proofs often start with a dense
sequence $(x_k)_k$ in $S$.
One of my classmates asked why we _couldn't_ simply take the compact set to be
$K = \bigcup_{k=1}^{n_p} \bar{B}(x_k,\frac1p)$. My instructor reminded us that
a closed unit ball is compact iff the space is finite dimensional.
$$K = \bigcup_{k=1}^{n_p} \bar{B}(x_k,\frac1p)$$. My instructor reminded us
that a closed unit ball is compact iff the space is finite dimensional.
For example, in $\R^\N$, i.e. the space of real-valued sequences, the
orthonormal basis $(e_n)_n$ is a sequence in the closed unit ball centred at the
......@@ -83,8 +83,8 @@ two distinct elements is $2^{1/p}$ for any $p>0$.)
To make $K$ compact, we need to make it totally bounded, since total
boundedness together with completeness is equivalent to compactness in metric spaces.
<div>
$$K = \bigcap_{p>0}\bigcup_{k=1}^{n_p}\overline{B}\left(x_k,\frac{1}{p}\right)$$
</div>
$$
K = \bigcap_{p>0}\bigcup_{k=1}^{n_p}\overline{B}\left(x_k,\frac{1}{p}\right)
$$
Since that's my personal reminder, I'll leave out the technical details.
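Still, here is the one-line reason this $K$ works (my own note, under the
notation above): $K$ is closed as an intersection of closed sets, and for every
$p$ it is covered by finitely many balls of radius $1/p$,
$$
K \subseteq \bigcup_{k=1}^{n_p} \overline{B}\left(x_k, \frac1p\right),
$$
so $K$ is totally bounded; as a closed subset of the complete space $S$ it is
also complete, hence compact.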
......@@ -84,20 +84,24 @@ for every $i = 0, \dots, K$. We impose the uniform law on the neighbourhood of
each of them, and we consider the chain of prisoners.
> Thm (Boissard, Cohen, Norris, ???, 2015)
> $$\Big(\frac{1}{\sqrt{n}} X\_{n,k} (t)\Big)\_{t\ge0} \xrightarrow{\mathcal{D}} (B_t)\_{t\ge0},$$
>
> $$
> \Big(\frac{1}{\sqrt{n}} X\_{n,k} (t)\Big)\_{t\ge0} \xrightarrow{\mathcal{D}} (B_t)\_{t\ge0},
> $$
>
> where $(B_t)\_{t\ge0}$ is a Brownian motion with variance
> $\frac{K}{K+2}$.
> <i class="fas fa-exclamation-triangle fa-lg fa-pull-left"></i>
> The above equation is probably _wrong_.
"vector fields" $A_k$ on $G_k$. $$A_k = B_k + \nabla f_k$$
"vector fields" $A_k$ on $G_k$. $A_k = B_k + \nabla f_k$
$\nabla f_k$ is the gradient of the potential $V_k$.
<div>
$$Z_{n,K}^{(1)} = Z_{n,K}^{(0)} + \sum_{i = 1}^n B_K (Y_K(i),Y_K(i+1)) +
\sum_{i = 1}^n \nabla f_k (Y_K(i),Y_K(i))$$
</div>
$$
Z_{n,K}^{(1)} = Z_{n,K}^{(0)} + \sum_{i = 1}^n B_K (Y_K(i),Y_K(i+1)) +
\sum_{i = 1}^n \nabla f_k (Y_K(i),Y_K(i))
$$
- case $n = K$: $X_{n,K}$ is an Ehrenfest urn (in disguise)
- case $n \simeq K$: ???
......@@ -106,8 +110,11 @@ New "chaos" introduced by the acceleration of $n$
- case $n \ll K$:
> Thm
> $$\left(\frac{X\_{n,K}(t)-X\_{n,K}(0)}{\sqrt{n}}\right)\_{t\ge0}
> \xrightarrow[n/K \to 0]{\mathcal{D}} (B\_t)\_{t\ge0}$$
>
> $$
> \left(\frac{X\_{n,K}(t)-X\_{n,K}(0)}{\sqrt{n}}\right)\_{t\ge0}
> \xrightarrow[n/K \to 0]{\mathcal{D}} (B\_t)\_{t\ge0}
> $$
### Failure
......@@ -123,6 +130,4 @@ Nouveau « chaos » introduit par l'accelération de $n$
### Relational databases and Bayesian networks
See [Max Halford's slides][slides] on GitHub.
[slides]: https://maxhalford.github.io/slides/phd-about/
Max Halford's slides on GitHub are _gone_.
......@@ -19,7 +19,12 @@ Assume _all_ the notation from [_Espace de trajectoires_][pp].
The measurability of the map in the subtitle rests on the following
equality.
$$\Bor{\R}{\OXT} \cap \CO = \Bor{\CO}$$
<div>
$$
\Bor{\R}{\OXT} \cap \CO = \Bor{\CO}
$$
</div>
I spent _four hours_ understanding
......@@ -46,7 +51,8 @@ More precisely, the answer is hidden in the word "random", which
$\mathcal{A}$. The elements of $\Bor{\R}{\OXT}$ are just sets
generated by "direct products / finite-dimensional cylinders". Since
each component Borel set (indexed by $t \in \Bbb{T}$ and belonging to $\Bor{\R}$)
fits easily into $\mathcal{A}$, we see that the left-hand side of the equality above yields the measurability of the map "$\mapsto$".
fits easily into $\mathcal{A}$, we see that the left-hand side of the
equality above yields the measurability of the map "$\mapsto$".
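Concretely (my own reformulation, assuming the map in the subtitle is
$\omega \mapsto (X_t(\omega))_{t \in \Bbb{T}}$), the preimage of a
finite-dimensional cylinder is a finite intersection of sets that are
measurable because each coordinate is a random variable:
$$
\Big\{\omega \Bigm| \forall i \in \{1,\dots,n\},\ X_{t_i}(\omega) \in B_i\Big\}
= \bigcap_{i=1}^n \{X_{t_i} \in B_i\} \in \mathcal{A}.
$$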
#### Truth of the equality
##### Direct inclusion
......@@ -71,7 +77,6 @@ equipped with _the product topology_, thanks to my reading of the proof of
[Tychonoff's theorem][tyt] by Nicolas Bourbaki, presented in Chapter 2 of
Dudley's book on real analysis and probability.
<div>
$$
\begin{aligned}
\left(\prod_{i \in I} X_i,\mathcal{T}\right) & \xrightarrow{\text{projection } \pi_i}
......@@ -80,7 +85,6 @@ x & \xmapsto{\pi_{i_1}} x(i_1) \\
x & \xmapsto{\pi_{i_2}} x(i_2)
\end{aligned}
$$
</div>
The product topology $\mathcal{T}$ is _the smallest topology making all the
projections $(\pi\_i)\_{i \in I}$ continuous_. In other words, we fix the
......@@ -142,9 +146,9 @@ y(t)+\epsilon+\frac1n \right]\right) \right\}
$$
</div>
where $\pi\_t : \CO \to \R$ denotes the canonical projection as in the section above, and
where $\pi\_t : \CO \to \R$ denotes the canonical projection as in the section above,
and
<div>
$$
A_{t^\prime} =
\begin{cases}
......@@ -153,7 +157,6 @@ A_{t^\prime} =
\R &\quad \text{otherwise}
\end{cases}
$$
</div>
### Reference
......
......@@ -17,9 +17,9 @@ Nowadays, I find the way they wrote it rather difficult to
understand. I'm more at ease with $\sup$ than with "b.s.", which stands for "borne
supérieure" (supremum). They used $M[f]$ for $\lVert f \rVert_{\rm Lip}$, where
<div>
$$\lVert f \rVert_{\rm Lip} = \sup_{x \ne y} \frac{|f(x) - f(y)|}{d(x, y)}.$$
</div>
$$
\lVert f \rVert_{\rm Lip} = \sup_{x \ne y} \frac{|f(x) - f(y)|}{d(x, y)}.
$$
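A quick illustration (mine, not from the paper): on $\R$ with the usual metric,
$f(x) = |x|$ satisfies
$$
\lVert f \rVert_{\rm Lip} = \sup_{x \ne y} \frac{\big||x| - |y|\big|}{|x - y|} = 1,
$$
while every constant function satisfies $\lVert f \rVert_{\rm Lip} = 0$.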
I wondered why this last nonnegative map was a *semi*-norm instead of a
genuine norm. Indeed, they treated it
......@@ -31,49 +31,35 @@ the Lipschitz norm. One may assume, without loss of generality, that there
exists a point $x_0 \in S$ ($\mathscr{X}$ in the paper) such that $f_n(x_0)$
vanishes for every $n \in \N$.
<p>
It's classical to start with a Cauchy sequence $(f_n)$ for which we establish
a pointwise limit $f_\infty$. The previous paragraph justifies the following
equality.
</p>
It's classical to start with a Cauchy sequence $$(f_n)$$ for which we establish
a pointwise limit $$f_\infty$$. The previous paragraph justifies the following
equality.
<div>
$$
\begin{aligned}
|f_{n+k}(x)-f_n(x)| &= |(f_{n+k}(x)-f_n(x))-(f_{n+k}(x_0)-f_n(x_0))| \\
&\le \lVert f_{n+k} - f_n \rVert_{\rm Lip} \; d(x,x_0)
\end{aligned}
$$
</div>
<p>
Letting $n \to \infty$, we obtain $f_n \to f_\infty$ pointwise.
</p>
Letting $n \to \infty$, we obtain $$f_n \to f_\infty$$ pointwise.
<p>
Let $\epsilon > 0$. There exists $N \in \N$ such that $\forall n \ge N, \forall k
\in \N$, $\lVert f_{n+k} - f_n \rVert_{\rm Lip} < \epsilon$.
</p>
Let $\epsilon > 0$. There exists $N \in \N$ such that $\forall n \ge N, \forall k
\in \N$, $$\lVert f_{n+k} - f_n \rVert_{\rm Lip} < \epsilon$$.
<div>
$$
\forall n \ge N, \forall k \in \N, \forall x \ne y,
\frac{|(f_{n+k}(x)-f_n(x)) - (f_{n+k}(y)-f_n(y))|}{d(x, y)} < \epsilon
$$
</div>
$$
\forall n \ge N, \forall k \in \N, \forall x \ne y,
\frac{|(f_{n+k}(x)-f_n(x)) - (f_{n+k}(y)-f_n(y))|}{d(x, y)} < \epsilon
$$
Let $k \to \infty$.
<div>
$$
\forall n \ge N, \forall x \ne y,
\frac{|(f_\infty(x)-f_n(x)) - (f_\infty(y)-f_n(y))|}{d(x, y)} \le \epsilon
$$
</div>
$$
\forall n \ge N, \forall x \ne y,
\frac{|(f_\infty(x)-f_n(x)) - (f_\infty(y)-f_n(y))|}{d(x, y)} \le \epsilon
$$
<p>
Hence $\forall n \ge N, \lVert f_\infty-f_n \rVert_{\rm Lip} \le \epsilon$, which
completes the proof.
</p>
Hence $$\forall n \ge N, \lVert f_\infty-f_n \rVert_{\rm Lip} \le \epsilon$$, which
completes the proof.
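Out of curiosity, here is a tiny numerical illustration of the pointwise
control $|f(x) - f(x_0)| \le \lVert f \rVert_{\rm Lip}\, d(x, x_0)$ used above.
This is my own sketch, not part of the original post: the function $\cos$, the
interval and the grid are arbitrary choices, and the seminorm is only estimated
on grid points.

```python
import numpy as np

f = np.cos                       # any Lipschitz function on [0, 5]
xs = np.linspace(0.0, 5.0, 501)  # grid points
x0 = 0.0                         # plays the role of the base point x_0

# discrete analogue of sup_{x != y} |f(x) - f(y)| / d(x, y)
dx = np.subtract.outer(xs, xs)
df = np.subtract.outer(f(xs), f(xs))
mask = dx != 0
lip = np.max(np.abs(df[mask]) / np.abs(dx[mask]))

# pointwise control used in the proof: |f(x) - f(x0)| <= ||f||_Lip * d(x, x0)
assert np.all(np.abs(f(xs) - f(x0)) <= lip * np.abs(xs - x0) + 1e-12)
print(f"estimated Lipschitz seminorm of cos on [0, 5]: {lip:.4f}")  # close to 1
```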
[1]: http://archive.numdam.org/article/ASENS_1953_3_70_3_267_0.pdf
......@@ -10,30 +10,39 @@ draft: false
markup: mmark
---
Oriented affine $k$-simplex $\sigma = [{\bf p}\_0,{\bf p}\_1,\dots,{\bf p}\_k]$
Oriented affine $k$-simplex $\sigma = [{\bf p}_0,{\bf p}_1,\dots,{\bf p}_k]$
: A $k$-surface given by the *affine* function
$$\sigma\left(\sum\_{i=1}^k a\_i {\bf e}\_i \right) := {\bf p}\_0 +
\sum\_{i=1}^k a\_i ({\bf p}\_i - {\bf p}\_0) \tag{1},$$
where ${\bf p}\_i \in \R^n$ for all $i \in \\{1,\dots,k\\}$.
In particular, $\sigma({\bf 0})={\bf p}\_0$ and for each $i\in\\{1,\dots,k\\}$,
$\sigma({\bf e}\_i)={\bf p}\_i$.
Standard simplex $Q^k := [{\bf 0}, {\bf e}\_1, \dots, {\bf e}\_k]$
$$
\sigma\left(\sum_{i=1}^k a_i {\bf e}_i \right) := {\bf p}_0 +
\sum_{i=1}^k a_i ({\bf p}_i - {\bf p}_0) \tag{1},
$$
where ${\bf p}_i \in \R^n$ for all $i \in \{1,\dots,k\}$.
In particular, $\sigma({\bf 0})={\bf p}_0$ and for each $i\in\{1,\dots,k\}$,
$\sigma({\bf e}_i)={\bf p}_i$.
Standard simplex $Q^k := [{\bf 0}, {\bf e}_1, \dots, {\bf e}_k]$
: A particular type of oriented affine $k$-simplex with the standard basis
$\\{{\bf e}\_1, \dots, {\bf e}\_k\\}$ of $\R^k$.
$$Q^k := \left\\{ \sum\_{i=1}^k a\_i {\bf e}\_i \Biggm|
\forall i \in \\{1,\dots,k\\}, a_i \ge 0, \sum\_{i=1}^k a\_i \le 1 \right\\}$$
$\{{\bf e}_1, \dots, {\bf e}_k\}$ of $\R^k$.
Note that an oriented affine $k$-simplex $\sigma$ has *parameter domain* $Q^k$.
$$
Q^k := \left\{ \sum_{i=1}^k a_i {\bf e}_i \Biggm|
\forall i \in \{1,\dots,k\}, a_i \ge 0, \sum_{i=1}^k a_i \le 1 \right\}
$$
Note that an oriented affine $k$-simplex $\sigma$ has *parameter domain*
$Q^k$.
Affine $k$-chain $\Gamma$
: a finite collection of oriented affine $k$-simplexes $\sigma_1,\dots,\sigma_r$
Boundary of an oriented affine $k$-simplex $\partial \sigma$
: an affine $k-1$-chain
$$
\partial \sigma = \sum_{j=0}^k (-1)^j \;\underbrace{
[{\bf p}\_0,\dots,{\bf p}\_{j-1},{\bf p}\_{j+1},\dots,{\bf p}\_k]}\_{{\bf p}\_j
[{\bf p}_0,\dots,{\bf p}_{j-1},{\bf p}_{j+1},\dots,{\bf p}_k]}_{{\bf p}_j
\text{ removed}} \tag{2}
$$
......@@ -42,17 +51,21 @@ We shall make _no_ use of the following proposition.
> **Proposition** Let $\sigma$ be an oriented affine $k$-simplex.
> Then $\partial^2 \sigma = 0$.
>
> _Proof_: In this $k-2$-chain, each $k-2$-simplex with ${\bf p}\_i$ and
> ${\bf p}\_j$ removed can be obtained in two ways. WLOG, assume $i < j$.
> _Proof_: In this $k-2$-chain, each $k-2$-simplex with ${\bf p}_i$ and
> ${\bf p}_j$ removed can be obtained in two ways. WLOG, assume $i < j$.
>
> 1. ${\bf p}\_i$ removed first, followed by ${\bf p}\_j$. These two operations
> 1. ${\bf p}_i$ removed first, followed by ${\bf p}_j$. These two operations
> give factors $(-1)^i$ and $(-1)^{j-1}$. The "$-1$" in the exponent "$j-1$" is
> a result of ${\bf p}\_j$'s left-shifting after removal of ${\bf p}\_i$.
> 2. ${\bf p}\_j$ removed first, followed by ${\bf p}\_i$. These two operations
> give factors $(-1)^j$ and $(-1)^i$. Removing ${\bf p}\_j$ *doesn't* affect
> ${\bf p}\_i$'s position.
> a result of ${\bf p}_j$'s left-shifting after removal of ${\bf p}_i$.
> 2. ${\bf p}_j$ removed first, followed by ${\bf p}_i$. These two operations
> give factors $(-1)^j$ and $(-1)^i$. Removing ${\bf p}_j$ *doesn't* affect
> ${\bf p}_i$'s position.
>
> These two factors cancel each other.
>
> These two factors cancel each other. $$\tag*{$\square$}$$
> $$
> \tag*{$\square$}
> $$
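To see the cancellation concretely, here is the case $k = 2$ (my own example,
spelled out with $(2)$):
$$
\begin{aligned}
\partial \sigma &= [{\bf p}_1,{\bf p}_2] - [{\bf p}_0,{\bf p}_2]
+ [{\bf p}_0,{\bf p}_1], \\
\partial^2 \sigma &= ([{\bf p}_2]-[{\bf p}_1]) - ([{\bf p}_2]-[{\bf p}_0])
+ ([{\bf p}_1]-[{\bf p}_0]) = 0.
\end{aligned}
$$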
Integral of a $0$-form over an _oriented 0-simplex_
: Let $\sigma = \pm {\bf p}_0$ be an _oriented 0-simplex_.
......@@ -64,42 +77,37 @@ this to make progress.
In the proof of Stokes' Theorem, the author has taken $\sigma = Q^k$.
<div>
$$
\begin{aligned}
\partial \sigma &= [{\bf e}_1,\dots,{\bf e}_k]+\sum_{i=1}^k (-1)^i \tau_i \\
&= (-1)^{r-1} \tau_0 + \sum_{i=1}^k (-1)^i \tau_i,
\end{aligned}
$$
</div>
$$
\begin{aligned}
\partial \sigma &= [{\bf e}_1,\dots,{\bf e}_k]+\sum_{i=1}^k (-1)^i \tau_i \\
&= (-1)^{r-1} \tau_0 + \sum_{i=1}^k (-1)^i \tau_i,
\end{aligned}
$$
where $\tau_0 = [{\bf e}\_r, {\bf e}\_1, \dots, {\bf e}\_{r-1}, {\bf
e}\_{r+1},\dots, {\bf e}\_k]$, and $\tau_i = [{\bf 0}, {\bf e}\_1, \dots,
{\bf e}\_{i-1}, {\bf e}\_{i+1},\dots, {\bf e}\_k]$ for $i \in \\{1,\dots,k\\}$.
where $$\tau_0 = [{\bf e}_r, {\bf e}_1, \dots, {\bf e}_{r-1}, {\bf
e}_{r+1},\dots, {\bf e}_k]$$, and $$\tau_i = [{\bf 0}, {\bf e}_1, \dots,
{\bf e}_{i-1}, {\bf e}_{i+1},\dots, {\bf e}_k]$$ for $i \in \{1,\dots,k\}$.
Each $\tau_i$ admits $Q^{k-1}$ as its parameter domain.
Put ${\bf x} = \tau_0({\bf u})$ with ${\bf u} \in Q^{k-1}$. I need some
straightforward calculations to work out each coordinate of $\bf x$.
<div>
$$
\begin{aligned}
{\bf x} &= \tau_0({\bf u}) \\
&= [{\bf e}_r, {\bf e}_1, \dots, {\bf e}_{r-1}, {\bf e}_{r+1}, \dots,
{\bf e}_k]({\bf u}) \\
&= {\bf e}_r + u_1 ({\bf e}_1 - {\bf e}_r) + u_2 ({\bf e}_2 - {\bf e}_r) +
\dots + u_{r-1} ({\bf e}_{r-1} - {\bf e}_r) \\
&+ u_r ({\bf e}_{r+1} - {\bf e}_r) + \dots + u_{k-1} ({\bf e}_k - {\bf e}_r)\\
&= \sum_{j=1}^{r-1} u_j {\bf e}_j
+ \left(1 - \sum_{i = 1}^{k-1} u_i \right) {\bf e}_r
+ \sum_{j=r+1}^k u_{j-1} {\bf e}_j \\
\therefore x_j &= \begin{cases}
u_j & 1 \le j \le r-1 \\
1 - \sum_{i = 1}^{k-1} u_i & j = r \\
u_{j-1} & r+1 \le j \le k
\end{cases}
\end{aligned}
$$
</div>
$$
\begin{aligned}
{\bf x} &= \tau_0({\bf u}) \\
&= [{\bf e}_r, {\bf e}_1, \dots, {\bf e}_{r-1}, {\bf e}_{r+1}, \dots,
{\bf e}_k]({\bf u}) \\
&= {\bf e}_r + u_1 ({\bf e}_1 - {\bf e}_r) + u_2 ({\bf e}_2 - {\bf e}_r) +
\dots + u_{r-1} ({\bf e}_{r-1} - {\bf e}_r) \\
&+ u_r ({\bf e}_{r+1} - {\bf e}_r) + \dots + u_{k-1} ({\bf e}_k - {\bf e}_r)\\
&= \sum_{j=1}^{r-1} u_j {\bf e}_j + \left(1 - \sum_{i = 1}^{k-1} u_i \right)
{\bf e}_r + \sum_{j=r+1}^k u_{j-1} {\bf e}_j \\[4ex]
\therefore x_j &= \begin{cases}
u_j & 1 \le j \le r-1 \\
1 - \sum_{i = 1}^{k-1} u_i & j = r \\
u_{j-1} & r+1 \le j \le k
\end{cases}
\end{aligned}
$$
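To double-check the case formula for $x_j$, here is a small numerical
verification. This is my own sketch: the choices $k = 4$, $r = 2$ and the
random ${\bf u} \in Q^{k-1}$ are arbitrary, and the code uses 0-based indices.

```python
import numpy as np

k, r = 4, 2
rng = np.random.default_rng(0)
u = rng.dirichlet(np.ones(k))[: k - 1]  # a point of Q^{k-1}: u_i >= 0, sum(u) <= 1
e = np.eye(k)                           # rows are the standard basis e_1, ..., e_k

# vertices of tau_0 = [e_r, e_1, ..., e_{r-1}, e_{r+1}, ..., e_k]
p = np.vstack([e[r - 1], e[: r - 1], e[r:]])

# affine map (1): tau_0(u) = p_0 + sum_i u_i (p_i - p_0)
x = p[0] + sum(u[i] * (p[i + 1] - p[0]) for i in range(k - 1))

# claimed coordinates: x_j = u_j (j < r), x_r = 1 - sum(u), x_j = u_{j-1} (j > r)
expected = np.concatenate([u[: r - 1], [1.0 - u.sum()], u[r - 1:]])
assert np.allclose(x, expected)
print(x)
```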
Reference: Rudin's PMA
......@@ -25,8 +25,8 @@ The problem statement
The integral test will do.
<div>
$$\begin{aligned}
$$
\begin{aligned}
& \int_3^{+\infty} \frac{1}{x\cdot\ln(x)\cdot\ln(\ln(x))^p} \,dx \\
&= \int_3^{+\infty} \frac{1}{\ln(x)\cdot\ln(\ln(x))^p} \,d(\ln x) \\
&= \int_3^{+\infty} \frac{1}{\ln(\ln(x))^p} \,d(\ln(\ln(x))) \\
......@@ -34,8 +34,8 @@ $$\begin{aligned}
[\ln(\ln(\ln(x)))]_3^{+\infty} & \text{if } p = 1 \\
\left[\dfrac{\ln(\ln(x))^{1-p}}{1-p} \right]_3^{+\infty} & \text{if } p \ne 1
\end{cases}
\end{aligned}$$
</div>
\end{aligned}
$$
When $p \le 1$, the improper integral diverges; when $p > 1$, it converges.
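As a quick sanity check (my addition, not in the original post), take $p = 2$:
$$
\int_3^{+\infty} \frac{1}{x\cdot\ln(x)\cdot\ln(\ln(x))^2} \,dx
= \left[-\frac{1}{\ln(\ln(x))}\right]_3^{+\infty}
= \frac{1}{\ln(\ln(3))} < +\infty.
$$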
......
......@@ -24,9 +24,11 @@ unsure how to do so.
https://math.stackexchange.com/q/3021650/290189, I'll assume the independence of
$(X_n)$. By Chebyshev's inequality,
> <div>$$P(|\bar{X}_n|>\epsilon) = P(|\sum_{k=1}^n X_k|> n\epsilon) \le
\frac{\sum_{k=1}^n var(X_k)}{n^2 \epsilon^2} = \frac{\sum_{k=1}^n
(\ln(k))^2}{n^2 \epsilon^2} \le \frac{(\ln(n))^2}{n \epsilon^2} \to 0$$</div>
> $$
> P(|\bar{X}_n|>\epsilon) = P(|\sum_{k=1}^n X_k|> n\epsilon) \le
> \frac{\sum_{k=1}^n var(X_k)}{n^2 \epsilon^2} = \frac{\sum_{k=1}^n
> (\ln(k))^2}{n^2 \epsilon^2} \le \frac{(\ln(n))^2}{n \epsilon^2} \to 0,
> $$
>
> where the last inequality uses $\sum_{k=1}^n (\ln(k))^2 \le n (\ln(n))^2$.
Recall: $(\ln(n))^p/n \to 0$ whenever $p \ge 1$. To see this, make a change of
variables $n = e^x$, so that it becomes $x^p / e^x$.
......
......@@ -15,8 +15,12 @@ I intend to post this for [a Borel-Cantelli lemma exercise][3099953] on
[Math.SE].
> The target event is $\{\exists i_0 \in \Bbb{N} : \forall i \ge i_0, X_i =
> 1\}$, whose complement is $$\{\forall i_0 \in \Bbb{N} : \exists i \ge i_0, X_i
> = 0\} = \limsup_i \{X_i = 0\}.$$
> 1\}$, whose complement is
>
> $$
> \{\forall i_0 \in \Bbb{N} : \exists i \ge i_0, X_i = 0\}
> = \limsup_i \{X_i = 0\}.
> $$
>
> To apply Borel-Cantelli, one has to determine whether $\sum_i P(X_i =
> 0)<+\infty$.
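For instance (my own aside, not part of the drafted answer): if
$P(X_i = 0) = 2^{-i}$, then
$$
\sum_{i\ge1} P(X_i = 0) = \sum_{i\ge1} 2^{-i} = 1 < +\infty
\quad\Longrightarrow\quad P\big(\limsup_i \{X_i = 0\}\big) = 0,
$$
so almost surely $X_i = 1$ for all large $i$.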
......
......@@ -6,10 +6,12 @@ categories:
tags:
- linear programming
- Math.SE
draft: true
draft: false
markup: mmark
---
> **Update**: The question is now _put on hold_ as unclear.
I intended to answer [김종현's problem on Math.SE][1]. However, the programs in
the question body _aren't_ typeset in [MathJax][2]. As a result, I downvoted
and closed this question because I found it _unclear_. From the proposed dual,
......@@ -19,20 +21,16 @@ further. Here's my intended answer:
> First, you have to properly write the primal as
<div> $$\begin{array}{ccc} a & b & c \\ \hline d & e & f \\ \hdashline g & h & i \end{array}$$ </div>
<div>
$$
\begin{array}{rrrrrrrrrrrrrrr}
\max \quad & z = & 3w_1 & + & 4 w_2 & + & 5w_3 & && \\
\mbox{s.t.} \quad & & w_1 & - & w_2 & & & - & \varepsilon_1 & & & & & \le 0 && \\
& & & & w_2 & - & w_3 & & & - & \varepsilon_2 & & & \le 0 && \\
& & & & & & w_3 & & & & & - & \varepsilon_3 & \le 0&& \\
\begin{alignedat}{8}
\max \quad & z = & 3w_1 & + & 4 w_2 & + & 5w_3 & && \\
\text{s.t.} \quad & & w_1 & - & w_2 & & & - & \varepsilon_1 & & & & & \le 0 && \\
& & & & w_2 & - & w_3 & & & - & \varepsilon_2 & & & \le 0 && \\
& & & & & & w_3 & & & & & - & \varepsilon_3 & \le 0&& \\
& & & & & & & & 2\varepsilon_1 & + & 3\varepsilon_2 & + & 4\varepsilon_3 &\ge 1 && \\
& & w_1 & + & w_2 & + & w_3 & & & & & & & = 1, &&
\end{array}
\end{alignedat}
$$
</div>
> Your claimed dual
>
......
$(document).ready(function() {
renderMathInElement(document.body, {
delimiters: [
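// note: "$$" is listed before "$" so that display-math delimiters are
// matched before inline ones (delimiter order matters in auto-render)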
{left: "$", right: "$", display: false},
{left: "$$", right: "$$", display: true},
{left: "$", right: "$", display: false}
],
macros: {
......