\section{Smallest Triplet of Consecutive Integers Divisible by Cube}
Tags: Cube Numbers
\begin{theorem}
The smallest triplet of consecutive integers each of which is divisible by a cube greater than $1$ is:
:$\tuple {1375, 1376, 1377}$
\end{theorem}
\begin{proof}
We will show that:
{{begin-eqn}}
{{eqn | l = 1375
| r = 11 \times 5^3
| c =
}}
{{eqn | l = 1376
| r = 172 \times 2^3
| c =
}}
{{eqn | l = 1377
| r = 51 \times 3^3
| c =
}}
{{end-eqn}}
is the smallest such triplet.
Each number in such a triplet of consecutive integers must be divisible by the cube of some prime number.
The only primes less than $\sqrt [3] {1377}$ are $2, 3, 5, 7, 11$.
Since the numbers involved are small, the result can be verified by brute force, checking every triplet of consecutive integers below $1375$.
For more general results of this kind, the Chinese Remainder Theorem can be used.
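The brute-force check can be sketched in Python (the search bound $10^4$ is our assumption, ample for this range):

```python
def has_cube_factor(n):
    # n is divisible by a cube > 1 iff it is divisible by k^3 for some k > 1;
    # if a composite k works, the cube of any prime factor of k works too
    k = 2
    while k * k * k <= n:
        if n % (k * k * k) == 0:
            return True
        k += 1
    return False

# first n such that n, n+1, n+2 are all divisible by a cube > 1
smallest = next(n for n in range(2, 10 ** 4)
                if all(has_cube_factor(n + i) for i in range(3)))
```

Running this confirms the triplet starting at $1375$.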
\end{proof}
\section{Smallest Triplet of Consecutive Integers each Divisible by Fourth Power}
Tags: Fourth Powers
\begin{theorem}
This triplet of consecutive integers has the property that each of them is divisible by a fourth power:
:$33 \, 614, 33 \, 615, 33 \, 616$
This is the smallest such triplet.
\end{theorem}
\begin{proof}
{{begin-eqn}}
{{eqn | l = 33 \, 614
| r = 14 \times 7^4
| c =
}}
{{eqn | l = 33 \, 615
| r = 415 \times 3^4
| c =
}}
{{eqn | l = 33 \, 616
| r = 2101 \times 2^4
| c =
}}
{{end-eqn}}
Each number in such a triplet of consecutive integers is divisible by a fourth power of some prime number.
Only $2, 3, 5, 7, 11, 13$ are less than $\sqrt [4] {33 \, 616}$.
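As with the cube case, the claim of minimality can be checked by brute force; a sketch (the bound $10^5$ is our assumption, ample here):

```python
def has_fourth_power_factor(n):
    # n is divisible by a fourth power > 1 iff k^4 | n for some k > 1
    k = 2
    while k ** 4 <= n:
        if n % k ** 4 == 0:
            return True
        k += 1
    return False

# first n such that n, n+1, n+2 are all divisible by a fourth power > 1
smallest = next(n for n in range(2, 10 ** 5)
                if all(has_fourth_power_factor(n + i) for i in range(3)))
```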
\end{proof}
\section{Smallest Triplet of Primitive Pythagorean Triangles with Same Area}
Tags: 13,123,110, Specific Numbers
\begin{theorem}
The smallest set of $3$ primitive Pythagorean triangles which all have the same area are:
:the $4485-5852-7373$ triangle
:the $3059-8580-9109$ triangle
:the $1380-19 \, 019-19 \, 069$ triangle.
That area is $13 \, 123 \, 110$.
\end{theorem}
\begin{proof}
We have that:
:the $4485-5852-7373$ triangle $T_1$ is Pythagorean
:the $3059-8580-9109$ triangle $T_2$ is Pythagorean
:the $1380-19 \, 019-19 \, 069$ triangle $T_3$ is Pythagorean.
Then from Area of Triangle, their areas $A_1$, $A_2$ and $A_3$ respectively are given by:
{{begin-eqn}}
{{eqn | l = A_1
| r = \dfrac {4485 \times 5852} 2
| c =
}}
{{eqn | r = \dfrac {\paren {3 \times 5 \times 13 \times 23} \times \paren {2^2 \times 7 \times 11 \times 19} } 2
| c =
}}
{{eqn | r = 2 \times 3 \times 5 \times 7 \times 11 \times 13 \times 19 \times 23
| c =
}}
{{eqn | r = 13 \, 123 \, 110
| c =
}}
{{end-eqn}}
{{begin-eqn}}
{{eqn | l = A_2
| r = \dfrac {3059 \times 8580} 2
| c =
}}
{{eqn | r = \dfrac {\paren {7 \times 19 \times 23} \times \paren {2^2 \times 3 \times 5 \times 11 \times 13} } 2
| c =
}}
{{eqn | r = 2 \times 3 \times 5 \times 7 \times 11 \times 13 \times 19 \times 23
| c =
}}
{{eqn | r = 13 \, 123 \, 110
| c =
}}
{{end-eqn}}
{{begin-eqn}}
{{eqn | l = A_3
| r = \dfrac {1380 \times 19 \, 019} 2
| c =
}}
{{eqn | r = \dfrac {\paren {2^2 \times 3 \times 5 \times 23} \times \paren {7 \times 11 \times 13 \times 19} } 2
| c =
}}
{{eqn | r = 2 \times 3 \times 5 \times 7 \times 11 \times 13 \times 19 \times 23
| c =
}}
{{eqn | r = 13 \, 123 \, 110
| c =
}}
{{end-eqn}}
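The stated properties of the three triangles can be confirmed by a quick computational sanity check:

```python
from math import gcd

# the three triangles from the statement
triangles = [(4485, 5852, 7373), (3059, 8580, 9109), (1380, 19019, 19069)]

for a, b, c in triangles:
    assert a * a + b * b == c * c   # Pythagorean
    assert gcd(a, b) == 1           # primitive

# all three areas coincide
areas = {a * b // 2 for a, b, c in triangles}
```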
{{ProofWanted|It remains to be shown that this is the smallest such triple.}}
\end{proof}
\section{Smallest n for which 3 over n produces 3 Egyptian Fractions using Greedy Algorithm when 2 Sufficient}
Tags: Fibonacci's Greedy Algorithm, 25
\begin{theorem}
Consider proper fractions of the form $\dfrac 3 n$ expressed in canonical form.
Let Fibonacci's Greedy Algorithm be used to generate a sequence $S$ of Egyptian fractions for $\dfrac 3 n$.
The smallest $n$ for which $S$ consists of $3$ terms, where $2$ would be sufficient, is $25$.
\end{theorem}
\begin{proof}
We have that:
{{begin-eqn}}
{{eqn | l = \frac 3 {25}
| r = \frac 1 9 + \frac 2 {225}
| c = as $\ceiling {25 / 3} = \ceiling {8.333\ldots} = 9$
}}
{{eqn | r = \frac 1 9 + \frac 1 {113} + \frac 1 {25 \, 425}
| c = as $\ceiling {225 / 2} = \ceiling {112.5} = 113$
}}
{{end-eqn}}
But then we have:
{{begin-eqn}}
{{eqn | l = \frac 3 {25}
| r = \frac 6 {50}
| c =
}}
{{eqn | r = \frac 5 {50} + \frac 1 {50}
| c =
}}
{{eqn | r = \frac 1 {10} + \frac 1 {50}
| c =
}}
{{end-eqn}}
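Fibonacci's Greedy Algorithm itself is short enough to sketch in Python (the function name is ours, not a standard API):

```python
from fractions import Fraction
from math import ceil

def fibonacci_greedy(frac):
    """Expand a proper fraction as a sum of unit fractions, greedily
    taking the largest unit fraction not exceeding the remainder."""
    denominators = []
    while frac > 0:
        d = ceil(1 / frac)          # smallest d with 1/d <= frac
        denominators.append(d)
        frac -= Fraction(1, d)
    return denominators
```

For $\dfrac 3 {25}$ this produces the denominators $9, 113, 25 \, 425$, matching the expansion above.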
By Condition for 3 over n producing 3 Egyptian Fractions using Greedy Algorithm when 2 Sufficient, we are to find the smallest $n$ such that:
:$n \equiv 1 \pmod 6$
:$\exists d: d \divides n$ and $d \equiv 2 \pmod 3$
The first few $n \ge 4$ which satisfy $n \equiv 1 \pmod 6$ are:
:$7, 13, 19, 25$
of which $7, 13, 19$ are primes, so they do not have a divisor of the form $d \equiv 2 \pmod 3$.
We see that $5 \divides 25$ and $5 \equiv 2 \pmod 3$.
Hence the result.
{{qed}}
\end{proof}
\section{Smallest n needing 6 Numbers less than n so that Product of Factorials is Square}
Tags: 527, Factorials, Square Numbers
\begin{theorem}
Let $n \in \Z_{>0}$ be a positive integer.
Then it is possible to choose at most $6$ positive integers not greater than $n$ such that the product of their factorials is square.
The smallest $n$ that actually requires $6$ numbers to be chosen is $527$.
\end{theorem}
\begin{proof}
Obviously the product cannot be a square if $n$ is a prime.
For $n$ composite, we can write:
:$n = a b$
where $a, b \in \Z_{>1}$.
Then:
{{begin-eqn}}
{{eqn | o =
| r = n! \paren {n - 1}! \paren {a!} \paren {a - 1}! \paren {b!} \paren {b - 1}!
}}
{{eqn | r = n a b \paren {\paren {n - 1}! \paren {a - 1}! \paren {b - 1}!}^2
}}
{{eqn | r = \paren {n! \paren {a - 1}! \paren {b - 1}!}^2
}}
{{end-eqn}}
which is a square.
Hence no more than $6$ factorials are required.
To show that $527$ is the smallest that actually requires $6$, observe that:
{{tidy}}
{{explain|It might be worth extracting some of the below statements into lemmata, for example: "If $n$ is itself square, then so is $n! \paren {n - 1}!$" and "... Then $n! \paren {n - 1}! b! \paren {b - 1}!$ is square" -- they're really easy to prove, even I can do them :-) but it takes more than a glance to recognise that they are true.}}
If $n$ is itself square, then so is $n! \paren {n - 1}!$.
If $n$ is not square-free, write $n = a^2 b$, where $b$ is square-free.
Then $n! \paren {n - 1}! b! \paren {b - 1}!$ is square.
If $n$ is divisible by $2$, write $n = 2 m$.
Then $\paren {2 m}! \paren {2 m - 1}! \paren {m!} \paren {m - 1}! \paren {2!}$ is square.
If $n$ is divisible by $3$, write $n = 3 m$.
Then $\paren {3 m}! \paren {3 m - 1}! \paren {2 m}! \paren {2 m - 1}! \paren {3!}$ is square.
If $n$ is divisible by $5$, write $n = 5 m$.
Then $\paren {5 m}! \paren {5 m - 1}! \paren {m!} \paren {m - 1}! \paren {6!}$ is square.
If $n$ is divisible by $7$, write $n = 7 m$.
Then $\paren {7 m}! \paren {7 m - 1}! \paren {5 m}! \paren {5 m - 1}! \paren {7!}$ is square.
If $n$ is divisible by $11$, write $n = 11 m$.
Then $\paren {11 m}! \paren {11 m - 1}! \paren {7 m}! \paren {7 m - 1}! \paren {11!}$ is square.
The remaining numbers less than $527$ that are not of the above forms are:
:$221, 247, 299, 323, 377, 391, 403, 437, 481, 493$
Each of the following is a product of at most $5$ factorials which is square:
:$221! \, 220! \, 18! \, 11! \, 7!$
:$247! \, 246! \, 187! \, 186! \, 20!$
:$299! \, 298! \, 27! \, 22!$
:$323! \, 322! \, 20! \, 14! \, 6!$
:$377! \, 376! \, 29! \, 23! \, 10!$
:$391! \, 389! \, 24! \, 21! \, 17!$
:$403! \, 402! \, 33! \, 30! \, 14!$
:$437! \, 436! \, 51! \, 49! \, 28!$
:$481! \, 479! \, 38! \, 33! \, 22!$
:$493! \, 491! \, 205! \, 202! \, 7!$
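Each such product can be verified mechanically; a minimal sketch of such a checker:

```python
from math import factorial, isqrt

def product_of_factorials_is_square(ks):
    """Check whether k_1! * k_2! * ... * k_r! is a perfect square."""
    prod = 1
    for k in ks:
        prod *= factorial(k)
    return isqrt(prod) ** 2 == prod
```

For example, `product_of_factorials_is_square([221, 220, 18, 11, 7])` returns `True`.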
{{finish|The fact that $527$ has no such representation can be verified by a direct (but lengthy) computation.}}
\end{proof}
\section{Smallest n such that 6 n + 1 and 6 n - 1 are both Composite}
Tags: Prime Numbers, 20
\begin{theorem}
The smallest positive integer $n$ such that $6 n + 1$ and $6 n - 1$ are both composite is $20$.
\end{theorem}
\begin{proof}
Running through the positive integers in turn:
{{begin-eqn}}
{{eqn | l = 6 \times 1 - 1
| r = 5
| c = which is prime
}}
{{eqn | l = 6 \times 1 + 1
| r = 7
| c = which is prime
}}
{{end-eqn}}
{{begin-eqn}}
{{eqn | l = 6 \times 2 - 1
| r = 11
| c = which is prime
}}
{{eqn | l = 6 \times 2 + 1
| r = 13
| c = which is prime
}}
{{end-eqn}}
{{begin-eqn}}
{{eqn | l = 6 \times 3 - 1
| r = 17
| c = which is prime
}}
{{eqn | l = 6 \times 3 + 1
| r = 19
| c = which is prime
}}
{{end-eqn}}
{{begin-eqn}}
{{eqn | l = 6 \times 4 - 1
| r = 23
| c = which is prime
}}
{{eqn | l = 6 \times 4 + 1
| r = 25 = 5^2
| c = and so is composite
}}
{{end-eqn}}
{{begin-eqn}}
{{eqn | l = 6 \times 5 - 1
| r = 29
| c = which is prime
}}
{{eqn | l = 6 \times 5 + 1
| r = 31
| c = which is prime
}}
{{end-eqn}}
{{begin-eqn}}
{{eqn | l = 6 \times 6 - 1
| r = 35 = 5 \times 7
| c = and so is composite
}}
{{eqn | l = 6 \times 6 + 1
| r = 37
| c = which is prime
}}
{{end-eqn}}
{{begin-eqn}}
{{eqn | l = 6 \times 7 - 1
| r = 41
| c = which is prime
}}
{{eqn | l = 6 \times 7 + 1
| r = 43
| c = which is prime
}}
{{end-eqn}}
{{begin-eqn}}
{{eqn | l = 6 \times 8 - 1
| r = 47
| c = which is prime
}}
{{eqn | l = 6 \times 8 + 1
| r = 49 = 7^2
| c = and so is composite
}}
{{end-eqn}}
{{begin-eqn}}
{{eqn | l = 6 \times 9 - 1
| r = 53
| c = which is prime
}}
{{eqn | l = 6 \times 9 + 1
| r = 55 = 5 \times 11
| c = and so is composite
}}
{{end-eqn}}
{{begin-eqn}}
{{eqn | l = 6 \times 10 - 1
| r = 59
| c = which is prime
}}
{{eqn | l = 6 \times 10 + 1
| r = 61
| c = which is prime
}}
{{end-eqn}}
{{begin-eqn}}
{{eqn | l = 6 \times 11 - 1
| r = 65 = 5 \times 13
| c = and so is composite
}}
{{eqn | l = 6 \times 11 + 1
| r = 67
| c = which is prime
}}
{{end-eqn}}
{{begin-eqn}}
{{eqn | l = 6 \times 12 - 1
| r = 71
| c = which is prime
}}
{{eqn | l = 6 \times 12 + 1
| r = 73
| c = which is prime
}}
{{end-eqn}}
{{begin-eqn}}
{{eqn | l = 6 \times 13 - 1
| r = 77 = 7 \times 11
| c = and so is composite
}}
{{eqn | l = 6 \times 13 + 1
| r = 79
| c = which is prime
}}
{{end-eqn}}
{{begin-eqn}}
{{eqn | l = 6 \times 14 - 1
| r = 83
| c = which is prime
}}
{{eqn | l = 6 \times 14 + 1
| r = 85 = 5 \times 17
| c = and so is composite
}}
{{end-eqn}}
{{begin-eqn}}
{{eqn | l = 6 \times 15 - 1
| r = 89
| c = which is prime
}}
{{eqn | l = 6 \times 15 + 1
| r = 91 = 7 \times 13
| c = and so is composite
}}
{{end-eqn}}
{{begin-eqn}}
{{eqn | l = 6 \times 16 - 1
| r = 95 = 5 \times 19
| c = and so is composite
}}
{{eqn | l = 6 \times 16 + 1
| r = 97
| c = which is prime
}}
{{end-eqn}}
{{begin-eqn}}
{{eqn | l = 6 \times 17 - 1
| r = 101
| c = which is prime
}}
{{eqn | l = 6 \times 17 + 1
| r = 103
| c = which is prime
}}
{{end-eqn}}
{{begin-eqn}}
{{eqn | l = 6 \times 18 - 1
| r = 107
| c = which is prime
}}
{{eqn | l = 6 \times 18 + 1
| r = 109
| c = which is prime
}}
{{end-eqn}}
{{begin-eqn}}
{{eqn | l = 6 \times 19 - 1
| r = 113
| c = which is prime
}}
{{eqn | l = 6 \times 19 + 1
| r = 115 = 5 \times 23
| c = and so is composite
}}
{{end-eqn}}
{{begin-eqn}}
{{eqn | l = 6 \times 20 - 1
| r = 119 = 7 \times 17
| c = and so is composite
}}
{{eqn | l = 6 \times 20 + 1
| r = 121 = 11^2
| c = and so is composite
}}
{{end-eqn}}
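The search above can be reproduced by a short program (the search bound $1000$ is an arbitrary safety margin):

```python
def is_prime(n):
    # trial division; adequate for numbers of this size
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# first n with both 6n - 1 and 6n + 1 composite
smallest = next(n for n in range(1, 1000)
                if not is_prime(6 * n - 1) and not is_prime(6 * n + 1))
```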
{{qed}}
\end{proof}
\section{Smith Numbers are Infinite in Number}
Tags: Smith Numbers are Infinite in Number, Open Questions, Smith Numbers
\begin{theorem}
There are infinitely many Smith numbers.
\end{theorem}
\begin{proof}
First we prove that the algorithm above does generate Smith numbers.
Let $n \ge 2$.
We have:
:$m = 10^n - 1 = 3 \times 3 \times R_n$
where $R_n$ is the repunit with $n$ digits.
We apply the Lemma, taking note that $r \ge 3$:
:$\map {S_p} m < 9 \map N m - 0.54 \times 3 = 9 n - 1.62$
Since both $\map {S_p} m$ and $9 n$ are integers, the inequality can be rewritten as:
:$h = 9 n - \map {S_p} m \ge 2$
By the Division Algorithm:
:$\exists! a, b \in \Z: \paren {h - 2} = 7 b + a, 0 \le a < 7$
Since $h - 2 \ge 0$, we must also have $b \ge 0$.
Take $x = a + 2$.
Then $2 \le x \le 8$ and:
:$h - 7 b = a + 2 = x$
Hence both $b, x$ exist and are within the desired range.
Note that:
:$\map {S_p} {\set {2, 3, 4, 5, 8, 7, 15} } = \set {2, 3, 4, 5, 6, 7, 8}$
so we can choose the corresponding value of $t \in \set {2, 3, 4, 5, 8, 7, 15}$ for each $2 \le x \le 8$ such that $\map {S_p} t = x$.
Since $b \ge 0$, $M = t m \times 10^b$ is an integer.
To show that $M$ is a Smith number, we need to show:
:$\map {S_p} M = \map S M$
We have:
{{begin-eqn}}
{{eqn | l = \map {S_p} M
| r = \map {S_p} t + \map {S_p} m + \map {S_p} {2^b \times 5^b}
}}
{{eqn | r = x + \paren {9 n - h} + 7 b
}}
{{eqn | r = 9 n
}}
{{end-eqn}}
Note that:
:$t \le 15 < 99 = 10^2 - 1 \le 10^n - 1 = m$
Hence we can apply Generalization of Multiple of Repdigit Base minus $1$:
{{begin-eqn}}
{{eqn | l = \map S M
| r = \map S {t m \times 10^b}
}}
{{eqn | r = \map S {\sqbrk {\paren {t - 1} \paren {10^n - t} } \times 10^b}
| c = Generalization of Multiple of Repdigit Base minus $1$
}}
{{eqn | r = \map S {\sqbrk {\paren {t - 1} \paren {m - \paren {t - 1} } } }
| c = $10^b$ only adds trailing zeros
}}
{{eqn | r = \map S m
| c = no carries occur in the subtraction $m - \paren {t - 1}$
}}
{{eqn | r = 9 n
}}
{{end-eqn}}
and thus $\map {S_p} M = \map S M = 9 n$, showing that $M$ is indeed a Smith number.
{{qed|lemma}}
Note that for each $n$, we can generate a Smith number that is greater than $m = 10^n - 1$.
To generate an infinite sequence of Smith numbers, we choose $n$ equal to the number of digits of $M$ previously generated.
Then the next Smith number will be strictly greater than the previous one, thus forming a strictly increasing infinite sequence of Smith numbers.
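As a computational illustration of the defining property $\map {S_p} M = \map S M$ (the small examples found below are standard, but are not produced by the construction above):

```python
def digit_sum(n):
    return sum(int(c) for c in str(n))

def prime_factor_digit_sum(n):
    """Sum of digit sums of the prime factors of n, counted with multiplicity."""
    total, p = 0, 2
    while p * p <= n:
        while n % p == 0:
            total += digit_sum(p)
            n //= p
        p += 1
    if n > 1:
        total += digit_sum(n)
    return total

def is_smith(n):
    # a Smith number is composite, with digit sum equal to the
    # digit sum of its prime factorization
    composite = any(n % d == 0 for d in range(2, int(n ** 0.5) + 1))
    return composite and digit_sum(n) == prime_factor_digit_sum(n)

smiths_below_100 = [n for n in range(2, 100) if is_smith(n)]
```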
{{qed}}
\end{proof}
\section{Smooth Homotopy is an Equivalence Relation}
Tags: Topology, Homotopy Theory
\begin{theorem}
Let $X$ and $Y$ be smooth manifolds.
Let $K \subseteq X$ be a (possibly empty) subset of $X$.
Let $\map {\CC^\infty} {X, Y}$ be the set of all smooth mappings from $X$ to $Y$.
Define a relation $\sim$ on $\map {\CC^\infty} {X, Y}$ by $f \sim g$ if $f$ and $g$ are smoothly homotopic relative to $K$.
Then $\sim$ is an equivalence relation.
\end{theorem}
\begin{proof}
We examine each condition for equivalence.
\end{proof}
\section{Smooth Real Function times Derivative of Dirac Delta Distribution}
Tags: Dirac Delta Distribution, Dirac Delta Function
\begin{theorem}
Let $\alpha \in \map {\CC^\infty} \R$ be a smooth real function.
Let $\delta \in \map {\DD'} \R$ be the Dirac delta distribution.
Then in the distributional sense it holds that:
:$\alpha \cdot \delta' = \map \alpha 0 \delta' - \map {\alpha'} 0 \delta$
\end{theorem}
\begin{proof}
Let $\phi \in \map \DD \R$ be a test function.
{{begin-eqn}}
{{eqn | l = \map {\alpha \cdot \delta'} \phi
| r = \map {\delta'} {\alpha \phi}
| c = {{Defof|Multiplication of Distribution by Smooth Function}}
}}
{{eqn | r = - \map \delta {\paren {\alpha \phi}'}
| c = {{Defof|Distributional Derivative}}
}}
{{eqn | r = - \map \delta {\alpha' \phi + \alpha \phi'}
}}
{{eqn | r = - \map {\alpha'} 0 \map \phi 0 - \map \alpha 0 \map {\phi'} 0
| c = {{Defof|Dirac Delta Distribution}}
}}
{{eqn | r = -\map {\alpha'} 0 \map \delta \phi - \map \alpha 0 \map \delta {\phi'}
| c = {{Defof|Dirac Delta Distribution}}
}}
{{eqn | r = -\map {\alpha'} 0 \map \delta \phi + \map \alpha 0 \map {\delta'} \phi
| c = {{Defof|Distributional Derivative}}
}}
{{end-eqn}}
{{qed}}
\end{proof}
\section{Smooth Real Function times Derivative of Dirac Delta Distribution/Corollary}
Tags: Dirac Delta Function
\begin{theorem}
Let $\delta \in \map {\DD'} \R$ be the Dirac delta distribution.
Then in the distributional sense it holds that:
:$x \delta' = - \delta$
\end{theorem}
\begin{proof}
From Smooth Real Function times Derivative of Dirac Delta Distribution:
:$\alpha \cdot \delta' = \map \alpha 0 \delta' - \map {\alpha'} 0 \delta$
where $\alpha$ is a smooth function.
If $\map \alpha x = x$, then:
:$x \delta' = - \delta$
{{qed}}
\end{proof}
\section{Smullyan's Drinking Principle}
Tags: Smullyan's Drinking Principle, Logic, Veridical Paradoxes, Paradoxes
\begin{theorem}
Suppose that there is at least one person in the pub.
Then there is a person $x$ in the pub such that if $x$ is drinking, then everyone in the pub is drinking.
\end{theorem}
\begin{proof}
We have two choices:
:$\forall y : \map D y$
and
:$\neg \forall y : \map D y$
Suppose $\forall y : \map D y$.
By True Statement is implied by Every Statement:
:$\map D x \implies \forall y : \map D y$
By Existential Generalisation:
:$\exists x : \paren{ \map D x \implies \forall y : \map D y }$
Now suppose:
:$\neg \forall y : \map D y$
By De Morgan's Laws (Predicate Logic)/Denial of Universality:
:$\exists y : \neg \map D y$
Switch the variable $y$ with $x$.
Thus, for some $x$:
:$\neg \map D x$
By False Statement implies Every Statement, we have:
:$\map D x \implies \forall y : \map D y$
By Existential Generalisation:
:$\exists x : \paren{ \map D x \implies \forall y : \map D y }$
Thus, $\exists x : \paren{ \map D x \implies \forall y : \map D y }$ holds both when:
:$\forall y : \map D y$
and when:
:$\neg \forall y : \map D y$
concluding the proof.
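Since the statement is first-order over the pub's inhabitants, it can also be checked exhaustively over small finite models; a sketch (the bound of $4$ people is an arbitrary choice):

```python
from itertools import product

def principle_holds(drinking):
    """True iff some person x satisfies: D(x) => (forall y: D(y)).
    drinking is a sequence of booleans, one per person in the pub."""
    return any((not d) or all(drinking) for d in drinking)

# every nonempty pub of up to 4 people, every possible drinking pattern
all_models = all(principle_holds(assignment)
                 for size in range(1, 5)
                 for assignment in product([False, True], repeat=size))
```

Note that the empty pub is a genuine counterexample, which is why the hypothesis that the pub is nonempty is needed.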
{{qed}}
\end{proof}
\section{Snake Lemma}
Tags: Named Theorems, Homological Algebra
\begin{theorem}
Let $A$ be a commutative ring with unity.
Let:
::$\begin{xy}\xymatrix@L+2mu@+1em{
&
M_1 \ar[r]_*{\alpha_1}
\ar[d]^*{\phi_1}
&
M_2 \ar[r]_*{\alpha_2}
\ar[d]^*{\phi_2}
&
M_3 \ar[d]^*{\phi_3}
\ar[r]
&
0
\\
0 \ar[r]
&
N_1 \ar[r]_*{\beta_1}
&
N_2 \ar[r]_*{\beta_2}
&
N_3
&
}\end{xy}$
be a commutative diagram of $A$-modules.
Suppose that the rows are exact.
Then we have a commutative diagram:
::$\begin{xy}\xymatrix@L+2mu@+1em{
&
\ker \phi_1 \ar[r]_*{\tilde\alpha_1}
\ar[d]^*{\iota_1}
&
\ker \phi_2 \ar[r]_*{\tilde\alpha_2}
\ar[d]^*{\iota_2}
&
\ker \phi_3
\ar[d]^*{\iota_3}
&
\\
&
M_1 \ar[r]_*{\alpha_1}
\ar[d]^*{\phi_1}
&
M_2 \ar[r]_*{\alpha_2}
\ar[d]^*{\phi_2}
&
M_3 \ar[d]^*{\phi_3}
\ar[r]
&
0
\\
0 \ar[r]
&
N_1 \ar[r]_*{\beta_1}
\ar[d]^*{\pi_1}
&
N_2 \ar[r]_*{\beta_2}
\ar[d]^*{\pi_2}
&
N_3 \ar[d]^*{\pi_3}
\\
&
\operatorname{coker} \phi_1 \ar[r]_*{\bar\beta_1}
&
\operatorname{coker} \phi_2 \ar[r]_*{\bar\beta_2}
&
\operatorname{coker} \phi_3
&
}\end{xy}$
where, for $i = 1, 2, 3$ (respectively $i = 1, 2$ for $\tilde \alpha_i$ and $\bar \beta_i$):
* $\ker \phi_i$ is the kernel of $\phi_i$
* $\operatorname{coker} \phi_i$ is the cokernel of $\phi_i$
* $\iota_i$ is the inclusion mapping
* $\pi_i$ is the quotient epimorphism
* $\tilde \alpha_i = \alpha_i {\restriction \ker \phi_i}$, the restriction of $\alpha_i$ to $\ker \phi_i$
* $\bar \beta_i$ is defined by:
::$\forall n_i + \Img {\phi_i} \in \operatorname {coker} \phi_i : \map {\bar \beta_i} {n_i + \Img {\phi_i} } = \map {\beta_i} {n_i} + \Img {\phi_{i + 1} }$
Moreover there exists a morphism $\delta : \ker \phi_3 \to \operatorname{coker} \phi_1$ such that we have an exact sequence:
::$\begin{xy}\xymatrix@L+2mu@+1em{
\ker \phi_1 \ar[r]_*{\tilde\alpha_1}
&
\ker \phi_2 \ar[r]_*{\tilde\alpha_2}
&
\ker\phi_3 \ar[r]_*{\delta}
&
\operatorname{coker}\phi_1 \ar[r]_*{\bar\beta_1}
&
\operatorname{coker}\phi_2 \ar[r]_*{\bar\beta_2}
&
\operatorname{coker}\phi_3
}\end{xy}$
\end{theorem}
\begin{proof}
{{ProofWanted}}
Category:Homological Algebra
Category:Named Theorems
\end{proof}
\section{Socrates is Mortal}
Tags: Predicate Logic, Classic Problems, Socrates is Mortal, Logic
\begin{theorem}
:$(1): \quad$ ''All humans are mortal.''
:$(2): \quad$ ''{{AuthorRef|Socrates}} is human.''
:$(3): \quad$ ''Therefore {{AuthorRef|Socrates}} is mortal.''
\end{theorem}
\begin{proof}
Let $x$ be an object variable from the universe of '''rational beings'''.
Let $\map H x$ denote the propositional function ''$x$ is '''human'''''.
Let $\map M x$ denote the propositional function ''$x$ is '''mortal'''''.
Let $S$ be a proper name that denotes {{AuthorRef|Socrates}}.
The argument can then be expressed as:
{{begin-eqn}}
{{eqn | n = 1
| q = \forall x
| l = \map H x
| o = \implies
| r = \map M x
| c =
}}
{{eqn | ll= \therefore
| l = \map H S
| o = \implies
| r = \map M S
| c = Universal Instantiation
}}
{{eqn | n = 2
| l = \map H S
| o =
| c =
}}
{{eqn | n = 3
| ll= \therefore
| l = \map M S
| o =
| c = Modus Ponendo Ponens
}}
{{end-eqn}}
That is:
:''{{AuthorRef|Socrates}} is mortal.''
{{qed}}
\end{proof}
\section{Socrates is Mortal/Variant}
Tags: Propositional Logic, Classic Problems, Socrates is Mortal
\begin{theorem}
:$(1): \quad$ ''If {{AuthorRef|Socrates}} is a man then {{AuthorRef|Socrates}} is mortal.''
:$(2): \quad$ ''{{AuthorRef|Socrates}} is a man.''
:$(3): \quad$ ''Therefore {{AuthorRef|Socrates}} is mortal.''
\end{theorem}
\begin{proof}
Let $P$ denote the simple statement ''{{AuthorRef|Socrates}} is a man.''.
Let $Q$ denote the simple statement ''{{AuthorRef|Socrates}} is mortal.''.
The argument can then be expressed as:
{{begin-eqn}}
{{eqn | n = 1
| l = P
| o = \implies
| r = Q
| c =
}}
{{eqn | n = 2
| l = P
| o =
| c =
}}
{{eqn | n = 3
| ll= \therefore
| l = Q
| o =
| c = Modus Ponendo Ponens
}}
{{end-eqn}}
That is:
:''{{AuthorRef|Socrates}} is mortal.''
{{qed}}
\end{proof}
\section{Solution by Integrating Factor/Examples/y' + y = x^-1}
Tags: Examples of Solution by Integrating Factor
\begin{theorem}
Consider the linear first order ODE:
:$(1): \quad \dfrac {\d y} {\d x} + y = \dfrac 1 x$
with the initial condition $\tuple {1, 0}$.
This has the particular solution:
:$y = \ds e^{-x} \int_1^x \dfrac {e^\xi \rd \xi} \xi$
\end{theorem}
\begin{proof}
This is a linear first order ODE with constant coefficients in the form:
:$\dfrac {\d y} {\d x} + a y = \map Q x$
where:
:$a = 1$
:$\map Q x = \dfrac 1 x$
with the initial condition $y = 0$ when $x = 1$.
Thus from Solution to Linear First Order ODE with Constant Coefficients with Initial Condition:
{{begin-eqn}}
{{eqn | l = y
| r = e^{-x} \int_1^x \dfrac {e^\xi \rd \xi} \xi + 0 \cdot e^{-\paren {x - 1} }
| c =
}}
{{eqn | r = e^{-x} \int_1^x \dfrac {e^\xi \rd \xi} \xi
| c =
}}
{{end-eqn}}
From Primitive of $\dfrac {e^x} x$ has no Solution in Elementary Functions, this expression cannot be simplified further in terms of elementary functions.
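The integral can nevertheless be evaluated numerically; a sketch which also checks the ODE $y' + y = \dfrac 1 x$ by a central difference (the step counts and test point are arbitrary choices):

```python
import math

def y(x, steps=20000):
    """Trapezoidal approximation of e^{-x} * integral_1^x e^t/t dt."""
    h = (x - 1) / steps
    total = 0.0
    for i in range(steps):
        t0, t1 = 1 + i * h, 1 + (i + 1) * h
        total += (math.exp(t0) / t0 + math.exp(t1) / t1) * h / 2
    return math.exp(-x) * total

# the initial condition y(1) = 0 holds by construction;
# check the ODE y' + y = 1/x at x = 2 with a central difference
x, eps = 2.0, 1e-4
lhs = (y(x + eps) - y(x - eps)) / (2 * eps) + y(x)
```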
{{qed}}
\end{proof}
\section{Solution by Integrating Factor/Examples/y' - 3y = sin x}
Tags: Examples of Solution by Integrating Factor, Solution by Integrating Factor: Examples: y' - 3y
\begin{theorem}
The linear first order ODE:
:$\dfrac {\d y} {\d x} - 3 y = \sin x$
has the general solution:
:$y = -\dfrac 1 {10} \paren {3 \sin x + \cos x} + C e^{3 x}$
\end{theorem}
\begin{proof}
This is a linear first order ODE in the form:
:$\dfrac {\d y} {\d x} + \map P x y = \map Q x$
where:
:$\map P x = -3$
:$\map Q x = \sin x$
Thus:
{{begin-eqn}}
{{eqn | l = \int \map P x \rd x
| r = \int -3 \rd x
| c =
}}
{{eqn | r = -3 x
| c =
}}
{{eqn | ll= \leadsto
| l = e^{\int P \rd x}
| r = e^{-3 x}
| c =
}}
{{end-eqn}}
Thus from Solution by Integrating Factor:
{{begin-eqn}}
{{eqn | l = \dfrac {\d} {\d x} \paren {e^{-3 x} y}
| r = e^{-3 x} \sin x
| c =
}}
{{eqn | ll= \leadsto
| l = e^{-3 x} y
| r = \int e^{-3 x} \sin x \rd x + C
| c =
}}
{{eqn | r = \frac {e^{-3 x} \paren {-3 \sin x - \cos x} } {\paren {-3}^2 + 1^2} + C
| c = Primitive of $e^{a x} \sin b x$ with $a = -3$ and $b = 1$
}}
{{eqn | ll= \leadsto
| l = y
| r = -\dfrac 1 {10} \paren {3 \sin x + \cos x} + C e^{3 x}
| c =
}}
{{end-eqn}}
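The general solution can be sanity-checked by direct numerical substitution into the ODE (the choice $C = 2$ and the test point are arbitrary):

```python
import math

def y(x, C=2.0):
    # candidate general solution of y' - 3y = sin x
    return -(3 * math.sin(x) + math.cos(x)) / 10 + C * math.exp(3 * x)

# residual of the ODE at a sample point, via a central difference
x, h = 0.4, 1e-6
dydx = (y(x + h) - y(x - h)) / (2 * h)
residual = dydx - 3 * y(x) - math.sin(x)
```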
{{qed}}
\end{proof}
\section{Solution of Constant Coefficient Homogeneous LSOODE/Complex Roots of Auxiliary Equation}
Tags: Linear Second Order ODEs, Solution of Constant Coefficient Homogeneous LSOODE
\begin{theorem}
{{:Solution of Constant Coefficient Homogeneous LSOODE}}
Let $p^2 < 4 q$.
Then $(1)$ has the general solution:
:$y = e^{a x} \paren {C_1 \cos b x + C_2 \sin b x}$
where:
:$m_1 = a + i b$
:$m_2 = a - i b$
\end{theorem}
\begin{proof}
Consider the auxiliary equation of $(1)$:
:$(2): \quad m^2 + p m + q = 0$
Let $p^2 < 4 q$.
From Solution to Quadratic Equation with Real Coefficients, $(2)$ has two complex roots:
{{begin-eqn}}
{{eqn | l = m_1
| r = -\frac p 2 + i \sqrt {q - \frac {p^2} 4}
}}
{{eqn | l = m_2
| r = -\frac p 2 - i \sqrt {q - \frac {p^2} 4}
}}
{{end-eqn}}
As $p^2 < 4 q$ we have that:
:$\sqrt {q - \dfrac {p^2} 4} \ne 0$
and so:
:$m_1 \ne m_2$
Let:
{{begin-eqn}}
{{eqn | l = m_1
| r = a + i b
}}
{{eqn | l = m_2
| r = a - i b
}}
{{end-eqn}}
where $a = -\dfrac p 2$ and $b = \sqrt {q - \dfrac {p^2} 4}$.
From Exponential Function is Solution of Constant Coefficient Homogeneous LSOODE iff Index is Root of Auxiliary Equation:
{{begin-eqn}}
{{eqn | l = y_a
| r = e^{m_1 x}
}}
{{eqn | l = y_b
| r = e^{m_2 x}
}}
{{end-eqn}}
are both particular solutions to $(1)$.
We can manipulate $y_a$ and $y_b$ into the following forms:
{{begin-eqn}}
{{eqn | l = y_a
| r = e^{m_1 x}
| c =
}}
{{eqn | r = e^{\paren {a + i b} x}
| c =
}}
{{eqn | r = e^{a x} e^{i b x}
| c =
}}
{{eqn | n = 3
| r = e^{a x} \paren {\cos b x + i \sin b x}
| c = Euler's Formula
}}
{{end-eqn}}
and:
{{begin-eqn}}
{{eqn | l = y_b
| r = e^{m_2 x}
| c =
}}
{{eqn | r = e^{\paren {a - i b} x}
| c =
}}
{{eqn | r = e^{a x} e^{-i b x}
| c =
}}
{{eqn | n = 4
| r = e^{a x} \paren {\cos b x - i \sin b x}
| c = Euler's Formula: Corollary
}}
{{end-eqn}}
Hence:
{{begin-eqn}}
{{eqn | l = y_a + y_b
| r = e^{a x} \paren {\cos b x + i \sin b x} + e^{a x} \paren {\cos b x - i \sin b x}
| c = adding $(3)$ and $(4)$
}}
{{eqn | r = 2 e^{a x} \cos b x
| c =
}}
{{eqn | ll= \leadsto
| l = \frac {y_a + y_b} 2
| r = e^{a x} \cos b x
| c =
}}
{{end-eqn}}
{{begin-eqn}}
{{eqn | l = y_a - y_b
| r = e^{a x} \paren {\cos b x + i \sin b x} - e^{a x} \paren {\cos b x - i \sin b x}
| c = subtracting $(4)$ from $(3)$
}}
{{eqn | r = 2 i e^{a x} \sin b x
| c =
}}
{{eqn | ll= \leadsto
| l = \frac {y_a - y_b} {2 i}
| r = e^{a x} \sin b x
| c =
}}
{{end-eqn}}
Let:
{{begin-eqn}}
{{eqn | l = y_1
| r = \frac {y_a + y_b} 2
}}
{{eqn | r = e^{a x} \cos b x
| c =
}}
{{eqn | l = y_2
| r = \frac {y_a - y_b} {2 i}
}}
{{eqn | r = e^{a x} \sin b x
| c =
}}
{{end-eqn}}
We have that:
{{begin-eqn}}
{{eqn | l = \frac {y_1} {y_2}
| r = \frac {e^{a x} \cos b x} {e^{a x} \sin b x}
}}
{{eqn | r = \cot b x
}}
{{end-eqn}}
As $\cot b x$ is not a constant function, $y_1$ and $y_2$ are linearly independent.
From Linear Combination of Solutions to Homogeneous Linear 2nd Order ODE:
:$y_1 = \dfrac {y_a + y_b} 2$
:$y_2 = \dfrac {y_a - y_b} {2 i}$
are both particular solutions to $(1)$.
It follows from Two Linearly Independent Solutions of Homogeneous Linear Second Order ODE generate General Solution that:
:$y = C_1 e^{a x} \cos b x + C_2 e^{a x} \sin b x$
or:
:$y = e^{a x} \paren {C_1 \cos b x + C_2 \sin b x}$
is the general solution to $(1)$.
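A numerical spot check, using the sample ODE $y'' + 2 y' + 5 y = 0$ (our choice; here $p = 2$, $q = 5$, so $a = -1$, $b = 2$):

```python
import math

# general solution for p = 2, q = 5: y = e^{-x} (C1 cos 2x + C2 sin 2x)
def y(x, C1=1.0, C2=-0.5):
    return math.exp(-x) * (C1 * math.cos(2 * x) + C2 * math.sin(2 * x))

# residual of y'' + 2 y' + 5 y at a sample point, via finite differences
x, h = 0.7, 1e-5
y0 = y(x)
d1 = (y(x + h) - y(x - h)) / (2 * h)
d2 = (y(x + h) - 2 * y0 + y(x - h)) / (h * h)
residual = d2 + 2 * d1 + 5 * y0
```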
{{qed}}
\end{proof}
\section{Solution of Constant Coefficient Homogeneous LSOODE/Equal Real Roots of Auxiliary Equation}
Tags: Linear Second Order ODEs, Solution of Constant Coefficient Homogeneous LSOODE
\begin{theorem}
{{:Solution of Constant Coefficient Homogeneous LSOODE}}
Let $p^2 = 4 q$.
Then $(1)$ has the general solution:
:$y = C_1 e^{m_1 x} + C_2 x e^{m_1 x}$
\end{theorem}
\begin{proof}
Consider the auxiliary equation of $(1)$:
:$(2): \quad m^2 + p m + q = 0$
Let $p^2 = 4 q$.
From Solution to Quadratic Equation with Real Coefficients, $(2)$ has one (repeated) root, that is:
:$m_1 = m_2 = -\dfrac p 2$
From Exponential Function is Solution of Constant Coefficient Homogeneous LSOODE iff Index is Root of Auxiliary Equation:
:$y_1 = e^{m_1 x}$
is a particular solution to $(1)$.
From Particular Solution to Homogeneous Linear Second Order ODE gives rise to Another:
:$\map {y_2} x = \map v x \, \map {y_1} x$
where:
:$\ds v = \int \dfrac 1 { {y_1}^2} e^{-\int P \rd x} \rd x$
is also a particular solution of $(1)$.
We have that:
{{begin-eqn}}
{{eqn | l = \int P \rd x
| r = \int p \rd x
| c =
}}
{{eqn | r = p x
| c =
}}
{{eqn | ll= \leadsto
| l = e^{-\int P \rd x}
| r = e^{-p x}
| c =
}}
{{eqn | r = e^{2 m_1 x}
| c =
}}
{{end-eqn}}
Hence:
{{begin-eqn}}
{{eqn | l = v
| r = \int \dfrac 1 { {y_1}^2} e^{-\int P \rd x} \rd x
| c = Definition of $v$
}}
{{eqn | r = \int \dfrac 1 {e^{2 m_1 x} } e^{2 m_1 x} \rd x
| c = as $y_1 = e^{m_1 x}$
}}
{{eqn | r = \int \rd x
| c =
}}
{{eqn | r = x
| c =
}}
{{end-eqn}}
and so:
{{begin-eqn}}
{{eqn | l = y_2
| r = v y_1
| c = Definition of $y_2$
}}
{{eqn | r = x e^{m_1 x}
| c =
}}
{{end-eqn}}
From Two Linearly Independent Solutions of Homogeneous Linear Second Order ODE generate General Solution:
:$y = C_1 e^{m_1 x} + C_2 x e^{m_1 x}$
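A numerical spot check, using the sample ODE $y'' + 4 y' + 4 y = 0$ (our choice; here $p = 4$, $q = 4$, with repeated root $m_1 = -2$):

```python
import math

# general solution for the repeated root m1 = -2: y = (C1 + C2 x) e^{-2x}
def y(x, C1=1.0, C2=3.0):
    return (C1 + C2 * x) * math.exp(-2 * x)

# residual of y'' + 4 y' + 4 y at a sample point, via finite differences
x, h = 0.5, 1e-5
y0 = y(x)
d1 = (y(x + h) - y(x - h)) / (2 * h)
d2 = (y(x + h) - 2 * y0 + y(x - h)) / (h * h)
residual = d2 + 4 * d1 + 4 * y0
```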
{{qed}}
\end{proof}
\section{Solution of Constant Coefficient Homogeneous LSOODE/Real Roots of Auxiliary Equation}
Tags: Linear Second Order ODEs, Solution of Constant Coefficient Homogeneous LSOODE
\begin{theorem}
{{:Solution of Constant Coefficient Homogeneous LSOODE}}
Let $p^2 > 4 q$.
Then $(1)$ has the general solution:
:$y = C_1 e^{m_1 x} + C_2 e^{m_2 x}$
\end{theorem}
\begin{proof}
Consider the auxiliary equation of $(1)$:
:$(2): \quad m^2 + p m + q = 0$
Let $p^2 > 4 q$.
From Solution to Quadratic Equation with Real Coefficients, $(2)$ has two real roots:
{{begin-eqn}}
{{eqn | l = m_1
| r = -\frac p 2 + \sqrt {\frac {p^2} 4 - q}
}}
{{eqn | l = m_2
| r = -\frac p 2 - \sqrt {\frac {p^2} 4 - q}
}}
{{end-eqn}}
As $p^2 > 4 q$ we have that:
:$\sqrt {\dfrac {p^2} 4 - q} \ne 0$
and so:
:$m_1 \ne m_2$
From Exponential Function is Solution of Constant Coefficient Homogeneous LSOODE iff Index is Root of Auxiliary Equation:
{{begin-eqn}}
{{eqn | l = y_1
| r = e^{m_1 x}
}}
{{eqn | l = y_2
| r = e^{m_2 x}
}}
{{end-eqn}}
are both particular solutions to $(1)$.
We also have that:
{{begin-eqn}}
{{eqn | l = \frac {y_1} {y_2}
| r = \frac {e^{m_1 x} } {e^{m_2 x} }
}}
{{eqn | r = e^{\paren {m_1 - m_2} x}
}}
{{eqn | o = \ne
| r = 0
| c = as $m_1 \ne m_2$
}}
{{end-eqn}}
Thus $y_1$ and $y_2$ are linearly independent.
It follows from Two Linearly Independent Solutions of Homogeneous Linear Second Order ODE generate General Solution that:
:$y = C_1 e^{m_1 x} + C_2 e^{m_2 x}$
is the general solution to $(1)$.
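A numerical spot check, using the sample ODE $y'' + 3 y' + 2 y = 0$ (our choice; here $p = 3$, $q = 2$, with roots $m_1 = -1$, $m_2 = -2$):

```python
import math

# general solution for roots -1 and -2: y = C1 e^{-x} + C2 e^{-2x}
def y(x, C1=2.0, C2=-1.0):
    return C1 * math.exp(-x) + C2 * math.exp(-2 * x)

# residual of y'' + 3 y' + 2 y at a sample point, via finite differences
x, h = 0.3, 1e-5
y0 = y(x)
d1 = (y(x + h) - y(x - h)) / (2 * h)
d2 = (y(x + h) - 2 * y0 + y(x - h)) / (h * h)
residual = d2 + 3 * d1 + 2 * y0
```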
{{qed}}
\end{proof}
\section{Solution of Linear 2nd Order ODE Tangent to X-Axis}
Tags: Linear Second Order ODEs
\begin{theorem}
Let $\map {y_p} x$ be a particular solution to the homogeneous linear second order ODE:
:$(1): \quad \dfrac {\d^2 y} {\d x^2} + \map P x \dfrac {\d y} {\d x} + \map Q x y = 0$
on a closed interval $\closedint a b$.
Let there exist $\xi \in \closedint a b$ such that the curve in the cartesian plane described by $y = \map {y_p} x$ is tangent to the $x$-axis at $\xi$.
Then $\map {y_p} x$ is the zero constant function:
:$\forall x \in \closedint a b: \map {y_p} x = 0$
\end{theorem}
\begin{proof}
{{AimForCont}} $y_p$ is not the zero constant function.
From Particular Solution to Homogeneous Linear Second Order ODE gives rise to Another, there exists another particular solution $y_2$ to $(1)$ such that $y_p$ and $y_2$ are linearly independent.
At the point $\xi$:
:$\map {y_p} \xi = 0$
:$\map { {y_p}'} \xi = 0$
Taking the Wronskian of $y_p$ and $y_2$:
:$\map W {y_p, y_2} = y_p {y_2}' - {y_p}' y_2$
But at $\xi$ this works out as zero.
It follows from Zero Wronskian of Solutions of Homogeneous Linear Second Order ODE iff Linearly Dependent that $y_p$ and $y_2$ cannot be linearly independent after all.
From this contradiction, $y_p$ must be the zero constant function.
{{qed}}
\end{proof}
\section{Solution of Linear Congruence}
Tags: Modulo Arithmetic, Linear Diophantine Equations, Solution of Linear Congruence
\begin{theorem}
Let $a x \equiv b \pmod n$ be a linear congruence.
The following results hold:
\end{theorem}
\begin{proof}
Consider the linear congruence $a x \equiv b \pmod n$.
Suppose $\exists x_0 \in \Z: a x_0 \equiv b \pmod n$.
Then $\exists y_0 \in \Z: a x_0 - b = n y_0$ by definition of congruence.
Thus $x = x_0, y = y_0$ is a solution to the linear Diophantine equation $a x - n y = b$.
On the other hand, if $x = x_0, y = y_0$ is a solution to the linear Diophantine equation $a x - n y = b$, then it follows that $a x \equiv b \pmod n$.
Hence the problem of finding all integers satisfying the linear congruence $a x \equiv b \pmod n$ is the same problem as finding all the $x$ values in the linear Diophantine equation $a x - n y = b$.
Hence the following:
* It has solutions iff $\gcd \left\{{a, n}\right\} \divides b$:
This follows directly from Solution of Linear Diophantine Equation: the linear Diophantine equation $ax - ny = b$ has solutions iff $\gcd \left\{{a, n}\right\} \divides b$.
* If $\gcd \left\{{a, n}\right\} = 1$, the congruence has a unique solution:
Suppose then that $\gcd \left\{{a, n}\right\} = 1$.
From Solution of Linear Diophantine Equation, if $x = x_0, y = y_0$ is one solution to the linear Diophantine equation $ax - ny = b$, the general solution is:
:$\forall k \in \Z: x = x_0 + n k, y = y_0 + a k$
But $\forall k \in \Z: x_0 + n k \equiv x_0 \pmod n$.
Hence $x \equiv x_0 \pmod n$ is the only solution of $a x \equiv b \pmod n$.
* If $\gcd \left\{{a, n}\right\} = d$, the congruence has $d$ solutions which are given by the unique solution modulo $\dfrac n d$ of the congruence $\dfrac a d x \equiv \dfrac b d \pmod {\dfrac n d}$:
But $\gcd \left\{{\dfrac a d, \dfrac n d}\right\} = 1$ from Divide by GCD for Coprime Integers.
So the RHS has a unique solution modulo $\dfrac n d$, say:
:$x \equiv x_1 \pmod {\dfrac n d}$.
So the integers $x$ which satisfy $a x \equiv b \pmod n$ are exactly those of the form $x = x_1 + k \dfrac n d$ for some $k \in \Z$.
Consider the set of integers $\left\{{x_1, x_1 + \dfrac n d, x_1 + 2 \dfrac n d, \ldots, x_1 + \left({d-1}\right)\dfrac n d}\right\}$.
No two of these are congruent modulo $n$, as no two of them differ by as much as $n$.
Further, for any $k \in \Z$, we have that $x_1 + k \dfrac n d$ is congruent modulo $n$ to one of them.
To see this, write $k = d q + r$ where $0 \le r < d$ from the Division Theorem.
Then:
{{begin-eqn}}
{{eqn | l=x_1 + k \frac n d
| r=x_1 + \left({d q + r}\right) \frac n d
| c=
}}
{{eqn | r=x_1 + n q + r \frac n d
| c=
}}
{{eqn | o=\equiv
| r=x_1 + r \frac n d
| rr=\pmod n
| c=
}}
{{end-eqn}}
So these are the $d$ solutions of $a x \equiv b \pmod n$.
{{qed}}
Category:Modulo Arithmetic
\end{proof}
|
21594
|
\section{Solution of Linear Congruence/Existence}
Tags: Modulo Arithmetic, Solution of Linear Congruence
\begin{theorem}
Let $a x \equiv b \pmod n$ be a linear congruence.
$a x \equiv b \pmod n$ has at least one solution {{iff}}:
: $\gcd \set {a, n} \divides b$
that is, {{iff}} $\gcd \set {a, n}$ is a divisor of $b$.
\end{theorem}
\begin{proof}
Consider the linear congruence $a x \equiv b \pmod n$.
Suppose $\exists x_0 \in \Z: a x_0 \equiv b \pmod n$.
Then $\exists y_0 \in \Z: a x_0 - b = n y_0$ by definition of congruence.
Thus $x = x_0, y = y_0$ is a solution to the linear Diophantine equation $a x - n y = b$.
On the other hand, if $x = x_0, y = y_0$ is a solution to the linear Diophantine equation $a x - n y = b$, then it follows that $a x \equiv b \pmod n$.
Hence:
: the problem of finding all integers satisfying the linear congruence $a x \equiv b \pmod n$
is the same problem as:
: the problem of finding all the $x$ values in the linear Diophantine equation $a x - n y = b$.
From Solution of Linear Diophantine Equation:
The linear Diophantine equation $a x - n y = b$ has at least one solution {{iff}}:
:$\gcd \set {a, n} \divides b$
Hence the result.
{{qed}}
Category:Solution of Linear Congruence
\end{proof}
|
21595
|
\section{Solution of Linear Congruence/Number of Solutions}
Tags: Modulo Arithmetic, Solution of Linear Congruence
\begin{theorem}
Let $a x \equiv b \pmod n$ be a linear congruence.
Let $\gcd \set {a, n} = d$.
Then $a x \equiv b \pmod n$ has $d$ solutions which are given by the unique solution modulo $\dfrac n d$ of the congruence:
: $\dfrac a d x \equiv \dfrac b d \paren {\bmod \dfrac n d}$
\end{theorem}
\begin{proof}
From Solution of Linear Congruence: Existence:
:the problem of finding all integers satisfying the linear congruence $a x \equiv b \pmod n$
is the same problem as:
:the problem of finding all the $x$ values in the linear Diophantine equation $a x - n y = b$.
From Integers Divided by GCD are Coprime:
:$\gcd \set {\dfrac a d, \dfrac n d} = 1$
So the {{RHS}} has a unique solution modulo $\dfrac n d$, say:
:$x \equiv x_1 \paren {\bmod \dfrac n d}$
So the integers $x$ which satisfy $a x \equiv b \pmod n$ are exactly those of the form $x = x_1 + k \dfrac n d$ for some $k \in \Z$.
Consider the set of integers:
: $\set {x_1, x_1 + \dfrac n d, x_1 + 2 \dfrac n d, \ldots, x_1 + \paren {d - 1} \dfrac n d}$
No two of these are congruent modulo $n$, as no two of them differ by as much as $n$.
Further, for any $k \in \Z$, we have that $x_1 + k \dfrac n d$ is congruent modulo $n$ to one of them.
To see this, write $k = d q + r$ where $0 \le r < d$ from the Division Theorem.
Then:
{{begin-eqn}}
{{eqn | l = x_1 + k \frac n d
| r = x_1 + \paren {d q + r} \frac n d
| c =
}}
{{eqn | r = x_1 + n q + r \frac n d
| c =
}}
{{eqn | o = \equiv
| r = x_1 + r \frac n d
| rr= \pmod n
| c =
}}
{{end-eqn}}
So these are the $d$ solutions of $a x \equiv b \pmod n$.
{{qed}}
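The construction above can be checked computationally. The following Python sketch (an illustration with our own naming, not part of the source proof; `pow(a, -1, n)` is the standard-library modular inverse, available from Python 3.8) reduces the congruence by $d = \gcd \set {a, n}$, solves the reduced congruence, and lists the $d$ solutions modulo $n$:

```python
from math import gcd

def solve_linear_congruence(a, b, n):
    """All solutions of a x = b (mod n) in {0, ..., n - 1}:
    none unless d = gcd(a, n) divides b, else exactly d of them,
    spaced n/d apart."""
    d = gcd(a, n)
    if b % d != 0:
        return []
    a1, b1, n1 = a // d, b // d, n // d     # gcd(a1, n1) = 1
    x1 = (b1 * pow(a1, -1, n1)) % n1        # unique solution mod n/d
    return [x1 + k * n1 for k in range(d)]  # the d solutions mod n
```

For example, $6 x \equiv 4 \pmod {10}$ has $\gcd \set {6, 10} = 2$ solutions, namely $4$ and $9$.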
Category:Solution of Linear Congruence
\end{proof}
|
21596
|
\section{Solution of Linear Diophantine Equation}
Tags: Linear Diophantine Equations, Diophantine Equations, Greatest Common Divisor
\begin{theorem}
The linear Diophantine equation:
:$a x + b y = c$
has solutions {{iff}}:
:$\gcd \set {a, b} \divides c$
where $\divides$ denotes divisibility.
If this condition holds with $\gcd \set {a, b} > 1$ then division by $\gcd \set {a, b}$ reduces the equation to:
:$a' x + b' y = c'$
where $\gcd \set {a', b'} = 1$.
If $x_0, y_0$ is one solution of the latter equation, then the general solution is:
:$\forall k \in \Z: x = x_0 + b' k, y = y_0 - a' k$
or:
:$\forall k \in \Z: x = x_0 + \dfrac b d k, y = y_0 - \dfrac a d k$
where $d = \gcd \set {a, b}$.
\end{theorem}
\begin{proof}
We assume that both $a$ and $b$ are non-zero, otherwise the solution is trivial.
The first part of the problem is a direct restatement of Set of Integer Combinations equals Set of Multiples of GCD:
The set of all integer combinations of $a$ and $b$ is precisely the set of integer multiples of the GCD of $a$ and $b$:
:$\gcd \set {a, b} \divides c \iff \exists x, y \in \Z: c = x a + y b$
Now, suppose that $x', y'$ is any solution of the equation.
Then we have:
:$a' x_0 + b' y_0 = c'$ and $a' x' + b' y' = c'$
Substituting for $c'$ and rearranging:
:$a' \paren {x' - x_0} = b' \paren {y_0 - y'}$
So:
:$a' \divides b' \paren {y_0 - y'}$
Since $\gcd \set {a', b'} = 1$, from Euclid's Lemma we have:
:$a' \divides \paren {y_0 - y'}$.
So $y_0 - y' = a' k$ for some $k \in \Z$.
Substituting into the above gives $x' - x_0 = b' k$ and so:
:$x' = x_0 + b' k, y' = y_0 - a'k$ for some $k \in \Z$
which is what we claimed.
Substitution again gives that the integers:
:$x_0 + b' k, y_0 - a' k$
constitute a solution of $a' x + b' y = c'$ for any $k \in \Z$.
{{qed}}
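As a computational illustration (not part of the proof; the helper names are ours), the extended Euclidean algorithm produces one particular solution, from which the general solution follows exactly as derived above:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = extended_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def diophantine_solutions(a, b, c, k_range):
    """Solutions of a x + b y = c: scale a Bezout pair to a particular
    solution (x0, y0), then x = x0 + (b/d) k, y = y0 - (a/d) k."""
    g, u, v = extended_gcd(a, b)
    if c % g != 0:
        return []                   # gcd(a, b) does not divide c
    x0, y0 = u * (c // g), v * (c // g)
    return [(x0 + (b // g) * k, y0 - (a // g) * k) for k in k_range]
```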
\end{proof}
|
21597
|
\section{Solution of Ljunggren Equation}
Tags: 239, Diophantine Equations, 13
\begin{theorem}
The only solutions of the Ljunggren equation:
:$x^2 + 1 = 2 y^4$
are:
:$x = 1, y = 1$
:$x = 239, y = 13$
{{OEIS|A229384}}
\end{theorem}
\begin{proof}
Setting $x = 1$:
{{begin-eqn}}
{{eqn | r = 1^2 + 1
| o =
| c =
}}
{{eqn | r = 2
| c =
}}
{{eqn | r = 2 \times 1^4
| c =
}}
{{end-eqn}}
and so $y = 1$.
Setting $x = 239$:
{{begin-eqn}}
{{eqn | r = 239^2 + 1
| o =
| c =
}}
{{eqn | r = 57122
| c =
}}
{{eqn | r = 2 \times 13^4
| c =
}}
{{end-eqn}}
and so $y = 13$.
{{ProofWanted|It remains to be shown this is the only solution. Perhaps trial and error for $y$ going from $0$ up to $13$ and then using Largest Prime Factor of $n^2 + 1$?}}
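The trial-and-error search suggested above is easy to mechanize. This Python sketch (our own illustration) checks, for each $y$ up to a bound, whether $2 y^4 - 1$ is a perfect square:

```python
from math import isqrt

def ljunggren_solutions(y_max):
    """Solutions (x, y) of x^2 + 1 = 2 y^4 with 1 <= y <= y_max,
    found by testing whether 2 y^4 - 1 is a perfect square."""
    found = []
    for y in range(1, y_max + 1):
        x_sq = 2 * y ** 4 - 1
        x = isqrt(x_sq)
        if x * x == x_sq:
            found.append((x, y))
    return found
```

Within any reasonable bound only the two solutions stated in the theorem appear.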
\end{proof}
|
21598
|
\section{Solution of Pell's Equation is a Convergent}
Tags: Continued Fractions, Pell's Equation, Diophantine Equations
\begin{theorem}
Let $x = a, y = b$ be a positive solution to Pell's Equation $x^2 - n y^2 = 1$.
Then $\dfrac a b$ is a convergent of $\sqrt n$.
\end{theorem}
\begin{proof}
Let $a^2 - n b^2 = 1$.
Then we have:
:$\paren {a - b \sqrt n} \paren {a + b \sqrt n} = 1$.
So:
:$a - b \sqrt n = \dfrac 1 {a + b \sqrt n} > 0$
and so $a > b \sqrt n$.
Therefore:
{{begin-eqn}}
{{eqn | l = \size {\sqrt n - \frac a b}
| r = \frac {a - b \sqrt n} b
| c =
}}
{{eqn | r = \frac 1 {b \paren {a + b \sqrt n} }
| c =
}}
{{eqn | o = <
| r = \frac 1 {b \paren {b \sqrt n + b \sqrt n} }
| c =
}}
{{eqn | r = \frac 1 {2 b^2 \sqrt n}
| c =
}}
{{eqn | o = <
| r = \frac 1 {2 b^2}
| c =
}}
{{end-eqn}}
The result follows from Condition for Rational to be a Convergent.
{{qed}}
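The theorem can be illustrated computationally: since every solution appears among the convergents of $\sqrt n$, scanning convergents finds the fundamental solution. The following Python sketch (our own; assumes $n$ is not a perfect square) uses the standard $\tuple {m, d, a}$ recurrence for the continued fraction of $\sqrt n$:

```python
from math import isqrt

def sqrt_cf_terms(n, count):
    """First `count` continued fraction terms of sqrt(n),
    n not a perfect square, via the standard (m, d, a) recurrence."""
    a0 = isqrt(n)
    m, d, a = 0, 1, a0
    terms = [a0]
    while len(terms) < count:
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        terms.append(a)
    return terms

def pell_fundamental_solution(n, max_terms=60):
    """Scan the convergents p/q of sqrt(n) for the first one with
    p^2 - n q^2 = 1; by the theorem every solution is a convergent."""
    terms = sqrt_cf_terms(n, max_terms)
    p_prev, p = 1, terms[0]
    q_prev, q = 0, 1
    for a in terms[1:]:
        p_prev, p = p, a * p + p_prev
        q_prev, q = q, a * q + q_prev
        if p * p - n * q * q == 1:
            return p, q
    return None
```

For instance $8^2 - 7 \times 3^2 = 1$, and $\dfrac 8 3$ is indeed a convergent of $\sqrt 7$.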
Category:Pell's Equation
Category:Continued Fractions
\end{proof}
|
21599
|
\section{Solution of Second Order Differential Equation with Missing Dependent Variable}
Tags: Second Order ODEs
\begin{theorem}
Let $\map f {x, y', y''} = 0$ be a second order ordinary differential equation in which the dependent variable $y$ is not explicitly present.
Then $f$ can be reduced to a first order ordinary differential equation, whose solution can be determined.
\end{theorem}
\begin{proof}
Consider the second order ordinary differential equation:
:$(1): \quad \map f {x, y', y''} = 0$
Let a new dependent variable $p$ be introduced:
:$y' = p$
:$y'' = \dfrac {\d p} {\d x}$
Then $(1)$ can be transformed into:
:$(2): \quad \map f {x, p, \dfrac {\d p} {\d x} } = 0$
which is a first order ODE.
If $(2)$ has a solution which can readily be found, it will be expressible in the form:
:$(3): \quad \map g {x, p} = 0$
which can then be expressed in the form:
:$\map g {x, \dfrac {\d y} {\d x} } = 0$
which is likewise subject to the techniques of solution of a first order ODE.
Hence such a second order ODE is reduced to the problem of solving two first order ODEs in succession.
{{qed}}
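As a numerical illustration of the reduction (our own example, not part of the proof), take $y'' = x$ with $\map y 0 = \map {y'} 0 = 0$: the substitution gives the first order equation $p' = x$, and then $y' = p$, whose exact solution is $y = \dfrac {x^3} 6$:

```python
def solve_missing_y(x_max=1.0, steps=1000):
    """y'' = x with y(0) = y'(0) = 0, reduced as above: first solve
    p' = x, then y' = p, each by trapezoidal integration.
    Exact solution y = x^3/6, so the return value approximates 1/6."""
    h = x_max / steps
    xs = [i * h for i in range(steps + 1)]
    p = [0.0]                       # p' = x, p(0) = 0
    for i in range(steps):
        p.append(p[-1] + 0.5 * h * (xs[i] + xs[i + 1]))
    y = [0.0]                       # y' = p, y(0) = 0
    for i in range(steps):
        y.append(y[-1] + 0.5 * h * (p[i] + p[i + 1]))
    return y[-1]
```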
\end{proof}
|
21600
|
\section{Solution of Second Order Differential Equation with Missing Independent Variable}
Tags: Second Order ODEs
\begin{theorem}
Let $\map g {y, \dfrac {\d y} {\d x}, \dfrac {\d^2 y} {\d x^2} } = 0$ be a second order ordinary differential equation in which the independent variable $x$ is not explicitly present.
Then $g$ can be reduced to a first order ordinary differential equation, whose solution can be determined.
\end{theorem}
\begin{proof}
Consider the second order ordinary differential equation:
:$(1): \quad \map g {y, \dfrac {\d y} {\d x}, \dfrac {\d^2 y} {\d x^2} } = 0$
Let a new dependent variable $p$ be introduced:
:$y' = p$
Hence:
:$y'' = \dfrac {\d p} {\d x} = \dfrac {\d p} {\d y} \dfrac {\d y} {\d x} = p \dfrac {\d p} {\d y}$
Then $(1)$ can be transformed into:
:$(2): \quad \map g {y, p, p \dfrac {\d p} {\d y} } = 0$
which is a first order ODE.
If $(2)$ has a solution which can readily be found, it will be expressible in the form:
:$(3): \quad \map h {y, p} = 0$
which can then be expressed in the form:
:$\map h {y, \dfrac {\d y} {\d x} } = 0$
which is likewise subject to the techniques of solution of a first order ODE.
Hence such a second order ODE is reduced to the problem of solving two first order ODEs in succession.
{{qed}}
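As a numerical illustration of the reduction (our own example, not part of the proof), take $y'' = y$ with $\map y 0 = \map {y'} 0 = 1$: then $p \dfrac {\d p} {\d y} = y$ gives $p^2 = y^2 + C$, the initial data force $C = 0$, so $\dfrac {\d y} {\d x} = y$ and $y = e^x$:

```python
import math

def solve_missing_x(x_max=1.0, steps=20000):
    """y'' = y with y(0) = y'(0) = 1, via p = y', y'' = p dp/dy:
    p dp/dy = y gives p^2 = y^2 + C; the initial data force C = 0,
    so p = y, and dy/dx = y is stepped forward by Euler's method."""
    h = x_max / steps
    y = 1.0
    for _ in range(steps):
        p = y                  # first first-order ODE solved: p(y) = y
        y += h * p             # Euler step for dy/dx = p
    return y
```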
\end{proof}
|
21601
|
\section{Solution to Bernoulli's Equation}
Tags: Examples of First Order ODE, First Order ODEs, Bernoulli's Equation
\begin{theorem}
'''Bernoulli's equation''':
:$(1): \quad \dfrac {\d y} {\d x} + \map P x y = \map Q x y^n$
where:
:$n \ne 0, n \ne 1$
has the general solution:
:$\ds \frac {\map \mu x} {y^{n - 1} } = \paren {1 - n} \int \map Q x \map \mu x \rd x + C$
where:
:$\map \mu x = e^{\paren {1 - n} \int \map P x \rd x}$
\end{theorem}
\begin{proof}
Make the substitution:
:$z = y^{1 - n}$
in $(1)$.
Then we have:
{{begin-eqn}}
{{eqn | l = \frac {\d z} {\d y}
| r = \paren {1 - n} y^{-n}
| c = Power Rule for Derivatives
}}
{{eqn | ll= \leadsto
| l = \frac {\d z} {\d y} \frac {\d y} {\d x} + \map P x y \paren {1 - n} y^{-n}
| r = \map Q x y^n \paren {1 - n} y^{-n}
| c =
}}
{{eqn | ll= \leadsto
| l = \frac {\d z} {\d x} + \paren {1 - n} \map P x y^{1 - n}
| r = \paren {1 - n} \map Q x
| c = Chain Rule for Derivatives
}}
{{eqn | ll= \leadsto
| l = \frac {\d z} {\d x} + \paren {1 - n} \map P x z
| r = \paren {1 - n} \map Q x
| c =
}}
{{end-eqn}}
This is now a linear first order ordinary differential equation in $z$.
It has an integrating factor:
{{begin-eqn}}
{{eqn | l = \map \mu x
| r = e^{\int \paren {1 - n} \map P x \rd x}
}}
{{eqn | r = e^{\paren {1 - n} \int \map P x \rd x}
| c =
}}
{{end-eqn}}
and this can be used to obtain:
:$\ds \map \mu x z = \paren {1 - n} \int \map Q x \map \mu x \rd x + C$
Substituting $z = y^{1 - n} = \dfrac 1 {y^{n - 1} }$ finishes the proof.
{{qed}}
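As a concrete check of the formula (our own example, not part of the proof), take $\map P x = \map Q x = 1$ and $n = 2$, so that $\map \mu x = e^{-x}$ and the general solution reads $\dfrac {e^{-x} } y = e^{-x} + C$, that is $y = \dfrac 1 {1 + C e^x}$. A finite-difference residual confirms this satisfies $y' + y = y^2$:

```python
import math

def bernoulli_solution(x, C=1.0):
    """Closed form for y' + y = y^2 (P = Q = 1, n = 2), from
    mu/y^(n-1) = (1 - n) * integral(Q mu) + C with mu = e^{-x}:
    e^{-x}/y = e^{-x} + C, i.e. y = 1/(1 + C e^x)."""
    return 1.0 / (1.0 + C * math.exp(x))

def bernoulli_residual(x, h=1e-5):
    """Central-difference check that y' + y - y^2 vanishes."""
    dy = (bernoulli_solution(x + h) - bernoulli_solution(x - h)) / (2 * h)
    y = bernoulli_solution(x)
    return dy + y - y * y
```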
\end{proof}
|
21602
|
\section{Solution to Distributional Ordinary Differential Equation with Constant Coefficients}
Tags: Examples of Distributional Solutions, Examples of Hypoelliptic Operators, Distributional Derivatives
\begin{theorem}
Let $D$ be an ordinary differential operator with constant complex coefficients:
:$\ds D = \sum_{k \mathop = 0}^n a_k \paren {\dfrac \d {\d x}}^k$
Let $f \in \map {\CC^\infty} \R$ be a smooth real function.
Let $T \in \map {\DD'} \R$ be a distribution.
Let $T_f$ be a distribution associated with $f$.
Suppose $T$ is a distributional solution to $D T = T_f$.
Then $T = T_F$ where $F \in \map {\CC^\infty} \R$ is a classical solution to $D F = f$.
\end{theorem}
\begin{proof}
Let $\map P \xi$ be a polynomial over complex numbers such that:
:$\ds \map P \xi = \sum_{k \mathop = 0}^n a_k \xi^k = a_n \prod_{k \mathop = 1}^n \paren {\xi - \lambda_k}$
where $a_n \ne 0$.
Then there exists a polynomial $\map Q \xi$ such that:
:$\map P \xi = \paren {\xi - \lambda_n} \map Q \xi$
Recall:
:$\ds D = \sum_{k \mathop = 0}^n a_k \paren {\dfrac \d {\d x}}^k$
Then:
:$D = \map P {\dfrac \d {\d x} }$
Furthermore:
:$D = \paren {\dfrac \d {\d x} - \lambda_n} D_1$
where:
:$D_1 := \map Q {\dfrac \d {\d x} }$
Now we will use the principle of mathematical induction to show that:
:$\paren {DT = T_f, f \in \map {\CC^\infty} \R} \implies \paren {T = T_F, F \in \map {\CC^\infty} \R}$
\end{proof}
|
21603
|
\section{Solution to First Order Initial Value Problem}
Tags: Ordinary Differential Equations, First Order ODEs
\begin{theorem}
Let $\map y x$ be a solution to the first order ordinary differential equation:
:$\dfrac {\d y} {\d x} = \map f {x, y}$
which is subject to an initial condition: $\tuple {a, b}$.
Then this problem is equivalent to the integral equation:
:$\ds y = b + \int_a^x \map f {t, \map y t} \rd t$
\end{theorem}
\begin{proof}
From Solution to First Order ODE, the general solution of:
:$\dfrac {\d y} {\d x} = \map f {x, y}$
is:
:$\ds y = \int \map f {x, \map y x} \rd x + C$
When $x = a$, we have $y = b$.
Thus:
:$\ds b = \valueat {\int \map f {x, \map y x} \rd x + C} a$
{{MissingLinks|to the notation $[..]_a$}}
which gives:
:$\ds C = b - \valueat {\int \map f {x, \map y x} \rd x} a$
and so:
:$\ds y = b + \int \map f {x, \map y x} \rd x - \valueat {\int \map f {x, \map y x} \rd x} a$
whence the result, by the Fundamental Theorem of Calculus.
{{qed}}
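The equivalent integral equation is what drives Picard iteration. As an illustration (our own example, not from the source), take $\map f {x, y} = y$ with initial condition $\tuple {0, 1}$: iterating $y \mapsto 1 + \ds \int_0^x \map y t \rd t$ on polynomials reproduces the partial sums of $e^x$, with exact rational coefficients $\dfrac 1 {k!}$:

```python
from fractions import Fraction

def picard_iterate(n_iters):
    """Picard iteration for y' = y, y(0) = 1, via the integral
    equation y = 1 + integral_0^x y(t) dt.
    Polynomials are coefficient lists [c0, c1, ...]."""
    y = [Fraction(1)]                     # y_0(x) = 1
    for _ in range(n_iters):
        # integrate term by term: c_k x^k -> c_k/(k+1) x^(k+1)
        integral = [Fraction(0)] + [c / (k + 1) for k, c in enumerate(y)]
        integral[0] = Fraction(1)         # add the constant b = y(0) = 1
        y = integral
    return y
```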
Category:First Order ODEs
\end{proof}
|
21604
|
\section{Solution to First Order ODE}
Tags: Ordinary Differential Equations, First Order ODEs
\begin{theorem}
Let:
:$\Phi = \dfrac {\d y} {\d x} = \map f {x, y}$
be a first order ordinary differential equation.
Then $\Phi$ has a general solution which can be expressed in terms of an indefinite integral of $\map f {x, y}$:
:$\ds y = \int \map f {x, y} \rd x + C$
where $C$ is an arbitrary constant.
\end{theorem}
\begin{proof}
Integrating both sides with respect to $x$:
{{begin-eqn}}
{{eqn | l = \int \paren {\frac {\d y} {\d x} } \rd x
| r = \int \map f {x, y} \rd x
| c =
}}
{{eqn | ll= \leadsto
| l = y + C_1
| r = \int \map f {x, y} \rd x
| c = {{Defof|Indefinite Integral}}: $C_1$ is arbitrary
}}
{{eqn | ll= \leadsto
| l = y
| r = \int \map f {x, y} \rd x + C
| c = replacing $-C_1$ with $C$
}}
{{end-eqn}}
The validity of this follows from Picard's Existence Theorem.
{{qed}}
Category:First Order ODEs
\end{proof}
|
21605
|
\section{Solution to Homogeneous Differential Equation}
Tags: First Order ODEs, Ordinary Differential Equations, Homogeneous Differential Equations
\begin{theorem}
Let:
:$\map M {x, y} + \map N {x, y} \dfrac {\d y} {\d x} = 0$
be a homogeneous differential equation.
It can be solved by making the substitution $z = \dfrac y x$.
Its solution is:
:$\ds \ln x = \int \frac {\d z} {\map f {1, z} - z} + C$
where:
:$\map f {x, y} = -\dfrac {\map M {x, y} } {\map N {x, y} }$
\end{theorem}
\begin{proof}
From the original equation, we see:
:$\dfrac {\d y} {\d x} = \map f {x, y} = -\dfrac {\map M {x, y} } {\map N {x, y} }$
From Quotient of Homogeneous Functions it follows that $\map f {x, y}$ is homogeneous of degree zero.
Thus:
:$\map f {t x, t y} = t^0 \map f {x, y} = \map f {x, y}$
Set $t = \dfrac 1 x$ in this equation:
{{begin-eqn}}
{{eqn | l = \map f {x, y}
| r = \map f {\paren {\frac 1 x} x, \paren {\frac 1 x} y}
| c =
}}
{{eqn | r = \map f {1, \frac y x}
| c =
}}
{{eqn | r = \map f {1, z}
| c =
}}
{{end-eqn}}
where $z = \dfrac y x$.
Then:
{{begin-eqn}}
{{eqn | l = z
| r = \frac y x
| c =
}}
{{eqn | ll= \leadsto
| l = y
| r = z x
| c =
}}
{{eqn | ll= \leadsto
| l = \frac {\d y} {\d x}
| r = z + x \frac {\d z} {\d x}
| c = Product Rule for Derivatives
}}
{{eqn | ll= \leadsto
| l = z + x \frac {\d z} {\d x}
| r = \map f {1, z}
| c =
}}
{{eqn | ll= \leadsto
| l = \int \frac {\d z} {\map f {1, z} - z}
| r = \int \frac {\d x} x
| c =
}}
{{end-eqn}}
This is seen to be a differential equation with separable variables.
On performing the required integrations and simplifying as necessary, the final step is to substitute $\dfrac y x$ back for $z$.
{{qed}}
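As a concrete instance of the recipe (our own example, not part of the proof), take $\dfrac {\d y} {\d x} = \dfrac {x + y} x$, so $\map f {1, z} = 1 + z$ and $\ds \int \frac {\d z} {\map f {1, z} - z} = z$; hence $\ln x = z + C$ and $y = x \paren {\ln x - C}$. A finite-difference residual confirms the solution:

```python
import math

def homogeneous_solution(x, C=0.0):
    """For y' = (x + y)/x (homogeneous of degree zero, f(1, z) = 1 + z),
    the recipe gives ln x = z + C with z = y/x, i.e. y = x (ln x - C)."""
    return x * (math.log(x) - C)

def homogeneous_residual(x, C=0.0, h=1e-6):
    """Central-difference check of y' - (x + y)/x."""
    dy = (homogeneous_solution(x + h, C) - homogeneous_solution(x - h, C)) / (2 * h)
    return dy - (x + homogeneous_solution(x, C)) / x
```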
\end{proof}
|
21606
|
\section{Solution to Legendre's Differential Equation}
Tags: Legendre's Differential Equation, Second Order ODEs
\begin{theorem}
The solution of Legendre's differential equation:
{{:Definition:Legendre's Differential Equation}}
can be obtained by Power Series Solution Method.
{{refactor|Include the actual solution here in the Theorem section and then proceed to derive each instance of that solution in the Proof section. If necessary, split it up into bits. At the moment it is too amorphous to be able to be followed easily. My eyes are too sore to do a good job on this tonight so I won't.}}
\end{theorem}
\begin{proof}
Let:
:$\ds y = \sum_{n \mathop = 0}^\infty a_n x^{k - n}$
such that:
:$a_0 \ne 0$
Differentiating {{WRT|Differentiation}} $x$:
:$\ds \dot y = \sum_{n \mathop = 0}^\infty a_n \paren {k - n} x^{k - n - 1}$
:$\ds \ddot y = \sum_{n \mathop = 0}^\infty a_n \paren {k - n} \paren {k - n - 1} x^{k - n - 2}$
Substituting in the original equation:
{{begin-eqn}}
{{eqn | l = \paren {1 - x^2} \ddot y - 2 x \dot y + p \paren {p + 1} y
| r = 0
| c =
}}
{{eqn | ll= \leadsto
| l = \paren {1 - x^2} \sum_{n \mathop = 0}^\infty a_n \paren {k - n} \paren {k - n - 1} x^{k - n - 2} - 2 x \sum_{n \mathop = 0}^\infty a_n \paren {k - n} x^{k - n - 1} + p \paren {p + 1} \sum_{n \mathop = 0}^\infty a_n x^{k - n}
| r = 0
| c =
}}
{{eqn | ll= \leadsto
| l = \sum_{n \mathop = 0}^\infty a_n \paren {k - n} \paren {k - n - 1} x^{k - n - 2} - x^2 \sum_{n \mathop = 0}^\infty a_n \paren {k - n} \paren {k - n - 1} x^{k - n - 2}
| o =
| c =
}}
{{eqn | l = {} - 2 x \sum_{n \mathop = 0}^\infty a_n \paren {k - n} x^{k - n - 1} + p \paren {p + 1} \sum_{n \mathop = 0}^\infty a_n x^{k - n}
| r = 0
| c =
}}
{{end-eqn}}
The summations are dependent upon $n$ and not $x$.
Therefore it is a valid operation to multiply the $x$'s into the summations, thus:
{{begin-eqn}}
{{eqn | ll= \leadsto
| l = \sum_{n \mathop = 0}^\infty a_n \paren {k - n} \paren {k - n - 1} x^{k - n - 2}
| o =
| c =
}}
{{eqn | l = {} - \sum_{n \mathop = 0}^\infty a_n \paren {k - n} \paren {k - n - 1} x^{k - n} - 2 \sum_{n \mathop = 0}^\infty a_n \paren {k - n} x^{k - n} + p \paren {p + 1} \sum_{n \mathop = 0}^\infty a_n x^{k - n}
| r = 0
| c =
}}
{{eqn | ll= \leadsto
| l = \sum_{n \mathop = 0}^\infty a_n \paren {k - n} \paren {k - n - 1} x^{k - n - 2}
| o =
| c =
}}
{{eqn | l = {} + \sum_{n \mathop = 0}^\infty a_n x^{k - n} \paren {p \paren {p + 1} - \paren {k - n} \paren {k - n - 1} - 2 \paren {k - n} }
| r = 0
| c =
}}
{{end-eqn}}
Replacing $n$ with $n - 2$ in the first summation (so that it runs from $n = 2$), and simplifying the coefficient in the second:
{{begin-eqn}}
{{eqn | ll= \leadsto
| l = \sum_{n \mathop = 2}^\infty a_{n - 2} \paren {k - n + 2} \paren {k - n + 1} x^{k - n}
| o =
| c =
}}
{{eqn | l = {} + \sum_{n \mathop = 0}^\infty a_n x^{k - n} \paren {p \paren {p + 1} - \paren {k - n} \paren {k - n + 1} }
| r = 0
| c =
}}
{{end-eqn}}
Taking the first 2 terms of the second summation out:
{{begin-eqn}}
{{eqn | ll= \leadsto
| l = \sum_{n \mathop = 2}^\infty a_{n - 2} \paren {k - n + 2} \paren {k - n + 1} x^{k - n}
| o =
| c =
}}
{{eqn | l = {} + a_0 x^k \paren {p \paren {p + 1} - k \paren {k + 1} } + a_1 x^{k - 1} \paren {p \paren {p + 1} - k \paren {k - 1} }
| o =
| c =
}}
{{eqn | l = {} + \sum_{n \mathop = 2}^\infty a_{n - 2} x^{k - n} + a_n \paren {p \paren {p + 1} - \paren {k - n} \paren {k - n + 1} }
| r = 0
| c =
}}
{{eqn | ll= \leadsto
      | l = a_0 x^k \paren {p \paren {p + 1} - k \paren {k + 1} } + a_1 x^{k - 1} \paren {p \paren {p + 1} - k \paren {k - 1} }
| o =
| c =
}}
{{eqn | l = {} + \sum_{n \mathop = 2}^\infty x^{k - n} \paren {a_{n - 2} \paren {k - n - 2} \paren {k - n + 1} + a_n \paren {p \paren {p + 1} - \paren {k - n} \paren {k - n + 1} } }
| r = 0
| c =
}}
{{end-eqn}}
Equating each term to $0$:
{{begin-eqn}}
{{eqn | n = 1
| l = a_0 x^k \paren {p \paren {p + 1} - k \paren {k + 1} }
| r = 0
| c =
}}
{{eqn | n = 2
| l = a_1 x^{k - 1} \paren {p \paren {p + 1} - k \paren {k - 1} }
| r = 0
| c =
}}
{{eqn | n = 3
      | l = \sum_{n \mathop = 2}^\infty x^{k - n} \paren {a_{n - 2} \paren {k - n + 2} \paren {k - n + 1} + a_n \paren {p \paren {p + 1} - \paren {k - n} \paren {k - n + 1} } }
| r = 0
| c =
}}
{{end-eqn}}
Take equation $(1)$:
:$a_0 x^k \paren {p \paren {p + 1} - k \paren {k + 1} } = 0$
It is assumed that $a_0 \ne 0$ and $x^k$ can never be zero for any value of $k$.
Thus:
{{begin-eqn}}
{{eqn | l = p \paren {p + 1} - k \paren {k + 1}
| r = 0
| c =
}}
{{eqn | ll= \leadsto
| l = p^2 - k^2 + p - k
| r = 0
| c =
}}
{{eqn | ll= \leadsto
| l = \paren {p - k} \paren {p + k} + \paren {p - k}
| r = 0
| c =
}}
{{eqn | ll= \leadsto
| l = \paren {p - k} \paren {p + k + 1}
| r = 0
| c =
}}
{{eqn | ll= \leadsto
| l = p - k
| r = 0
| c =
}}
{{eqn | lo= \lor
| l = p + k + 1
| r = 0
| c =
}}
{{eqn | ll= \leadsto
| l = k
| r = p
| c =
}}
{{eqn | lo= \lor
| l = k
      | r = -p - 1
| c =
}}
{{end-eqn}}
Take equation $(2)$:
:$a_1 x^{k - 1} \paren {p \paren {p + 1} - k \paren {k - 1} } = 0$
As before, it is assumed that $x^{k - 1}$ can never be zero for any value of $k$.
Thus:
{{begin-eqn}}
{{eqn | l = a_1 \paren {p \paren {p + 1} - k \paren {k - 1} }
| r = 0
| c =
}}
{{eqn | l = 2 a_1 k
| r = 0
| c = substituting the value of $k$ from $(1)$
}}
{{eqn | l = a_1
| r = 0
| c = as $2 k \ne 0$
}}
{{end-eqn}}
Take equation $(3)$:
:$\ds \sum_{n \mathop = 2}^\infty x^{k - n} \paren {a_{n - 2} \paren {k - n + 2} \paren {k - n + 1} + a_n \paren {p \paren {p + 1} - \paren {k - n} \paren {k - n + 1} } } = 0$
As before, it is assumed that $x^{k - n}$ can never be zero for any value of $k$.
Thus:
{{begin-eqn}}
{{eqn | l = 0
      | r = a_{n - 2} \paren {k - n + 2} \paren {k - n + 1} + a_n \paren {p \paren {p + 1} - \paren {k - n} \paren {k - n + 1} }
| c =
}}
{{eqn | ll= \leadsto
| l = a_n
| r = -\frac {\paren {k - n + 2} \paren {k - n + 1} } {p \paren {p + 1} - \paren {k - n} \paren {k - n + 1} } a_{n - 2}
| c =
}}
{{end-eqn}}
Since Legendre's differential equation is a second order ODE, it has two independent solutions.
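For a nonnegative integer $p$ with $k = p$, the recurrence terminates and yields a polynomial solution proportional to the Legendre polynomial $P_p$. The following Python sketch (our own illustration, using exact rational arithmetic) builds the coefficients from the recurrence and verifies the ODE term by term:

```python
from fractions import Fraction

def legendre_series_coeffs(p):
    """Coefficients a_0, a_2, ... from the recurrence derived above,
    a_n = -((k-n+2)(k-n+1) / (p(p+1) - (k-n)(k-n+1))) a_{n-2},
    with k = p and a_0 = 1; for integer p >= 0 the series terminates."""
    k = p
    coeffs = {0: Fraction(1)}
    n = 2
    while True:
        num = (k - n + 2) * (k - n + 1)
        den = p * (p + 1) - (k - n) * (k - n + 1)
        a = -Fraction(num, den) * coeffs[n - 2]
        if a == 0:
            break
        coeffs[n] = a
        n += 2
    return coeffs                     # n -> coefficient of x^(k - n)

def satisfies_legendre_ode(p):
    """Exact check of (1 - x^2) y'' - 2 x y' + p (p + 1) y = 0."""
    poly = {p - n: a for n, a in legendre_series_coeffs(p).items()}
    c = lambda j: poly.get(j, Fraction(0))
    def term(m):                      # coefficient of x^m in the LHS
        return ((m + 2) * (m + 1) * c(m + 2)
                - m * (m - 1) * c(m) - 2 * m * c(m) + p * (p + 1) * c(m))
    return all(term(m) == 0 for m in range(p + 1))
```

For $p = 2$ this gives $x^2 - \dfrac 1 3$, a scalar multiple of $P_2 = \dfrac {3 x^2 - 1} 2$.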
\end{proof}
|
21607
|
\section{Solution to Linear First Order ODE with Constant Coefficients/With Initial Condition}
Tags: Linear First Order ODEs, Linear First Order ODEs with Constant Coefficients
\begin{theorem}
Consider the linear first order ODE with constant coefficients in the form:
:$(1): \quad \dfrac {\d y} {\d x} + a y = \map Q x$
with initial condition $\tuple {x_0, y_0}$
Then $(1)$ has the particular solution:
:$\ds y = e^{-a x} \int_{x_0}^x e^{a \xi} \map Q \xi \rd \xi + y_0 e^{-a \paren {x - x_0} }$
\end{theorem}
\begin{proof}
From Solution to Linear First Order ODE with Constant Coefficients, the general solution to $(1)$ is:
:$(2): \quad \ds y = e^{-a x} \int e^{a x} \map Q x \rd x + C e^{-a x}$
Let $y = y_0$ when $x = x_0$.
We have:
:$(3): \quad y_0 = e^{-a x_0} \int e^{a x_0} \map Q {x_0} \rd x_0 + C e^{-a x_0}$
Thus:
{{begin-eqn}}
{{eqn | l = y e^{a x}
| r = \int e^{a x} \map Q x \rd x + C
| c = multiplying $(2)$ by $e^{a x}$
}}
{{eqn | l = y_0 e^{a x_0}
      | r = \int e^{a x_0} \map Q {x_0} \rd x_0 + C
      | c = multiplying $(3)$ by $e^{a x_0}$
}}
{{eqn | ll= \leadsto
| l = y e^{a x}
      | r = y_0 e^{a x_0} + \int e^{a x} \map Q x \rd x - \int e^{a x_0} \map Q {x_0} \rd x_0
| c = substituting for $C$ and rearranging
}}
{{eqn | r = y_0 e^{a x_0} + \int_{x_0}^x e^{a \xi} \map Q \xi \rd \xi
| c = Fundamental Theorem of Calculus
}}
{{eqn | ll= \leadsto
| l = y
      | r = e^{-a x} \int_{x_0}^x e^{a \xi} \map Q \xi \rd \xi + y_0 e^{-a \paren {x - x_0} }
| c = dividing by $e^{a x}$ and rearranging
}}
{{end-eqn}}
{{qed}}
\end{proof}
|
21608
|
\section{Solution to Linear First Order Ordinary Differential Equation}
Tags: Ordinary Differential Equations, Linear First Order ODEs, First Order ODEs, Solution to Linear First Order Ordinary Differential Equation
\begin{theorem}
A linear first order ordinary differential equation in the form:
:$\dfrac {\d y} {\d x} + \map P x y = \map Q x$
has the general solution:
:$\ds y = e^{-\int P \rd x} \paren {\int Q e^{\int P \rd x} \rd x + C}$
\end{theorem}
\begin{proof}
Consider the first order ordinary differential equation:
:$M \left({x, y}\right) + N \left({x, y}\right) \dfrac {\mathrm d y} {\mathrm d x} = 0$
We can put our equation:
:$(1) \quad \dfrac {\mathrm d y}{\mathrm d x} + P \left({x}\right) y = Q \left({x}\right)$
into this format by identifying:
:$M \left({x, y}\right) \equiv P \left({x}\right) y - Q \left({x}\right), N \left({x, y}\right) \equiv 1$
We see that:
:$\dfrac {\partial M} {\partial y} - \dfrac {\partial N}{\partial x} = P \left({x}\right)$
and hence:
:$P \left({x}\right) = \dfrac {\dfrac {\partial M} {\partial y} - \dfrac {\partial N}{\partial x}} N$
is a function of $x$ only.
It immediately follows from Integrating Factor for First Order ODE that:
:$e^{\int P \left({x}\right) dx}$
is an integrating factor for $(1)$.
So, multiplying $(1)$ by this factor, we get:
:$e^{\int P \left({x}\right) \ \mathrm d x} \dfrac {\mathrm d y} {\mathrm d x} + e^{\int P \left({x}\right) \ \mathrm d x} P \left({x}\right) y = e^{\int P \left({x}\right) \ \mathrm d x} Q \left({x}\right)$
We can now slog through the technique of Solution to Exact Differential Equation.
Alternatively, from the Product Rule for Derivatives, we merely need to note that:
:$\dfrac {\mathrm d} {\mathrm d x} \left({e^{\int P \left({x}\right) \ \mathrm d x} y}\right) = e^{\int P \left({x}\right) \ \mathrm d x} \dfrac {\mathrm d y} {\mathrm d x} + y e^{\int P \left({x}\right) \ \mathrm d x} P \left({x}\right) = e^{\int P \left({x}\right) \ \mathrm d x} \left({\dfrac {\mathrm d y} {\mathrm d x} + P \left({x}\right) y}\right)$
So, if we multiply $(1)$ all through by $e^{\int P \left({x}\right) \ \mathrm d x}$, we get:
:$\dfrac {\mathrm d} {\mathrm d x} \left({e^{\int P \left({x}\right) \ \mathrm d x} y}\right) = Q \left({x}\right)e^{\int P \left({x}\right) \ \mathrm d x}$
Integrating w.r.t. $x$ now gives us:
:$\displaystyle e^{\int P \left({x}\right) \ \mathrm d x} y = \int Q \left({x}\right) e^{\int P \left({x}\right) \ \mathrm d x} \ \mathrm d x + C$
whence we get the result by dividing by $e^{\int P \left({x}\right) \ \mathrm d x}$.
{{qed}}
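As a concrete check of the formula (our own example, not part of the proof), take $\map P x = 1$ and $\map Q x = x$: the integrating factor is $e^x$, $\ds \int x e^x \rd x = \paren {x - 1} e^x$, and the general solution is $y = x - 1 + C e^{-x}$. A finite-difference residual confirms it:

```python
import math

def linear_ode_solution(x, C=2.0):
    """For y' + y = x (P = 1, Q = x) the formula gives
    y = e^{-x} (integral of x e^x dx + C) = x - 1 + C e^{-x}."""
    return x - 1 + C * math.exp(-x)

def linear_ode_residual(x, C=2.0, h=1e-5):
    """Central-difference check of y' + y - x."""
    dy = (linear_ode_solution(x + h, C) - linear_ode_solution(x - h, C)) / (2 * h)
    return dy + linear_ode_solution(x, C) - x
```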
\end{proof}
|
21609
|
\section{Solution to Quadratic Equation}
Tags: Polynomial Theory, Direct Proofs, Polynomial Equations, Algebra, Quadratic Equations
\begin{theorem}
The quadratic equation of the form $a x^2 + b x + c = 0$ has solutions:
:$x = \dfrac {-b \pm \sqrt {b^2 - 4 a c} } {2 a}$
\end{theorem}
\begin{proof}
Let $a x^2 + b x + c = 0$. Then:
{{begin-eqn}}
{{eqn | l = 4 a^2 x^2 + 4 a b x + 4 a c
| r = 0
| c = multiplying through by $4 a$
}}
{{eqn | ll= \leadsto
| l = \paren {2 a x + b}^2 - b^2 + 4 a c
| r = 0
| c = Completing the Square
}}
{{eqn | ll= \leadsto
| l = \paren {2 a x + b}^2
| r = b^2 - 4 a c
}}
{{eqn | ll= \leadsto
| l = x
| r = \frac {-b \pm \sqrt {b^2 - 4 a c} } {2 a}
}}
{{end-eqn}}
{{qed}}
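The formula can be applied directly in code. The following Python sketch (our own illustration) uses complex square roots so that it is valid for any sign of the discriminant:

```python
import cmath

def quadratic_roots(a, b, c):
    """x = (-b +/- sqrt(b^2 - 4 a c)) / (2 a), for any discriminant."""
    root = cmath.sqrt(b * b - 4 * a * c)
    return (-b + root) / (2 * a), (-b - root) / (2 * a)
```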
\end{proof}
|
21610
|
\section{Solution to Quadratic Equation/Real Coefficients}
Tags: Polynomial Equations, Quadratic Equations
\begin{theorem}
Let $a, b, c \in \R$.
The quadratic equation $a x^2 + b x + c = 0$ has:
:Two real solutions if $b^2 - 4 a c > 0$
:One real solution if $b^2 - 4 a c = 0$
:Two complex solutions if $b^2 - 4 a c < 0$, and those two solutions are complex conjugates.
\end{theorem}
\begin{proof}
From Solution to Quadratic Equation:
:$x = \dfrac {-b \pm \sqrt {b^2 - 4 a c} } {2 a}$
If the discriminant $b^2 - 4 a c > 0$ then $\sqrt {b^2 - 4 a c}$ has two values and the result follows.
If the discriminant $b^2 - 4 a c = 0$ then $\sqrt {b^2 - 4 a c} = 0$ and $x = \dfrac {-b} {2 a}$.
If the discriminant $b^2 - 4 a c < 0$, then it can be written as:
:$b^2 - 4 a c = \paren {-1} \size {b^2 - 4 a c}$
Thus:
:$\sqrt {b^2 - 4 a c} = \pm i \sqrt {\size {b^2 - 4 a c} }$
and the two solutions are:
:$x = \dfrac {-b} {2 a} + i \dfrac {\sqrt {\size {b^2 - 4 a c} } } {2 a}, x = \dfrac {-b} {2 a} - i \dfrac {\sqrt {\size {b^2 - 4 a c} } } {2 a}$
and once again the result follows.
{{qed}}
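The three cases of the theorem amount to reading off the sign of the discriminant, as in this Python sketch (our own illustration, for real coefficients):

```python
def classify_roots(a, b, c):
    """Nature of the roots of a x^2 + b x + c = 0, real coefficients,
    read off from the sign of the discriminant b^2 - 4 a c."""
    disc = b * b - 4 * a * c
    if disc > 0:
        return "two real"
    if disc == 0:
        return "one real"
    return "two complex conjugates"
```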
\end{proof}
|
21611
|
\section{Solution to Simultaneous Homogeneous Linear First Order ODEs with Constant Coefficients}
Tags: Linear First Order ODEs, Systems of Differential Equations
\begin{theorem}
Consider the system of linear first order ordinary differential equations with constant coefficients:
{{begin-eqn}}
{{eqn | n = 1
| l = \dfrac {\d y} {\d x} + a y + b z
| r = 0
}}
{{eqn | n = 2
      | l = \dfrac {\d z} {\d x} + c y + d z
| r = 0
}}
{{end-eqn}}
The general solution to $(1)$ and $(2)$ consists of the linear combinations of the following:
{{begin-eqn}}
{{eqn | l = y
| r = A_1 e^{k_1 x}
}}
{{eqn | l = z
| r = B_1 e^{k_1 x}
}}
{{end-eqn}}
and:
{{begin-eqn}}
{{eqn | l = y
| r = A_2 e^{k_2 x}
}}
{{eqn | l = z
| r = B_2 e^{k_2 x}
}}
{{end-eqn}}
where for each $i$ the ratio $A_i : B_i = -b : \paren {k_i + a}$
and $k_1$, $k_2$ are the roots of the quadratic equation:
:$\paren {k + a} \paren {k + d} - b c = 0$
\end{theorem}
\begin{proof}
We look for solutions to $(1)$ and $(2)$ of the form:
{{begin-eqn}}
{{eqn | n = 3
| l = y
| r = A e^{k x}
}}
{{eqn | n = 4
| l = z
| r = B e^{k x}
}}
{{end-eqn}}
We do of course have the Trivial Solution of Homogeneous Linear 1st Order ODE:
:$y = z = 0$
which happens when $A = B = 0$.
So let us investigate solutions where either or both of $A$ and $B$ are non-zero.
Substituting $(3)$ and $(4)$ into $(1)$ and $(2)$ and cancelling $e^{k x}$, we get:
{{begin-eqn}}
{{eqn | n = 5
| l = \paren {k + a} A + b B
| r = 0
}}
{{eqn | n = 6
| l = c A + \paren {k + d} B
| r = 0
}}
{{end-eqn}}
From $(5)$ and $(6)$ we get:
{{begin-eqn}}
{{eqn | n = 7
| o =
| r = \paren {\paren {k + a} \paren {k + d} - b c} A
| c =
}}
{{eqn | r = \paren {\paren {k + a} \paren {k + d} - b c} B
| c =
}}
{{eqn | r = 0
| c =
}}
{{end-eqn}}
So $A = B = 0$ unless $k$ is a root of the quadratic equation:
:$\paren {k + a} \paren {k + d} - b c = 0$
That is:
:$(8): \quad \begin {vmatrix} k + a & b \\ c & k + d \end {vmatrix} = 0$
where the above notation denotes the determinant.
Assume $(8)$ has distinct roots $k_1$ and $k_2$.
Taking $k = k_1$ and $k = k_2$ in $(7)$, we can obtain ratios $A_1 : B_1$ and $A_2 : B_2$ such that:
{{begin-eqn}}
{{eqn | l = y
| r = A_1 e^{k_1 x}
}}
{{eqn | l = z
| r = B_1 e^{k_1 x}
}}
{{end-eqn}}
and:
{{begin-eqn}}
{{eqn | l = y
| r = A_2 e^{k_2 x}
}}
{{eqn | l = z
| r = B_2 e^{k_2 x}
}}
{{end-eqn}}
are solutions of $(1)$ and $(2)$.
By taking arbitrary linear combinations of these, we obtain the general solution.
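As a numerical illustration of the argument (our own example, assuming real distinct roots; not part of the source proof), one can compute $k_1$ and $k_2$ from the quadratic and confirm that the amplitude ratio $B = -\paren {k + a} A / b$ from $(5)$ also satisfies $(6)$:

```python
import math

def system_modes(a, b, c, d):
    """Roots k of (k + a)(k + d) - b c = 0, written as
    k^2 + (a + d) k + (a d - b c) = 0, with amplitude ratios
    B = -(k + a) A / b from equation (5), taking A = 1."""
    p, q = a + d, a * d - b * c
    disc = p * p - 4 * q            # assumed positive: distinct real roots
    roots = [(-p + s * math.sqrt(disc)) / 2 for s in (1, -1)]
    return [(k, 1.0, -(k + a) / b) for k in roots]
```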
{{finish|Cover the case where $k_1 {{=}} k_2$. The source work is vague on this subject. Recommend this solution be reworked, preferably in conjunction with a more rigorous and thorough source work than the one used here.}}
\end{proof}
|
21612
|
\section{Solution to Simultaneous Linear Congruences}
Tags: Modulo Arithmetic
\begin{theorem}
Let:
{{begin-eqn}}
{{eqn | l = a_1 x
| o = \equiv
| r = b_1
| rr= \pmod {n_1}
| c =
}}
{{eqn | l = a_2 x
| o = \equiv
| r = b_2
| rr= \pmod {n_2}
| c =
}}
{{eqn | o = \ldots
| c =
}}
{{eqn | l = a_r x
| o = \equiv
| r = b_r
| rr= \pmod {n_r}
| c =
}}
{{end-eqn}}
be a system of simultaneous linear congruences.
This system has a simultaneous solution {{iff}}:
:$\forall i, j: 1 \le i, j \le r: \gcd \set {n_i, n_j}$ divides $b_j - b_i$.
If a solution exists then it is unique modulo $\lcm \set {n_1, n_2, \ldots, n_r}$.
\end{theorem}
\begin{proof}
We take the case where $r = 2$.
Suppose $x \in \Z$ satisfies both:
{{begin-eqn}}
{{eqn | l = a_1 x
| o = \equiv
| r = b_1
| rr= \pmod {n_1}
| c =
}}
{{eqn | l = a_2 x
| o = \equiv
| r = b_2
| rr= \pmod {n_2}
| c =
}}
{{end-eqn}}
That is, $\exists r, s \in \Z$ such that:
{{begin-eqn}}
{{eqn | l = x - b_1
| r = n_1 r
| c =
}}
{{eqn | l = x - b_2
| r = n_2 s
| c =
}}
{{end-eqn}}
Eliminating $x$, we get:
:$b_2 - b_1 = n_1 r - n_2 s$
The {{RHS}} is an integer combination of $n_1$ and $n_2$ and so is a multiple of $\gcd \set {n_1, n_2}$.
Thus $\gcd \set {n_1, n_2}$ divides $b_2 - b_1$, so this is a necessary condition for the system to have a solution.
To show sufficiency, we reverse the argument.
Suppose $\exists k \in \Z: b_2 - b_1 = k \gcd \set {n_1, n_2}$.
We know that $\exists u, v \in \Z: \gcd \set {n_1, n_2} = u n_1 + v n_2$ from Bézout's Identity.
Eliminating $\gcd \set {n_1, n_2}$, we have:
:$b_1 + k u n_1 = b_2 - k v n_2$.
Then:
:$b_1 + k u n_1 = b_1 + \paren {k u} n_1 \equiv b_1 \pmod {n_1}$
:$b_1 + k u n_1 = b_2 - \paren {k v} n_2 \equiv b_2 \pmod {n_2}$
So $b_1 + k u n_1$ satisfies both congruences and so simultaneous solutions do exist.
Now to show uniqueness.
Suppose $x_1$ and $x_2$ are both solutions.
That is:
:$x_1 \equiv x_2 \equiv b_1 \pmod {n_1}$
:$x_1 \equiv x_2 \equiv b_2 \pmod {n_2}$
Then from Intersection of Congruence Classes the result follows.
{{qed}}
The result for $r > 2$ follows by induction on $r$.
{{finish}}
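For the case $a_1 = a_2 = 1$ treated in the proof, the criterion can be verified exhaustively for small moduli. The following Python sketch (the helper name `solve_pair` is illustrative) brute-forces the system $x \equiv b_1 \pmod {n_1}$, $x \equiv b_2 \pmod {n_2}$ and checks that a solution exists exactly when $\gcd \set {n_1, n_2}$ divides $b_2 - b_1$, and is then unique modulo $\lcm \set {n_1, n_2}$:

```python
from math import gcd

def solve_pair(b1, n1, b2, n2):
    """Brute-force x == b1 (mod n1), x == b2 (mod n2); solutions mod lcm."""
    l = n1 * n2 // gcd(n1, n2)
    return [x for x in range(l) if x % n1 == b1 % n1 and x % n2 == b2 % n2]

# Criterion: solvable iff gcd(n1, n2) divides b2 - b1; then unique mod lcm.
checks = []
for b1 in range(6):
    for b2 in range(8):
        sols = solve_pair(b1, 6, b2, 8)
        solvable = (b2 - b1) % gcd(6, 8) == 0
        checks.append((len(sols) == 1) == solvable and (solvable or len(sols) == 0))
```

Here $\gcd \set {6, 8} = 2$ and $\lcm \set {6, 8} = 24$, so every solvable pair yields exactly one solution in $\set {0, 1, \ldots, 23}$.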
Category:Modulo Arithmetic
\end{proof}
|
21613
|
\section{Solution to Simultaneous Linear Equations}
Tags: Matrix Algebra, Linear Algebra, Simultaneous Linear Equations, Simultaneous Equations
\begin{theorem}
Let $\ds \forall i \in \closedint 1 m: \sum _{j \mathop = 1}^n {\alpha_{i j} x_j} = \beta_i$ be a system of simultaneous linear equations
where all of $\alpha_{1 1}, \ldots, \alpha_{m n}, x_1, \ldots, x_n, \beta_1, \ldots, \beta_m$ are elements of a field $K$.
Then $x = \tuple {x_1, x_2, \ldots, x_n}$ is a solution of this system {{iff}}:
:$\sqbrk \alpha_{m n} \sqbrk x_{n 1} = \sqbrk \beta_{m 1}$
where $\sqbrk a_{m n}$ is an $m \times n$ matrix.
\end{theorem}
\begin{proof}
We can see the truth of this by writing them out in full.
:$\ds \sum_{j \mathop = 1}^n {\alpha_{i j} x_j} = \beta_i$
can be written as:
{{begin-eqn}}
{{eqn | l = \alpha_{1 1} x_1 + \alpha_{1 2} x_2 + \ldots + \alpha_{1 n} x_n
| r = \beta_1
| c =
}}
{{eqn | l = \alpha_{2 1} x_1 + \alpha_{2 2} x_2 + \ldots + \alpha_{2 n} x_n
| r = \beta_2
| c =
}}
{{eqn | o = \vdots
}}
{{eqn | l = \alpha_{m 1} x_1 + \alpha_{m 2} x_2 + \ldots + \alpha_{m n} x_n
| r = \beta_m
| c =
}}
{{end-eqn}}
while $\sqbrk \alpha_{m n} \sqbrk x_{n 1} = \sqbrk \beta_{m 1}$ can be written as:
:$\begin {bmatrix}
\alpha_{1 1} & \alpha_{1 2} & \cdots & \alpha_{1 n} \\
\alpha_{2 1} & \alpha_{2 2} & \cdots & \alpha_{2 n} \\
\vdots & \vdots & \ddots & \vdots \\
\alpha_{m 1} & \alpha_{m 2} & \cdots & \alpha_{m n}
\end {bmatrix}
\begin {bmatrix}
x_1 \\ x_2 \\ \vdots \\ x_n
\end {bmatrix}
= \begin {bmatrix}
\beta_1 \\ \beta_2 \\ \vdots \\ \beta_m
\end {bmatrix}$
So the question:
:Find a solution to the following system of $m$ simultaneous linear equations in $n$ variables
is equivalent to:
:Given the following element $\mathbf A \in \map {\MM_K} {m, n}$ and $\mathbf b \in \map {\MM_K} {m, 1}$, find the set of all $\mathbf x \in \map {\MM_K} {n, 1}$ such that $\mathbf A \mathbf x = \mathbf b$
where $\map {\MM_K} {m, n}$ is the $m \times n$ matrix space over $K$.
{{qed}}
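The equivalence of the two notations can be seen in a few lines of Python (a plain-list sketch; the helper name `mat_vec` is illustrative): computing $\sum_j \alpha_{i j} x_j$ row by row gives exactly the same numbers as writing out each scalar equation by hand.

```python
def mat_vec(A, x):
    """Multiply an m x n matrix (list of rows) by a length-n vector."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

# Hypothetical 2 x 3 system: the matrix product packs both scalar equations.
A = [[1, 2, 3],
     [4, 5, 6]]
x = [1, 1, 1]
b = mat_vec(A, x)          # beta_i = sum_j alpha_ij x_j
scalar = [A[0][0] * x[0] + A[0][1] * x[1] + A[0][2] * x[2],
          A[1][0] * x[0] + A[1][1] * x[1] + A[1][2] * x[2]]
```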
\end{proof}
|
21614
|
\section{Solutions of Linear 2nd Order ODE have Common Zero iff Linearly Dependent}
Tags: Linear Second Order ODEs
\begin{theorem}
Let $\map {y_1} x$ and $\map {y_2} x$ be particular solutions to the homogeneous linear second order ODE:
:$(1): \quad \dfrac {\d^2 y} {\d x^2} + \map P x \dfrac {\d y} {\d x} + \map Q x y = 0$
on a closed interval $\closedint a b$.
Let $y_1$ and $y_2$ both have a zero for the same value of $x$ in $\closedint a b$.
Then $y_1$ and $y_2$ are constant multiples of each other.
That is, $y_1$ and $y_2$ are linearly dependent.
\end{theorem}
\begin{proof}
Let $\xi \in \closedint a b$ be such that $\map {y_1} \xi = \map {y_2} \xi = 0$.
Consider the Wronskian $\map W {y_1, y_2}$ at $\xi$:
{{begin-eqn}}
{{eqn | l = \map W {\map {y_1} \xi, \map {y_2} \xi}
| r = \map {y_1} \xi \map { {y_2}'} \xi - \map {y_2} \xi \map { {y_1}'} \xi
| c =
}}
{{eqn | r = 0 \cdot \map { {y_2}'} \xi - 0 \cdot \map { {y_1}'} \xi
| c =
}}
{{eqn | r = 0
| c =
}}
{{end-eqn}}
From Zero Wronskian of Solutions of Homogeneous Linear Second Order ODE:
:$\forall x \in \closedint a b: \map W {\map {y_1} x, \map {y_2} x} = 0$
and so from Zero Wronskian of Solutions of Homogeneous Linear Second Order ODE iff Linearly Dependent:
:$y_1$ and $y_2$ are linearly dependent.
{{qed}}
\end{proof}
|
21615
|
\section{Solutions of Polynomial Congruences}
Tags: Number Theory, Modulo Arithmetic
\begin{theorem}
Let $\map P x$ be an integral polynomial.
Let $a \equiv b \pmod n$.
Then $\map P a \equiv \map P b \pmod n$.
In particular, $a$ is a solution to the polynomial congruence $\map P x \equiv 0 \pmod n$ {{iff}} $b$ is also.
\end{theorem}
\begin{proof}
Let $\map P x = c_m x^m + c_{m - 1} x^{m - 1} + \cdots + c_1 x + c_0$.
Since $a \equiv b \pmod n$, from Congruence of Product and Congruence of Powers, we have $c_r a^r \equiv c_r b^r \pmod n$ for each $r \in \Z: r \ge 1$.
From Modulo Addition we then have:
{{begin-eqn}}
{{eqn | l = \map P a
| r = c_m a^m + c_{m - 1} a^{m - 1} + \cdots + c_1 a + c_0
| c =
}}
{{eqn | o = \equiv
| r = c_m b^m + c_{m - 1} b^{m - 1} + \cdots + c_1 b + c_0
| rr= \pmod n
| c =
}}
{{eqn | o = \equiv
| r = \map P b
| rr= \pmod n
| c =
}}
{{end-eqn}}
In particular, $\map P a \equiv 0 \iff \map P b \equiv 0 \pmod n$.
That is, $a$ is a solution to the polynomial congruence $\map P x \equiv 0 \pmod n$ {{iff}} $b$ is also.
{{qed}}
Category:Modulo Arithmetic
\end{proof}
|
21616
|
\section{Solutions of Pythagorean Equation/General}
Tags: Solutions of Pythagorean Equation, Diophantine Equations, Pythagorean Triples
\begin{theorem}
Let $x, y, z$ be a solution to the Pythagorean equation.
Then $x = k x', y = k y', z = k z'$, where:
:$\tuple {x', y', z'}$ is a primitive Pythagorean triple
:$k \in \Z: k \ge 1$
\end{theorem}
\begin{proof}
Let $\tuple {x, y, z}$ be a non-primitive solution to the Pythagorean equation.
First suppose that:
:$\exists k \in \Z: k \ge 2: k \divides x, k \divides y$
Then we can express $x$ and $y$ as $x = k x', y = k y'$.
Thus:
:$z^2 = k^2 x'^2 + k^2 y'^2 = k^2 z'^2$
for some $z' \in \Z$.
Now suppose that:
:$\exists k \in \Z: k \ge 2: k \divides x, k \divides z$
Then we can express $x$ and $z$ as $x = k x', z = k z'$.
Thus:
:$y^2 = k^2 z'^2 - k^2 x'^2 = k^2 y'^2$
for some $y' \in \Z$.
Similarly for any common divisor of $y$ and $z$.
Thus any common divisor of any two of $x, y, z$ is also a divisor of the third.
Hence any non-primitive solution to the Pythagorean equation is a constant multiple of some primitive solution.
{{qed}}
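The reduction to a primitive triple amounts to dividing through by the common divisor. A Python sketch (the helper name `primitive_part` is illustrative; by the argument above, the $\gcd$ of all three equals the $\gcd$ of any pair, so the quotient triple is primitive):

```python
from math import gcd

def primitive_part(x, y, z):
    """Split a Pythagorean triple into (k, primitive triple) with x = k x' etc."""
    k = gcd(gcd(x, y), z)
    return k, (x // k, y // k, z // k)

k, prim = primitive_part(9, 12, 15)   # 9-12-15 is 3 times the 3-4-5 triangle
xp, yp, zp = prim
```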
\end{proof}
|
21617
|
\section{Solutions of Ramanujan-Nagell Equation}
Tags: 15, Ramanujan-Nagell Equation
\begin{theorem}
Integer solutions to the Ramanujan-Nagell equation:
:$x^2 + 7 = 2^n$
exist for only $5$ values of $n$:
:$3, 4, 5, 7, 15$
{{OEIS|A060728}}
The corresponding values of $x$ are:
:$1, 3, 5, 11, 181$
{{OEIS|A038198}}
\end{theorem}
\begin{proof}
By direct computation:
{{begin-eqn}}
{{eqn | n = 1
| l = 1^2 + 7
| r = 1 + 7
| c =
}}
{{eqn | r = 8
| c =
}}
{{eqn | r = 2^3
| c =
}}
{{end-eqn}}
{{begin-eqn}}
{{eqn | n = 2
| l = 3^2 + 7
| r = 9 + 7
| c =
}}
{{eqn | r = 16
| c =
}}
{{eqn | r = 2^4
| c =
}}
{{end-eqn}}
{{begin-eqn}}
{{eqn | n = 3
| l = 5^2 + 7
| r = 25 + 7
| c =
}}
{{eqn | r = 32
| c =
}}
{{eqn | r = 2^5
| c =
}}
{{end-eqn}}
{{begin-eqn}}
{{eqn | n = 4
| l = 11^2 + 7
| r = 121 + 7
| c =
}}
{{eqn | r = 128
| c =
}}
{{eqn | r = 2^7
| c =
}}
{{end-eqn}}
{{begin-eqn}}
{{eqn | n = 5
| l = 181^2 + 7
| r = 32 \, 761 + 7
| c =
}}
{{eqn | r = 32 \, 768
| c =
}}
{{eqn | r = 2^{15}
| c =
}}
{{end-eqn}}
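The five solutions can also be recovered by a finite search: for each $n$ we test whether $2^n - 7$ is a perfect square. The sketch below searches $3 \le n \le 60$; note that a finite search cannot replace the (wanted) proof that no larger $n$ works.

```python
from math import isqrt

# Search x^2 + 7 = 2^n for 3 <= n <= 60 (2^n - 7 > 0 requires n >= 3).
solutions = []
for n in range(3, 61):
    t = 2 ** n - 7
    x = isqrt(t)
    if x * x == t:
        solutions.append((n, x))
```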
{{ProofWanted|It remains to be proved that these are the only solutions.}}
\end{proof}
|
21618
|
\section{Solutions to Approximate Fermat Equation x^3 = y^3 + z^3 Plus or Minus 1}
Tags: Approximate Fermat Equations
\begin{theorem}
The approximate Fermat equation:
:$x^3 = y^3 + z^3 \pm 1$
has the solutions:
{{begin-eqn}}
{{eqn | l = 9^3
| r = 6^3 + 8^3 + 1
}}
{{eqn | l = 103^3
| r = 64^3 + 94^3 - 1
| c =
}}
{{end-eqn}}
\end{theorem}
\begin{proof}
Performing the arithmetic:
{{begin-eqn}}
{{eqn | l = 6^3 + 8^3 + 1
| r = 216 + 512 + 1
}}
{{eqn | r = 729
| c =
}}
{{eqn | r = 9^3
| c =
}}
{{end-eqn}}
{{begin-eqn}}
{{eqn | l = 64^3 + 94^3 - 1
| r = 262 \, 144 + 830 \, 584 - 1
}}
{{eqn | r = 1 \, 092 \, 727
| c =
}}
{{eqn | r = 103^3
| c =
}}
{{end-eqn}}
{{qed}}
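The arithmetic above is mechanical and can be confirmed directly:

```python
# Direct verification of the two stated near-miss cubes.
lhs1, rhs1 = 9 ** 3, 6 ** 3 + 8 ** 3 + 1
lhs2, rhs2 = 103 ** 3, 64 ** 3 + 94 ** 3 - 1
```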
\end{proof}
|
21619
|
\section{Solutions to Diophantine Equation 16x^2+32x+20 = y^2+y}
Tags: Diophantine Equations
\begin{theorem}
The indeterminate Diophantine equation:
:$16x^2 + 32x + 20 = y^2 + y$
has exactly $4$ solutions:
:$\tuple {0, 4}, \tuple {-2, 4}, \tuple {0, -5}, \tuple {-2, -5}$
\end{theorem}
\begin{proof}
{{begin-eqn}}
{{eqn | l = 16 x^2 + 32 x + 20
| r = y^2 + y
| c =
}}
{{eqn | ll= \leadsto
      | l = 16 x^2 + 32 x + 16 + 4
      | r = y^2 + y
      | c = 
}}
{{eqn | ll= \leadsto
      | l = 16 \paren {x^2 + 2 x + 1} + 4
      | r = y^2 + y
      | c = 
}}
{{eqn | ll= \leadsto
      | l = 16 \paren {x + 1}^2 + 4
      | r = y^2 + y
      | c = 
}}
{{eqn | ll= \leadsto
| l = 64 \paren {x + 1}^2 + 16
| r = 4 y^2 + 4 y
| c =
}}
{{eqn | ll= \leadsto
| l = \paren {8 x + 8}^2 + 17
| r = 4 y^2 + 4 y + 1
| c =
}}
{{eqn | ll= \leadsto
| l = \paren {8 x + 8}^2 + 17
| r = \paren {2 y + 1}^2
| c =
}}
{{eqn | ll= \leadsto
| l = 17
| r = \paren {2 y + 1}^2 - \paren {8 x + 8}^2
| c =
}}
{{eqn | r = \paren {2 y + 1 - 8 x - 8} \paren {2 y + 1 + 8 x + 8}
| c =
}}
{{eqn | r = \paren {2 y - 8 x - 7} \paren {2 y + 8 x + 9}
| c =
}}
{{end-eqn}}
$17$ is prime and can therefore be expressed as a product of two integers in only two ways:
{{begin-eqn}}
{{eqn | l = 17
| r = 1 \times 17
| c =
}}
{{eqn | l = 17
| r = -1 \times -17
| c =
}}
{{end-eqn}}
This leaves us with four systems of equations with four solutions:
{{begin-eqn}}
{{eqn | l = 1
| r = 2 y - 8 x - 7
| c =
}}
{{eqn | l = 17
| r = 2 y + 8 x + 9
| c =
}}
{{eqn | ll= \leadsto
| l = 1 + 17
| r = 2 y - 8 x - 7 + 2 y + 8 x + 9
| c =
}}
{{eqn | ll= \leadsto
| l = 18
| r = 4y + 2
| c =
}}
{{eqn | ll= \leadsto
| l = 4
| r = y
| c =
}}
{{eqn | l = 1
| r = 2 \paren 4 - 8 x - 7
| c =
}}
{{eqn | ll= \leadsto
| l = 0
| r = x
| c =
}}
{{end-eqn}}
Hence the solution:
:$\tuple {0, 4}$
{{begin-eqn}}
{{eqn | l = 17
| r = 2 y - 8 x - 7
| c =
}}
{{eqn | l = 1
| r = 2 y + 8 x + 9
| c =
}}
{{eqn | ll= \leadsto
| l = 17 + 1
| r = 2 y - 8 x - 7 + 2 y + 8 x + 9
| c =
}}
{{eqn | ll= \leadsto
| l = 18
| r = 4 y + 2
| c =
}}
{{eqn | ll= \leadsto
| l = 4
| r = y
| c =
}}
{{eqn | l = 1
| r = 2 \paren 4 + 8 x + 9
| c =
}}
{{eqn | ll= \leadsto
| l = -2
| r = x
| c =
}}
{{end-eqn}}
Hence the solution:
:$\tuple {-2, 4}$
{{begin-eqn}}
{{eqn | l = -17
| r = 2 y - 8 x - 7
| c =
}}
{{eqn | l = -1
| r = 2 y + 8 x + 9
| c =
}}
{{eqn | ll= \leadsto
| l = -1 - 17
| r = 2 y - 8 x - 7 + 2 y + 8 x + 9
| c =
}}
{{eqn | ll= \leadsto
| l = -18
| r = 4 y + 2
| c =
}}
{{eqn | ll= \leadsto
| l = -5
| r = y
| c =
}}
{{eqn | l = -1
| r = 2 \paren {-5} + 8 x + 9
| c =
}}
{{eqn | ll= \leadsto
| l = 0
| r = x
| c =
}}
{{end-eqn}}
Hence the solution:
:$\tuple {0, -5}$
{{begin-eqn}}
{{eqn | l = -1
| r = 2 y - 8 x - 7
| c =
}}
{{eqn | l = -17
| r = 2 y + 8 x + 9
| c =
}}
{{eqn | ll= \leadsto
| l = -1 - 17
| r = 2 y - 8 x - 7 + 2 y + 8 x + 9
| c =
}}
{{eqn | ll= \leadsto
| l = -18
| r = 4 y + 2
| c =
}}
{{eqn | ll= \leadsto
| l = -5
| r = y
| c =
}}
{{eqn | l = -1
| r = 2 \paren {-5} - 8 x - 7
| c =
}}
{{eqn | ll= \leadsto
| l = -2
| r = x
| c =
}}
{{end-eqn}}
Hence the solution:
:$\tuple {-2, -5}$
{{qed}}
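The factorisation argument also bounds the solutions: $\paren {2 y + 1}^2 - \paren {8 x + 8}^2 = 17$ forces $\size {2 y + 1} \le 9$ and $\size {8 x + 8} \le 8$, so a search over a small box finds all solutions. A Python sketch:

```python
# Exhaustive search in a box that provably contains all solutions:
# |2y + 1| <= 9 and |x + 1| <= 1, so |x|, |y| <= 10 is ample.
sols = sorted((x, y) for x in range(-10, 11) for y in range(-10, 11)
              if 16 * x * x + 32 * x + 20 == y * y + y)
```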
\end{proof}
|
21620
|
\section{Sommerfeld-Watson Transform}
Tags: Complex Analysis
\begin{theorem}
Let $f: \C \to \C$ be a function which is analytic except at isolated poles.
Let $\map f z$ go to zero faster than $\dfrac 1 {\size z}$ as $\size z \to \infty$, that is:
:$\size {z \map f z} \to 0$ as $\size z \to \infty$
Let $C$ be a contour that is deformed such that all poles of $\map f z$ are contained in $C$.
Then:
:$\ds \sum \limits_{n \mathop = -\infty}^\infty \paren {-1}^n \map f n = \frac 1 {2 i} \oint_C \frac {\map f z} {\sin \pi z} \rd z$
\end{theorem}
\begin{proof}
We know from the Residue Theorem:
{{begin-eqn}}
{{eqn | l = \oint_C \frac {\map f z} {\sin \pi z} \rd z
| r = 2 \pi i \sum_{z_k} \map {R_k} {z_k}
| c =
}}
{{eqn | r = 2 \pi i \, \sum_{z_k} \lim_{z \mathop \to z_k} \paren {\paren {z - z_k} \frac {\map f z} {\sin \pi z} }
| c =
}}
{{end-eqn}}
Here each $z_k$ is a simple pole, that is a pole of order $1$, of $\dfrac {\map f z} {\sin \pi z}$.
Using l'Hôpital's rule:
{{begin-eqn}}
{{eqn | l = \oint_C \frac {\map f z} {\sin \pi z} \rd z
| r = 2 \pi i \sum_{z_k} \lim_{z \mathop \to z_k} \paren {\frac {\dfrac \d {\d z} \paren {\paren {z - z_k} \map f z} } {\dfrac \d {\d z} \sin \pi z} }
| c =
}}
{{eqn | r = 2 \pi i \sum_{z_k} \lim_{z \mathop \to z_k} \paren {\frac {\map f z + \paren {z - z_k} \map {f'} z} {\pi \cos \pi z} }
| c =
}}
{{end-eqn}}
But $\sin \pi z$ has zeros at $z = n$ for $n \in \Z$, so $\dfrac {\map f z} {\sin \pi z}$ has poles at $z_k = n$, which implies:
{{begin-eqn}}
{{eqn | l = \oint_C \map f z \rd z
| r = 2 \pi i \sum_{n \mathop = -\infty}^\infty \lim_{z \mathop \to n} \paren {\frac {\map f z + \paren {z - n} \map {f'} z} {\pi \cos \pi z} }
| c =
}}
{{eqn | r = 2 i \sum_{n \mathop = -\infty}^\infty \paren {\frac {\map f n} {\cos \pi n} }
| c =
}}
{{end-eqn}}
Finally:
:$\dfrac 1 {\cos \pi n} = \cos \pi n = \paren {-1}^n$
Therefore:
:$\ds \frac 1 {2 i} \oint_C \frac {\map f z} {\sin \pi z} \rd z = \sum_{n \mathop = -\infty}^\infty \paren {-1}^n \map f n$
{{qed}}
{{Namedfor|Arnold Johannes Wilhelm Sommerfeld|name2 = George Neville Watson|cat = Sommerfeld|cat2 = Watson}}
Category:Complex Analysis
\end{proof}
|
21621
|
\section{Sophie Germain's Identity}
Tags: Named Theorems, Fourth Powers, Algebra, Sophie Germain's Identity
\begin{theorem}
For any two numbers $x$ and $y$:
:$x^4 + 4 y^4 = \paren {x^2 + 2 y^2 + 2 x y} \paren {x^2 + 2 y^2 - 2 x y}$
\end{theorem}
\begin{proof}
Multiply out the {{RHS}}:
{{begin-eqn}}
{{eqn | o = 
      | r = \paren {x^2 + 2 y^2 + 2 x y} \paren {x^2 + 2 y^2 - 2 x y}
      | c = 
}}
{{eqn | r = \paren {x^2 + 2 y^2}^2 - \paren {2 x y}^2
      | c = Difference of Two Squares
}}
{{eqn | r = x^4 + 4 x^2 y^2 + 4 y^4 - 4 x^2 y^2
      | c = 
}}
{{eqn | r = x^4 + 4 y^4
      | c = 
}}
{{end-eqn}}
{{qed}}
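Since both sides are polynomials, checking the identity on a grid of integer points (more points than the degree in each variable) already confirms it. A quick Python sketch:

```python
def lhs(x, y):
    return x ** 4 + 4 * y ** 4

def rhs(x, y):
    return (x * x + 2 * y * y + 2 * x * y) * (x * x + 2 * y * y - 2 * x * y)

# Exact integer check over a grid of points.
ok = all(lhs(x, y) == rhs(x, y) for x in range(-20, 21) for y in range(-20, 21))
```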
{{namedfor|Sophie Germain}}
Category:Algebra
Category:Named Theorems
\end{proof}
|
21622
|
\section{Sophie Germain Prime cannot be 6n+1}
Tags: Sophie Germain Primes
\begin{theorem}
Let $p$ be a Sophie Germain prime.
Then $p$ cannot be of the form $6 n + 1$, where $n$ is a positive integer.
\end{theorem}
\begin{proof}
Let $p$ be a Sophie Germain prime.
Then, by definition, $2 p + 1$ is prime.
{{AimForCont}} $p = 6 n + 1$ for some $n \in \Z_{>0}$.
Then:
{{begin-eqn}}
{{eqn | l = 2 p + 1
| r = 2 \paren {6 n + 1} + 1
| c =
}}
{{eqn | r = 12 n + 3
| c =
}}
{{eqn | r = 3 \paren {4 n + 1}
| c =
}}
{{end-eqn}}
and so $2 p + 1$ is not prime.
The result follows by Proof by Contradiction.
{{qed}}
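Equivalently: every Sophie Germain prime other than $2$ and $3$ leaves remainder $5$ on division by $6$. A Python sketch confirming this for all Sophie Germain primes below $2000$ (trial-division primality test for simplicity):

```python
def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

sg = [p for p in range(2, 2000) if is_prime(p) and is_prime(2 * p + 1)]
# No Sophie Germain prime is of the form 6n + 1.
no_6n_plus_1 = all(p % 6 != 1 for p in sg)
```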
Category:Sophie Germain Primes
\end{proof}
|
21623
|
\section{Sophomore's Dream}
Tags: Named Theorems, Integral Calculus, Definite Integrals
\begin{theorem}
'''Sophomore's Dream''' refers to two identities discovered in $1697$ by {{AuthorRef|Johann Bernoulli}}:
:$\displaystyle \int_0^1 x^{-x} \ \mathrm d x = \sum_{n \mathop = 1}^\infty n^{-n}$
:$\displaystyle \int_0^1 x^x \ \mathrm d x = -\sum_{n \mathop = 1}^\infty \left({-n}\right)^{-n} = \sum_{n \mathop = 1}^\infty \left({-1}\right)^{n + 1} n^{-n}$
\end{theorem}
\begin{proof}
The following is a proof of the second identity; the first follows the same lines.
By definition, we can express $x^x$ as:
{{begin-eqn}}
{{eqn | l = x^x
| r = \exp \left({x \ln x}\right)
| c = Definition of Power to Real Number
}}
{{eqn | r = \sum_{n \mathop = 0}^\infty \frac{x^n \left({\ln x}\right)^n}{n!}
| c = Power Series Expansion for Exponential Function
}}
{{end-eqn}}
Thus the exercise devolves into the following sum of integrals:
:$\displaystyle \int_0^1 x^x dx = \sum_{n \mathop = 0}^\infty \int_0^1 \frac{x^n \left({\ln x}\right)^n}{n!} \ \mathrm d x$
We can evaluate this by Integration by Parts.
Integrate:
: $\displaystyle \int x^m \left({\ln x}\right)^n \ \mathrm d x$
by taking $u = \left({\ln x}\right)^n$ and $\mathrm d v = x^m \mathrm d x$, which gives us:
: $\displaystyle \int x^m \left({\ln x}\right)^n \ \mathrm d x = \frac{x^{m+1}\left({\ln x}\right)^n}{m+1} - \frac n {m+1} \int x^{m+1} \frac{\left({\ln x}\right)^{n-1}} x \mathrm d x \qquad \text{ for } m \ne -1$
for $m \ne -1$.
Thus, by induction:
: $\displaystyle \int x^m \left({\ln x}\right)^n \ \mathrm d x = \frac {x^{m+1}} {m+1} \sum_{i \mathop = 0}^n \left({-1}\right)^i \frac{\left({n}\right)_i}{\left({m+1}\right)^i} \left({\ln x}\right)^{n-i}$
where $\left({n}\right)_i$ denotes the falling factorial.
In this case $m = n$, and they are integers, so:
:$\displaystyle \int x^n (\ln x)^n \ \mathrm d x = \frac{x^{n+1}}{n+1} \cdot \sum_{i \mathop = 0}^n \left({-1}\right)^i \frac{\left({n}\right)_i}{\left({n+1}\right)^i} \left({\ln x}\right)^{n-i}$
We integrate from $0$ to $1$.
By L'Hôpital's Rule, we have that:
:$\displaystyle \lim_{x \to 0^+} x^m \left({\ln x}\right)^n = 0$
Because of this, and the fact that $\ln 1 = 0$, all the terms vanish except the last term at $1$.
This yields:
:$\displaystyle \int_0^1 \frac{x^n \left({\ln x}\right)^n}{n!} \ \mathrm d x = \frac 1 {n!}\frac {1^{n+1}}{n+1} \left({-1}\right)^n \frac{\left({n}\right)_n}{\left({n+1}\right)^n} = \left({-1}\right)^n \left({n+1}\right)^{-\left({n+1}\right)}$
Summing these (and changing the indexing so it starts at $n = 1$ instead of $n = 0$) yields the formula.
{{qed}}
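The identity can be checked numerically: the series converges extremely fast, and a midpoint-rule approximation of the integral agrees with it to many digits. A Python sketch (the grid size $N$ is an arbitrary choice):

```python
# Series value: sum_{n>=0} (-1)^n (n+1)^{-(n+1)} = sum_{m>=1} (-1)^{m+1} m^{-m}.
series = sum((-1) ** n * (n + 1) ** -(n + 1.0) for n in range(20))

# Midpoint-rule approximation of the integral of x^x over (0, 1).
N = 200000
integral = sum(((i + 0.5) / N) ** ((i + 0.5) / N) for i in range(N)) / N
```

Both computations give approximately $0.7834305107$.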
\end{proof}
|
21624
|
\section{Sorgenfrey Line is Expansion of Real Line}
Tags: Sorgenfrey Line
\begin{theorem}
Let $\R = \struct {\R, d}$ be the metric space defined in Real Number Line is Metric Space.
Let $T = \struct {\R, \tau}$ be the Sorgenfrey line.
Then $T$ is an expansion of $\R$ as a topological space.
\end{theorem}
\begin{proof}
It is enough to prove that any open set in $\R$ is open in $T$.
Let $a, b \in \R$.
Then:
:$\ds \openint a b = \bigcup_{\epsilon \mathop > 0} \hointr {a + \epsilon} b$
Since $\hointr {a + \epsilon} b$ are open in $T$, $\openint a b$ is also open in $T$.
{{qed}}
Category:Sorgenfrey Line
\end{proof}
|
21625
|
\section{Sorgenfrey Line is First-Countable}
Tags: Sorgenfrey Line
\begin{theorem}
Let $\R$ be the set of real numbers.
Let $\BB = \set {\hointr a b: a, b \in \R}$.
Let $\tau$ be the topology generated by $\BB$, that is, the Sorgenfrey line.
Then $\tau$ is first-countable.
\end{theorem}
\begin{proof}
Let $\BB_x = \set {\hointr x {x + \dfrac 1 n} : n \in \N_{>0} }$.
We will show that:
:$(1): \quad \BB_x$ is countable
:$(2): \quad \BB_x$ is a local basis at $x$
$(1)$ follows from the fact that $n \mapsto \hointr x {x + \dfrac 1 n}$ is a surjection from $\N_{>0}$ onto $\BB_x$.
$(2)$ is demonstrated as follows:
By definition of local basis, it suffices to show that $\forall U \in \tau: x \in U: \exists B \in \BB_x: B \subseteq U$.
Pick any $U$ in $\tau$.
By definition of $\tau$, there exists $\hointr x {x + \epsilon} \subseteq U$ for some $\epsilon \in \R_{>0}$.
By the Archimedean Principle there exists $n \in \N$ such that $n > \dfrac 1 \epsilon$ (that is, $\dfrac 1 n < \epsilon$).
So:
:$x \in \hointr x {x + \dfrac 1 n} \subseteq \hointr x {x + \epsilon} \subseteq U$
Therefore $\BB_x$ is a local basis at $x$.
We have seen that the Sorgenfrey line has a countable local basis at every point.
By definition of first-countability, $\tau$ is first-countable.
{{qed}}
\end{proof}
|
21626
|
\section{Sorgenfrey Line is Hausdorff}
Tags: Hausdorff Spaces, Sorgenfrey Line
\begin{theorem}
Let $T = \struct {\R, \tau}$ be the Sorgenfrey line.
Then $T$ is Hausdorff.
\end{theorem}
\begin{proof}
Take $x, y \in \R$ such that $x \ne y$.
{{WLOG}}, assume that $x < y$.
From Real Numbers are Densely Ordered:
:$\exists t \in \R: x < t < y$
Then:
:$\hointr x t \cap \hointr t {y + 1} = \O$
We also have that $x \in \hointr x t$ by definition of half-open interval.
Also, as $t < y < y+1$ it is clear that $y \in \hointr t {y + 1}$.
By definition of the Sorgenfrey line, both are open in $T$.
Thus we have found two disjoint subsets of $\R$ which are open in $T$, such that one contains $x$ and the other contains $y$.
Hence the Sorgenfrey Line is Hausdorff by definition.
{{qed}}
\end{proof}
|
21627
|
\section{Sorgenfrey Line is Lindelöf}
Tags: Lindelöf Spaces, Sorgenfrey Line
\begin{theorem}
The Sorgenfrey line is Lindelöf.
\end{theorem}
\begin{proof}
Let $T = \struct {\R, \tau}$ be the Sorgenfrey line.
Let $\CC$ be an open cover for $\R$.
Define $\VV = \set {U^\circ_\R: U \in \CC}$
where $U^\circ_\R$ denotes the interior of $U$ in the real number line $R = \struct {\R, \tau_d}$ with the usual (Euclidean) topology.
By definition of interior:
:$\forall U \in \CC: U^\circ_\R \subseteq \R$
By Union is Smallest Superset:
:$W := \bigcup \VV \subseteq \R$
By Topological Subspace of Real Number Line is Lindelöf:
:$R_W$ is Lindelöf
where $R_W$ denotes topological subspace of $R$ on $W$.
By Set is Subset of Union/Set of Sets:
:$\forall A \in \VV: A \subseteq W$
By Intersection with Subset is Subset:
:$\forall A \in \VV: A \cap W = A$
By definition of topological subspace:
:$\forall A \in \VV: A$ is open in $R_W$
By definition:
:$\VV$ is an open cover of $R_W$
By definition of Lindelöf space:
:there exists a countable subcover $\SS$ of $\VV$
By definition of $\VV$:
:$\forall A \in \VV: \exists U \in \CC: A = U^\circ_\R$
By Axiom of Choice define a mapping $g: \VV \to \CC$:
:$\forall A \in \VV: A = \paren {\map g A}^\circ_\R$
Define $K = \map {g^\to} \SS$
where $\map {g^\to} \SS$ denotes the image of $\SS$ under $g$.
Define $Y := \R \setminus \bigcup K$
By definition of cover:
:$\R \subseteq \bigcup \CC$
By definition of subset:
:$\forall x \in \R: x \in \bigcup \CC$
By definition of union:
:$\forall x \in \R: \exists U \in \CC: x \in U$
By Axiom of Choice define a mapping $f: \R \to \CC$ such that
:$\forall x \in \R: x \in \map f x$
Define $\BB := \set {\hointr x y: x, y \in \R}$
By definition of the Sorgenfrey line:
:$\BB$ is a basis of $T$.
By definition of $f$:
:$\forall x \in \R: \map f x \in \CC$
By definition of open cover:
:$\forall x \in \R: \map f x$ is open
By definition of a basis:
:$\forall x \in \R: \exists U_x \in \BB: x \in U_x \subseteq \map f x$
By definition of $\BB$:
:$\forall x \in \R: \exists y, z \in \R: x \in \hointr y z \subseteq \map f x$
By definition of half-open real interval:
:$\forall x \in \R: \exists z \in \R: x \in \hointr x z \subseteq \map f x$
By Axiom of Choice define a mapping $k: \R \to \BB$:
:$\forall x \in \R: \exists z \in \R: x \in \map k x = \hointr x z \subseteq \map f x$
We will prove that
:$(1): \quad \forall x, y \in Y: x \ne y \implies \map k x \cap \map k y = \O$
Let $x, y \in Y$ such that $x \ne y$.
{{AimForCont}}:
:$\map k x \cap \map k y \ne \O$
By definitions of empty set and intersection:
:$\exists s: s \in \map k x \land s \in \map k y$
By definition of $k$:
:$\exists z_1 \in \R: \map k x = \hointr x {z_1} \subseteq \map f x$
and
:$\exists z_2 \in \R: \map k y = \hointr y {z_2}$
By Trichotomy Law for Real Numbers:
:$x < y$ or $x > y$
{{WLOG}}, suppose $x < y$
By definition of half-open real interval:
:$x \le s < z_1$ and $y \le s < z_2$
Then:
:$y < z_1$
By definition of open real interval:
:$y \in \openint x {z_1}$
By Open Real Interval is Open Set:
:$\openint x {z_1}$ is topologically open in $R$
By definition of subset:
:$\openint x {z_1} \subseteq \hointr x {z_1}$
By Subset Relation is Transitive:
:$\openint x {z_1} \subseteq \map f x$
By Interior of Subset:
:$\openint x {z_1}^\circ_\R \subseteq \map f x^\circ_\R$
By Interior of Open Set:
:$\openint x {z_1}^\circ_\R = \openint x {z_1}$
By definition of subset:
:$y \in \map f x^\circ_\R$
By definition of $\VV$:
:$\map f x^\circ_\R \in \VV$
By definition of union:
:$y \in W$
By definition of interior:
:$\forall A \in \SS: A \subseteq \map g A$
By Set Union Preserves Subsets:
:$W \subseteq \bigcup \SS \subseteq \bigcup K$
By definition of subset:
:$y \in \bigcup K$
This contradicts $y \in Y$ by definition of difference.
Thus:
:$\map k x \cap \map k y = \O$
By Set of Pairwise Disjoint Intervals is Countable:
:$\map {k^\to} Y$ is countable
We will prove that
:$k \restriction_Y$ is an injection
Let $x, y \in Y$ such that
:$\map {k \restriction_Y} x = \map {k \restriction_Y} y$
{{AimForCont}}:
:$x \ne y$
Then by $(1)$:
:$\map k x \cap \map k y = \O$
By definition of restriction of mapping:
:$\map {k \restriction_Y} x = \map k x$
and
:$\map {k \restriction_Y} y = \map k y$
By definition of $k$:
:$x \in \map k x$
So:
:$x \in \map k y$
This contradicts:
:$\map k x \cap \map k y = \O$
Thus $x = y$.
By Injection to Image is Bijection:
:$k \restriction_Y: Y \to \map {k^\to} Y$ is a bijection
By definitions of set equivalence and cardinality:
:$\card Y = \card {\map {k^\to} Y}$
where $\card Y$ denotes the cardinality of $Y$.
By Cardinality of Image of Set not greater than Cardinality of Set:
:$\card K \le \card \SS$ and $\card {\map {f^\to} Y} \le \card Y$
By Countable iff Cardinality not greater than Aleph Zero:
:$\card \SS \le \aleph_0$ and $\card Y \le \aleph_0$
Then:
:$\card K \le \aleph_0$ and $\card {\map {f^\to} Y} \le \aleph_0$
By Countable iff Cardinality not greater than Aleph Zero:
:$K$ is countable and $\map {f^\to} Y$ is countable
Thus by Countable Union of Countable Sets is Countable:
:$\GG := K \cup \map {f^\to} Y$ is countable
By definition of image of set:
:$K \subseteq \CC$ and $\map {f^\to} Y \subseteq \CC$
thus by corollary of Set Union Preserves Subsets:
:$\GG \subseteq \CC$
It remains to prove that
:$\GG$ is cover for $\R$
Let $x \in \R$.
By Union Distributes over Union: Sets of Sets:
:$\bigcup \GG = \paren {\bigcup K} \cup \bigcup \map {f^\to} Y$
{{AimForCont}}:
:$ x \notin \bigcup \GG$
By definition of union:
:$x \notin \bigcup K$ and $x \notin \bigcup \map {f^\to} Y$
By definition of difference:
:$x \in Y$
By definition of image of set:
:$\map f x \in \map {f^\to} Y$
By definition of $f$:
:$x \in \map f x$
By definition of union:
:$x \in \bigcup \map {f^\to} Y$
This contradicts $x \notin \bigcup \map {f^\to} Y$.
Thus the result by Proof by Contradiction.
{{qed}}
\end{proof}
|
21628
|
\section{Sorgenfrey Line is Perfectly Normal}
Tags: Perfectly Normal Space, Perfectly Normal Spaces, Sorgenfrey Line
\begin{theorem}
Let $T = \struct {\R, \tau}$ be the Sorgenfrey line.
Then $T$ is perfectly normal.
\end{theorem}
\begin{proof}
From the definition of perfectly normal space, it is necessary to prove that $T$ is a $T_1$ space and that any closed set is $G_\delta$.
From $T_2$ Space is $T_1$ Space and Sorgenfrey Line is Hausdorff:
:the Sorgenfrey line is a $T_1$ space.
From Complement of $F_\sigma$ Set is $G_\delta$ Set it is sufficient to prove that an open set of $T$ is $F_\sigma$.
Let $W$ be any open set in $T$.
Let $O \subseteq W$ be the interior of $W$ with respect to the metric space:
:$\R = \struct {\R, d}$
where $d$ is the usual metric on $\R$.
From the definition of $T$, for each $x \in W \setminus O$, we can choose $h_x \in W$ such that $\hointr x {h_x} \subseteq W$.
Suppose $\hointr x {h_x} \cap \hointr y {h_y} \ne \O$ for some distinct points $x, y \in W \setminus O$.
Then either $x < y < h_x$ or $y < x < h_y$.
If $x < y < h_x$, then $y \in \openint x {h_x} \subseteq O$, contradicting $y \in W \setminus O$.
Similarly, if $y < x < h_y$, then $x \in \openint y {h_y} \subseteq O$, contradicting $x \in W \setminus O$.
Thus $\family {\hointr x {h_x} }_{x \mathop \in W \setminus O}$ is a pairwise disjoint indexed family of open sets of $T$.
From Sorgenfrey Line is Separable and Separable Space satisfies Countable Chain Condition:
:$\set {\hointr x {h_x} : x \in W \setminus O}$ is countable
and thus:
:$W \setminus O$ is countable.
From Metric Space is Perfectly T4:
:$O$ is an $F_\sigma$ set in $\R$.
Thus from Sorgenfrey Line is Expansion of Real Line:
:$O$ is an $F_\sigma$ set in the Sorgenfrey line.
Since $W \setminus O$ is a countable union of singletons and $T$ is a $T_1$ space:
:$W \setminus O$ is an $F_\sigma$ set in $T$.
Since $W = O \cup \paren {W \setminus O}$ and $F_\sigma$ sets are closed under unions:
:$W$ is an $F_\sigma$ set in $T$.
{{qed}}
Category:Sorgenfrey Line
Category:Perfectly Normal Spaces
\end{proof}
|
21629
|
\section{Sorgenfrey Line is Separable}
Tags: Sorgenfrey Line
\begin{theorem}
The Sorgenfrey line is separable.
\end{theorem}
\begin{proof}
By Rationals are Everywhere Dense in Sorgenfrey Line:
:$\Q$ is dense in the Sorgenfrey line.
By Rational Numbers are Countably Infinite:
:$\Q$ is countable.
Thus by definition:
:The Sorgenfrey line is separable.
{{qed}}
\end{proof}
|
21630
|
\section{Sorgenfrey Line is Topology}
Tags: Sorgenfrey Line
\begin{theorem}
The Sorgenfrey Line is a topological space.
\end{theorem}
\begin{proof}
We have to check that $\BB = \set {\hointr a b: a, b \in \R}$ fulfills the axioms of being a basis for a topology.
By definition of synthetic basis we only have to check that:
:$(1): \quad \bigcup \BB = \R$
:$(2): \quad \forall B_1, B_2 \in \BB: \exists V \in \BB: V \subseteq B_1 \cap B_2$
We have that:
:$\forall n \in \Z: \hointr n {n + 1} \in \BB$
:$\R = \ds \bigcup_{n \mathop \in \Z} \hointr n {n + 1} \subseteq \bigcup \BB$
Hence $\R = \bigcup \BB$ and condition $(1)$ is fulfilled.
Now take $ B_1, B_2 \in \BB$ where:
:$B_1 = \hointr {a_1} {b_1}$
:$B_2 = \hointr {a_2} {b_2}$
Let $B_3$ be constructed as:
:$B_3 := \hointr {\max \set {a_1, a_2} } {\min \set {b_1, b_2} } \in \BB$
From the method of construction, it is clear that $B_3 = B_1 \cap B_2$.
Thus taking $V = B_3$, condition $(2)$ is fulfilled.
{{qed}}
\end{proof}
|
21631
|
\section{Sorgenfrey Line is not Second-Countable}
Tags: Sorgenfrey Line
\begin{theorem}
Let $T = \struct {\mathbb R, \tau}$ be the Sorgenfrey line.
Then $T$ is not second-countable.
\end{theorem}
\begin{proof}
Suppose $\BB$ is a basis for $\tau$.
By definition of basis:
:$\forall U \in \tau: \forall x \in U: \exists B \in \BB: x \in B \subseteq U$
For all $x \in \R$, pick $U = \hointr x {x + \epsilon} \in \tau$ for some $\epsilon > 0$.
Now:
:$\forall x \in \R: \exists B_x \in \BB: x \in B_x \subseteq \hointr x {x + \epsilon}$
This $B_x$ has an infimum equal to $x$.
So for different $x$, the corresponding $B_x$ is different.
So the cardinality of $\BB$ is at least $\size \R$, which is uncountable.
{{qed}}
\end{proof}
|
21632
|
\section{Sorgenfrey Line satisfies all Separation Axioms}
Tags: Sorgenfrey Line
\begin{theorem}
Let $T = \struct {\R, \tau}$ be the Sorgenfrey line.
Then $T$ satisfies all separation axioms.
\end{theorem}
\begin{proof}
We have Sorgenfrey Line is Perfectly Normal.
The result follows from Sequence of Implications of Separation Axioms.
{{qed}}
Category:Sorgenfrey Line
\end{proof}
|
21633
|
\section{Sound Proof System is Consistent}
Tags: Formal Systems
\begin{theorem}
Let $\LL$ be a logical language.
Let $\mathscr M$ be a formal semantics for $\LL$.
Let $\mathscr P$ be a proof system for $\LL$.
Suppose that $\mathscr P$ is sound for $\mathscr M$.
Then $\mathscr P$ is consistent.
\end{theorem}
\begin{proof}
By assumption, some logical formula $\phi$ is not an $\mathscr M$-tautology.
Since $\mathscr P$ is sound for $\mathscr M$, $\phi$ is also not a $\mathscr P$-theorem.
But then by definition $\mathscr P$ is consistent.
{{qed}}
\end{proof}
|
21634
|
\section{Soundness Theorem for Hilbert Proof System}
Tags: Propositional Logic, Hilbert Proof System Instance 1
\begin{theorem}
Let $\mathscr H$ be instance 1 of a Hilbert proof system.
Let $\mathrm{BI}$ be the formal semantics of boolean interpretations.
Then $\mathscr H$ is a sound proof system for $\mathrm{BI}$:
:Every $\mathscr H$-theorem is a tautology.
\end{theorem}
\begin{proof}
Recall the axioms of $\mathscr H$:
{{begin-axiom}}
{{axiom | lc = '''Axiom $1$:'''
| m = \mathbf A \implies \paren {\mathbf B \implies \mathbf A}
}}
{{axiom | lc = '''Axiom $2$:'''
| m = \paren {\mathbf A \implies \paren {\mathbf B \implies \mathbf C} } \implies \paren {\paren {\mathbf A \implies \mathbf B} \implies \paren {\mathbf A \implies \mathbf C} }
}}
{{axiom | lc = '''Axiom $3$:'''
| m = \paren {\neg \mathbf B \implies \neg \mathbf A} \implies \paren {\mathbf A \implies \mathbf B}
}}
{{end-axiom}}
That these are tautologies is shown on, respectively:
:True Statement is implied by Every Statement
:Self-Distributive Law for Conditional
{{WIP|See Talk:Self-Distributive Law for Conditional for Lord_Farin's take on this}}
:Rule of Transposition
That Modus Ponens infers tautologies from tautologies is shown on:
:Modus Ponendo Ponens
Since:
:All axioms of $\mathscr H$ are tautologies;
:All rules of inference of $\mathscr H$ preserve tautologies
it is guaranteed that every formal proof in $\mathscr H$ results in a tautology.
That is, all $\mathscr H$-theorems are tautologies.
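That the three axioms are tautologies can also be machine-checked by enumerating all boolean interpretations; a minimal Python sketch (an illustration, not part of the proof):

```python
# Machine check (illustration, not the proof): each axiom evaluates to
# True under every boolean interpretation of A, B, C.
from itertools import product

def implies(p, q):
    """Boolean conditional: p => q."""
    return (not p) or q

for A, B, C in product([False, True], repeat=3):
    # Axiom 1: A => (B => A)
    assert implies(A, implies(B, A))
    # Axiom 2: (A => (B => C)) => ((A => B) => (A => C))
    assert implies(implies(A, implies(B, C)),
                   implies(implies(A, B), implies(A, C)))
    # Axiom 3: (not B => not A) => (A => B)
    assert implies(implies(not B, not A), implies(A, B))
```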
{{qed}}
\end{proof}
|
21635
|
\section{Soundness Theorem for Propositional Tableaus and Boolean Interpretations}
Tags: Named Theorems, Propositional Logic, Propositional Calculus, Propositional Tableaus
\begin{theorem}
Tableau proofs (in terms of propositional tableaus) are a sound proof system for boolean interpretations.
That is, for every WFF $\mathbf A$:
:$\vdash_{\mathrm{PT} } \mathbf A$ implies $\models_{\mathrm{BI} } \mathbf A$
\end{theorem}
\begin{proof}
This is a corollary of the Extended Soundness Theorem for Propositional Tableaus and Boolean Interpretations:
Let $\mathbf H$ be a countable set of propositional formulas.
Let $\mathbf A$ be a propositional formula.
If $\mathbf H \vdash \mathbf A$, then $\mathbf H \models \mathbf A$.
In this case, we have $\mathbf H = \O$.
Hence the result.
{{qed}}
\end{proof}
|
21636
|
\section{Soundness Theorem for Semantic Tableaus}
Tags: Propositional Logic, Named Theorems
\begin{theorem}
Let $\mathbf A$ be a WFF of propositional logic.
Let $T$ be a completed tableau for $\mathbf A$.
Suppose that $T$ is closed.
Then $\mathbf A$ is unsatisfiable for boolean interpretations.
\end{theorem}
\begin{proof}
We will prove inductively the following claim for every node $t$ of $T$:
:If all leaves that are descendants of $t$ are marked closed, then $\map U t$ is unsatisfiable.
By the Semantic Tableau Algorithm, we know this statement to hold for the leaf nodes themselves.
For, a leaf $t$ is marked closed {{iff}} $\map U t$ contains a complementary pair.
The assertion follows from Set of Literals Satisfiable iff No Complementary Pairs.
Inductively, suppose that all children of a node $t$ satisfy the mentioned condition.
If all descendant leaf nodes of $t$ are marked closed, this evidently holds for the children $t', t''$ of $t$ as well.
Hence by hypothesis, $\map U {t'}$ and $\map U {t''}$ are unsatisfiable.
Let $\mathbf B$ be the WFF used by the Semantic Tableau Algorithm at $t$.
Let $\mathbf B_1, \mathbf B_2$ be the formulas added to $t'$ and $t''$.
First, the case that $\mathbf B$ is an $\alpha$-formula.
Then $t' = t''$, and $\mathbf B$ is semantically equivalent to $\mathbf B_1 \land \mathbf B_2$.
It follows that if:
:$v \models_{\mathrm{BI}} \map U t$
for some boolean interpretation $v$, then also:
:$v \models_{\mathrm{BI}} \map U {t'}$
which contradicts our hypothesis.
Thus, $\map U t$ is unsatisfiable.
Next, the case that $\mathbf B$ is a $\beta$-formula.
Then $\mathbf B$ is semantically equivalent to $\mathbf B_1 \lor \mathbf B_2$.
It follows that if:
:$v \models_{\mathrm{BI}} \map U t$
for some boolean interpretation $v$, then also one of the following must hold:
:$v \models_{\mathrm{BI}} \map U {t'}$
:$v \models_{\mathrm{BI}} \map U {t''}$
which contradicts our hypothesis.
Thus, $\map U t$ is unsatisfiable.
{{handwaving|"It follows", is obvious, and is tedious to write down}}
This proves our claim:
:If all leaves that are descendants of $t$ are marked closed, then $\map U t$ is unsatisfiable.
Applying this claim to the root node of $T$, we obtain the desired result.
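As a concrete illustration of the soundness direction (not the general argument), the unsatisfiability of a WFF with a closed tableau, such as $p \land \neg p$, can be confirmed by brute force over boolean interpretations; names here are ad hoc:

```python
# Illustration only: p ∧ ¬p (which has a closed tableau) is unsatisfiable,
# as confirmed by brute force over all boolean interpretations; a
# satisfiable formula is included for contrast.  Names are ad hoc.
from itertools import product

def satisfiable(formula, variables):
    """Brute-force satisfiability over boolean interpretations."""
    return any(formula(dict(zip(variables, vals)))
               for vals in product([False, True], repeat=len(variables)))

closed_example = lambda v: v['p'] and not v['p']        # p ∧ ¬p
open_example = lambda v: (v['p'] or v['q']) and v['p']  # (p ∨ q) ∧ p

assert not satisfiable(closed_example, ['p'])
assert satisfiable(open_example, ['p', 'q'])
```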
{{qed}}
\end{proof}
|
21637
|
\section{Soundness and Completeness of Gentzen Proof System}
Tags: Propositional Logic, Named Theorems
\begin{theorem}
Let $\mathscr G$ be instance 1 of a Gentzen proof system.
Let $\mathrm{BI}$ be the formal semantics of boolean interpretations.
Then $\mathscr G$ is a sound and complete proof system for $\mathrm{BI}$.
\end{theorem}
\begin{proof}
This is an immediate consequence of:
* Provable by Gentzen Proof System iff Negation has Closed Tableau
* Soundness and Completeness of Semantic Tableaus
{{qed}}
\end{proof}
|
21638
|
\section{Soundness and Completeness of Semantic Tableaus}
Tags: Propositional Logic, Named Theorems
\begin{theorem}
Let $\mathbf A$ be a WFF of propositional logic.
Let $T$ be a completed semantic tableau for $\mathbf A$.
Then $\mathbf A$ is unsatisfiable {{iff}} $T$ is closed.
\end{theorem}
\begin{proof}
The two directions of this theorem are respectively addressed on:
:Soundness Theorem for Semantic Tableaus
:Completeness Theorem for Semantic Tableaus
{{qed}}
\end{proof}
|
21639
|
\section{Space in which All Convergent Sequences have Unique Limit not necessarily Hausdorff}
Tags: Hausdorff Spaces, Countable Complement Topology
\begin{theorem}
Let $T = \left({S, \tau}\right)$ be a topological space.
Let $T$ be such that all convergent sequences have a unique limit point.
Then it is not necessarily the case that $T$ is a Hausdorff space.
\end{theorem}
\begin{proof}
Let $T = \left({\R, \tau}\right)$ be the set of real numbers $\R$ with the countable complement topology.
From Countable Complement Space is not $T_2$, $T$ is not a Hausdorff space.
Suppose $\left\langle{x_n}\right\rangle$ is a sequence in $\R$ which converges to $x$.
Then $C = \left\{{x_n: x_n \ne x}\right\}$ is closed in $T$ because it is countable.
So $\R \setminus C$ is an open neighborhood of $x$.
This means there is some $N \in \N$ such that:
:$\forall n > N: x_n \in \R \setminus C$
That is, $x_n = x$ for all sufficiently large $n$.
This means that if $x_n \to y$ then $y = x$, proving that limits of convergent sequences in $T$ are unique.
{{qed}}
Category:Hausdorff Spaces
Category:Countable Complement Topology
\end{proof}
|
21640
|
\section{Space is Neighborhood of all its Points}
Tags: Neighborhoods
\begin{theorem}
Let $T = \struct {S, \tau}$ be a topological space.
Let $x \in S$.
Then $S$ is a neighborhood of $x$.
\end{theorem}
\begin{proof}
By the definition of the topology $\tau$, $S$ is an open set.
From Set is Open iff Neighborhood of all its Points, $S$ is a neighborhood of $x$.
{{qed}}
\end{proof}
|
21641
|
\section{Space is Separable iff Density not greater than Aleph Zero}
Tags: Separable Spaces, Denseness
\begin{theorem}
Let $T$ be a topological space.
Then:
:$T$ is separable {{iff}} $d \left({T}\right) \leq \aleph_0$
where
:$d \left({T}\right)$ denotes the density of $T$,
:$\aleph$ denotes the aleph mapping.
\end{theorem}
\begin{proof}
:$T$ is separable
{{iff}}
:there exists a countable subset of $T$ which is dense by definition of separable space
{{iff}}
:there exists a subset $A$ of $T$ such that $A$ is dense and exists an injection $A \to \N$ by definition of countable set
{{iff}}
:there exists a subset $A$ of $T$ such that $A$ is dense and $\left\vert{A}\right\vert \leq \left\vert{\N}\right\vert$ by Injection iff Cardinal Inequality
{{iff}}
:there exists a subset $A$ of $T$ such that $A$ is dense and $\left\vert{A}\right\vert \leq \aleph_0$ by Aleph Zero equals Cardinality of Naturals
{{iff}}
:$d \left({T}\right) \leq \aleph_0$ by definition of density
where $\left\vert{A}\right\vert$ denotes the cardinality of $A$.
{{qed}}
\end{proof}
|
21642
|
\section{Space of Almost-Zero Sequences is Everywhere Dense in 2-Sequence Space}
Tags: Denseness, Normed Vector Spaces
\begin{theorem}
Let $\struct {\ell^2, \norm {\, \cdot \,}_2}$ be the 2-sequence space equipped with Euclidean norm.
Let $c_{00}$ be the space of almost-zero sequences.
Then $c_{00}$ is everywhere dense in $\struct {\ell^2, \norm {\, \cdot \,}_2}$
\end{theorem}
\begin{proof}
Let $\mathbf x = \sequence {x_n}_{n \mathop \in \N} \in \ell^2$.
By definition of $\ell^2$:
:$\ds \sum_{i \mathop = 0}^\infty \size {x_i}^2 < \infty$
Let $\ds s_n := \sum_{i \mathop = 0}^n \size {x_i}^2$ be a sequence of partial sums of $\ds s = \sum_{i \mathop = 0}^\infty \size {x_i}^2$.
We have that $s$ is a convergent sequence:
:$\forall \epsilon \in \R_{>0}: \exists N \in \N: \forall n \in \N: n > N \implies \size {s_n - s} < \epsilon$
Note that:
{{begin-eqn}}
{{eqn | l = \size {s_n - s}
| r = \size {\sum_{i \mathop = 0}^n \size {x_i}^2 - \sum_{i \mathop = 0}^\infty \size {x_i}^2}
}}
{{eqn | r = \size {\sum_{i \mathop = n \mathop + 1}^\infty \size {x_i}^2}
}}
{{end-eqn}}
Then there exists $N \in \N$ such that:
:$\ds \sum_{n \mathop = N + 1}^\infty \size {x_n}^2 < \epsilon^2$
Let $\mathbf y := \tuple {x_0, \ldots, x_N, 0, \ldots}$.
By definition, $\mathbf y \in c_{00}$.
We have that:
{{begin-eqn}}
{{eqn | l = \norm {\mathbf x - \mathbf y}_2^2
| r = \sum_{i \mathop = 0}^\infty \size {x_i - y_i}^2
| c = {{Defof|Euclidean Norm}}
}}
{{eqn | r = \sum_{i \mathop = N + 1}^\infty \size {x_i}^2
}}
{{eqn | o = <
| r = \epsilon^2
}}
{{eqn | ll= \leadsto
| l = \norm {\mathbf x - \mathbf y}_2
| o = <
| r = \epsilon
}}
{{end-eqn}}
Hence by definition, $c_{00}$ is dense in $\ell^2$.
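The truncation argument can be illustrated numerically; the following Python sketch (with $x_i = 1/(i+1)$ as an example element of $\ell^2$, and a finite cutoff approximating the infinite tail) shows the truncation is $\epsilon$-close in the $2$-norm:

```python
# Numerical sketch (illustrative): take x_i = 1/(i + 1), an element of
# the 2-sequence space, approximating the infinite tail by a finite
# cutoff.  The truncation keeping x_0, ..., x_N lies in c00, and its
# 2-norm distance to x is the tail sum, made smaller than epsilon.
import math

def tail_norm(N, cutoff=10**5):
    """Approximate ||x - y||_2 when y keeps the first N + 1 terms of x."""
    return math.sqrt(sum(1 / (i + 1) ** 2 for i in range(N + 1, cutoff)))

epsilon = 0.05
N = 1000
assert tail_norm(N) < epsilon  # the truncation is epsilon-close to x
```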
{{qed}}
\end{proof}
|
21643
|
\section{Space of Almost-Zero Sequences is not Closed in 2-Sequence Space}
Tags: Closed Sets
\begin{theorem}
Let $\struct {\ell^2, \norm {\, \cdot \,}_2}$ be the normed 2-sequence vector space.
Let $\struct {c_{00}, \norm {\, \cdot \,}_2}$ be the normed vector space of almost-zero sequences.
Then $\struct {c_{00}, \norm {\, \cdot \,}_2}$ is not closed in $\struct {\ell^2, \norm {\, \cdot \,}_2}$.
\end{theorem}
\begin{proof}
Let $\sequence {x_n}_{n \mathop \in \N}$ be a sequence in $c_{00}$ such that:
:$\ds x_n := \tuple {1, \frac 1 2, \ldots, \frac 1 n, 0, \ldots}$
Let $x := \tuple {1, \frac 1 2, \ldots, \frac 1 n, \ldots}$ be the sequence whose $n$th term is $\dfrac 1 n$ for all $n \in \N_{>0}$.
We have that $x \in \ell^2 \setminus c_{00}$ where $\setminus$ denotes set difference.
Then:
{{begin-eqn}}
{{eqn | l = \norm {x_n - x}_2^2
| r = \sum_{k \mathop = n \mathop + 1}^\infty \frac 1 {k^2}
| c = {{defof|P-Norm|$p$-Norm}}
}}
{{eqn | o = <
| r = \sum_{k \mathop = n \mathop + 1}^\infty \frac 1 {k \paren {k - 1} }
}}
{{eqn | r = \sum_{k \mathop = n \mathop + 1}^\infty \paren {\frac 1 {k - 1} - \frac 1 k}
}}
{{eqn | r = \frac 1 n
| c = {{defof|Telescoping Series}}
}}
{{end-eqn}}
Taking the limit as $n \to \infty$:
:$\ds \lim_{n \mathop \to \infty} \norm {x_n - x}_2 = 0$
Hence, $\struct {c_{00}, \norm {\, \cdot \,}_2}$ does not contain all of its limit points.
By definition, it is not closed.
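The tail estimate $\ds \sum_{k \mathop = n + 1}^\infty \frac 1 {k^2} < \frac 1 n$ used above can be checked numerically (a finite cutoff stands in for the infinite sum):

```python
# Numerical check (finite cutoff standing in for the infinite sum) of
# the tail estimate: sum_{k = n+1}^infty 1/k^2 < 1/n.
def tail(n, cutoff=10**5):
    return sum(1 / k ** 2 for k in range(n + 1, cutoff))

for n in (1, 10, 100):
    assert tail(n) < 1 / n
```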
{{qed}}
\end{proof}
|
21644
|
\section{Space of Bounded Sequences with Pointwise Addition and Pointwise Scalar Multiplication on Ring of Sequences forms Vector Space}
Tags: Functional Analysis, Examples of Vector Spaces, Space of Bounded Sequences
\begin{theorem}
Let $\map {\ell^\infty} \C$ be the space of bounded sequences on $\C$.
Let $\struct {\C, +_\C, \times_\C}$ be the field of complex numbers.
Let $\paren +$ be the pointwise addition on the ring of sequences.
Let $\paren {\, \cdot \,}$ be the pointwise multiplication on the ring of sequences.
Then $\struct {\map {\ell^\infty} \C, +, \, \cdot \,}_\C$ is a vector space.
\end{theorem}
\begin{proof}
Let $\sequence {a_n}_{n \mathop \in \N}, \sequence {b_n}_{n \mathop \in \N}, \sequence {c_n}_{n \mathop \in \N} \in \map {\ell^\infty} \C$.
Let $\lambda, \mu \in \C$.
Let $\sequence 0 := \tuple {0, 0, 0, \dots}$ be the zero sequence.
Let us use complex number addition and multiplication.
Define pointwise addition as:
:$\sequence {a_n}_{n \mathop \in \N} + \sequence {b_n}_{n \mathop \in \N} := \sequence {a_n +_\C b_n}_{n \mathop \in \N}$
Define pointwise scalar multiplication as:
:$\lambda \cdot \sequence {a_n}_{n \mathop \in \N} := \sequence {\lambda \times_\C a_n}_{n \mathop \in \N}$
Let the additive inverse be $-\sequence {a_n} := \sequence {-a_n}$.
\end{proof}
|
21645
|
\section{Space of Bounded Sequences with Supremum Norm forms Banach Space}
Tags: Functional Analysis, Banach Spaces, Space of Bounded Sequences
\begin{theorem}
Let $\struct {\map {\ell^\infty} \R, \norm {\, \cdot \,}_\infty}$ be the normed vector space of bounded sequences on $\R$.
Then $\struct {\map {\ell^\infty} \R, \norm {\, \cdot \,}_\infty}$ is a Banach space.
\end{theorem}
\begin{proof}
A Banach space is a normed vector space, where a Cauchy sequence converges {{WRT}} the supplied norm.
To prove the theorem, we need to show that a Cauchy sequence in $\struct {\map {\ell^\infty} \R, \norm {\,\cdot\,}_\infty}$ converges.
We take a Cauchy sequence $\sequence {x_n}_{n \mathop \in \N}$ in $\struct {\map {\ell^\infty} \R, \norm {\,\cdot\,}_\infty}$.
Then we consider the $k$th component and show that the real Cauchy sequence $\sequence {x_n^{\paren k} }_{n \mathop \in \N}$ converges in $\struct {\R, \size {\, \cdot \,} }$ with limit $x^{\paren k}$, and denote by $\mathbf x$ the sequence of these limits.
Finally, we show that $\sequence {\mathbf x_n}_{n \mathop \in \N}$, composed of components $x_n^{\paren k}$, converges in $\struct {\map {\ell^\infty} \R, \norm {\,\cdot\,}_\infty}$ with limit $\mathbf x$.
Let $\sequence {\mathbf x_n}_{n \mathop \in \N}$ be a Cauchy sequence in $\struct {\map {\ell^\infty} \R, \norm{\, \cdot \,}_\infty}$.
Denote the $k$th component of $\mathbf x_n$ by $x_n^{\paren k}$.
\end{proof}
|
21646
|
\section{Space of Bounded Sequences with Supremum Norm forms Normed Vector Space}
Tags: Examples of Normed Vector Spaces
\begin{theorem}
The vector space of bounded sequences with the supremum norm forms a normed vector space.
\end{theorem}
\begin{proof}
We have that:
:Space of Bounded Sequences with Pointwise Addition and Pointwise Scalar Multiplication on Ring of Sequences forms Vector Space
:Supremum norm on the space of bounded sequences is a norm
By definition, $\struct {\ell^\infty, \norm {\, \cdot \,}_\infty}$ is a normed vector space.
{{qed}}
\end{proof}
|
21647
|
\section{Space of Continuous on Closed Interval Real-Valued Functions with Pointwise Addition and Pointwise Scalar Multiplication forms Vector Space}
Tags: Functional Analysis, Examples of Vector Spaces
\begin{theorem}
Let $I := \closedint a b$ be a closed real interval.
Let $\map \CC I$ be the space of real-valued functions continuous on $I$.
Let $\struct {\R, +_\R, \times_\R}$ be the field of real numbers.
Let $\paren +$ be the pointwise addition of real-valued functions.
Let $\paren {\, \cdot \,}$ be the pointwise scalar multiplication of real-valued functions.
Then $\struct {\map \CC I, +, \, \cdot \,}_\R$ is a vector space.
\end{theorem}
\begin{proof}
Let $f, g, h \in \map \CC I$ such that:
:$f, g, h : I \to \R$
Let $\lambda, \mu \in \R$.
Let $0: I \to \R$ be the zero function:
:$\forall x \in I: \map 0 x = 0$
Let us use real number addition and multiplication.
$\forall x \in I$ define pointwise addition as:
:$\map {\paren {f + g}} x := \map f x +_\R \map g x$.
Define pointwise scalar multiplication as:
:$\map {\paren {\lambda \cdot f}} x := \lambda \times_\R \map f x$
Let $\map {\paren {-f} } x := -\map f x$.
\end{proof}
|
21648
|
\section{Space of Continuous on Closed Interval Real-Valued Functions with Supremum Norm forms Banach Space}
Tags: Banach Spaces, Functional Analysis
\begin{theorem}
Let $I = \closedint a b$ be a closed real interval.
Let $\map \CC I$ be the space of real-valued functions, continuous on $I$.
Let $\norm {\,\cdot\,}_\infty$ be the supremum norm on real-valued functions, continuous on $I$.
Then $\struct {\map \CC I, \norm {\,\cdot\,}_\infty}$ is a Banach space.
\end{theorem}
\begin{proof}
A Banach space is a normed vector space, where a Cauchy sequence converges {{WRT}} the supplied norm.
To prove the theorem, we need to show that a Cauchy sequence in $\struct {\map \CC I, \norm {\,\cdot\,}_\infty}$ converges.
We take a Cauchy sequence $\sequence {x_n}_{n \mathop \in \N}$ in $\struct {\map \CC I, \norm {\,\cdot\,}_\infty}$.
Then we fix $t \in I$ and show, that a real Cauchy sequence $\sequence {\map {x_n} t}_{n \mathop \in \N}$ converges in $\struct {\R, \size {\, \cdot \,}}$ with the limit $\map x t$.
Then we prove the continuity of $\map x t$.
Finally, we show that $\sequence {x_n}_{n \mathop \in \N}$ converges in $\struct {\map \CC I, \norm {\,\cdot\,}_\infty}$ with the limit $\map x t$.
\end{proof}
|
21649
|
\section{Space of Continuous on Closed Interval Real-Valued Functions with Supremum Norm forms Normed Vector Space}
Tags: Examples of Normed Vector Spaces
\begin{theorem}
Let $I := \closedint a b$ be a closed real interval.
The space of continuous real-valued functions on $I$ with supremum norm forms a normed vector space.
\end{theorem}
\begin{proof}
We have that:
:Space of Continuous on Closed Interval Real-Valued Functions with Pointwise Addition and Pointwise Scalar Multiplication forms Vector Space
:Supremum Norm is Norm/Continuous on Closed Interval Real-Valued Function
By definition, $\struct {\map \CC I, \norm {\, \cdot \,}_\infty}$ is a normed vector space.
{{qed}}
\end{proof}
|
21650
|
\section{Space of Continuously Differentiable on Closed Interval Real-Valued Functions with C^1 Norm forms Normed Vector Space}
Tags: Examples of Normed Vector Spaces
\begin{theorem}
Space of Continuously Differentiable on Closed Interval Real-Valued Functions with $C^1$ norm forms a normed vector space.
\end{theorem}
\begin{proof}
Let $I := \closedint a b$ be a closed real interval.
We have that:
:Space of Continuously Differentiable on Closed Interval Real-Valued Functions with Pointwise Addition and Pointwise Scalar Multiplication forms Vector Space
:$\map {C^1} I$ norm on the space of continuously differentiable on closed interval real-valued functions is a norm
By definition, $\struct {\map {\CC^1} I, \norm {\, \cdot \,}_{1, \infty} }$ is a normed vector space.
{{qed}}
\end{proof}
|
21651
|
\section{Space of Continuously Differentiable on Closed Interval Real-Valued Functions with C^1 Norm is Banach Space}
Tags: Banach Spaces
\begin{theorem}
Let $I := \closedint a b$ be a closed real interval.
Let $\map \CC I$ be the space of real-valued functions continuous on $I$.
Let $\map {\CC^1} I$ be the space of real-valued functions, continuously differentiable on $I$.
Let $\norm {\, \cdot \,}_{1, \infty}$ be the $\CC^1$ norm.
Let $\struct {\map {\CC^1} I, \norm {\, \cdot \,}_{1, \infty} }$ be the normed vector space of real-valued functions continuously differentiable on $I$.
Then $\struct {\map {\CC^1} I, \norm {\, \cdot \,}_{1, \infty} }$ is a Banach space.
\end{theorem}
\begin{proof}
Let $\sequence {x_n}_{n \mathop \in \N}$ be a Cauchy sequence in $\struct {\map {\CC^1} I, \norm {\, \cdot \,}_{1, \infty} }$:
:$\forall \epsilon \in \R_{>0}: \exists N \in \N: \forall m, n \in \N: m, n \ge N: \norm {x_n - x_m}_{1, \infty} < \epsilon$
\end{proof}
|
21652
|
\section{Space of Continuously Differentiable on Closed Interval Real-Valued Functions with Pointwise Addition and Pointwise Scalar Multiplication forms Vector Space}
Tags: Functional Analysis, Examples of Vector Spaces
\begin{theorem}
Let $I := \closedint a b$ be a closed real interval.
Let $\map \CC I$ be a space of real-valued functions continuous on $I$.
Let $\map {\CC^1} I$ be a space of continuously differentiable functions on $I$.
Let $\struct {\R, +_\R, \times_\R}$ be the field of real numbers.
Let $\paren +$ be the pointwise addition of real-valued functions.
Let $\paren {\, \cdot \,}$ be the pointwise scalar multiplication of real-valued functions.
Then $\struct {\map {\CC^1} I, +, \, \cdot \,}_\R$ is a vector space.
\end{theorem}
\begin{proof}
From Space of Continuous on Closed Interval Real-Valued Functions with Pointwise Addition and Pointwise Scalar Multiplication forms Vector Space:
:$\struct {\map \CC I, +, \, \cdot \,}_\R$ is a vector space.
By Differentiable Function is Continuous:
:$\map {\CC^1} I \subset \map \CC I$
Let $f, g \in \map {\CC^1} I$.
Let $\alpha \in \R$.
Let $0: I \to \R$ be the zero function:
:$\forall x \in I: \map 0 x = 0$
Restrict $\paren +$ to $\map {\CC^1} I \times \map {\CC^1} I$.
Restrict $\paren {\, \cdot \,}$ to $\R \times \map {\CC^1} I$.
\end{proof}
|
21653
|
\section{Space of Piecewise Linear Functions on Closed Interval is Dense in Space of Continuous Functions on Closed Interval}
Tags: Functional Analysis
\begin{theorem}
Let $I = \closedint a b$.
Let $\map \CC I$ be the set of continuous functions on $I$.
Let $\map {\mathrm {PL} } I$ be the set of piecewise linear functions on $I$.
Let $d$ be the metric induced by the supremum norm.
Then $\map {\mathrm {PL} } I$ is dense in $\struct {\map \CC I, d}$.
\end{theorem}
\begin{proof}
Let $f \in \map \CC I$.
Let $\epsilon \in \R_{>0}$ be a real number.
From Open Ball Characterization of Denseness:
:it suffices to find a $p \in \map {\mathrm {PL} } I$ such that $p$ is contained in the open ball $\map {B_\epsilon} f$.
From Continuous Function on Closed Real Interval is Uniformly Continuous:
:$f$ is uniformly continuous on $I$.
That is:
:there exists a $\delta > 0$ such that for all $x, y \in I$ with $\size {x - y} < \delta$ we have $\size {\map f x - \map f y} < \epsilon/3$.
Let:
:$P = \{a_0 = a, a_1, a_2, \ldots, a_n = b\}$
be a finite subdivision of $I$, with:
:$\size {a_{i + 1} - a_i} < \delta$
for each $i$.
Let $p \in \map {\mathrm {PL} } I$ be such that:
:$\map p {a_i} = \map f {a_i}$
for each $i$, with $p$ continuous.
We can explicitly construct such a $p$ by connecting $\tuple {a_i, \map f {a_i} }$ to $\tuple {a_{i + 1}, \map f {a_{i + 1} } }$ with a straight line segment for each $i$.
Fix $x \in I$.
Note that there exists precisely one $i$ such that $a_i \le x \le a_{i + 1}$, fix this $i$.
We then have:
{{begin-eqn}}
{{eqn | l = \size {\map p x - \map f {a_i} }
| r = \size {\map p x - \map p {a_i} }
}}
{{eqn | o = \le
| r = \size {\map p {a_{i + 1} } - \map p {a_i} }
}}
{{eqn | r = \size {\map f {a_{i + 1} } - \map f {a_i} }
}}
{{eqn | o = <
| r = \epsilon/3
}}
{{end-eqn}}
since $\size {a_{i + 1} - a_i} < \delta$.
Since $\size {x - a_i} < \size {a_{i + 1} - a_i} < \delta$, we also have:
:$\size {\map f x - \map f {a_i} } < \epsilon/3$
So:
{{begin-eqn}}
{{eqn | l = \size {\map p x - \map f x}
| o = \le
| r = \size {\map p x - \map p {a_i} } + \size {\map f x - \map f {a_i} }
| c = Triangle Inequality, using $\map p {a_i} = \map f {a_i}$
}}
{{eqn | o = <
| r = 2 \epsilon/3
}}
{{end-eqn}}
Note that $x \in I$ was arbitrary, so:
{{begin-eqn}}
{{eqn | l = \map d {f, p}
| r = \norm {f - p}_\infty
| c = {{Defof|Metric Induced by Norm}}
}}
{{eqn | r = \sup_{x \mathop \in I} \size {\map f x - \map p x}
}}
{{eqn | o = \le
| r = 2 \epsilon/3
}}
{{eqn | o = <
| r = \epsilon
}}
{{end-eqn}}
so $p \in \map {B_\epsilon} f$.
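The construction of $p$ can be carried out explicitly; the following Python sketch (function, interval and mesh chosen purely for illustration) interpolates linearly at the nodes of an equal subdivision and confirms a small supremum-metric error:

```python
# Explicit construction (illustrative choices of f, interval and mesh):
# interpolate f linearly at the nodes of an equal subdivision; for a
# fine enough mesh the supremum-metric error is small.
import math

def pl_interpolant(f, a, b, n):
    """Piecewise linear p with p(a_i) = f(a_i) at n + 1 equally spaced nodes."""
    nodes = [a + (b - a) * i / n for i in range(n + 1)]
    vals = [f(t) for t in nodes]
    def p(x):
        i = min(int((x - a) / (b - a) * n), n - 1)
        t = (x - nodes[i]) / (nodes[i + 1] - nodes[i])
        return (1 - t) * vals[i] + t * vals[i + 1]
    return p

f = math.sin
p = pl_interpolant(f, 0.0, math.pi, 64)
sup_err = max(abs(f(x) - p(x))
              for x in (math.pi * k / 10000 for k in range(10001)))
assert sup_err < 1e-3  # p lies in the epsilon-ball around f for epsilon = 1e-3
```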
{{qed}}
Category:Functional Analysis
\end{proof}
|
21654
|
\section{Space of Somewhere Differentiable Continuous Functions on Closed Interval is Meager in Space of Continuous Functions on Closed Interval}
Tags: Meager Spaces, Space of Somewhere Differentiable Continuous Functions on Closed Interval is Meager in Space of Continuous Functions on Closed Interval, Function Spaces, Functional Analysis
\begin{theorem}
Let $I = \closedint a b$.
Let $\map \CC I$ be the set of continuous functions on $I$.
Let $\map \DD I$ be the set of continuous functions on $I$ that are differentiable at a point.
Let $d$ be the metric induced by the supremum norm.
Then $\map \DD I$ is meager in $\struct {\map \CC I, d}$.
\end{theorem}
\begin{proof}
Let:
:$\ds A_{n, \, m} = \set {f \in \map \CC I: \exists x \in I: \forall t \in I: 0 < \size {t - x} < \frac 1 m \implies \size {\frac {\map f t - \map f x} {t - x} } \le n}$
and:
:$\ds A = \bigcup_{\tuple {n, \, m} \mathop \in \N^2} A_{n, \, m}$
\end{proof}
|
21655
|
\section{Space of Somewhere Differentiable Continuous Functions on Closed Interval is Meager in Space of Continuous Functions on Closed Interval/Corollary}
Tags: Space of Somewhere Differentiable Continuous Functions on Closed Interval is Meager in Space of Continuous Functions on Closed Interval, Functional Analysis
\begin{theorem}
Let $I = \closedint a b$.
Let $\map \CC I$ be the set of continuous functions on $I$.
Then:
:there exists a function $f \in \map \CC I$ that is not differentiable anywhere.
\end{theorem}
\begin{proof}
Let $\map \DD I$ be the set of continuous functions on $I$ that are differentiable at a point.
Let $d$ be the metric induced by the supremum norm.
By Space of Continuous on Closed Interval Real-Valued Functions with Supremum Norm forms Banach Space:
:$\struct {\map \CC I, d}$ is a complete metric space.
By Baire Space is Non-Meager:
:$\map \CC I$ is non-meager in $\struct {\map \CC I, d}$.
By Space of Somewhere Differentiable Continuous Functions on Closed Interval is Meager in Space of Continuous Functions on Closed Interval:
:$\map \DD I$ is meager in $\struct {\map \CC I, d}$.
So:
:$\map \DD I \ne \map \CC I$.
That is, there exists a continuous function that is not differentiable anywhere.
{{qed}}
\end{proof}
|
21656
|
\section{Space of Somewhere Differentiable Continuous Functions on Closed Interval is Meager in Space of Continuous Functions on Closed Interval/Lemma 1}
Tags: Space of Somewhere Differentiable Continuous Functions on Closed Interval is Meager in Space of Continuous Functions on Closed Interval
\begin{theorem}
Let $I = \closedint a b$.
Let $\map \CC I$ be the set of continuous functions on $I$.
Let $\map \DD I$ be the set of continuous functions on $I$ that are differentiable at a point.
Let:
:$\ds A_{n, \, m} = \set {f \in \map \CC I: \text {there exists } x \in I \text { such that } \size {\frac {\map f t - \map f x} {t - x} } \le n \text { for all } t \text { with } 0 < \size {t - x} < \frac 1 m}$
and:
:$\ds A = \bigcup_{\tuple {n, \, m} \in \N^2} A_{n, \, m}$
Then:
:$\map \DD I \subseteq A$
\end{theorem}
\begin{proof}
Let $f \in \map \DD I$.
Then, $f$ is differentiable at some $x \in I$.
Let:
:$n = \floor {\size {\map {f'} x} } + 1$
where $\floor \cdot$ is the floor function.
Then:
:$\size {\map {f'} x} < n$
From the definition of the derivative, there exists $\delta > 0$ such that for all $t$ with $0 < \size {t - x} < \delta$, we have:
:$\ds \size {\frac {\map f t - \map f x} {t - x} - \map {f'} x} < 1 - \fractpart {\size {\map {f'} x} }$
From the Reverse Triangle Inequality, we then have:
:$\ds \size {\size {\frac {\map f t - \map f x} {t - x} } - \size {\map {f'} x} } < 1 - \fractpart {\size {\map {f'} x} }$
and so:
{{begin-eqn}}
{{eqn | l = \size {\frac {\map f t - \map f x} {t - x} }
| o = <
| r = \size {\map {f'} x} - \fractpart {\size {\map {f'} x} } + 1
}}
{{eqn | r = \floor {\size {\map {f'} x} } + 1
| c = {{Defof|Fractional Part}}
}}
{{eqn | r = n
}}
{{end-eqn}}
for all $t$ with $0 < \size {t - x} < \delta$.
Pick $m \in \N$ such that $\frac 1 m < \delta$.
We then have:
:$\ds \size {\frac {\map f t - \map f x} {t - x} } \le n$
for $t$ with $0 < \size {t - x} < \frac 1 m$.
That is:
:$f \in A_{n, \, m} \subseteq A$
so:
:$\map \DD I \subseteq A$
as required.
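A numerical instance of the lemma (illustrative choices of $f$, $x$ and $m$, not part of the proof): for $\map f x = x^2$ at $x = 1.5$ we have $\map {f'} x = 3$, so $n = \floor 3 + 1 = 4$ bounds the difference quotients near $x$:

```python
# Numerical instance (illustrative choices): f(x) = x^2 at x = 1.5 has
# f'(x) = 3, so n = floor(|f'(x)|) + 1 = 4 bounds the difference
# quotients |f(t) - f(x)| / |t - x| = |t + x| for t with 0 < |t - x| < 1/m.
import math

f = lambda t: t * t
x, fprime = 1.5, 3.0
n = math.floor(abs(fprime)) + 1   # n = 4
m = 2                             # 1/m = 0.5 serves as the delta here
for k in range(1, 1000):
    for t in (x - k / 2000, x + k / 2000):   # 0 < |t - x| < 1/m
        assert abs((f(t) - f(x)) / (t - x)) <= n
```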
{{qed}}
Category:Space of Somewhere Differentiable Continuous Functions on Closed Interval is Meager in Space of Continuous Functions on Closed Interval
\end{proof}
|
21657
|
\section{Space of Somewhere Differentiable Continuous Functions on Closed Interval is Meager in Space of Continuous Functions on Closed Interval/Lemma 2/Lemma 2.1}
Tags: Space of Somewhere Differentiable Continuous Functions on Closed Interval is Meager in Space of Continuous Functions on Closed Interval
\begin{theorem}
Let $I = \closedint a b$.
Let $\map \CC I$ be the set of continuous functions on $I$.
Let $d$ be the metric induced by the supremum norm.
Let:
:$\ds A_{n, m} = \set {f \in \map \CC I: \exists x \in I: \forall t \in I: 0 < \size {t - x} < \frac 1 m \implies \size {\frac {\map f t - \map f x} {t - x} } \le n}$
Then:
:for each $\tuple {n, m} \in \N^2$, $A_{n, m}$ is closed in $\struct {\map \CC I, d}$.
\end{theorem}
\begin{proof}
Fix $\tuple {n, m} \in \N^2$.
From Space of Continuous on Closed Interval Real-Valued Functions with Supremum Norm forms Banach Space:
:$\struct {\map \CC I, d}$ is complete.
Hence, from Subspace of Complete Metric Space is Closed iff Complete:
:$A_{n, m}$ is closed {{iff}} $\struct {A_{n, m}, d}$ is complete.
Let $\sequence {f_i}_{i \mathop \in \N}$ be a Cauchy sequence in $\struct {A_{n, m}, d}$.
Since $\struct {\map \CC I, d}$ is complete, $\sequence {f_i}_{i \mathop \in \N}$ converges to some $f \in \map \CC I$.
We aim to show that $f \in A_{n, m}$.
Since $f_i \in A_{n, m}$ for each $i \in \N$, there exists $x_i \in I$ such that:
:$\ds \size {\frac {\map {f_i} t - \map {f_i} {x_i} } {t - x_i} } \le n$ for each $t$ with $0 < \size {t - x_i} < \dfrac 1 m$.
Note that since $I$ is bounded, $\sequence {x_i}_{i \mathop \in \N}$ is bounded.
Therefore, by the Bolzano-Weierstrass Theorem:
:there exists a convergent subsequence of $\sequence {x_i}_{i \mathop \in \N}$, $\sequence {x_{i_k} }_{k \mathop \in \N}$.
Let:
:$\ds x = \lim_{k \mathop \to \infty} x_{i_k}$
Note that we also have:
:$\ds f = \lim_{k \mathop \to \infty} f_{i_k}$
From Subset of Metric Space contains Limits of Sequences iff Closed, since $I$ is closed, $x \in I$.
From Necessary Condition for Uniform Convergence:
:the sequence $\sequence {\map {f_{i_k} } {x_{i_k} } }_{k \mathop \in \N}$ converges to $\map f x$.
We therefore have:
{{begin-eqn}}
{{eqn | l = \size {\frac {\map f t - \map f x} {t - x} }
| r = \lim_{k \mathop \to \infty} \size {\frac {\map {f_{i_k} } t - \map {f_{i_k} } {x_{i_k} } } {t - x_{i_k} } }
}}
{{eqn | o = \le
| r = n
}}
{{end-eqn}}
for all $t$ with $0 < \size {t - x} < \dfrac 1 m$.
That is, $f \in A_{n, m}$.
{{qed}}
Category:Space of Somewhere Differentiable Continuous Functions on Closed Interval is Meager in Space of Continuous Functions on Closed Interval
\end{proof}
|
21658
|
\section{Space of Zero-Limit Sequences with Supremum Norm forms Banach Space}
Tags: Banach Spaces
\begin{theorem}
Let $c_0$ be the space of zero-limit sequences.
Let $\norm {\, \cdot \,}_\infty$ be the supremum norm.
Then $\struct {c_0, \norm {\, \cdot \,}_\infty}$ is a Banach space.
\end{theorem}
\begin{proof}
Let $\sequence {a_n}_{n \mathop \in \N}$ be a Cauchy sequence in $\struct {c_0, \norm {\, \cdot \,}_\infty}$.
Let $\struct {\ell^\infty, \norm {\, \cdot \,}_\infty}$ be the normed vector space of bounded sequences.
By Space of Zero-Limit Sequences with Supremum Norm forms Normed Vector Space, $\struct {c_0, \norm {\, \cdot \,}_\infty}$ is a subspace of $\struct {\ell^\infty, \norm {\, \cdot \,}_\infty}$.
Hence, $\sequence {a_n}_{n \mathop \in \N}$ is also a Cauchy sequence in $\struct {\ell^\infty, \norm {\, \cdot \,}_\infty}$.
By Space of Bounded Sequences with Supremum Norm forms Banach Space, $\sequence {a_n}_{n \mathop \in \N}$ converges to $a \in \ell^\infty$.
Denote $a_n = \sequence {a_n^{\paren m}}_{m \mathop \in \N}$ and $a = \sequence {a^{\paren m}}_{m \mathop \in \N}$.
By definition of convergent sequences:
:$\forall \epsilon \in \R_{> 0} : \exists N \in \N : \forall n \in \N : n > N \implies \norm {a_n - a}_\infty < \epsilon$
Then:
{{begin-eqn}}
{{eqn | q = \forall m \in \N
| l = \size {a_n^{\paren m} - a^{\paren m} }
| o = \le
| r = \sup_{m \mathop \in \N} \size {a_n^{\paren m} - a^{\paren m} }
}}
{{eqn | r = \norm {a_n - a}_\infty
| c = {{defof|Supremum Norm}}
}}
{{eqn | o = <
| r = \epsilon
}}
{{end-eqn}}
Fix $n > N$.
Since $a_n \in c_0$:
:$\forall \epsilon \in \R_{\mathop > 0} : \exists M \in \R_{\mathop > 0} : \forall m \in \N : m > M \implies \size {a_n^{\paren m}} < \epsilon$
For all $m > M$ we also have that:
{{begin-eqn}}
{{eqn | l = \size {a^{\paren m} }
| o = \le
| r = \size {a^{\paren m} - a_n^{\paren m} + a_n^{\paren m} }
}}
{{eqn | o = \le
| r = \size {a^{\paren m} - a_n^{\paren m} } + \size {a_n^{\paren m} }
| c = {{NormAxiomVector|3}}
}}
{{eqn | o = <
| r = \epsilon + \epsilon
}}
{{eqn | r = 2 \epsilon
}}
{{end-eqn}}
In other words:
:$\forall \epsilon' \in \R_{>0} : \exists M' \in \R_{>0} : \forall m \in \N : m > M' \implies \size {a^{\paren m} } < \epsilon'$
where $\epsilon' = 2\epsilon$ and $M' = M$.
By definition of zero-limit sequences, $a \in c_0$.
Therefore, in $\struct {c_0, \norm {\, \cdot \,}_\infty}$ a Cauchy sequence is also convergent in $\struct {c_0, \norm {\, \cdot \,}_\infty}$.
By definition, $\struct {c_0, \norm {\, \cdot \,}_\infty}$ is a Banach space.
{{qed}}
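The convergence argument can be illustrated numerically. The following Python sketch (illustrative only, a finite truncation, not part of the proof) models elements of $c_0$ by their first $M$ terms: the Cauchy sequence $a_n^{\paren m} = \paren {1 - 1/n} / \paren {m + 1}$ converges in the supremum norm to $a^{\paren m} = 1 / \paren {m + 1}$, at rate exactly $1/n$, and the limit again tends to $0$ in $m$.

```python
# Finite-truncation sketch (assumption: M terms stand in for a full sequence).
M = 1000  # truncation length

def sup_norm(seq):
    # supremum norm of a (truncated) sequence
    return max(abs(t) for t in seq)

a = [1 / (m + 1) for m in range(M)]          # the candidate limit, lies in c_0
for n in (10, 100, 1000):
    a_n = [(1 - 1 / n) / (m + 1) for m in range(M)]
    # sup_m |a_n^(m) - a^(m)| = (1/n) * sup_m 1/(m+1) = 1/n
    assert abs(sup_norm(x - y for x, y in zip(a_n, a)) - 1 / n) < 1e-12
```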
\end{proof}
|
21659
|
\section{Space such that Intersection of Open Sets containing Point is Singleton may not be Hausdorff}
Tags: Hausdorff Spaces
\begin{theorem}
Let $T = \struct {S, \tau}$ be a topological space.
Let $x \in S$ be arbitrary.
Let $T$ be such that the intersection of all open sets containing $x$ is $\set x$:
:$\ds \bigcap_{\substack {H \mathop \in \tau \\ x \mathop \in H} } H = \set x$
Then it is not necessarily the case that $T$ is a Hausdorff space.
\end{theorem}
\begin{proof}
Let $T$ be the finite complement topology on the real numbers $\R$, for example.
The open sets of $T$ are subsets of $\R$ of the form $U$ such that $\R \setminus U$ is finite, together with $\O$.
Let $x \in \R$ be arbitrary.
Let $K = \ds \bigcap_{\substack {H \mathop \in \tau \\ x \mathop \in H} } H$, that is, the intersection of all open sets of $T$ containing $x$.
Let $y \in \R$ such that $y \ne x$.
Note that the set $\set y$ is finite.
Thus $\R \setminus \set y$ is an open set of $T$ which contains $x$ but not $y$.
Hence $y \notin K$.
As $y$ is arbitrary, it follows that the only element of $\R$ which is in $K$ is $x$ itself.
That is:
:$\ds \bigcap_{\substack {H \mathop \in \tau \\ x \mathop \in H} } H = \set x$
But from Finite Complement Space is not Hausdorff, $T$ is not a Hausdorff space.
{{qed}}
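The counterexample can be checked on a finite window. In the Python sketch below (an illustration under the assumption that the window $\set {0, \dotsc, 19}$ stands in for $\R$, with an open set encoded by its finite complement), intersecting the traces of all cofinite sets containing $x$ leaves exactly $\set x$, while any two non-empty open sets still meet, so no pair of points can be separated.

```python
# Finite-window sketch of the finite complement topology (illustration only).
window = set(range(20))
x = 7

# Intersect (R \ {y}) over all y != x in the window: only x survives.
K = set(window)
for y in window - {x}:
    K &= window - {y}          # the trace of the open set R \ {y}
assert K == {x}

# Not Hausdorff: two cofinite open sets always intersect, since their
# complements are finite while R is infinite -- e.g. complements {1, 2}, {3, 4}.
U = window - {1, 2}
V = window - {3, 4}
assert U & V != set()
```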
\end{proof}
|
21660
|
\section{Space with Open Point is Non-Meager}
Tags: Non-Meager Spaces, Second Category Spaces
\begin{theorem}
Let $T = \struct {S, \tau}$ be a topological space.
Let $x \in S$ be an open point.
Then $T$ is a non-meager space.
\end{theorem}
\begin{proof}
Let $x \in S$ be an open point of $T$.
That is:
:$\set x \in \tau$
Recall that:
:a topological space is non-meager if it is not meager
and:
:a topological space is meager {{iff}} it is a countable union of subsets of $S$ which are nowhere dense in $S$.
{{AimForCont}} that $T$ is meager.
Let:
:$\ds S = \bigcup \SS$
where $\SS$ is a countable set of subsets of $S$ which are nowhere dense in $S$.
Then:
:$\exists H \in \SS: x \in H$
and so:
:$\set x \subseteq H$
We have that $H$ is nowhere dense in $T$.
By definition, its closure $H^-$ contains no open set of $T$ which is non-empty.
But from Set is Subset of its Topological Closure we have that:
:$H \subseteq H^-$
So by Subset Relation is Transitive:
:$\set x \subseteq H^-$
But $\set x$ is a non-empty open set of $T$.
So $H^-$ contains a non-empty open set, and $H$ is not nowhere dense.
Therefore $T$ cannot be a countable union of subsets of $S$ which are nowhere dense in $S$.
That is, $T$ is not meager.
Hence the result by definition of non-meager.
{{qed}}
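The argument can be exercised on the smallest example, the Sierpiński space, where $1$ is an open point. The Python sketch below (illustrative only; the helper names are ours) enumerates all subsets, computes closures and interiors directly from the topology, and confirms that no nowhere dense subset contains the open point, so no union of nowhere dense sets covers the space.

```python
# Finite sketch on the Sierpinski space S = {0, 1}, tau = {{}, {1}, S}.
from itertools import combinations

S = frozenset({0, 1})
tau = [frozenset(), frozenset({1}), S]
closed = [S - U for U in tau]

def closure(H):
    # smallest closed superset of H
    return frozenset.intersection(*[C for C in closed if H <= C])

def interior(H):
    # union of all open subsets of H
    return frozenset().union(*[U for U in tau if U <= H])

def nowhere_dense(H):
    return interior(closure(H)) == frozenset()

subsets = [frozenset(c) for r in range(3) for c in combinations(S, r)]
# No nowhere dense subset contains the open point 1:
assert all(1 not in H for H in subsets if nowhere_dense(H))
```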
\end{proof}
|
21661
|
\section{Spacing Limit Theorem}
Tags: Probability Theory, Named Theorems
\begin{theorem}
Let $X_{\paren i}$ be the $i$th ordered statistic of $N$ samples from a continuous random distribution with density function $\map {f_X} x$.
Then the spacing between the ordered statistics given $X_{\paren i}$ converges in distribution to exponential for sufficiently large sampling according to:
:$N \paren {X_{\paren {i + 1} } - X_{\paren i} } \xrightarrow D \map \exp {\dfrac 1 {\map {f_X} {X_{\paren i} } } }$
as $N \to \infty$ for $i = 1, 2, 3, \dotsc, N - 1$.
\end{theorem}
\begin{proof}
Given $i$ and $N$, the ordered statistic $X_{\paren i}$ has the probability density function:
:$\map {f_{X_{\paren i} } } {x \mid i, N} = \dfrac {N!} {\paren {i - 1}! \paren {N - i}!} \map {F_X} x^{i - 1} \paren {1 - \map {F_X} x}^{N - i} \map {f_X} x$
where $\map {F_X} x$ is the cumulative distribution function of $X$.
{{MissingLinks|The page Definition:Conditional Probability needs to be expanded so as to define and explain the notation $\tuple {x \mid i, N}$ in the context of a PDF}}
For $i = 1, 2, 3, \dotsc, N - 1$, let $Y_i = N \paren {X_{\paren {i + 1} } - X_{\paren i} }$ be the scaled spacing variable, which is always strictly positive.
The joint density function of both $X_{\paren i}$ and $Y_{i}$ is then:
:$\map {f_{X_{\paren i}, Y_i} } {x, y \mid i, N} = \dfrac {\paren {N - 1}!} {\paren {i - 1}! \paren {N - i - 1}!} \map {F_X} x^{i - 1} \paren {1 - \map {F_X} {x + \dfrac y N} }^{N - i - 1} \map {f_X} x \map {f_X} {x + \dfrac y N}$
The conditional density function of $Y_i$ given $X_{\paren i}$ is:
:$f_{Y_i} = \dfrac {f_{X_{\paren i}, Y_i} } {f_{X_{\paren i} } }$
which turns into:
:$\map {f_{Y_i} } {y \mid x = X_{\paren i}, i, N} = \dfrac {N - i} N \dfrac {\paren {1 - \map {F_X} {x + \dfrac y N} }^{N - i - 1} } {\paren {1 - \map {F_X} x}^{N - i} } \map {f_X} {x + \dfrac y N}$
The conditional cumulative function of $Y_i$ given $X_{\paren i}$ is:
:$\map {F_{Y_i} } {y \mid x = X_{\paren i}, i, N} = 1 - \paren {\dfrac {1 - \map {F_X} {x + \dfrac y N} } {1 - \map {F_X} x} }^{N - i}$
The following Taylor expansion in $y$ is an approximation of $\map {F_X} {x + \dfrac y N}$:
:$\map {F_X} {x + \dfrac y N} = \map {F_X} x + \map {f_X} x \dfrac y N + \map \OO {N^{-2} }$
Inserting this produces:
:$\map {F_{Y_i} } {y \mid x = X_{\paren i}, i, N} = 1 - \paren {1 - \dfrac {\map {f_X} x y} {N \paren {1 - \map {F_X} x} } + \map \OO {N^{-2} } }^{N - i}$
The limit as $N$ gets large is the exponential function:
:$\map {F_{Y_i} } {y \mid x = X_{\paren i}, i, N} = 1 - e^{- \map {f_X} x y \dfrac {1 - \dfrac i N} {1 - \map {F_X} x} } + \map \OO {N^{-1} }$
By the probability integral transform, $\map {F_X} X$ is uniformly distributed:
:$\map {F_X} X \sim \map U {0, 1}$
Hence for large $N$ the $i$th ordered statistic satisfies:
:$\map {F_X} {X_{\paren i} } \approx \dfrac i N$
The limit of $F_{Y_i}$ is then:
:$\ds \lim_{N \mathop \to \infty} \map {F_{Y_i} } {y \mid x = X_{\paren i}, i, N} = 1 - e^{- \map {f_X} x y}$
{{qed}}
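The limiting distribution can be illustrated by simulation. In the Python sketch below (a Monte Carlo illustration, not part of the proof; we assume $\map U {0, 1}$ samples, so $\map {f_X} x = 1$ and the claimed limit is $\map \exp 1$ with mean $1$), the scaled spacings $N \paren {X_{\paren {i + 1} } - X_{\paren i} }$ have sample mean close to $1$.

```python
# Monte Carlo sketch of the spacing limit for Uniform(0,1) samples.
import random

random.seed(0)                       # deterministic run
N, reps = 500, 200
total, count = 0.0, 0
for _ in range(reps):
    xs = sorted(random.random() for _ in range(N))
    for i in range(N - 1):
        total += N * (xs[i + 1] - xs[i])   # scaled spacing Y_i
        count += 1
mean = total / count
# Exp(1) has mean 1; the exact mean of N * spacing here is N / (N + 1).
assert abs(mean - 1.0) < 0.05
```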
Category:Probability Theory
Category:Named Theorems
\end{proof}
|
21662
|
\section{Special Linear Group is Subgroup of General Linear Group}
Tags: General Linear Group, Special Linear Group, Group Theory, Matrix Algebra, Group Examples
\begin{theorem}
Let $K$ be a field whose zero is $0_K$ and unity is $1_K$.
Let $\SL {n, K}$ be the special linear group of order $n$ over $K$.
Then $\SL {n, K}$ is a subgroup of the general linear group $\GL {n, K}$.
\end{theorem}
\begin{proof}
The determinant of each element of $\SL {n, K}$ is $1_K$, which is not $0_K$, so each element is invertible.
So $\SL {n, K}$ is a subset of $\GL {n, K}$.
Now we need to show that $\SL {n, K}$ is a subgroup of $\GL {n, K}$.
Let $\mathbf A$ and $\mathbf B$ be elements of $\SL {n, K}$.
As $\mathbf A$ is invertible we have that it has an inverse $\mathbf A^{-1} \in \GL {n, K}$.
From Determinant of Inverse Matrix:
:$\map \det {\mathbf A^{-1} } = \dfrac 1 {\map \det {\mathbf A} }$
and so:
:$\map \det {\mathbf A^{-1} } = 1_K$
So $\mathbf A^{-1} \in \SL {n, K}$.
Also, from Determinant of Matrix Product:
:$\map \det {\mathbf A \mathbf B} = \map \det {\mathbf A} \map \det {\mathbf B} = 1_K$
Hence the result from the Two-Step Subgroup Test.
{{qed}}
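Since the objects here are concrete matrices, the closure and inverse conditions can be spot-checked numerically. A minimal Python sketch over $2 \times 2$ integer matrices (illustrative only, not part of the proof):

```python
# Spot-check of the subgroup conditions in SL(2, Z).
def det(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(m):
    # adjugate formula; valid without division since det(m) = 1
    return [[m[1][1], -m[0][1]], [-m[1][0], m[0][0]]]

A = [[2, 1], [1, 1]]   # det = 1
B = [[1, 3], [0, 1]]   # det = 1
assert det(A) == det(B) == 1
assert det(mul(A, B)) == 1                 # closure under product
assert det(inv(A)) == 1                    # inverses stay in SL(2, Z)
assert mul(A, inv(A)) == [[1, 0], [0, 1]]  # inv really is the inverse
```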
\end{proof}
|
21663
|
\section{Special Linear Group is not Abelian}
Tags: Special Linear Group
\begin{theorem}
Let $K$ be a field whose zero is $0_K$ and unity is $1_K$.
Let $\SL {n, K}$ be the special linear group of order $n$ over $K$.
Then $\SL {n, K}$ is not an abelian group.
\end{theorem}
\begin{proof}
From Special Linear Group is Subgroup of General Linear Group we have that $\SL {n, K}$ is a group.
From Matrix Multiplication is not Commutative it follows that, for $n \ge 2$, $\SL {n, K}$ is not abelian.
{{qed}}
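A concrete witness pair (for $n \ge 2$) makes the non-commutativity explicit; the following Python sketch, illustrative only, uses the two elementary matrices of determinant $1$ in $\SL {2, \Z}$:

```python
# Two determinant-1 matrices that do not commute.
def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

E = [[1, 1], [0, 1]]   # upper elementary matrix, det = 1
F = [[1, 0], [1, 1]]   # lower elementary matrix, det = 1
assert mul(E, F) != mul(F, E)   # EF = [[2,1],[1,1]], FE = [[1,1],[1,2]]
```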
\end{proof}
|
21664
|
\section{Special Orthogonal Group is Group}
Tags: Orthogonal Groups
\begin{theorem}
Let $k$ be a field.
The $n$th special orthogonal group on $k$ is a group.
\end{theorem}
\begin{proof}
A direct corollary of Special Orthogonal Group is Subgroup of Orthogonal Group.
{{qed}}
\end{proof}
|
21665
|
\section{Special Orthogonal Group is Subgroup of Orthogonal Group}
Tags: Orthogonal Groups
\begin{theorem}
Let $k$ be a field.
Let $\map {\operatorname O} {n, k}$ be the $n$th orthogonal group on $k$.
Let $\map {\operatorname {SO} } {n, k}$ be the $n$th special orthogonal group on $k$.
Then $\map {\operatorname {SO} } {n, k}$ is a subgroup of $\map {\operatorname O} {n, k}$.
\end{theorem}
\begin{proof}
We have that Unit Matrix is Proper Orthogonal, so $\map {\operatorname {SO} } {n, k}$ is not empty.
Let $\mathbf A, \mathbf B \in \map {\operatorname {SO} } {n, k}$.
Then, by definition, $\mathbf A$ and $\mathbf B$ are proper orthogonal.
Then by Inverse of Proper Orthogonal Matrix is Proper Orthogonal:
:$\mathbf B^{-1}$ is a proper orthogonal matrix.
By Product of Proper Orthogonal Matrices is Proper Orthogonal Matrix:
:$\mathbf A \mathbf B^{-1}$ is a proper orthogonal matrix.
Thus by definition of special orthogonal group:
:$\mathbf A \mathbf B^{-1} \in \map {\operatorname {SO} } {n, k}$
Hence the result by One-Step Subgroup Test.
{{qed}}
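The one-step condition $\mathbf A \mathbf B^{-1} \in \map {\operatorname {SO} } {n, k}$ can be checked numerically for rotation matrices. A Python sketch with $2 \times 2$ rotations (a floating-point illustration under a small tolerance, not part of the proof):

```python
# Check that A B^-1 is proper orthogonal for 2x2 rotation matrices.
import math

def rot(t):
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(m):
    return [[m[j][i] for j in range(2)] for i in range(2)]

def det(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def is_proper_orthogonal(m, tol=1e-12):
    mtm = mul(transpose(m), m)
    ident = [[1.0, 0.0], [0.0, 1.0]]
    ok = all(abs(mtm[i][j] - ident[i][j]) < tol
             for i in range(2) for j in range(2))
    return ok and abs(det(m) - 1.0) < tol

A, B = rot(0.7), rot(1.9)
B_inv = transpose(B)            # inverse of an orthogonal matrix
assert is_proper_orthogonal(A) and is_proper_orthogonal(B)
assert is_proper_orthogonal(mul(A, B_inv))   # = rot(0.7 - 1.9)
```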
Category:Orthogonal Groups
\end{proof}
|
21666
|
\section{Spectrum of Bounded Linear Operator is Non-Empty}
Tags:
\begin{theorem}
Suppose $B$ is a Banach space, $\mathfrak{L}(B, B)$ is the set of bounded linear operators from $B$ to itself, and $T \in \mathfrak{L}(B, B)$. Then the spectrum of $T$ is non-empty.
\end{theorem}
\begin{proof}
Let $f : \Bbb C \to \mathfrak{L}(B,B)$ be the resolvent mapping defined as $f(z) = (T - zI)^{-1}$. Suppose the spectrum of $T$ is empty, so that $f(z)$ is well-defined for all $z\in\Bbb C$.
We first show that $\|f(z)\|_*$ is uniformly bounded by some constant $C$.
Observe that
:$ \norm{f(z)}_* = \norm{ (T-zI)^{-1} }_* = \frac{1}{|z|} \norm{ (I - T/z)^{-1} }_*. \tag{1}$
For $|z| \geq 2\|T\|_*$, Operator Norm is Norm implies that $\|T/z\|_* \leq \frac{ \|T\|_* }{ 2\|T\|_*} = 1/2$, so by $(1)$ and Invertibility of Identity Minus Operator, we get
{{begin-eqn}}
{{eqn | l = \norm{f(z)}_*
| r = \frac{1}{ \size z } \norm{ \sum_{j=0}^\infty \left(\frac{T}{z} \right)^j }_*
}}
{{eqn | o = \leq
| r = \frac{1}{ \size z } \sum_{j=0}^\infty \frac{\norm T_*^j}{\size z^j}
| c = by Triangle Inequality and Operator Norm on Banach Space is Submultiplicative on each term
}}
{{eqn | o = \leq
| r = \frac{1}{ 2\norm T_* } \sum_{j=0}^\infty \frac{\norm T_*^j}{(2\norm T_*)^j}
| c = as $\size z \geq 2\norm T_*$
}}
{{eqn | o = \leq
| r = \frac{1}{2\norm T_* } \sum_{j=0}^\infty 1/2^j
}}
{{eqn | o = <
| r = \infty.
}}
{{end-eqn}}
Therefore, the norm of $f(z)$ is bounded for $|z| \geq 2\|T\|_*$ by some constant $C_1$.
Next, consider the disk $|z| \leq 2\|T\|_*$ in the complex plane. It is compact. Since $f(z)$ is continuous on the disk by Resolvent Mapping is Continuous, and since Norm is Continuous, we get from Continuous Function on Compact Space is Bounded that $\|f\|_*$ is bounded on this disk by some constant $C_2$.
Thus, $\|f(z)\|_*$ is bounded for all $z\in\Bbb C$ by $C = \max \{C_1, C_2\}$.
Finally, pick any $x\in B$ and $\ell \in B^*$, the dual of $B$. Define the function $g : \Bbb C \to \Bbb C$ by $g(z) = \ell(f(z)x)$.
Since $f$ has empty spectrum, Resolvent Mapping is Analytic and Strongly Analytic iff Weakly Analytic together imply that $g$ is an entire function. Thus we have
{{begin-eqn}}
{{eqn | l = \size{ g(z) }
| r = \size{ \ell((T - zI)^{-1} x) }
}}
{{eqn | o = \leq
| r = \norm{\ell}_{B^*} \norm { (T - zI)^{-1} }_* \norm{x}_B
| c = since $\ell$ and $(T-zI)^{-1}$ are bounded by assumption
}}
{{eqn | o = \leq
| r = \norm{\ell}_{B^*} \norm{x}_B C
| c = by the above
}}
{{eqn | o = <
| r = \infty.
}}
{{end-eqn}}
So $g$ is a bounded entire function. It is therefore equal to some constant $K$ by Liouville's Theorem.
But the inequality above $|g(z)| \leq \norm{\ell}_{B^*} \norm { (T - zI)^{-1} }_* \norm{x}_B$, together with Resolvent Mapping Converges to 0 at Infinity, implies $|K| = \lim_{z\to\infty} |g(z)| \leq 0$. So $g$ is the constant function $0$.
We have therefore shown that $\ell(f(z)x) = 0$ for any $x\in B, \ell \in B^*$. This implies from Condition for Bounded Linear Operator to be Zero that $f(z) = 0$, and in particular that $f(0) = T^{-1} = 0$.
But this is a contradiction, since our assumption that the spectrum of $T$ is empty implies that $T$ has a two-sided bounded inverse, which cannot be the zero operator: else $I = T T^{-1} = 0$.
{{qed}}
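In finite dimensions the theorem reduces to the fundamental theorem of algebra, and that special case can be computed directly. The Python sketch below (a finite-dimensional illustration only; the helper name is ours) takes $B = \C^2$, where the spectrum of $T$ is the set of roots of $\map \det {T - z I} = z^2 - \paren {\operatorname {tr} T} z + \det T$, which is never empty over $\C$:

```python
# Finite-dimensional illustration: a 2x2 operator always has eigenvalues in C.
import cmath

def spectrum_2x2(m):
    tr = m[0][0] + m[1][1]
    d = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    disc = cmath.sqrt(tr * tr - 4 * d)       # always defined over C
    return {(tr + disc) / 2, (tr - disc) / 2}

# Rotation by 90 degrees: no real eigenvalues, yet spectrum {i, -i} in C.
T = [[0, -1], [1, 0]]
spec = spectrum_2x2(T)
assert spec == {1j, -1j}
assert len(spec) > 0   # the spectrum is non-empty
```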
\end{proof}
|
21667
|
\section{Spectrum of Ring is Nonempty}
Tags: Commutative Algebra
\begin{theorem}
Let $A$ be a non-trivial commutative ring with unity.
Then its prime spectrum is non-empty:
:$\Spec A \ne \O$
\end{theorem}
\begin{proof}
This is a reformulation of Ring with Unity has Prime Ideal.
{{qed}}
\end{proof}
|
21668
|
\section{Speed of Hour Hand}
Tags: Clocks
\begin{theorem}
Consider an analogue clock $C$.
The hour hand of $C$ rotates at $\dfrac 1 2$ of a degree of arc per minute.
\end{theorem}
\begin{proof}
It takes $12$ hours for the hour hand to go round the dial one time.
That is, in $12$ hours the hour hand travels $360 \degrees$.
So in $1$ hour, the hour hand travels $\dfrac {360} {12} \degrees$, that is, $30 \degrees$.
So in $1$ minute, the hour hand travels $\dfrac 1 {60} \times 30 \degrees$, that is, $\dfrac 1 2 \degrees$.
{{qed}}
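The two divisions above can be checked directly:

```python
# Arithmetic check of the hour hand's speed.
degrees_per_hour = 360 / 12          # full circle over 12 hours
assert degrees_per_hour == 30
degrees_per_minute = degrees_per_hour / 60
assert degrees_per_minute == 0.5     # half a degree per minute
```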
\end{proof}
|
21669
|
\section{Speed of Minute Hand}
Tags: Clocks
\begin{theorem}
Consider an analogue clock $C$.
The minute hand of $C$ rotates at $6$ degrees of arc per minute.
\end{theorem}
\begin{proof}
It takes one hour, that is $60$ minutes, for the minute hand to go round the dial one time.
That is, in $60$ minutes the minute hand travels $360 \degrees$.
So in $1$ minute, the minute hand travels $\dfrac {360} {60} \degrees$, that is, $6 \degrees$.
{{qed}}
\end{proof}
|
21670
|
\section{Sphere in Normed Division Ring is Sphere in Induced Metric}
Tags: Definitions: Sphere, Definitions: Normed Division Rings, Definitions: Open Balls
\begin{theorem}
Let $\struct{R, \norm {\,\cdot\,} }$ be a normed division ring.
Let $d$ be the metric induced by the norm $\norm {\,\cdot\,}$.
Let $a \in R$.
Let $\epsilon \in \R_{>0}$ be a strictly positive real number.
Let $\map {S_\epsilon} {a; \norm {\,\cdot\,} }$ denote the sphere in the normed division ring $\struct {R, \norm {\,\cdot\,} }$.
Let $\map {S_\epsilon} {a; d }$ denote the sphere in the metric space $\struct {R, d}$.
Then:
:$\map {S_\epsilon} {a; \norm {\,\cdot\,} } = \map {S_\epsilon} {a; d}$
\end{theorem}
\begin{proof}
{{begin-eqn}}
{{eqn | l = x
| o = \in
| r = \map {S_\epsilon} {a; \norm {\,\cdot\,} }
| c =
}}
{{eqn | ll= \leadstoandfrom
| l = \norm {x - a}
| r = \epsilon
| c = {{Defof|Sphere in Normed Division Ring}}
}}
{{eqn | ll= \leadstoandfrom
| l = \map d {x, a}
| r = \epsilon
| c = {{Defof|Metric Induced by Norm on Division Ring}}
}}
{{eqn | ll= \leadstoandfrom
| l = x
| o = \in
| r = \map {S_\epsilon} {a; d }
| c = {{Defof|Sphere}} in $\struct {R, d}$
}}
{{end-eqn}}
The result follows from Equality of Sets.
{{qed}}
\end{proof}
|
21671
|
\section{Sphere is Disjoint Union of Open Balls in P-adic Numbers}
Tags: P-adic Number Theory, Topology of P-adic Numbers
\begin{theorem}
Let $p$ be a prime number.
Let $\struct {\Q_p, \norm {\,\cdot\,}_p}$ be the $p$-adic numbers.
Let $\Z_p$ be the $p$-adic integers.
Let $a \in \Q_p$.
For all $\epsilon \in \R_{>0}$:
:let $\map {S_\epsilon} a$ denote the sphere of $a$ of radius $\epsilon$.
:let $\map {B_\epsilon} a$ denote the open ball of $a$ of radius $\epsilon$.
Then:
:$\ds \forall n \in \Z: \map {S_{p^{-n} } } a = \bigcup_{i \mathop = 1}^{p - 1} \map {B_{p^{-n} } } {a + i p^n}$
\end{theorem}
\begin{proof}
For all $\epsilon \in \R_{>0}$:
:let $\map {B^-_\epsilon} a$ denote the closed ball of $a$ of radius $\epsilon$.
Let $n \in \Z$.
Then:
{{begin-eqn}}
{{eqn | l = \map {S_{p^{-n} } } a
| r = \map {B^-_{p^{-n} } } a \setminus \map {B_{p^{-n} } } a
| c = Sphere is Set Difference of Closed and Open Ball in P-adic Numbers
}}
{{eqn | r = \paren {\bigcup_{i \mathop = 0}^{p - 1} \map {B_{p^{-n} } } {a + i p^n} } \setminus \map {B_{p^{-n} } } a
| c = Closed Ball is Disjoint Union of Open Balls in P-adic Numbers
}}
{{eqn | r = \paren {\bigcup_{i \mathop = 1}^{p - 1} \map {B_{p^{-n} } } {a + i p^n} \cup \map {B_{p^{-n} } } {a + 0 \cdot p^n} } \setminus \map {B_{p^{-n} } } a
| c = Union is Associative and Union is Commutative
}}
{{eqn | r = \paren {\bigcup_{i \mathop = 1}^{p - 1} \map {B_{p^{-n} } } {a + i p^n} \cup \map {B_{p^{-n} } } a } \setminus \map {B_{p^{-n} } } a
| c = $a + 0 \cdot p^n = a$
}}
{{eqn | r = \paren {\bigcup_{i \mathop = 1}^{p-1} \map {B_{p^{-n} } } {a + i p^n} } \setminus \map {B_{p^{-n} } } a
| c = Set Difference with Union is Set Difference
}}
{{eqn | r = \bigcup_{i \mathop = 1}^{p - 1} \paren {\map {B_{p^{-n} } } {a + i p^n} \setminus \map {B_{p^{-n} } } a }
| c = Set Difference is Right Distributive over Union
}}
{{end-eqn}}
From Closed Ball is Disjoint Union of Open Balls in P-adic Numbers:
:$\set {\map {B_{p^{-n} } } {a + i p^n}: i = 0, \dots, p - 1}$ is a set of pairwise disjoint open balls.
Continuing from above:
{{begin-eqn}}
{{eqn | l = \map {S_{p^{-n} } } a
| r = \bigcup_{i \mathop = 1}^{p - 1} \paren {\map {B_{p^{-n} } } {a + i p^n} \setminus \map {B_{p^{-n} } } a}
| c =
}}
{{eqn | r = \bigcup_{i \mathop = 1}^{p - 1} \map {B_{p^{-n} } } {a + i p^n}
| c = Set Difference with Disjoint Set
}}
{{end-eqn}}
{{qed}}
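The decomposition can be verified on integer sample points, where the $p$-adic norm is computable. The Python sketch below (a finite illustration only; the helper names are ours) takes $p = 3$, $n = 2$, $a = 5$ and checks that $\norm {x - a}_p = p^{-n}$ holds exactly when $x$ lies in one of the open balls $\map {B_{p^{-n} } } {a + i p^n}$, $i = 1, \dotsc, p - 1$:

```python
# Finite check of the sphere decomposition over the integers.
def v_p(x, p):
    # p-adic valuation of a non-zero integer; None stands for +infinity
    if x == 0:
        return None
    v = 0
    while x % p == 0:
        x //= p
        v += 1
    return v

def norm_p(x, p):
    v = v_p(x, p)
    return 0.0 if v is None else p ** (-v)

p, n, a = 3, 2, 5
for x in range(-200, 200):
    on_sphere = norm_p(x - a, p) == p ** (-n)
    in_some_ball = any(norm_p(x - (a + i * p ** n), p) < p ** (-n)
                       for i in range(1, p))
    assert on_sphere == in_some_ball
```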
\end{proof}
|
21672
|
\section{Sphere is Set Difference of Closed Ball with Open Ball}
Tags: Metric Spaces, Open Balls, Closed Balls
\begin{theorem}
Let $M = \struct{A, d}$ be a metric space or pseudometric space.
Let $a \in A$.
Let $\epsilon \in \R_{>0}$ be a strictly positive real number.
Let $\map {{B_\epsilon}^-} {a; d}$ denote the closed $\epsilon$-ball of $a$ in $M$.
Let $\map {B_\epsilon} {a; d}$ denote the open $\epsilon$-ball of $a$ in $M$.
Let $\map {S_\epsilon} {a; d}$ denote the $\epsilon$-sphere of $a$ in $M$.
Then:
:$\map {S_\epsilon} {a; d} = \map { {B_\epsilon}^-} {a; d} \setminus \map {B_\epsilon} {a; d}$
\end{theorem}
\begin{proof}
{{begin-eqn}}
{{eqn | l = \map {S_\epsilon } a
| r = \set {x : \map d {x, a} = \epsilon}
| c = {{Defof|Sphere}}
}}
{{eqn | r = \set {x : \map d {x, a} \le \epsilon} \setminus \set {x : \map d {x, a} < \epsilon }
| c =
}}
{{eqn | r = \set {x : \map d {x, a} \le \epsilon} \setminus \map {B_\epsilon} a
| c = {{Defof|Open Ball of Metric Space}}
}}
{{eqn | r = \map { {B_\epsilon}^- } a \setminus \map {B_\epsilon } a
| c = {{Defof|Closed Ball of Metric Space}}
}}
{{end-eqn}}
{{qed}}
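The set identity can be illustrated on a discrete sample of a metric space. A Python sketch over lattice points of $\Z^2$ with the Euclidean metric (illustration only; squared distances keep the comparisons exact):

```python
# Discrete sketch: sphere = closed ball minus open ball on Z^2.
pts = [(x, y) for x in range(-6, 7) for y in range(-6, 7)]

def d2(p, q):                    # squared Euclidean distance, exact integers
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

a, r2 = (0, 0), 25               # radius epsilon = 5, compared via squares
sphere = {p for p in pts if d2(p, a) == r2}
closed_ball = {p for p in pts if d2(p, a) <= r2}
open_ball = {p for p in pts if d2(p, a) < r2}
assert sphere == closed_ball - open_ball
assert (3, 4) in sphere and (5, 0) in sphere
```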
Category:Metric Spaces
Category:Closed Balls
Category:Open Balls
\end{proof}
|