\section{Substitution in Big-O Estimate/Real Analysis} Tags: Asymptotic Notation \begin{theorem} Let $f$ and $g$ be real-valued or complex-valued functions defined on a neighborhood of $+ \infty$ in $\R$. Let $f = \map \OO g$, where $\OO$ denotes big-O notation. Let $h$ be a real-valued function defined on a neighborhood of $+ \infty$ in $\R$. Let $\ds \lim_{x \mathop \to +\infty} \map h x = +\infty$. Then: :$f \circ h = \map \OO {g \circ h}$ as $x \to +\infty$. \end{theorem} \begin{proof} {{ProofWanted}} Category:Asymptotic Notation \end{proof}
\section{Substitution in Big-O Estimate/Sequences} Tags: Asymptotic Notation \begin{theorem} Let $(a_n)$ and $(b_n)$ be sequences of real or complex numbers. Let $a_n = O(b_n)$ where $O$ denotes big-O notation. Let $(n_k)$ be a diverging sequence of natural numbers. Then $a_{n_k} = O(b_{n_k})$. \end{theorem} \begin{proof} Because $a_n = O(b_n)$, there exists $M\geq0$ and $n_0 \in\N$ such that $|a_n| \leq M \cdot |b_n|$ for $n\geq n_0$. Because $n_k$ diverges, there exists $k_0\in\N$ such that $n_k\geq n_0$ for $k\geq k_0$. Then $|a_{n_k}| \leq M\cdot |b_{n_k}|$ for $k\geq k_0$. Thus $a_{n_k} = O(b_{n_k})$. {{qed}} Category:Asymptotic Notation \end{proof}
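As a quick numeric illustration (not a proof) of the argument above, the following Python sketch checks that a bound $\size {a_n} \le M \size {b_n}$ valid for $n \ge n_0$ transfers unchanged to a diverging subsequence. The sequences $a_n = 3 n + 5$, $b_n = n$, the constants $M = 4$, $n_0 = 5$ and the subsequence $n_k = k^2$ are illustrative choices.

```python
# Numeric illustration of Substitution in Big-O Estimate/Sequences:
# a bound |a_n| <= M |b_n| for n >= n0 transfers to a diverging subsequence.

def a(n):
    return 3 * n + 5        # a_n = 3n + 5

def b(n):
    return n                # b_n = n, so a_n = O(b_n) with M = 4, n0 = 5

M, n0 = 4, 5
assert all(abs(a(n)) <= M * abs(b(n)) for n in range(n0, 200))

# Diverging subsequence n_k = k^2; choose k0 with n_k >= n0 for k >= k0.
k0 = 3                      # k^2 >= 5 once k >= 3
assert all(abs(a(k * k)) <= M * abs(b(k * k)) for k in range(k0, 50))
```

The same constant $M$ works for the subsequence, exactly as in the proof: only the threshold index changes.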
\section{Substitution of Constant yields Primitive Recursive Function} Tags: Primitive Recursive Functions \begin{theorem} Let $f: \N^{k + 1} \to \N$ be a primitive recursive function. Then $g: \N^k \to \N$ given by: :$\map g {n_1, n_2, \ldots, n_k} = \map f {n_1, n_2, \ldots, n_{i - 1}, a, n_i, \ldots, n_k}$ is primitive recursive. \end{theorem} \begin{proof} Let $n = \tuple {n_1, n_2, \ldots, n_{i - 1}, n_i, \ldots, n_k}$. We see that: :$\map g {n_1, n_2, \ldots, n_k} = \map f {\map {\pr^k_1} n, \map {\pr^k_2} n, \ldots, \map {\pr^k_{i - 1} } n, \map {f_a} n, \map {\pr^k_i} n, \ldots, \map {\pr^k_k} n}$ We have that: * $\pr^k_j$ is a basic primitive recursive function for all $j$ such that $1 \le j \le k$ * $f_a$ is a primitive recursive function. So $g$ is obtained by substitution from primitive recursive functions and so is primitive recursive. {{qed}} Category:Primitive Recursive Functions \end{proof}
\section{Substitution of Elements} Tags: Set Theory \begin{theorem} Let $a$, $b$, and $x$ be sets. :$a = b \implies \left({a \in x \iff b \in x}\right)$ \end{theorem} \begin{proof} By the Axiom of Extension: : $a = b \implies \left({a \in x \implies b \in x}\right)$ Equality is Symmetric, so also by the Axiom of Extension: : $a = b \implies \left({b \in x \implies a \in x}\right)$ {{qed}} \end{proof}
\section{Substitutivity of Class Equality} Tags: Zermelo-Fraenkel Class Theory, Class Theory \begin{theorem} Let $A$ and $B$ be classes. Let $\map P A$ be a well-formed formula of the language of set theory. Let $\map P B$ be the same proposition $\map P A$ with all instances of $A$ replaced with instances of $B$. Let $=$ denote class equality. :$A = B \implies \paren {\map P A \iff \map P B}$ \end{theorem} \begin{proof} {{NotZFC}} By induction on the well-formed parts of $\map P A$. The proof shall use $\implies$ and $\neg$ as the primitive connectives. \end{proof}
\section{Substitutivity of Equality} Tags: Set Theory, Equality \begin{theorem} Let $x$ and $y$ be sets. Let $\map P x$ be a well-formed formula of the language of set theory. Let $\map P y$ be the same proposition $\map P x$ with some (not necessarily all) free instances of $x$ replaced with free instances of $y$. Let $=$ denote set equality. :$x = y \implies \paren {\map P x \iff \map P y}$ \end{theorem} \begin{proof} By induction on the well-formed parts of $\map P x$. The proof shall use $\implies$ and $\neg$ as the primitive connectives. \end{proof}
\section{Substructure of Entropic Structure is Entropic} Tags: Entropic Structures \begin{theorem} Let $\struct {S, \odot}$ be an entropic structure: :$\forall a, b, c, d \in S: \paren {a \odot b} \odot \paren {c \odot d} = \paren {a \odot c} \odot \paren {b \odot d}$ Let $\struct {T, \odot_T}$ be a substructure of $\struct {S, \odot}$. Then $\struct {T, \odot_T}$ is also an entropic structure. \end{theorem} \begin{proof} {{begin-eqn}} {{eqn | q = \forall a, b, c, d \in T | o = | r = \paren {a \odot_T b} \odot_T \paren {c \odot_T d} | c = }} {{eqn | r = \paren {a \odot b} \odot \paren {c \odot d} | c = {{Defof|Operation Induced by Restriction}} }} {{eqn | r = \paren {a \odot c} \odot \paren {b \odot d} | c = as $S$ is an entropic structure }} {{eqn | r = \paren {a \odot_T c} \odot_T \paren {b \odot_T d} | c = {{Defof|Operation Induced by Restriction}} }} {{end-eqn}} {{qed}} Category:Entropic Structures \end{proof}
\section{Subtract Half is Replicative Function} Tags: Replicative Functions \begin{theorem} Let $f: \R \to \R$ be the real function defined as: :$\forall x \in \R: \map f x = x - \dfrac 1 2$ Then $f$ is a replicative function. \end{theorem} \begin{proof} {{begin-eqn}} {{eqn | l = \sum_{k \mathop = 0}^{n - 1} \map f {x + \frac k n} | r = \sum_{k \mathop = 0}^{n - 1} \paren {x - \frac 1 2 + \frac k n} | c = }} {{eqn | r = n x - \frac n 2 + \frac 1 n \sum_{k \mathop = 0}^{n - 1} k | c = }} {{eqn | r = n x - \frac n 2 + \frac 1 n \frac {n \paren {n - 1} } 2 | c = Closed Form for Triangular Numbers }} {{eqn | r = n x - \frac n 2 + \frac n 2 - \frac 1 2 | c = }} {{eqn | r = n x - \frac 1 2 | c = }} {{eqn | r = \map f {n x} | c = }} {{end-eqn}} Hence the result by definition of replicative function. {{qed}} \end{proof}
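The algebraic verification above can be double-checked numerically with exact rational arithmetic. This Python sketch (an illustration, not part of the proof) confirms the replicative identity $\ds \sum_{k \mathop = 0}^{n - 1} \map f {x + \frac k n} = \map f {n x}$ for a sample of $n$ and $x$.

```python
from fractions import Fraction

# Exact-arithmetic check of the replicative identity for f(x) = x - 1/2.
# The sampled n and x values are illustrative choices.

def f(x):
    return x - Fraction(1, 2)

for n in range(1, 20):
    for x in (Fraction(0), Fraction(1, 3), Fraction(-7, 4), Fraction(22, 7)):
        lhs = sum(f(x + Fraction(k, n)) for k in range(n))
        assert lhs == f(n * x)
```

Using `Fraction` avoids floating-point rounding, so the equality is tested exactly rather than approximately.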
\section{Subtraction has no Identity Element} Tags: Numbers, Examples of Identity Elements, Identity Elements, Subtraction \begin{theorem} The operation of subtraction on numbers of any kind has no identity. \end{theorem} \begin{proof} {{AimForCont}} there exists an identity $e$ in one of the standard number systems $\F$. {{begin-eqn}} {{eqn | q = \forall x \in \F: | l = x | r = x - e | c = }} {{eqn | r = e - x | c = }} {{eqn | ll= \leadsto | l = x + \paren {-e} | r = e + \paren {-x} | c = }} {{eqn | ll= \leadsto | l = x + x | r = e + e | c = }} {{eqn | ll= \leadsto | l = x | r = e | c = }} {{end-eqn}} That is: :$\forall x \in \F: x = e$ But from Identity is Unique, if $e$ is an identity then there can be only one such. From Proof by Contradiction it follows that $\F$ has no such $e$. {{qed}} \end{proof}
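The following Python sketch illustrates the theorem on a finite sample: requiring $x - e = x$ for all $x$ forces $e = 0$, while requiring $e - x = x$ forces $e = 2 x$, which varies with $x$, so no single candidate survives. The sampled candidates and test range are illustrative choices.

```python
# Sketch: no candidate e is a two-sided identity for subtraction,
# i.e. no e satisfies both x - e == x and e - x == x for all sampled x.

candidates = [x / 2 for x in range(-8, 9)]   # sample of rational candidates
for e in candidates:
    ok = all(x - e == x and e - x == x for x in range(-10, 11))
    assert not ok                            # every candidate fails
```

Even $e = 0$, which satisfies $x - 0 = x$, fails the left-hand requirement $0 - x = x$ for any $x \ne 0$.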
\section{Subtraction of Divisors obeys Distributive Law} Tags: Algebra, Subtraction of Divisors obeys Distributive Law \begin{theorem} {{:Euclid:Proposition/VII/7}} In modern algebraic language: :$a = \dfrac 1 n b, c = \dfrac 1 n d \implies a - c = \dfrac 1 n \paren {b - d}$ \end{theorem} \begin{proof} Let $AB$ be that part of the (natural) number $CD$ which $AE$ subtracted is of $CF$ subtracted. We need to show that the remainder $EB$ is the same part of the remainder $FD$ that the whole $AB$ is of the whole $CD$. Whatever part $AE$ is of $CF$, let the same part $EB$ be of $CG$. Then from {{EuclidPropLink|book=VII|prop=5|title=Divisors obey Distributive Law}}, whatever part $AE$ is of $CF$, the same part also is $AB$ of $GF$. But whatever part $AE$ is of $CF$, the same part also is $AB$ of $CD$, by hypothesis. Therefore, whatever part $AB$ is of $GF$, the same part is it of $CD$ also. Therefore $GF = CD$. Let $CF$ be subtracted from each. Therefore $GC = FD$. We have that whatever part $AE$ is of $CF$, the same part also is $EB$ of $CG$. Therefore whatever part $AE$ is of $CF$, the same part also is $EB$ of $FD$. But whatever part $AE$ is of $CF$, the same part also is $AB$ of $CD$. Therefore the remainder $EB$ is the same part of the remainder $FD$ that the whole $AB$ is of the whole $CD$. {{qed}} {{Euclid Note|7|VII}} \end{proof}
\section{Subtraction of Multiples of Divisors obeys Distributive Law} Tags: Divisors, Algebra, Subtraction of Multiples of Divisors obeys Distributive Law, Subtraction \begin{theorem} {{:Euclid:Proposition/VII/8}} In modern algebraic language: :$a = \dfrac m n b, c = \dfrac m n d \implies a - c = \dfrac m n \paren {b - d}$ \end{theorem} \begin{proof} Let the (natural) number $AB$ be the same parts of the (natural) number $CD$ that $AE$ subtracted is of $CF$ subtracted. We need to show that $EB$ is also the same parts of the remainder $FD$ that the whole $AB$ is of the whole $CD$. Let $GH = AB$. Then whatever parts $GH$ is of $CD$, the same parts also is $AE$ of $CF$. Let $GH$ be divided into the parts of $CD$, namely $GK + KH$, and $AE$ into the parts of $CF$, namely $AL + LE$. Thus the multitude of $GK, KH$ will be equal to the multitude of $AL, LE$. We have that whatever part $GK$ is of $CD$, the same part also is $AL$ of $CF$. We also have that $CD > CF$. Therefore $GK > AL$. Now let $GM = AL$. Then whatever part $GK$ is of $CD$, the same part also is $GM$ of $CF$. Therefore from Subtraction of Divisors obeys Distributive Law the remainder $MK$ is the same part of the remainder $FD$ that the whole $GK$ is of the whole $CD$. Again, we have that whatever part $KH$ is of $CD$, the same part also is $EL$ of $CF$. We also have that $CD > CF$. Therefore $HK > EL$. Let $KN = EL$. Then whatever part $KH$ is of $CD$, the same part also is $KN$ of $CF$. Therefore from Subtraction of Divisors obeys Distributive Law the remainder $NH$ is the same part of the remainder $FD$ that the whole $KH$ is of the whole $CD$. But $MK$ was proved to be the same part of $FD$ that $GK$ is of $CD$. Therefore $MK + NH$ is the same parts of $FD$ that $GH$ is of $CD$. But $MK + NH = EB$ and $HG = BA$. Therefore $EB$ is the same parts of $FD$ that $AB$ is of $CD$. {{Qed}} {{Euclid Note|8|VII}} \end{proof}
\section{Subtraction of Subring is Subtraction of Ring} Tags: Ring Theory \begin{theorem} Let $\struct {R, +, \circ}$ be a ring. For each $x, y \in R$ let $x - y$ denote the subtraction of $x$ and $y$ in $R$. Let $\struct {S, + {\restriction_S}, \circ {\restriction_S}}$ be a subring of $R$. For each $x, y \in S$ let $x \sim y$ denote the subtraction of $x$ and $y$ in $S$. Then: :$\forall x, y \in S: x \sim y = x - y$ \end{theorem} \begin{proof} Let $x, y \in S$. Let $-x$ denote the ring negative of $x$ in $R$. Let $\mathbin \sim x$ denote the ring negative of $x$ in $S$. Then: {{begin-eqn}} {{eqn | l = x \sim y | r = x \mathbin {+ {\restriction_S} } \paren {\mathbin \sim y} | c = {{Defof|Ring Subtraction}} }} {{eqn | r = x + \paren {\mathbin \sim y} | c = {{Defof|Subring|Addition on Subring}} }} {{eqn | r = x + \paren {-y} | c = Negative of Subring is Negative of Ring }} {{eqn | r = x - y | c = {{Defof|Ring Subtraction}} }} {{end-eqn}} {{qed}} Category:Ring Theory \end{proof}
\section{Subtraction on Integers is Extension of Natural Numbers} Tags: Integers, Subtraction \begin{theorem} Integer subtraction is an extension of the definition of subtraction on the natural numbers. \end{theorem} \begin{proof} Let $m, n \in \N: m \le n$. From the definition of natural number subtraction, there exists $p \in \N$ such that $m + p = n$, where $n - m = p$. As $m, n, p \in \N$, it follows that $m, n, p \in \Z$ as well. However, as $\Z$ is the inverse completion of $\N$, it follows that $-m \in \Z$ as well, so it makes sense to express the following: {{begin-eqn}} {{eqn | l = \paren {n + \paren {-m} } + m | r = n + \paren {\paren {-m} + m} | c = }} {{eqn | r = n | c = }} {{eqn | r = p + m | c = }} {{eqn | r = \paren {n - m} + m | c = }} {{end-eqn}} Thus, as all elements of $\Z$ are cancellable, it follows that $n + \paren {-m} = n - m$. So: : $\forall m, n \in \N, m \le n: n + \paren {-m} = n - m = n -_\N m$ and the result follows. {{qed}} Category:Integers Category:Subtraction \end{proof}
\section{Subtraction on Numbers is Anticommutative/Integral Domains} Tags: Numbers, Subtraction on Numbers is Anticommutative, Subtraction \begin{theorem} The operation of subtraction on the numbers is anticommutative. That is: :$a - b = b - a \iff a = b$ \end{theorem} \begin{proof} Let $a, b$ be elements of one of the standard number sets: $\Z, \Q, \R, \C$. Each of those systems is an integral domain, and so is closed under the operation of subtraction. \end{proof}
\section{Subtraction on Numbers is Anticommutative/Natural Numbers} Tags: Subtraction on Numbers is Anticommutative, Natural Numbers, Subtraction \begin{theorem} The operation of subtraction on the natural numbers $\N$ is anticommutative: the equation $a - b = b - a$ holds, with both sides defined, only when $a = b$. That is: :$a - b = b - a \iff a = b$ \end{theorem} \begin{proof} $a - b$ is defined on $\N$ only if $a \ge b$. If $a > b$, then although $a - b$ is defined, $b - a$ is not. So for $a - b = b - a$ it is necessary for both to be defined. This happens only when $a = b$. Hence the result. {{qed}} Category:Subtraction on Numbers is Anticommutative \end{proof}
\section{Subtraction on Numbers is Not Associative} Tags: Numbers, Examples of Associative Operations, Associativity, Subtraction \begin{theorem} The operation of subtraction on the numbers is not associative. That is, in general: :$a - \paren {b - c} \ne \paren {a - b} - c$ \end{theorem} \begin{proof} By definition of subtraction: {{begin-eqn}} {{eqn | l = a - \paren {b - c} | r = a + \paren {-\paren {b + \paren {-c} } } | c = }} {{eqn | r = a + \paren {-b} + c | c = }} {{end-eqn}} {{begin-eqn}} {{eqn | l = \paren {a - b} - c | r = \paren {a + \paren {-b} } + \paren {-c} | c = }} {{eqn | r = a + \paren {-b} + \paren {-c} | c = }} {{end-eqn}} So we see that: :$a - \paren {b - c} = \paren {a - b} - c \iff c = 0$ and so in general: :$a - \paren {b - c} \ne \paren {a - b} - c$ {{qed}} \end{proof}
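A concrete check in Python: the two groupings differ by exactly $2 c$, so they coincide precisely when $c = 0$. The sample values are illustrative.

```python
# Sketch: a - (b - c) and (a - b) - c differ by 2c.
a, b, c = 10, 4, 3
assert a - (b - c) == 9
assert (a - b) - c == 3
assert (a - (b - c)) - ((a - b) - c) == 2 * c

# Agreement holds exactly when c == 0, over a small exhaustive sample:
assert all((a - (b - c) == (a - b) - c) == (c == 0)
           for a in range(-3, 4) for b in range(-3, 4) for c in range(-3, 4))
```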
\section{Succeed is Dual to Precede} Tags: Order Theory \begin{theorem} Let $\left({S, \preceq}\right)$ be an ordered set. Let $a, b \in S$. The following are dual statements: :$a$ succeeds $b$ :$a$ precedes $b$ \end{theorem} \begin{proof} By definition, $a$ succeeds $b$ {{iff}}: :$b \preceq a$ The dual of this statement is: :$a \preceq b$ by Dual Pairs (Order Theory). By definition, this means $a$ precedes $b$. The converse follows from Dual of Dual Statement (Order Theory). {{qed}} \end{proof}
\section{Successive Solutions of Phi of n equals Phi of n + 2} Tags: Euler Phi Function \begin{theorem} Let $\phi$ denote the Euler $\phi$ function. $7$ and $8$ are two successive integers which are solutions to the equation: :$\map \phi n = \map \phi {n + 2}$ \end{theorem} \begin{proof} From Euler Phi Function of Prime: :$\map \phi 7 = 7 - 1 = 6$ From Euler Phi Function of Prime Power: :$\map \phi 9 = \map \phi {3^2} = 2 \times 3^{2 - 1} = 6 = \map \phi 7$ From the corollary to Euler Phi Function of Prime Power: :$\map \phi 8 = \map \phi {2^3} = 2^{3 - 1} = 4$ From Euler Phi Function of Integer: :$\map \phi {10} = \map \phi {2 \times 5} = 10 \paren {1 - \dfrac 1 2} \paren {1 - \dfrac 1 5} = 10 \times \dfrac 1 2 \times \dfrac 4 5 = 4 = \map \phi 8$ {{qed}} \end{proof}
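These values can be confirmed computationally with a naive Euler $\phi$ implemented straight from the definition, counting the $k \le n$ coprime to $n$. The code below is an illustration, not part of the proof.

```python
from math import gcd

# Naive Euler phi from the definition: count k in 1..n with gcd(k, n) = 1.
def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# n = 7 and n = 8 are successive solutions of phi(n) = phi(n + 2):
assert phi(7) == phi(9) == 6
assert phi(8) == phi(10) == 4
```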
\section{Successor Mapping is Inflationary} Tags: Inflationary Mappings, Successor Mapping \begin{theorem} Let $\omega$ denote the set of natural numbers as defined by the von Neumann construction. Let $s: \omega \to \omega$ denote the successor mapping on $\omega$. Then $s$ is an inflationary mapping. \end{theorem} \begin{proof} By definition of the von Neumann construction: :$n^+ = n \cup \set n$ from which it follows that: :$n \subseteq n^+$ {{qed}} \end{proof}
\section{Successor Mapping of Peano Structure has no Fixed Point} Tags: Peano's Axioms \begin{theorem} Let $\PP = \struct {P, s, 0}$ be a Peano structure. Then: :$\forall n \in P: \map s n \ne n$ That is, the successor mapping has no fixed points. \end{theorem} \begin{proof} Let $T$ be the set: :$T = \set {n \in P: \map s n \ne n}$ We will use {{PeanoAxiom|5}} to prove that $T = P$. \end{proof}
\section{Successor Mapping on Natural Numbers has no Fixed Element} Tags: Natural Numbers \begin{theorem} Let $\N$ denote the set of natural numbers. Then: :$\forall n \in \N: n + 1 \ne n$ \end{theorem} \begin{proof} Consider the set of natural numbers as defined by the von Neumann construction. From Von Neumann Construction of Natural Numbers is Minimally Inductive, $\N$ is a minimally inductive class under the successor mapping. Let $s: \N \to \N$ denote the successor mapping: :$\forall x \in \N: \map s x := x + 1$ {{AimForCont}} $\exists n \in \N: n = n + 1$ From Fixed Point of Progressing Mapping on Minimally Inductive Class is Greatest Element, $n$ is the greatest element of $\N$. From Minimally Inductive Class with Fixed Element is Finite it follows that $\N$ is a finite set. This contradicts the fact that the natural numbers are by definition countably infinite. {{qed}} \end{proof}
\section{Successor Mapping on Natural Numbers is not Surjection} Tags: Surjections \begin{theorem} Let $f: \N \to \N$ be the successor mapping on the natural numbers $\N$: :$\forall n \in \N: \map f n = n + 1$ Then $f$ is not a surjection. \end{theorem} \begin{proof} There exists no $n \in \N$ such that $n + 1 = 0$. Thus $\map f 0$ has no preimage. The result follows by definition of surjection. {{qed}} \end{proof}
\section{Successor Set of Ordinal is Ordinal} Tags: Ordinals \begin{theorem} Let $S$ be an ordinal. Then its successor set $S^+ = S \cup \set S$ is also an ordinal. \end{theorem} \begin{proof} Since $S$ is transitive, it follows by Successor Set of Transitive Set is Transitive that $S^+$ is transitive. We now have to show that $S^+$ is strictly well-ordered by the epsilon restriction $\Epsilon \! \restriction_{S^+}$. So suppose that a subset $A \subseteq S^+$ is non-empty. Then: {{begin-eqn}} {{eqn | l = A | r = A \cap \paren {S \cup \set S} | c = Intersection with Subset is Subset and {{Defof|Successor Set}} }} {{eqn | r = \paren {A \cap S} \cup \paren {A \cap \set S} | c = Intersection Distributes over Union }} {{end-eqn}} Let us refer to the above equation by the symbol $\paren 1$. We need to show that $A$ has a smallest element. We first consider the case where $A \cap S$ is empty. By equation $\paren 1$, it follows that $A \cap \set S$ is non-empty (because $A$ is non-empty). Therefore, $S \in A$. That is, $\set S \subseteq A$. By Union with Empty Set and Intersection with Subset is Subset, equation $\paren 1$ implies that $A \subseteq \set S$. Therefore, $A = \set S$ by the definition of set equality. So $S$ is the smallest element of $A$. We now consider the case where $A \cap S$ is non-empty. By Intersection is Subset, $A \cap S \subseteq S$; by the definition of a well-ordered set, there exists a smallest element $x$ of $A \cap S$. Let $y \in A$. If $y \in S$, then $y \in A \cap S$; therefore, by the definition of the smallest element, either $x \in y$ or $x = y$. Otherwise, $y = S$, and so $x \in S = y$. That is, $x$ is the smallest element of $A$. {{qed}} \end{proof}
\section{Successor Set of Transitive Set is Transitive} Tags: Set Theory \begin{theorem} Let $S$ be a transitive set. Then its successor set $S\,^+ = S \cup \left\{{S}\right\}$ is also transitive. \end{theorem} \begin{proof} Suppose that $x \in S\,^+$. Then either $x \in S$ or $x = S$. If $x \in S$, it follows by the transitivity of $S$ that $x \subseteq S$. If $x = S$, then $x = S \subseteq S$ because a set is a subset of itself. Since $S \subseteq S\,^+$, it follows by the transitivity of the subset relation that $x \subseteq S\,^+$. {{qed}} Category:Set Theory \end{proof}
\section{Successor Sets of Linearly Ordered Set Induced by Convex Component Partition} Tags: Linearly Ordered Spaces \begin{theorem} Let $T = \struct {S, \preceq, \tau}$ be a linearly ordered space. Let $A$ and $B$ be separated sets of $T$. Let $A^*$ and $B^*$ be defined as: :$A^* := \ds \bigcup \set {\closedint a b: a, b \in A, \closedint a b \cap B^- = \O}$ :$B^* := \ds \bigcup \set {\closedint a b: a, b \in B, \closedint a b \cap A^- = \O}$ where $A^-$ and $B^-$ denote the closure of $A$ and $B$ in $T$. Let $A^*$, $B^*$ and $\relcomp S {A^* \cup B^*}$ be expressed as the union of convex components of $S$: :$\ds A^* = \bigcup A_\alpha, \quad B^* = \bigcup B_\beta, \quad \relcomp S {A^* \cup B^*} = \bigcup C_\gamma$ where $\relcomp S X$ denotes the complement of $X$ with respect to $S$. Let $M$ be the linearly ordered set: :$M = \set {A_\alpha, B_\beta, C_\gamma}$ as defined in Partition of Linearly Ordered Space by Convex Components is Linearly Ordered Set. Then each of the sets $A_\alpha \in M$ has an immediate successor in $M$ if $A_\alpha$ intersects the closure of $S_\alpha$, the set of strict upper bounds for $A_\alpha$. Similarly for $B_\beta$. That immediate successor ${C_\alpha}^+$ to $A_\alpha$ is an element in $\set {C_\gamma}$. \end{theorem} \begin{proof} Let $A_\alpha \cap {S_\alpha}^- \ne \O$. Then $A_\alpha \cap {S_\alpha}^-$ contains exactly $1$ point, say $p$. This belongs to the complement in $S$ of the closed set $\paren {B^*}^-$. Hence there exists a neighborhood $\openint x y$ of $p$ which is disjoint from $\paren {B^*}^-$. Then: :$\openint x y \cap S_\alpha \ne \O$ and so: :$\openint p y \ne \O$ But $\openint p y$ is disjoint from both $A^*$ and $B^*$. Thus there must exist some $C_\gamma$ which contains $\openint p y$. {{qed}} \end{proof}
\section{Successor in Limit Ordinal} Tags: Ordinals \begin{theorem} Let $x$ be a limit ordinal. Let $y \in x$. Then $y^+ \in x$ where $y^+$ denotes the successor set of $y$: :$\forall y \in x: y^+ \in x$ \end{theorem} \begin{proof} Because $x$ is a limit ordinal: :$x \ne y^+$ Moreover, by Successor of Element of Ordinal is Subset: :$y \in x \implies y^+ \subseteq x$ Therefore by Transitive Set is Proper Subset of Ordinal iff Element of Ordinal: :$y^+ \subset x$ and $y^+ \in x$ {{qed}} Category:Ordinals \end{proof}
\section{Successor is Less than Successor} Tags: Ordinals \begin{theorem} Let $x$ and $y$ be ordinals and let $x^+$ denote the successor set of $x$. Then, $x \in y \iff x^+ \in y^+$. \end{theorem} \begin{proof} {{begin-eqn}} {{eqn | l = x \in y | o = \implies | r = x^+ \in y^+ | c = Subset is Compatible with Ordinal Successor }} {{eqn | l = x \in y | o = \impliedby | r = x^+ \in y^+ | c = Sufficient Condition }} {{eqn | ll= \leadsto | l = x \in y | o = \iff | r = x^+ \in y^+ | c = }} {{end-eqn}} {{qed}} Category:Ordinals \end{proof}
\section{Successor is Less than Successor/Sufficient Condition/Proof 1} Tags: Ordinals \begin{theorem} Let $x$ and $y$ be ordinals and let $x^+$ denote the successor set of $x$. Let $x^+ \in y^+$. Then: : $x \in y$ \end{theorem} \begin{proof} Suppose $x^+ \in y^+$. By the definition of successor, $x^+ \in y \lor x^+ = y$. Suppose $x^+ = y$. By Ordinal is Less than Successor, $x \in x^+$, so $x \in y$. Suppose $x^+ \in y$. By Ordinal is Less than Successor, $x \in x^+$. By Ordinal is Transitive, $x \in y$. {{qed}} Category:Ordinals \end{proof}
\section{Successor is Less than Successor/Sufficient Condition/Proof 2} Tags: Ordinals \begin{theorem} Let $x$ and $y$ be ordinals and let $x^+$ denote the successor set of $x$. Let $x^+ \in y^+$. Then: : $x \in y$ \end{theorem} \begin{proof} First note that by Successor Set of Ordinal is Ordinal, $x^+$ and $y^+$ are ordinals. Let $x^+ \in y^+$. Then since $y^+$ is transitive, $x^+ \subseteq y^+$. Since $x \in x^+$, it follows that $x \in y^+$, that is, $x \in y$ or $x = y$. If $x = y$ then $x^+ \in x^+$, contradicting Ordinal is not Element of Itself. Thus $x \in y$. {{qed}} Category:Ordinals \end{proof}
\section{Successor of Element of Ordinal is Subset} Tags: Ordinals \begin{theorem} Let $x$ and $y$ be ordinals. Then: :$x \in y \iff x^+ \subseteq y$ \end{theorem} \begin{proof} {{begin-eqn}} {{eqn | l = x | o = \in | r = y | c = }} {{eqn | ll= \leadstoandfrom | l = x^+ | o = \in | r = y^+ | c = Successor is Less than Successor }} {{eqn | ll= \leadstoandfrom | l = x^+ | o = \in | r = y | c = {{Defof|Successor Set}} }} {{eqn | lo= \lor | l = x^+ | r = y | c = }} {{eqn | ll= \leadstoandfrom | l = x^+ | o = \subsetneq | r = y | c = Transitive Set is Proper Subset of Ordinal iff Element of Ordinal }} {{eqn | lo = \lor | l = x^+ | r = y | c = }} {{eqn | ll= \leadstoandfrom | l = x^+ | o = \subseteq | r = y | c = }} {{end-eqn}} {{qed}} Category:Ordinals \end{proof}
\section{Successor to Natural Number} Tags: Natural Numbers: 1-Based \begin{theorem} Let $\N_{> 0}$ be the 1-based natural numbers: :$\N_{> 0} = \left\{{1, 2, 3, \ldots}\right\}$ Let $<$ be the ordering on $\N_{> 0}$: :$\forall a, b \in \N_{>0}: a < b \iff \exists c \in \N_{>0}: a + c = b$ Let $a \in \N_{>0}$. Then there exists no natural number $n$ such that $a < n < a + 1$. \end{theorem} \begin{proof} Using the following axioms: {{:Axiom:Axiomatization of 1-Based Natural Numbers}} Suppose that $\exists n \in \N_{>0}: a < n < a + 1$. Then by the definition of ordering on natural numbers: {{begin-eqn}} {{eqn | l = a + x | r = n | c = Definition of Ordering on Natural Numbers: $a < n$ }} {{eqn | l = n + y | r = a + 1 | c = Definition of Ordering on Natural Numbers: $a < n$ }} {{eqn | ll= \implies | l = \left({a + x}\right) + y | r = a + 1 | c = substitution for $n$ }} {{eqn | ll= \implies | l = a + \left({x + y}\right) | r = a + 1 | c = Natural Number Addition is Associative }} {{eqn | ll= \implies | l = x + y | r = 1 | c = Addition on $1$-Based Natural Numbers is Cancellable }} {{end-eqn}} By Axiom $D$, either: :$y = 1$ or: :$y = t + 1$ for some $t \in \N_{>0}$ Then either: :$x + 1 = 1$ when $y = 1$ or: :$x + \left({t + 1}\right) = \left({x + t}\right) + 1 = 1$ when $y = t + 1$ Both of these conclusions violate Natural Number is Not Equal to Successor. Hence the result. {{qed}} \end{proof}
\section{Sufficient Condition for 5 to divide n^2+1} Tags: Modulo Arithmetic \begin{theorem} Let: {{begin-eqn}} {{eqn | l = 5 | o = \nmid | r = n - 1 }} {{eqn | l = 5 | o = \nmid | r = n }} {{eqn | l = 5 | o = \nmid | r = n + 1 }} {{end-eqn}} where $\nmid$ denotes non-divisibility. Then: :$5 \divides n^2 + 1$ where $\divides$ denotes divisibility. \end{theorem} \begin{proof} We have that: {{begin-eqn}} {{eqn | l = 5 | o = \nmid | r = n - 1 }} {{eqn | ll= \leadsto | l = n - 1 | o = \not \equiv | r = 0 | rr= \pmod 5 }} {{eqn | ll= \leadsto | l = n | o = \not \equiv | r = 1 | rr= \pmod 5 }} {{end-eqn}} {{begin-eqn}} {{eqn | l = 5 | o = \nmid | r = n }} {{eqn | ll= \leadsto | l = n | o = \not \equiv | r = 0 | rr= \pmod 5 }} {{end-eqn}} {{begin-eqn}} {{eqn | l = 5 | o = \nmid | r = n + 1 }} {{eqn | ll= \leadsto | l = n + 1 | o = \not \equiv | r = 0 | rr= \pmod 5 }} {{eqn | ll= \leadsto | l = n | o = \not \equiv | r = 4 | rr= \pmod 5 }} {{end-eqn}} So either: :$n \equiv 2 \pmod 5$ or: :$n \equiv 3 \pmod 5$ and so: {{begin-eqn}} {{eqn | l = n | o = \equiv | r = 2 | rr= \pmod 5 }} {{eqn | ll= \leadsto | l = n^2 | o = \equiv | r = 4 | rr= \pmod 5 }} {{eqn | ll= \leadsto | l = n^2 + 1 | o = \equiv | r = 0 | rr= \pmod 5 }} {{eqn | ll= \leadsto | l = 5 | o = \divides | r = n^2 + 1 }} {{end-eqn}} or: {{begin-eqn}} {{eqn | l = n | o = \equiv | r = 3 | rr= \pmod 5 }} {{eqn | ll= \leadsto | l = n^2 | o = \equiv | r = 3^2 | rr= \pmod 5 }} {{eqn | o = \equiv | r = 4 | rr= \pmod 5 }} {{eqn | ll= \leadsto | l = n^2 + 1 | o = \equiv | r = 0 | rr= \pmod 5 }} {{eqn | ll= \leadsto | l = 5 | o = \divides | r = n^2 + 1 }} {{end-eqn}} {{qed}} \end{proof}
\section{Sufficient Condition for Quotient Group by Intersection to be Abelian} Tags: Quotient Groups \begin{theorem} Let $G$ be a group. Let $N$ and $K$ be normal subgroups of $G$. Let the quotient groups $G / N$ and $G / K$ be abelian. Then the quotient group $G / \paren {N \cap K}$ is also abelian. \end{theorem} \begin{proof} From Intersection of Normal Subgroups is Normal, we have that $N \cap K$ is normal in $G$. We are given that $G / N$ and $G / K$ are abelian. Hence: {{begin-eqn}} {{eqn | q = \forall x, y \in G | l = \sqbrk {x, y} | o = \in | r = N | c = Quotient Group is Abelian iff All Commutators in Divisor }} {{eqn | lo= \land | l = \sqbrk {x, y} | o = \in | r = K | c = }} {{eqn | ll= \leadsto | q = \forall x, y \in G | l = \sqbrk {x, y} | o = \in | r = N \cap K | c = {{Defof|Set Intersection}} }} {{eqn | ll= \leadsto | l = G / \paren {N \cap K} | o = | r = \text {is abelian} | c = Quotient Group is Abelian iff All Commutators in Divisor }} {{end-eqn}} {{qed}} \end{proof}
\section{Sufficient Condition for Square of Product to be Triangular} Tags: Triangular Numbers, Square Numbers \begin{theorem} Let $n \in \Z_{>0}$ be a (strictly) positive integer. Let $2 n^2 \pm 1 = m^2$ be a square number. Then $\paren {m n}^2$ is a triangular number. \end{theorem} \begin{proof} {{begin-eqn}} {{eqn | l = \paren {m n}^2 | r = \paren {2 n^2 \pm 1} \times n^2 | c = }} {{eqn | r = \dfrac {\paren {2 n^2 \pm 1} \paren {2 n^2} } 2 | c = }} {{end-eqn}} That is, either: :$\paren {m n}^2 = \dfrac {\paren {2 n^2 - 1} \paren {2 n^2} } 2$ and so: :$\paren {m n}^2 = T_{2 n^2 - 1}$ or: :$\paren {m n}^2 = \dfrac {\paren {2 n^2} \paren {2 n^2 + 1} } 2$ and so: :$\paren {m n}^2 = T_{2 n^2}$ Hence the result. {{Qed}} \end{proof}
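The following Python sketch searches for $n$ with $2 n^2 \pm 1$ a perfect square and confirms that $\paren {m n}^2$ is the corresponding triangular number; the search bound is an illustrative choice.

```python
from math import isqrt

# Sketch: when 2n^2 ± 1 = m^2 is a perfect square, (mn)^2 equals the
# triangular number T_{2n^2 - 1} (for -1) or T_{2n^2} (for +1).
# The search bound 2000 is an arbitrary illustrative choice.

def T(k):
    return k * (k + 1) // 2          # k-th triangular number

hits = []
for n in range(1, 2000):
    for sign in (-1, 1):
        s = 2 * n * n + sign
        m = isqrt(s)
        if m * m == s:               # 2n^2 + sign is a perfect square
            idx = 2 * n * n - 1 if sign == -1 else 2 * n * n
            assert (m * n) ** 2 == T(idx)
            hits.append((n, m, sign))

assert (2, 3, 1) in hits             # n = 2: 2*4 + 1 = 9 = 3^2, 36 = T_8
```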
\section{Sufficient Condition for Twice Differentiable Functional to have Minimum} Tags: Calculus of Variations, Definitions: Calculus of Variations \begin{theorem} Let $J$ be a twice differentiable functional. Let $J$ have an extremum for $y = \hat y$. Let the second variation $\delta^2 J \sqbrk {\hat y; h}$ be strongly positive {{WRT}} $h$. Then $J$ has a minimum for $y = \hat y$. \end{theorem} \begin{proof} By assumption, $J$ has an extremum for $y = \hat y$: :$\delta J \sqbrk {\hat y; h} = 0$ The increment is expressible then as: :$\Delta J \sqbrk {\hat y; h} = \delta^2 J \sqbrk {\hat y; h} + \epsilon \size h^2$ where $\epsilon \to 0$ as $\size h \to 0$. By assumption, the second variation is strongly positive: :$\delta^2 J \sqbrk {\hat y; h} \ge k \size h^2, \quad k \in \R_{>0}$ Hence: :$\Delta J \sqbrk {\hat y; h} \ge \paren {k + \epsilon} \size h^2$ What remains to be shown is that there exists a set of $h$ for which $\epsilon$ is small enough that the {{RHS}} is always positive. Since $\epsilon \to 0$ as $\size h \to 0$, there exists $c \in \R_{>0}$ such that: :$\size h < c \implies \size \epsilon < \dfrac 1 2 k$ Choose $h$ such that this inequality holds. Then {{begin-eqn}} {{eqn | l = \frac 1 2 k | o = > | r = \epsilon > -\frac 1 2 k | c = }} {{eqn | l = \frac 3 2 k | o = > | r = k + \epsilon > \frac 1 2 k | c = adding $k$ throughout }} {{eqn | l = \frac 3 2 k \size h^2 | o = > | r = \paren {k + \epsilon} \size h^2 > \frac 1 2 k \size h^2 | c = multiplying by $\size h^2 > 0$ }} {{end-eqn}} Therefore: :$\Delta J \sqbrk {\hat y; h} \ge \paren {k + \epsilon} \size h^2 > \dfrac 1 2 k \size h^2$ For $k \in \R_{>0}$ and $\size h \ne 0$, the {{RHS}} is always positive. 
Thus, there exists a neighbourhood around $y = \hat y$ where the increment is always positive: :$\exists c \in \R_{>0}: \size h < c \implies \Delta J \sqbrk {\hat y; h} > 0$ and $J$ has a minimum for $y = \hat y$. {{qed}} \end{proof}
\section{Sufficient Conditions for Uncountability} Tags: Uncountable Sets, Infinite Sets, Set Theory \begin{theorem} Let $X$ be a set. The following are equivalent: :$(1): \quad X$ contains an uncountable subset :$(2): \quad X$ is uncountable :$(3): \quad $ Every sequence of distinct points $\sequence {x_n}_{n \mathop \in \N}$ in $X$ omits at least one $x \in X$ :$(4): \quad $ There is no surjection $\N \twoheadrightarrow X$ :$(5): \quad X$ is infinite and there is no bijection $X \leftrightarrow \N$ Assuming the Continuum Hypothesis holds, we also have the equivalent uncountability condition: :$(6): \quad $There exist extended real numbers $a < b$ and a surjection $X \to \closedint a b$ \end{theorem} \begin{proof} Recall that $X$ is uncountable if there is no injection $X \hookrightarrow \N$. \end{proof}
22109
\section{Sufficient Conditions for Weak Extremum} Tags: Calculus of Variations \begin{theorem} Let $J$ be a functional such that: :$\ds J \sqbrk y = \int_a^b \map F {x, y, y'} \rd x$ :$\map y a = A$ :$\map y b = B$ Let $y = \map y x$ be an extremum. Let the strengthened Legendre's Condition hold. Let the strengthened Jacobi's Necessary Condition hold. {{explain|specific links to those strengthened versions}} Then the functional $J$ has a weak minimum for $y = \map y x$. \end{theorem} \begin{proof} By the continuity of the function $\map P x$ and the solution of Jacobi's equation: :$\exists \epsilon > 0: \paren {\forall x \in \closedint a {b + \epsilon}: \map P x > 0} \land \paren {\tilde a \notin \closedint a {b + \epsilon} }$ Consider the quadratic functional: :$\ds \int_a^b \paren {P h'^2 + Q h^2} \rd x - \alpha^2 \int_a^b h'^2 \rd x$ together with Euler's equation: :$-\dfrac \rd {\rd x} \paren {\paren {P - \alpha^2} h'} + Q h = 0$ This Euler equation is continuous {{WRT}} $\alpha$. Thus its solution is also continuous {{WRT}} $\alpha$. {{ProofWanted|solution to continuous differential equation is continuous}} Since: :$\forall x \in \closedint a {b + \epsilon}: \map P x > 0$ $\map P x$ has a positive lower bound in $\closedint a {b + \epsilon}$. Consider the solution with $\map h a = 0$, $\map {h'} a = 1$. Then :$\exists \alpha \in \R: \forall x \in \closedint a b: \map P x - \alpha^2 > 0$ Also: :$\forall x \in \hointl a b: \map h x \ne 0$ {{Stub|seems to be Jacobi's condition where $P \to P - \alpha^2$}} By Necessary and Sufficient Condition for Quadratic Functional to be Positive Definite: :$\ds \int_a^b \paren {\paren {P - \alpha^2} h'^2 + Q h^2} \rd x > 0$ In other words, if $c = \alpha^2$, then: :$(1): \quad \exists c > 0: \ds \int_a^b \paren {P h'^2 + Q h^2} \rd x > c \int_a^b h'^2 \rd x$ Let $y = \map y x$ be an extremal. Let $y = \map y x + \map h x$ be a curve sufficiently close to $y = \map y x$. 
By expansion of $\Delta J \sqbrk {y; h}$ from lemma $1$ of Legendre's Condition: :$\ds J \sqbrk {y + h} - J \sqbrk y = \int_a^b \paren {P h'^2 + Q h^2} \rd x + \int_a^b \paren {\xi h'^2 + \eta h^2} \rd x$ where: :$\ds \forall x \in \closedint a b: \lim_{\size h_1 \mathop \to 0} \set {\xi,\eta} = \set {0, 0}$ where $\size h_1$ denotes the norm of $h$ in the space of continuously differentiable functions, and the limit is uniform. {{Stub|why?}} By the Schwarz inequality: {{begin-eqn}} {{eqn | l = \map {h^2} x | r = \paren {\int_a^x \map {h'} u \rd u}^2 | c = as $\map h a = 0$ }} {{eqn | o = \le | r = \int_a^x 1^2 \rd u \int_a^x \map {h'^2} u \rd u }} {{eqn | r = \paren {x - a} \int_a^x \map {h'^2} u \rd u }} {{eqn | o = \le | r = \paren {x - a} \int_a^b \map {h'^2} u \rd u | c = $h'^2 \ge 0$ }} {{end-eqn}} Notice that the integral on the right does not depend on $x$. Integrate the inequality {{WRT|Integration}} $x$: {{begin-eqn}} {{eqn | l = \int_a^b \map {h^2} x \rd x | o = \le | r = \int_a^b \paren {x - a} \rd x \int_a^b \map {h'^2} u \rd u }} {{eqn | r = \frac {\paren {b - a}^2} 2 \int_a^b \map {h'^2} u \rd u }} {{end-eqn}} Let $\epsilon \in \R_{>0}$ be a constant such that: :$\size \xi \le \epsilon$, $\size \eta \le \epsilon$ Then: {{begin-eqn}} {{eqn | l = \size {\int_a^b \paren {\xi h'^2 + \eta h^2} \rd x} | o = \le | r = \int_a^b \size \xi h'^2 \rd x + \int_a^b \size \eta h^2 \rd x | c = Absolute Value of Definite Integral, Absolute Value of Product }} {{eqn | o = \le | r = \epsilon \int_a^b h'^2 \rd x + \epsilon \frac {\paren {b - a}^2} 2 \int_a^b h'^2 \rd x }} {{eqn | n = 2 | r = \epsilon \paren {1 + \frac {\paren {b - a}^2} 2} \int_a^b h'^2 \rd x }} {{end-eqn}} Thus, by $(1)$: :$\ds \int_a^b \paren {P h'^2 + Q h^2} \rd x > 0$ while by $(2)$: :$\ds \int_a^b \paren {\xi h'^2 + \eta h^2} \rd x$ can be made arbitrarily small. 
Thus, for all sufficiently small $\size h_1$, which implies sufficiently small $\size \xi$ and $\size \eta$ and, consequently, sufficiently small $\epsilon$: :$J \sqbrk {y + h} - J \sqbrk y = \int_a^b \paren {P h'^2 + Q h^2} \rd x + \int_a^b \paren {\xi h'^2 + \eta h^2} \rd x > 0$ Therefore, in some small neighbourhood of $y = \map y x$ the functional $J$ has a weak minimum. {{qed}} \end{proof}
22110
\section{Sufficient Conditions for Weak Stationarity of Order 2} Tags: Stationary Stochastic Processes \begin{theorem} Let $S$ be a stochastic process giving rise to a time series $T$. Let the mean of $S$ be fixed. Let the autocovariance matrix of $S$ be of the form: :$\boldsymbol \Gamma_n = \begin {pmatrix} \gamma_0 & \gamma_1 & \gamma_2 & \cdots & \gamma_{n - 1} \\ \gamma_1 & \gamma_0 & \gamma_1 & \cdots & \gamma_{n - 2} \\ \gamma_2 & \gamma_1 & \gamma_0 & \cdots & \gamma_{n - 3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \gamma_{n - 1} & \gamma_{n - 2} & \gamma_{n - 3} & \cdots & \gamma_0 \end {pmatrix} = \sigma_z^2 \mathbf P_n = \begin {pmatrix} 1 & \rho_1 & \rho_2 & \cdots & \rho_{n - 1} \\ \rho_1 & 1 & \rho_1 & \cdots & \rho_{n - 2} \\ \rho_2 & \rho_1 & 1 & \cdots & \rho_{n - 3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \rho_{n - 1} & \rho_{n - 2} & \rho_{n - 3} & \cdots & 1 \end {pmatrix}$ Then $S$ is weakly stationary of order $2$. \end{theorem} \begin{proof} Follows from the definition of weak stationarity. \end{proof}
22111
\section{Sum Less Maximum is Minimum} Tags: Algebra \begin{theorem} For all numbers $a, b$, where $a$ and $b$ are both in one of $\N$, $\Z$, $\Q$ or $\R$: :$a + b - \max \set {a, b} = \min \set {a, b}$ \end{theorem} \begin{proof} From Sum of Maximum and Minimum we have that $a + b = \max \set {a, b} + \min \set {a, b}$. Thus $a + b - \max \set {a, b} = \min \set {a, b}$ follows by subtracting $\max \set {a, b}$ from both sides. It is clear that this result applies when $a, b$ are in $\Z$, $\Q$ or $\R$, as subtraction is well-defined throughout those number sets. However, it applies in $\N$ as well, despite the fact that $n - m$ is defined on $\N$ only when $n \ge m$. This is because $a + b \ge \max \set {a, b}$, which follows immediately from $a + b = \max \set {a, b} + \min \set {a, b}$, so the subtraction is indeed defined. {{qed}} Category:Algebra \end{proof}
22112
\section{Sum Less Minimum is Maximum} Tags: Algebra \begin{theorem} For all numbers $a, b$, where $a$ and $b$ are both in one of $\N$, $\Z$, $\Q$ or $\R$: :$a + b - \min \set {a, b} = \max \set {a, b}$ \end{theorem} \begin{proof} From Sum of Maximum and Minimum we have that $a + b = \max \set {a, b} + \min \set {a, b}$. Thus $a + b - \min \set {a, b} = \max \set {a, b}$ follows by subtracting $\min \set {a, b}$ from both sides. It is clear that this result applies when $a, b$ are in $\Z$, $\Q$ or $\R$, as subtraction is well-defined throughout those number sets. However, it applies in $\N$ as well, despite the fact that $n - m$ is defined on $\N$ only when $n \ge m$. This is because $a + b \ge \min \set {a, b}$, which follows immediately from $a + b = \max \set {a, b} + \min \set {a, b}$, so the subtraction is indeed defined. {{qed}} Category:Algebra \end{proof}
22113
\section{Sum Over Divisors Equals Sum Over Quotients} Tags: Number Theory, Divisors \begin{theorem} Let $n$ be a positive integer. Let $f: \Z_{>0} \to \Z_{>0}$ be a mapping on the positive integers. Let $\ds \sum_{d \mathop \divides n} \map f d$ be the sum of $\map f d$ over the divisors of $n$. Then: :$\ds \sum_{d \mathop \divides n} \map f d = \sum_{d \mathop \divides n} \map f {\frac n d}$. \end{theorem} \begin{proof} If $d$ is a divisor of $n$ then $d \times \dfrac n d = n$ and so $\dfrac n d$ is also a divisor of $n$. Therefore if $d_1, d_2, \ldots, d_r$ are all the divisors of $n$, then so are $\dfrac n {d_1}, \dfrac n {d_2}, \ldots, \dfrac n {d_r}$, except in a different order. Hence: {{begin-eqn}} {{eqn | l = \sum_{d \mathop \divides n} \map f {\frac n d} | r = \map f {\frac n {d_1} } + \map f {\frac n {d_2} } + \cdots + \map f {\frac n {d_r} } | c = }} {{eqn | r = \map f {d_1} + \map f {d_2} + \cdots + \map f {d_r} | c = }} {{eqn | r = \sum_{d \mathop \divides n} \map f d | c = }} {{end-eqn}} {{qed}} Category:Number Theory Category:Divisors \end{proof}
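The identity lends itself to a direct numerical check. The following Python sketch (an illustration, not part of the proof; the function `f` here is an arbitrary illustrative choice) verifies it for small $n$:

```python
def divisors(n):
    """Return the positive divisors of n in increasing order."""
    return [d for d in range(1, n + 1) if n % d == 0]

def f(d):
    # an arbitrary illustrative function on the positive integers
    return d * d + 1

# The map d -> n/d permutes the divisors of n, so both sums agree.
for n in range(1, 101):
    assert sum(f(d) for d in divisors(n)) == sum(f(n // d) for d in divisors(n))
```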
22114
\section{Sum Over Divisors of Multiplicative Function} Tags: Multiplicative Functions, Number Theory \begin{theorem} Let $f: \Z_{>0} \to \Z_{>0}$ be a multiplicative function. Let $n \in \Z_{>0}$. Let $\ds \sum_{d \mathop \divides n} \map f d$ be the sum over the divisors of $n$. Then $\ds \map F n = \sum_{d \mathop \divides n} \map f d$ is also a multiplicative function. \end{theorem} \begin{proof} Let $\ds \map F n = \sum_{d \mathop \divides n} \map f d$. Let $m, n \in \Z_{>0}: m \perp n$. Then by definition: :$\ds \map F {m n} = \sum_{d \mathop \divides m n} \map f d$ The divisors of $m n$ are of the form $d = r s$ where $r$ and $s$ are divisors of $m$ and $n$ respectively, from Divisors of Product of Coprime Integers. It is noted that $r \perp s$, otherwise any common divisor of $r$ and $s$ would be a common divisor of $m$ and $n$. Therefore: :$\ds \map F {m n} = \sum_{r \mathop \divides m, \ s \mathop \divides n} \map f {r s}$ So, as $f$ is multiplicative: :$\ds \map F {m n} = \sum_{r \mathop \divides m, \ s \mathop \divides n} \map f r \map f s$ But at the same time: :$\ds \map F m \map F n = \paren {\sum_{r \mathop \divides m} \map f r} \paren {\sum_{s \mathop \divides n} \map f s}$ Multiplying out the product on the {{RHS}}, $\map F {m n}$ and $\map F m \map F n$ are seen to be the same. {{qed}} \end{proof}
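As a numerical illustration (a sketch, not part of the proof): taking $f$ to be the identity map, which is multiplicative, $F$ becomes the sum-of-divisors function $\sigma$, and its multiplicativity on coprime arguments can be checked directly:

```python
from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def f(d):
    # the identity map d -> d is multiplicative, so F = sigma, the divisor sum
    return d

def F(n):
    return sum(f(d) for d in divisors(n))

# F(mn) = F(m) F(n) whenever m and n are coprime
for m in range(1, 30):
    for n in range(1, 30):
        if gcd(m, n) == 1:
            assert F(m * n) == F(m) * F(n)
```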
22115
\section{Sum Over Divisors of von Mangoldt is Logarithm} Tags: Von Mangoldt Function, Analytic Number Theory \begin{theorem} Let $\Lambda$ be von Mangoldt's function. Then for $n \ge 1$: :$\ds \sum_{d \mathop \divides n} \map \Lambda d = \ln n$ \end{theorem} \begin{proof} Let $n \ge 1$, and by the Fundamental Theorem of Arithmetic write $n = p_1^{e_1} \cdots p_k^{e_k}$ with $p_1, \ldots, p_k$ distinct primes and $e_1, \ldots, e_k > 0$. Now $d \divides n$ {{iff}} $d = p_1^{f_1} \cdots p_k^{f_k}$ with $0 \le f_i \le e_i$ for $i = 1, \ldots, k$. By the definition of $\Lambda$, for such $d$ we have $\map \Lambda d \ne 0$ {{iff}} there is exactly one $i \in \set {1, \ldots, k}$ such that $f_i > 0$. In this case $d = p_i^{f_i}$ with $1 \le f_i \le e_i$, and: :$\map \Lambda d = \ln p_i$ For each $i$ there are exactly $e_i$ such divisors, namely $p_i^1, p_i^2, \ldots, p_i^{e_i}$. Therefore: :$\ds \sum_{d \mathop \divides n} \map \Lambda d = \sum_{i \mathop = 1}^k e_i \ln p_i$ Also, we have: {{begin-eqn}} {{eqn | l = \ln n | r = \map \ln {p_1^{e_1} \cdots p_k^{e_k} } | c = }} {{eqn | r = \sum_{i \mathop = 1}^k e_i \ln p_i | c = Sum of Logarithms }} {{end-eqn}} Thus we indeed have: :$\ds \sum_{d \mathop \divides n} \map \Lambda d = \ln n$ {{qed}} Category:Analytic Number Theory Category:Von Mangoldt Function \end{proof}
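The identity can also be checked numerically. The following Python sketch (an illustration, not part of the proof) computes $\Lambda$ naively and compares the divisor sum with $\ln n$:

```python
from math import log, isclose

def mangoldt(n):
    """Lambda(n) = log p if n is a prime power p^k (k >= 1), else 0."""
    if n < 2:
        return 0.0
    for p in range(2, n + 1):
        if n % p == 0:
            # p is the smallest prime factor; check n is a pure power of p
            while n % p == 0:
                n //= p
            return log(p) if n == 1 else 0.0
    return 0.0

def divisor_sum(n):
    return sum(mangoldt(d) for d in range(1, n + 1) if n % d == 0)

for n in range(1, 200):
    assert isclose(divisor_sum(n), log(n), abs_tol=1e-9)
```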
22116
\section{Sum Rule for Complex Derivatives} Tags: Complex Differential Calculus \begin{theorem} Let $\map f z, \map j z, \map k z$ be single-valued continuous complex functions in a domain $D \subseteq \C$, where $D$ is open. Let $f$, $j$, and $k$ be complex-differentiable at all points in $D$. Let $\map f z = \map j z + \map k z$. Then: :$\forall z \in D: \map {f'} z = \map {j'} z + \map {k'} z$ \end{theorem} \begin{proof} Let $z_0 \in D$ be a point in $D$. {{begin-eqn}} {{eqn | l = \map {f'} {z_0} | r = \lim_{h \mathop \to 0} \frac {\map f {z_0 + h} - \map f {z_0} } h | c = {{Defof|Derivative of Complex Function}} }} {{eqn | r = \lim_{h \mathop \to 0} \frac {\paren {\map j {z_0 + h} + \map k {z_0 + h} } - \paren {\map j {z_0} +\map k {z_0} } } h | c = }} {{eqn | r = \lim_{h \mathop \to 0} \frac {\map j {z_0 + h} + \map k {z_0 + h} - \map j {z_0} - \map k {z_0} } h | c = }} {{eqn | r = \lim_{h \mathop \to 0} \frac {\paren {\map j {z_0 + h} - \map j {z_0} } + \paren {\map k {z_0 + h} - \map k {z_0} } } h | c = }} {{eqn | r = \lim_{h \mathop \to 0} \paren {\frac {\map j {z_0 + h} - \map j {z_0} } h + \frac {\map k {z_0 + h} - \map k {z_0} } h} | c = Complex Multiplication Distributes over Addition }} {{eqn | r = \lim_{h \mathop \to 0} \frac {\map j {z_0 + h} - \map j {z_0} } h + \lim_{h \mathop \to 0} \frac {\map k {z_0 + h} - \map k {z_0} } h | c = Sum Rule for Limits of Complex Functions }} {{eqn | r = \map {j'} {z_0} + \map {k'} {z_0} | c = {{Defof|Derivative of Complex Function}} }} {{eqn | ll= \leadsto | q = \forall z \in D | l = \map {f'} z | r = \map {j'} z + \map {k'} z | c = {{Defof|Derivative of Complex Function}} }} {{end-eqn}} {{qed}} Category:Complex Differential Calculus \end{proof}
22117
\section{Sum Rule for Counting} Tags: combinatorics, Counting Arguments, counting arguments, Combinatorics \begin{theorem} Let there be: :$r_1$ different objects in the set $S_1$ :$r_2$ different objects in the set $S_2$ :$\ldots$ :$r_m$ different objects in the set $S_m$. Let $\ds \bigcap_{i \mathop = 1}^m S_i = \O$. Then the number of ways to select an object from one of the $m$ sets is $\ds \sum_{i \mathop = 1}^m r_i$. \end{theorem} \begin{proof} A direct application of Cardinality of Set Union. {{qed}} \end{proof}
22118
\section{Sum Rule for Derivatives} Tags: Differential Calculus, Calculus, Sum Rule for Derivatives \begin{theorem} Let $\map f x, \map j x, \map k x$ be real functions defined on the open interval $I$. Let $\xi \in I$ be a point in $I$ at which both $j$ and $k$ are differentiable. Let $\map f x = \map j x + \map k x$. Then $f$ is differentiable at $\xi$ and: :$\map {f'} \xi = \map {j'} \xi + \map {k'} \xi$ It follows from the definition of derivative that if $j$ and $k$ are both differentiable on the interval $I$, then: :$\forall x \in I: \map {f'} x = \map {j'} x + \map {k'} x$ \end{theorem} \begin{proof} {{begin-eqn}} {{eqn | l = \map {f'} \xi | r = \lim_{h \mathop \to 0} \frac {\map f {\xi + h} - \map f \xi} h | c = by the definition of the derivative }} {{eqn | r = \lim_{h \mathop \to 0} \frac {\paren {\map j {\xi + h} + \map k {\xi + h} } - \paren {\map j \xi + \map k \xi} } h | c = }} {{eqn | r = \lim_{h \mathop \to 0} \frac {\paren {\map j {\xi + h} - \map j \xi} + \paren {\map k {\xi + h} - \map k \xi} } h | c = }} {{eqn | r = \lim_{h \mathop \to 0} \paren {\frac {\map j {\xi + h} - \map j \xi} h + \frac {\map k {\xi + h} - \map k \xi} h} | c = }} {{eqn | r = \lim_{h \mathop \to 0} \frac {\map j {\xi + h} - \map j \xi} h + \lim_{h \mathop \to 0} \frac {\map k {\xi + h} - \map k \xi} h | c = Sum Rule for Limits of Real Functions }} {{eqn | r = \map {j'} \xi + \map {k'} \xi | c = by the definition of the derivative }} {{end-eqn}} {{qed}} Alternatively, it can be observed that this is an example of a Linear Combination of Derivatives with $\lambda = \mu = 1$. \end{proof}
22119
\section{Sum Rule for Derivatives/General Result} Tags: Differential Calculus, Sum Rule for Derivatives \begin{theorem} Let $\map {f_1} x, \map {f_2} x, \ldots, \map {f_n} x$ be real functions all differentiable. Then for all $n \in \N_{>0}$: :$\ds \map {D_x} {\sum_{i \mathop = 1}^n \map {f_i} x} = \sum_{i \mathop = 1}^n \map {D_x} {\map {f_i} x}$ \end{theorem} \begin{proof} The proof proceeds by induction. For all $n \in \N_{> 0}$, let $\map P n$ be the proposition: :$\ds \map {D_x} {\sum_{i \mathop = 1}^n \map {f_i} x} = \sum_{i \mathop = 1}^n \map {D_x} {\map {f_i} x}$ $\map P 1$ is true, as this just says: :$\map {D_x} {\map {f_1} x} = \map {D_x} {\map {f_1} x}$ which is trivially true. Now suppose $\map P n$ is true. Then: {{begin-eqn}} {{eqn | l = \map {D_x} {\sum_{i \mathop = 1}^{n + 1} \map {f_i} x} | r = \map {D_x} {\sum_{i \mathop = 1}^n \map {f_i} x + \map {f_{n + 1} } x} | c = }} {{eqn | r = \map {D_x} {\sum_{i \mathop = 1}^n \map {f_i} x} + \map {D_x} {\map {f_{n + 1} } x} | c = Sum Rule for Derivatives }} {{eqn | r = \sum_{i \mathop = 1}^{n + 1} \map {D_x} {\map {f_i} x} | c = by the induction hypothesis }} {{end-eqn}} Thus $\map P n \implies \map P {n + 1}$ and the result follows by the Principle of Mathematical Induction. {{qed}} \end{proof}
22120
\section{Sum and Product of Discrete Random Variables} Tags: Probability Theory \begin{theorem} Let $X$ and $Y$ be discrete random variables on the probability space $\struct {\Omega, \Sigma, \Pr}$. Then $U = X + Y$ and $V = X Y$ are also discrete random variables on $\struct {\Omega, \Sigma, \Pr}$. \end{theorem} \begin{proof} To show that $U$ and $V$ are discrete random variables on $\struct {\Omega, \Sigma, \Pr}$, we need to show that: :$(1): \quad$ The images of $U$ and $V$ are countable subsets of $\R$; :$(2): \quad \forall x \in \R: \set {\omega \in \Omega: \map U \omega = x} \in \Sigma$ and $\set {\omega \in \Omega: \map V \omega = x} \in \Sigma$. \end{proof}
22121
\section{Sum from -m to m of 1 minus Cosine of n + alpha of theta over n + alpha} Tags: Cosine Function \begin{theorem} For $0 < \theta < 2 \pi$: :$\ds \sum_{n \mathop = -m}^m \dfrac {1 - \cos \paren {n + \alpha} \theta} {n + \alpha} = \int_0^\theta \map \sin {\alpha u} \dfrac {\sin \paren {m + \frac 1 2} u \rd u} {\sin \frac 1 2 u}$ \end{theorem} \begin{proof} We have: {{begin-eqn}} {{eqn | l = \sum_{n \mathop = -m}^m e^{i \paren {n + \alpha} \theta} | r = \sum_{n \mathop = -m}^m e^{i n \theta} e^{i \alpha \theta} | c = }} {{eqn | r = e^{i \alpha \theta} e^{-i m \theta} \sum_{n \mathop = 0}^{2 m} e^{i n \theta} | c = }} {{eqn | n = 1 | r = e^{i \alpha \theta} e^{-i m \theta} \paren {\dfrac {e^{i \paren {2 m + 1} \theta} - 1} {e^{i \theta} - 1} } | c = Sum of Geometric Sequence }} {{eqn | r = e^{i \alpha \theta} e^{-i m \theta} \paren {\dfrac {e^{i \paren {2 m + 1} \theta / 2} \paren {e^{i \paren {2 m + 1} \theta / 2} - e^{-i \paren {2 m + 1} \theta / 2} } } {e^{i \theta / 2} \paren {e^{i \theta / 2} - e^{-i \theta / 2} } } } | c = extracting factors }} {{eqn | r = e^{i \alpha \theta} \paren {\dfrac {e^{i \paren {2 m + 1} \theta / 2} - e^{-i \paren {2 m + 1} \theta / 2} } {e^{i \theta / 2} - e^{-i \theta / 2} } } | c = Exponential of Sum and some algebra }} {{eqn | r = e^{i \alpha \theta} \frac {\map \sin {\paren {2 m + 1} \theta / 2} } {\map \sin {\theta / 2} } | c = Sine Exponential Formulation }} {{eqn | ll= \leadsto | l = \sum_{n \mathop = -m}^m \paren {\cos \paren {n + \alpha} \theta + i \sin \paren {n + \alpha} \theta} | r = \paren {\map \cos {\alpha \theta} + i \map \sin {\alpha \theta} } \frac {\map \sin {\paren {m + \frac 1 2} \theta } } {\map \sin {\theta / 2} } | c = Euler's Formula and simplifying }} {{eqn | n = 2 | ll= \leadsto | l = \sum_{n \mathop = -m}^m \sin \paren {n + \alpha} \theta | r = \map \sin {\alpha \theta} \frac {\map \sin {\paren {m + \frac 1 2} \theta} } {\map \sin {\theta / 2} } | c = equating imaginary parts }} {{end-eqn}} Note that the 
{{RHS}} at $(1)$ is not defined when $e^{i \theta} = 1$. This happens when $\theta = 2 k \pi$ for $k \in \Z$. For the given range of $0 < \theta < 2 \pi$ it is therefore seen that $(1)$ does indeed hold. Then: {{begin-eqn}} {{eqn | l = \int_0^\theta \sin \paren {\alpha + n} u \rd u | r = \intlimits {\dfrac {-\cos \paren {n + \alpha} u} {n + \alpha} } {u \mathop = 0} {u \mathop = \theta} | c = Primitive of $\sin a x$ }} {{eqn | r = \paren {\dfrac {-\cos \paren {n + \alpha} \theta} {n + \alpha} } - \paren {\dfrac {-\cos \paren {n + \alpha} 0} {n + \alpha} } | c = }} {{eqn | r = \dfrac {1 - \cos \paren {n + \alpha} \theta} {n + \alpha} | c = Cosine of Zero is One }} {{eqn | ll= \leadsto | l = \sum_{n \mathop = -m}^m \dfrac {1 - \cos \paren {n + \alpha} \theta} {n + \alpha} | r = \sum_{n \mathop = -m}^m \int_0^\theta \sin \paren {\alpha + n} u \rd u | c = }} {{eqn | r = \int_0^\theta \sum_{n \mathop = -m}^m \sin \paren {\alpha + n} u \rd u | c = Linear Combination of Definite Integrals }} {{eqn | r = \int_0^\theta \map \sin {\alpha u} \frac {\map \sin {\paren {m + \frac 1 2} u} } {\map \sin {u / 2} } \rd u | c = from $(2)$, changing the variable name }} {{end-eqn}} Hence the result. {{qed}} \end{proof}
22122
\section{Sum from -m to m of Sine of n + alpha of theta over n + alpha} Tags: Sine Function \begin{theorem} For $0 < \theta < 2 \pi$: :$\ds \sum_{n \mathop = -m}^m \dfrac {\sin \paren {n + \alpha} \theta} {n + \alpha} = \int_0^\theta \map \cos {\alpha u} \dfrac {\sin \paren {m + \frac 1 2} u \rd u} {\sin \frac 1 2 u}$ \end{theorem} \begin{proof} We have: {{begin-eqn}} {{eqn | l = \sum_{n \mathop = -m}^m e^{i \paren {n + \alpha} \theta} | r = \sum_{n \mathop = -m}^m e^{i n \theta} e^{i \alpha \theta} | c = }} {{eqn | r = e^{i \alpha \theta} e^{-i m \theta} \sum_{n \mathop = 0}^{2 m} e^{i n \theta} | c = }} {{eqn | n = 1 | r = e^{i \alpha \theta} e^{-i m \theta} \paren {\dfrac {e^{i \paren {2 m + 1} \theta} - 1} {e^{i \theta} - 1} } | c = Sum of Geometric Sequence }} {{eqn | r = e^{i \alpha \theta} e^{-i m \theta} \paren {\dfrac {e^{i \paren {2 m + 1} \theta / 2} \paren {e^{i \paren {2 m + 1} \theta / 2} - e^{-i \paren {2 m + 1} \theta / 2} } } {e^{i \theta / 2} \paren {e^{i \theta / 2} - e^{-i \theta / 2} } } } | c = extracting factors }} {{eqn | r = e^{i \alpha \theta} \paren {\dfrac {e^{i \paren {2 m + 1} \theta / 2} - e^{-i \paren {2 m + 1} \theta / 2} } {e^{i \theta / 2} - e^{-i \theta / 2} } } | c = Exponential of Sum and some algebra }} {{eqn | r = e^{i \alpha \theta} \frac {\map \sin {\paren {2 m + 1} \theta / 2} } {\map \sin {\theta / 2} } | c = Sine Exponential Formulation }} {{eqn | ll= \leadsto | l = \sum_{n \mathop = -m}^m \paren {\cos \paren {n + \alpha} \theta + i \sin \paren {n + \alpha} \theta} | r = \paren {\map \cos {\alpha \theta} + i \map \sin {\alpha \theta} } \frac {\map \sin {\paren {m + \frac 1 2} \theta } } {\map \sin {\theta / 2} } | c = Euler's Formula and simplifying }} {{eqn | n = 2 | ll= \leadsto | l = \sum_{n \mathop = -m}^m \cos \paren {n + \alpha} \theta | r = \map \cos {\alpha \theta} \frac {\map \sin {\paren {m + \frac 1 2} \theta } } {\map \sin {\theta / 2} } | c = equating real parts }} {{end-eqn}} Note that the 
{{RHS}} at $(1)$ is not defined when $e^{i \theta} = 1$. This happens when $\theta = 2 k \pi$ for $k \in \Z$. For the given range of $0 < \theta < 2 \pi$ it is therefore seen that $(1)$ does indeed hold. Then: {{begin-eqn}} {{eqn | l = \int_0^\theta \cos \paren {\alpha + n} u \rd u | r = \intlimits {\dfrac {\sin \paren {n + \alpha} u} {n + \alpha} } {u \mathop = 0} {u \mathop = \theta} | c = Primitive of $\cos a x$ }} {{eqn | r = \paren {\dfrac {\sin \paren {n + \alpha} \theta} {n + \alpha} } - \paren {\dfrac {\sin \paren {n + \alpha} 0} {n + \alpha} } | c = }} {{eqn | r = \dfrac {\sin \paren {n + \alpha} \theta} {n + \alpha} | c = Sine of Zero is Zero }} {{eqn | ll= \leadsto | l = \sum_{n \mathop = -m}^m \dfrac {\sin \paren {n + \alpha} \theta} {n + \alpha} | r = \sum_{n \mathop = -m}^m \int_0^\theta \cos \paren {\alpha + n} u \rd u | c = }} {{eqn | r = \int_0^\theta \sum_{n \mathop = -m}^m \cos \paren {\alpha + n} u \rd u | c = Linear Combination of Definite Integrals }} {{eqn | r = \int_0^\theta \map \cos {\alpha u} \frac {\map \sin {\paren {m + \frac 1 2} u} } {\map \sin {u / 2} } \rd u | c = from $(2)$, changing the variable name }} {{end-eqn}} Hence the result. {{qed}} \end{proof}
22123
\section{Sum of 2 Lucky Numbers in 4 Ways} Tags: 34, Lucky Numbers \begin{theorem} The number $34$ is the smallest positive integer to be the sum of $2$ lucky numbers in $4$ different ways. \end{theorem} \begin{proof} The sequence of lucky numbers begins: :$1, 3, 7, 9, 13, 15, 21, 25, 31, 33, \ldots$ Thus we have: {{begin-eqn}} {{eqn | l = 34 | r = 1 + 33 | c = }} {{eqn | r = 3 + 31 | c = }} {{eqn | r = 9 + 25 | c = }} {{eqn | r = 13 + 21 | c = }} {{end-eqn}} {{qed}} \end{proof}
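This can be verified computationally. The following Python sketch (an illustration, not part of the proof) implements the lucky number sieve and counts the representations of $34$ as a sum of two lucky numbers:

```python
def lucky_numbers(limit):
    """Sieve out the lucky numbers up to limit."""
    nums = list(range(1, limit + 1, 2))  # after the first pass only the odds survive
    i = 1
    while i < len(nums):
        step = nums[i]
        if step > len(nums):
            break
        # delete every step-th survivor, counting positions from 1
        nums = [n for pos, n in enumerate(nums, start=1) if pos % step != 0]
        i += 1
    return nums

lucky = set(lucky_numbers(40))
ways = [(a, 34 - a) for a in sorted(lucky) if a <= 34 - a and 34 - a in lucky]
print(ways)
```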
22124
\section{Sum of 2 Squares in 2 Distinct Ways} Tags: Sum of 2 Squares in 2 Distinct Ways, Sums of Squares, Brahmagupta-Fibonacci Identity, Square Numbers \begin{theorem} Let $m, n \in \Z_{>0}$ be distinct positive integers that can be expressed as the sum of two distinct square numbers. Then $m n$ can be expressed as the sum of two square numbers in at least two distinct ways. \end{theorem} \begin{proof} Let: :$m = a^2 + b^2$ :$n = c^2 + d^2$ Then: {{begin-eqn}} {{eqn | l = m n | r = \paren {a^2 + b^2} \paren {c^2 + d^2} | c = }} {{eqn | r = \paren {a c + b d}^2 + \paren {a d - b c}^2 | c = Brahmagupta-Fibonacci Identity }} {{eqn | r = \paren {a c - b d}^2 + \paren {a d + b c}^2 | c = Brahmagupta-Fibonacci Identity: Corollary }} {{end-eqn}} It remains to be shown that if $a \ne b$ and $c \ne d$, then the four numbers: :$a c + b d, a d - b c, a c - b d, a d + b c$ are distinct. Because $a, b, c, d > 0$, we have: :$a c + b d \ne a c - b d$ :$a d + b c \ne a d - b c$ We also have: {{begin-eqn}} {{eqn | l = a c \pm b d | r = a d \pm b c }} {{eqn | ll= \leadstoandfrom | l = a c \mp b c - a d \pm b d | r = 0 }} {{eqn | ll= \leadstoandfrom | l = c \paren {a \mp b} - d \paren {a \mp b} | r = 0 }} {{eqn | ll= \leadstoandfrom | l = \paren {a \mp b} \paren {c \mp d} | r = 0 }} {{end-eqn}} Thus $a \ne b$ and $c \ne d$ implies $a c \pm b d \ne a d \pm b c$. The case for $a c \pm b d \ne a d \mp b c$ is similar. {{qed}} \end{proof}
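The construction in the proof can be illustrated numerically. The following Python sketch (the function name is chosen for illustration only) computes both representations supplied by the identity and its corollary, and checks they are distinct, using $5 = 1^2 + 2^2$ and $13 = 2^2 + 3^2$:

```python
def brahmagupta_fibonacci(a, b, c, d):
    """The two representations of (a^2 + b^2)(c^2 + d^2) as a sum of
    two squares given by the Brahmagupta-Fibonacci identity and its corollary."""
    r1 = tuple(sorted((abs(a * c + b * d), abs(a * d - b * c))))
    r2 = tuple(sorted((abs(a * c - b * d), abs(a * d + b * c))))
    return r1, r2

# 5 = 1^2 + 2^2 and 13 = 2^2 + 3^2, so 65 has two representations
r1, r2 = brahmagupta_fibonacci(1, 2, 2, 3)
assert r1[0] ** 2 + r1[1] ** 2 == 65 == r2[0] ** 2 + r2[1] ** 2
assert r1 != r2  # distinct, since a != b and c != d
```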
22125
\section{Sum of 2 Squares in 2 Distinct Ways/Examples/145} Tags: 145, Sum of 2 Squares in 2 Distinct Ways \begin{theorem} $145$ can be expressed as the sum of two square numbers in two distinct ways: {{begin-eqn}} {{eqn | l = 145 | r = 12^2 + 1^2 }} {{eqn | r = 9^2 + 8^2 }} {{end-eqn}} \end{theorem} \begin{proof} We have that: :$145 = 5 \times 29$ Both $5$ and $29$ can be expressed as the sum of two distinct square numbers: {{begin-eqn}} {{eqn | l = 5 | r = 1^2 + 2^2 }} {{eqn | l = 29 | r = 2^2 + 5^2 }} {{end-eqn}} Thus: {{begin-eqn}} {{eqn | r = \paren {1^2 + 2^2} \paren {2^2 + 5^2} | c = }} {{eqn | r = \paren {1 \times 2 + 2 \times 5}^2 + \paren {1 \times 5 - 2 \times 2}^2 | c = Brahmagupta-Fibonacci Identity }} {{eqn | r = \paren {2 + 10}^2 + \paren {5 - 4}^2 | c = }} {{eqn | r = 12^2 + 1^2 | c = }} {{eqn | r = 144 + 1 | c = }} {{eqn | r = 145 | c = }} {{end-eqn}} and: {{begin-eqn}} {{eqn | r = \paren {1^2 + 2^2} \paren {2^2 + 5^2} | c = }} {{eqn | r = \paren {1 \times 2 - 2 \times 5}^2 + \paren {1 \times 5 + 2 \times 2}^2 | c = Brahmagupta-Fibonacci Identity/Corollary }} {{eqn | r = \paren {2 - 10}^2 + \paren {5 + 4}^2 | c = }} {{eqn | r = \paren {10 - 2}^2 + \paren {5 + 4}^2 | c = }} {{eqn | r = 8^2 + 9^2 | c = }} {{eqn | r = 64 + 81 | c = }} {{eqn | r = 145 | c = }} {{end-eqn}} {{qed}} \end{proof}
22126
\section{Sum of 2 Squares in 2 Distinct Ways/Examples/50} Tags: Sum of 2 Squares in 2 Distinct Ways, 50 \begin{theorem} $50$ is the smallest positive integer which can be expressed as the sum of two square numbers in two distinct ways: {{begin-eqn}} {{eqn | l = 50 | r = 5^2 + 5^2 }} {{eqn | r = 7^2 + 1^2 }} {{end-eqn}} \end{theorem} \begin{proof} The smallest two positive integers which can be expressed as the sum of two distinct square numbers are: {{begin-eqn}} {{eqn | l = 5 | r = 1^2 + 2^2 }} {{eqn | l = 10 | r = 1^2 + 3^2 }} {{end-eqn}} We have that: :$50 = 5 \times 10$ Thus: {{begin-eqn}} {{eqn | r = \paren {1^2 + 2^2} \paren {1^2 + 3^2} | c = }} {{eqn | r = \paren {1 \times 1 + 2 \times 3}^2 + \paren {1 \times 3 - 2 \times 1}^2 | c = Brahmagupta-Fibonacci Identity }} {{eqn | r = \paren {1 + 6}^2 + \paren {3 - 2}^2 | c = }} {{eqn | r = 7^2 + 1^2 | c = }} {{eqn | r = 49 + 1 | c = }} {{eqn | r = 50 | c = }} {{end-eqn}} and: {{begin-eqn}} {{eqn | r = \paren {1^2 + 2^2} \paren {1^2 + 3^2} | c = }} {{eqn | r = \paren {1 \times 1 - 2 \times 3}^2 + \paren {1 \times 3 + 2 \times 1}^2 | c = Brahmagupta-Fibonacci Identity: Corollary }} {{eqn | r = \paren {1 - 6}^2 + \paren {3 + 2}^2 | c = }} {{eqn | r = \paren {6 - 1}^2 + \paren {3 + 2}^2 | c = }} {{eqn | r = 5^2 + 5^2 | c = }} {{eqn | r = 25 + 25 | c = }} {{eqn | r = 50 | c = }} {{end-eqn}} {{qed}} \end{proof}
22127
\section{Sum of 2 Squares in 2 Distinct Ways/Examples/65} Tags: Sum of 2 Squares in 2 Distinct Ways, 65 \begin{theorem} $65$ can be expressed as the sum of two square numbers in two distinct ways: {{begin-eqn}} {{eqn | l = 65 | r = 8^2 + 1^2 }} {{eqn | r = 7^2 + 4^2 }} {{end-eqn}} \end{theorem} \begin{proof} We have that: :$65 = 5 \times 13$ Both $5$ and $13$ can be expressed as the sum of two distinct square numbers: {{begin-eqn}} {{eqn | l = 5 | r = 1^2 + 2^2 }} {{eqn | l = 13 | r = 2^2 + 3^2 }} {{end-eqn}} Thus: {{begin-eqn}} {{eqn | r = \paren {1^2 + 2^2} \paren {2^2 + 3^2} | c = }} {{eqn | r = \paren {1 \times 2 + 2 \times 3}^2 + \paren {1 \times 3 - 2 \times 2}^2 | c = Brahmagupta-Fibonacci Identity }} {{eqn | r = \paren {2 + 6}^2 + \paren {3 - 4}^2 | c = }} {{eqn | r = \paren {2 + 6}^2 + \paren {4 - 3}^2 | c = }} {{eqn | r = 8^2 + 1^2 | c = }} {{eqn | r = 64 + 1 | c = }} {{eqn | r = 65 | c = }} {{end-eqn}} and: {{begin-eqn}} {{eqn | r = \paren {1^2 + 2^2} \paren {2^2 + 3^2} | c = }} {{eqn | r = \paren {1 \times 2 - 2 \times 3}^2 + \paren {1 \times 3 + 2 \times 2}^2 | c = Brahmagupta-Fibonacci Identity: Corollary }} {{eqn | r = \paren {2 - 6}^2 + \paren {3 + 4}^2 | c = }} {{eqn | r = \paren {6 - 2}^2 + \paren {3 + 4}^2 | c = }} {{eqn | r = 4^2 + 7^2 | c = }} {{eqn | r = 16 + 49 | c = }} {{eqn | r = 65 | c = }} {{end-eqn}} {{qed}} \end{proof}
22128
\section{Sum of 2 Squares in 2 Distinct Ways which is also Sum of Cubes} Tags: Sum of 2 Squares in 2 Distinct Ways which is also Sum of Cubes, 65, Sums of Squares, Sums of Cubes \begin{theorem} The smallest positive integer which is both the sum of $2$ square numbers in two distinct ways and also the sum of $2$ cube numbers is $65$: {{begin-eqn}} {{eqn | l = 65 | m = 16 + 49 | mo= = | r = 4^2 + 7^2 | c = }} {{eqn | m = 1 + 64 | mo= = | r = 1^2 + 8^2 | c = }} {{eqn | o = | mo= = | r = 1^3 + 4^3 | c = }} {{end-eqn}} \end{theorem} \begin{proof} From Sum of 2 Squares in 2 Distinct Ways, the smallest $2$ positive integers which are the sum of $2$ square numbers in two distinct ways are $50$ and $65$. But $50$ cannot be expressed as the sum of $2$ cube numbers: {{begin-eqn}} {{eqn | l = 50 - 1^3 | r = 49 | c = which is not a cube }} {{eqn | l = 50 - 2^3 | r = 42 | c = which is not a cube }} {{eqn | l = 50 - 3^3 | r = 23 | c = which is not a cube }} {{eqn | l = 50 - 4^3 | r = -14 | c = and we have fallen off the end }} {{end-eqn}} Hence $65$ is that smallest number. {{qed}} \end{proof}
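The search described in the proof can be automated. The following Python sketch (an illustration, not part of the proof; the helper names are for illustration only) finds the smallest positive integer with at least two representations as a sum of two positive squares which is also a sum of two positive cubes:

```python
def square_reps(n):
    """Representations n = a^2 + b^2 with 1 <= a <= b."""
    reps = []
    a = 1
    while 2 * a * a <= n:
        b2 = n - a * a
        b = int(b2 ** 0.5)
        while b * b < b2:   # guard against floating-point flooring
            b += 1
        if b * b == b2:
            reps.append((a, b))
        a += 1
    return reps

def is_sum_of_two_cubes(n):
    """Whether n = a^3 + b^3 for positive integers a <= b."""
    a = 1
    while 2 * a ** 3 <= n:
        r = n - a ** 3
        b = round(r ** (1 / 3))
        if any((b + e) ** 3 == r for e in (-1, 0, 1)):
            return True
        a += 1
    return False

smallest = next(n for n in range(1, 1000)
                if len(square_reps(n)) >= 2 and is_sum_of_two_cubes(n))
print(smallest)
```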
22129
\section{Sum of 3 Squares in 2 Distinct Ways} Tags: 27, Square Numbers \begin{theorem} $27$ is the smallest positive integer which can be expressed as the sum of $3$ square numbers in $2$ distinct ways: {{begin-eqn}} {{eqn | l = 27 | r = 3^2 + 3^2 + 3^2 }} {{eqn | r = 5^2 + 1^2 + 1^2 }} {{end-eqn}} \end{theorem} \begin{proof} Can be performed by brute-force investigation. \end{proof}
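The brute-force investigation can be carried out as follows in Python (an illustrative sketch, not part of the proof):

```python
def three_square_reps(n):
    """Representations n = a^2 + b^2 + c^2 with 1 <= a <= b <= c."""
    reps = []
    a = 1
    while 3 * a * a <= n:
        b = a
        while a * a + 2 * b * b <= n:
            c2 = n - a * a - b * b
            c = int(c2 ** 0.5)
            while c * c < c2:   # guard against floating-point flooring
                c += 1
            if c * c == c2:
                reps.append((a, b, c))
            b += 1
        a += 1
    return reps

smallest = next(n for n in range(1, 100) if len(three_square_reps(n)) >= 2)
print(smallest, three_square_reps(smallest))
```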
22130
\section{Sum of 3 Unit Fractions that equals 1} Tags: Unit Fractions, Recreational Mathematics \begin{theorem} There are $3$ ways to represent $1$ as the sum of exactly $3$ unit fractions. \end{theorem} \begin{proof} Let: :$1 = \dfrac 1 a + \dfrac 1 b + \dfrac 1 c$ where: :$0 < a \le b \le c$ and: {{AimForCont}} $a = 1$. Then: :$1 = \dfrac 1 1 + \dfrac 1 b + \dfrac 1 c$ and so: :$\dfrac 1 b + \dfrac 1 c = 0$ which contradicts the stipulation that $b, c > 0$. So there is no solution possible when $a = 1$. Therefore $a \ge 2$. \end{proof}
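The claimed count can be checked by a short exhaustive search. The following Python sketch (an illustration, not part of the proof) uses the bounds $a \le 3$, since $3/a \ge 1$, and $b \le 2 a$, since $2/b \ge 1 - 1/a$:

```python
from fractions import Fraction

def three_unit_fraction_sums():
    """All (a, b, c) with a <= b <= c and 1/a + 1/b + 1/c = 1."""
    solutions = []
    for a in range(2, 4):               # 3/a >= 1 forces a <= 3
        for b in range(a, 2 * a + 1):   # 2/b >= 1 - 1/a forces b <= 2a/(a-1) <= 2a
            rem = 1 - Fraction(1, a) - Fraction(1, b)
            # rem must be a unit fraction 1/c with c >= b
            if rem > 0 and rem.numerator == 1 and rem.denominator >= b:
                solutions.append((a, b, rem.denominator))
    return solutions

print(three_unit_fraction_sums())
```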
22131
\section{Sum of 4 Unit Fractions that equals 1} Tags: Unit Fractions, 1, Recreational Mathematics, Fractions \begin{theorem} There are $14$ ways to represent $1$ as the sum of exactly $4$ unit fractions. \end{theorem} \begin{proof} Let: :$1 = \dfrac 1 a + \dfrac 1 b + \dfrac 1 c + \dfrac 1 d$ where: :$a \le b \le c \le d$ and: :$a \ge 2$ \end{proof}
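The count of $14$ can be confirmed by exhaustive search, bounding each variable in turn from the ordering $a \le b \le c \le d$ (a Python sketch, an illustration rather than part of the proof):

```python
from fractions import Fraction

def four_unit_fraction_sums():
    """All (a, b, c, d) with a <= b <= c <= d and 1/a + 1/b + 1/c + 1/d = 1."""
    solutions = []
    for a in range(2, 5):                       # 4/a >= 1 forces a <= 4
        ra = 1 - Fraction(1, a)
        for b in range(a, int(3 / ra) + 1):     # 3/b >= ra
            rb = ra - Fraction(1, b)
            if rb <= 0:
                continue
            for c in range(b, int(2 / rb) + 1): # 2/c >= rb
                rc = rb - Fraction(1, c)
                # rc must be a unit fraction 1/d with d >= c
                if rc > 0 and rc.numerator == 1 and rc.denominator >= c:
                    solutions.append((a, b, c, rc.denominator))
    return solutions

print(len(four_unit_fraction_sums()))
```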
22132
\section{Sum of 714 and 715} Tags: Prime Numbers, 714, 715 \begin{theorem} The sum of $714$ and $715$ is a $4$-digit integer which has $6$ anagrams which are prime. \end{theorem} \begin{proof} We have that: :$714 + 715 = 1429$ Hence we investigate its anagrams. We bother only to check those which do not end in either $2$ or $4$, as those are even. {{begin-eqn}} {{eqn | l = 1429 | o = | c = is prime }} {{eqn | l = 1249 | o = | c = is prime }} {{eqn | l = 4129 | o = | c = is prime }} {{eqn | l = 4219 | o = | c = is prime }} {{eqn | l = 2149 | r = 7 \times 307 | c = and so is not prime }} {{eqn | l = 2419 | r = 41 \times 59 | c = and so is not prime }} {{eqn | l = 9241 | o = | c = is prime }} {{eqn | l = 9421 | o = | c = is prime }} {{eqn | l = 2941 | r = 17 \times 173 | c = and so is not prime }} {{eqn | l = 2491 | r = 47 \times 53 | c = and so is not prime }} {{eqn | l = 4291 | r = 7 \times 613 | c = and so is not prime }} {{eqn | l = 4921 | r = 7 \times 19 \times 37 | c = and so is not prime }} {{end-eqn}} Of the above, $6$ are seen to be prime. {{qed}} \end{proof}
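The check can be mechanized. The following Python sketch (an illustration, not part of the proof) enumerates all anagrams of $1429$ and tests each for primality by trial division:

```python
from itertools import permutations

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

n = 714 + 715
# the digits 1, 4, 2, 9 are distinct and nonzero, so all 24 anagrams are 4-digit
anagrams = {int("".join(p)) for p in permutations(str(n))}
prime_anagrams = sorted(a for a in anagrams if is_prime(a))
print(n, prime_anagrams)
```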
22133
\section{Sum of Absolute Values on Ordered Integral Domain} Tags: Integral Domains, Absolute Value Function \begin{theorem} Let $\struct {D, +, \times, \le}$ be an ordered integral domain. For all $a \in D$, let $\size a$ denote the absolute value of $a$. Then: :$\size {a + b} \le \size a + \size b$ \end{theorem} \begin{proof} Let $P$ be the (strict) positivity property on $D$. Let $<$ be the (strict) total ordering defined on $D$ as: :$a < b \iff a \le b \land a \ne b$ Let $N$ be the (strict) negativity property on $D$. Let $a \in D$. If $\map P a$ or $a = 0$ then $a \le \size a$. If $\map N a$ then by Properties of Strict Negativity: $(1)$ and definition of absolute value: :$a < 0 < \size a$ and hence by transitivity of $<$ we have: :$a < \size a$ By similar reasoning: :$-a < \size a$ Thus for all $a, b \in D$ we have: :$a \le \size a, b \le \size b$ As $<$ is compatible with $+$, we have: :$a + b \le \size a + \size b$ and: :$-\paren {a + b} = \paren {-a} + \paren {-b} \le \size a + \size b$ But either: :$\size {a + b} = a + b$ or: :$\size {a + b} = -\paren {a + b}$ Hence the result: :$\size {a + b} \le \size a + \size b$ {{qed}} \end{proof}
22134
\section{Sum of Absolutely Continuous Functions is Absolutely Continuous} Tags: Absolutely Continuous Functions \begin{theorem} Let $I \subseteq \R$ be a real interval. Let $f, g : I \to \R$ be absolutely continuous functions. Then $f + g$ is absolutely continuous. \end{theorem} \begin{proof} Let $\epsilon$ be a positive real number. Since $f$ is absolutely continuous, there exists real $\delta_1 > 0$ such that for all sets of disjoint closed real intervals $\closedint {a_1} {b_1}, \dotsc, \closedint {a_n} {b_n} \subseteq I$ with: :$\ds \sum_{i \mathop = 1}^n \paren {b_i - a_i} < \delta_1$ we have: :$\ds \sum_{i \mathop = 1}^n \size {\map f {b_i} - \map f {a_i} } < \frac \epsilon 2$ Similarly, since $g$ is absolutely continuous, there exists real $\delta_2 > 0$ such that whenever: :$\ds \sum_{i \mathop = 1}^n \paren {b_i - a_i} < \delta_2$ we have: :$\ds \sum_{i \mathop = 1}^n \size {\map g {b_i} - \map g {a_i} } < \frac \epsilon 2$ Let: :$\delta = \map \min {\delta_1, \delta_2}$ Then, for all sets of disjoint closed real intervals $\closedint {a_1} {b_1}, \dotsc, \closedint {a_n} {b_n} \subseteq I$ with: :$\ds \sum_{i \mathop = 1}^n \paren {b_i - a_i} < \delta$ we have: :$\ds \sum_{i \mathop = 1}^n \size {\map f {b_i} - \map f {a_i} } < \frac \epsilon 2$ and: :$\ds \sum_{i \mathop = 1}^n \size {\map g {b_i} - \map g {a_i} } < \frac \epsilon 2$ We then have: {{begin-eqn}} {{eqn | l = \sum_{i \mathop = 1}^n \size {\map {\paren {f + g} } {b_i} - \map {\paren {f + g} } {a_i} } | r = \sum_{i \mathop = 1}^n \size {\paren {\map f {b_i} - \map f {a_i} } + \paren {\map g {b_i} - \map g {a_i} } } }} {{eqn | o = \le | r = \sum_{i \mathop = 1}^n \size {\map f {b_i} - \map f {a_i} } + \sum_{i \mathop = 1}^n \size {\map g {b_i} - \map g {a_i} } | c = Triangle Inequality for Real Numbers }} {{eqn | o = < | r = \frac \epsilon 2 + \frac \epsilon 2 }} {{eqn | r = \epsilon }} {{end-eqn}} whenever: :$\ds \sum_{i \mathop = 1}^n \paren {b_i - a_i} < \delta$ Since $\epsilon$ was 
arbitrary: :$f + g$ is absolutely continuous. {{qed}} Category:Absolutely Continuous Functions \end{proof}
22135
\section{Sum of Absolutely Convergent Series} Tags: Absolute Convergence, Convergence, Series \begin{theorem} Let $\ds \sum_{n \mathop = 1}^\infty a_n$ and $\ds \sum_{n \mathop = 1}^\infty b_n$ be two real or complex series that are absolutely convergent. Then the series $\ds \sum_{n \mathop = 1}^\infty \paren {a_n + b_n}$ is absolutely convergent, and: :$\ds \sum_{n \mathop = 1}^\infty \paren {a_n + b_n} = \sum_{n \mathop = 1}^\infty a_n + \sum_{n \mathop = 1}^\infty b_n$ \end{theorem} \begin{proof} Let $\epsilon \in \R_{>0}$. From Tail of Convergent Series tends to Zero, it follows that there exists $M \in \N$ such that: :$\ds \sum_{n \mathop = M + 1}^\infty \cmod {a_n} < \dfrac \epsilon 2$ and: :$\ds\sum_{n \mathop = M + 1}^\infty \cmod {b_n} < \dfrac \epsilon 2$ For all $m \ge M$, it follows that: {{begin-eqn}} {{eqn | l = \cmod {\sum_{n \mathop = 1}^\infty a_n + \sum_{n \mathop = 1}^\infty b_n - \sum_{n \mathop = 1}^m \paren {a_n + b_n} } | r = \cmod {\sum_{n \mathop = m + 1}^\infty a_n + \sum_{n \mathop = m + 1}^\infty b_n} }} {{eqn | o = \le | r = \sum_{n \mathop = m + 1}^\infty \cmod {a_n} + \sum_{n \mathop = m + 1}^\infty \cmod {b_n} | c = by Triangle Inequality }} {{eqn | o = \le | r = \sum_{n \mathop = M + 1}^\infty \cmod {a_n} + \sum_{n \mathop = M + 1}^\infty \cmod {b_n} }} {{eqn | o = < | r = \epsilon }} {{end-eqn}} By definition of convergent series, it follows that: {{begin-eqn}} {{eqn | l = \sum_{n \mathop = 1}^\infty a_n + \sum_{n \mathop = 1}^\infty b_n | r = \lim_{m \mathop \to \infty} \sum_{n \mathop = 1}^m \paren {a_n + b_n} }} {{eqn | r = \sum_{n \mathop = 1}^\infty \paren {a_n + b_n} }} {{end-eqn}} To show that $\ds \sum_{n \mathop = 1}^\infty \paren {a_n + b_n}$ is absolutely convergent, note that: {{begin-eqn}} {{eqn | l = \sum_{n \mathop = 1}^\infty \cmod {a_n} + \sum_{n \mathop = 1}^\infty \cmod {b_n} | r = \sum_{n \mathop = 1}^\infty \paren {\cmod {a_n} + \cmod {b_n} } | c = as shown above }} {{eqn | o = \ge | r = \sum_{n \mathop = 
1}^\infty \cmod {a_n + b_n} | c = by Triangle Inequality }} {{end-eqn}} {{qed}} \end{proof}
22136
\section{Sum of All Ring Products is Additive Subgroup} Tags: Rings, Subset Products, Ring Theory \begin{theorem} Let $\struct {R, +, \circ}$ be a ring. Let $\struct {S, +}$ and $\struct {T, +}$ be additive subgroups of $\struct {R, +, \circ}$. Let $S + T$ be defined as subset product. Let $S T$ be defined as: :$\ds S T = \set {\sum_{i \mathop = 1}^n s_i \circ t_i: s_i \in S, t_i \in T, i \in \closedint 1 n}$ Then both $S + T$ and $S T$ are additive subgroups of $\struct {R, +, \circ}$. \end{theorem} \begin{proof} As $\struct {R, +}$ is abelian (from the definition of a ring), we have: :$S + T = T + S$ from Subset Product of Commutative is Commutative. So from Subset Product of Subgroups it follows that $S + T$ is an additive subgroup of $\struct {R, +, \circ}$. Let $x, y \in S T$. From Sum of All Ring Products is Closed under Addition, $\struct {S T, +}$ is closed. So $x + y \in S T$. So, if $\ds y = \sum s_i \circ t_i \in S T$, it follows that: :$\ds -y = \sum \paren {-s_i} \circ t_i \in S T$ By the Two-Step Subgroup Test, we have that $S T$ is an additive subgroup of $\struct {R, +, \circ}$. {{qed}} \end{proof}
22137
\section{Sum of All Ring Products is Associative} Tags: Rings, Ring Theory \begin{theorem} Let $\struct {R, +, \circ}$ be a ring. Let $\struct {S, +}, \struct {T, +}, \struct {U, +}$ be additive subgroups of $\struct {R, +, \circ}$. Let $S T$ be defined as: :$\ds S T = \set {\sum_{i \mathop = 1}^n s_i \circ t_i: s_i \in S, t_i \in T, i \in \closedint 1 n}$ Then: :$\paren {S T} U = S \paren {T U}$ \end{theorem} \begin{proof} We have by definition that $S T$ is made up of all finite sums of elements of the form $s \circ t$ where $s \in S, t \in T$. From Sum of All Ring Products is Closed under Addition, this set is closed under ring addition. Therefore, so are $\paren {S T} U$ and $S \paren {T U}$. Let $z \in \paren {S T} U$. Then $z$ is a finite sum of elements in the form $x \circ u$ where $x \in ST$ and $u \in U$. So $x$ is a finite sum of elements in the form $s \circ t$ where $s \in S, t \in T$. Therefore $z$ is a finite sum of elements in the form $\paren {s \circ t} \circ u$ where $s \in S, t \in T, u \in U$. As $\struct {R, +, \circ}$ is a ring, $\circ$ is associative. So $z$ is a finite sum of elements in the form $s \circ \paren {t \circ u}$ where $s \in S, t \in T, u \in U$. So these elements all belong to $S \paren {T U}$. Since $S \paren {T U}$ is closed under addition, $z \in S \paren {T U}$. So: :$\paren {S T} U \subseteq S \paren {T U}$ By a similar argument in the other direction: :$S \paren {T U} \subseteq \paren {S T} U $ and so by definition of set equality: :$\paren {S T} U = S \paren {T U}$ {{qed}} \end{proof}
22138
\section{Sum of All Ring Products is Closed under Addition} Tags: Rings, Ring Theory \begin{theorem} Let $\struct {R, +, \circ}$ be a ring. Let $\struct {S, +}$ and $\struct {T, +}$ be additive subgroups of $\struct {R, +, \circ}$. Let $S T$ be defined as: :$\ds S T = \set {\sum_{i \mathop = 1}^n s_i \circ t_i: s_i \in S, t_i \in T, i \in \closedint 1 n}$ Then $\struct {S T, +}$ is a closed subset of $\struct {R, +}$. \end{theorem} \begin{proof} Let $x_1, x_2 \in S T$. Then: :$\ds x_1 = \sum_{i \mathop = 1}^j s_i \circ t_i, x_2 = \sum_{i \mathop = 1}^k s_i \circ t_i$ for some $s_i, t_i, j, k$, etc. By renaming the indices, we can express $x_2$ as: :$\ds x_2 = \sum_{i \mathop = j + 1}^{j + k} s_i \circ t_i$ and hence: :$\ds x_1 + x_2 = \sum_{i \mathop = 1}^j s_i \circ t_i + \sum_{i \mathop = j + 1}^{j + k} s_i \circ t_i = \sum_{i \mathop = 1}^{j + k} s_i \circ t_i$ So $x_1 + x_2 \in S T$ and $\struct {S T, +}$ is shown to be closed. {{qed}} \end{proof}
22139
\section{Sum of Angles of Triangle equals Two Right Angles} Tags: Triangles, Sum of Angles of Triangle equals Two Right Angles \begin{theorem} In a triangle, the sum of the three interior angles equals two right angles. {{:Euclid:Proposition/I/32}} \end{theorem} \begin{proof} :300px Let $\triangle ABC$ be a triangle. Let $BC$ be extended to a point $D$. From External Angle of Triangle equals Sum of other Internal Angles: : $\angle ACD = \angle ABC + \angle BAC$ By Euclid's Second Common Notion: : $\angle ACB + \angle ACD = \angle ABC + \angle BAC + \angle ACB$ But from Two Angles on Straight Line make Two Right Angles, $\angle ACB + \angle ACD$ equals two right angles. So by Euclid's First Common Notion, $\angle ABC + \angle BAC + \angle ACB$ equals two right angles. {{qed}} {{Euclid Note|32|I|Euclid's proposition $32$ consists of two parts, the first of which is External Angle of Triangle equals Sum of other Internal Angles, and the second part of which is this.|part = second}} \end{proof}
22140
\section{Sum of Antecedent and Consequent of Proportion} Tags: Ratios \begin{theorem} {{:Euclid:Proposition/V/25}} That is, if $a : b = c : d$ and $a$ is the greatest and $d$ is the least, then: :$a + d > b + c$ \end{theorem} \begin{proof} Let the four magnitudes $AB, CD, E, F$ be proportional, so that $AB : CD = E : F$. Let $AB$ be the greatest and $F$ the least. We need to show that $AB + F > CD + E$. :250px Let $AG = E, CH = F$. We have that $AB : CD = E : F$, $AG = E, F = CH$. So $AB : CD = AG : CH$. So from Proportional Magnitudes have Proportional Remainders $GB : HD = AB : CD$. But $AB > CD$ and so $GB > HD$. Since $AG = E$ and $CH = F$, it follows that $AG + F = CH + E$. We have that $GB > HD$. So add $AG + F$ to $GB$ and $CH + E$ to $HD$. It follows that $AB + F > CD + E$. {{qed}} {{Euclid Note|25|V}} \end{proof}
22141
\section{Sum of Arccotangents} Tags: Arccotangent Function, Inverse Cotangent \begin{theorem} :$\arccot a + \arccot b = \arccot \dfrac {a b - 1} {a + b}$ where $\arccot$ denotes the arccotangent. \end{theorem} \begin{proof} Let $x = \arccot a$ and $y = \arccot b$. Then: {{begin-eqn}} {{eqn | n = 1 | l = \cot x | r = a | c = }} {{eqn | n = 2 | l = \cot y | r = b | c = }} {{eqn | l = \map \cot {\arccot a + \arccot b} | r = \map \cot {x + y} | c = }} {{eqn | r = \frac {\cot x \cot y - 1} {\cot x + \cot y} | c = Cotangent of Sum }} {{eqn | r = \frac {a b - 1} {a + b} | c = by $(1)$ and $(2)$ }} {{eqn | ll= \leadsto | l = \arccot a + \arccot b | r = \arccot \frac {a b - 1} {a + b} | c = }} {{end-eqn}} {{qed}} \end{proof}
22142
\section{Sum of Arcsecant and Arccosecant} Tags: Inverse Cosecant, Inverse Trigonometric Functions, Arccosecant Function, Analysis, Inverse Secant, Arcsecant Function \begin{theorem} Let $x \in \R$ be a real number such that $\size x \ge 1$. Then: : $\arcsec x + \arccsc x = \dfrac \pi 2$ where $\arcsec$ and $\arccsc$ denote arcsecant and arccosecant respectively. \end{theorem} \begin{proof} Let $y \in \R$ such that: : $\exists x \in \R: \size x \ge 1$ and $x = \map \sec {y + \dfrac \pi 2}$ Then: {{begin-eqn}} {{eqn | l = x | r = \map \sec {y + \frac \pi 2} | c = }} {{eqn | r = -\csc y | c = Secant of Angle plus Right Angle }} {{eqn | r = \map \csc {-y} | c = Cosecant Function is Odd }} {{end-eqn}} Suppose $-\dfrac \pi 2 \le y \le \dfrac \pi 2$. Then we can write $-y = \arccsc x$. But then $\map \sec {y + \dfrac \pi 2} = x$. Now since $-\dfrac \pi 2 \le y \le \dfrac \pi 2$ it follows that $0 \le y + \dfrac \pi 2 \le \pi$. Hence $y + \dfrac \pi 2 = \arcsec x$. That is, $\dfrac \pi 2 = \arcsec x + \arccsc x$. {{qed}} \end{proof}
22143
\section{Sum of Arcsine and Arccosine} Tags: Inverse Trigonometric Functions, Analysis, Inverse Cosine, Inverse Hyperbolic Functions, Inverse Sine, Arcsine Function, Arccosine Function \begin{theorem} Let $x \in \R$ be a real number such that $-1 \le x \le 1$. Then: : $\arcsin x + \arccos x = \dfrac \pi 2$ where $\arcsin$ and $\arccos$ denote arcsine and arccosine respectively. \end{theorem} \begin{proof} Let $y \in \R$ such that: : $\exists x \in \left[{-1 \,.\,.\, 1}\right]: x = \cos \left({y + \dfrac \pi 2}\right)$ Then: {{begin-eqn}} {{eqn | l = x | r = \cos \left({y + \frac \pi 2}\right) | c = }} {{eqn | r = -\sin y | c = Cosine of Angle plus Right Angle }} {{eqn | r = \sin \left({-y}\right) | c = Sine Function is Odd }} {{end-eqn}} Suppose $-\dfrac \pi 2 \le y \le \dfrac \pi 2$. Then we can write $-y = \arcsin x$. But then $\cos \left({y + \dfrac \pi 2}\right) = x$. Now since $-\dfrac \pi 2 \le y \le \dfrac \pi 2$ it follows that $0 \le y + \dfrac \pi 2 \le \pi$. Hence $y + \dfrac \pi 2 = \arccos x$. That is, $\dfrac \pi 2 = \arccos x + \arcsin x$. {{qed}} \end{proof}
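A quick numerical spot-check of the identity across the domain $\closedint {-1} 1$:

```python
import math

# arcsin x + arccos x should equal pi/2 for every x in [-1, 1].
for i in range(-10, 11):
    x = i / 10
    assert math.isclose(math.asin(x) + math.acos(x), math.pi / 2)
print("verified on [-1, 1]")
```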
22144
\section{Sum of Arctangent and Arccotangent} Tags: Inverse Trigonometric Functions, Arccotangent Function, Inverse Tangent, Inverse Cotangent, Arctangent Function \begin{theorem} Let $x \in \R$ be a real number. Then: : $\arctan x + \operatorname{arccot} x = \dfrac \pi 2$ where $\arctan$ and $\operatorname{arccot}$ denote arctangent and arccotangent respectively. \end{theorem} \begin{proof} Let $y \in \R$ such that: : $\exists x \in \R: x = \cot \left({y + \dfrac \pi 2}\right)$ Then: {{begin-eqn}} {{eqn | l = x | r = \cot \left({y + \frac \pi 2}\right) | c = }} {{eqn | r = -\tan y | c = Cotangent of Angle plus Right Angle }} {{eqn | r = \tan \left({-y}\right) | c = Tangent Function is Odd }} {{end-eqn}} Suppose $-\dfrac \pi 2 \le y \le \dfrac \pi 2$. Then we can write $-y = \arctan x$. But then $\cot \left({y + \dfrac \pi 2}\right) = x$. Now since $-\dfrac \pi 2 \le y \le \dfrac \pi 2$ it follows that $0 \le y + \dfrac \pi 2 \le \pi$. Hence $y + \dfrac \pi 2 = \operatorname{arccot} x$. That is, $\dfrac \pi 2 = \operatorname{arccot} x + \arctan x$. {{qed}} \end{proof}
22145
\section{Sum of Arctangents} Tags: Arctangent Function, Inverse Tangent \begin{theorem} :$\arctan a + \arctan b = \arctan \dfrac {a + b} {1 - a b}$ where $\arctan$ denotes the arctangent. \end{theorem} \begin{proof} Let $x = \arctan a$ and $y = \arctan b$. Then: {{begin-eqn}} {{eqn | n = 1 | l = \tan x | r = a | c = }} {{eqn | n = 2 | l = \tan y | r = b | c = }} {{eqn | l = \map \tan {\arctan a + \arctan b} | r = \map \tan {x + y} | c = }} {{eqn | r = \frac {\tan x + \tan y} {1 - \tan x \tan y} | c = Tangent of Sum }} {{eqn | r = \frac {a + b} {1 - a b} | c = by $(1)$ and $(2)$ }} {{eqn | ll= \leadsto | l = \arctan a + \arctan b | r = \arctan \frac {a + b} {1 - a b} | c = }} {{end-eqn}} {{qed}} \end{proof}
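A numerical spot-check. Note that the identity as written holds directly when $a b < 1$; when $a b > 1$ the sum leaves the principal range of the arctangent, and the two sides differ by $\pi$:

```python
import math

def check(a, b):
    """Does arctan a + arctan b equal arctan((a + b)/(1 - a b)) on principal branches?"""
    lhs = math.atan(a) + math.atan(b)
    rhs = math.atan((a + b) / (1 - a * b))
    return math.isclose(lhs, rhs)

assert check(0.5, 0.3) and check(-2.0, 0.25)   # a*b < 1: identity holds directly

# a*b > 1: the sides differ by pi
lhs = math.atan(2.0) + math.atan(3.0)
rhs = math.atan((2.0 + 3.0) / (1 - 2.0 * 3.0))
assert math.isclose(lhs, rhs + math.pi)
print("ok")
```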
22146
\section{Sum of Arithmetic-Geometric Sequence} Tags: Arithmetic-Geometric Sequences, Arithmetic-Geometric Progressions, Sum of Arithmetic-Geometric Sequence, Sums of Sequences, Sum of Arithmetic-Geometric Progression, Algebra \begin{theorem} Let $\sequence {a_k}$ be an arithmetic-geometric sequence defined as: :$a_k = \paren {a + k d} r^k$ for $k = 0, 1, 2, \ldots, n - 1$ Then its closed-form expression is: :$\ds \sum_{k \mathop = 0}^{n - 1} \paren {a + k d} r^k = \frac {a \paren {1 - r^n} } {1 - r} + \frac {r d \paren {1 - n r^{n - 1} + \paren {n - 1} r^n} } {\paren {1 - r}^2}$ \end{theorem} \begin{proof} Proof by induction: For all $n \in \N_{> 0}$, let $P \left({n}\right)$ be the proposition: :$\displaystyle \sum_{k \mathop = 0}^{n - 1} \left({a + k d}\right) r^k = \frac {a \left({1 - r^n}\right)} {1 - r} + \frac {r d \left({1 - n r^{n - 1} + \left({n - 1}\right) r^n}\right)} {\left({1 - r}\right)^2}$ \end{proof}
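The closed-form expression can be spot-checked against the direct sum using exact rational arithmetic (for $r \ne 1$):

```python
from fractions import Fraction

def agp_sum(a, d, r, n):
    """Direct sum of (a + k d) r^k for k = 0, ..., n - 1."""
    return sum((a + k * d) * r ** k for k in range(n))

def closed_form(a, d, r, n):
    """The closed-form expression from the theorem, valid for r != 1."""
    return (a * (1 - r ** n)) / (1 - r) \
        + (r * d * (1 - n * r ** (n - 1) + (n - 1) * r ** n)) / (1 - r) ** 2

for a, d, r, n in [(1, 1, Fraction(1, 2), 10), (3, -2, Fraction(5, 3), 7), (0, 4, Fraction(-1, 3), 5)]:
    assert agp_sum(a, d, r, n) == closed_form(a, d, r, n)
print("closed form verified")
```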
22147
\section{Sum of Arithmetic Sequence} Tags: Arithmetic Sequences, Sum of Arithmetic Sequence, Sum of Arithmetic Progression, Arithmetic Progressions, Sums of Sequences, Algebra \begin{theorem} Let $\sequence {a_k}$ be an arithmetic sequence defined as: :$a_k = a + k d$ for $k = 0, 1, 2, \ldots, n - 1$ Then its closed-form expression is: {{begin-eqn}} {{eqn | l = \sum_{k \mathop = 0}^{n - 1} \paren {a + k d} | r = n \paren {a + \frac {n - 1} 2 d} | c = }} {{eqn | r = \frac {n \paren {a + l} } 2 | c = where $l$ is the last term of $\sequence {a_k}$ }} {{end-eqn}} \end{theorem} \begin{proof} We have that: :$\ds \sum_{k \mathop = 0}^{n - 1} \paren {a + k d} = a + \paren {a + d} + \paren {a + 2 d} + \dotsb + \paren {a + \paren {n - 1} d}$ Then: {{begin-eqn}} {{eqn | l = 2 \sum_{k \mathop = 0}^{n - 1} \paren {a + k d} | r = 2 \paren {a + \paren {a + d} + \paren {a + 2 d} + \dotsb + \paren {a + \paren {n - 1} d} } }} {{eqn | r = \paren {a + \paren {a + d} + \dotsb + \paren {a + \paren {n - 1} d} } }} {{eqn | ro= + | r = \paren {\paren {a + \paren {n - 1} d} + \paren {a + \paren {n - 2} d} + \dotsb + \paren {a + d} + a} }} {{eqn | r = \paren {2 a + \paren {n - 1} d}_1 + \paren {2 a + \paren {n - 1} d}_2 + \dotsb + \paren {2 a + \paren {n - 1} d}_n }} {{eqn | r = n \paren {2 a + \paren {n - 1} d} }} {{end-eqn}} So: {{begin-eqn}} {{eqn | l = 2 \sum_{k \mathop = 0}^{n - 1} \paren {a + k d} | r = n \paren {2 a + \paren {n - 1} d} }} {{eqn | ll= \leadsto | l = \sum_{k \mathop = 0}^{n - 1} \paren {a + k d} | r = \frac {n \paren {2 a + \paren {n - 1} d} } 2 }} {{eqn | r = \frac {n \paren {a + l} } 2 | c = {{Defof|Last Term of Arithmetic Sequence|Last Term}} $l$ }} {{end-eqn}} Hence the result. {{qed}} \end{proof}
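A quick sanity check of both forms of the closed-form expression, doubling throughout so the check stays in integer arithmetic:

```python
def arith_sum(a, d, n):
    """Direct sum of a + k d for k = 0, ..., n - 1."""
    return sum(a + k * d for k in range(n))

for a, d, n in [(2, 3, 10), (-5, 4, 7), (0, 0, 1)]:
    l = a + (n - 1) * d                                # last term
    assert 2 * arith_sum(a, d, n) == n * (2 * a + (n - 1) * d)
    assert 2 * arith_sum(a, d, n) == n * (a + l)
print("ok")
```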
22148
\section{Sum of Bernoulli Numbers by Binomial Coefficients Vanishes} Tags: Bernoulli Numbers, Binomial Coefficients, Sum of Bernoulli Numbers by Binomial Coefficients Vanishes \begin{theorem} :$\forall n \in \Z_{>1}: \ds \sum_{k \mathop = 0}^{n - 1} \binom n k B_k = 0$ where $B_k$ denotes the $k$th Bernoulli number. \end{theorem} \begin{proof} Take the definition of Bernoulli numbers: :$\ds \frac x {e^x - 1} = \sum_{n \mathop = 0}^\infty \frac {B_n x^n} {n!}$ From the definition of the exponential function: {{begin-eqn}} {{eqn | l = e^x | r = \sum_{n \mathop = 0}^\infty \frac {x^n} {n!} | c = }} {{eqn | r = 1 + \sum_{n \mathop = 1}^\infty \frac {x^n} {n!} | c = }} {{eqn | ll= \leadsto | l = \frac {e^x - 1} x | r = \sum_{n \mathop = 1}^\infty \frac {x^{n - 1} } {n!} | c = }} {{eqn | r = 1 + \frac x {2!} + \frac {x^2} {3!} + \cdots | c = }} {{end-eqn}} Thus: {{begin-eqn}} {{eqn | l = 1 | r = \paren {\frac x {e^x - 1} } \paren {\frac {e^x - 1} x} | c = }} {{eqn | r = \paren {\sum_{n \mathop = 0}^\infty \frac {B_n x^n} {n!} } \paren {\sum_{n \mathop = 1}^\infty \frac {x^{n - 1} } {n!} } | c = }} {{eqn | r = \paren {\sum_{n \mathop = 0}^\infty \frac {B_n x^n} {n!} } \paren {\sum_{n \mathop = 0}^\infty \frac {x^n} {\paren {n + 1}!} } | c = as both series start at zero }} {{end-eqn}} By Product of Absolutely Convergent Series, we will let: {{begin-eqn}} {{eqn | l = a_n | r = \frac {B_n x^n} {n!} | c = }} {{eqn | l = b_n | r = \frac {x^n} {\paren {n + 1}!} | c = }} {{end-eqn}} Then: {{begin-eqn}} {{eqn | l = \sum_{n \mathop = 0}^\infty c_n | r = \paren {\sum_{n \mathop = 0}^\infty a_n} \paren {\sum_{n \mathop = 0}^\infty b_n} | rr= =1 | c = }} {{eqn | l = c_n | r = \sum_{k \mathop = 0}^n a_k b_{n - k} | c = }} {{eqn | l = c_0 | r = \frac {B_0 x^0} {0!} \frac {x^0} {\paren {0 + 1}!} | rr= = 1 | c = as $c_0 = \paren {a_0} \paren {b_{0 - 0} } = \paren {a_0} \paren {b_0}$ }} {{eqn | ll= \leadsto | l = \sum_{n \mathop = 1}^\infty c_n | r = \paren {\sum_{n \mathop = 0}^\infty 
a_n} \paren {\sum_{n \mathop = 0}^\infty b_n} - a_0 b_0 | rr= = 0 | c = subtracting $1$ from both sides }} {{eqn | r = c_1 x + c_2 x^2 + c_3 x^3 + \cdots | rr= = 0 }} {{eqn | ll= \leadsto | q = \forall n \in \Z_{>0} | l = c_n | r = 0 }} {{end-eqn}} {{begin-eqn}} {{eqn | l = c_1 | r = \frac {B_0 x^0} {0!} \frac {x^{1} } {\paren {1 + 1 }!} + \frac {B_1 x^1} {1!} \frac {x^{0} } {\paren {0 + 1 }!} | rr= = 0 | rrr= = a_0 b_1 + a_1 b_0 }} {{eqn | l = c_2 | r = \frac {B_0 x^0} {0!} \frac {x^{2} } {\paren {2 + 1 }!} + \frac {B_1 x^1} {1!} \frac {x^{1} } {\paren {1 + 1 }!} + \frac {B_2 x^2} {2!} \frac {x^{0} } {\paren {0 + 1 }!} | rr= = 0 | rrr= = a_0 b_2 + a_1 b_1 + a_2 b_0 }} {{eqn | l = \cdots | r = \cdots | rr= = 0 }} {{eqn | l = c_n | r = \frac {B_0 x^0} {0!} \frac {x^{n} } {\paren {n + 1 }!} + \frac {B_1 x^1} {1!} \frac {x^{n-1} } {\paren {n - 1 + 1 }!} + \cdots + \frac {B_n x^n} {n!} \frac {x^{0} } {\paren {0 + 1 }!} | rr= = 0 | rrr= = a_0 b_n + a_1 b_{n - 1 } + a_2 b_{n - 2 } + \cdots + a_n b_0 }} {{end-eqn}} Multiplying $c_n$ through by $\paren {n + 1 }!$ gives: {{begin-eqn}} {{eqn | l = \paren {n + 1 }! c_n | r = \frac {B_0 x^0} {0!} \frac {\paren {n + 1 }! x^n } {\paren {n + 1 }!} + \frac {B_1 x^1} {1!} \frac {\paren {n + 1 }! x^{n-1} } {\paren {n - 1 + 1 }!} + \cdots + \frac {B_n x^n} {n!} \frac {\paren {n + 1 }! x^{0} } {\paren {0 + 1 }!} | rr= = 0 | c = }} {{eqn | r = x^n \paren {\frac {\paren {n + 1 }! } {0! \paren {n + 1 }!} B_0 + \frac {\paren {n + 1 }! } {1! \paren {n - 1 + 1 }!} B_1 + \cdots + \frac {\paren {n + 1 }! } {n! \paren {0 + 1 }!} B_n } | rr= = 0 | c = factoring out $x^n$ }} {{end-eqn}} But those coefficients are the binomial coefficients: {{begin-eqn}} {{eqn | l = \paren {n + 1 }! c_n | r = \dbinom {n + 1 } 0 B_0 + \dbinom {n + 1 } 1 B_1 + \dbinom {n + 1 } 2 B_2 + \cdots + \dbinom {n + 1 } n B_n | rr= = 0 | c = }} {{eqn | l = n! 
c_{n-1 } | r = \dbinom n 0 B_0 + \dbinom n 1 B_1 + \dbinom n 2 B_2 + \cdots + \dbinom n {n - 1} B_{n - 1} | rr= = 0 | c = }} {{end-eqn}} Hence the result. {{qed}} \end{proof}
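The identity can be spot-checked against the first few Bernoulli numbers, taken here as known values (with the $B_1 = -\frac 1 2$ convention used above):

```python
from fractions import Fraction
from math import comb

# Bernoulli numbers B_0 .. B_8 (B_1 = -1/2 convention), taken as known values.
B = [Fraction(1), Fraction(-1, 2), Fraction(1, 6), Fraction(0), Fraction(-1, 30),
     Fraction(0), Fraction(1, 42), Fraction(0), Fraction(-1, 30)]

# sum_{k=0}^{n-1} C(n, k) B_k = 0 for all n > 1:
for n in range(2, len(B) + 1):
    assert sum(comb(n, k) * B[k] for k in range(n)) == 0
print("recurrence verified for n = 2 .. 9")
```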
22149
\section{Sum of Bernoulli Numbers by Power of Two and Binomial Coefficient} Tags: Bernoulli Numbers, Sum of Bernoulli Numbers by Power of Two and Binomial Coefficient, Definitions: Bernoulli Numbers \begin{theorem} Let $n \in \Z_{>0}$ be a (strictly) positive integer. Then: {{begin-eqn}} {{eqn | l = \sum_{k \mathop = 1}^n \dbinom {2 n + 1} {2 k} 2^{2 k} B_{2 k} | r = \binom {2 n + 1} 2 2^2 B_2 + \binom {2 n + 1} 4 2^4 B_4 + \binom {2 n + 1} 6 2^6 B_6 + \cdots | c = }} {{eqn | r = 2 n | c = }} {{end-eqn}} where $B_n$ denotes the $n$th Bernoulli number. \end{theorem} \begin{proof} The proof proceeds by induction. For all $n \in \Z_{> 0}$, let $P \left({n}\right)$ be the proposition: :$\displaystyle \sum_{k \mathop = 1}^n \dbinom {2 n + 1} {2 k} 2^{2 k} B_{2 k} = 2 n$ \end{proof}
22150
\section{Sum of Big-O Estimates/Real Analysis} Tags: Asymptotic Notation \begin{theorem} Let $c$ be a real number. Let $f, g : \hointr c \infty \to \R$ be real functions. Let $R_1 : \hointr c \infty \to \R$ be a real function such that $f = \map \OO {R_1}$. Let $R_2 : \hointr c \infty \to \R$ be a real function such that $g = \map \OO {R_2}$. Then: :$f + g = \map \OO {\size {R_1} + \size {R_2} }$ \end{theorem} \begin{proof} Since: :$f = \map \OO {R_1}$ there exists $x_1 \in \hointr c \infty$ and a real number $C_1$ such that: :$\size {\map f x} \le C_1 \size {\map {R_1} x}$ for $x \ge x_1$. Similarly, since: :$g = \map \OO {R_2}$ there exists $x_2 \in \hointr c \infty$ and a real number $C_2$ such that: :$\size {\map g x} \le C_2 \size {\map {R_2} x}$ for $x \ge x_2$. Set: :$x_0 = \max \set {x_1, x_2}$ and: :$C = \max \set {C_1, C_2}$ Then, for $x \ge x_0$ we have: {{begin-eqn}} {{eqn | l = \size {\map f x + \map g x} | o = \le | r = \size {\map f x} + \size {\map g x} | c = Triangle Inequality }} {{eqn | o = \le | r = C_1 \size {\map {R_1} x} + C_2 \size {\map {R_2} x} | c = since $x \ge x_1$ and $x \ge x_2$ }} {{eqn | o = \le | r = C \size {\map {R_1} x} + C \size {\map {R_2} x} }} {{eqn | r = C \size {\size {\map {R_1} x} + \size {\map {R_2} x} } }} {{end-eqn}} That is, by the definition of big-O notation, we have: :$f + g = \map \OO {\size {R_1} + \size {R_2} }$ {{qed}} Category:Asymptotic Notation \end{proof}
22151
\section{Sum of Big-O Estimates/Sequences} Tags: Asymptotic Notation \begin{theorem} Let $\sequence {a_n}, \sequence {b_n}, \sequence {c_n}, \sequence {d_n}$ be sequences of real or complex numbers. Let: :$a_n = \map \OO {b_n}$ :$c_n = \map \OO {d_n}$ where $\OO$ denotes big-O notation. Then: :$a_n + c_n = \map \OO {\size {b_n} + \size {d_n} }$ \end{theorem} \begin{proof} Since: :$a_n = \map \OO {b_n}$ there exists a positive real number $C_1$ and natural number $N_1$ such that: :$\size {a_n} \le C_1 \size {b_n}$ for all $n \ge N_1$. Similarly, since: :$c_n = \map \OO {d_n}$ there exists a positive real number $C_2$ and natural number $N_2$ such that: :$\size {c_n} \le C_2 \size {d_n}$ for all $n \ge N_2$. Let: :$N = \max \set {N_1, N_2}$ Then, for $n \ge N$ we have: {{begin-eqn}} {{eqn | l = \size {a_n + c_n} | o = \le | r = \size {a_n} + \size {c_n} | c = Triangle Inequality }} {{eqn | o = \le | r = C_1 \size {b_n} + \size {c_n} | c = since $n \ge N_1$ }} {{eqn | o = \le | r = C_1 \size {b_n} + C_2 \size {d_n} | c = since $n \ge N_2$ }} {{end-eqn}} Let: :$C = \max \set {C_1, C_2}$ Then: {{begin-eqn}} {{eqn | l = C_1 \size {b_n} + C_2 \size {d_n} | o = \le | r = C \size {b_n} + C \size {d_n} }} {{eqn | r = C \paren {\size {b_n} + \size {d_n} } }} {{eqn | r = C \size {\size {b_n} + \size {d_n} } }} {{end-eqn}} So: :$\size {a_n + c_n} \le C \size {\size {b_n} + \size {d_n} }$ for $n \ge N$. So: :$a_n + c_n = \map \OO {\size {b_n} + \size {d_n} }$ {{qed}} Category:Asymptotic Notation \end{proof}
22152
\section{Sum of Binomial Coefficients over Lower Index/Corollary} Tags: Binomial Coefficients, Sum of Binomial Coefficients over Lower Index \begin{theorem} :$\ds \forall n \in \Z_{\ge 0}: \sum_{i \mathop \in \Z} \binom n i = 2^n$ where $\dbinom n i$ is a binomial coefficient. \end{theorem} \begin{proof} From the definition of the binomial coefficient, when $i < 0$ or $i > n$ we have $\dbinom n i = 0$. The result follows directly from Sum of Binomial Coefficients over Lower Index. {{qed}} Category:Binomial Coefficients Category:Sum of Binomial Coefficients over Lower Index \end{proof}
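A quick numerical confirmation for small $n$ (the terms outside $0 \le i \le n$ vanish, so summing over that range suffices):

```python
from math import comb

# sum over the lower index of C(n, i) equals 2^n:
for n in range(10):
    assert sum(comb(n, i) for i in range(n + 1)) == 2 ** n
print("ok")
```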
22153
\section{Sum of Bounded Linear Transformations is Bounded Linear Transformation} Tags: Linear Transformations on Hilbert Spaces \begin{theorem} Let $\mathbb F \in \set {\R, \C}$. Let $\struct {\HH, \innerprod \cdot \cdot_\HH}$ and $\struct {\KK, \innerprod \cdot \cdot_\KK}$ be Hilbert spaces over $\mathbb F$. Let $A, B : \HH \to \KK$ be bounded linear transformations. Let $\norm \cdot$ be the norm on the space of bounded linear transformations. Then: :$A + B$ is a bounded linear transformation with: :$\norm {A + B} \le \norm A + \norm B$ \end{theorem} \begin{proof} From Addition of Linear Transformations, we have that: :$A + B$ is a linear transformation. It remains to show that $A + B$ is bounded. Let $\norm \cdot_\HH$ be the inner product norm on $\HH$. Let $\norm \cdot_\KK$ be the inner product norm on $\KK$. Since $A$ is a bounded linear transformation, from Fundamental Property of Norm on Bounded Linear Transformation, we have: :$\norm {A x}_\KK \le \norm A \norm x_\HH$ for all $x \in \HH$. Similarly, since $B$ is a bounded linear transformation we have: :$\norm {B x}_\KK \le \norm B \norm x_\HH$ for all $x \in \HH$. Let $x \in \HH$. Then, we have: {{begin-eqn}} {{eqn | l = \norm {\paren {A + B} x}_\KK | r = \norm {A x + B x}_\KK }} {{eqn | o = \le | r = \norm {A x}_\KK + \norm {B x}_\KK | c = {{Defof|Norm on Vector Space}} }} {{eqn | o = \le | r = \norm A \norm x_\HH + \norm B \norm x_\HH }} {{eqn | r = \paren {\norm A + \norm B} \norm x_\HH }} {{end-eqn}} So, taking $c = \norm A +\norm B$, we have: :$\norm {\paren {A + B} x}_\KK \le c \norm x_\HH$ for all $x \in \HH$. So: :$A + B$ is a bounded linear transformation. 
Note that: :$\norm A + \norm B \in \set {c > 0: \forall h \in \HH: \norm {\paren {A + B} h}_\KK \le c \norm h_\HH}$ while, by the definition of the norm, we have: :$\norm {A + B} = \inf \set {c > 0: \forall h \in \HH: \norm {\paren {A + B} h}_\KK \le c \norm h_\HH}$ So, by the definition of infimum: :$\norm {A + B} \le \norm A + \norm B$ {{qed}} Category:Linear Transformations on Hilbert Spaces \end{proof}
22154
\section{Sum of Cardinals is Associative} Tags: Cardinals \begin{theorem} Let $\mathbf a$, $\mathbf b$ and $\mathbf c$ be cardinals. Then: : $\mathbf a + \paren {\mathbf b + \mathbf c} = \paren {\mathbf a + \mathbf b} + \mathbf c$ where $\mathbf a + \mathbf b$ denotes the sum of $\mathbf a$ and $\mathbf b$. \end{theorem} \begin{proof} Let $\mathbf a = \card A, \mathbf b = \card B$ and $\mathbf c = \card C$ for some sets $A$, $B$ and $C$. Let $A, B, C$ be pairwise disjoint, that is: :$A \cap B = \O$ :$B \cap C = \O$ :$A \cap C = \O$ Then we can define: :$A \sqcup B := A \cup B$ :$B \sqcup C := B \cup C$ :$A \sqcup C := A \cup C$ where $A \sqcup B$ denotes the disjoint union of $A$ and $B$. Then we have: :$\mathbf a + \mathbf b = \card {A \sqcup B} = \card {A \cup B}$ :$\mathbf b + \mathbf c = \card {B \sqcup C} = \card {B \cup C}$ Then: {{begin-eqn}} {{eqn | l=\paren {A \cup B} \cap C | r=\paren {A \cap C} \cup \paren {B \cap C} | c=Intersection Distributes over Union }} {{eqn | r=\O \cup \O | c=as $A \cap C = \O$ and $B \cap C = \O$ }} {{eqn | r=\O | c=Union with Empty Set }} {{end-eqn}} Then: {{begin-eqn}} {{eqn | l=\card {\paren {A \cup B} \cup C} | r=\card {A \cup B} + \card C | c=as $\paren {A \cup B} \cap C = \O$ from above }} {{eqn | r=\paren {\mathbf a + \mathbf b} + \mathbf c | c={{Defof|Sum of Cardinals}} }} {{end-eqn}} Similarly: {{begin-eqn}} {{eqn | l=A \cap \paren {B \cup C} | r=\paren {A \cap B} \cup \paren {A \cap C} | c=Intersection Distributes over Union }} {{eqn | r=\O \cup \O | c=as $A \cap B = \O$ and $A \cap C = \O$ }} {{eqn | r=\O | c=Union with Empty Set }} {{end-eqn}} Then: {{begin-eqn}} {{eqn | l=\card {A \cup \paren {B \cup C} } | r=\card A + \card {B \cup C} | c=as $A \cap \paren {B \cup C} = \O$ from above }} {{eqn | r=\mathbf a + \paren {\mathbf b + \mathbf c} | c={{Defof|Sum of Cardinals}} }} {{end-eqn}} Finally note that from Union is Associative: :$A \cup \paren {B \cup C} = \paren {A \cup B} \cup C$ {{qed}} \end{proof}
22155
\section{Sum of Cardinals is Commutative} Tags: Cardinals \begin{theorem} Let $\mathbf a$ and $\mathbf b$ be cardinals. Then: :$\mathbf a + \mathbf b = \mathbf b + \mathbf a$ where $\mathbf a + \mathbf b$ denotes the sum of $\mathbf a$ and $\mathbf b$. \end{theorem} \begin{proof} Let $\mathbf a = \map \Card A$ and $\mathbf b = \map \Card B$ for some sets $A$ and $B$ such that $A \cap B = \O$. Then: {{begin-eqn}} {{eqn | l = \mathbf a + \mathbf b | r = \map \Card {A \cup B} | c = {{Defof|Sum of Cardinals}} }} {{eqn | r = \map \Card {B \cup A} | c = Union is Commutative }} {{eqn | r = \mathbf b + \mathbf a | c = {{Defof|Sum of Cardinals}} }} {{end-eqn}} {{qed}} \end{proof}
22156
\section{Sum of Ceilings not less than Ceiling of Sum} Tags: Ceiling Function, Floor and Ceiling \begin{theorem} Let $\ceiling x$ be the ceiling function. Then: :$\ceiling x + \ceiling y \ge \ceiling {x + y}$ The equality holds: :$\ceiling x + \ceiling y = \ceiling {x + y}$ {{iff}} either: :$x \in \Z$ or $y \in \Z$ or: :$x \bmod 1 + y \bmod 1 > 1$ where $x \bmod 1$ denotes the modulo operation. \end{theorem} \begin{proof} From the definition of the modulo operation, we have that: :$x = \floor x + \paren {x \bmod 1}$ from which we obtain: :$x = \ceiling x - \sqbrk {x \notin \Z} + \paren {x \bmod 1}$ where $\sqbrk {x \notin \Z}$ uses Iverson's convention. {{begin-eqn}} {{eqn | l = \ceiling {x + y} | r = \ceiling {\floor x + \paren {x \bmod 1} + \floor y + \paren {y \bmod 1} } | c = }} {{eqn | r = \ceiling {\ceiling x - \sqbrk {x \notin \Z} + \paren {x \bmod 1} + \ceiling y - \sqbrk {y \notin \Z} + \paren {y \bmod 1} } | c = }} {{eqn | r = \ceiling x + \ceiling y + \ceiling {\paren {x \bmod 1} + \paren {y \bmod 1} } - \sqbrk {x \notin \Z} - \sqbrk {y \notin \Z} | c = Ceiling of Number plus Integer }} {{end-eqn}} We have that: :$x \notin \Z \implies x \bmod 1 > 0$ As $0 \le x \bmod 1 < 1$ it follows that: :$\sqbrk {x \notin \Z} \ge x \bmod 1$ Hence the inequality. The equality holds {{iff}}: :$\ceiling {\paren {x \bmod 1} + \paren {y \bmod 1} } = \sqbrk {x \notin \Z} + \sqbrk {y \notin \Z}$ that is, {{iff}} one of the following holds: :$x \in \Z$, in which case $x \bmod 1 = 0$ :$y \in \Z$, in which case $y \bmod 1 = 0$ :both $x, y \in \Z$, in which case $\paren {x \bmod 1} + \paren {y \bmod 1} = 0$ :both $x, y \notin \Z$ and $\paren {x \bmod 1} + \paren {y \bmod 1} > 1$. {{qed}} \end{proof}
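Both the inequality and the stated equality condition can be checked exhaustively on a grid of quarter-integers, using exact arithmetic (`Fraction` implements `__floor__` and `__ceil__`, so `math.ceil` is exact here):

```python
import math
from fractions import Fraction

def frac_part(x):
    """x mod 1, i.e. x - floor(x)."""
    return x - math.floor(x)

xs = [Fraction(p, 4) for p in range(-8, 9)]    # quarter-integers in [-2, 2]
for x in xs:
    for y in xs:
        lhs, rhs = math.ceil(x) + math.ceil(y), math.ceil(x + y)
        assert lhs >= rhs
        # equality iff x or y is an integer, or the fractional parts sum to more than 1
        equality = x.denominator == 1 or y.denominator == 1 or frac_part(x) + frac_part(y) > 1
        assert (lhs == rhs) == equality
print("ok")
```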
22157
\section{Sum of Chi-Squared Random Variables} Tags: Chi-Squared Distribution \begin{theorem} Let $n_1, n_2, \ldots, n_k$ be strictly positive integers which sum to $N$. Let $X_1, X_2, \ldots, X_k$ be independent random variables with $X_i \sim {\chi^2}_{n_i}$ for $1 \le i \le k$, where ${\chi^2}_{n_i}$ is the chi-squared distribution with $n_i$ degrees of freedom. Then: :$\ds X = \sum_{i \mathop = 1}^k X_i \sim {\chi^2}_N$ \end{theorem} \begin{proof} Let $Y \sim {\chi^2}_N$. By Moment Generating Function of Chi-Squared Distribution, the moment generating function of $X_i$ is given by: :$\map {M_{X_i} } t = \paren {1 - 2 t}^{-n_i / 2}$ Similarly, the moment generating function of $Y$ is given by: :$\map {M_Y} t = \paren {1 - 2 t}^{-N / 2}$ By Moment Generating Function of Linear Combination of Independent Random Variables, the moment generating function of $X$ is given by: :$\ds \map {M_X} t = \prod_{i \mathop = 1}^k \map {M_{X_i} } t$ We aim to show that: :$\map {M_X} t = \map {M_Y} t$ By Moment Generating Function is Unique, this ensures that $X$ and $Y$ have the same distribution. We have: {{begin-eqn}} {{eqn | l = \map {M_X} t | r = \prod_{i \mathop = 1}^k \paren {1 - 2 t}^{-n_i / 2} }} {{eqn | r = \paren {1 - 2 t}^{-\paren {n_1 + n_2 + \ldots + n_k} / 2} }} {{eqn | r = \paren {1 - 2 t}^{-N / 2} }} {{eqn | r = \map {M_Y} t }} {{end-eqn}} {{qed}} Category:Chi-Squared Distribution \end{proof}
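The algebraic heart of the proof is the MGF identity $\prod_i \paren {1 - 2t}^{-n_i/2} = \paren {1 - 2t}^{-N/2}$ for $t < \frac 1 2$. A numeric spot-check in Python (the degrees of freedom and sample points are arbitrary):

```python
import math

ns = [3, 5, 2]          # hypothetical degrees of freedom
N = sum(ns)             # N = 10

# prod_i (1 - 2t)^(-n_i/2) should equal (1 - 2t)^(-N/2) wherever t < 1/2.
for t in [-1.0, -0.3, 0.0, 0.2, 0.49]:
    prod = 1.0
    for n in ns:
        prod *= (1 - 2 * t) ** (-n / 2)
    assert math.isclose(prod, (1 - 2 * t) ** (-N / 2), rel_tol=1e-9)
```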
22158
\section{Sum of Complex Conjugates} Tags: Complex Analysis, Complex Conjugates, Complex Addition \begin{theorem} Let $z_1, z_2 \in \C$ be complex numbers. Let $\overline z$ denote the complex conjugate of the complex number $z$. Then: :$\overline {z_1 + z_2} = \overline {z_1} + \overline {z_2}$ \end{theorem} \begin{proof} Let $z_1 = x_1 + i y_1, z_2 = x_2 + i y_2$. Then: {{begin-eqn}} {{eqn | l = \overline {z_1 + z_2} | r = \overline {\paren {x_1 + x_2} + i \paren {y_1 + y_2} } | c = }} {{eqn | r = \paren {x_1 + x_2} - i \paren {y_1 + y_2} | c = {{Defof|Complex Conjugate}} }} {{eqn | r = \paren {x_1 - i y_1} + \paren {x_2 - i y_2} | c = {{Defof|Complex Addition}} }} {{eqn | r = \overline {z_1} + \overline {z_2} | c = {{Defof|Complex Conjugate}} }} {{end-eqn}} {{qed}} \end{proof}
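A quick numeric confirmation using Python's built-in complex type (conjugation only negates the imaginary part, so the identity holds exactly in floating point; the sample values are arbitrary):

```python
# conj(z1 + z2) == conj(z1) + conj(z2), checked exactly on a few pairs.
pairs = [(1 + 2j, 3 - 4j), (-0.5 + 1j, 2.25 + 0j), (0j, -7 - 1j)]
for z1, z2 in pairs:
    assert (z1 + z2).conjugate() == z1.conjugate() + z2.conjugate()
```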
22159
\section{Sum of Complex Exponentials of i times Arithmetic Sequence of Angles/Formulation 1} Tags: Exponential Function \begin{theorem} Let $\alpha \in \R$ be a real number such that $\alpha \ne 2 \pi k$ for $k \in \Z$. Then: :$\ds \sum_{k \mathop = 0}^n e^{i \paren {\theta + k \alpha} } = \paren {\map \cos {\theta + \frac {n \alpha} 2} + i \map \sin {\theta + \frac {n \alpha} 2} } \frac {\map \sin {\alpha \paren {n + 1} / 2} } {\map \sin {\alpha / 2} }$ \end{theorem} \begin{proof} First note that if $\alpha = 2 \pi k$ for $k \in \Z$, then $e^{i \alpha} = 1$. {{begin-eqn}} {{eqn | l = \sum_{k \mathop = 0}^n e^{i \paren {\theta + k \alpha} } | r = e^{i \theta} \sum_{k \mathop = 0}^n e^{i k \alpha} | c = factorising $e^{i \theta}$ }} {{eqn | r = e^{i \theta} \paren {\frac {e^{i \paren {n + 1} \alpha} - 1} {e^{i \alpha} - 1} } | c = Sum of Geometric Sequence: only when $e^{i \alpha} \ne 1$ }} {{eqn | r = \frac {e^{i \theta} e^{i \paren {n + 1} \alpha / 2} } {e^{i \alpha / 2} } \paren {\frac {e^{i \paren {n + 1} \alpha / 2} - e^{-i \paren {n + 1} \alpha / 2} } {e^{i \alpha / 2} - e^{-i \alpha / 2} } } | c = extracting factors }} {{eqn | r = e^{i \paren {\theta + n \alpha / 2} } \paren {\frac {e^{i \paren {n + 1} \alpha / 2} - e^{-i \paren {n + 1} \alpha / 2} } {e^{i \alpha / 2} - e^{-i \alpha / 2} } } | c = Exponential of Sum and some algebra }} {{eqn | r = \paren {\map \cos {\theta + \frac {n \alpha} 2} + i \map \sin {\theta + \frac {n \alpha} 2} } \paren {\frac {e^{i \paren {n + 1} \alpha / 2} - e^{-i \paren {n + 1} \alpha / 2} } {e^{i \alpha / 2} - e^{-i \alpha / 2} } } | c = Euler's Formula }} {{eqn | r = \paren {\map \cos {\theta + \frac {n \alpha} 2} + i \map \sin {\theta + \frac {n \alpha} 2} } \frac {\map \sin {\alpha \paren {n + 1} / 2} } {\map \sin {\alpha / 2} } | c = Sine Exponential Formulation }} {{end-eqn}} {{qed}} Category:Exponential Function \end{proof}
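The closed form can be spot-checked numerically. This Python sketch compares the direct sum against the right-hand side, writing the bracketed cosine-plus-sine factor as the equivalent phase $e^{i \paren {\theta + n \alpha / 2} }$; the test angles are arbitrary:

```python
import cmath
import math

def lhs(theta, alpha, n):
    """Direct sum: sum_{k=0}^{n} e^{i(theta + k*alpha)}."""
    return sum(cmath.exp(1j * (theta + k * alpha)) for k in range(n + 1))

def rhs(theta, alpha, n):
    """Closed form from the theorem."""
    phase = cmath.exp(1j * (theta + n * alpha / 2))
    return phase * math.sin(alpha * (n + 1) / 2) / math.sin(alpha / 2)

for theta, alpha, n in [(0.3, 0.7, 5), (-1.2, 2.5, 8), (2.0, -0.9, 3)]:
    assert cmath.isclose(lhs(theta, alpha, n), rhs(theta, alpha, n), abs_tol=1e-12)
```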
22160
\section{Sum of Complex Exponentials of i times Arithmetic Sequence of Angles/Formulation 2} Tags: Exponential Function \begin{theorem} Let $\alpha \in \R$ be a real number such that $\alpha \ne 2 \pi k$ for $k \in \Z$. Then: :$\ds \sum_{k \mathop = 1}^n e^{i \paren {\theta + k \alpha} } = \paren {\map \cos {\theta + \frac {n + 1} 2 \alpha} + i \map \sin {\theta + \frac {n + 1} 2 \alpha} } \frac {\map \sin {n \alpha / 2} } {\map \sin {\alpha / 2} }$ \end{theorem} \begin{proof} First note that if $\alpha = 2 \pi k$ for $k \in \Z$, then $e^{i \alpha} = 1$. {{begin-eqn}} {{eqn | l = \sum_{k \mathop = 1}^n e^{i \paren {\theta + k \alpha} } | r = e^{i \theta} e^{i \alpha} \sum_{k \mathop = 0}^{n - 1} e^{i k \alpha} | c = factorising $e^{i \theta} e^{i \alpha}$ }} {{eqn | r = e^{i \theta} e^{i \alpha} \paren {\frac {e^{i n \alpha} - 1} {e^{i \alpha} - 1} } | c = Sum of Geometric Sequence: only when $e^{i \alpha} \ne 1$ }} {{eqn | r = e^{i \theta} e^{i \alpha} \paren {\frac {e^{i n \alpha / 2} \paren {e^{i n \alpha / 2} - e^{-i n \alpha / 2} } } {e^{i \alpha / 2} \paren {e^{i \alpha / 2} - e^{-i \alpha / 2} } } } | c = extracting factors }} {{eqn | r = e^{i \paren {\theta + \paren {n + 1} \alpha / 2} } \paren {\frac {e^{i n \alpha / 2} - e^{-i n \alpha / 2} } {e^{i \alpha / 2} - e^{-i \alpha / 2} } } | c = Exponential of Sum and some algebra }} {{eqn | r = \paren {\map \cos {\theta + \frac {n + 1} 2 \alpha} + i \map \sin {\theta + \frac {n + 1} 2 \alpha} } \paren {\frac {e^{i n \alpha / 2} - e^{-i n \alpha / 2} } {e^{i \alpha / 2} - e^{-i \alpha / 2} } } | c = Euler's Formula }} {{eqn | r = \paren {\map \cos {\theta + \frac {n + 1} 2 \alpha} + i \map \sin {\theta + \frac {n + 1} 2 \alpha} } \frac {\map \sin {n \alpha / 2} } {\map \sin {\alpha / 2} } | c = Sine Exponential Formulation }} {{end-eqn}} {{qed}} \end{proof}
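As with Formulation 1, the identity is easy to sanity-check numerically; note the sum now starts at $k = 1$, the phase shift is $\theta + \frac {n + 1} 2 \alpha$, and the numerator is $\map \sin {n \alpha / 2}$ (arbitrary test angles):

```python
import cmath
import math

def lhs2(theta, alpha, n):
    """Direct sum: sum_{k=1}^{n} e^{i(theta + k*alpha)}."""
    return sum(cmath.exp(1j * (theta + k * alpha)) for k in range(1, n + 1))

def rhs2(theta, alpha, n):
    """Closed form from the theorem."""
    phase = cmath.exp(1j * (theta + (n + 1) * alpha / 2))
    return phase * math.sin(n * alpha / 2) / math.sin(alpha / 2)

for theta, alpha, n in [(0.4, 0.8, 6), (-0.7, 1.9, 4), (1.5, -1.2, 7)]:
    assert cmath.isclose(lhs2(theta, alpha, n), rhs2(theta, alpha, n), abs_tol=1e-12)
```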
22161
\section{Sum of Complex Exponentials of i times Arithmetic Sequence of Angles/Formulation 3} Tags: Exponential Function \begin{theorem} Let $\alpha \in \R$ be a real number such that $\alpha \ne 2 \pi k$ for $k \in \Z$. Then: :$\ds \sum_{k \mathop = p}^q e^{i \paren {\theta + k \alpha} } = \paren {\map \cos {\theta + \frac {\paren {p + q} \alpha} 2} + i \map \sin {\theta + \frac {\paren {p + q} \alpha} 2} } \frac {\map \sin {\paren {q - p + 1} \alpha / 2} } {\map \sin {\alpha / 2} }$ \end{theorem} \begin{proof} First note that if $\alpha = 2 \pi k$ for $k \in \Z$, then $e^{i \alpha} = 1$. {{begin-eqn}} {{eqn | l = \sum_{k \mathop = p}^q e^{i \paren {\theta + k \alpha} } | r = e^{i \theta} e^{i p \alpha} \sum_{k \mathop = 0}^{q - p} e^{i k \alpha} | c = factorising $e^{i \theta} e^{i p \alpha}$ }} {{eqn | r = e^{i \theta} e^{i p \alpha} \paren {\frac {e^{i \paren {q - p + 1} \alpha} - 1} {e^{i \alpha} - 1} } | c = Sum of Geometric Sequence: only when $e^{i \alpha} \ne 1$ }} {{eqn | r = e^{i \theta} e^{i p \alpha} \frac {e^{i \paren {q - p + 1} \alpha / 2} } {e^{i \alpha / 2} } \paren {\frac {e^{i \paren {q - p + 1} \alpha / 2} - e^{-i \paren {q - p + 1} \alpha / 2} } {e^{i \alpha / 2} - e^{-i \alpha / 2} } } | c = extracting factors }} {{eqn | r = e^{i \paren {\theta + \paren {p + q} \alpha / 2} } \paren {\frac {e^{i \paren {q - p + 1} \alpha / 2} - e^{-i \paren {q - p + 1} \alpha / 2} } {e^{i \alpha / 2} - e^{-i \alpha / 2} } } | c = Exponential of Sum and some algebra }} {{eqn | r = \paren {\map \cos {\theta + \frac {\paren {p + q} \alpha} 2} + i \map \sin {\theta + \frac {\paren {p + q} \alpha} 2} } \paren {\frac {e^{i \paren {q - p + 1} \alpha / 2} - e^{-i \paren {q - p + 1} \alpha / 2} } {e^{i \alpha / 2} - e^{-i \alpha / 2} } } | c = Euler's Formula }} {{eqn | r = \paren {\map \cos {\theta + \frac {\paren {p + q} \alpha} 2} + i \map \sin {\theta + \frac {\paren {p + q} \alpha} 2} } \frac {\map \sin {\paren {q - p + 1} \alpha / 2} } {\map \sin {\alpha / 2} } | 
c = Sine Exponential Formulation }} {{end-eqn}} {{qed}} Category:Exponential Function \end{proof}
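This general form, with arbitrary bounds $p \le q$, subsumes both earlier formulations. A numeric spot-check (arbitrary test values, including a negative lower bound):

```python
import cmath
import math

def lhs3(theta, alpha, p, q):
    """Direct sum: sum_{k=p}^{q} e^{i(theta + k*alpha)}."""
    return sum(cmath.exp(1j * (theta + k * alpha)) for k in range(p, q + 1))

def rhs3(theta, alpha, p, q):
    """Closed form: phase theta + (p+q)alpha/2, numerator sin((q-p+1)alpha/2)."""
    phase = cmath.exp(1j * (theta + (p + q) * alpha / 2))
    return phase * math.sin((q - p + 1) * alpha / 2) / math.sin(alpha / 2)

for theta, alpha, p, q in [(0.2, 0.6, 2, 9), (-1.0, 1.3, -3, 4), (0.0, 2.2, 0, 5)]:
    assert cmath.isclose(lhs3(theta, alpha, p, q), rhs3(theta, alpha, p, q), abs_tol=1e-12)
```

Setting $p = 0$ recovers Formulation 1 and $p = 1$ recovers Formulation 2.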
22162
\section{Sum of Complex Indices of Real Number} Tags: Powers \begin{theorem} Let $r \in \R_{> 0}$ be a (strictly) positive real number. Let $\psi, \tau \in \C$ be complex numbers. Let $r^z$, for $z \in \C$, be defined as the principal branch of a positive real number raised to a complex number. Then: :$r^{\psi \mathop + \tau} = r^\psi \times r^\tau$ \end{theorem} \begin{proof} We have: {{begin-eqn}} {{eqn | l = r^{\psi \mathop + \tau} | r = \map \exp {\paren {\psi + \tau} \ln r} | c = {{Defof|Power (Algebra)/Complex Number/Principal Branch/Positive Real Base|Principal Branch of Positive Real Number raised to Complex Number}} }} {{eqn | r = \map \exp {\psi \ln r + \tau \ln r} }} {{eqn | r = \map \exp {\psi \ln r} \, \map \exp {\tau \ln r} | c = Exponential of Sum }} {{eqn | r = r^\psi \times r^\tau | c = {{Defof|Power (Algebra)/Complex Number/Principal Branch/Positive Real Base|Principal Branch of Positive Real Number raised to Complex Number}} }} {{end-eqn}} {{qed}} Category:Powers \end{proof}
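The principal branch $r^z = \map \exp {z \ln r}$ for real $r > 0$ can be implemented directly, and the index law verified numerically (the base and exponents are arbitrary):

```python
import cmath
import math

def cpow(r, z):
    """Principal branch of r**z for real r > 0, complex z: exp(z * ln r)."""
    return cmath.exp(z * math.log(r))

r = 2.5
for psi, tau in [(1 + 2j, 3 - 1j), (-0.5j, 0.75 + 0.25j)]:
    assert cmath.isclose(cpow(r, psi + tau), cpow(r, psi) * cpow(r, tau), rel_tol=1e-9)
```

Note that this law relies on $\ln r$ being the (single-valued) real logarithm; for a complex base the multivalued logarithm makes the corresponding statement more delicate.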
22163
\section{Sum of Complex Integrals on Adjacent Intervals} Tags: Complex Analysis \begin{theorem} Let $\closedint a b$ be a closed real interval. Let $f: \closedint a b \to \C$ be a continuous complex function. Let $c \in \closedint a b$. Then: :$\ds \int_a^c \map f t \rd t + \int_c^b \map f t \rd t = \int_a^b \map f t \rd t$ \end{theorem} \begin{proof} From Continuous Complex Function is Complex Riemann Integrable, it follows that all three complex Riemann integrals are well defined. From Real and Imaginary Part Projections are Continuous, it follows that $\Re: \C \to \R$ and $\Im: \C \to \R$ are continuous functions. {{explain|Revisit the above link -- see if there is a more appropriate one to use so as not to invoke the concept of metric spaces}} From Composite of Continuous Mappings is Continuous, it follows that $\Re \circ f: \R \to \R$ and $\Im \circ f: \R \to \R$ are continuous real functions. Then: {{begin-eqn}} {{eqn | l = \int_a^b \map f t \rd t | r = \int_a^b \map \Re {\map f t} \rd t + i \int_a^b \map \Im {\map f t} \rd t | c = {{Defof|Complex Riemann Integral}} }} {{eqn | r = \int_a^c \map \Re {\map f t} \rd t + \int_c^b \map \Re {\map f t} \rd t + i \paren {\int_a^c \map \Im {\map f t} \rd t + \int_c^b \map \Im {\map f t} \rd t} | c = Sum of Integrals on Adjacent Intervals for Continuous Functions }} {{eqn | r = \int_a^c \map \Re {\map f t} \rd t + i \int_a^c \map \Im {\map f t} \rd t + \int_c^b \map \Re {\map f t} \rd t + i \int_c^b \map \Im {\map f t} \rd t }} {{eqn | r = \int_a^c \map f t \rd t + \int_c^b \map f t \rd t }} {{end-eqn}} {{qed}} Category:Complex Analysis \end{proof}
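A numeric illustration of the additivity: approximate each complex integral by a midpoint sum and compare the whole-interval value against the sum over the two adjacent subintervals. The integrand and the split point are arbitrary, and the tolerance reflects discretisation error rather than the theorem itself:

```python
import cmath

def midpoint_sum(f, a, b, n=4000):
    """Midpoint-rule approximation of the complex integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

f = lambda t: t * cmath.exp(1j * t)   # a sample continuous complex function
a, c, b = 0.0, 1.0, 2.0
whole = midpoint_sum(f, a, b)
parts = midpoint_sum(f, a, c) + midpoint_sum(f, c, b)
assert abs(whole - parts) < 1e-6
```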
22164
\section{Sum of Complex Number with Conjugate} Tags: Complex Analysis, Complex Conjugates \begin{theorem} Let $z \in \C$ be a complex number. Let $\overline z$ be the complex conjugate of $z$. Let $\map \Re z$ be the real part of $z$. Then: :$z + \overline z = 2 \, \map \Re z$ \end{theorem} \begin{proof} Let $z = x + i y$. Then: {{begin-eqn}} {{eqn | l = z + \overline z | r = \paren {x + i y} + \paren {x - i y} | c = {{Defof|Complex Conjugate}} }} {{eqn | r = 2 x }} {{eqn | r = 2 \, \map \Re z | c = {{Defof|Real Part}} }} {{end-eqn}} {{qed}} \end{proof}
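A one-line numeric confirmation: since the imaginary parts cancel exactly, $z + \overline z$ equals $2 \, \map \Re z$ even in floating point (sample values are arbitrary):

```python
# z + conj(z) == 2 * Re(z), exactly, for any complex z.
for z in [3 + 4j, -2.5 + 0.5j, 1j, 7 + 0j]:
    assert z + z.conjugate() == 2 * z.real
```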
22165
\section{Sum of Complex Numbers in Exponential Form} Tags: Complex Numbers, Complex Addition \begin{theorem} Let $z_1 = r_1 e^{i \theta_1}$ and $z_2 = r_2 e^{i \theta_2}$ be complex numbers expressed in exponential form. Let $z_3 = r_3 e^{i \theta_3} = z_1 + z_2$. Then: :$r_3 = \sqrt {r_1^2 + r_2^2 + 2 r_1 r_2 \map \cos {\theta_1 - \theta_2} }$ :$\theta_3 = \map \arctan {\dfrac {r_1 \sin \theta_1 + r_2 \sin \theta_2} {r_1 \cos \theta_1 + r_2 \cos \theta_2} }$ \end{theorem} \begin{proof} We have: {{begin-eqn}} {{eqn | l = r_1 e^{i \theta_1} + r_2 e^{i \theta_2} | r = r_1 \paren {\cos \theta_1 + i \sin \theta_1} + r_2 \paren {\cos \theta_2 + i \sin \theta_2} | c = {{Defof|Polar Form of Complex Number}} }} {{eqn | r = \paren {r_1 \cos \theta_1 + r_2 \cos \theta_2} + i \paren {r_1 \sin \theta_1 + r_2 \sin \theta_2} | c = }} {{end-eqn}} Then: {{begin-eqn}} {{eqn | l = {r_3}^2 | r = r_1^2 + r_2^2 + 2 r_1 r_2 \, \map \cos {\theta_1 - \theta_2} | c = Complex Modulus of Sum of Complex Numbers }} {{eqn | ll= \leadsto | l = r_3 | r = \sqrt {r_1^2 + r_2^2 + 2 r_1 r_2 \, \map \cos {\theta_1 - \theta_2} } | c = }} {{end-eqn}} and similarly: :$\theta_3 = \map \arctan {\dfrac {r_1 \sin \theta_1 + r_2 \sin \theta_2} {r_1 \cos \theta_1 + r_2 \cos \theta_2} }$ {{qed}} \end{proof}
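A numeric spot-check of both formulas, with arbitrary moduli and arguments. The sketch uses `atan2` for $\theta_3$, which resolves the quadrant that the bare $\arctan$ in the theorem leaves ambiguous:

```python
import cmath
import math

r1, t1 = 2.0, 0.6
r2, t2 = 3.0, -1.1
z3 = r1 * cmath.exp(1j * t1) + r2 * cmath.exp(1j * t2)

# Modulus formula from the theorem.
r3 = math.sqrt(r1**2 + r2**2 + 2 * r1 * r2 * math.cos(t1 - t2))
# Argument formula, with atan2 resolving the quadrant.
t3 = math.atan2(r1 * math.sin(t1) + r2 * math.sin(t2),
                r1 * math.cos(t1) + r2 * math.cos(t2))

assert math.isclose(abs(z3), r3, rel_tol=1e-9, abs_tol=1e-12)
assert math.isclose(cmath.phase(z3), t3, rel_tol=1e-9, abs_tol=1e-12)
```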
22166
\section{Sum of Complex Numbers in Exponential Form/General Result} Tags: Complex Addition \begin{theorem} Let $n \in \Z_{>0}$ be a positive integer. For all $k \in \set {1, 2, \dotsc, n}$, let: :$z_k = r_k e^{i \theta_k}$ be non-zero complex numbers in exponential form. Let: :$r e^{i \theta} = \ds \sum_{k \mathop = 1}^n z_k = z_1 + z_2 + \dotsb + z_n$ Then: {{begin-eqn}} {{eqn | l = r | r = \sqrt {\sum_{k \mathop = 1}^n {r_k}^2 + \sum_{1 \mathop \le j \mathop < k \mathop \le n} 2 {r_j} {r_k} \map \cos {\theta_j - \theta_k} } }} {{eqn | l = \theta | r = \map \arctan {\dfrac {r_1 \sin \theta_1 + r_2 \sin \theta_2 + \dotsb + r_n \sin \theta_n} {r_1 \cos \theta_1 + r_2 \cos \theta_2 + \dotsb + r_n \cos \theta_n} } }} {{end-eqn}} \end{theorem} \begin{proof} Let: {{begin-eqn}} {{eqn | l = r e^{i \theta} | r = \sum_{k \mathop = 1}^n z_k | c = }} {{eqn | r = z_1 + z_2 + \dotsb + z_n | c = }} {{eqn | r = r_1 \paren {\cos \theta_1 + i \sin \theta_1} + r_2 \paren {\cos \theta_2 + i \sin \theta_2} + \dotsb + r_n \paren {\cos \theta_n + i \sin \theta_n} | c = {{Defof|Complex Number}} }} {{eqn | r = r_1 \cos \theta_1 + r_2 \cos \theta_2 + \dotsb + r_n \cos \theta_n + i \paren {r_1 \sin \theta_1 + r_2 \sin \theta_2 + \dotsb + r_n \sin \theta_n} | c = rearranging }} {{end-eqn}} By the definition of the complex modulus, with $z = x + i y$, $r$ is defined as: :$r = \sqrt {\map {\Re^2} z + \map {\Im^2} z}$ Hence {{begin-eqn}} {{eqn | l = r | r = \sqrt {\map {\Re^2} z + \map {\Im^2} z} | c = }} {{eqn | l = r | r = \sqrt {\paren {r_1 \cos \theta_1 + r_2 \cos \theta_2 + \dotsb + r_n \cos \theta_n }^2 + \paren {r_1 \sin \theta_1 + r_2 \sin \theta_2 + \dotsb + r_n \sin \theta_n}^2 } | c = }} {{end-eqn}} In the above we have two types of pairs of terms: {{begin-eqn}} {{eqn | n = 1 | q = 1 \le k \le n | l = {r_k}^2 \cos^2 \theta_k + {r_k}^2 \sin^2 \theta_k | r = {r_k}^2 \paren {\cos^2 \theta_k + \sin^2 \theta_k} | c = }} {{eqn | r = {r_k}^2 | c = Sum of Squares of Sine and
Cosine }} {{eqn | n = 2 | q = 1 \le j < k \le n | l = 2 r_j r_k \cos \theta_j \cos \theta_k + 2 {r_j} {r_k} \sin \theta_j \sin \theta_k | r = 2 r_j r_k \paren {\cos \theta_j \cos \theta_k + \sin \theta_j \sin \theta_k} | c = }} {{eqn | r = 2 r_j r_k \map \cos {\theta_j - \theta_k} | c = Cosine of Difference }} {{end-eqn}} Hence: :$\ds r = \sqrt {\sum_{k \mathop = 1}^n {r_k}^2 + \sum_{1 \mathop \le j \mathop < k \mathop \le n} 2 {r_j} {r_k} \map \cos {\theta_j - \theta_k} }$ Note that although $r_k > 0$ for all $k$, the $z_k$ may nevertheless sum to zero, in which case $r = 0$ and the argument is not defined. We therefore assume that $r > 0$ when determining the argument below. By definition of the argument of a complex number, with $z = x + i y$, $\theta$ is defined as any solution to the pair of equations: :$(1): \quad \dfrac x {\cmod z} = \map \cos \theta$ :$(2): \quad \dfrac y {\cmod z} = \map \sin \theta$ where $\cmod z$ is the modulus of $z$. As $r > 0$ we have that $\cmod z \ne 0$ by definition of modulus. Hence we can divide $(2)$ by $(1)$, to get: {{begin-eqn}} {{eqn | l = \map \tan \theta | r = \frac y x | c = }} {{eqn | r = \frac {\map \Im z} {\map \Re z} | c = }} {{end-eqn}} Hence: {{begin-eqn}} {{eqn | l = \theta | r = \map \arctan {\frac {\map \Im {r e^{i \theta} } } {\map \Re {r e^{i \theta} } } } | c = }} {{eqn | r = \map \arctan {\dfrac {r_1 \sin \theta_1 + r_2 \sin \theta_2 + \dotsb + r_n \sin \theta_n} {r_1 \cos \theta_1 + r_2 \cos \theta_2 + \dotsb + r_n \cos \theta_n} } | c = }} {{end-eqn}} {{qed}} \end{proof}
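A numeric spot-check of the general modulus formula, noting that the first radicand term is the sum of the squares ${r_k}^2$ (as the squares derived in the proof show), together with the `atan2` form of the argument (arbitrary sample data):

```python
import cmath
import math

rs = [1.5, 2.0, 0.75, 3.25]     # arbitrary moduli r_k > 0
ts = [0.3, -1.0, 2.2, 0.9]      # arbitrary arguments theta_k
z = sum(r * cmath.exp(1j * t) for r, t in zip(rs, ts))

# r^2 = sum r_k^2 + sum over pairs of 2 r_j r_k cos(theta_j - theta_k)
inner = sum(r * r for r in rs)
n = len(rs)
for j in range(n):
    for k in range(j + 1, n):
        inner += 2 * rs[j] * rs[k] * math.cos(ts[j] - ts[k])

assert math.isclose(abs(z), math.sqrt(inner), rel_tol=1e-12)

# Argument, with atan2 resolving the quadrant left ambiguous by arctan.
theta = math.atan2(sum(r * math.sin(t) for r, t in zip(rs, ts)),
                   sum(r * math.cos(t) for r, t in zip(rs, ts)))
assert math.isclose(cmath.phase(z), theta, rel_tol=1e-9, abs_tol=1e-12)
```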
22167
\section{Sum of Components of Equal Ratios} Tags: Ratios \begin{theorem} {{:Euclid:Proposition/V/12}} That is: :$a_1 : b_1 = a_2 : b_2 = a_3 : b_3 = \cdots \implies a_1 : b_1 = \left({a_1 + a_2 + a_3 + \cdots}\right) : \left({b_1 + b_2 + b_3 + \cdots}\right)$ \end{theorem} \begin{proof} Let any number of magnitudes $A, B, C, D, E, F$ be proportional, so that: :$A : B = C : D = E : F$ etc. :450px Of $A, C, E$ let equimultiples $G, H, K$ be taken, and of $B, D, F$ let other arbitrary equimultiples $L, M, N$ be taken. We have that $A : B = C : D = E : F$. Therefore: :$G > L \implies H > M, K > N$ :$G = L \implies H = M, K = N$ :$G < L \implies H < M, K < N$ So, in addition: :$G > L \implies G + H + K > L + M + N$ :$G = L \implies G + H + K = L + M + N$ :$G < L \implies G + H + K < L + M + N$ It follows from Multiplication of Numbers is Left Distributive over Addition that $G$ and $G + H + K$ are equimultiples of $A$ and $A + C + E$. For the same reason, $L$ and $L + M + N$ are equimultiples of $B$ and $B + D + F$. The result follows from {{EuclidDefLink|V|5|Equality of Ratios}}. {{qed}} {{Euclid Note|12|V}} \end{proof}
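In modern terms: if all the ratios $a_k : b_k$ are equal, then the ratio of the sums equals the common ratio. A sketch with exact rational arithmetic (the particular magnitudes are arbitrary):

```python
from fractions import Fraction

# All ratios a_k : b_k equal to 2 : 3.
ratio = Fraction(2, 3)
bs = [Fraction(3), Fraction(6), Fraction(9), Fraction(21)]
antecedents = [ratio * b for b in bs]

assert all(a / b == ratio for a, b in zip(antecedents, bs))
# Sum of antecedents : sum of consequents is the same ratio.
assert sum(antecedents) / sum(bs) == ratio
```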
22168
\section{Sum of Consecutive Triangular Numbers is Square} Tags: Polygonal Numbers, Triangle Numbers, Triangular Numbers, Sum of Consecutive Triangular Numbers is Square, Square Numbers \begin{theorem} The sum of two consecutive triangular numbers is a square number. \end{theorem} \begin{proof} Let $T_{n - 1}$ and $T_n$ be two consecutive triangular numbers. From Closed Form for Triangular Numbers, we have: :$T_{n - 1} = \dfrac {\paren {n - 1} n} 2$ :$T_n = \dfrac {n \paren {n + 1} } 2$ So: {{begin-eqn}} {{eqn | l = T_{n - 1} + T_n | r = \frac {\paren {n - 1} n} 2 + \frac {n \paren {n + 1} } 2 | c = }} {{eqn | r = \frac {\paren {n - 1 + n + 1} n} 2 | c = }} {{eqn | r = \frac {2 n^2} 2 | c = }} {{eqn | r = n^2 | c = }} {{end-eqn}} {{qed}} \end{proof}
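The closed form makes this immediate to verify computationally; a short Python sketch checking $T_{n - 1} + T_n = n^2$ over a range of $n$:

```python
def triangular(n):
    """Closed form for the n-th triangular number: n(n+1)/2."""
    return n * (n + 1) // 2

for n in range(1, 200):
    assert triangular(n - 1) + triangular(n) == n * n
```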
22169
\section{Sum of Cosecant and Cotangent} Tags: Trigonometric Identities \begin{theorem} :$\csc x + \cot x = \cot {\dfrac x 2}$ \end{theorem} \begin{proof} {{begin-eqn}} {{eqn | l = \csc x + \cot x | r = \frac 1 {\sin x} + \frac {\cos x} {\sin x} | c = {{Defof|Cosecant}} and {{Defof|Cotangent}} }} {{eqn | r = \frac {1 + \cos x} {\sin x} | c = }} {{eqn | r = \frac {2 \cos^2 {\frac x 2} } {2 \sin {\frac x 2} \cos {\frac x 2} } | c = Double Angle Formula for Sine and Double Angle Formula for Cosine }} {{eqn | r = \frac {\cos {\frac x 2} } {\sin {\frac x 2} } | c = }} {{eqn | r = \cot {\frac x 2} | c = {{Defof|Cotangent}} }} {{end-eqn}} {{qed}} Category:Trigonometric Identities \end{proof}
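A numeric spot-check of the identity at a few points where both sides are defined (i.e. $x$ not an integer multiple of $\pi$; the sample points are arbitrary):

```python
import math

# csc x + cot x == cot(x/2)
for x in [0.1, 0.9, 2.5, -1.3, 3.0]:
    lhs = 1 / math.sin(x) + math.cos(x) / math.sin(x)
    rhs = 1 / math.tan(x / 2)
    assert math.isclose(lhs, rhs, rel_tol=1e-9, abs_tol=1e-12)
```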
22170
\section{Sum of Cosets of Ideals is Sum in Quotient Ring} Tags: Ring Operations on Coset Space of Ideal, Ideal Theory \begin{theorem} Let $\struct {R, +, \circ}$ be a ring. Let $\powerset R$ be the power set of $R$. Let $J$ be an ideal of $R$. Let $X$ and $Y$ be cosets of $J$. Let $X +_\PP Y$ be the sum of $X$ and $Y$, where $+_\PP$ is the operation induced on $\powerset R$ by $+$. The sum $X +_\PP Y$ in $\powerset R$ is also their sum in the quotient ring $R / J$. \end{theorem} \begin{proof} As $\struct {R, +, \circ}$ is a ring, it follows that $\struct {R, +}$ is an abelian group. Thus by Subgroup of Abelian Group is Normal, all subgroups of $\struct {R, +, \circ}$ are normal. So from the definition of quotient group, it follows directly that $X +_\PP Y$ in $\powerset R$ is also the sum in the quotient ring $R / J$. {{qed}} \end{proof}
22171
\section{Sum of Cosines of Arithmetic Sequence of Angles} Tags: Cosine Function \begin{theorem} Let $\alpha \in \R$ be a real number such that $\alpha \ne 2 \pi k$ for $k \in \Z$. Then: :$\ds \sum_{k \mathop = 0}^n \map \cos {\theta + k \alpha} = \frac {\map \sin {\alpha \paren {n + 1} / 2} } {\map \sin {\alpha / 2} } \map \cos {\theta + \frac {n \alpha} 2}$ \end{theorem} \begin{proof} From Sum of Complex Exponentials of i times Arithmetic Sequence of Angles: :$\ds \sum_{k \mathop = 0}^n e^{i \paren {\theta + k \alpha} } = \paren {\map \cos {\theta + \frac {n \alpha} 2} + i \map \sin {\theta + \frac {n \alpha} 2} } \frac {\map \sin {\alpha \paren {n + 1} / 2} } {\map \sin {\alpha / 2} }$ From Euler's Formula, this can be expressed as: :$\ds \sum_{k \mathop = 0}^n \paren {\map \cos {\theta + k \alpha} + i \map \sin {\theta + k \alpha} } = \frac {\map \sin {\alpha \paren {n + 1} / 2} } {\map \sin {\alpha / 2} } \paren {\map \cos {\theta + \frac {n \alpha} 2} + i \map \sin {\theta + \frac {n \alpha} 2} }$ Equating real parts: :$\ds \sum_{k \mathop = 0}^n \map \cos {\theta + k \alpha} = \frac {\map \sin {\alpha \paren {n + 1} / 2} } {\map \sin {\alpha / 2} } \map \cos {\theta + \frac {n \alpha} 2}$ {{qed}} \end{proof}
22172
\section{Sum of Cosines of Arithmetic Sequence of Angles/Formulation 1} Tags: Cosine Function \begin{theorem} Let $\alpha \in \R$ be a real number such that $\alpha \ne 2 \pi k$ for $k \in \Z$. Then: {{begin-eqn}} {{eqn | l = \sum_{k \mathop = 0}^n \map \cos {\theta + k \alpha} | r = \cos \theta + \map \cos {\theta + \alpha} + \map \cos {\theta + 2 \alpha} + \map \cos {\theta + 3 \alpha} + \dotsb }} {{eqn | r = \frac {\map \sin {\alpha \paren {n + 1} / 2} } {\map \sin {\alpha / 2} } \map \cos {\theta + \frac {n \alpha} 2} }} {{end-eqn}} \end{theorem} \begin{proof} From Sum of Complex Exponentials of i times Arithmetic Sequence of Angles: Formulation 1: :$\ds \sum_{k \mathop = 0}^n e^{i \paren {\theta + k \alpha} } = \paren {\map \cos {\theta + \frac {n \alpha} 2} + i \map \sin {\theta + \frac {n \alpha} 2} } \frac {\map \sin {\alpha \paren {n + 1} / 2} } {\map \sin {\alpha / 2} }$ It is noted that, from Sine of Multiple of Pi, when $\alpha = 2 \pi k$ for $k \in \Z$, $\map \sin {\alpha / 2} = 0$ and the {{RHS}} is not defined. From Euler's Formula, this can be expressed as: :$\ds \sum_{k \mathop = 0}^n \paren {\map \cos {\theta + k \alpha} + i \map \sin {\theta + k \alpha} } = \frac {\map \sin {\alpha \paren {n + 1} / 2} } {\map \sin {\alpha / 2} } \paren {\map \cos {\theta + \frac {n \alpha} 2} + i \map \sin {\theta + \frac {n \alpha} 2} }$ Equating real parts: :$\ds \sum_{k \mathop = 0}^n \map \cos {\theta + k \alpha} = \frac {\map \sin {\alpha \paren {n + 1} / 2} } {\map \sin {\alpha / 2} } \map \cos {\theta + \frac {n \alpha} 2}$ {{qed}} \end{proof}
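The real-part identity obtained here is easy to confirm numerically; a Python sketch comparing the direct cosine sum against the closed form (arbitrary test angles):

```python
import math

def cos_sum(theta, alpha, n):
    """Direct sum: sum_{k=0}^{n} cos(theta + k*alpha)."""
    return sum(math.cos(theta + k * alpha) for k in range(n + 1))

def closed_form(theta, alpha, n):
    """sin(alpha(n+1)/2) / sin(alpha/2) * cos(theta + n*alpha/2)."""
    return (math.sin(alpha * (n + 1) / 2) / math.sin(alpha / 2)
            * math.cos(theta + n * alpha / 2))

for theta, alpha, n in [(0.25, 0.7, 6), (-1.4, 2.1, 10), (2.0, -0.5, 4)]:
    assert math.isclose(cos_sum(theta, alpha, n), closed_form(theta, alpha, n),
                        rel_tol=1e-9, abs_tol=1e-12)
```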