21273
\section{Sets of Operations on Set of 3 Elements/Automorphism Group of B/Operations with Identity} Tags: Sets of Operations on Set of 3 Elements \begin{theorem} Let $S = \set {a, b, c}$ be a set with $3$ elements. Let $\BB$ be the set of all operations $\circ$ on $S$ such that the group of automorphisms of $\struct {S, \circ}$ forms the set $\set {I_S, \tuple {a, b, c}, \tuple {a, c, b} }$, where $I_S$ is the identity mapping on $S$. Then: :None of the operations of $\BB$ has an identity element. \end{theorem} \begin{proof} Recall Automorphism Group of $\BB$. Consider each of the categories of $\BB$ induced by each of $a \circ a$, $a \circ b$ and $a \circ c$, illustrated by the partially-filled Cayley tables to which they give rise: ;$(1): \quad a \circ a$ :$\begin {array} {c|ccc} \circ & a & b & c \\ \hline a & a & & \\ b & & b & \\ c & & & c \\ \end {array} \qquad \begin {array} {c|ccc} \circ & a & b & c \\ \hline a & b & & \\ b & & c & \\ c & & & a \\ \end {array} \qquad \begin {array} {c|ccc} \circ & a & b & c \\ \hline a & c & & \\ b & & a & \\ c & & & b \\ \end {array}$ ;$(2): \quad a \circ b$ :$\begin {array} {c|ccc} \circ & a & b & c \\ \hline a & & a & \\ b & & & b \\ c & c & & \\ \end {array} \qquad \begin {array} {c|ccc} \circ & a & b & c \\ \hline a & & b & \\ b & & & c \\ c & a & & \\ \end {array} \qquad \begin {array} {c|ccc} \circ & a & b & c \\ \hline a & & c & \\ b & & & a \\ c & b & & \\ \end {array}$ ;$(3): \quad a \circ c$ :$\begin {array} {c|ccc} \circ & a & b & c \\ \hline a & & & a \\ b & b & & \\ c & & c & \\ \end {array} \qquad \begin {array} {c|ccc} \circ & a & b & c \\ \hline a & & & b \\ b & c & & \\ c & & a & \\ \end {array} \qquad \begin {array} {c|ccc} \circ & a & b & c \\ \hline a & & & c \\ b & a & & \\ c & & b & \\ \end {array}$ The Cayley table of an operation of $\BB$ is constructed by combining one of the partial Cayley tables from each of the above categories. It is then possible to identify which of these partial Cayley tables can contribute towards an operation with an identity element by seeing whether they contain at least one product of the form: :$x \circ y = x$ or: :$x \circ y = y$ Of these, an operation with an identity element would need to combine: The partial Cayley table induced by $a \circ a = a$: :$\begin {array} {c|ccc} \circ & a & b & c \\ \hline a & a & & \\ b & & b & \\ c & & & c \\ \end {array}$ Either of the partial Cayley tables induced by $a \circ b = a$ or $a \circ b = b$: :$\begin {array} {c|ccc} \circ & a & b & c \\ \hline a & & a & \\ b & & & b \\ c & c & & \\ \end {array} \qquad \begin {array} {c|ccc} \circ & a & b & c \\ \hline a & & b & \\ b & & & c \\ c & a & & \\ \end {array}$ Either of the partial Cayley tables induced by $a \circ c = a$ or $a \circ c = c$: :$\begin {array} {c|ccc} \circ & a & b & c \\ \hline a & & & a \\ b & b & & \\ c & & c & \\ \end {array} \qquad \begin {array} {c|ccc} \circ & a & b & c \\ \hline a & & & c \\ b & a & & \\ c & & b & \\ \end {array}$ Combining the partial Cayley table induced by $a \circ a = a$ with those of $a \circ b = a$ and $a \circ b = b$ in turn: $\begin {array} {c|ccc} \circ & a & b & c \\ \hline a & a & a & \\ b & & b & b \\ c & c & & c \\ \end {array} \qquad \begin {array} {c|ccc} \circ & a & b & c \\ \hline a & a & b & \\ b & & b & c \\ c & a & & c \\ \end {array}$ In each case, every element $e$ of $S$ already fails to satisfy either $e \circ x = x$ for all $x \in S$ or $x \circ e = x$ for all $x \in S$. So neither of these combined partial Cayley tables can define an operation with an identity element, and there is no need to consider the partial Cayley tables induced by $a \circ c = a$ or $a \circ c = c$.
{{qed}} \end{proof}
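This result, and the cardinality $24$ of $\BB$ quoted later in this collection, can also be confirmed by brute force. The following is a minimal Python sketch (not part of the source proof), assuming the elements $a, b, c$ are encoded as $0, 1, 2$ and an operation is stored as a $3 \times 3$ Cayley table.

```python
from itertools import product, permutations

S = (0, 1, 2)  # encode a, b, c as 0, 1, 2

def automorphisms(op):
    # phi is an automorphism iff phi(x o y) = phi(x) o phi(y) for all x, y
    return {phi for phi in permutations(S)
            if all(phi[op[x][y]] == op[phi[x]][phi[y]] for x in S for y in S)}

def has_identity(op):
    # e is an identity iff e o x = x o e = x for all x
    return any(all(op[e][x] == x and op[x][e] == x for x in S) for e in S)

cyclic = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}  # I_S, (a b c), (a c b)

# enumerate all 3^9 operations on S as 3 x 3 Cayley tables
b_ops = [op for op in (tuple(v[3 * i: 3 * i + 3] for i in range(3))
                       for v in product(S, repeat=9))
         if automorphisms(op) == cyclic]

print(len(b_ops))                        # 24: the size of B
print(any(map(has_identity, b_ops)))     # False: no operation of B has an identity element
```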
21274
\section{Sets of Operations on Set of 3 Elements/Automorphism Group of C n} Tags: Sets of Operations on Set of 3 Elements \begin{theorem} Let $S = \set {a, b, c}$ be a set with $3$ elements. Let $\CC_1$, $\CC_2$ and $\CC_3$ be respectively the set of all operations $\circ$ on $S$ such that the groups of automorphisms of $\struct {S, \circ}$ are as follows: {{begin-eqn}} {{eqn | l = \CC_1 | o = : | r = \set {I_S, \tuple {a, b} } }} {{eqn | l = \CC_2 | o = : | r = \set {I_S, \tuple {a, c} } }} {{eqn | l = \CC_3 | o = : | r = \set {I_S, \tuple {b, c} } }} {{end-eqn}} where $I_S$ is the identity mapping on $S$. Then: :Each of $\CC_1$, $\CC_2$ and $\CC_3$ has $3^4 - 3$ elements. \end{theorem} \begin{proof} Recall the definition of (group) automorphism: :$\phi$ is an automorphism on $\struct {S, \circ}$ {{iff}}: ::$\phi$ is a permutation of $S$ ::$\phi$ is a homomorphism on $\struct {S, \circ}$: $\forall a, b \in S: \map \phi {a \circ b} = \map \phi a \circ \map \phi b$ From Identity Mapping is Group Automorphism, $I_S$ is always an automorphism on $\struct {S, \circ}$. Hence it is not necessary to analyse the effect of $I_S$ on $S$. {{WLOG}}, we will analyse the nature of $\CC_1$. Let $n$ be the number of operations $\circ$ on $S$ such that $\tuple {a, b}$ is an automorphism of $\struct {S, \circ}$. Let us denote the permutation $\tuple {a, b}$ as $r: S \to S$, defined as: :$r := \map r a = b, \map r b = a, \map r c = c$ We select various product elements $x \circ y \in S$ and determine how $r$ constrains other product elements. \end{proof}
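The count $3^4 - 3 = 78$ can also be confirmed directly by machine. A minimal sketch (not part of the source), assuming the same encoding of $a, b, c$ as $0, 1, 2$: an operation admits $\tuple {a, b}$ as an automorphism in $3^4 = 81$ ways, of which $3$ admit the whole symmetric group.

```python
from itertools import product, permutations

S = (0, 1, 2)              # a, b, c
I_S = (0, 1, 2)            # the identity mapping
swap_ab = (1, 0, 2)        # the transposition (a b)

def automorphisms(op):
    return {phi for phi in permutations(S)
            if all(phi[op[x][y]] == op[phi[x]][phi[y]] for x in S for y in S)}

admitting = exactly = 0
for v in product(S, repeat=9):
    op = tuple(v[3 * i: 3 * i + 3] for i in range(3))
    autos = automorphisms(op)
    if swap_ab in autos:
        admitting += 1
        if autos == {I_S, swap_ab}:
            exactly += 1

print(admitting, exactly)   # 81 (= 3^4) and 78 (= 3^4 - 3): the size of C_1
```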
21275
\section{Sets of Operations on Set of 3 Elements/Automorphism Group of C n/Commutative Operations} Tags: Sets of Operations on Set of 3 Elements \begin{theorem} Let $S = \set {a, b, c}$ be a set with $3$ elements. Let $\CC_1$, $\CC_2$ and $\CC_3$ be respectively the set of all operations $\circ$ on $S$ such that the groups of automorphisms of $\struct {S, \circ}$ are as follows: {{begin-eqn}} {{eqn | l = \CC_1 | o = : | r = \set {I_S, \tuple {a, b} } }} {{eqn | l = \CC_2 | o = : | r = \set {I_S, \tuple {a, c} } }} {{eqn | l = \CC_3 | o = : | r = \set {I_S, \tuple {b, c} } }} {{end-eqn}} where $I_S$ is the identity mapping on $S$. Then: $8$ of the operations of each of $\CC_1$, $\CC_2$ and $\CC_3$ are commutative. \end{theorem} \begin{proof} {{WLOG}}, we will analyse the nature of $\CC_1$. Recall this lemma: \end{proof}
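A brute-force confirmation of the count $8$ for $\CC_1$ (a sketch, not part of the source; $\CC_2$ and $\CC_3$ follow by relabelling):

```python
from itertools import product, permutations

S = (0, 1, 2)              # a, b, c
swap_ab = (1, 0, 2)        # the transposition (a b)

def automorphisms(op):
    return {phi for phi in permutations(S)
            if all(phi[op[x][y]] == op[phi[x]][phi[y]] for x in S for y in S)}

count = 0
for v in product(S, repeat=9):
    op = tuple(v[3 * i: 3 * i + 3] for i in range(3))
    commutative = all(op[x][y] == op[y][x] for x in S for y in S)
    if commutative and automorphisms(op) == {(0, 1, 2), swap_ab}:
        count += 1

print(count)   # 8 commutative operations in C_1
```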
21276
\section{Sets of Operations on Set of 3 Elements/Automorphism Group of C n/Lemma 1} Tags: Sets of Operations on Set of 3 Elements \begin{theorem} Let $S = \set {a, b, c}$ be a set with $3$ elements. Let $\CC_1$ be the set of all operations $\circ$ on $S$ such that the group of automorphisms of $\struct {S, \circ}$ forms the set $\set {I_S, \tuple {a, b} }$, where $I_S$ is the identity mapping on $S$. Then: $c$ is an idempotent element under $\circ$, that is: :$c \circ c = c$ \end{theorem} \begin{proof} Recall the definition of (group) automorphism: :$\phi$ is an automorphism on $\struct {S, \circ}$ {{iff}}: ::$\phi$ is a permutation of $S$ ::$\phi$ is a homomorphism on $\struct {S, \circ}$: $\forall a, b \in S: \map \phi {a \circ b} = \map \phi a \circ \map \phi b$ Let us denote $\tuple {a, b}$ as the mapping $r: S \to S$: :$r := \map r a = b, \map r b = a, \map r c = c$ {{AimForCont}} $c$ is not idempotent. Then $c \circ c = x$, where $x = a$ or $x = b$. {{begin-eqn}} {{eqn | l = c \circ c | r = x | c = }} {{eqn | ll= \leadsto | l = \map r c \circ \map r c | r = \map r x | c = as $r$ is an automorphism of $\struct {S, \circ}$ }} {{eqn | o = \ne | r = x | c = as $x \in \set {a, b}$ and $r$ fixes neither $a$ nor $b$ }} {{end-eqn}} But $\map r c = c$, so $\map r c \circ \map r c = c \circ c$, and hence $c \circ c \ne x$. This contradicts $c \circ c = x$. So it cannot be the case that $c \circ c = a$ or $c \circ c = b$. Hence $c \circ c = c$, that is, $c$ is idempotent. {{qed}} Category:Sets of Operations on Set of 3 Elements \end{proof}
21277
\section{Sets of Operations on Set of 3 Elements/Automorphism Group of C n/Lemma 2} Tags: Sets of Operations on Set of 3 Elements \begin{theorem} Let $S = \set {a, b, c}$ be a set with $3$ elements. Let $\CC_1$ be the set of all operations $\circ$ on $S$ such that the group of automorphisms of $\struct {S, \circ}$ forms the set $\set {I_S, \tuple {a, b} }$, where $I_S$ is the identity mapping on $S$. Then: {{begin-eqn}} {{eqn | l = a \circ a = a | o = \iff | r = b \circ b = b }} {{eqn | l = a \circ a = b | o = \iff | r = b \circ b = a }} {{eqn | l = a \circ a = c | o = \iff | r = b \circ b = c }} {{eqn | l = a \circ b = a | o = \iff | r = b \circ a = b }} {{eqn | l = a \circ b = b | o = \iff | r = b \circ a = a }} {{eqn | l = a \circ b = c | o = \iff | r = b \circ a = c }} {{eqn | l = a \circ c = a | o = \iff | r = b \circ c = b }} {{eqn | l = a \circ c = b | o = \iff | r = b \circ c = a }} {{eqn | l = a \circ c = c | o = \iff | r = b \circ c = c }} {{eqn | l = c \circ a = a | o = \iff | r = c \circ b = b }} {{eqn | l = c \circ a = b | o = \iff | r = c \circ b = a }} {{eqn | l = c \circ a = c | o = \iff | r = c \circ b = c }} {{end-eqn}} \end{theorem} \begin{proof} Recall the definition of (group) automorphism: :$\phi$ is an automorphism on $\struct {S, \circ}$ {{iff}}: ::$\phi$ is a permutation of $S$ ::$\phi$ is a homomorphism on $\struct {S, \circ}$: $\forall a, b \in S: \map \phi {a \circ b} = \map \phi a \circ \map \phi b$ Let us denote $\tuple {a, b}$ as the mapping $r: S \to S$: :$r := \map r a = b, \map r b = a, \map r c = c$ In Lemma 1 it has been established that: {{begin-eqn}} {{eqn | l = a \circ a = a | o = \iff | r = b \circ b = b }} {{eqn | l = a \circ a = b | o = \iff | r = b \circ b = a }} {{eqn | l = a \circ a = c | o = \iff | r = b \circ b = c }} {{end-eqn}} We select values for $x$ in the expression $a \circ b = x$ and determine how $r$ constrains other product elements. Thus: {{begin-eqn}} {{eqn | l = a \circ x | r = a | c = }} {{eqn | ll= \leadstoandfrom | l = \map r a \circ \map r x | r = \map r a | c = }} {{eqn | ll= \leadstoandfrom | l = b \circ \map r x | r = b | c = }} {{end-eqn}} Hence: {{begin-eqn}} {{eqn | l = a \circ b = a | o = \iff | r = b \circ a = b }} {{eqn | l = a \circ b = b | o = \iff | r = b \circ a = a }} {{eqn | l = a \circ b = c | o = \iff | r = b \circ a = c }} {{end-eqn}} Similarly: {{begin-eqn}} {{eqn | l = x \circ a | r = a | c = }} {{eqn | ll= \leadstoandfrom | l = \map r x \circ \map r a | r = \map r a | c = }} {{eqn | ll= \leadstoandfrom | l = \map r x \circ b | r = b | c = }} {{end-eqn}} Hence: {{begin-eqn}} {{eqn | l = c \circ a = a | o = \iff | r = c \circ b = b }} {{eqn | l = c \circ a = b | o = \iff | r = c \circ b = a }} {{eqn | l = c \circ a = c | o = \iff | r = c \circ b = c }} {{end-eqn}} {{qed}} Category:Sets of Operations on Set of 3 Elements \end{proof}
21278
\section{Sets of Operations on Set of 3 Elements/Automorphism Group of C n/Operations with Identity} Tags: Sets of Operations on Set of 3 Elements \begin{theorem} Let $S = \set {a, b, c}$ be a set with $3$ elements. Let $\CC_1$, $\CC_2$ and $\CC_3$ be respectively the set of all operations $\circ$ on $S$ such that the groups of automorphisms of $\struct {S, \circ}$ are as follows: {{begin-eqn}} {{eqn | l = \CC_1 | o = : | r = \set {I_S, \tuple {a, b} } }} {{eqn | l = \CC_2 | o = : | r = \set {I_S, \tuple {a, c} } }} {{eqn | l = \CC_3 | o = : | r = \set {I_S, \tuple {b, c} } }} {{end-eqn}} where $I_S$ is the identity mapping on $S$. Then: :$9$ of the operations of each of $\CC_1$, $\CC_2$ and $\CC_3$ have an identity element. \end{theorem} \begin{proof} {{WLOG}}, we will analyse the nature of $\CC_1$. Recall this lemma: \end{proof}
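As with the commutative count, the figure $9$ for $\CC_1$ can be confirmed by brute force (a sketch, not part of the source):

```python
from itertools import product, permutations

S = (0, 1, 2)              # a, b, c
swap_ab = (1, 0, 2)        # the transposition (a b)

def automorphisms(op):
    return {phi for phi in permutations(S)
            if all(phi[op[x][y]] == op[phi[x]][phi[y]] for x in S for y in S)}

def has_identity(op):
    return any(all(op[e][x] == x and op[x][e] == x for x in S) for e in S)

count = sum(1 for v in product(S, repeat=9)
            for op in [tuple(v[3 * i: 3 * i + 3] for i in range(3))]
            if has_identity(op) and automorphisms(op) == {(0, 1, 2), swap_ab})

print(count)   # 9 operations in C_1 with an identity element
```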
21279
\section{Sets of Operations on Set of 3 Elements/Automorphism Group of D} Tags: Sets of Operations on Set of 3 Elements \begin{theorem} Let $S = \set {a, b, c}$ be a set with $3$ elements. Let $\DD$ be the set of all operations $\circ$ on $S$ such that the group of automorphisms of $\struct {S, \circ}$ forms the set $\set {I_S}$, where $I_S$ is the identity mapping on $S$. Then: :$\DD$ has $19 \, 422$ elements. \end{theorem} \begin{proof} Let $n$ denote the cardinality of $\DD$. Equivalently, $n$ equals the number of operations $\circ$ on $S$ on which the only automorphism is $I_S$. Recall these definitions: Let $\AA$, $\BB$, $\CC_1$, $\CC_2$ and $\CC_3$ be respectively the set of all operations $\circ$ on $S$ such that the groups of automorphisms of $\struct {S, \circ}$ are as follows: {{begin-eqn}} {{eqn | l = \AA | o = : | r = \map \Gamma S | c = where $\map \Gamma S$ is the symmetric group on $S$ }} {{eqn | l = \BB | o = : | r = \set {I_S, \tuple {a, b, c}, \tuple {a, c, b} } | c = where $I_S$ is the identity mapping on $S$ }} {{eqn | l = \CC_1 | o = : | r = \set {I_S, \tuple {a, b} } }} {{eqn | l = \CC_2 | o = : | r = \set {I_S, \tuple {a, c} } }} {{eqn | l = \CC_3 | o = : | r = \set {I_S, \tuple {b, c} } }} {{end-eqn}} \end{proof}
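The figure $19 \, 422$ is what remains of the $3^9 = 19 \, 683$ operations on $S$ once those with a larger automorphism group are removed. A brute-force sketch (not part of the source) counting the operations whose only automorphism is $I_S$, alongside the arithmetic the proof is heading towards:

```python
from itertools import product, permutations

S = (0, 1, 2)  # a, b, c

def automorphisms(op):
    return {phi for phi in permutations(S)
            if all(phi[op[x][y]] == op[phi[x]][phi[y]] for x in S for y in S)}

trivial = sum(1 for v in product(S, repeat=9)
              if automorphisms(tuple(v[3 * i: 3 * i + 3] for i in range(3))) == {(0, 1, 2)})

print(trivial)                       # 19422
print(3 ** 9 - 3 - 24 - 3 * 78)      # 19422: removing A, B and the three C_n from all operations
```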
21280
\section{Sets of Operations on Set of 3 Elements/Automorphism Group of D/Commutative Operations} Tags: Sets of Operations on Set of 3 Elements \begin{theorem} Let $S = \set {a, b, c}$ be a set with $3$ elements. Let $\DD$ be the set of all operations $\circ$ on $S$ such that the group of automorphisms of $\struct {S, \circ}$ forms the set $\set {I_S}$, where $I_S$ is the identity mapping on $S$. Then: :$696$ of the operations of $\DD$ are commutative. \end{theorem} \begin{proof} Let $n$ denote the number of commutative operations of $\DD$. Recall these definitions: Let $\AA$, $\BB$, $\CC_1$, $\CC_2$ and $\CC_3$ be respectively the set of all operations $\circ$ on $S$ such that the groups of automorphisms of $\struct {S, \circ}$ are as follows: {{begin-eqn}} {{eqn | l = \AA | o = : | r = \map \Gamma S | c = where $\map \Gamma S$ is the symmetric group on $S$ }} {{eqn | l = \BB | o = : | r = \set {I_S, \tuple {a, b, c}, \tuple {a, c, b} } | c = where $I_S$ is the identity mapping on $S$ }} {{eqn | l = \CC_1 | o = : | r = \set {I_S, \tuple {a, b} } }} {{eqn | l = \CC_2 | o = : | r = \set {I_S, \tuple {a, c} } }} {{eqn | l = \CC_3 | o = : | r = \set {I_S, \tuple {b, c} } }} {{end-eqn}} Let $N$ be the total number of commutative operations on $S$. Let: :$A$ denote the number of commutative operations in $\AA$ :$B$ denote the number of commutative operations in $\BB$ :$C$ denote the total number of commutative operations in $\CC_1$, $\CC_2$ and $\CC_3$. From the lemma, and from the Fundamental Principle of Counting: :$N = A + B + C + n$ From Count of Commutative Binary Operations on Set: :$N = 3^6 = 729$ Then we have: :From Automorphism Group of $\AA$: Commutative Operations: $A = 1$ :From Automorphism Group of $\BB$: Commutative Operations: $B = 8$ :From Automorphism Group of $\CC_n$: Commutative Operations: $C = 3 \times 8$ Hence we have: {{begin-eqn}} {{eqn | l = n | r = N - A - B - C | c = }} {{eqn | r = 729 - 1 - 8 - 24 | c = }} {{eqn | r = 696 | c = }} {{end-eqn}} {{qed}} \end{proof}
21281
\section{Sets of Operations on Set of 3 Elements/Automorphism Group of D/Operations with Identity} Tags: Sets of Operations on Set of 3 Elements \begin{theorem} Let $S = \set {a, b, c}$ be a set with $3$ elements. Let $\DD$ be the set of all operations $\circ$ on $S$ such that the group of automorphisms of $\struct {S, \circ}$ forms the set $\set {I_S}$, where $I_S$ is the identity mapping on $S$. Then: :$216$ of the operations of $\DD$ have an identity element. \end{theorem} \begin{proof} Let $n$ denote the number of operations of $\DD$ which have an identity element. Recall these definitions: Let $\AA$, $\BB$, $\CC_1$, $\CC_2$ and $\CC_3$ be respectively the set of all operations $\circ$ on $S$ such that the groups of automorphisms of $\struct {S, \circ}$ are as follows: {{begin-eqn}} {{eqn | l = \AA | o = : | r = \map \Gamma S | c = where $\map \Gamma S$ is the symmetric group on $S$ }} {{eqn | l = \BB | o = : | r = \set {I_S, \tuple {a, b, c}, \tuple {a, c, b} } | c = where $I_S$ is the identity mapping on $S$ }} {{eqn | l = \CC_1 | o = : | r = \set {I_S, \tuple {a, b} } }} {{eqn | l = \CC_2 | o = : | r = \set {I_S, \tuple {a, c} } }} {{eqn | l = \CC_3 | o = : | r = \set {I_S, \tuple {b, c} } }} {{end-eqn}} Let $N$ be the total number of operations on $S$ which have an identity element. Let: :$A$ denote the number of operations in $\AA$ which have an identity element :$B$ denote the number of operations in $\BB$ which have an identity element :$C$ denote the total number of operations in $\CC_1$, $\CC_2$ and $\CC_3$ which have an identity element. From the lemma, and from the Fundamental Principle of Counting: :$N = A + B + C + n$ From Count of Binary Operations with Identity: :$N = 3^5 = 243$ Then we have: :From Automorphism Group of $\AA$: Operations with Identity: $A = 0$ :From Automorphism Group of $\BB$: Operations with Identity: $B = 0$ :From Automorphism Group of $\CC_n$: Operations with Identity: $C = 3 \times 9$ Hence we have: {{begin-eqn}} {{eqn | l = n | r = N - A - B - C | c = }} {{eqn | r = 243 - 0 - 0 - 27 | c = }} {{eqn | r = 216 | c = }} {{end-eqn}} {{qed}} \end{proof}
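A brute-force check of both counts (a sketch, not part of the source): of the $3^5 = 243$ operations on $S$ with an identity element, $216$ have no automorphism other than $I_S$.

```python
from itertools import product, permutations

S = (0, 1, 2)  # a, b, c

def automorphisms(op):
    return {phi for phi in permutations(S)
            if all(phi[op[x][y]] == op[phi[x]][phi[y]] for x in S for y in S)}

def has_identity(op):
    return any(all(op[e][x] == x and op[x][e] == x for x in S) for e in S)

with_id = [op for op in (tuple(v[3 * i: 3 * i + 3] for i in range(3))
                         for v in product(S, repeat=9)) if has_identity(op)]

print(len(with_id))                                                     # 243 = 3^5
print(sum(1 for op in with_id if automorphisms(op) == {(0, 1, 2)}))     # 216
```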
21282
\section{Sets of Operations on Set of 3 Elements/Commutative Operations} Tags: Sets of Operations on Set of 3 Elements \begin{theorem} Let $S = \set {a, b, c}$ be a set with $3$ elements. Let $\PP$ be the set of all commutative operations $\circ$ on $S$. Then the elements of $\PP$ are divided into $129$ isomorphism classes. That is, up to isomorphism, there are $129$ commutative operations on $S$. \end{theorem} \begin{proof} From Automorphism Group of $\AA$: Commutative Operations: :there is exactly $1$ commutative operation in $\AA$. From Automorphism Group of $\BB$: Commutative Operations: :there are $8$ commutative operations in $\BB$. From Automorphism Group of $\CC_n$: Commutative Operations: :there are $3 \times 8$ commutative operations in $\CC$. From Automorphism Group of $\DD$: Commutative Operations: :there are $696$ commutative operations in $\DD$. From Automorphism Group of $\AA$: Isomorphism Classes: :each element of $\AA$ is in its own isomorphism class. From Automorphism Group of $\BB$: Isomorphism Classes: :the elements of $\BB$ form isomorphism classes in pairs. From Automorphism Group of $\CC_n$: Isomorphism Classes: :the elements of $\CC$ form isomorphism classes in threes. From Automorphism Group of $\DD$: Isomorphism Classes: :the elements of $\DD$ form isomorphism classes in sixes. Hence there are: :$\dfrac 8 2 = 4$ isomorphism classes of commutative operations in $\BB$. :$\dfrac {3 \times 8} 3 = 8$ isomorphism classes of commutative operations in $\CC$. :$\dfrac {696} 6 = 116$ isomorphism classes of commutative operations in $\DD$. Thus there are $1 + 4 + 8 + 116 = 129$ isomorphism classes of commutative operations $\circ$ on $S$. {{qed}} \end{proof}
21283
\section{Sets of Operations on Set of 3 Elements/Isomorphism Classes} Tags: Sets of Operations on Set of 3 Elements \begin{theorem} Let $S = \set {a, b, c}$ be a set with $3$ elements. Let $\MM$ be the set of all operations $\circ$ on $S$. Then the elements of $\MM$ are divided into $3330$ isomorphism classes. That is, up to isomorphism, there are $3330$ operations on $S$. \end{theorem} \begin{proof} From Automorphism Group of $\AA$: Isomorphism Classes: :each element of $\AA$ is in its own isomorphism class. Hence $\AA$ contributes $3$ isomorphism classes. From Automorphism Group of $\BB$: Isomorphism Classes: :the $24$ elements of $\BB$ form $12$ isomorphism classes in pairs. From Automorphism Group of $\CC_n$: Isomorphism Classes: :the $3 \times 78$ elements of $\CC$ form $78$ isomorphism classes in threes. From Automorphism Group of $\DD$: Isomorphism Classes: :the $19 \, 422$ elements of $\DD$ form $3237$ isomorphism classes in sixes. Thus there are $3 + 12 + 78 + 3237 = 3330$ isomorphism classes. {{qed}} \end{proof}
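The class count can also be obtained without the partition into $\AA$, $\BB$, $\CC$ and $\DD$, by putting each operation into a canonical form under relabelling. A brute-force sketch (not part of the source), assuming the usual notion of isomorphism: a bijection $\phi$ of $S$ with $\map \phi {x \circ y} = \map \phi x * \map \phi y$.

```python
from itertools import product, permutations

S = (0, 1, 2)  # a, b, c

def relabel(op, phi):
    # the operation obtained by transporting op along the permutation phi
    inv = [phi.index(x) for x in S]
    return tuple(tuple(phi[op[inv[x]][inv[y]]] for y in S) for x in S)

def canonical(op):
    # minimum over all 6 relabellings: two operations are isomorphic iff equal canonical forms
    return min(relabel(op, phi) for phi in permutations(S))

classes = {canonical(tuple(v[3 * i: 3 * i + 3] for i in range(3)))
           for v in product(S, repeat=9)}

print(len(classes))   # 3330
```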
21284
\section{Sets of Operations on Set of 3 Elements/Operations with Identity} Tags: Sets of Operations on Set of 3 Elements \begin{theorem} Let $S = \set {a, b, c}$ be a set with $3$ elements. Let $\NN$ be the set of all operations $\circ$ on $S$ which have an identity element. Then the elements of $\NN$ are divided into $45$ isomorphism classes. That is, up to isomorphism, there are $45$ operations on $S$ which have an identity element. \end{theorem} \begin{proof} From Automorphism Group of $\AA$: Operations with Identity: :there are no elements of $\AA$ which have an identity element. From Automorphism Group of $\BB$: Operations with Identity: :there are no elements of $\BB$ which have an identity element. From Automorphism Group of $\CC_n$: Operations with Identity: :there are $3 \times 9$ elements of $\CC$ which have an identity element. From Automorphism Group of $\DD$: Operations with Identity: :there are $216$ elements of $\DD$ which have an identity element. From Automorphism Group of $\CC_n$: Isomorphism Classes: :the elements of $\CC$ form isomorphism classes in threes. From Automorphism Group of $\DD$: Isomorphism Classes: :the elements of $\DD$ form isomorphism classes in sixes. Hence there are: :$\dfrac {3 \times 9} 3 = 9$ isomorphism classes of elements of $\CC$ which have an identity element. :$\dfrac {216} 6 = 36$ isomorphism classes of elements of $\DD$ which have an identity element. Thus there are $9 + 36 = 45$ isomorphism classes of operations $\circ$ on $S$ which have an identity element. {{qed}} \end{proof}
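The same canonical-form technique sketched after the previous result confirms this figure when restricted to the operations with an identity element (again a sketch, not part of the source):

```python
from itertools import product, permutations

S = (0, 1, 2)  # a, b, c

def has_identity(op):
    return any(all(op[e][x] == x and op[x][e] == x for x in S) for e in S)

def canonical(op):
    best = None
    for phi in permutations(S):
        inv = [phi.index(x) for x in S]
        image = tuple(tuple(phi[op[inv[x]][inv[y]]] for y in S) for x in S)
        if best is None or image < best:
            best = image
    return best

classes = {canonical(op) for op in (tuple(v[3 * i: 3 * i + 3] for i in range(3))
                                    for v in product(S, repeat=9)) if has_identity(op)}

print(len(classes))   # 45
```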
21285
\section{Sets of Permutations of Equivalent Sets are Equivalent} Tags: Permutation Theory \begin{theorem} Let $A$ and $B$ be sets such that: :$A \sim B$ where $\sim$ denotes set equivalence. Let $\map \Gamma A$ denote the set of permutations on $A$. Then: :$\map \Gamma A \sim \map \Gamma B$ \end{theorem} \begin{proof} By definition of set equivalence, let $f: A \to B$ be a bijection. Define $\Phi : \map \Gamma A \to \map \Gamma B$ by: :$\map \Phi \gamma := f \circ \gamma \circ f^{-1}$ By definition of permutation, each $\gamma \in \map \Gamma A$ is a bijection. By Composite of Bijections is Bijection, each $f \circ \gamma$ is a bijection. By Inverse of Bijection is Bijection, the inverse mapping $f^{-1}$ is a bijection. Then by Composite of Bijections is Bijection, each $\map \Phi \gamma = f \circ \gamma \circ f^{-1}$ is a bijection. Thus each $\map \Phi \gamma$ is a permutation of $B$, so $\Phi$ is well-defined. Define $\Psi : \map \Gamma B \to \map \Gamma A$ by: :$\map \Psi \delta := f^{-1} \circ \delta \circ f$ $\Psi$ is a well-defined mapping by the same reasoning as for $\Phi$. Then for each $\gamma \in \map \Gamma A$: {{begin-eqn}} {{eqn | l = \map {\paren {\Psi \circ \Phi} } \gamma | r = \map \Psi {\map \Phi \gamma} }} {{eqn | r = \map \Psi {f \circ \gamma \circ f^{-1} } }} {{eqn | r = f^{-1} \circ \paren {f \circ \gamma \circ f^{-1} } \circ f }} {{eqn | r = \paren {f^{-1} \circ f} \circ \gamma \circ \paren {f^{-1} \circ f} | c = Composition of Mappings is Associative }} {{eqn | r = I_A \circ \gamma \circ I_A | c = Composite of Bijection with Inverse is Identity Mapping }} {{eqn | r = \gamma \circ I_A | c = Identity Mapping is Left Identity }} {{eqn | r = \gamma | c = Identity Mapping is Right Identity }} {{end-eqn}} Thus $\Psi \circ \Phi = I_{\map \Gamma A}$. Similarly, for each $\delta \in \map \Gamma B$: {{begin-eqn}} {{eqn | l = \map {\Phi \circ \Psi} {\delta} | r = \map \Phi {\map \Psi \delta} }} {{eqn | r = \map \Phi {f^{-1} \circ \delta \circ f} }} {{eqn | r = f \circ \paren {f^{-1} \circ \delta \circ f} \circ f^{-1} }} {{eqn | r = \paren {f \circ f^{-1} } \circ \delta \circ \paren {f \circ f^{-1} } | c = Composition of Mappings is Associative }} {{eqn | r = I_B \circ \delta \circ I_B | c = Composite of Bijection with Inverse is Identity Mapping }} {{eqn | r = \delta \circ I_B | c = Identity Mapping is Left Identity }} {{eqn | r = \delta | c = Identity Mapping is Right Identity }} {{end-eqn}} Thus $\Phi \circ \Psi = I_{\map \Gamma B}$. By Bijection iff Left and Right Inverse, both $\Phi$ and $\Psi$ are bijections. In particular: :$\map \Gamma A \sim \map \Gamma B$ {{qed}} \end{proof}
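The map $\Phi$ in the proof is conjugation by $f$. A small Python sketch (not part of the source) illustrating it on an arbitrarily chosen pair of equivalent finite sets, with mappings represented as dictionaries:

```python
from itertools import permutations

A = ['x', 'y', 'z']                      # illustrative sets, chosen for the example
B = [1, 2, 3]
f = dict(zip(A, B))                      # a bijection f: A -> B
f_inv = {v: k for k, v in f.items()}     # its inverse

def Phi(gamma):
    # Phi(gamma) = f o gamma o f^{-1}, a permutation of B
    return {b: f[gamma[f_inv[b]]] for b in B}

perms_A = [dict(zip(A, image)) for image in permutations(A)]
images = [Phi(g) for g in perms_A]

# Phi sends distinct permutations of A to distinct permutations of B,
# so Gamma(A) and Gamma(B) are equivalent (here both have 3! = 6 elements)
print(len(images), len({tuple(sorted(p.items())) for p in images}))   # 6 6
print(all(sorted(p.values()) == sorted(B) for p in images))           # True
```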
21286
\section{Seven Eighths as Pandigital Fraction} Tags: Pandigital Fractions \begin{theorem} $\dfrac 7 8$ cannot be expressed as a pandigital fraction. \end{theorem} \begin{proof} Can be verified by brute force. Category:Pandigital Fractions \end{proof}
21287
\section{Seven Ninths as Pandigital Fraction} Tags: Pandigital Fractions \begin{theorem} $\dfrac 7 9$ cannot be expressed as a pandigital fraction. \end{theorem} \begin{proof} Can be verified by brute force. Category:Pandigital Fractions \end{proof}
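Both of the preceding brute-force claims (for $\dfrac 7 8$ and $\dfrac 7 9$) can be checked with a short script. A sketch (not part of the source), assuming a pandigital fraction to be one whose numerator and denominator together use each of the digits $1$ to $9$ exactly once:

```python
from itertools import permutations
from fractions import Fraction

def pandigital_representations(target):
    """All ways to write target as n/d where the digits of n and d are 1..9, each once."""
    found = []
    for digits in permutations('123456789'):
        for split in range(1, 9):
            n = int(''.join(digits[:split]))
            d = int(''.join(digits[split:]))
            if n * target.denominator == d * target.numerator:
                found.append((n, d))
    return found

print(pandigital_representations(Fraction(1, 2)))   # non-empty, e.g. (6729, 13458)
print(pandigital_representations(Fraction(7, 8)))   # []: 7/8 has no such representation
print(pandigital_representations(Fraction(7, 9)))   # []: 7/9 has no such representation
```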
21288
\section{Seven Touching Cylinders} Tags: Recreational Mathematics, Cylinders \begin{theorem} It is possible to arrange $7$ identical cylinders so that each one touches each of the others. The cylinders must be such that their heights are at least $\dfrac {7 \sqrt 3} 2$ times the diameters of their bases. \end{theorem} \begin{proof} :600px It remains to be proved that the heights of the cylinders must be at least $\dfrac {7 \sqrt 3} 2$ times the diameters of their bases. {{ProofWanted|Prove the above}} \end{proof}
21289
\section{Seventeen Horses/General Problem 1} Tags: Seventeen Horses, Unit Fractions \begin{theorem} A man dies, leaving $n$ indivisible and indistinguishable objects to be divided among $3$ heirs. They are to be distributed in the ratio $\dfrac 1 a : \dfrac 1 b : \dfrac 1 c$. Let $\dfrac 1 a + \dfrac 1 b + \dfrac 1 c < 1$. Then there are $7$ possible values of $\tuple {n, a, b, c}$ such that the required shares are: :$\dfrac {n + 1} a, \dfrac {n + 1} b, \dfrac {n + 1} c$ These values are: :$\tuple {7, 2, 4, 8}, \tuple {11, 2, 4, 6}, \tuple {11, 2, 3, 12}, \tuple {17, 2, 3, 9}, \tuple {19, 2, 4, 5}, \tuple {23, 2, 3, 8}, \tuple {41, 2, 3, 7}$ leading to shares, respectively, of: :$\tuple {4, 2, 1}, \tuple {6, 3, 2}, \tuple {6, 4, 1}, \tuple {9, 6, 2}, \tuple {10, 5, 4}, \tuple {12, 8, 3}, \tuple {21, 14, 6}$ \end{theorem} \begin{proof} It is taken as a condition that $a \ne b \ne c \ne a$. The shares $\dfrac {n + 1} a$, $\dfrac {n + 1} b$, $\dfrac {n + 1} c$ are to be whole numbers which sum to $n$, so: :$\dfrac 1 a + \dfrac 1 b + \dfrac 1 c = \dfrac n {n + 1}$ that is: :$\dfrac 1 a + \dfrac 1 b + \dfrac 1 c + \dfrac 1 {n + 1} = 1$ and so we need to investigate the solutions to this equation. From Sum of 4 Unit Fractions that equals 1, we have that the only possible solutions are: {{begin-eqn}} {{eqn | l = \dfrac 1 2 + \dfrac 1 3 + \dfrac 1 7 + \dfrac 1 {42} | r = 1 }} {{eqn | l = \dfrac 1 2 + \dfrac 1 3 + \dfrac 1 8 + \dfrac 1 {24} | r = 1 }} {{eqn | l = \dfrac 1 2 + \dfrac 1 3 + \dfrac 1 9 + \dfrac 1 {18} | r = 1 }} {{eqn | l = \dfrac 1 2 + \dfrac 1 3 + \dfrac 1 {10} + \dfrac 1 {15} | r = 1 }} {{eqn | l = \dfrac 1 2 + \dfrac 1 3 + \dfrac 1 {12} + \dfrac 1 {12} | r = 1 }} {{eqn | l = \dfrac 1 2 + \dfrac 1 4 + \dfrac 1 5 + \dfrac 1 {20} | r = 1 }} {{eqn | l = \dfrac 1 2 + \dfrac 1 4 + \dfrac 1 6 + \dfrac 1 {12} | r = 1 }} {{eqn | l = \dfrac 1 2 + \dfrac 1 4 + \dfrac 1 8 + \dfrac 1 8 | r = 1 }} {{eqn | l = \dfrac 1 2 + \dfrac 1 5 + \dfrac 1 5 + \dfrac 1 {10} | r = 1 }} {{eqn | l = \dfrac 1 2 + \dfrac 1 6 + \dfrac 1 6 + \dfrac 1 6 | r = 1 }} {{eqn | l = \dfrac 1 3 + \dfrac 1 3 + \dfrac 1 4 + \dfrac 1 {12} | r = 1 }} {{eqn | l = \dfrac 1 3 + \dfrac 1 3 + \dfrac 1 6 + \dfrac 1 {6} | r = 1 }} {{eqn | l = \dfrac 1 3 + \dfrac 1 4 + \dfrac 1 4 + \dfrac 1 6 | r = 1 }} {{eqn | l = \dfrac 1 4 + \dfrac 1 4 + \dfrac 1 4 + \dfrac 1 4 | r = 1 }} {{end-eqn}} From these, we can eliminate the following, because it is not the case that $a \ne b \ne c \ne a$: {{begin-eqn}} {{eqn | l = \dfrac 1 2 + \dfrac 1 5 + \dfrac 1 5 + \dfrac 1 {10} | r = 1 }} {{eqn | l = \dfrac 1 2 + \dfrac 1 6 + \dfrac 1 6 + \dfrac 1 6 | r = 1 }} {{eqn | l = \dfrac 1 3 + \dfrac 1 3 + \dfrac 1 4 + \dfrac 1 {12} | r = 1 }} {{eqn | l = \dfrac 1 3 + \dfrac 1 3 + \dfrac 1 6 + \dfrac 1 {6} | r = 1 }} {{eqn | l = \dfrac 1 3 + \dfrac 1 4 + \dfrac 1 4 + \dfrac 1 6 | r = 1 }} {{eqn | l = \dfrac 1 4 + \dfrac 1 4 + \dfrac 1 4 + \dfrac 1 4 | r = 1 }} {{end-eqn}} Then we can see by inspection that the following are indeed solutions to the problem: {{begin-eqn}} {{eqn | l = \dfrac 1 2 + \dfrac 1 3 + \dfrac 1 7 + \dfrac 1 {42} | r = 1 }} {{eqn | l = \dfrac 1 2 + \dfrac 1 3 + \dfrac 1 8 + \dfrac 1 {24} | r = 1 }} {{eqn | l = \dfrac 1 2 + \dfrac 1 3 + \dfrac 1 9 + \dfrac 1 {18} | r = 1 }} {{eqn | l = \dfrac 1 2 + \dfrac 1 3 + \dfrac 1 {12} + \dfrac 1 {12} | r = 1 }} {{eqn | l = \dfrac 1 2 + \dfrac 1 4 + \dfrac 1 5 + \dfrac 1 {20} | r = 1 }} {{eqn | l = \dfrac 1 2 + \dfrac 1 4 + \dfrac 1 6 + \dfrac 1 {12} | r = 1 }} {{eqn | l = \dfrac 1 2 + \dfrac 1 4 + \dfrac 1 8 + \dfrac 1 8 | r = 1 }} {{end-eqn}} The remaining tuple we have is: :$\dfrac 1 2 + \dfrac 1 3 + \dfrac 1 {10} + \dfrac 1 {15} = 1$ But we note that: :$\dfrac 1 2 + \dfrac 1 3 + \dfrac 1 {10} = \dfrac {28} {30} = \dfrac {14} {15}$ which is not in the correct form: here $n + 1$ would be $15$, but neither $2$ nor $10$ is a divisor of $15$, so the shares are not all whole numbers; taking $\tuple {a, b, c} = \tuple {2, 3, 15}$ with $n + 1 = 10$ fails similarly. Hence the $7$ possible solutions given. {{qed}} \end{proof}
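The case analysis above can be confirmed by a direct search. A sketch (not part of the source) which looks for all $\tuple {n, a, b, c}$ with distinct $a, b, c$ such that $\dfrac 1 a + \dfrac 1 b + \dfrac 1 c = \dfrac n {n + 1}$ and the shares $\dfrac {n + 1} a$, $\dfrac {n + 1} b$, $\dfrac {n + 1} c$ are whole numbers; the search bound of $50$ is an assumption, generous since no denominator in the decompositions above exceeds $42$.

```python
from fractions import Fraction

solutions = []
for a in range(2, 50):
    for b in range(a + 1, 50):
        for c in range(b + 1, 50):
            remainder = 1 - (Fraction(1, a) + Fraction(1, b) + Fraction(1, c))
            if remainder <= 0 or remainder.numerator != 1:
                continue
            m = remainder.denominator          # m = n + 1
            if m % a == 0 and m % b == 0 and m % c == 0:
                solutions.append((m - 1, a, b, c, (m // a, m // b, m // c)))

for s in solutions:
    print(s)
# 7 solutions: (7, 2, 4, 8, ...), (11, 2, 4, 6, ...), (11, 2, 3, 12, ...),
# (17, 2, 3, 9, ...), (19, 2, 4, 5, ...), (23, 2, 3, 8, ...), (41, 2, 3, 7, ...)
```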
21290
\section{Sextuple Angle Formula for Tangent} Tags: Tangent Function, Sextuple Angle Formulas \begin{theorem} :$\tan 6 \theta = \dfrac { 6 \tan \theta - 20 \tan^3 \theta + 6 \tan^5 \theta } { 1 - 15 \tan^2 \theta + 15 \tan^4 \theta - \tan^6 \theta }$ where $\tan$ denotes tangent. {{Delete|duplicate page}} \end{theorem} \begin{proof} {{begin-eqn}} {{eqn | l = \tan 6 \theta | r = \frac {\sin 6 \theta} {\cos 6 \theta} | c = Tangent is Sine divided by Cosine }} {{eqn | r = \frac {\cos^6 \theta \paren {6 \tan \theta - 20 \tan^3 \theta + 6 \tan^5 \theta} } {\cos 6 \theta } | c = Formulation 2/Examples/Sine of Sextuple Angle }} {{eqn | r = \frac {\cos^6 \theta \paren {6 \tan \theta - 20 \tan^3 \theta + 6 \tan^5 \theta} } {\cos^6 \theta \paren {1 - 15 \tan^2 \theta + 15 \tan^4 \theta - \tan^6 \theta} } | c = Formulation 2/Examples/Cosine of Sextuple Angle }} {{eqn | r = \frac { 6 \tan \theta - 20 \tan^3 \theta + 6 \tan^5 \theta } { 1 - 15 \tan^2 \theta + 15 \tan^4 \theta - \tan^6 \theta } | c = }} {{end-eqn}} {{qed}} Category:Tangent Function Category:Sextuple Angle Formulas \end{proof}
21291
\section{Sextuple Angle Formulas/Cosine} Tags: Sextuple Angle Formulas, Sextuple Angle Formula for Cosine, Cosine Function \begin{theorem} :$\cos 6 \theta = 32 \cos^6 \theta - 48 \cos^4 \theta + 18 \cos^2 \theta - 1$ where $\cos$ denotes cosine. \end{theorem} \begin{proof} {{begin-eqn}} {{eqn | l = \cos 6 \theta + i \sin 6 \theta | r = \paren {\cos \theta + i \sin \theta}^6 | c = De Moivre's Formula }} {{eqn | r = \paren {\cos \theta}^6 + \binom 6 1 \paren {\cos \theta}^5 \paren {i \sin \theta} + \binom 6 2 \paren {\cos \theta}^4 \paren {i \sin \theta}^2 | c = Binomial Theorem }} {{eqn | o = | ro=+ | r = \binom 6 3 \paren {\cos \theta}^3 \paren {i \sin \theta}^3 + \binom 6 4 \paren {\cos \theta}^2 \paren {i \sin \theta}^4 + \binom 6 5 \paren {\cos \theta} \paren {i \sin \theta}^5 + \paren {i \sin \theta}^6 }} {{eqn | r = \cos^6 \theta + 6 i \cos^5 \theta \sin \theta - 15 \cos^4 \theta \sin^2 \theta | c = substituting for binomial coefficients }} {{eqn | o = | ro=- | r = 20 i \cos^3 \theta \sin^3 \theta + 15 \cos^2 \theta \sin^4 \theta + 6 i \cos \theta \sin^5 \theta - \sin^6 \theta | c = and using $i^2 = -1$ }} {{eqn | n = 1 | r = \cos^6 \theta - 15 \cos^4 \theta \sin^2 \theta + 15 \cos^2 \theta \sin^4 \theta - \sin^6 \theta }} {{eqn | o = | ro=+ | r = i \paren {6 \cos^5 \theta \sin \theta - 20 \cos^3 \theta \sin^3 \theta + 6 \cos \theta \sin^5 \theta} | c = rearranging }} {{end-eqn}} Hence: {{begin-eqn}} {{eqn | l = \cos 6 \theta | r = \cos^6 \theta - 15 \cos^4 \theta \sin^2 \theta + 15 \cos^2 \theta \sin^4 \theta - \sin^6 \theta | c = equating real parts in $(1)$ }} {{eqn | r = \cos^6 \theta - 15 \cos^4 \theta \paren {1 - \cos^2 \theta} + 15 \cos^2 \theta \paren {1 - \cos^2 \theta}^2 - \paren {1 - \cos^2 \theta}^3 | c = Sum of Squares of Sine and Cosine }} {{eqn | r = 32 \cos^6 \theta - 48 \cos^4 \theta + 18 \cos^2 \theta - 1 | c = multiplying out and gathering terms }} {{end-eqn}} {{qed}} \end{proof}
21292
\section{Sextuple Angle Formulas/Sine} Tags: Sine Function, Sextuple Angle Formulas, Sextuple Angle Formula for Sine \begin{theorem} :$\dfrac {\sin 6 \theta} {\sin \theta} = 32 \cos^5 \theta - 32 \cos^3 \theta + 6 \cos \theta$ where $\cos$ denotes cosine and $\sin$ denotes sine. \end{theorem} \begin{proof} {{begin-eqn}} {{eqn | l = \cos 6 \theta + i \sin 6 \theta | r = \paren {\cos \theta + i \sin \theta}^6 | c = De Moivre's Formula }} {{eqn | r = \paren {\cos \theta}^6 + \binom 6 1 \paren {\cos \theta}^5 \paren {i \sin \theta} + \binom 6 2 \paren {\cos \theta}^4 \paren {i \sin \theta}^2 | c = Binomial Theorem }} {{eqn | o = | ro=+ | r = \binom 6 3 \paren {\cos \theta}^3 \paren {i \sin \theta}^3 + \binom 6 4 \paren {\cos \theta}^2 \paren {i \sin \theta}^4 + \binom 6 5 \paren {\cos \theta} \paren {i \sin \theta}^5 + \paren {i \sin \theta}^6 }} {{eqn | r = \cos^6 \theta + 6 i \cos^5 \theta \sin \theta - 15 \cos^4 \theta \sin^2 \theta | c = substituting for binomial coefficients }} {{eqn | o = | ro=- | r = 20 i \cos^3 \theta \sin^3 \theta + 15 \cos^2 \theta \sin^4 \theta + 6 i \cos \theta \sin^5 \theta - \sin^6 \theta | c = and using $i^2 = -1$ }} {{eqn | n = 1 | r = \cos^6 \theta - 15 \cos^4 \theta \sin^2 \theta + 15 \cos^2 \theta \sin^4 \theta - \sin^6 \theta }} {{eqn | o = | ro=+ | r = i \paren {6 \cos^5 \theta \sin \theta - 20 \cos^3 \theta \sin^3 \theta + 6 \cos \theta \sin^5 \theta} | c = rearranging }} {{end-eqn}} Hence: {{begin-eqn}} {{eqn | l = \sin 6 \theta | r = 6 \cos^5 \theta \sin \theta - 20 \cos^3 \theta \sin^3 \theta + 6 \cos \theta \sin^5 \theta | c = equating imaginary parts in $(1)$ }} {{eqn | ll= \leadsto | l = \dfrac {\map \sin {6 \theta} } {\sin \theta} | r = 6 \cos^5 \theta - 20 \cos^3 \theta \sin^2 \theta + 6 \cos \theta \sin^4 \theta | c = }} {{eqn | r = 6 \cos^5 \theta - 20 \cos^3 \theta \paren {1 - \cos^2 \theta} + 6 \cos \theta \paren {1 - \cos^2 \theta}^2 | c = Sum of Squares of Sine and Cosine }} {{eqn | r = 32 \cos^5 \theta - 32 \cos^3 \theta + 6 \cos \theta | c = multiplying out and gathering terms }} {{end-eqn}} {{qed}} \end{proof}
21293
\section{Sextuple Angle Formulas/Sine/Corollary} Tags: Sextuple Angle Formula for Sine \begin{theorem} :$\sin 6 \theta = 6 \sin \theta \cos \theta - 32 \sin^3 \theta \cos \theta + 32 \sin^5 \theta \cos \theta$ where $\sin$ denotes sine and $\cos$ denotes cosine. \end{theorem} \begin{proof} {{begin-eqn}} {{eqn | l = \sin 6 \theta | r = \paren {2 \cos \theta } \sin 5 \theta - \sin 4 \theta | c = Sine of Integer Multiple of Argument/Formulation 4 }} {{eqn | r = \paren {2 \cos \theta } \paren { 5 \sin \theta - 20 \sin^3 \theta + 16 \sin^5 \theta } - \paren { 4 \sin \theta \cos \theta - 8 \sin^3 \theta \cos \theta } | c = }} {{eqn | r = \paren {10 - 4 } \sin \theta \cos \theta + \paren {-40 + 8} \sin^3 \theta \cos \theta + 32 \sin^5 \theta \cos \theta | c = Gathering terms }} {{eqn | r = 6 \sin \theta \cos \theta - 32 \sin^3 \theta \cos \theta + 32 \sin^5 \theta \cos \theta | c = }} {{end-eqn}} {{qed}} Category:Sextuple Angle Formula for Sine \end{proof}
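The sextuple angle formulas above (for tangent, cosine and sine) can be spot-checked numerically. A small sketch, not part of the source, at a handful of arbitrarily chosen sample angles:

```python
import math

for theta in (0.1, 0.4, 0.7, 1.0, 2.3):
    s, c, t = math.sin(theta), math.cos(theta), math.tan(theta)
    # cos 6t = 32 cos^6 t - 48 cos^4 t + 18 cos^2 t - 1
    assert math.isclose(math.cos(6 * theta),
                        32 * c**6 - 48 * c**4 + 18 * c**2 - 1, abs_tol=1e-9)
    # sin 6t = 6 sin t cos t - 32 sin^3 t cos t + 32 sin^5 t cos t
    assert math.isclose(math.sin(6 * theta),
                        6 * s * c - 32 * s**3 * c + 32 * s**5 * c, abs_tol=1e-9)
    # tan 6t = (6 tan t - 20 tan^3 t + 6 tan^5 t) / (1 - 15 tan^2 t + 15 tan^4 t - tan^6 t)
    assert math.isclose(math.tan(6 * theta),
                        (6 * t - 20 * t**3 + 6 * t**5)
                        / (1 - 15 * t**2 + 15 * t**4 - t**6), abs_tol=1e-9)

print("all three sextuple angle identities hold at the sample angles")
```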
21294
\section{Shape of Cosecant Function} Tags: Cosecant Function, Analysis \begin{theorem} The nature of the cosecant function on the set of real numbers $\R$ is as follows: :$(1): \quad$ strictly decreasing on the intervals $\hointr {-\dfrac \pi 2} 0$ and $\hointl 0 {\dfrac \pi 2}$ :$(2): \quad$ strictly increasing on the intervals $\hointr {\dfrac \pi 2} \pi$ and $\hointl \pi {\dfrac {3 \pi} 2}$ :$(3): \quad$ $\csc x \to +\infty$ as $x \to 0^+$ :$(4): \quad$ $\csc x \to +\infty$ as $x \to \pi^-$ :$(5): \quad$ $\csc x \to -\infty$ as $x \to \pi^+$ :$(6): \quad$ $\csc x \to -\infty$ as $x \to 2 \pi^-$ \end{theorem} \begin{proof} From Derivative of Cosecant Function:: :$\map {D_x} {\csc x} = -\dfrac {\cos x} {\sin^2 x}$ From Sine and Cosine are Periodic on Reals: Corollary: :$\forall x \in \openint {-\dfrac \pi 2} {\dfrac {3 \pi} 2} \setminus \set {0, \pi}: \sin x \ne 0$ Thus, from Square of Non-Zero Element of Ordered Integral Domain is Strictly Positive: :$\forall x \in \openint {-\dfrac \pi 2} {\dfrac {3 \pi} 2} \setminus \set {0, \pi}: \sin^2 x > 0$ {{improve|Might need to find a less abstract-algebraic version of the above result}} From Sine and Cosine are Periodic on Reals: Corollary: :$\cos x > 0$ on the open interval $\openint {-\dfrac \pi 2} {\dfrac \pi 2}$ It follows that: :$\forall x \in \openint {-\dfrac \pi 2} {\dfrac \pi 2} \setminus \set 0: -\dfrac {\cos x} {\sin^2 x} < 0$ From Sine and Cosine are Periodic on Reals: Corollary: :$\cos x < 0$ on the open interval $\openint {\dfrac \pi 2} {\dfrac {3 \pi} 2}$ It follows that: :$\forall x \in \openint {\dfrac \pi 2} {\dfrac {3 \pi} 2} \setminus \set \pi: -\dfrac {\cos x} {\sin^2 x} > 0$ Thus, $(1)$ and $(2)$ follow from Derivative of Monotone Function. From Zeroes of Sine and Cosine: $\sin 0 = \sin \pi = \sin 2 \pi = 0$. From Sine and Cosine are Periodic on Reals: Corollary: :$\sin x > 0$ on the open interval $\openint 0 \pi$ From the same source: :$\sin x < 0$ on the open interval $\openint \pi {2 \pi}$ Thus, $(3)$, $(4)$, $(5)$ and $(6)$ follow from Infinite Limit Theorem. \end{proof}
21295
\section{Shape of Cosine Function} Tags: Analysis, Cosine Function \begin{theorem} The cosine function is: :$(1): \quad$ strictly decreasing on the interval $\closedint 0 \pi$ :$(2): \quad$ strictly increasing on the interval $\closedint \pi {2 \pi}$ :$(3): \quad$ concave on the interval $\closedint {-\dfrac \pi 2} {\dfrac \pi 2}$ :$(4): \quad$ convex on the interval $\closedint {\dfrac \pi 2} {\dfrac {3 \pi} 2}$ \end{theorem} \begin{proof} From the discussion of Sine and Cosine are Periodic on Reals, we know that: :$\cos x \ge 0$ on the closed interval $\closedint {-\dfrac \pi 2} {\dfrac \pi 2}$ and: :$\cos x > 0$ on the open interval $\openint {-\dfrac \pi 2} {\dfrac \pi 2}$ From the same discussion, we have that: :$\map \sin {x + \dfrac \pi 2} = \cos x$ So immediately we have that $\sin x \ge 0$ on the closed interval $\closedint 0 \pi$, $\sin x > 0$ on the open interval $\openint 0 \pi$. But $\map {D_x} {\cos x} = -\sin x$ from Derivative of Cosine Function. Thus from Derivative of Monotone Function, $\cos x$ is strictly decreasing on $\closedint 0 \pi$. From Derivative of Sine Function it follows that: :$\map {D_{xx} } {\cos x} = -\cos x$ On $\closedint {-\dfrac \pi 2} {\dfrac \pi 2}$ where $\cos x \ge 0$, therefore, $\map {D_{xx} } {\cos x} \le 0$. From Second Derivative of Concave Real Function is Non-Positive it follows that $\cos x$ is concave on $\closedint {-\dfrac \pi 2} {\dfrac \pi 2}$. The rest of the result follows similarly. \end{proof}
21296
\section{Shape of Cotangent Function} Tags: Cotangent Function, Analysis \begin{theorem} The nature of the cotangent function on the set of real numbers $\R$ is as follows: :$\cot x$ is continuous and strictly decreasing on the interval $\openint 0 \pi$ :$\cot x \to +\infty$ as $x \to 0^+$ :$\cot x \to -\infty$ as $x \to \pi^-$ :$\cot x$ is not defined on $\forall n \in \Z: x = n \pi$, at which points it is discontinuous :$\forall n \in \Z: \map \cot {\paren {n + \dfrac 1 2} \pi} = 0$ \end{theorem} \begin{proof} $\cot x$ is continuous and strictly decreasing on $\openint 0 \pi$: Continuity follows from the Quotient Rule for Continuous Real Functions: :$(1): \quad$ Both $\sin x$ and $\cos x$ are continuous on $\openint 0 \pi$ from Real Sine Function is Continuous and Cosine Function is Continuous :$(2): \quad \sin x > 0$ on this interval. The fact of $\cot x$ being strictly decreasing on this interval has been demonstrated in the discussion on Cotangent Function is Periodic on Reals. $\cot x \to + \infty$ as $x \to 0^+$: From Sine and Cosine are Periodic on Reals, we have that both $\sin x > 0$ and $\cos x > 0$ on $\openint 0 {\dfrac \pi 2}$. We have that: :$(1): \quad \cos x \to 1$ as $x \to 0^+$ :$(2): \quad \sin x \to 0$ as $x \to 0^+$ Thus it follows that $\cot x = \dfrac {\cos x} {\sin x} \to + \infty$ as $x \to 0^+$. $\cot x \to - \infty$ as $x \to \pi^-$: From Sine and Cosine are Periodic on Reals, we have that $\sin x > 0$ and $\cos x < 0$ on $\openint {\dfrac \pi 2} \pi$. We have that: :$(1): \quad \cos x \to -1$ as $x \to \pi^-$ :$(2): \quad \sin x \to 0$ as $x \to \pi^-$ Thus it follows that $\cot x = \dfrac {\cos x} {\sin x} \to - \infty$ as $x \to \pi^-$. $\cot x$ is not defined and discontinuous at $x = n \pi$: From the discussion of Sine and Cosine are Periodic on Reals, it was established that $\forall n \in \Z: x = n \pi \implies \sin x = 0$. As division by zero is not defined, it follows that at these points $\cot x$ is not defined either. Now, from the above, we have: :$(1): \quad \cot x \to + \infty$ as $x \to 0^+$ :$(2): \quad \cot x \to - \infty$ as $x \to \pi^-$ As $\map \cot {x + \pi} = \cot x$ from Cotangent Function is Periodic on Reals, it follows that $\cot x \to + \infty$ as $x \to \pi^+$. Hence the left hand limit and right hand limit at $x = \pi$ are not the same. From the periodic nature of $\cot x$, it follows that the same applies $\forall n \in \Z: x = n \pi$. The fact of its discontinuity at these points follows from the definition of discontinuity. $\map \cot {\paren {n + \dfrac 1 2} \pi} = 0$: Follows directly from Sine and Cosine are Periodic on Reals: :$\forall n \in \Z: \map \cos {\paren {n + \dfrac 1 2} \pi} = 0$ {{qed}} \end{proof}
21297
\section{Shape of Secant Function} Tags: Analysis, Secant Function \begin{theorem} The nature of the secant function on the set of real numbers $\R$ is as follows: :$(1): \quad \sec x$ is continuous and strictly increasing on the intervals $\hointr 0 {\dfrac \pi 2}$ and $\hointl {\dfrac \pi 2} \pi$ :$(2): \quad \sec x$ is continuous and strictly decreasing on the intervals $\hointr {-\pi} {-\dfrac \pi 2}$ and $\hointl {-\dfrac \pi 2} 0$ :$(3): \quad \sec x \to + \infty$ as $x \to -\dfrac \pi 2^+$ :$(4): \quad \sec x \to + \infty$ as $x \to \dfrac \pi 2^-$ :$(5): \quad \sec x \to - \infty$ as $x \to \dfrac \pi 2^+$ :$(6): \quad \sec x \to - \infty$ as $x \to \dfrac {3 \pi} 2^-$ \end{theorem} \begin{proof} From Derivative of Secant Function: :$\map {D_x} {\sec x} = \dfrac {\sin x} {\cos^2 x}$ From Sine and Cosine are Periodic on Reals: Corollary: :$\forall x \in \openint {-\pi} \pi \setminus \set {-\dfrac \pi 2, \dfrac \pi 2}: \cos x \ne 0$ Thus, from Square of Non-Zero Element of Ordered Integral Domain is Strictly Positive: :$\forall x \in \openint {-\pi} \pi \setminus \set {-\dfrac \pi 2, \dfrac \pi 2}: \cos^2 x > 0$ {{improve|Might need to find a less abstract-algebraic version of the above result}} From Sine and Cosine are Periodic on Reals: Corollary: :$\sin x > 0$ on the open interval $\openint 0 \pi$ It follows that: :$\forall x \in \openint 0 \pi \setminus \set {\dfrac \pi 2}: \dfrac {\sin x} {\cos^2 x} > 0$ From Sine and Cosine are Periodic on Reals: Corollary:: :$\sin x < 0$ on the open interval $\openint {-\pi} 0$ It follows that: :$\forall x \in \openint {-\pi} 0 \setminus \set {-\dfrac \pi 2}: \dfrac {\sin x} {\cos^2 x} < 0$ Thus, $(1)$ and $(2)$ follow from Derivative of Monotone Function and Differentiable Function is Continuous. From Zeroes of Sine and Cosine:: :$\cos - \dfrac \pi 2 = \cos \dfrac \pi 2 = \cos \dfrac {3 \pi} 2 = 0$ From Sine and Cosine are Periodic on Reals: Corollary: :$\cos x > 0$ on the open interval $\openint {-\dfrac \pi 2} {\dfrac \pi 2}$ From the same source: :$\cos x < 0$ on the open interval $\openint {\dfrac \pi 2} {\dfrac {3 \pi} 2}$ Thus, $(3)$, $(4)$, $(5)$ and $(6)$ follow from Infinite Limit Theorem. \end{proof}
21298
\section{Shape of Sine Function} Tags: Sine Function, Analysis \begin{theorem} The sine function is: :$(1): \quad$ strictly increasing on the interval $\closedint {-\dfrac \pi 2} {\dfrac \pi 2}$ :$(2): \quad$ strictly decreasing on the interval $\closedint {\dfrac \pi 2} {\dfrac {3 \pi} 2}$ :$(3): \quad$ concave on the interval $\closedint 0 \pi$ :$(4): \quad$ convex on the interval $\closedint \pi {2 \pi}$ \end{theorem} \begin{proof} From the discussion of Sine and Cosine are Periodic on Reals, we have that: : $\sin \paren {x + \dfrac \pi 2} = \cos x$ The result then follows directly from the Shape of Cosine Function. \end{proof}
21299
\section{Shape of Tangent Function} Tags: Tangent Function, Analysis \begin{theorem} The nature of the tangent function on the set of real numbers $\R$ is as follows: :$\tan x$ is continuous and strictly increasing on the interval $\openint {-\dfrac \pi 2} {\dfrac \pi 2}$ :$\tan x \to +\infty$ as $x \to \dfrac \pi 2 ^-$ :$\tan x \to -\infty$ as $x \to -\dfrac \pi 2 ^+$ :$\tan x$ is not defined on $\forall n \in \Z: x = \paren {n + \dfrac 1 2} \pi$, at which points it is discontinuous :$\forall n \in \Z: \tan \left({n \pi}\right) = 0$. \end{theorem} \begin{proof} $\tan x$ is continuous and strictly increasing on $\openint {-\dfrac \pi 2} {\dfrac \pi 2}$: Continuity follows from the Quotient Rule for Continuous Real Functions: :$(1): \quad$ Both $\sin x$ and $\cos x$ are continuous on $\openint {-\dfrac \pi 2} {\dfrac \pi 2}$ from Real Sine Function is Continuous and Cosine Function is Continuous :$(2): \quad$ $\cos x > 0$ on this interval. The fact of $\tan x$ being strictly increasing on this interval has been demonstrated in the discussion on Tangent Function is Periodic on Reals. $\tan x \to + \infty$ as $x \to \dfrac \pi 2 ^-$: From Sine and Cosine are Periodic on Reals, we have that both $\sin x > 0$ and $\cos x > 0$ on $\openint 0 {\dfrac \pi 2}$. We have that: :$(1): \quad \cos x \to 0$ as $x \to \dfrac \pi 2^-$ :$(2): \quad \sin x \to 1$ as $x \to \dfrac \pi 2^-$ From the Infinite Limit Theorem it follows that: :$\tan x = \dfrac {\sin x} {\cos x} \to + \infty$ as $x \to \dfrac \pi 2 ^-$ $\tan x \to - \infty$ as $x \to -\dfrac \pi 2 ^+$: From Sine and Cosine are Periodic on Reals, we have that $\sin x < 0$ and $\cos x > 0$ on $\openint {-\dfrac \pi 2} 0$. We have that: :$(1): \quad \cos x \to 0$ as $x \to -\dfrac \pi 2 ^+$ :$(2): \quad \sin x \to -1$ as $x \to -\dfrac \pi 2 ^+$ Thus it follows that $\tan x = \dfrac {\sin x} {\cos x} \to -\infty$ as $x \to -\dfrac \pi 2 ^+$. $\tan x$ is not defined and discontinuous at $x = \paren {n + \dfrac 1 2} \pi$: From the discussion of Sine and Cosine are Periodic on Reals, it was established that: :$\forall n \in \Z: x = \paren {n + \dfrac 1 2} \pi \implies \cos x = 0$ As division by zero is not defined, it follows that at these points $\tan x$ is not defined either. Now, from the above, we have: :$(1): \quad \tan x \to + \infty$ as $x \to \dfrac \pi 2^-$ :$(2): \quad \tan x \to - \infty$ as $x \to -\dfrac \pi 2^+$ As $\map \tan {x + \pi} = \tan x$ from Tangent Function is Periodic on Reals, it follows that: :$\tan x \to - \infty$ as $x \to \dfrac \pi 2 ^+$ Hence the left hand limit and right hand limit at $x = \dfrac \pi 2$ are not the same. From Tangent Function is Periodic on Reals, it follows that the same applies $\forall n \in \Z: x = \paren {n + \dfrac 1 2} \pi$. The fact of its discontinuity at these points follows from the definition of discontinuity. $\tan \left({n \pi}\right) = 0$: Follows directly from Sine and Cosine are Periodic on Reals:: :$\forall n \in \Z: \map \sin {n \pi} = 0$ {{qed}} \end{proof}
21300
\section{Shortest Distance between Two Points is Straight Line} Tags: Euclidean Geometry \begin{theorem} The shortest distance between $2$ points is a straight line. \end{theorem} \begin{proof} Let $s$ be the length of a curve between $2$ points $A$ and $B$. The problem becomes one of finding the curve for which $\ds \int_A^B \rd s$ is a minimum. {{ProofWanted|In due course as the work progresses}} Hence such a curve has the equation: :$y = m x + c$ which defines a straight line. \end{proof}
21301
\section{Shortest Possible Distance between Lattice Points on Straight Line in Cartesian Plane} Tags: Perpendicular Distance from Straight Line in Plane to Point, Linear Diophantine Equations, Straight Lines \begin{theorem} Let $\LL$ be the straight line defined by the equation: :$a x - b y = c$ Let $p_1$ and $p_2$ be lattice points on $\LL$. Then the shortest possible distance $d$ between $p_1$ and $p_2$ is: :$d = \dfrac {\sqrt {a^2 + b^2} } {\gcd \set {a, b} }$ where $\gcd \set {a, b}$ denotes the greatest common divisor of $a$ and $b$. \end{theorem} \begin{proof} Let $p_1 = \tuple {x_1, y_1}$ and $p_2 = \tuple {x_2, y_2}$ be on $\LL$. Thus: {{begin-eqn}} {{eqn | l = a x_1 - b y_1 | r = c }} {{eqn | l = a x_2 - b y_2 | r = c | c = }} {{end-eqn}} From Solution of Linear Diophantine Equation, it is necessary and sufficient that: :$\gcd \set {a, b} \divides c$ for such lattice points to exist. Also from Solution of Linear Diophantine Equation, all lattice points on $\LL$ are solutions to the equation: :$\forall k \in \Z: x = x_1 + \dfrac b m k, y = y_1 + \dfrac a m k$ where $m = \gcd \set {a, b}$. So we have: {{begin-eqn}} {{eqn | l = x_2 | r = x_1 + \dfrac b m k }} {{eqn | l = y_2 | r = y_1 + \dfrac a m k | c = }} {{end-eqn}} for some $k \in \Z$ such that $k \ne 0$. The distance $d$ between $p_1$ and $p_2$ is given by the Distance Formula: {{begin-eqn}} {{eqn | l = d | r = \sqrt {\paren {x_1 - \paren {x_1 + \dfrac b m k} }^2 + \paren {y_1 - \paren {y_1 + \dfrac a m k} }^2} | c = }} {{eqn | r = \sqrt {\paren {\dfrac {b k} m}^2 + \paren {\dfrac {a k} m}^2} | c = }} {{eqn | r = \sqrt {\dfrac {k^2 \paren {a^2 + b^2} } {m^2} } | c = }} {{eqn | r = \size k \dfrac {\sqrt {a^2 + b^2} } m | c = }} {{end-eqn}} This is a minimum when $\size k = 1$. Hence the result. {{qed}} \end{proof}
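A quick way to see the formula in action (a sketch, not part of the source, using an arbitrarily chosen illustrative line $4 x - 6 y = 2$): generate lattice points on $\LL$ by stepping from one solution by $\paren {\dfrac b m, \dfrac a m}$ and compare the gap between consecutive points with $\dfrac {\sqrt {a^2 + b^2} } {\gcd \set {a, b} }$.

```python
from math import gcd, hypot

# illustrative line: a x - b y = c with a = 4, b = 6, c = 2
a, b, c = 4, 6, 2
m = gcd(a, b)
assert c % m == 0, "lattice points exist only when gcd(a, b) divides c"

# find one lattice point by brute force, then step by (b/m, a/m)
x0, y0 = next((x, y) for x in range(-20, 20) for y in range(-20, 20) if a * x - b * y == c)
points = [(x0 + (b // m) * k, y0 + (a // m) * k) for k in range(5)]

assert all(a * x - b * y == c for x, y in points)        # all points lie on the line
gaps = [hypot(points[k + 1][0] - points[k][0], points[k + 1][1] - points[k][1])
        for k in range(4)]
print(gaps[0], hypot(a, b) / m)                          # both equal sqrt(a^2 + b^2) / gcd(a, b)
```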
21302
\section{Side of Spherical Triangle is Less than 2 Right Angles} Tags: Spherical Geometry, Spherical Triangles \begin{theorem} Let $ABC$ be a spherical triangle on a sphere $S$. Let $AB$ be a side of $ABC$. The '''length''' of $AB$ is less than $2$ right angles. \end{theorem} \begin{proof} $A$ and $B$ are two points on a great circle $E$ of $S$ which are not both on the same diameter. So $AB$ is not equal to $2$ right angles. Then it is noted that both $A$ and $B$ are in the same hemisphere, from Three Points on Sphere in Same Hemisphere. That means the distance along $E$ is less than one semicircle of $E$. The result follows by definition of spherical angle and length of side of $AB$. {{qed}} \end{proof}
21303
\section{Side of Spherical Triangle is Supplement of Angle of Polar Triangle} Tags: Polar Triangles \begin{theorem} Let $\triangle ABC$ be a spherical triangle on the surface of a sphere whose center is $O$. Let the sides $a, b, c$ of $\triangle ABC$ be measured by the angles subtended at $O$, where $a, b, c$ are opposite $A, B, C$ respectively. Let $\triangle A'B'C'$ be the polar triangle of $\triangle ABC$. Then $A'$ is the supplement of $a$. That is: :$A' = \pi - a$ and it follows by symmetry that: :$B' = \pi - b$ :$C' = \pi - c$ \end{theorem} \begin{proof} :400px Let $BC$ be produced to meet $A'B'$ and $A'C'$ at $L$ and $M$ respectively. Because $A'$ is the pole of the great circle $LBCM$, the spherical angle $A'$ equals the side $LM$ of the spherical triangle $A'LM$. That is: :$(1): \quad \sphericalangle A' = LM$ From Spherical Triangle is Polar Triangle of its Polar Triangle, $\triangle ABC$ is also the polar triangle of $\triangle A'B'C'$. That is, $C$ is a pole of the great circle $A'LB'$. Hence $CL$ is a right angle. Similarly, $BM$ is also a right angle. Thus we have: {{begin-eqn}} {{eqn | l = LM | r = LB + BM | c = }} {{eqn | n = 2 | r = LB + \Box | c = where $\Box$ denotes a right angle }} {{end-eqn}} By definition, we have that: :$BC = a$ {{begin-eqn}} {{eqn | l = BC | r = a | c = by definition of $\triangle ABC$ }} {{eqn | ll= \leadsto | l = LB + a | r = LC | c = }} {{eqn | n = 3 | ll= \leadsto | l = LB | r = \Box - a | c = as $LC = \Box$ }} {{end-eqn}} Then: {{begin-eqn}} {{eqn | l = \sphericalangle A' | r = LM | c = from $(1)$ }} {{eqn | r = LB + \Box | c = from $(2)$ }} {{eqn | r = \paren {\Box - a} + \Box | c = from $(3)$ }} {{eqn | r = \paren {2 \Box} - a | c = }} {{end-eqn}} where $2 \Box$ is $2$ right angles, that is, $\pi$ radians. That is, $A'$ is the supplement of $a$: :$A' = \pi - a$ By applying the same analysis to $B'$ and $C'$, it follows similarly that: :$B' = \pi - b$ :$C' = \pi - c$ {{qed}} \end{proof}
21304
\section{Sides of Equal and Equiangular Parallelograms are Reciprocally Proportional} Tags: Parallelograms \begin{theorem} {{:Euclid:Proposition/VI/14}} Note: in the above, ''equal'' is to be taken to mean ''of equal area''. \end{theorem} \begin{proof} Let $\Box AB$ and $\Box BC$ be two equiangular parallelograms of equal area such that the angles at $B$ are equal. Let $DB, BE$ be placed in a straight line. By Two Angles making Two Right Angles make Straight Line it follows that $FB, BG$ also make a straight line. We need to show that $DB : BE = GB : BF$, that is, the sides about the equal angles are reciprocally proportional. :300px Let the parallelogram $\Box FE$ be completed. We have that $\Box AB$ is of equal area with $\Box BC$, and $\Box FE$ is another area. So from Ratios of Equal Magnitudes: : $\Box AB : \Box FE = \Box BC : \Box FE$ But from Areas of Triangles and Parallelograms Proportional to Base: : $\Box AB : \Box FE = DB : BE$ Also from Areas of Triangles and Parallelograms Proportional to Base: : $\Box BC : \Box FE = GB : BF$ So from Equality of Ratios is Transitive: : $DB : BE = GB : BF$ {{qed|lemma}} Next, suppose that $DB : BE = GB : BF$. From Areas of Triangles and Parallelograms Proportional to Base: : $DB : BE = \Box AB : \Box FE$ Also from Areas of Triangles and Parallelograms Proportional to Base: : $GB : BF = \Box BC : \Box FE$ So from Equality of Ratios is Transitive: : $\Box AB : \Box FE = \Box BC : \Box FE$ So from Magnitudes with Same Ratios are Equal: : $\Box AB = \Box BC$ {{qed}} {{Euclid Note|14|VI}} \end{proof}
21305
\section{Sides of Equiangular Triangles are Reciprocally Proportional} Tags: Triangles \begin{theorem} {{:Euclid:Proposition/VI/15}} Note: in the above, ''equal'' is to be taken to mean ''of equal area''. \end{theorem} \begin{proof} Let $\triangle ABC, \triangle ADE$ be triangles of equal area which have one angle equal to one angle, namely $\angle BAC = \angle DAE$. We need to show that $CA : AD = EA : AB$, that is, the sides about the equal angles are reciprocally proportional. :250px Place them so $CA$ is in a straight line with $AD$. From Two Angles making Two Right Angles make Straight Line $EA$ is also in a straight line with $AB$. Join $BD$. It follows from Ratios of Equal Magnitudes that: : $\triangle CAB : \triangle BAD = \triangle EAD : \triangle BAD$ But from Areas of Triangles and Parallelograms Proportional to Base: : $\triangle CAB : \triangle BAD = CA : AD$ Also from Areas of Triangles and Parallelograms Proportional to Base: :$\triangle EAD : \triangle BAD = EA : AB$ So from Equality of Ratios is Transitive: : $CA : AD = EA : AB$ {{qed|lemma}} Now let the sides in $\triangle ABC, \triangle ADE$ be reciprocally proportional. That is, $CA : AD = EA : AB$. Join $BD$. From Areas of Triangles and Parallelograms Proportional to Base: : $\triangle CAB : \triangle BAD = CA : AD$ Also from Areas of Triangles and Parallelograms Proportional to Base: : $\triangle EAD : \triangle BAD = EA : AB$ It follows from Equality of Ratios is Transitive that: : $\triangle CAB : \triangle BAD = \triangle EAD : \triangle BAD$ So from Magnitudes with Same Ratios are Equal: : $\triangle ABC = \triangle ADE$ {{qed}} {{Euclid Note|15|VI}} \end{proof}
21306
\section{Sierpiński's Theorem} Tags: Hausdorff Spaces, Connected Spaces, Compact Spaces, Connectedness, Sierpiński's Theorem \begin{theorem} Let $\left({S, \tau}\right)$ be a compact connected Hausdorff space. Let $\left\{{F_n: n \in \N}\right\}$ be a pairwise disjoint closed cover of $S$. {{explain|In context it's obvious, but worth mentioning that $F_n$ is a finite cover as well?}} Then $F_n = S$ for some $n \in \N$. \end{theorem} \begin{proof} {{ProofWanted}} {{Namedfor|Wacław Franciszek Sierpiński|cat = Sierpiński}} Category:Compact Spaces Category:Connected Spaces Category:Hausdorff Spaces Category:Sierpiński's Theorem \end{proof}
21307
\section{Sierpiński's Theorem/Lemma 1} Tags: Hausdorff Spaces, Compact Spaces, Sierpiński's Theorem, Connectedness \begin{theorem} Let $\struct {S, \tau}$ be a compact connected Hausdorff space. Let $A$ be a closed, non-empty proper subset of $S$. Let $C$ be a component of $A$. Then: :$C \cap \partial A \ne \O$ where $\partial A$ denotes the boundary of $A$. \end{theorem} \begin{proof} Let $p \in C$. Let $\VV$ be the set of all subsets of $A$ containing $p$ that are clopen relative to $A$.. By Quasicomponents and Components are Equal in Compact Hausdorff Space and Quasicomponent is Intersection of Clopen Sets: :$C$ is the intersection of $\VV$. {{AimForCont}}: :$C \cap \partial A = \O$ By Boundary of Set is Closed, $K \cap \partial A$ is closed for each $K \in \VV$. Thus by Compact Space satisfies Finite Intersection Axiom, there exists a finite set $\VV' \subseteq \VV$ such that $\partial A \cap \bigcap \VV' = \O$. But then: :$\ds K = \bigcap \VV' \in \VV$ {{explain|Where from?}} Therefore there exists a $K \in \VV$ such that $K \cap \partial A = \O$. Since $A$ is closed in $S$, and $K$ is clopen in $A$, $K$ is closed in $S$. {{explain|Where from?}} We have that: :$\partial A = A^- \setminus \map {\operatorname {Int} } A$ where $A^-$ is the closure of $A$ and $\map {\operatorname {Int} } A$ is the interior of $A$ {{explain|Where from?}} Hence as $K \subseteq A$, it follows that $K \subseteq \map {\operatorname {Int} } A$. Since $K$ is open relative to $A$, it is open relative to $\map {\operatorname {Int} } A$. {{explain|"open relative"}} {{explain|Where from?}} We have that $\map {\operatorname {Int} } A$ is open in $S$. {{explain|Where from?}} Therefore $K$ is open in $S$. Thus $K$ is clopen in $S$. We have that $p \in K \subseteq A \subsetneqq S$. {{explain|Where from?}} Therefore $S$ is not connected. {{explain|Where from?}} From this contradiction it follows that: :$C \cap \partial A \ne \O$ {{qed}} Category:Sierpiński's Theorem \end{proof}
21308
\section{Sierpiński Space is Irreducible} Tags: Irreducible Spaces, Hyperconnectedness, Sierpinski Space, Sierpiński Space \begin{theorem} Let $T = \struct {\set {0, 1}, \tau_0}$ be a Sierpiński space. Then $T$ is irreducible. \end{theorem} \begin{proof} A Sierpiński space is a particular point space by definition. A Particular Point Space is Irreducible. {{qed}} \end{proof}
21309
\section{Sierpiński Space is Path-Connected} Tags: Path-Connectedness, Path-Connected Spaces, Sierpinski Space, Sierpiński Space \begin{theorem} Let $T = \struct {\set {0, 1}, \tau_0}$ be a Sierpiński space. Then $T$ is path-connected. \end{theorem} \begin{proof} A Sierpiński space is a particular point space by definition. A Particular Point Space is Path-Connected. {{qed}} \end{proof}
21310
\section{Sierpiński Space is T4} Tags: T4 Spaces, Sierpinski Space, Sierpiński Space \begin{theorem} Let $T = \struct {\set {0, 1}, \tau_0}$ be a Sierpiński space. Then $T$ is a $T_4$ space. \end{theorem} \begin{proof} We have that the Sierpiński Space is $T_5$. Then we have that a $T_5$ Space is $T_4$. {{qed}} \end{proof}
21311
\section{Sierpiński Space is T5} Tags: T5 Spaces, T5 Space, Sierpinski Space, Sierpiński Space \begin{theorem} Let $T = \struct {\set {0, 1}, \tau_0}$ be a Sierpiński space. Then $T$ is a $T_5$ space. \end{theorem} \begin{proof} The only closed sets in $T$ are $\O$, $\set 1$ and $\set {0, 1}$. So there are no two separated sets $A, B \subseteq \set {0, 1}$. So $T$ is a $T_5$ space vacuously. {{qed}} \end{proof}
21312
\section{Sierpiński Space is Ultraconnected} Tags: Ultraconnectedness, Sierpinski Space, Sierpiński Space, Ultraconnected Spaces \begin{theorem} Let $T = \struct {\set {0, 1}, \tau_0}$ be a Sierpiński space. Then $T$ is ultraconnected. \end{theorem} \begin{proof} The only closed sets of $T$ are $\O, \set 1$ and $\set {0, 1}$. $\set 1$ and $\set {0, 1}$ are not disjoint. Hence the result by definition of ultraconnected. {{qed}} \end{proof}
21313
\section{Sierpiński Space is not Arc-Connected} Tags: Arc-Connected Spaces, Sierpinski Space, Arc-Connectedness, Sierpiński Space \begin{theorem} Let $T = \struct {\set {0, 1}, \tau_0}$ be a Sierpiński space. Then $T$ is not arc-connected. \end{theorem} \begin{proof} A Sierpiński space is a particular point space by definition. A Particular Point Space is not Arc-Connected. {{qed}} \end{proof}
21314
\section{Sigma-Algebra Closed under Countable Intersection} Tags: Sigma-Algebras \begin{theorem} Let $X$ be a set, and let $\Sigma$ be a $\sigma$-algebra on $X$. Suppose that $\sequence {E_n}_{n \mathop \in \N} \in \Sigma$ is a collection of measurable sets. Then: :$\ds \bigcap_{n \mathop \in \N} E_n \in \Sigma$, where $\ds \bigcap$ denotes set intersection. \end{theorem} \begin{proof} {{begin-eqn}} {{eqn | q = \forall n \in \N | l = E_n | o = \in | r = \Sigma }} {{eqn | ll= \leadsto | l = X \setminus E_n | o = \in | r = \Sigma | c = Axiom $(2)$ for $\sigma$-algebras }} {{eqn | ll= \leadsto | l = \bigcup_{n \mathop \in \N} \paren {X \setminus E_n} | o = \in | r = \Sigma | c = Axiom $(3)$ for $\sigma$-algebras }} {{eqn | ll= \leadsto | l = X \setminus \paren {\bigcup_{n \mathop \in \N} \paren {X \setminus E_n} } | o = \in | r = \Sigma | c = Axiom $(2)$ for $\sigma$-algebras }} {{end-eqn}} From De Morgan's laws: Complement of Intersection: :$\ds \bigcup_{n \mathop \in \N} \paren {X \setminus E_n} = X \setminus \paren {\bigcap_{n \mathop \in \N} E_n}$ Also, by Set Difference with Set Difference and Set Union Preserves Subsets: :$\ds X \setminus \paren {X \setminus \paren {\bigcap_{n \mathop \in \N} E_n} } = \bigcap_{n \mathop \in \N} E_n$ Combining the previous equalities, it follows that: :$\ds \bigcap_{n \mathop \in \N} E_n \in \Sigma$ {{qed}} \end{proof}
21315
\section{Sigma-Algebra Closed under Finite Intersection} Tags: Sigma-Algebras, Proofread \begin{theorem} Let $X$ be a set, and let $\Sigma$ be a $\sigma$-algebra on $X$. Let $A_1, \ldots, A_n \in \Sigma$. Then $\ds \bigcap_{k \mathop = 1}^n A_k \in \Sigma$. \end{theorem} \begin{proof} Define for $k \in \N, k > n: A_k = X$. By axiom $(1)$ of a $\sigma$-algebra, it follows that $\forall k \in \N, k > n: A_k \in \Sigma$. From Sigma-Algebra Closed under Countable Intersection, it follows that $\ds \bigcap_{k \mathop \in \N} A_k = \bigcap_{k \mathop = 1}^n A_k \in \Sigma$. {{qed}} Category:Sigma-Algebras \end{proof}
21316
\section{Sigma-Algebra Closed under Set Difference} Tags: Sigma-Algebras, Set Difference \begin{theorem} Let $\struct {X, \Sigma}$ be a measurable space. Let $A, B \in \Sigma$. Then the set difference $A \setminus B$ is contained in $\Sigma$. \end{theorem} \begin{proof} Since $\sigma$-algebras are closed under relative complement, we have: :$\relcomp X B \in \Sigma$ By Sigma-Algebra Closed under Finite Intersection, we have: :$A \cap \relcomp X B \in \Sigma$ From Set Difference as Intersection with Relative Complement, we have: :$A \setminus B = A \cap \relcomp X B$ so: :$A \setminus B \in \Sigma$ {{qed}} Category:Sigma-Algebras Category:Set Difference \end{proof}
21317
\section{Sigma-Algebra Closed under Symmetric Difference} Tags: Symmetric Difference, Sigma-Algebras \begin{theorem} Let $\struct {X, \Sigma}$ be a measurable space. Let $A, B \in \Sigma$. Then the symmetric difference $A \Delta B$ is contained in $\Sigma$. \end{theorem} \begin{proof} From Sigma-Algebra Closed under Set Difference, we have: :$A \setminus B \in \Sigma$ and: :$B \setminus A \in \Sigma$ Since $\sigma$-algebras are closed under countable union, we have: :$\paren {A \setminus B} \cup \paren {B \setminus A} \in \Sigma$ From the definition of symmetric difference, we have: :$A \Delta B = \paren {A \setminus B} \cup \paren {B \setminus A}$ so: :$A \Delta B \in \Sigma$ {{qed}} Category:Sigma-Algebras Category:Symmetric Difference \end{proof}
21318
\section{Sigma-Algebra Closed under Union} Tags: Sigma-Algebras \begin{theorem} Let $X$ be a set, and let $\Sigma$ be a $\sigma$-algebra on $X$. Let $A, B \in \Sigma$ be measurable sets. Then $A \cup B \in \Sigma$, where $\cup$ denotes set union. \end{theorem} \begin{proof} Define $A_1 = A, A_2 = B$, and for $n \in \N, n > 2: A_n = \O$. Then by Sigma-Algebra Contains Empty Set, axiom $(3)$ of a $\sigma$-algebra applies. Hence: :$\ds \bigcup_{n \mathop \in \N} A_n = A \cup B \in \Sigma$ {{qed}} \end{proof}
21319
\section{Sigma-Algebra Closed under Union/Corollary} Tags: Sigma-Algebras \begin{theorem} Let $X$ be a set, and let $\Sigma$ be a $\sigma$-algebra on $X$. Let $A_1, \ldots, A_n \in \Sigma$. Then $\ds \bigcup_{k \mathop = 1}^n A_k \in \Sigma$. \end{theorem} \begin{proof} Define for $k \in \N, k > n: A_k = \O$. Then by Sigma-Algebra Contains Empty Set, axiom $(3)$ of a $\sigma$-algebra applies. Hence: :$\ds \bigcup_{k \mathop \in \N} A_k = \bigcup_{k \mathop = 1}^n A_k \in \Sigma$ {{qed}} Category:Sigma-Algebras \end{proof}
21320
\section{Sigma-Algebra Contains Empty Set} Tags: Sigma-Algebras \begin{theorem} Let $X$ be a set, and let $\Sigma$ be a $\sigma$-algebra on $X$. Then $\O \in \Sigma$. \end{theorem} \begin{proof} Axiom $(1)$ of a $\sigma$-algebra grants $X \in \Sigma$. By axiom $(2)$ and Set Difference with Self is Empty Set, it follows that $\O = X \setminus X \in \Sigma$. {{qed}} \end{proof}
21321
\section{Sigma-Algebra Contains Generated Sigma-Algebra of Subset} Tags: Sigma-Algebras \begin{theorem} Let $\sigma_\FF$ be a $\sigma$-algebra on a set $\FF$. Let $\sigma_\FF$ contain a set of sets $\EE$. Let $\map \sigma \EE$ be the $\sigma$-algebra generated by $\EE$. Then $\map \sigma \EE \subseteq \sigma_\FF$. \end{theorem} \begin{proof} $\sigma_\FF$ is a $\sigma$-algebra containing $\EE$. By definition of a generated $\sigma$-algebra, $\map \sigma \EE$ is a subset of ''all'' $\sigma$-algebras containing $\EE$. Therefore $\sigma_\FF$ contains $\map \sigma \EE$. {{qed}} \end{proof}
21322
\section{Sigma-Algebra Extended by Single Set} Tags: Sigma-Algebras \begin{theorem} Let $\Sigma$ be a $\sigma$-algebra on a set $X$. Let $S \subseteq X$ be a subset of $X$. For subsets $T \subseteq X$ of $X$, denote $T^\complement$ for the set difference $X \setminus T$. Then: :$\map \sigma {\Sigma \cup \set S} = \set {\paren {E_1 \cap S} \cup \paren {E_2 \cap S^\complement}: E_1, E_2 \in \Sigma}$ where $\sigma$ denotes generated $\sigma$-algebra. \end{theorem} \begin{proof} Define $\Sigma'$ as follows: :$\Sigma' := \set {\paren {E_1 \cap S} \cup \paren {E_2 \cap S^\complement}: E_1, E_2 \in \Sigma}$ Picking $E_1 = X$ and $E_2 = \O$ (allowed by Sigma-Algebra Contains Empty Set), it follows that $S \in \Sigma'$. On the other hand, for any $E_1 \in \Sigma$, have by Intersection Distributes over Union and Union with Relative Complement: :$\paren {E_1 \cap S} \cup \paren {E_1 \cap S^\complement} = E_1 \cap \paren {S \cup S^\complement} = E_1 \cap X = E_1$ Hence $E_1 \in \Sigma'$ for all $E_1$, hence $\Sigma \subseteq \Sigma'$. Therefore, $\Sigma \cup \set S \subseteq \Sigma'$. Moreover, from Sigma-Algebra Closed under Union, Sigma-Algebra Closed under Intersection and axiom $(2)$ for a $\sigma$-algebra, it is necessarily the case that: :$\Sigma' \subseteq \map \sigma {\Sigma \cup \set S}$ It will thence suffice to demonstrate that $\Sigma'$ is a $\sigma$-algebra. Since $X \in \Sigma$, also $X \in \Sigma'$. Next, for any $E_1, E_2 \in \Sigma$, observe: {{begin-eqn}} {{eqn | l = \paren {\paren {E_1 \cap S} \cup \paren {E_2 \cap S^\complement} }^\complement | r = \paren {E_1 \cap S}^\complement \cap \paren {E_2 \cap S^\complement}^\complement | c = De Morgan's Laws: Difference with Union }} {{eqn | r = \paren {E_1^\complement \cup S^\complement} \cap \paren {E_2^\complement \cup S} | c = De Morgan's Laws: Difference with Intersection, Set Difference with Set Difference }} {{eqn | r = \paren {\paren {E_1^\complement \cup S^\complement} \cap E_2^\complement} \cup \paren {\paren {E_1^\complement \cup S^\complement} \cap S} | c = Intersection Distributes over Union }} {{eqn | r = \paren {E_1^\complement \cap E_2^\complement} \cup \paren {E_2^\complement \cap S^\complement} \cup \paren {E_1^\complement \cap S} \cup \paren {S^\complement \cap S} | c = Union Distributes over Intersection }} {{eqn | r = \paren {\paren {E_1^\complement \cap E_2^\complement} \cap \paren {S^\complement \cup S} } \cup \paren {E_2^\complement \cap S^\complement} \cup \paren {E_1^\complement \cap S} | c = Union with Relative Complement, Set Difference Intersection with Second Set is Empty Set }} {{eqn | r = \paren {E_1^\complement \cap E_2^\complement \cap S} \cup \paren {E_1^\complement \cap S} \cup \paren {E_1^\complement \cap E_2^\complement \cap S^\complement} \cup \paren {E_2^\complement \cap S^\complement} | c = Intersection Distributes over Union }} {{eqn | r = \paren {\paren {\paren {E_1^\complement \cap E_2^\complement} \cup E_1^\complement} \cap S} \cup \paren {\paren {\paren {E_1^\complement \cap E_2^\complement} \cup E_2^\complement} \cap S^\complement} | c = Intersection Distributes over Union }} {{eqn | r = \paren {E_1^\complement \cap S} \cup \paren {E_2^\complement \cap S^\complement} | c = Intersection is Subset, Union with Superset is Superset }} {{end-eqn}} As $\Sigma$ is a $\sigma$-algebra, $E_1^\complement, E_2^\complement \in \Sigma$ and so indeed: :$\paren {\paren {E_1 \cap S} \cup \paren {E_2 \cap S^\complement} }^\complement \in \Sigma'$ Finally, let $\sequence {E_{1, n} }_{n \mathop \in \N}$ and 
$\sequence {E_{2, n} }_{n \mathop \in \N}$ be sequences in $\Sigma$. Then: {{begin-eqn}} {{eqn | l = \bigcup_{n \mathop \in \N} \paren {E_{1, n} \cap S} \cup \paren {E_{2, n} \cap S^\complement} | r = \paren {\bigcup_{n \mathop \in \N} \paren {E_{1, n} \cap S} } \cup \paren {\bigcup_{n \mathop \in \N} \paren {E_{2, n} \cap S^\complement} } | c = Union Distributes over Union/Families of Sets }} {{eqn | r = \paren {\paren {\bigcup_{n \mathop \in \N} E_{1, n} } \cap S} \cup \paren {\paren {\bigcup_{n \mathop \in \N} E_{2, n} } \cap S^\complement} | c = Union Distributes over Intersection }} {{end-eqn}} Since $\ds \bigcup_{n \mathop \in \N} E_{1, n}, \bigcup_{n \mathop \in \N} E_{2, n} \in \Sigma$, it follows that: :$\ds \bigcup_{n \mathop \in \N} \paren {E_{1, n} \cap S} \cup \paren {E_{2, n} \cap S^\complement} \in \Sigma'$ Hence it is established that $\Sigma'$ is a $\sigma$-algebra. It follows that: :$\ds \map \sigma {\Sigma \cup \set S} = \Sigma'$ {{qed}} Category:Sigma-Algebras \end{proof}
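The identity can be sanity-checked on a small finite example. The following Python sketch is purely illustrative (the sets $X$, $\Sigma$ and $S$ are toy choices, not part of the proof): it builds $\Sigma'$ and verifies that it contains $\Sigma \cup \set S$ and is closed under complement and union, which on a finite set suffices for a $\sigma$-algebra.

```python
from itertools import product

X = frozenset({1, 2, 3, 4})
Sigma = {frozenset(), frozenset({1, 2}), frozenset({3, 4}), X}   # a sigma-algebra on X
S = frozenset({1, 3})                                            # the extending set

# Sigma' = { (E1 ∩ S) ∪ (E2 ∩ S^c) : E1, E2 in Sigma }
Sc = X - S
SigmaPrime = {(E1 & S) | (E2 & Sc) for E1, E2 in product(Sigma, repeat=2)}

assert Sigma <= SigmaPrime and S in SigmaPrime
# Closure under complement and (finite) union: enough for a sigma-algebra on a finite set.
assert all(X - A in SigmaPrime for A in SigmaPrime)
assert all(A | B in SigmaPrime for A in SigmaPrime for B in SigmaPrime)
print(len(SigmaPrime), "sets in the extended sigma-algebra")
```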
21323
\section{Sigma-Algebra Generated by Complements of Generators} Tags: Sigma-Algebras \begin{theorem} Let $\Sigma$ be a $\sigma$-algebra on a set $X$. Let $\GG$ be a generator for $\Sigma$. Then: :$\GG' := \set {X \setminus G: G \in \GG}$ the set of relative complements of $\GG$, is also a generator for $\Sigma$. \end{theorem} \begin{proof} {{begin-eqn}} {{eqn | q = \forall G \in \GG | l = G | o = \in | r = \Sigma | c = {{Defof|Sigma-Algebra Generated by Collection of Subsets/Generator|Generator of Sigma-Algebra}} }} {{eqn | ll= \leadsto | q = \forall G \in \GG | l = X \setminus G | o = \in | r = \Sigma | c = {{Defof|Sigma-Algebra}}: $\text{(SA3)}$ }} {{eqn | ll= \leadsto | q = \forall G \in \GG' | l = G | o = \in | r = \Sigma }} {{eqn | ll= \leadsto | l = \map \sigma {\GG'} | o = \subseteq | r = \Sigma | c = {{Defof|Sigma-Algebra Generated by Collection of Subsets}} }} {{eqn | q = \forall G \in \GG' | l = G | o = \in | r = \map \sigma {\GG'} | c = {{Defof|Sigma-Algebra Generated by Collection of Subsets}} }} {{eqn | ll= \leadsto | q = \forall G \in \GG | l = X \setminus G | o = \in | r = \map \sigma {\GG'} }} {{eqn | ll= \leadsto | q = \forall G \in \GG | l = X \setminus \paren {X \setminus G} | o = \in | r = \map \sigma {\GG'} | c = {{Defof|Sigma-Algebra}}: $\text{(SA3)}$ }} {{eqn | ll= \leadsto | q = \forall G \in \GG | l = X \cap G | o = \in | r = \map \sigma {\GG'} | c = Set Difference with Set Difference }} {{eqn | ll= \leadsto | q = \forall G \in \GG | l = G | o = \in | r = \map \sigma {\GG'} | c = Intersection with Subset is Subset }} {{eqn | ll= \leadsto | l = \Sigma | o = \subseteq | r = \map \sigma {\GG'} | c = {{Defof|Sigma-Algebra Generated by Collection of Subsets/Generator|Generator of Sigma-Algebra}} }} {{eqn | ll= \leadsto | l = \Sigma | r = \map \sigma {\GG'} | c = {{Defof|Set Equality|index = 2}} }} {{end-eqn}} Hence the result. {{qed}} Category:Sigma-Algebras \end{proof}
21324
\section{Sigma-Algebra as Magma of Sets} Tags: Sigma-Algebras, Magmas of Sets \begin{theorem} The concept of $\sigma$-algebra is an instance of a magma of sets. \end{theorem} \begin{proof} It will suffice to define partial mappings such that the axiom for a magma of sets crystallises into the axioms for a $\sigma$-algebra. Let $X$ be any set, and let $\powerset X$ be its power set. Define: :$\phi_1: \powerset X \to \powerset X: \map {\phi_1} S := X$ :$\phi_2: \powerset X \to \powerset X: \map {\phi_2} S := X \setminus S$ :$\phi_3: \powerset X^\N \to \powerset X: \map {\phi_3} {\sequence {S_n}_{n \mathop \in \N} } := \ds \bigcup_{n \mathop \in \N} S_n$ It is blatantly obvious that $\phi_1, \phi_2$ and $\phi_3$ capture the axioms for a $\sigma$-algebra. {{finish|Bladibladiblabla, i.e. show that they are mappings, and that the MoS property translates into the $\sigma$-axiom}} {{qed}} Category:Sigma-Algebras Category:Magmas of Sets \end{proof}
21325
\section{Sigma-Algebra is Delta-Algebra} Tags: Sigma-Algebras, Set Systems \begin{theorem} A $\sigma$-algebra is also a $\delta$-algebra. \end{theorem} \begin{proof} Let $\SS$ be a $\sigma$-algebra whose unit is $\mathbb U$. Let $A_1, A_2, \ldots$ be a countably infinite collection of elements of $\SS$. Then: {{begin-eqn}} {{eqn | q = \forall i | l = \mathbb U \setminus A_i | o = \in | r = \SS | c = $\SS$ is closed under relative complement with $\mathbb U$ }} {{eqn | ll= \leadsto | l = \bigcup_{i \mathop = 1}^\infty \paren {\mathbb U \setminus A_i} | o = \in | r = \SS | c = $\SS$ is closed under countable unions }} {{eqn | ll= \leadsto | l = \mathbb U \setminus \bigcap_{i \mathop = 1}^\infty A_i | o = \in | r = \SS | c = De Morgan's Laws }} {{eqn | ll= \leadsto | l = \bigcap_{i \mathop = 1}^\infty A_i | o = \in | r = \SS | c = $\SS$ is closed under relative complement with $\mathbb U$ }} {{end-eqn}} Thus $\SS$ is a $\delta$-algebra. {{qed}} \end{proof}
21326
\section{Sigma-Algebra is Dynkin System} Tags: Dynkin Systems, Sigma-Algebras \begin{theorem} Let $X$ be a set, and let $\Sigma$ be a $\sigma$-algebra on $X$. Then $\Sigma$ is a Dynkin system on $X$. \end{theorem} \begin{proof} The axioms $(1)$ and $(2)$ for both $\sigma$-algebras and Dynkin systems are identical. Dynkin system axiom $(3)$ is seen to be a specification of $\sigma$-algebra axiom $(3)$ to pairwise disjoint sequences. Hence $\Sigma$ is trivially a Dynkin system on $X$. {{qed}} \end{proof}
21327
\section{Sigma-Algebra is Monotone Class} Tags: Sigma-Algebras, Monotone Classes \begin{theorem} Let $\Sigma$ be a $\sigma$-algebra on a set $X$. Then $\Sigma$ is also a monotone class. \end{theorem} \begin{proof} By definition, $\Sigma$, being a $\sigma$-algebra, is closed under countable unions. From Sigma-Algebra Closed under Countable Intersection, it is also closed under countable intersections. Thence, by definition, $\Sigma$ is a monotone class. {{qed}} Category:Sigma-Algebras Category:Monotone Classes \end{proof}
21328
\section{Sigma-Algebra of Countable Sets} Tags: Sigma-Algebras \begin{theorem} Let $X$ be a set. Let $\Sigma$ be the set of countable and co-countable subsets of $X$. Then $\Sigma$ is a $\sigma$-algebra. \end{theorem} \begin{proof} Let us verify in turn the axioms of a $\sigma$-algebra. \end{proof}
21329
\section{Sigma-Algebras with Independent Generators are Independent} Tags: Measure Theory \begin{theorem} Let $\struct {\Omega, \EE, \Pr}$ be a probability space. Let $\Sigma, \Sigma'$ be sub-$\sigma$-algebras of $\EE$. Suppose that $\GG, \HH$ are $\cap$-stable generators for $\Sigma, \Sigma'$, respectively. Suppose that, for all $G \in \GG, H \in \HH$: :$(1): \quad \map \Pr {G \cap H} = \map \Pr G \map \Pr H$ Then $\Sigma$ and $\Sigma'$ are $\Pr$-independent. \end{theorem} \begin{proof} Fix $H \in \HH$. Define, for $E \in \Sigma$: :$\map \mu E := \map \Pr {E \cap H}$ :$\map \nu E := \map \Pr E \map \Pr H$ Then by Intersection Measure is Measure and Restricted Measure is Measure, $\mu$ is a measure on $\Sigma$. Namely, it is the intersection measure $\Pr_H$ restricted to $\Sigma$, that is $\Pr_H \restriction_\Sigma$. Next, by Linear Combination of Measures and Restricted Measure is Measure, $\nu$ is also a measure on $\Sigma$. Namely, it is the restricted measure $\map \Pr H \Pr \restriction_\Sigma$. Let $\GG' := \GG \cup \set \Omega$. It is immediate that $\GG'$ is also a $\cap$-stable generator for $\Sigma$. By assumption $(1)$, $\mu$ and $\nu$ coincide on $\GG'$: they coincide on $\GG$ by $(1)$, and on $\Omega$ since $\map \Pr \Omega = 1$. From Restricting Measure Preserves Finiteness, $\mu$ and $\nu$ are also finite measures. Hence, $\GG'$ contains the exhausting sequence of which every term equals $\Omega$. Having verified all conditions, Uniqueness of Measures applies to yield $\mu = \nu$. Now fix $E \in \Sigma$ and define, for $E' \in \Sigma'$: :$\map {\mu'_E} {E'} := \map \Pr {E \cap E'}$ :$\map {\nu'_E} {E'} := \map \Pr E \map \Pr {E'}$ Mutatis mutandis, the above consideration applies again, and we conclude by Uniqueness of Measures: :$\mu'_E = \nu'_E$ for all $E \in \Sigma$. That is, expanding the definition of the measures $\mu'_E$ and $\nu'_E$: :$\forall E \in \Sigma: \forall E' \in \Sigma': \map \Pr {E \cap E'} = \map \Pr E \map \Pr {E'}$ This is precisely the statement that $\Sigma$ and $\Sigma'$ are $\Pr$-independent $\sigma$-algebras. {{qed}} \end{proof}
21330
\section{Sigma-Compact Space is Lindelöf} Tags: Compact Spaces, Sigma-Compact Spaces, Lindelöf Spaces \begin{theorem} Every $\sigma$-compact space is a Lindelöf space. \end{theorem} \begin{proof} Let $T = \struct {S, \tau}$ be a $\sigma$-compact space. By definition: :$T$ is a Lindelöf space {{iff}} every open cover of $S$ has a countable subcover. By definition of $\sigma$-compact space, $S = \bigcup \TT$ where $\TT$ is a countable set of compact subspaces of $T$. Let $\CC$ be an open cover of $T$. Each element of $\TT$ is compact, and so is covered by a finite number of elements of $\CC$. Hence $T$ is covered by the union of countably many such finite collections of elements of $\CC$, which is a countable subset of $\CC$. Hence $\CC$ has a countable subcover. Hence the result. {{qed}} \end{proof}
21331
\section{Sigma-Compactness is Preserved under Continuous Surjection} Tags: Surjections, Continuous Mappings, Sigma-Compact Spaces \begin{theorem} Let $T_A = \struct {S_A, \tau_A}$ and $T_B = \struct {S_B, \tau_B}$ be topological spaces. Let $\phi: T_A \to T_B$ be a continuous surjection. If $T_A$ is $\sigma$-compact, then $T_B$ is also $\sigma$-compact. \end{theorem} \begin{proof} Let $T_A$ be $\sigma$-compact. Then: :$\ds S_A = \bigcup_{i \mathop = 1}^\infty S_i$ where $S_i \subseteq S_A$ are compact. Since $\phi$ is surjective, we have from Image of Union under Relation: :$\ds \phi \sqbrk {S_A} = S_B = \phi \sqbrk {\bigcup_{i \mathop = 1}^\infty S_i} = \bigcup_{i \mathop = 1}^\infty \phi \sqbrk {S_i}$ From Compactness is Preserved under Continuous Surjection, we have that $\phi \sqbrk {S_i}$ is compact for all $i \in \N$. So $S_B$ is the union of a countable number of compact subsets. Thus, by definition, $T_B$ is also $\sigma$-compact. {{qed}} \end{proof}
21332
\section{Sigma-Ring is Closed under Countable Intersections} Tags: Sigma-Rings \begin{theorem} Let $\RR$ be a $\sigma$-ring. Let $\sequence {A_n}_{n \mathop \in \N} \in \RR$ be a sequence of sets in $\RR$. Then: :$\ds \bigcap_{n \mathop = 1}^\infty A_n \in \RR$ \end{theorem} \begin{proof} {{begin-eqn}} {{eqn | q = \forall n \in \N_{>0} | l = A_1, A_n \in \RR | o = \leadsto | r = A_1 \setminus A_n \in \RR | c = Axiom $(\text {SR} 2)$ for $\sigma$-rings }} {{eqn | o = \leadsto | r = \bigcup_{n \mathop = 2}^\infty \paren {A_1 \setminus A_n} \in \RR | c = Axiom $(\text {SR} 3)$ for $\sigma$-rings }} {{eqn | o = \leadsto | r = A_1 \setminus \paren {\bigcup_{n \mathop = 2}^\infty \paren {A_1 \setminus A_n} } \in \RR | c = Axiom $(\text {SR} 2)$ for $\sigma$-rings }} {{end-eqn}} From De Morgan's laws: Difference with Intersection: :$\ds \bigcup_{n \mathop = 2}^\infty \paren {A_1 \setminus A_n} = A_1 \setminus \paren {\bigcap_{n \mathop = 2}^\infty A_n}$ From Set Difference with Set Difference: {{begin-eqn}} {{eqn | l = A_1 \setminus \paren {A_1 \setminus \paren {\ds \bigcap_{n \mathop = 2}^\infty A_n} } | r = A_1 \cap \paren {\ds \bigcap_{n \mathop = 2}^\infty A_n} }} {{eqn | r = {\bigcap_{n \mathop = 1}^\infty A_n} }} {{end-eqn}} Combining the previous equalities, it follows that: :$\ds \bigcap_{n \mathop = 1}^\infty A_n \in \RR$ {{qed}} \end{proof}
21333
\section{Sign of Composition of Permutations} Tags: Permutation Theory, Sign of Permutation \begin{theorem} Let $n \in \N$ be a natural number. Let $N_n$ denote the set of natural numbers $\set {1, 2, \ldots, n}$. Let $S_n$ denote the set of permutations on $N_n$. Let $\map \sgn \pi$ denote the sign of a permutation $\pi$ of $N_n$. Let $\pi_1, \pi_2 \in S_n$. Then: :$\map \sgn {\pi_1} \map \sgn {\pi_2} = \map \sgn {\pi_1 \circ \pi_2}$ where $\pi_1 \circ \pi_2$ denotes the composite of $\pi_1$ and $\pi_2$. \end{theorem} \begin{proof} From Sign of Permutation on n Letters is Well-Defined, it is established that the sign of each of $\pi_1$, $\pi_2$ and $\pi_1 \circ \pi_2$ is either $+1$ or $-1$. By Existence and Uniqueness of Cycle Decomposition, each of $\pi_1$ and $\pi_2$ has a unique cycle decomposition. Thus each of $\pi_1$ and $\pi_2$ can be expressed as the composite of $p_1$ and $p_2$ transpositions respectively. Thus $\pi_1 \circ \pi_2$ can be expressed as the composite of $p_1 + p_2$ transpositions. From Sum of Even Integers is Even, if $p_1$ and $p_2$ are both even then $p_1 + p_2$ is even. In this case: :$\map \sgn {\pi_1} = 1$ :$\map \sgn {\pi_2} = 1$ :$\map \sgn {\pi_1 \circ \pi_2} = 1 = \map \sgn {\pi_1} \map \sgn {\pi_2}$ {{finish}} {{improve}} {{proofread}} \end{proof}
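While the proof above remains to be completed, the multiplicative property itself is easy to check computationally. The following Python sketch (illustrative only) computes the sign of a permutation by counting inversions and verifies $\map \sgn {\pi_1 \circ \pi_2} = \map \sgn {\pi_1} \map \sgn {\pi_2}$ over the whole of $S_4$.

```python
from itertools import permutations

def sign(p):
    """Sign of a permutation given as a tuple p, where p[i] is the image of i.
    Computed as (-1) raised to the number of inversions."""
    inversions = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return (-1) ** inversions

def compose(p1, p2):
    """Composite p1 o p2: first apply p2, then p1 (acting on indices 0..n-1)."""
    return tuple(p1[p2[i]] for i in range(len(p2)))

n = 4
for p1 in permutations(range(n)):
    for p2 in permutations(range(n)):
        assert sign(compose(p1, p2)) == sign(p1) * sign(p2)
print("sgn(p1 o p2) = sgn(p1) sgn(p2) verified on all of S_4")
```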
21334
\section{Sign of Cosecant} Tags: Cosecant Function, Sine Function \begin{theorem} Let $x$ be a real number. Then: {{begin-eqn}} {{eqn | l = \csc x | o = > | r = 0 | c = if there exists an integer $n$ such that $2 n \pi < x < \paren {2 n + 1} \pi$ }} {{eqn | l = \csc x | o = < | r = 0 | c = if there exists an integer $n$ such that $\paren {2 n + 1} \pi < x < \paren {2 n + 2} \pi$ }} {{end-eqn}} where $\csc$ is the real cosecant function. \end{theorem} \begin{proof} For the first part: {{begin-eqn}} {{eqn | l = \sin x | o = > | r = 0 | c = if there exists an integer $n$ such that $2 n \pi < x < \paren {2 n + 1} \pi$ | cc= Sign of Sine }} {{eqn | ll= \leadsto | l = \frac 1 {\sin x} | o = > | r = 0 | c = if there exists an integer $n$ such that $2 n \pi < x < \paren {2 n + 1} \pi$ | cc= Reciprocal of Strictly Positive Real Number is Strictly Positive }} {{eqn | ll= \leadsto | l = \csc x | o = > | r = 0 | c = if there exists an integer $n$ such that $2 n \pi < x < \paren {2 n + 1} \pi$ | cc= Cosecant is Reciprocal of Sine }} {{end-eqn}} For the second part: {{begin-eqn}} {{eqn | l = \sin x | o = < | r = 0 | c = if there exists an integer $n$ such that $\paren {2 n + 1} \pi < x < \paren {2 n + 2} \pi$ | cc= Sign of Sine }} {{eqn | ll= \leadsto | l = \frac 1 {\sin x} | o = < | r = 0 | c = if there exists an integer $n$ such that $\paren {2 n + 1} \pi < x < \paren {2 n + 2} \pi$ | cc= Reciprocal of Strictly Negative Real Number is Strictly Negative }} {{eqn | ll= \leadsto | l = \csc x | o = < | r = 0 | c = if there exists an integer $n$ such that $\paren {2 n + 1} \pi < x < \paren {2 n + 2} \pi$ | cc= Cosecant is Reciprocal of Sine }} {{end-eqn}} {{qed}} \end{proof}
21335
\section{Sign of Cosine} Tags: Cosine Function \begin{theorem} Let $x$ be a real number. Then: {{begin-eqn}} {{eqn | l = \cos x | o = > | r = 0 | c = if there exists an integer $n$ such that $\paren {2 n - \dfrac 1 2} \pi < x < \paren {2 n + \dfrac 1 2} \pi$ }} {{eqn | l = \cos x | o = < | r = 0 | c = if there exists an integer $n$ such that $\paren {2 n + \dfrac 1 2} \pi < x < \paren {2 n + \dfrac 3 2} \pi$ }} {{end-eqn}} where $\cos$ is the real cosine function. \end{theorem} \begin{proof} Proof by induction: \end{proof}
21336
\section{Sign of Cotangent} Tags: Cotangent Function, Tangent Function \begin{theorem} Let $x$ be a real number. Then: {{begin-eqn}} {{eqn | l = \cot x | o = > | r = 0 | c = if there exists an integer $n$ such that $n \pi < x < \paren {n + \dfrac 1 2} \pi$ }} {{eqn | l = \cot x | o = < | r = 0 | c = if there exists an integer $n$ such that $\paren {n + \dfrac 1 2} \pi < x < \paren {n + 1} \pi$ }} {{end-eqn}} where $\cot$ is the real cotangent function. \end{theorem} \begin{proof} For the first part: {{begin-eqn}} {{eqn | l = \tan x | o = > | r = 0 | c = if there exists an integer $n$ such that $n \pi < x < \paren {n + \dfrac 1 2} \pi$ | cc= Sign of Tangent }} {{eqn | ll= \leadsto | l = \frac 1 {\tan x} | o = > | r = 0 | c = if there exists an integer $n$ such that $n \pi < x < \paren {n + \dfrac 1 2} \pi$ | cc= Reciprocal of Strictly Positive Real Number is Strictly Positive }} {{eqn | ll= \leadsto | l = \cot x | o = > | r = 0 | c = if there exists an integer $n$ such that $n \pi < x < \paren {n + \dfrac 1 2} \pi$ | cc= Cotangent is Reciprocal of Tangent }} {{end-eqn}} For the second part: {{begin-eqn}} {{eqn | l = \tan x | o = < | r = 0 | c = if there exists an integer $n$ such that $\paren {n + \dfrac 1 2} \pi < x < \paren {n + 1} \pi$ | cc= Sign of Tangent }} {{eqn | ll= \leadsto | l = \frac 1 {\tan x} | o = < | r = 0 | c = if there exists an integer $n$ such that $\paren {n + \dfrac 1 2} \pi < x < \paren {n + 1} \pi$ | cc= Reciprocal of Strictly Negative Real Number is Strictly Negative }} {{eqn | ll= \leadsto | l = \cot x | o = < | r = 0 | c = if there exists an integer $n$ such that $\paren {n + \dfrac 1 2} \pi < x < \paren {n + 1} \pi$ | cc= Cotangent is Reciprocal of Tangent }} {{end-eqn}} {{qed}} \end{proof}
21337
\section{Sign of Function Matches Sign of Definite Integral} Tags: Integral Calculus \begin{theorem} Let $f$ be a real function continuous on some closed interval $\closedint a b$, where $a < b$. Then: :If $\forall x \in \closedint a b: \map f x \ge 0$ then $\ds \int_a^b \map f x \rd x \ge 0$ :If $\forall x \in \closedint a b: \map f x > 0$ then $\ds \int_a^b \map f x \rd x > 0$ :If $\forall x \in \closedint a b: \map f x \le 0$ then $\ds \int_a^b \map f x \rd x \le 0$ :If $\forall x \in \closedint a b: \map f x < 0$ then $\ds \int_a^b \map f x \rd x < 0$ \end{theorem} \begin{proof} From Continuous Real Function is Darboux Integrable, the definite integrals under discussion are guaranteed to exist. Consider the case where $\forall x \in \closedint a b: \map f x \ge 0$. Define a constant mapping: :$f_0: \closedint a b \to \R$: :$\map {f_0} x = 0$ Then: {{begin-eqn}} {{eqn | l = \map {f_0} x | o = \le | r = \map f x | c = for any $x \in \closedint a b$: recall $\map f x \ge 0$ }} {{eqn | ll= \leadsto | l = \int_a^b \map {f_0} x \rd x | o = \le | r = \int_a^b \map f x \rd x | c = Relative Sizes of Definite Integrals }} {{eqn | ll= \leadsto | l = 0 \paren {b - a} | o = \le | r = \int_a^b \map f x \rd x | c = Integral of Constant }} {{eqn | ll= \leadsto | l = \int_a^b \map f x \rd x | o = \ge | r = 0 }} {{end-eqn}} The proofs of the other cases are similar. {{qed}} Category:Integral Calculus \end{proof}
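As an informal numerical illustration (not part of the proof), a Riemann sum approximation of the definite integral of a strictly positive continuous function is itself strictly positive. The function and interval in the following Python sketch are arbitrary choices.

```python
import math

def riemann_sum(f, a, b, n=10000):
    """Left-endpoint Riemann sum approximating the definite integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

# f(x) = 2 + sin(x) > 0 on [0, 5], so the integral should be strictly positive.
f = lambda x: 2 + math.sin(x)
approx = riemann_sum(f, 0, 5)
assert approx > 0
print(approx)   # compare with the exact value 10 + 1 - cos(5)
```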
21338
\section{Sign of Half-Plane is Well-Defined} Tags: Half-Planes \begin{theorem} Let $\LL$ be a straight line embedded in a cartesian plane $\CC$, given by the equation: :$l x + m y + n = 0$ Let $\HH_1$ and $\HH_2$ be the half-planes into which $\LL$ divides $\CC$. Let the sign of a point $P = \tuple {x_1, y_1}$ in $\CC$ be defined as the sign of the expression $l x_1 + m y_1 + n$. Then the sign of $\HH_1$ and $\HH_2$ is well-defined in the sense that: :all points in one half-plane $\HH \in \set {\HH_1, \HH_2}$ have the same sign :all points in $\HH_1$ are of the opposite sign from the points in $\HH_2$ :all points on $\LL$ itself have sign $0$. \end{theorem} \begin{proof} By definition of $\LL$, if $P$ is on $\LL$ then $l x_1 + m y_1 + n = 0$. Similarly, if $P$ is not on $\LL$ then $l x_1 + m y_1 + n \ne 0$. Let $P = \tuple {x_1, y_1}$ and $Q = \tuple {x_2, y_2}$ be two points not on $\LL$ such that the line $PQ$ intersects $\LL$ at $R = \tuple {x, y}$. Let $PR : RQ = k$. Then from Joachimsthal's Section-Formulae: {{begin-eqn}} {{eqn | l = x | r = \dfrac {k x_2 + x_1} {k + 1} | c = }} {{eqn | l = y | r = \dfrac {k y_2 + y_1} {k + 1} | c = }} {{eqn | ll= \leadsto | l = l \paren {k x_2 + x_1} + m \paren {k y_2 + y_1} + n \paren {k + 1} | r = 0 | c = as these values satisfy the equation of $\LL$ }} {{eqn | ll= \leadsto | l = k | r = -\dfrac {l x_1 + m y_1 + n} {l x_2 + m y_2 + n} | c = }} {{eqn | ll= \leadsto | l = k | r = -\dfrac {u_1} {u_2} | c = where $u_1 = l x_1 + m y_1 + n$ and $u_2 = l x_2 + m y_2 + n$ }} {{end-eqn}} If $u_1$ and $u_2$ have the same sign, then $k$ is negative. By definition of the position-ratio of $R$, it then follows that $R$ is not on the line segment $PQ$. Hence $P$ and $Q$ are in the same one of the half-planes defined by $\LL$. Similarly, if $u_1$ and $u_2$ have opposite signs, then $k$ is positive. Again by definition of the position-ratio of $R$, it then follows that $R$ is on the line segment $PQ$. That is, $\LL$ intersects the line segment $PQ$. That is, $P$ and $Q$ are on opposite sides of $\LL$. Hence $P$ and $Q$ are in opposite half-planes. {{qed}} \end{proof}
21339
\section{Sign of Haversine} Tags: Haversines \begin{theorem} The haversine is non-negative for all $\theta \in \R$. \end{theorem} \begin{proof} The haversine is conventionally defined on the real numbers only. We have that: :$\forall \theta \in \R: -1 \le \cos \theta \le 1$ and so: :$\forall \theta \in \R: 0 \le 1 - \cos \theta \le 2$ from which the result follows by definition of haversine. {{qed}} \end{proof}
21340
\section{Sign of Odd Power} Tags: Algebra, Real Analysis \begin{theorem} Let $x \in \R$ be a real number. Let $n \in \Z$ be an odd integer. Then: :$x^n = 0 \iff x = 0$ :$x^n > 0 \iff x > 0$ :$x^n < 0 \iff x < 0$ That is, the sign of an odd power matches the number it is a power of. \end{theorem} \begin{proof} If $n$ is an odd integer, then $n = 2 k + 1$ for some $k \in \Z$. Thus $x^n = x \cdot x^{2 k}$. But $x^{2 k} \ge 0$ from Even Power is Non-Negative, and $x^{2 k} > 0$ whenever $x \ne 0$. Hence $x^n$ has the same sign as $x$, and $x^n = 0$ precisely when $x = 0$. {{qed}} \end{proof}
21341
\section{Sign of Permutation is Plus or Minus Unity} Tags: Sign of Permutation, Symmetric Group, Symmetric Groups \begin{theorem} Let $n \in \N$ be a natural number. Let $\N_n$ denote the set of natural numbers $\set {1, 2, \ldots, n}$. Let $S_n$ denote the symmetric group on $n$ letters. Let $\sequence {x_k}_{k \mathop \in \N_n}$ be a finite sequence in $\R$. Let $\pi \in S_n$. Let $\map {\Delta_n} {x_1, x_2, \ldots, x_n}$ be the product of differences of $\tuple {x_1, x_2, \ldots, x_n}$. Let $\map \sgn \pi$ be the sign of $\pi$. Let $\pi \cdot \map {\Delta_n} {x_1, x_2, \ldots, x_n}$ be defined as: :$\pi \cdot \map {\Delta_n} {x_1, x_2, \ldots, x_n} := \map {\Delta_n} {x_{\map \pi 1}, x_{\map \pi 2}, \ldots, x_{\map \pi n} }$ Then either: :$\pi \cdot \Delta_n = \Delta_n$ or: :$\pi \cdot \Delta_n = -\Delta_n$ That is: :$\map \sgn \pi = \begin{cases} 1 & :\pi \cdot \Delta_n = \Delta_n \\ -1 & : \pi \cdot \Delta_n = -\Delta_n \end{cases}$ Thus: :$\pi \cdot \Delta_n = \map \sgn \pi \Delta_n$ \end{theorem} \begin{proof} If $\exists i, j \in \N_n$ such that $x_i = x_j$, then $\map {\Delta_n} {x_1, x_2, \ldots, x_n} = 0$ and the result follows trivially. So, suppose all the elements $x_k$ are distinct. Let us use $\Delta_n$ to denote $\map {\Delta_n} {x_1, x_2, \ldots, x_n}$. Let $1 \le a < b \le n$. Then $x_a - x_b$ is a divisor of $\Delta_n$. Then $x_{\map \pi a} - x_{\map \pi b}$ is a factor of $\pi \cdot \Delta_n$. There are two possibilities for the ordering of $\map \pi a$ and $\map \pi b$: Either $\map \pi a < \map \pi b$ or $\map \pi a > \map \pi b$. If the former, then $x_{\map \pi a} - x_{\map \pi b}$ is a factor of $\Delta_n$. If the latter, then $-\paren {x_{\map \pi a} - x_{\map \pi b} }$ is a factor of $\Delta_n$. The same applies to all factors of $\Delta_n$. Thus: {{begin-eqn}} {{eqn | l = \pi \cdot \Delta_n | r = \pi \cdot \prod_{1 \mathop \le i \mathop < j \mathop \le n} \paren {x_i - x_j} | c = }} {{eqn | r = \pm \prod_{1 \mathop \le i \mathop < j \mathop \le n} \paren {x_i - x_j} | c = }} {{eqn | r = \pm \Delta_n | c = }} {{end-eqn}} {{Qed}} \end{proof}
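The statement can be checked numerically: permuting the arguments of the product of differences changes at most its sign. The following Python sketch (illustrative only) confirms this for a particular $4$-tuple of distinct values over all of $S_4$.

```python
from itertools import permutations

def product_of_differences(xs):
    """Delta_n(x_1, ..., x_n) = product over i < j of (x_i - x_j)."""
    n = len(xs)
    result = 1
    for i in range(n):
        for j in range(i + 1, n):
            result *= xs[i] - xs[j]
    return result

xs = [3, 1, 4, 7]                       # distinct values, so Delta_n != 0
delta = product_of_differences(xs)

for pi in permutations(range(len(xs))):
    permuted = product_of_differences([xs[pi[k]] for k in range(len(xs))])
    assert permuted == delta or permuted == -delta
print("pi . Delta_n = +/- Delta_n for every pi in S_4")
```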
21342
\section{Sign of Permutation on n Letters is Well-Defined} Tags: Permutation Theory, Sign of Permutation \begin{theorem} Let $n \in \N$ be a natural number. Let $S_n$ denote the symmetric group on $n$ letters. Let $\rho \in S_n$ be a permutation in $S_n$. Let $\map \sgn \rho$ denote the sign of $\rho$. Then $\map \sgn \rho$ is well-defined, in that it is either $1$ or $-1$. \end{theorem} \begin{proof} What is needed to be proved is that for any permutation $\rho \in S_n$, $\rho$ cannot be expressed as the composite of both an even number and an odd number of transpositions. Consider the permutation formed by composing $\rho$ with an arbitrary transposition $\tau$. Let $\rho$ be expressed as the composite of disjoint cycles whose lengths are all greater than $1$. By Disjoint Permutations Commute, the order in which the various cycles of $\rho$ are composed does not matter. Let $\tau = \begin {bmatrix} a & b \end {bmatrix}$ for some $a, b \in \set {1, 2, \ldots, n}$ where $a \ne b$. There are three cases: $(1): \quad$ Neither $a$ nor $b$ appear in the expression for $\rho$. That is, $\tau$ and $\rho$ are disjoint. Then $\rho \circ \tau$ can be expressed as the same set of disjoint cycles as $\rho$, but with an extra cycle $\begin {bmatrix} a & b \end {bmatrix}$ appended. $(2): \quad$ Just one of $a$ and $b$ occurs in the expression for $\rho$. {{WLOG}}, let $a$ appear in the expression for $\rho$. Let $a$ appear in the cycle $\rho_0$. Then: {{begin-eqn}} {{eqn | l = \rho_0 \circ \tau | r = \begin {bmatrix} a & b_1 & b_2 & \cdots & b_m \end {bmatrix} \circ \begin {bmatrix} a & b \end {bmatrix} | c = }} {{eqn | r = \begin {bmatrix} a & b_1 & b_2 & \cdots & b_m & b \end {bmatrix} | c = }} {{end-eqn}} Thus composing $\rho$ with $\tau$ results in adding an extra element to one cycle and leaving the others as they are. $(3): \quad$ Both $a$ and $b$ occur in the expression for $\rho$. If $a$ and $b$ both occur in the same cycle $\rho_0$, the operation of composition goes like this: {{begin-eqn}} {{eqn | l = \rho_0 \circ \tau | r = \begin {bmatrix} a & b_1 & b_2 & \cdots & b_m & b & c_1 & c_2 & \cdots & c_k \end {bmatrix} \circ \begin {bmatrix} a & b \end {bmatrix} | c = }} {{eqn | r = \begin {bmatrix} a & b_1 & b_2 & \cdots & b_m \end {bmatrix} \circ \begin {bmatrix} b & c_1 & c_2 & \cdots & c_k \end {bmatrix} | c = }} {{end-eqn}} If $a$ and $b$ appear in different cycles $\rho_1$ and $\rho_2$, we have: {{begin-eqn}} {{eqn | l = \rho_1 \circ \rho_2 \circ \tau | r = \begin {bmatrix} a & b_1 & b_2 & \cdots & b_m \end {bmatrix} \circ \begin {bmatrix} b & c_1 & c_2 & \cdots & c_k \end {bmatrix} \circ \begin {bmatrix} a & b \end {bmatrix} | c = }} {{eqn | r = \begin {bmatrix} a & b_1 & b_2 & \cdots & b_m & b & c_1 & c_2 & \cdots & c_k \end {bmatrix} | c = }} {{end-eqn}} Thus in case $(3)$, composition with $\tau$ results in the number of cycles either increasing or decreasing by $1$, while the total number of elements in those cycles stays the same. For all $\rho \in S_n$, let $\rho$ be expressed in cycle notation as a composite of $n$ cycles containing $m_1, m_2, \ldots, m_n$ elements respectively, where each $m_i \ge 2$. Let the mapping $P: S_n \to \set {1, -1}$ be defined as follows: :$\forall \rho \in S_n: \map P \rho = \paren {-1}^{m_1 - 1} \paren {-1}^{m_2 - 1} \cdots \paren {-1}^{m_n - 1}$ where $\map P {I_{S_n} } = 1$. From the above, it can be seen that $\map P {\rho \circ \tau} = -\map P \rho$. Let $\rho$ be expressible as the composite of $r$ transpositions.
By an inductive proof it can be shown that $\map P \rho = \paren {-1}^r$. But $\map P \rho$ is independent of the actual transpositions that are used to build $\rho$. Thus $\map P \rho = 1$ for one such expression {{iff}} $\map P \rho = 1$ for all such expressions. That is, $\rho$ cannot have an expression in cycle notation as the composite of an even number of transpositions and at the same time have an expression in cycle notation as the composite of an odd number of transpositions. Hence the result. {{qed}} \end{proof}
21343
\section{Sign of Quadratic Function Between Roots} Tags: Quadratic Functions \begin{theorem} Let $a \in \R_{>0}$ be a (strictly) positive real number. Let $\alpha$ and $\beta$, where $\alpha < \beta$, be the roots of the quadratic function: :$\map Q x = a x^2 + b x + c$ whose discriminant $b^2 - 4 a c$ is (strictly) positive. Then: :$\begin {cases} \map Q x < 0 & : \text {when $\alpha < x < \beta$} \\ \map Q x > 0 & : \text {when $x < \alpha$ or $x > \beta$} \end {cases}$ \end{theorem} \begin{proof} Because $b^2 - 4 a c > 0$, we have from Solution to Quadratic Equation with Real Coefficients that the roots of $\map Q x$ are real and unequal. This demonstrates the existence of $\alpha$ and $\beta$, where by hypothesis we state that $\alpha < \beta$. We can express $\map Q x$ as: :$\map Q x = a \paren {x - \alpha} \paren {x - \beta}$ When $\alpha < x < \beta$ we have that: :$x - \alpha > 0$ :$x - \beta < 0$ and so: :$\map Q x = a \paren {x - \alpha} \paren {x - \beta} < 0$ {{qed|lemma}} When $x < \alpha$ we have that: :$x - \alpha < 0$ :$x - \beta < 0$ and so: :$\map Q x = a \paren {x - \alpha} \paren {x - \beta} > 0$ {{qed|lemma}} When $x > \beta$ we have that: :$x - \alpha > 0$ :$x - \beta > 0$ and so: :$\map Q x = a \paren {x - \alpha} \paren {x - \beta} > 0$ {{qed|lemma}} Hence the result. {{qed}} \end{proof}
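A quick numerical illustration (not part of the proof): take any quadratic with $a > 0$ and positive discriminant and sample $\map Q x$ between and outside the roots. The coefficients in the following Python sketch are arbitrary.

```python
import math

# Q(x) = a x^2 + b x + c with a > 0 and positive discriminant (values chosen arbitrarily).
a, b, c = 2.0, -3.0, -5.0
disc = b * b - 4 * a * c
assert a > 0 and disc > 0

alpha = (-b - math.sqrt(disc)) / (2 * a)
beta = (-b + math.sqrt(disc)) / (2 * a)      # alpha < beta

Q = lambda x: a * x * x + b * x + c

assert Q((alpha + beta) / 2) < 0             # strictly between the roots: negative
assert Q(alpha - 1) > 0 and Q(beta + 1) > 0  # outside the roots: positive
print(alpha, beta)
```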
21344
\section{Sign of Quotient of Factors of Difference of Squares} Tags: Signum Function, Real Analysis \begin{theorem} Let $a, b \in \R$ such that $a \ne b$. Then :$\map \sgn {a^2 - b^2} = \map \sgn {\dfrac {a + b} {a - b} } = \map \sgn {\dfrac {a - b} {a + b} }$ where $\sgn$ denotes the signum of a real number. \end{theorem} \begin{proof} {{begin-eqn}} {{eqn | l = \map \sgn {\frac {a - b} {a + b} } | r = \map \sgn {a - b} \frac 1 {\map \sgn {a + b} } | c = Signum Function is Completely Multiplicative }} {{eqn | r = \map \sgn {a - b} \map \sgn {a + b} | c = Signum Function of Reciprocal }} {{eqn | r = \map \sgn {\paren {a - b} \paren {a + b} } | c = Signum Function is Completely Multiplicative }} {{eqn | r = \map \sgn {a^2 - b^2} | c = Difference of Two Squares }} {{eqn | r = \map \sgn {\paren {a + b} \paren {a - b} } | c = Difference of Two Squares }} {{eqn | r = \map \sgn {a + b} \map \sgn {a - b} | c = Signum Function is Completely Multiplicative }} {{eqn | r = \map \sgn {a + b} \frac 1 {\map \sgn {a - b} } | c = Signum Function of Reciprocal }} {{eqn | r = \map \sgn {\frac {a + b} {a - b} } | c = Signum Function is Completely Multiplicative }} {{end-eqn}} {{qed}} Category:Real Analysis Category:Signum Function \end{proof}
21345
\section{Sign of Quotient of Factors of Difference of Squares/Corollary} Tags: Signum Function, Real Analysis \begin{theorem} Let $a, b \in \R$ such that $a \ne b$. Then :$-\operatorname{sgn} \left({\dfrac {b - a} {b + a} }\right) = \operatorname{sgn} \left({a^2 - b^2}\right) = -\operatorname{sgn} \left({\dfrac {b + a} {b - a} }\right)$ where $\operatorname{sgn}$ denotes the signum of a real number. \end{theorem} \begin{proof} {{begin-eqn}} {{eqn | l = \map \sgn {\frac {b - a} {b + a} } | r = \map \sgn {\paren {-1} \frac {a - b} {a + b} } | c = }} {{eqn | r = \map \sgn {-1} \map \sgn {\frac {a - b} {a + b} } | c = Signum Function is Completely Multiplicative }} {{eqn | r = \paren {-1} \map \sgn {\frac {a - b} {a + b} } | c = {{Defof|Signum Function}} }} {{eqn | r = -\map \sgn {a^2 - b^2} | c = Sign of Quotient of Factors of Difference of Squares }} {{eqn | r = \paren {-1} \map \sgn {\frac {a + b} {a - b} } | c = Sign of Quotient of Factors of Difference of Squares }} {{eqn | r = \map \sgn {-1} \map \sgn {\frac {a + b} {a - b} } | c = {{Defof|Signum Function}} }} {{eqn | r = \map \sgn {\paren {-1} \frac {a + b} {a - b} } | c = Signum Function is Completely Multiplicative }} {{eqn | r = \map \sgn {\frac {b + a} {b - a} } | c = }} {{end-eqn}} {{qed}} Category:Real Analysis Category:Signum Function \end{proof}
21346
\section{Sign of Secant} Tags: Secant Function \begin{theorem} Let $x$ be a real number. {{begin-eqn}} {{eqn | l = \sec x | o = > | r = 0 | c = if there exists an integer $n$ such that $\paren {2 n - \dfrac 1 2} \pi < x < \paren {2 n + \dfrac 1 2} \pi$ }} {{eqn | l = \sec x | o = < | r = 0 | c = if there exists an integer $n$ such that $\paren {2 n + \dfrac 1 2} \pi < x < \paren {2 n + \dfrac 3 2} \pi$ }} {{end-eqn}} where $\sec$ is the real secant function. \end{theorem} \begin{proof} For the first part: {{begin-eqn}} {{eqn | l = \cos x | o = > | r = 0 | c = if there exists an integer $n$ such that $\paren {2 n - \dfrac 1 2} \pi < x < \paren {2 n + \dfrac 1 2} \pi$ | cc= Sign of Cosine }} {{eqn | ll= \leadsto | l = \frac 1 {\cos x} | o = > | r = 0 | c = if there exists an integer $n$ such that $\paren {2 n - \dfrac 1 2} \pi < x < \paren {2 n + \dfrac 1 2} \pi$ | cc= Reciprocal of Strictly Positive Real Number is Strictly Positive }} {{eqn | ll= \leadsto | l = \sec x | o = > | r = 0 | c = if there exists an integer $n$ such that $\paren {2 n - \dfrac 1 2} \pi < x < \paren {2 n + \dfrac 1 2} \pi$ | cc= Secant is Reciprocal of Cosine }} {{end-eqn}} For the second part: {{begin-eqn}} {{eqn | l = \cos x | o = < | r = 0 | c = if there exists an integer $n$ such that $\paren {2 n + \dfrac 1 2} \pi < x < \paren {2 n + \dfrac 3 2} \pi$ | cc= Sign of Cosine }} {{eqn | ll= \leadsto | l = \frac 1 {\cos x} | o = < | r = 0 | c = if there exists an integer $n$ such that $\paren {2 n + \dfrac 1 2} \pi < x < \paren {2 n + \dfrac 3 2} \pi$ | cc= Reciprocal of Strictly Negative Real Number is Strictly Negative }} {{eqn | ll= \leadsto | l = \sec x | o = < | r = 0 | c = if there exists an integer $n$ such that $\paren {2 n + \dfrac 1 2} \pi < x < \paren {2 n + \dfrac 3 2} \pi$ | cc= Secant is Reciprocal of Cosine }} {{end-eqn}} {{qed}} \end{proof}
21347
\section{Sign of Sine} Tags: Sine Function \begin{theorem} Let $x$ be a real number. {{begin-eqn}} {{eqn | l = \sin x | o = > | r = 0 | c = if there exists an integer $n$ such that $2 n \pi < x < \paren {2 n + 1} \pi$ }} {{eqn | l = \sin x | o = < | r = 0 | c = if there exists an integer $n$ such that $\paren {2 n + 1} \pi < x < \paren {2 n + 2} \pi$ }} {{end-eqn}} where $\sin$ is the real sine function. \end{theorem} \begin{proof} First the case where $n \ge 0$ is addressed. The proof proceeds by induction. For all $n \in \Z_{\ge 0}$, let $\map P n$ be the proposition: :$\forall x \in \R:$ ::$2 n \pi < x < \paren {2 n + 1} \pi \implies \sin x > 0$ ::$\paren {2 n + 1} \pi < x < \paren {2 n + 2} \pi \implies \sin x < 0$ \end{proof}
21348
\section{Sign of Tangent} Tags: Tangent Function \begin{theorem} Let $x$ be a real number. Then: {{begin-eqn}} {{eqn | l = \tan x | o = > | r = 0 | c = if there exists an integer $n$ such that $n \pi < x < \paren {n + \dfrac 1 2} \pi$ }} {{eqn | l = \tan x | o = < | r = 0 | c = if there exists an integer $n$ such that $\paren {n + \dfrac 1 2} \pi < x < \paren {n + 1} \pi$ }} {{end-eqn}} where $\tan$ denotes the tangent function. \end{theorem} \begin{proof} From Tangent is Sine divided by Cosine: :$\tan x = \dfrac {\sin x} {\cos x}$ Since $n$ is an integer, $n$ is either odd or even. \end{proof}
21349
\section{Signed Measure may not be Monotone} Tags: Signed Measures \begin{theorem} Let $\struct {X, \Sigma}$ be a measurable space. Let $\mu$ be a signed measure on $\struct {X, \Sigma}$. Then $\mu$ may not be monotone. \end{theorem} \begin{proof} Let: :$\struct {X, \Sigma} = \struct {\R, \map \BB \R}$ where $\map \BB \R$ is the Borel $\sigma$-algebra on $\R$. Define: :$\mu = \delta_1 - 2 \delta_2$ where $\delta_1$ and $\delta_2$ are the Dirac measures at $1$ and $2$ respectively. Since $\delta_1$ and $\delta_2$ are both finite measures, we have: :$\mu$ is a signed measure from Linear Combination of Signed Measures is Signed Measure. Then, we have: :$\closedint 0 1 \subseteq \closedint 0 2$ with: {{begin-eqn}} {{eqn | l = \map \mu {\closedint 0 1} | r = \map {\delta_1} {\closedint 0 1} - 2 \map {\delta_2} {\closedint 0 1} }} {{eqn | r = 1 - 0 | c = {{Defof|Dirac Measure}} }} {{eqn | r = 1 }} {{end-eqn}} and: {{begin-eqn}} {{eqn | l = \map \mu {\closedint 0 2} | r = \map {\delta_1} {\closedint 0 2} - 2 \map {\delta_2} {\closedint 0 2} }} {{eqn | r = 1 - 2 | c = {{Defof|Dirac Measure}} }} {{eqn | r = -1 }} {{end-eqn}} So: :$\closedint 0 1 \subseteq \closedint 0 2$ but $\map \mu {\closedint 0 2} < \map \mu {\closedint 0 1}$ So: :$\mu$ is not monotone. {{qed}} Category:Signed Measures \end{proof}
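The counterexample amounts to evaluating $\delta_1 - 2 \delta_2$ on the two intervals, which can be mirrored in a few lines of Python. The sketch below is purely illustrative: the Dirac measures are modelled simply as indicator evaluations at the given points.

```python
def dirac(point):
    """Dirac measure at the given point, evaluated on a set given as a predicate."""
    return lambda A: 1 if A(point) else 0

delta1, delta2 = dirac(1), dirac(2)
mu = lambda A: delta1(A) - 2 * delta2(A)     # the signed measure delta_1 - 2 delta_2

interval_01 = lambda x: 0 <= x <= 1          # [0, 1]
interval_02 = lambda x: 0 <= x <= 2          # [0, 2], a superset of [0, 1]

assert mu(interval_01) == 1
assert mu(interval_02) == -1                 # smaller value on the larger set: not monotone
print(mu(interval_01), mu(interval_02))
```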
21350
\section{Signed Measure of Limit of Increasing Sequence of Measurable Sets} Tags: Signed Measures \begin{theorem} Let $\struct {X, \Sigma}$ be a measurable space. Let $\mu$ be a signed measure on $\struct {X, \Sigma}$. Let $E \in \Sigma$. Let $\sequence {E_n}_{n \mathop \in \N}$ be an increasing sequence of $\Sigma$-measurable sets such that: :$E_n \uparrow E$ where $E_n \uparrow E$ denotes the limit of an increasing sequence of sets. Then: :$\ds \map \mu E = \lim_{n \mathop \to \infty} \map \mu {E_n}$ \end{theorem} \begin{proof} Let $\tuple {\mu^+, \mu^-}$ be the Jordan decomposition of $\mu$. Then $\mu^+$ and $\mu^-$ are measures with: :$\mu = \mu^+ - \mu^-$ where at least one of $\mu^+$ and $\mu^-$ is finite. Then we have: :$\map \mu E = \map {\mu^+} E - \map {\mu^-} E$ From Measure of Limit of Increasing Sequence of Measurable Sets, we have: :$\ds \map {\mu^+} E = \lim_{n \mathop \to \infty} \map {\mu^+} {E_n}$ and: :$\ds \map {\mu^-} E = \lim_{n \mathop \to \infty} \map {\mu^-} {E_n}$ with at most one of these limits infinite. So: {{begin-eqn}} {{eqn | l = \map \mu E | r = \lim_{n \mathop \to \infty} \map {\mu^+} {E_n} - \lim_{n \mathop \to \infty} \map {\mu^-} {E_n} }} {{eqn | r = \lim_{n \mathop \to \infty} \paren {\map {\mu^+} {E_n} - \map {\mu^-} {E_n} } | c = Combination Theorem for Sequences: Real: Difference Rule }} {{eqn | r = \lim_{n \mathop \to \infty} \map \mu {E_n} }} {{end-eqn}} {{qed}} Category:Signed Measures \end{proof}
21351
\section{Signed Stirling Number of the First Kind of 0} Tags: Stirling Number, Stirling Numbers, Examples of Stirling Numbers of the First Kind \begin{theorem} :$\map s {0, n} = \delta_{0 n}$ where: :$\map s {0, n}$ denotes a signed Stirling number of the first kind :$\delta_{0 n}$ denotes the Kronecker delta. \end{theorem} \begin{proof} By definition of signed Stirling number of the first kind: $\ds x^{\underline 0} = \sum_k \map s {0, k} x^k$ Thus we have: {{begin-eqn}} {{eqn | l = x^{\underline 0} | r = 1 | c = Number to Power of Zero Falling is One }} {{eqn | r = x^0 | c = {{Defof|Integer Power}} }} {{end-eqn}} Thus, in the expression: :$\ds x^{\underline 0} = \sum_k \map s {0, k} x^k$ we have: :$\map s {0, 0} = 1$ and for all $k \in \Z_{>0}$: :$\map s {0, k} = 0$ That is: :$\map s {0, k} = \delta_{0 k}$ {{qed}} \end{proof}
21352
\section{Signed Stirling Number of the First Kind of Number with Greater} Tags: Stirling Numbers \begin{theorem} Let $n, k \in \Z_{\ge 0}$ Let $k > n$. Let $\map s {n, k}$ denote a signed Stirling number of the first kind. Then: :$\map s {n, k} = 0$ \end{theorem} \begin{proof} By definition, the signed Stirling numbers of the first kind are defined as the polynomial coefficients $\map s {n, k}$ which satisfy the equation: :$\ds x^{\underline n} = \sum_k \map s {n, k} x^k$ where $x^{\underline n}$ denotes the $n$th falling factorial of $x$. Both of the expressions on the {{LHS}} and {{RHS}} are polynomials in $x$ of degree $n$. Hence the coefficient $\map s {n, k}$ of $x^k$ where $k > n$ is $0$. {{qed}} \end{proof}
21353
\section{Signed Stirling Number of the First Kind of Number with Self} Tags: Stirling Numbers, Examples of Stirling Numbers of the First Kind \begin{theorem} :$\map s {n, n} = 1$ where $\map s {n, n}$ denotes a signed Stirling number of the first kind. \end{theorem} \begin{proof} From Relation between Signed and Unsigned Stirling Numbers of the First Kind: :$\ds {n \brack n} = \paren {-1}^{n + n} \map s {n, n}$ We have that: :$\paren {-1}^{n + n} = \paren {-1}^{2 n} = 1$ and so: :$\ds {n \brack n} = \map s {n, n}$ The result follows from Unsigned Stirling Number of the First Kind of Number with Self. {{qed}} \end{proof}
21354
\section{Signed Stirling Number of the First Kind of n+1 with 0} Tags: Stirling Numbers, Examples of Stirling Numbers of the First Kind \begin{theorem} Let $n \in \Z_{\ge 0}$. Then: :$\map s {n + 1, 0} = 0$ where $\map s {n + 1, 0}$ denotes a signed Stirling number of the first kind. \end{theorem} \begin{proof} By definition of signed Stirling number of the first kind: :$\map s {n, 0} = \delta_{n 0}$ where $\delta_{n 0}$ is the Kronecker delta. Thus: {{begin-eqn}} {{eqn | l = n | o = \ge | r = 0 | c = by hypothesis }} {{eqn | ll= \leadsto | l = n + 1 | o = > | r = 0 | c = }} {{eqn | ll= \leadsto | l = n + 1 | o = \ne | r = 0 | c = }} {{eqn | ll= \leadsto | l = \delta_{\paren {n + 1} 0} | r = 0 | c = }} {{end-eqn}} Hence the result. {{qed}} \end{proof}
21355
\section{Signed Stirling Number of the First Kind of n+1 with 1} Tags: Stirling Numbers, Examples of Stirling Numbers of the First Kind \begin{theorem} Let $n \in \Z_{\ge 0}$. Then: :$\map s {n + 1, 1} = \paren {-1}^n n!$ where $\map s {n + 1, 1}$ denotes a signed Stirling number of the first kind. \end{theorem} \begin{proof} By Relation between Signed and Unsigned Stirling Numbers of the First Kind: :$\ds {n + 1 \brack 1} = \paren {-1}^{n + 1 + 1} \map s {n + 1, 1}$ where $\ds {n + 1 \brack 1}$ denotes an unsigned Stirling number of the first kind. We have that: :$\paren {-1}^{n + 1 + 1} = \paren {-1}^n$ and so: :$\ds {n + 1 \brack 1} = \paren {-1}^n \map s {n + 1, 1}$ The result follows from Unsigned Stirling Number of the First Kind of n+1 with 1: :$\ds {n + 1 \brack 1} = n!$ {{qed}} \end{proof}
21356
\section{Signed Stirling Number of the First Kind of n with n-1} Tags: Stirling Numbers, Examples of Stirling Numbers of the First Kind \begin{theorem} Let $n \in \Z_{> 0}$ be an integer greater than $0$. Then: :$\map s {n, n - 1} = -\dbinom n 2$ where: :$\map s {n, k}$ denotes a signed Stirling number of the first kind :$\dbinom n 2$ denotes a binomial coefficient. \end{theorem} \begin{proof} From Relation between Signed and Unsigned Stirling Numbers of the First Kind: :$\ds {n \brack n - 1} = \paren {-1}^{n + n - 1} \map s {n, n - 1}$ where $\ds {n \brack n - 1}$ denotes an unsigned Stirling number of the first kind. We have that: :$\paren {-1}^{n + n - 1} = \paren {-1}^{2 n - 1} = -1$ and so: :$\ds {n \brack n - 1} = -\map s {n, n - 1}$ The result follows from Unsigned Stirling Number of the First Kind of n with n-1: :$\ds {n \brack n - 1} = \dbinom n 2$ {{qed}} \end{proof}
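The closed forms in this and the preceding results can be checked by reading off the coefficients of the falling factorial directly. The following Python sketch (illustrative only) expands $x^{\underline n}$ as a coefficient list and compares $\map s {n, k}$ with the identities above for small $n$.

```python
from math import comb, factorial

def signed_stirling_first(n):
    """Coefficients s(n, 0), ..., s(n, n) of x^k in the falling factorial x(x-1)...(x-n+1)."""
    coeffs = [1]                               # the empty product: x to the 0 falling is 1
    for i in range(n):
        # multiply the current polynomial by (x - i)
        new = [0] * (len(coeffs) + 1)
        for k, c in enumerate(coeffs):
            new[k] -= i * c                    # contribution of the constant term -i
            new[k + 1] += c                    # contribution of the x term
        coeffs = new
    return coeffs

for n in range(1, 8):
    s = signed_stirling_first(n)
    assert s[n] == 1                                     # s(n, n) = 1
    assert s[0] == 0                                     # s(n, 0) = 0 for n >= 1
    assert s[1] == (-1) ** (n - 1) * factorial(n - 1)    # s(n, 1) = (-1)^(n-1) (n-1)!
    assert s[n - 1] == -comb(n, 2)                       # s(n, n-1) = -C(n, 2)
print("signed Stirling number identities verified for n = 1..7")
```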
21357
\section{Signum Function is Completely Multiplicative} Tags: Signum Function, Completely Multiplicative Functions, Real Analysis \begin{theorem} The signum function on the set of real numbers is a completely multiplicative function: :$\forall x, y \in \R: \map \sgn {x y} = \map \sgn x \map \sgn y$ \end{theorem} \begin{proof} Let $x = 0$ or $y = 0$. Then: {{begin-eqn}} {{eqn | l = x y | r = 0 | c = }} {{eqn | n = 1 | ll= \leadsto | l = \map \sgn {x y} | r = 0 | c = }} {{end-eqn}} and either $\map \sgn x = 0$ or $\map \sgn y = 0$ and so: {{begin-eqn}} {{eqn | l = \map \sgn x \map \sgn y | r = 0 | c = }} {{eqn | r = \map \sgn {x y} | c = from $(1)$ above }} {{end-eqn}} {{qed|lemma}} Let $x > 0$ and $y > 0$. Then: {{begin-eqn}} {{eqn | l = \map \sgn x | r = 1 | c = }} {{eqn | lo= \land | l = \map \sgn y | r = 1 | c = }} {{eqn | ll= \leadsto | l = \map \sgn x \map \sgn y | r = 1 | c = }} {{end-eqn}} and: {{begin-eqn}} {{eqn | l = x y | o = > | r = 0 | c = }} {{eqn | l = \map \sgn {x y} | r = 1 | c ={{Defof|Signum Function}} }} {{eqn | r = \map \sgn x \map \sgn y | c = }} {{end-eqn}} {{qed|lemma}} Let $x < 0$ and $y < 0$. Then: {{begin-eqn}} {{eqn | l = \map \sgn x | r = -1 | c = }} {{eqn | lo= \land | l = \map \sgn y | r = -1 | c = }} {{eqn | ll= \leadsto | l = \map \sgn x \map \sgn y | r = 1 | c = }} {{end-eqn}} and: {{begin-eqn}} {{eqn | l = x y | o = > | r = 0 | c = }} {{eqn | l = \map \sgn {x y} | r = 1 | c = {{Defof|Signum Function}} }} {{eqn | r = \map \sgn x \map \sgn y | c = }} {{end-eqn}} {{qed|lemma}} Let $x < 0$ and $y > 0$. Then: {{begin-eqn}} {{eqn | l = \map \sgn x | r = -1 | c = }} {{eqn | lo= \land | l = \map \sgn y | r = 1 | c = }} {{eqn | ll= \leadsto | l = \map \sgn x \map \sgn y | r = -1 | c = }} {{end-eqn}} and: {{begin-eqn}} {{eqn | l = x y | o = < | r = 0 | c = }} {{eqn | l = \map \sgn {x y} | r = -1 | c = {{Defof|Signum Function}} }} {{eqn | r = \map \sgn x \map \sgn y | c = }} {{end-eqn}} {{qed|lemma}} The same argument, mutatis mutandis, covers the case where $x > 0$ and $y < 0$. {{qed}} Category:Signum Function Category:Completely Multiplicative Functions \end{proof}
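A brute-force check of the identity over a small grid of real values (illustrative only, not a substitute for the case analysis above):

```python
def sgn(x):
    """Signum function on the reals."""
    return (x > 0) - (x < 0)

values = [-3.5, -1, 0, 0.5, 2, 7]
for x in values:
    for y in values:
        assert sgn(x * y) == sgn(x) * sgn(y)
print("sgn(xy) = sgn(x) sgn(y) on the sample grid")
```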
21358
\section{Signum Function is Primitive Recursive} Tags: Primitive Recursive Functions \begin{theorem} Let $\operatorname{sgn}: \N \to \N$ be defined as the signum function. Then: : $\operatorname{sgn}$ is primitive recursive. : $\overline {\operatorname{sgn}}$ is primitive recursive. \end{theorem} \begin{proof} We have that the characteristic function $\chi_{\N^*}$ of $\N^*$, where $\N^* = \N \setminus \left\{{0}\right\}$, is primitive recursive. We also have by definition that $\operatorname{sgn} \left({n}\right) = \chi_{\N^*} \left({n}\right)$. Thus $\operatorname{sgn}$ is primitive recursive. Now $\N - \N^* = \left\{{0}\right\}$ from Relative Complement of Relative Complement. We also have by definition that $\overline {\operatorname{sgn}} \left({n}\right) = \chi_{\left\{{0}\right\}} \left({n}\right)$. Thus $\overline {\operatorname{sgn}}$ is primitive recursive from Complement of Primitive Recursive Set. {{qed}} Category:Primitive Recursive Functions \end{proof}
21359
\section{Signum Function is Quotient of Number with Absolute Value} Tags: Signum Function, Real Analysis, Absolute Value Function \begin{theorem} Let $x \in \R_{\ne 0}$ be a non-zero real number. Then: :$\map \sgn x = \dfrac x {\size x} = \dfrac {\size x} x$ where: :$\map \sgn x$ denotes the signum function of $x$ :$\size x$ denotes the absolute value of $x$. \end{theorem} \begin{proof} Let $x \in \R_{\ne 0}$. Then either $x > 0$ or $x < 0$. Let $x > 0$. Then: {{begin-eqn}} {{eqn | l = \frac x {\size x} | r = \frac x x | c = {{Defof|Absolute Value}}, as $x > 0$ }} {{eqn | r = 1 | c = }} {{eqn | r = \map \sgn x | c = {{Defof|Signum Function}}, as $x > 0$ }} {{end-eqn}} Similarly: {{begin-eqn}} {{eqn | l = \frac {\size x} x | r = \frac x x | c = {{Defof|Absolute Value}}, as $x > 0$ }} {{eqn | r = 1 | c = }} {{eqn | r = \map \sgn x | c = {{Defof|Signum Function}}, as $x > 0$ }} {{end-eqn}} {{qed|lemma}} Let $x < 0$. Then: {{begin-eqn}} {{eqn | l = \frac x {\size x} | r = \frac x {-x} | c = {{Defof|Absolute Value}}, as $x < 0$ }} {{eqn | r = -1 | c = }} {{eqn | r = \map \sgn x | c = {{Defof|Signum Function}}, as $x < 0$ }} {{end-eqn}} Similarly: {{begin-eqn}} {{eqn | l = \frac {\size x} x | r = \frac {-x} x | c = {{Defof|Absolute Value}}, as $x < 0$ }} {{eqn | r = -1 | c = }} {{eqn | r = \map \sgn x | c = {{Defof|Signum Function}}, as $x < 0$ }} {{end-eqn}} {{qed}} Category:Signum Function Category:Absolute Value Function \end{proof}
21360
\section{Signum Function of Reciprocal}
Tags: Signum Function

\begin{theorem}
Let $x \in \R$ such that $x \ne 0$.
Then:
:$\map \sgn x = \map \sgn {\dfrac 1 x}$
where $\map \sgn x$ denotes the signum of $x$.
\end{theorem}

\begin{proof}
{{begin-eqn}}
{{eqn | l = \map \sgn x
      | r = 1
      | c = }}
{{eqn | ll= \leadstoandfrom
      | l = x
      | o = >
      | r = 0
      | c = {{Defof|Signum Function}} }}
{{eqn | ll= \leadstoandfrom
      | l = \frac 1 x
      | o = >
      | r = 0
      | c = Reciprocal of Strictly Positive Real Number is Strictly Positive }}
{{eqn | ll= \leadstoandfrom
      | l = \map \sgn {\dfrac 1 x}
      | r = 1
      | c = {{Defof|Signum Function}} }}
{{end-eqn}}
As $x \ne 0$, the only other possibility is $\map \sgn x = -1$.
By the chain of equivalences above, $\map \sgn x = -1$ holds if and only if $\map \sgn {\dfrac 1 x} \ne 1$, that is, if and only if $\map \sgn {\dfrac 1 x} = -1$.
{{qed}}
Category:Signum Function
\end{proof}
21361
\section{Signum Function on Integers is Extension of Signum on Natural Numbers} Tags: Number Theory \begin{theorem} Let $\sgn_\Z: \Z \to \set {-1, 0, 1}$ be the signum function on the integers. Let $\sgn_\N: \N \to \set {0, 1}$ be the signum function on the natural numbers. Then $\sgn_\Z: \Z \to \Z$ is an extension of $\sgn_\N: \N \to \N$. \end{theorem} \begin{proof} Let $n \in \Z: n \ge 0$. Then by definition of the signum function: :$\map {\sgn_\Z} n = \begin {cases} 0 & : n = 0 \\ 1 & : n > 0 \end {cases}$ So by definition of the signum function on the natural numbers: :$\forall n \in \N: \map {\sgn_\Z} n = \map {\sgn_\N} n$ Hence the result, by definition of extension. {{qed}} Category:Number Theory \end{proof}
21362
\section{Similar Matrices are Equivalent}
Tags: Matrix Algebra

\begin{theorem}
If two square matrices over a ring with unity $R$ are similar, then they are equivalent.
That is:
:every equivalence class for the similarity relation on $\map {\MM_R} n$ is contained in an equivalence class for the relation of matrix equivalence.
where $\map {\MM_R} n$ denotes the $n \times n$ matrix space over $R$.
\end{theorem}

\begin{proof}
Let $\mathbf A \sim \mathbf B$.
Then by definition of similar matrices there exists an invertible matrix $\mathbf P$ such that:
:$\mathbf B = \mathbf P^{-1} \mathbf A \mathbf P$
Let $\mathbf Q = \mathbf P$.
Then $\mathbf A$ is equivalent to $\mathbf B$, as:
:$\mathbf B = \mathbf Q^{-1} \mathbf A \mathbf P$
where both $\mathbf P$ and $\mathbf Q$ are invertible.
{{qed}}
\end{proof}
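For a concrete illustration (not part of the proof), the following numpy sketch takes arbitrary example matrices $\mathbf A$ and invertible $\mathbf P$, forms the similar matrix $\mathbf B = \mathbf P^{-1} \mathbf A \mathbf P$, and checks that it coincides with the matrix-equivalence form $\mathbf Q^{-1} \mathbf A \mathbf P$ on taking $\mathbf Q = \mathbf P$; the numerical values are arbitrary.

```python
import numpy as np

# Arbitrary example: A and an invertible P over the reals.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
P = np.array([[1.0, 2.0],
              [1.0, 3.0]])   # det = 1, so P is invertible

B = np.linalg.inv(P) @ A @ P          # B is similar to A
Q = P                                  # take Q = P
B_equiv = np.linalg.inv(Q) @ A @ P     # the matrix-equivalence form Q^{-1} A P

assert np.allclose(B, B_equiv)
print("B = Q^{-1} A P with Q = P, so these similar matrices are equivalent")
```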
21363
\section{Similar Matrices have same Traces}
Tags: Traces of Matrices

\begin{theorem}
Let $\mathbf A = \sqbrk a_n$ and $\mathbf B = \sqbrk b_n$ be square matrices of order $n$.
Let $\mathbf A$ and $\mathbf B$ be similar.
Then:
:$\map \tr {\mathbf A} = \map \tr {\mathbf B}$
where $\map \tr {\mathbf A}$ denotes the trace of $\mathbf A$.
\end{theorem}

\begin{proof}
By definition of similar matrices:
:$\exists \mathbf P: \mathbf P^{-1} \mathbf A \mathbf P = \mathbf B$
where $\mathbf P$ is an invertible matrix of order $n$.
Thus it remains to show that:
:$\map \tr {\mathbf P^{-1} \mathbf A \mathbf P} = \map \tr {\mathbf A}$
Indeed:
{{begin-eqn}}
{{eqn | l = \map \tr {\mathbf P^{-1} \mathbf A \mathbf P}
      | r = \map \tr {\paren {\mathbf P^{-1} \mathbf A} \mathbf P}
      | c = Matrix Multiplication is Associative }}
{{eqn | r = \map \tr {\mathbf P \paren {\mathbf P^{-1} \mathbf A} }
      | c = as $\map \tr {\mathbf X \mathbf Y} = \map \tr {\mathbf Y \mathbf X}$ for square matrices $\mathbf X, \mathbf Y$ of the same order }}
{{eqn | r = \map \tr {\paren {\mathbf P \mathbf P^{-1} } \mathbf A}
      | c = Matrix Multiplication is Associative }}
{{eqn | r = \map \tr {\mathbf A}
      | c = as $\mathbf P \mathbf P^{-1} = \mathbf I_n$, the unit matrix of order $n$ }}
{{end-eqn}}
{{qed}}
\end{proof}
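The identity can be illustrated numerically, for example with numpy on a randomly generated matrix pair (a random real matrix is invertible with probability $1$; numpy raises an error otherwise). This is a sanity check only, not a proof.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
P = rng.standard_normal((4, 4))        # generically invertible
B = np.linalg.inv(P) @ A @ P           # B is similar to A

# tr(P^{-1} A P) = tr(A), up to floating-point error
assert np.isclose(np.trace(A), np.trace(B))
print("tr(P^{-1} A P) = tr(A) on this example")
```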
21364
\section{Similarity Mapping is Automorphism}
Tags: Similarity Mappings, Automorphisms, Linear Algebra

\begin{theorem}
Let $G$ be a vector space over a field $\struct {K, +, \times}$.
Let $\beta \in K$.
Let $s_\beta: G \to G$ be the similarity on $G$ defined as:
:$\forall \mathbf x \in G: \map {s_\beta} {\mathbf x} = \beta \mathbf x$
If $\beta \ne 0$ then $s_\beta$ is an automorphism of $G$.
\end{theorem}

\begin{proof}
By definition, a vector space automorphism on $G$ is a vector space isomorphism from $G$ to $G$ itself.
By definition, a vector space isomorphism is a mapping $s_\beta: G \to G$ such that:
:$(1): \quad s_\beta$ is a bijection
:$(2): \quad \forall \mathbf x, \mathbf y \in G: \map {s_\beta} {\mathbf x + \mathbf y} = \map {s_\beta} {\mathbf x} + \map {s_\beta} {\mathbf y}$
:$(3): \quad \forall \mathbf x \in G: \forall \lambda \in K: \map {s_\beta} {\lambda \mathbf x} = \lambda \map {s_\beta} {\mathbf x}$
Thus to prove that $s_\beta$ is an '''automorphism''' it is sufficient to demonstrate $(1)$, $(2)$ and $(3)$.
It has been established in Similarity Mapping is Linear Operator that $s_\beta$ is a linear operator on $G$.
Hence $(2)$ and $(3)$ follow by definition of linear operator.
It remains to prove bijectivity.
That is, that $s_\beta$ is both injective and surjective.
Let $1_K$ denote the multiplicative identity of $K$.
As $\beta \ne 0$, it follows from {{Field-axiom|M4}} that $\beta^{-1} \in K$ exists, with $\beta^{-1} \beta = \beta \beta^{-1} = 1_K$.
We have:
{{begin-eqn}}
{{eqn | q = \forall \mathbf x, \mathbf y \in G
      | l = \map {s_\beta} {\mathbf x}
      | r = \map {s_\beta} {\mathbf y}
      | c = }}
{{eqn | ll= \leadsto
      | l = \beta \, \mathbf x
      | r = \beta \, \mathbf y
      | c = Definition of $s_\beta$ }}
{{eqn | ll= \leadsto
      | l = \beta^{-1} \beta \, \mathbf x
      | r = \beta^{-1} \beta \, \mathbf y
      | c = {{Field-axiom|M4}} }}
{{eqn | ll= \leadsto
      | l = 1_K \, \mathbf x
      | r = 1_K \, \mathbf y
      | c = {{Field-axiom|M3}} }}
{{eqn | ll= \leadsto
      | l = \mathbf x
      | r = \mathbf y
      | c = {{Vector-space-axiom|8}} }}
{{end-eqn}}
Hence it has been demonstrated that $s_\beta$ is injective.
Let $\mathbf y \in G$.
Then:
{{begin-eqn}}
{{eqn | q = \forall \mathbf y \in G: \exists \mathbf x \in G
      | l = \mathbf x
      | r = \beta^{-1} \mathbf y
      | c = }}
{{eqn | ll= \leadsto
      | l = \beta \, \mathbf x
      | r = \beta \beta^{-1} \mathbf y
      | c = }}
{{eqn | r = 1_K \mathbf y
      | c = {{Field-axiom|M4}} }}
{{eqn | r = \mathbf y
      | c = {{Vector-space-axiom|8}} }}
{{eqn | ll= \leadsto
      | q = \forall \mathbf y \in G: \exists \mathbf x \in G
      | l = \map {s_\beta} {\mathbf x}
      | r = \mathbf y
      | c = Definition of $s_\beta$ }}
{{end-eqn}}
Hence it has been demonstrated that $s_\beta$ is surjective.
Hence the result.
{{qed}}
\end{proof}
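A minimal numerical sketch of the key facts used above, taking $G = \R^3$ over $\R$ and an arbitrary nonzero $\beta$: linearity, and the fact that scaling by $\beta^{-1}$ inverts $s_\beta$, so that $s_\beta$ is a bijection. The names below are illustrative only.

```python
import numpy as np

beta = 2.5                      # an arbitrary nonzero scalar (illustrative)
s_beta = lambda v: beta * v     # the similarity mapping s_beta
s_inv = lambda v: v / beta      # scaling by beta^{-1}

x = np.array([1.0, -2.0, 4.0])
y = np.array([0.5, 3.0, -1.0])
lam = -1.5

# s_beta is a linear operator (established separately)
assert np.allclose(s_beta(x + y), s_beta(x) + s_beta(y))
assert np.allclose(s_beta(lam * x), lam * s_beta(x))

# scaling by beta^{-1} undoes s_beta, so s_beta is injective and surjective
assert np.allclose(s_inv(s_beta(x)), x)
assert np.allclose(s_beta(s_inv(y)), y)
print("s_beta is a bijective linear operator for beta != 0")
```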
21365
\section{Similarity Mapping is Linear Operator}
Tags: Similarity Mappings, Linear Operators, Linear Algebra

\begin{theorem}
Let $G$ be a vector space over a field $\struct {K, +, \times}$.
Let $\beta \in K$.
Then the similarity $s_\beta: G \to G$ defined as:
:$\forall \mathbf x \in G: \map {s_\beta} {\mathbf x} = \beta \mathbf x$
is a linear operator on $G$.
\end{theorem}

\begin{proof}
To prove that $s_\beta$ is a '''linear operator''' it is sufficient to demonstrate that:
:$(1): \quad \forall \mathbf x, \mathbf y \in G: \map {s_\beta} {\mathbf x + \mathbf y} = \map {s_\beta} {\mathbf x} + \map {s_\beta} {\mathbf y}$
:$(2): \quad \forall \mathbf x \in G: \forall \lambda \in K: \map {s_\beta} {\lambda \mathbf x} = \lambda \map {s_\beta} {\mathbf x}$
Indeed:
{{begin-eqn}}
{{eqn | q = \forall \mathbf x, \mathbf y \in G
      | l = \map {s_\beta} {\mathbf x + \mathbf y}
      | r = \beta \paren {\mathbf x + \mathbf y}
      | c = Definition of $s_\beta$ }}
{{eqn | r = \beta \, \mathbf x + \beta \, \mathbf y
      | c = {{Vector-space-axiom|6}} }}
{{eqn | r = \map {s_\beta} {\mathbf x} + \map {s_\beta} {\mathbf y}
      | c = Definition of $s_\beta$ }}
{{end-eqn}}
and:
{{begin-eqn}}
{{eqn | q = \forall \mathbf x \in G: \forall \lambda \in K
      | l = \map {s_\beta} {\lambda \mathbf x}
      | r = \beta \paren {\lambda \mathbf x}
      | c = Definition of $s_\beta$ }}
{{eqn | r = \paren {\beta \lambda} \mathbf x
      | c = {{Vector-space-axiom|7}} }}
{{eqn | r = \paren {\lambda \beta} \mathbf x
      | c = {{Field-axiom|M2}} }}
{{eqn | r = \lambda \paren {\beta \mathbf x}
      | c = {{Vector-space-axiom|7}} }}
{{eqn | r = \lambda \map {s_\beta} {\mathbf x}
      | c = Definition of $s_\beta$ }}
{{end-eqn}}
Hence the result.
{{qed}}
\end{proof}
21366
\section{Similarity Mapping on Plane Commutes with Half Turn about Origin}
Tags: Geometric Rotations, Euclidean Geometry, Similarity Mappings, Analytic Geometry

\begin{theorem}
Let $\beta \in \R_{>0}$ be a (strictly) positive real number.
Let $s_{-\beta}: \R^2 \to \R^2$ be the similarity mapping on $\R^2$ whose scale factor is $-\beta$.
Then $s_{-\beta}$ is the same as:
:a stretching or contraction of scale factor $\beta$ followed by a rotation one half turn
and:
:a rotation one half turn followed by a stretching or contraction of scale factor $\beta$.
\end{theorem}

\begin{proof}
Let $P = \tuple {x, y} \in \R^2$ be an arbitrary point in the plane.
From Similarity Mapping on Plane with Negative Parameter, $s_{-\beta}$ is a stretching or contraction of scale factor $\beta$ followed by a rotation one half turn.
Thus:
{{begin-eqn}}
{{eqn | l = \map {s_{-\beta} } P
      | r = \map {s_{-1} } {\map {s_\beta} P}
      | c = Similarity Mapping on Plane with Negative Parameter }}
{{eqn | r = \paren {-1} \map {s_\beta} P
      | c = Definition of $s_{-1}$ }}
{{eqn | r = \paren {-1} \tuple {\beta x, \beta y}
      | c = Definition of $s_\beta$ }}
{{eqn | r = \tuple {-\beta x, -\beta y}
      | c = }}
{{eqn | r = \beta \tuple {-x, -y}
      | c = }}
{{eqn | r = \beta \map {s_{-1} } P
      | c = Definition of $s_{-1}$ }}
{{eqn | r = \map {s_\beta} {\map {s_{-1} } P}
      | c = Definition of $s_\beta$ }}
{{end-eqn}}
That is:
:$s_{-\beta}$ is also a rotation one half turn followed by a stretching or contraction of scale factor $\beta$.
{{qed}}
\end{proof}
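For a numerical illustration (not part of the proof), the following checks on an arbitrary point that $\map {s_{-\beta} } P$, the half turn applied after the stretching, and the stretching applied after the half turn all agree.

```python
import numpy as np

beta = 1.75                                  # any strictly positive scale factor
half_turn = np.array([[-1.0,  0.0],
                      [ 0.0, -1.0]])          # rotation of the plane through pi

P = np.array([3.0, -2.0])                     # an arbitrary point of the plane

s_neg_beta = -beta * P                        # similarity with scale factor -beta
stretch_then_rotate = half_turn @ (beta * P)
rotate_then_stretch = beta * (half_turn @ P)

assert np.allclose(s_neg_beta, stretch_then_rotate)
assert np.allclose(s_neg_beta, rotate_then_stretch)
print("s_{-beta} = r_pi o s_beta = s_beta o r_pi on this example")
```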
21367
\section{Similarity Mapping on Plane with Negative Parameter}
Tags: Euclidean Geometry, Similarity Mappings, Analytic Geometry

\begin{theorem}
Let $\beta \in \R_{<0}$ be a (strictly) negative real number.
Let $s_\beta: \R^2 \to \R^2$ be the similarity mapping on $\R^2$ whose scale factor is $\beta$.
Then $s_\beta$ is a stretching or contraction followed by a rotation one half turn.
\end{theorem}

\begin{proof}
Let $\beta = -\gamma$ where $\gamma \in \R_{>0}$.
Let $P = \tuple {x, y} \in \R^2$ be an arbitrary point in the plane.
Then:
{{begin-eqn}}
{{eqn | l = \map {s_\beta} P
      | r = \tuple {\paren {-\gamma} x, \paren {-\gamma} y}
      | c = Definition of $s_\beta$, with $\beta = -\gamma$ }}
{{eqn | r = \paren {-1} \tuple {\gamma x, \gamma y}
      | c = }}
{{eqn | r = \paren {-1} \map {s_\gamma} P
      | c = Definition of $s_\gamma$ }}
{{eqn | r = \map {s_{-1} } {\map {s_\gamma} P}
      | c = Definition of $s_{-1}$ }}
{{end-eqn}}
Because $\gamma > 0$ we have by definition that $s_\gamma$ is a stretching or contraction.
From Similarity Mapping on Plane with Scale Factor Minus 1, $s_{-1}$ is the rotation of the plane about the origin through the angle $\pi$.
Hence, by definition of half turn:
:$s_\beta$ is a stretching or contraction followed by a rotation one half turn.
{{qed}}
\end{proof}
21368
\section{Similarity Mapping on Plane with Scale Factor Minus 1}
Tags: Geometric Rotations, Euclidean Geometry, Similarity Mappings, Analytic Geometry

\begin{theorem}
Let $s_{-1}: \R^2 \to \R^2$ be a similarity mapping on $\R^2$ whose scale factor is $-1$.
Then $s_{-1}$ is the same as the rotation $r_\pi$ of the plane about the origin one half turn.
\end{theorem}

\begin{proof}
Let $P = \tuple {x, y} \in \R^2$ be an arbitrary point in the plane.
Then:
{{begin-eqn}}
{{eqn | l = \map {r_\pi} P
      | r = \tuple {x \cos \pi - y \sin \pi, x \sin \pi + y \cos \pi}
      | c = Rotation of Plane about Origin is Linear Operator }}
{{eqn | r = \tuple {x \paren {-1} - y \times 0, x \times 0 + y \paren {-1} }
      | c = Cosine of Straight Angle, Sine of Straight Angle }}
{{eqn | r = \tuple {-x, -y}
      | c = }}
{{eqn | r = \paren {-1} \tuple {x, y}
      | c = }}
{{eqn | r = \map {s_{-1} } P
      | c = Definition of $s_{-1}$ }}
{{end-eqn}}
{{qed}}
\end{proof}
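The identification of $s_{-1}$ with $r_\pi$ can be checked numerically on an arbitrary point, allowing for floating-point noise in $\sin \pi$; this is an illustration only.

```python
import numpy as np

theta = np.pi
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation about the origin by pi

P = np.array([3.0, -2.0])                          # an arbitrary point

# r_pi(P) agrees with s_{-1}(P) = -P, up to floating-point noise in sin(pi)
assert np.allclose(R @ P, -P)
print("r_pi(P) = -P = s_{-1}(P)")
```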
21369
\section{Similarity of Polygons is Equivalence Relation}
Tags: Polygons

\begin{theorem}
Let $A, B, C$ be polygons.
If $A$ and $B$ are both similar to $C$, then $A$ is similar to $B$.
{{:Euclid:Proposition/VI/21}}
It is also worth noting that:
:$A$ is similar to $A$, and so similarity between polygons is reflexive.
:If $A$ is similar to $B$, then $B$ is similar to $A$, and so similarity between polygons is symmetric.
Hence the relation of similarity between polygons is an equivalence relation.
\end{theorem}

\begin{proof}
We have that $A$ is similar to $C$.
From {{EuclidDefLink|VI|1|Similar Rectilineal Figures}}, it is equiangular with it and the sides about the equal angles are proportional.
We also have that $B$ is similar to $C$.
Again, from {{EuclidDefLink|VI|1|Similar Rectilineal Figures}}, it is equiangular with it and the sides about the equal angles are proportional.
So by definition $A$ is similar to $B$.
The statements of reflexivity and symmetry are shown similarly.
It follows that if $A$ is similar to $B$, and $B$ is similar to $C$, then $A$ is similar to $C$.
Thus similarity between polygons is transitive.
Hence the result, by definition of equivalence relation.
{{qed}}
{{Euclid Note|21|VI|{{AuthorRef|Euclid}} himself did not have the concept of an equivalence relation.<br/>However, the extra statements leading to the main result are sufficiently straightforward to justify adding the full proof here.}}
\end{proof}
21370
\section{Simple Algebraic Field Extension consists of Polynomials in Algebraic Number}
Tags: Field Extensions

\begin{theorem}
Let $F$ be a field.
Let $\theta \in \C$ be algebraic over $F$.
Let $\map F \theta$ be the simple field extension of $F$ by $\theta$.
Then $\map F \theta$ consists of exactly those numbers which can be written in the form $\map f \theta$, where $\map f x$ is a polynomial over $F$.
\end{theorem}

\begin{proof}
Let $H$ be the set of all numbers which can be written in the form $\map f \theta$.
We have that:
:$H$ is closed under addition and multiplication
:$H$ contains $0$ and $1$
:for every element of $H$, $H$ also contains its negative.
Let $\map f \theta \ne 0$.
Then $\theta$ is not a root of $\map f x$.
Hence from Polynomial with Algebraic Number as Root is Multiple of Minimal Polynomial:
:the minimal polynomial $\map m x$ of $\theta$ does not divide $\map f x$.
From Minimal Polynomial is Irreducible, the GCD of $\map m x$ and $\map f x$ is $1$.
Therefore:
:$\exists \map s x, \map t x: \map s x \map m x + \map t x \map f x = 1$
Substituting $\theta$ for $x$:
:$\map s \theta \, \map m \theta + \map t \theta \, \map f \theta = 1$
Because $\map m \theta = 0$ it follows that:
:$\map t \theta \, \map f \theta = 1$
We have that $\map t \theta \in H$.
Thus $\map t \theta$ is the product inverse of $\map f \theta$ in $H$.
Thus $H$ is a field.
A field containing $F$ and $\theta$ must contain $1$ and all the powers of $\theta$ for positive integer index.
Hence such a field also contains all linear combinations of these, with coefficients in $F$.
So a field containing $F$ and $\theta$ contains all the elements of $H$:
:$H \subseteq \map F \theta$
But by definition, $\map F \theta$ is the smallest field containing $F$ and $\theta$.
That is:
:$\map F \theta \subseteq H$
Thus:
:$\map F \theta = H$
and the result follows.
{{Qed}}
\end{proof}
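The inverse construction in the proof can be carried out concretely, for example with $F = \Q$, $\theta = \sqrt 2$ (minimal polynomial $x^2 - 2$) and $\map f x = x + 1$. The sketch below uses SymPy's gcdex for the extended Euclidean step; this particular worked example is ours and is not part of the source.

```python
from sympy import symbols, sqrt, gcdex, expand

x = symbols('x')
m = x**2 - 2          # minimal polynomial of theta = sqrt(2) over Q
f = x + 1             # f(theta) = 1 + sqrt(2), a nonzero element of Q(sqrt(2))

# Extended Euclidean algorithm: s*m + t*f = gcd(m, f) = 1
s, t, g = gcdex(m, f, x)
assert g == 1

theta = sqrt(2)
# Since m(theta) = 0, it follows that t(theta) * f(theta) = 1
inverse = t.subs(x, theta)             # here t = x - 1, so the inverse is sqrt(2) - 1
assert expand(inverse * f.subs(x, theta)) == 1
print("1 / (1 + sqrt(2)) =", inverse)
```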
21371
\section{Simple Events are Mutually Exclusive} Tags: Events \begin{theorem} Let $\EE$ be an experiment. Let $e_1$ and $e_2$ be distinct simple events in $\EE$. Then $e_1$ and $e_2$ are mutually exclusive. \end{theorem} \begin{proof} By definition of simple event: {{begin-eqn}} {{eqn | l = e_1 | r = \set {s_1} }} {{eqn | l = e_2 | r = \set {s_2} }} {{end-eqn}} for some elementary events $s_1$ and $s_2$ of $\EE$ such that $s_1 \ne s_2$. It follows that: {{begin-eqn}} {{eqn | l = e_1 \cap e_2 | r = \set {s_1} \cap \set {s_2} | c = Definition of $e_1$ and $e_2$ }} {{eqn | r = \O | c = {{Defof|Set Intersection}} }} {{end-eqn}} The result follows by definition of mutually exclusive events. {{qed}} \end{proof}
21372
\section{Simple Function is Measurable}
Tags: Measure Theory, Measurable Functions, Simple Functions

\begin{theorem}
Let $\struct {X, \Sigma}$ be a measurable space.
Let $f: X \to \R$ be a simple function.
Then $f$ is $\Sigma$-measurable.
\end{theorem}

\begin{proof}
Let $f$ be written in the following form:
:$f = \ds \sum_{i \mathop = 1}^n a_i \chi_{S_i}$
where $a_i \in \R$ and the $S_i$ are $\Sigma$-measurable.
Next, for each ordered $n$-tuple $b$ of zeroes and ones define:
:$\map {T_b} i := \begin{cases} S_i & : \text {if $\map b i = 1$}\\ X \setminus S_i & : \text {if $\map b i = 0$} \end{cases}$
and subsequently:
:$T_b := \ds \bigcap_{i \mathop = 1}^n \map {T_b} i$
From Sigma-Algebra Closed under Intersection, $T_b \in \Sigma$ for all $b$.
Also, the $T_b$ are pairwise disjoint, and furthermore:
:$f = \ds \sum_b a_b \chi_{T_b}$
where:
:$a_b := \ds \sum_{i \mathop = 1}^n \map b i a_i$
Indeed, for $x \in T_b$ we have $\map {\chi_{S_i} } x = \map b i$ for each $i$, so that $\map f x = a_b$.
{{finish|prove it, it's a messy business}}
Now we have, for all $\lambda \in \R$:
:$\set {x \in X: \map f x > \lambda} = \ds \bigcup \set {T_b: a_b > \lambda}$
which by Sigma-Algebra Closed under Union is a $\Sigma$-measurable set.
From Characterization of Measurable Functions: $(5)$ it follows that $f$ is measurable.
{{qed}}
\end{proof}
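The refinement of $f$ into the pairwise disjoint sets $T_b$ can be illustrated on a finite ground set, where everything is checkable pointwise; the sets and coefficients below are arbitrary examples, and the code is a sketch of the construction rather than part of the proof.

```python
from itertools import product

# Finite illustration of the refinement step: rewrite f = sum_i a_i * chi_{S_i}
# over the pairwise disjoint sets T_b, with coefficient a_b = sum_{b(i) = 1} a_i.
X = set(range(8))
S = [{0, 1, 2, 3}, {2, 3, 4, 5}, {5, 6}]       # the S_i (not disjoint)
a = [1.0, 2.0, -3.0]                            # the a_i

def f(x):
    return sum(a_i for a_i, S_i in zip(a, S) if x in S_i)

# For each 0/1 tuple b: T_b = intersection of S_i (where b_i = 1) and X \ S_i (where b_i = 0)
refinement = {}
for b in product((0, 1), repeat=len(S)):
    T_b = set(X)
    for b_i, S_i in zip(b, S):
        T_b &= S_i if b_i == 1 else (X - S_i)
    a_b = sum(b_i * a_i for b_i, a_i in zip(b, a))
    refinement[b] = (T_b, a_b)

# The T_b are pairwise disjoint and f = sum_b a_b * chi_{T_b}
for x in X:
    assert f(x) == sum(a_b for (T_b, a_b) in refinement.values() if x in T_b)
print("f agrees with its representation over the disjoint refinement")
```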