id | contents
---|---
23573
|
\section{Definition:Null Module}
Tags: Module Theory, Definitions: Module Theory, Modules
\begin{theorem}
Let $\left({R, +_R, \circ_R}\right)$ be a ring.
Let $G$ be the trivial group.
Then $\left({G, +_G, \circ}\right)_R$ is an $R$-module.
This module is known as the '''null module'''.
\end{theorem}
\begin{proof}
Follows from the fact that $\left({G, +_G, \circ}\right)_R$ has to be, by definition, a trivial module:
$\circ$ can only be defined as:
:$\forall \lambda \in R: \forall x \in G: \lambda \circ x = e_G$
{{qed}}
Category:Module Theory
\end{proof}
|
23574
|
\section{Definition:Null Ring}
Tags: Definitions: Ring Theory, Ring Theory, Rings, Definitions: Examples of Rings, Definitions: Ring Examples
\begin{theorem}
A ring with one element is called the '''null ring'''.
That is, the null ring is $\left({\left\{{0_R}\right\}, +, \circ}\right)$, where ring addition and the ring product are defined as:
* $0_R + 0_R = 0_R$
* $0_R \circ 0_R = 0_R$
The null ring is a trivial ring and therefore a commutative ring.
Consequently, a '''non-null ring''' is a ring with more than one element.
\end{theorem}
\begin{proof}
It remains to be demonstrated that the null ring is actually a ring.
So, taking the ring axioms in turn:
\end{proof}
|
23575
|
\section{Definition:Ordered Dual Basis}
Tags: Linear Transformations, Definitions: Linear Algebra
\begin{theorem}
Let $R$ be a commutative ring.
Let $\left({G, +_G, \circ}\right)_R$ be an $n$-dimensional module over $R$.
Let $\left \langle {a_n} \right \rangle$ be an ordered basis of $G$.
Let $G^*$ be the algebraic dual of $G$.
Then there is an ordered basis $\left \langle {a'_n} \right \rangle$ of $G^*$ satisfying $\forall i, j \in \left[{1 \,.\,.\, n}\right]: a'_i \left({a_j}\right) = \delta_{i j}$.
This ordered basis $\left \langle {a'_n} \right \rangle$ of $G^*$ is called the '''ordered basis of $G^*$ dual to $\left \langle {a_n} \right \rangle$''', or the '''ordered dual basis of $G^*$'''.
\end{theorem}
\begin{proof}
Since $\left\{{1_R}\right\}$ is a basis of the $R$-module $R$, it follows from Product of Linear Transformations that the ordered basis as described exists.
{{Qed}}
\end{proof}
|
23576
|
\section{Definition:Power of Element}
Tags: Semigroups, Definitions: Abstract Algebra, Abstract Algebra, Definitions: Powers (Abstract Algebra)
\begin{theorem}
Let $\left({S, \circ}\right)$ be a semigroup. Let $x \in S$.
Let $\left({x_1, x_2, \ldots, x_n}\right)$ be the ordered $n$-tuple defined by $x_k = x$ for each $k \in \N_n$.
Then:
:$\displaystyle \prod_{k=1}^n x_k = \circ^n x$
In a general semigroup, we usually write $\circ^n x$ as $x^n$.
In a semigroup in which $\circ$ is addition, or derived from addition, this can be written $n x$, that is, '''$n$ times $x$'''.
It can be defined inductively as:
:$x^n = \begin{cases}
x & : n = 1 \\
x^{n-1} \circ x & : n > 1
\end{cases}$
or
:$n x = \begin{cases}
x & : n = 1 \\
\left({n - 1}\right) x \circ x & : n > 1
\end{cases}$
Sometimes, for clarity, $n \cdot x$ is preferred to $n x$.
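For example, with $n = 3$:
:$\circ^3 x = x^3 = x \circ x \circ x$
or, in additive notation:
:$3 x = x + x + x$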
\end{theorem}
\begin{proof}
Follows directly from Recursive Mapping to Semigroup.
{{Qed}}
\end{proof}
|
23577
|
\section{Definition:Prime Decomposition}
Tags: Definitions: Prime Decompositions, Definitions: Prime Decomposition, Number Theory, Definitions: Prime Numbers, Definitions: Number Theory, Definitions
\begin{theorem}
Let <math>n > 1 \in \Z</math>.
Then <math>n</math> has a unique factorization of the form:
:<math>n = p_1^{k_1} p_2^{k_2} \ldots p_r^{k_r}</math>
where <math>p_1 < p_2 < \ldots < p_r</math> are distinct primes and <math>k_1, k_2, \ldots, k_r</math> are positive integers.
This unique expression is known as the '''prime decomposition''' of <math>n</math>.
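For example:
:<math>360 = 2^3 \times 3^2 \times 5</math>
is the prime decomposition of <math>360</math>, with <math>p_1 = 2, p_2 = 3, p_3 = 5</math> and <math>k_1 = 3, k_2 = 2, k_3 = 1</math>.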
\end{theorem}
\begin{proof}
This is just another way of stating the Fundamental Theorem of Arithmetic.
{{Stub}}
Category:Number Theory
\end{proof}
|
23578
|
\section{Definition:Principal Ideal of Ring}
Tags: Definitions: Ideal Theory, Ideal Theory
\begin{theorem}
Let <math>\left({R, +, \circ}\right)</math> be a ring with unity.
Let <math>a \in R</math>.
We define <math>\left({a}\right) = \left\{{r \circ a: r \in R}\right\}</math>.
Then:
# <math>\forall a \in R: \left({a}\right)</math> is an ideal of <math>R</math>;
# <math>\forall a \in R: a \in \left({a}\right)</math>;
# <math>\forall a \in R:</math> if <math>J</math> is an ideal of <math>R</math>, and <math>a \in J</math>, then <math>\left({a}\right) \subseteq J</math>. That is, <math>\left({a}\right)</math> is the smallest ideal of <math>R</math> containing <math>a</math>.
The ideal <math>\left({a}\right)</math> is called the '''principal ideal of <math>R</math> generated by <math>a</math>'''.
That is, an ideal <math>J</math> of <math>R</math> is a '''principal ideal''' if <math>\exists a \in R</math> such that <math>J = \left({a}\right)</math>, the ideal generated by <math>a</math>.
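For example, in the ring of integers <math>\Z</math>, the principal ideal generated by <math>3</math> is:
:<math>\left({3}\right) = \left\{{3 r: r \in \Z}\right\} = \left\{{\ldots, -6, -3, 0, 3, 6, \ldots}\right\}</math>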
\end{theorem}
\begin{proof}
Let <math>a \in R</math>.
First we establish that <math>\left({a}\right)</math> is an ideal of <math>R</math>, by verifying the conditions of Test for Ideal.
* <math>\left({a}\right) \ne \varnothing</math>, as <math>1_R \circ a = a \in \left({a}\right)</math>.
* Let <math>x, y \in \left({a}\right)</math>. Then:
{{begin-equation}}
{{equation | ll=<math>\exists r, s \in R:</math>
| l=<math>x</math>
| r=<math>r \circ a, y = s \circ a</math>
| c=
}}
{{equation | ll=<math>\Longrightarrow</math>
| l=<math>x + \left({- y}\right)</math>
| r=<math>r \circ a + \left({- s \circ a}\right)</math>
| c=
}}
{{equation | r=<math>\left({r + \left({- s}\right)}\right) \circ a</math>
| c=
}}
{{equation | ll=<math>\Longrightarrow</math>
| l=<math>x + \left({- y}\right)</math>
| o=<math>\in</math>
| r=<math>\left({a}\right)</math>
| c=
}}
{{end-equation}}
* Let <math>s \in \left({a}\right), x \in R</math>.
{{begin-equation}}
{{equation | l=<math>s</math>
| o=<math>\in</math>
| r=<math>\left({a}\right), x \in R</math>
| c=
}}
{{equation | ll=<math>\Longrightarrow</math>
| l=<math>\exists r \in R: s</math>
| o=<math>=</math>
| r=<math>r \circ a</math>
| c=
}}
{{equation | ll=<math>\Longrightarrow</math>
| l=<math>x \circ s</math>
| o=<math>=</math>
| r=<math>x \circ r \circ a</math>
| c=
}}
{{equation | o=<math>\in</math>
| r=<math>\left({a}\right)</math>
| c=
}}
{{end-equation}}
... and similarly <math>s \circ x \in \left({a}\right)</math>.
Thus by Test for Ideal, <math>\left({a}\right)</math> is an ideal of <math>R</math>.
* Now let <math>J</math> be an ideal of <math>R</math> such that <math>a \in J</math>.
By the definition of an ideal, <math>\forall r \in R: r \circ a \in J</math>.
So every element of <math>\left({a}\right)</math> is in <math>J</math>, thus <math>\left({a}\right) \subseteq J</math>.
\end{proof}
|
23579
|
\section{Definition:Quotient Epimorphism}
Tags: Definitions: Quotient Epimorphisms, Quotient Groups, Epimorphisms, Group Epimorphisms, Normal Subgroups, Definitions: Epimorphisms, Definitions: Quotient Mappings, Definitions: Group Homomorphisms, Definitions: Quotient Structures, Definitions: Homomorphisms
\begin{theorem}
Let $G$ be a group.
Let $N$ be a normal subgroup of $G$.
Let $G / N$ be the quotient group of $G$ by $N$.
Then the mapping $q: G \to G / N$, defined as:
:$q: G \to G / N: q \left({x}\right) = x N$
is a group epimorphism, and its kernel is $N$.
This epimorphism is known as the '''natural epimorphism''' from $G$ to $G / N$.
\end{theorem}
\begin{proof}
The proof follows from Quotient Mapping on Structure is Canonical Epimorphism.
When $N \triangleleft G$, we have:
{{begin-eqn}}
{{eqn | ll=\forall x, y \in G:
| l=q \left({x y}\right)
| r=x y N
| c=by definition
}}
{{eqn | r=\left({x N}\right) \left({y N}\right)
| c=Definition of Quotient Group
}}
{{eqn | r=q \left({x}\right) q \left({y}\right)
| c=by definition
}}
{{end-eqn}}
Therefore $q$ is a homomorphism.
Every element of $G / N$ is of the form $x N = q \left({x}\right)$ for some $x \in G$, so $q$ is surjective.
Therefore $q$ is an epimorphism.
Let $x \in G$.
{{begin-eqn}}
{{eqn | l=x
| o=\in
| r=\ker \left({q}\right)
| c=
}}
{{eqn | ll=\iff
| l=q \left({x}\right)
| r=e_{G/N}
| c=Definition of Kernel
}}
{{eqn | ll=\iff
| l=x N
| r=e_{G/N} N
| c=by definition of $q$
}}
{{eqn | ll=\iff
| l=x N
| r=N
| c=Coset by Identity
}}
{{eqn | ll=\iff
| l=x
| o=\in
| r=N
| c=Coset Equals Subgroup iff Element in Subgroup
}}
{{end-eqn}}
... thus proving that $\ker \left({q}\right) = N$ from definition of subset.
{{qed}}
\end{proof}
|
23580
|
\section{Definition:Quotient Group}
Tags: Definitions: Group Theory, Quotient Groups, Normal Subgroups, Definitions: Quotient Groups, Definitions: Normality in Groups
\begin{theorem}
Let <math>G</math> be a group.
Let <math>N</math> be a normal subgroup of <math>G</math>.
Then the left coset space <math>G / N</math> is a group, where the group product is defined as:
:<math>\left({a N}\right) \left({b N}\right) = \left({a b}\right) N</math>
<math>G / N</math> is called the '''quotient group''' (or '''factor group''') of <math>G</math> by <math>N</math>.
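For example, let <math>G</math> be the additive group of integers <math>\Z</math> and let <math>N = 2 \Z</math>.
Then <math>\Z / 2 \Z = \left\{{2 \Z, 1 + 2 \Z}\right\}</math>, where (writing the group product additively):
:<math>\left({1 + 2 \Z}\right) + \left({1 + 2 \Z}\right) = \left({1 + 1}\right) + 2 \Z = 2 \Z</math>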
\end{theorem}
\begin{proof}
The operation has been shown in Product of Cosets to be well-defined.
Now we need to demonstrate that <math>G / N</math> is a group.
\end{proof}
|
23581
|
\section{Definition:Quotient Ring}
Tags: Equivalence Relations, Definitions: Ring Theory, Ideal Theory, Rings, Definitions: Ideal Theory
\begin{theorem}
Let <math>\left({R, +, \circ}\right)</math> be a ring.
Let <math>\equiv</math> be an equivalence relation on <math>R</math> compatible with both <math>\circ</math> and <math>+</math>, i.e. a congruence relation on <math>R</math>.
Let <math>J = \left[\!\left[{0_R}\right]\!\right]_\equiv</math> be the equivalence class of <math>0_R</math> under <math>\equiv</math>.
Then:
* <math>J = \left[\!\left[{0_R}\right]\!\right]_\equiv</math> is an ideal of <math>R</math>, and the equivalence defined by the partition <math>R / J</math> is <math>\equiv</math> itself;
* <math>\left({R / \equiv, +_\equiv, \circ_\equiv}\right)</math> is a ring, where <math>R / \equiv</math> is the quotient set of <math>R</math> by <math>\equiv</math>;
The ring <math>\left({R / \equiv, +_\equiv, \circ_\equiv}\right)</math> is the '''quotient ring''' of <math>R</math> and <math>\equiv</math>, and is the same thing as <math>\left({R / J, +, \circ}\right)</math>.
Similarly, if <math>J</math> is an ideal of <math>R</math>, then <math>J</math> induces a congruence relation <math>\equiv_J</math> on <math>R</math> such that <math>\left({R / J, +, \circ}\right)</math> is itself a quotient ring.
Let addition be defined on <math>\left({R / J, +, \circ}\right)</math> as here, and ring product be as defined here.
Then we are justified in our claim that <math>\left({R / J, +, \circ}\right)</math> is a ring.
The mapping <math>q_J: \left({R, +, \circ}\right) \to \left({R / J, +, \circ}\right)</math> is a ring epimorphism.
\end{theorem}
\begin{proof}
This follows from the fact that, for a congruence <math>\equiv</math>, the quotient mapping from <math>R</math> to <math>R / \equiv</math> is an epimorphism.
As <math>\equiv</math> is an equivalence relation on <math>R</math> compatible with both <math>\circ</math> and <math>+</math>, it is therefore a congruence on <math>R</math> for both operations.
* Let <math>J = \left[\!\left[{0_R}\right]\!\right]_\equiv</math>.
By Compatible Relation Normal Subgroup, <math>\left({J, +}\right)</math> is a normal subgroup of <math>\left({R, +}\right)</math>.
The elements of <math>R / J</math> are the cosets of <math>\left[\!\left[{0_R}\right]\!\right]_\equiv</math>, and the fact that the equivalence defined by the partition <math>R / J</math> is <math>\equiv</math> itself follows from the same result.
* Now note that as <math>\equiv</math> is also compatible with <math>\circ</math>, we also have:
<math>\forall y \in R: \left[\!\left[{y}\right]\!\right]_\equiv \circ \left[\!\left[{0_R}\right]\!\right]_\equiv = \left[\!\left[{0_R}\right]\!\right]_\equiv = \left[\!\left[{0_R}\right]\!\right]_\equiv \circ \left[\!\left[{y}\right]\!\right]_\equiv</math>
That is:
<math>\forall x \in J, y \in R: y \circ x \in J, x \circ y \in J</math>
* The equivalence <math>\equiv_J</math> defined on <math>R</math> by <math>J</math> is a congruence for both <math>+</math> and <math>\circ</math>:
Let <math>x + \left({-x'}\right), y + \left({-y'}\right) \in J</math>. Then:
<math>x \circ y + \left({- x' \circ y'}\right) = \left({x + \left({-x'}\right)}\right) \circ y + x' \circ \left({y + \left({-y'}\right)}\right) \in J</math>
Hence the result.
{{qed}}
\end{proof}
|
23582
|
\section{Definition:Rational Number/Canonical Form}
Tags: Definition: Numbers, Definitions: Rational Numbers, Definitions: Numbers, Analysis, Rational Numbers, Definitions: Fractions
\begin{theorem}
Let $r \in \Q$ be a rational number.
The '''canonical form of $r$''' is the expression $\dfrac p q$, where:
:$r = \dfrac p q: p \in \Z, q \in \Z_{>0}, p \perp q$
where $p \perp q$ denotes that $p$ and $q$ have no common divisor except $1$.
That is, in its '''canonical form''', $r$ is expressed as $\dfrac p q$ where:
:$p$ is an integer
:$q$ is a strictly positive integer
:$p$ and $q$ are coprime.
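For example, the canonical form of $r = \dfrac 6 {-4}$ is $\dfrac {-3} 2$, as $-3 \in \Z$, $2 \in \Z_{>0}$ and $-3 \perp 2$.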
\end{theorem}
\begin{proof}
As the set of rational numbers is the quotient field of the set of integers, it follows from Divided By a Positive in Quotient Field that:
:$\exists s \in \Z, t \in \Z_{>0}: r = \dfrac s t$
Now if $s \perp t$, our job is done.
Otherwise, let $\gcd \left\{{s, t}\right\} = d$.
Then let $s = p d, t = q d$. As $t, d \in \Z_{>0}$, so is $q$.
From Divide by GCD for Coprime Integers, $p \perp q$.
Also:
:$\displaystyle \frac s t = \frac {p d} {q d} = \frac p q \frac d d = \frac p q 1 = \frac p q$
Thus $r = p / q$ where $p \perp q$ and $q \in \Z_{>0}$.
{{qed}}
\end{proof}
|
23583
|
\section{Definition:Real Number Plane}
Tags: Euclidean Geometry, Analysis, Definitions: Euclidean Geometry, Definitions: Analytic Geometry, Analytic Geometry
\begin{theorem}
The points on the plane are in one-to-one correspondence with the $\R$-vector space $\R^2$.
So from the definition of an ordered $n$-tuple, the general element of $\R^2$ can be defined as an ordered couple $\left({x_1, x_2}\right)$ where $x_1, x_2 \in \R$, or, conventionally, $\left({x, y}\right)$.
Thus, we can identify the elements of $\R^2$ with points in the plane and refer to the point ''as'' its coordinates.
Thus we can refer to $\R^2$ ''as'' '''the plane'''.
\end{theorem}
\begin{proof}
This is shown in Ordered Basis for Coordinate Plane.
{{qed}}
Category:Analytic Geometry
Category:Euclidean Geometry
\end{proof}
|
23584
|
\section{Definition:Ring of Endomorphisms}
Tags: Morphisms, Definitions: Group Endomorphisms, Linear Algebra, Group Endomorphisms, Rings, Endomorphisms, Rings with Unity
\begin{theorem}
Let $\struct {G, \oplus}$ be an abelian group.
Let $\mathbb G$ be the set of all group endomorphisms of $\struct {G, \oplus}$.
Let $*: \mathbb G \times \mathbb G \to \mathbb G$ be the operation defined as:
:$\forall u, v \in \mathbb G: u * v = u \circ v$
where $u \circ v$ is defined as composition of mappings.
Then $\struct {\mathbb G, \oplus, *}$ is a ring with unity, called the '''ring of endomorphisms''' of the abelian group $\struct {G, \oplus}$.
\end{theorem}
\begin{proof}
By Structure Induced by Group Operation is Group, $\struct {\mathbb G, \oplus}$ is an abelian group.
By Set of Homomorphisms is Subgroup of All Mappings, it follows that $\struct {\mathbb G, \oplus}$ is a subgroup of $\struct {G^G, \oplus}$.
Next, we establish that $*$ is associative.
By definition, $\forall u, v \in \mathbb G: u * v = u \circ v$ where $u \circ v$ is defined as composition of mappings.
Associativity of $*$ follows directly from Composition of Mappings is Associative.
Next, we establish that $*$ is distributive over $\oplus$.
Let $u, v, w \in \mathbb G$.
Then:
:$\paren {u \oplus v} * w = \paren {u \oplus v} \circ w$
:$u * \paren {v \oplus w} = u \circ \paren {v \oplus w}$
So let $x \in G$.
Then:
{{begin-eqn}}
{{eqn | l = \map {\paren {\paren {u \oplus v} * w} } x
| r = \map {\paren {\paren {u \oplus v} \circ w} } x
| c =
}}
{{eqn | r = \map {\paren {u \oplus v} } {\map w x}
| c =
}}
{{eqn | r = \map u {\map w x} \oplus \map v {\map w x}
| c =
}}
{{eqn | r = \map {\paren {u \circ w} } x \oplus \map {\paren {v \circ w} } x
| c =
}}
{{eqn | r = \map {\paren {u * w} } x \oplus \map {\paren {v * w} } x
| c =
}}
{{end-eqn}}
So $\paren {u \oplus v} * w = \paren {u * w} \oplus \paren {v * w}$.
Similarly:
{{begin-eqn}}
{{eqn | l = \map {\paren {u * \paren {v \oplus w} } } x
| r = \map {\paren {u \circ \paren {v \oplus w} } } x
| c =
}}
{{eqn | r = \map u {\map {\paren {v \oplus w} } x}
| c =
}}
{{eqn | r = \map u {\map v x \oplus \map w x}
| c =
}}
{{eqn | r = \map u {\map v x} \oplus \map u {\map w x}
| c = $u$ has the morphism property
}}
{{eqn | r = \map {\paren {u \circ v} } x \oplus \map {\paren {u \circ w} } x
| c =
}}
{{eqn | r = \map {\paren {u * v} } x \oplus \map {\paren {u * w} } x
| c =
}}
{{end-eqn}}
So:
:$u * \paren {v \oplus w} = \paren {u * v} \oplus \paren {u * w}$
Thus $*$ is distributive over $\oplus$.
The ring axioms are satisfied, and $\struct {\mathbb G, \oplus, *}$ is a ring.
The zero is easily checked to be the mapping which takes everything to the identity:
:$e: G \to \set {e_G}: \map e x = e_G$
The unity is easily checked to be the identity mapping, which is known to be an automorphism.
{{Qed}}
\end{proof}
|
23585
|
\section{Definition:Ring of Integers Modulo m}
Tags: Definitions: Commutative Algebra, Modulo Arithmetic, Definitions: Modulo Arithmetic, Commutative Algebra, Rings, Definitions: Examples of Rings, Definitions: Ring Examples
\begin{theorem}
For all $m \in \N: m \ge 2$, the algebraic structure $\left({\Z_m, +_m, \times_m}\right)$ is a [[Definition:Commutative and Unitary Ring|commutative ring with unity $\left[\!\left[{1}\right]\!\right]_m$]].
The zero of $\left({\Z_m, +_m, \times_m}\right)$ is $\left[\!\left[{0}\right]\!\right]_m$.
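For example, in $\left({\Z_6, +_6, \times_6}\right)$:
:$\left[\!\left[{4}\right]\!\right]_6 +_6 \left[\!\left[{5}\right]\!\right]_6 = \left[\!\left[{9}\right]\!\right]_6 = \left[\!\left[{3}\right]\!\right]_6$
and:
:$\left[\!\left[{4}\right]\!\right]_6 \times_6 \left[\!\left[{5}\right]\!\right]_6 = \left[\!\left[{20}\right]\!\right]_6 = \left[\!\left[{2}\right]\!\right]_6$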
\end{theorem}
\begin{proof}
First we check the ring axioms:
* '''A:''' The Additive Group of Integers Modulo m $\left({\Z_m, +_m}\right)$ is an abelian group.
* '''M0:''' The Multiplicative Monoid of Integers Modulo m $\left({\Z_m, \times_m}\right)$ is closed.
* '''M1:''' The Multiplicative Monoid of Integers Modulo m $\left({\Z_m, \times_m}\right)$ is associative.
* '''D:''' $\times_m$ distributes over $+_m$ in $\Z_m$.
Then we note that Multiplicative Monoid of Integers Modulo m $\left({\Z_m, \times_m}\right)$ is commutative.
Then we note that the Multiplicative Monoid of Integers Modulo m $\left({\Z_m, \times_m}\right)$ has an identity $\left[\!\left[{1}\right]\!\right]_m$.
Finally we note that [[Modulo Addition has Identity|$\left[\!\left[{0}\right]\!\right]_m$ is the identity]] of the additive group $\left({\Z_m, +_m}\right)$.
{{qed}}
\end{proof}
|
23586
|
\section{Definition:Ring of Linear Operators}
Tags: Definitions: Linear Operators, Definitions: Examples of Rings, Linear Transformations, Linear Operators
\begin{theorem}
Let $\map {\LL_R} G$ be the set of all linear operators on $G$.
{{explain|the precise nature of $G$}}
Let $\phi \circ \psi$ denote the composition of the two linear operators $\phi$ and $\psi$.
Then $\struct {\map {\LL_R} G, +, \circ}$ is a ring.
\end{theorem}
\begin{proof}
Follows from Composite of R-Algebraic Structure Homomorphisms is Homomorphism, as it is a subring of the ring of all endomorphisms of the abelian group $\struct {G, +}$.
{{ProofWanted}}
Category:Linear Operators
\end{proof}
|
23587
|
\section{Definition:Ring of Polynomial Functions}
Tags: Definitions: Pointwise Operations, Definitions: Polynomial Theory, Polynomial Theory
\begin{theorem}
Let $(R,+,\circ)$ be a commutative ring with unity.
Let $R \left[\left\{{X_j: j \in J}\right\}\right]$ be the ring of polynomial forms over $R$ in the indeterminates $\left\{{X_j: j \in J}\right\}$.
Let $R^J$ be the free module on $J$.
Let $A$ be the set of all polynomial functions $R^J \to R$.
Then the operations $+$ and $\circ$ on $R$ induce operations on $A$.
We denote these operations by the same symbols:
:$\forall x \in R^J: \left({f + g}\right) \left({x}\right) = f \left({x}\right) + g \left({x}\right)$
:$\forall x \in R^J: \left({f \circ g}\right) \left({x}\right) = f \left({x}\right)\circ g \left({x}\right)$
Then $\left({A, +, \circ}\right)$ is a commutative ring with unity.
\end{theorem}
\begin{proof}
First we check that $A$ is closed under multiplication and addition.
Let $Z$ be the set of all multiindices indexed by $J$.
Let $\displaystyle f = \sum_{k \mathop \in Z} a_k \mathbf X^k,\ \displaystyle g = \sum_{k \mathop \in Z} b_k \mathbf X^k \in R \left[{\left\{{X_j: j \in J}\right\}}\right]$.
Under the evaluation homomorphism, $f$ and $g$ map to
:$\displaystyle A \owns \hat f: \forall x \in R^J: \hat f \left({x}\right) = \sum_{k \mathop \in Z} a_k x^k$
:$\displaystyle A \owns \hat g: \forall x \in R^J: \hat g \left({x}\right) = \sum_{k \mathop \in Z} b_k x^k$
Then the induced sum of $\hat f$ and $\hat g$ is
{{begin-eqn}}
{{eqn|l= \hat f \left({x}\right) + \hat g \left({x}\right)
|r= \sum_{k \mathop \in Z} a_k x^k + \sum_{k \mathop \in Z} b_k x^k
}}
{{eqn|l=
|r= \sum_{k \mathop \in Z} \left({a_k + b_k}\right) x^k
}}
{{eqn|l=
|r= \widehat{f + g} \left({x}\right)
|c= by the definition of addition of polynomial forms
}}
{{end-eqn}}
Thus polynomial functions are closed under addition.
The induced product of $\hat f$ and $\hat g$ is
{{begin-eqn}}
{{eqn|l= \hat f \left({x}\right) \circ \hat g \left({x}\right)
|r= \left({\sum_{k \mathop \in Z} a_k x^k}\right) \circ \left({\sum_{k \mathop \in Z} b_k x^k}\right)
}}
{{eqn|l=
|r= \sum_{k \mathop \in Z} \left({\sum_{p + q \mathop = k} a_p b_q}\right) x^k
}}
{{eqn|l=
|r= \widehat {f \circ g} \left({x}\right)
|c= by the definition of multiplication of polynomial forms.
}}
{{end-eqn}}
Thus polynomial functions are closed under multiplication.
Finally, we invoke Induced Ring, which shows that $\left({A, +, \circ}\right)$ is a commutative ring with unity.
{{qed}}
\end{proof}
|
23588
|
\section{Definition:Smith-Volterra-Cantor Set}
Tags: Definitions: Cantor Set, Cantor Set, Definitions: Examples of Topologies
\begin{theorem}
Let $G$ be a Cantor collection.
Let $g_0 \in G$ such that $\map \mu {g_0} = b$.
Let $p$ be a natural number.
Then there are two nonempty disjoint sets $N_p$ and $P$ such that:
:$g_0 = N_p \cup P$
where $N_p$ is nowhere dense in the relative topology on $g_0$ and $\map \mu P \le b / p$.
\end{theorem}
\begin{proof}
Suppose that $g \in G$ and for some $x \in X$, we have $x \in g \cap N_p$.
From the construction of $N_p$, $\set x \cap g$ intersects a set $g_k$ of type $k$ in $N_p$ for every natural number $k$.
Since $G$ is a Cantor collection, by property $(4)$ there is a natural number $n^*$ such that:
:if $x \in g^* \in G$ and $\map \mu {g^*} \le \dfrac 1 {n^*}$ then $g^* \subseteq g$.
The sets of type $k$ have measure $\le \dfrac b {n^k}$.
Let $k$ be chosen so that $\dfrac b {n^k} \le \dfrac 1 {n^*}$.
Then it follows that there is a set of type $k$ for which $g_k \subseteq g^* \subseteq g$.
Moreover, since $g_k$ contains a subset with measure $\le \dfrac b {n^{k+1} }$ that will be removed to form the set $P$, it is clear that $N_p$ has empty interior.
Thus $N_p$ is nowhere dense in the relative topology on $g_0$.
$N_p = g_0 - P$ and $P$ open in the relative topology on $g_0$ imply that $N_p$ is closed in the relative topology on $g_0$.
$N_p$ closed with no isolated points means that it is also a perfect set.
In conclusion we see that by removing diminishing proportions of the remaining sets we have constructed a nowhere dense set of positive measure, a "Fat" Cantor set.
Therefore our objective has been achieved.
{{qed}}
{{NamedforDef|Henry John Stephen Smith|name2 = Samuel Giuseppe Vito Volterra|name3 = Georg Cantor|cat = Smith, Henry|cat2 = Volterra|cat3 = Cantor}}
\end{proof}
|
23589
|
\section{Definition:Symmetric Group}
Tags: Definitions: Group Examples, Definitions: Permutation Theory, Definitions: Examples of Groups, Definitions: Groups: Examples, Symmetric Group, Group Examples, Definitions, Definitions: Symmetric Group, Definitions: Symmetric Groups
\begin{theorem}
Let <math>S_n</math> denote the set of permutations on <math>n</math> letters.
The structure <math>\left({S_n, \circ}\right)</math>, where <math>\circ</math> denotes composition of mappings, forms a group.
This is called the '''symmetric group on <math>n</math> letters''', and is usually denoted, when the context is clear, without the operator: <math>S_n</math>.
Some sources refer to this as the '''full symmetric group (on <math>n</math> letters)'''.
<math>\left({S_n, \circ}\right)</math> is isomorphic to the Group of Permutations of the <math>n\,</math> elements of any set <math>T</math> whose cardinality is <math>n</math>.
That is:
:<math>\forall T \subseteq \mathbb U, \left|{T}\right| = n: \left({S_n, \circ}\right) \cong \left({\Gamma \left({T}\right), \circ}\right)</math>
In order not to make notation overly cumbersome, the product notation is usually used for composition, thus <math>\pi \circ \rho</math> is written <math>\pi \rho</math>.
Also, for the same reason, rather than using <math>I_{S_n}</math> for the identity mapping, the symbol <math>e</math> is usually used.
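For example, <math>\left({S_3, \circ}\right)</math>, the symmetric group on <math>3</math> letters, has <math>3! = 6</math> elements.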
\end{theorem}
\begin{proof}
The fact that <math>\left({S_n, \circ}\right)</math> is a group follows directly from Group of Permutations.
By definition of cardinality, as <math>\left|{T}\right| = n</math> we can find a bijection between <math>T</math> and <math>\N_n</math>.
From Number of Permutations, it is immediate that <math>\left|{\left({\Gamma \left({T}\right), \circ}\right)}\right| = n! = \left|{\left({S_n, \circ}\right)}\right|</math>.
Again, we can find a bijection <math>\phi</math> between <math>\left({\Gamma \left({T}\right), \circ}\right)</math> and <math>\left({S_n, \circ}\right)</math>.
The result follows directly from the Transplanting Theorem.
{{qed}}
\end{proof}
|
23590
|
\section{Definition:Topological Ring}
Tags: Definitions: Topology, Definitions: Topological Rings, Definitions: Ring Theory
\begin{theorem}
Let $\left({R, +, \circ}\right)$ be a ring with unity.
Let $\tau$ be a topology over $R$.
Suppose that $+$ and $\circ$ are $\tau$-continuous mappings.
Then $\left({R, +, \circ, \tau}\right)$ is a topological ring.
\end{theorem}
\begin{proof}
As we presume $\circ$ to be continuous, we need only prove that $\left({R, +, \tau}\right)$ is a topological group.
As we presume $+$ to be continuous, we need only show that negation is continuous.
For each $b \in R$, $- b = (- 1_R) \circ b$.
Since $\circ$ is continuous, it is continuous in each argument, so negation is continuous.
{{MissingLinks|This last line needs some reference to a result on continuity in one argument}}
{{qed}}
Category:Definitions/Topological Rings
\end{proof}
|
23591
|
\section{Definition:Trivial Group}
Tags: Group Theory, Definitions: Examples of Groups, Definitions: Groups: Examples, Group Examples, Definitions: Group Examples
\begin{theorem}
The '''trivial group''' is a group with only one element $e$.
\end{theorem}
\begin{proof}
For $G = \left\{{e}\right\}$ to be a group, it is necessary that $e \circ e = e$.
Showing that $\left({G, \circ}\right)$ is in fact a group is straightforward:
* $G$ is closed:
:$\forall e \in G: e \circ e = e$
* $e$ is the identity:
:$\forall e \in G: e \circ e = e$
* $\circ $ is associative:
:$e \circ \left({e \circ e}\right) = e = \left({e \circ e}\right) \circ e$
* Every element of $G$ (all one of them) has an inverse:
This follows from the fact that the identity is self-inverse, and the only element in $G$ is indeed the identity:
:$e \circ e = e \implies e^{-1} = e$
{{qed}}
\end{proof}
|
23592
|
\section{Definition:Trivial Module}
Tags: Module Theory, Definitions: Module Theory, Modules
\begin{theorem}
Let $\left({G, +_G}\right)$ be an abelian group whose identity is $e_G$.
Let $\left({R, +_R, \circ_R}\right)$ be a ring.
Let $\circ$ be defined as:
: $\forall \lambda \in R: \forall x \in G: \lambda \circ x = e_G$
Then $\left({G, +_G, \circ}\right)_R$ is an $R$-module.
Such a module is called a '''trivial module'''.
Unless $R$ is a ring with unity and $G$ contains only one element, this is ''not'' a unitary module.
\end{theorem}
\begin{proof}
Checking the criteria for module in turn:
: $(1): \quad \lambda \circ \left({x +_G y}\right) = e_G = e_G +_G e_G = \left({\lambda \circ x}\right) +_G \left({\lambda \circ y}\right)$
: $(2): \quad \left({\lambda +_R \mu}\right) \circ x = e_G = e_G +_G e_G = \left({\lambda \circ x}\right) +_G \left({\mu \circ x}\right)$
: $(3): \quad \left({\lambda \circ_R \mu}\right) \circ x = e_G = \lambda \circ e_G = \lambda \circ \left({\mu \circ x}\right)$
Thus the trivial module is indeed a module.
{{qed|lemma}}
By definition, for the trivial module to be unitary, then $R$ needs to be a ring with unity.
For Module: $(4)$ to apply, we require that:
:$\forall x \in G: 1_R \circ x = x$
But for the trivial module:
:$\forall x \in G: 1_R \circ x = e_G$.
So Module: $(4)$ can apply only when:
:$\forall x \in G: x = e_G$.
Thus for the trivial module to be unitary, it is necessary that $G$ is the trivial group, and thus contains one element.
{{qed}}
\end{proof}
|
23593
|
\section{Definition:Trivial Ring}
Tags: Rings, Ring Theory, Definitions: Ring Theory, Commutative Rings
\begin{theorem}
A ring $\left({R, +, \circ}\right)$ is a '''trivial ring''' iff:
:$\forall x, y \in R: x \circ y = 0_R$
A trivial ring is a commutative ring.
\end{theorem}
\begin{proof}
To prove that a trivial ring is actually a ring in the first place, we need to check the ring axioms for a trivial ring $\left({R, +, \circ}\right)$:
Taking the ring axioms in turn:
\end{proof}
|
23594
|
\section{Definition:Trivial Subgroup}
Tags: Definitions: Group Theory, Subgroups, Definition: Group Theory, Definitions: Subgroups, Group Theory, Group Examples
\begin{theorem}
For any group $\left({G, \circ}\right)$, the group whose underlying set is $\left\{{e}\right\}$, where $e$ is the identity of $\left({G, \circ}\right)$, is a subgroup of $\left({G, \circ}\right)$.
The group $\left({\left\{{e}\right\}, \circ}\right)$ is called '''the trivial subgroup''' of $\left({G, \circ}\right)$.
\end{theorem}
\begin{proof}
Using the One-step Subgroup Test:
: $(1): \quad e \in \left\{{e}\right\} \implies \left\{{e}\right\} \ne \varnothing$
: $(2): \quad e \in \left\{{e}\right\} \implies e \circ e^{-1} = e \in \left\{{e}\right\}$
{{qed}}
\end{proof}
|
23595
|
\section{Definition:Vector Space of All Mappings}
Tags: Definitions: Examples of Vector Spaces, Linear Algebra
\begin{theorem}
Let $\struct {K, +, \circ}$ be a division ring.
Let $\struct {G, +_G, \circ}_K$ be a $K$-vector space.
Let $S$ be a set.
Let $G^S$ be the set of all mappings from $S$ to $G$.
Then $\struct {G^S, +_G', \circ}_K$ is a $K$-vector space, where:
:$+_G'$ is the operation induced on $G^S$ by $+_G$
:$\forall \lambda \in K: \forall f \in G^S: \forall x \in S: \map {\paren {\lambda \circ f} } x = \lambda \circ \paren {\map f x}$
This is the $K$-vector space $G^S$ of all mappings from $S$ to $G$.
The most important case of this example is when $\struct {G^S, +_G', \circ}_K$ is the $K$-vector space $\struct {K^S, +_K', \circ}_K$.
\end{theorem}
\begin{proof}
Follows directly from Module of All Mappings is Module and the definition of vector space.
\end{proof}
|
23596
|
\section{Definition:Vector Space on Cartesian Product}
Tags: Definitions: Examples of Vector Spaces, Direct Products, Linear Algebra
\begin{theorem}
Let $\struct {K, +_K, \times_K}$ be a division ring.
Let $n \in \N_{>0}$.
Let $+: K^n \times K^n \to K^n$ be defined as:
:$\tuple {\alpha_1, \ldots, \alpha_n} + \tuple {\beta_1, \ldots, \beta_n} = \tuple {\alpha_1 +_K \beta_1, \ldots, \alpha_n +_K \beta_n}$
Let $\times: K \times K^n \to K^n$ be defined as:
:$\lambda \times \tuple {\alpha_1, \ldots, \alpha_n} = \tuple {\lambda \times_K \alpha_1, \ldots, \lambda \times_K \alpha_n}$
Then $\struct {K^n, +, \times}_K$ is '''the $K$-vector space $K^n$'''.
\end{theorem}
\begin{proof}
{{refactor|Two separate proofs}}
This is a special case of the Vector Space of All Mappings, where $S$ is the set $\closedint 1 n \subset \N^*$.
It is also a special case of a direct product of vector spaces where each of the $G_k$ is the $K$-vector space $K$.
{{Qed}}
\end{proof}
|
23597
|
\section{Definition:Vector Space over Division Subring/Special Case}
Tags: Definitions: Examples of Vector Spaces, Vector Spaces, Linear Algebra, Examples of Vector Spaces
\begin{theorem}
Let $\struct {R, +, \circ}$ be a ring with unity whose unity is $1_R$.
Let $\struct {R, +, \circ}_R$ be the $R$-vector space.
Let $S$ be a division subring of $R$, such that $1_R \in S$.
Let $\circ_S$ denote the restriction of $\circ$ to $S \times R$.
Then $\struct {R, +, \circ_S}_S$ is the '''vector space on $R$ over the division subring $S$'''.
\end{theorem}
\begin{proof}
A vector space is by definition a unitary module over a division ring.
$S$ is a division ring by assumption.
$\struct {R, +, \circ_S}_S$ is a unitary module by Subring Module/Special Case.
{{qed}}
\end{proof}
|
23598
|
\section{Definition:Zero (Number)}
Tags: Definitions: Abstract Algebra, Definitions: Zero, Naturally Ordered Semigroup, Definitions: Numbers
\begin{theorem}
Let <math>\left({S, \circ, \preceq}\right)</math> be a naturally ordered semigroup.
Then <math>\left({S, \circ, \preceq}\right)</math> has a minimal element.
This minimal element of <math>\left({S, \circ, \preceq}\right)</math> is called '''zero''' and has the symbol <math>0</math>.
That is: <math>\forall n \in S: 0 \preceq n</math>.
This element <math>0</math> is the identity for <math>\circ</math>. That is:
:<math>\forall n \in S: n \circ 0 = n = 0 \circ n</math>
\end{theorem}
\begin{proof}
This follows immediately from naturally ordered semigroup: NO 1.
:<math>\left({S, \circ, \preceq}\right)</math> is well-ordered, so has a minimal element.
Now by the definition of the zero:
:<math>0 \preceq 0</math>
as zero precedes everything.
Thus from Naturally Ordered Semigroup: NO 3:
:<math>0 \preceq 0 \implies \exists p \in S: 0 \circ p = 0</math>
By the definition of the zero:
:<math>0 \preceq 0 \circ 0</math> and <math>0 \preceq p</math>
as zero precedes everything.
Thus from naturally ordered semigroup: NO 2:
:<math>0 \circ 0 \preceq 0 \circ p = 0</math>
Thus:
:<math>0 \circ 0 \preceq 0 \land 0 \preceq 0 \circ 0</math>
and by the antisymmetry of ordering, it follows that <math>0 \circ 0 = 0</math>.
Because <math>\left({S, \circ, \preceq}\right)</math> is a semigroup, <math>\circ</math> is associative.
So:
:<math>\forall n \in S: \left({n \circ 0}\right) \circ 0 = n \circ \left({0 \circ 0}\right) = n \circ 0</math>
Thus from naturally ordered semigroup: NO 2:
:<math>\forall n \in S: n \circ 0 = n</math>
Finally:
:<math>\forall n \in S: 0 \circ n = n</math>
because a naturally ordered semigroup is commutative.
Thus:
:<math>\forall n \in S: n \circ 0 = n = 0 \circ n</math>
and so <math>0</math> is the identity for <math>\circ</math>.
{{Qed}}
Category:Naturally Ordered Semigroup
\end{proof}
|
23599
|
\section{Definition:Zero Matrix/General Monoid}
Tags: Definitions: Zero Matrix, Matrix Algebra, Definitions: Matrix Algebra
\begin{theorem}
Let $\left({S, \circ}\right)$ be a monoid whose identity is $e$.
Let $\mathcal M_S \left({m, n}\right)$ be an $m \times n$ matrix space over $S$.
Then $\left({\mathcal M_S \left({m, n}\right), +}\right)$ has an identity.
This identity element, called the '''zero matrix''', has all elements equal to $e$, and can be written $\left[{e}\right]_{m n}$.
If the monoid $S$ is a number field in which the additive identity is represented as $0$, the zero matrix is usually written $\mathbf 0 = \left[{0}\right]_{m n}$.
\end{theorem}
\begin{proof}
Let $\left[{a}\right]_{m n} \in \mathcal M_S \left({m, n}\right)$, where $\left({S, \circ}\right)$ is a monoid.
Let $a_{i j}$ be an element of $\left[{a}\right]_{m n}$.
Then $\forall \left({i, j}\right) \in \left[{1 \,.\,.\, m}\right] \times \left[{1 \,.\,.\, n}\right]: a_{i j} \circ e = a_{i j} = e \circ a_{i j}$.
Hence, by definition of the operation induced entrywise on $\mathcal M_S \left({m, n}\right)$, $\left[{e}\right]_{m n}$ is the identity of $\left({\mathcal M_S \left({m, n}\right), +}\right)$.
{{qed}}
\end{proof}
|
23600
|
\section{Nth Derivative of Natural Logarithm by Reciprocal}
Tags: Derivatives, Logarithms, Natural Logarithms
\begin{theorem}
:$\dfrac {\d^n} {\d x^n} \dfrac {\ln x} x = \paren {-1}^{n + 1} n! \dfrac {H_n - \ln x} {x^{n + 1} }$
where $H_n$ denotes the $n$th harmonic number:
:$H_n = \ds \sum_{r \mathop = 1}^n \dfrac 1 r = 1 + \dfrac 1 2 + \dfrac 1 3 + \cdots + \dfrac 1 n$
\end{theorem}
\begin{proof}
The proof proceeds by induction.
For all $n \in \Z_{> 0}$, let $\map P n$ be the proposition:
:$\dfrac {\d^n} {\d x^n} \dfrac {\ln x} x = \paren {-1}^{n + 1} n! \dfrac {H_n - \ln x} {x^{n + 1} }$
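As an illustration, the basis case $\map P 1$ can be verified directly from the Quotient Rule, using $H_1 = 1$:
:$\dfrac \d {\d x} \dfrac {\ln x} x = \dfrac {\frac 1 x \cdot x - \ln x} {x^2} = \dfrac {1 - \ln x} {x^2} = \paren {-1}^2 \, 1! \, \dfrac {H_1 - \ln x} {x^2}$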
\end{proof}
|
23601
|
\begin{definition}[Keith.U/Whatever/Definition:Exponential/Real]
For all definitions of the '''real exponential function''':
:The domain of $\exp$ is $\R$
:The codomain of $\exp$ is $\R_{>0}$
For $x \in \R$, the real number $\exp x$ is called the '''exponential of $x$'''.
\end{definition}
|
23602
|
\begin{definition}[ProofWiki:Sandbox/Definition:Hilbert Space]
Let $V$ be an inner product space over $\Bbb F \in \set {\R, \C}$.
Let $d: V \times V \to \R_{\ge 0}$ be the inner product metric.
If $\struct {V, d}$ is a complete metric space, $V$ is said to be a '''Hilbert space'''.
\end{definition}
|
23603
|
\begin{definition}[Definition:(0, 1)-Matrix]
A '''$\tuple {0, 1}$-matrix''' is a matrix whose elements consist only of instances of $0$ and $1$.
\end{definition}
|
23604
|
\begin{definition}[Definition:(1-3) Parastrophe]
Let $S$ be a set.
Let $\struct {S, \circ}$ and $\struct {S, *}$ be magmas on $S$.
$\struct {S, *}$ is a '''$(1-3)$ parastrophe of $\struct {S, \circ}$''' {{iff}}:
:$\forall x_1, x_2, x_3 \in S: x_1 \circ x_2 = x_3 \iff x_3 * x_2 = x_1$
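For example, let $\struct {S, \circ}$ be the additive group $\struct {\Z, +}$.
Then for all $x_1, x_2, x_3 \in \Z$:
:$x_1 + x_2 = x_3 \iff x_3 - x_2 = x_1$
so the $(1-3)$ parastrophe of addition on $\Z$ is subtraction.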
\end{definition}
|
23605
|
\begin{definition}[Definition:(2-3) Parastrophe]
Let $S$ be a set.
Let $\struct {S, \circ}$ and $\struct {S, *}$ be magmas on $S$.
$\struct {S, *}$ is a '''$(2-3)$ parastrophe of $\struct {S, \circ}$''' {{iff}}:
:$\forall x_1, x_2, x_3 \in S: x_1 \circ x_2 = x_3 \iff x_1 * x_3 = x_2$
\end{definition}
|
23606
|
\begin{definition}[Definition:A-/Not]
:'''a-'''
Prefix indicating '''not'''.
\end{definition}
|
23607
|
\begin{definition}[Definition:ARIMA Model/ARIMA Operator]
Let $S$ be a stochastic process based on an equispaced time series.
Let the values of $S$ at timestamps $t, t - 1, t - 2, \dotsc$ be $z_t, z_{t - 1}, z_{t - 2}, \dotsc$
Let $a_t, a_{t - 1}, a_{t - 2}, \dotsc$ be a sequence of independent shocks at timestamps $t, t - 1, t - 2, \dotsc$
Let:
:$w_t = \nabla^d z_t$
where $\nabla^d$ denotes the $d$th iteration of the backward difference operator.
Let $M$ be an '''ARIMA process''' on $S$:
:$w_t = \phi_1 w_{t - 1} + \phi_2 w_{t - 2} + \dotsb + \phi_p w_{t - p} + a_t - \theta_1 a_{t - 1} - \theta_2 a_{t - 2} - \dotsb - \theta_q a_{t - q}$
Using the autoregressive operator:
:$\map \phi B = 1 - \phi_1 B - \phi_2 B^2 - \dotsb - \phi_p B^p$
and the moving average operator:
:$\map \theta B = 1 - \theta_1 B - \theta_2 B^2 - \dotsb - \theta_q B^q$
the '''ARIMA model''' can be written in the following compact manner:
:$\map \phi B w_t = \map \theta B a_t$
where $B$ denotes the backward shift operator.
Hence:
:$\map \varphi B z_t = \map \phi B \paren {1 - B}^d z_t = \map \theta B a_t$
where:
:$\map \varphi B = \map \phi B \paren {1 - B}^d$
In practice, $d$ is usually $0$ or $1$, or at most $2$.
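For example, taking $p = 0$, $d = 1$ and $q = 1$ gives $\map \varphi B = 1 - B$, so the model reads:
:$z_t - z_{t - 1} = a_t - \theta_1 a_{t - 1}$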
\end{definition}
|
23608
|
\begin{definition}[Definition:ARMA Model/ARMA Operator]
Let $S$ be a stochastic process based on an equispaced time series.
Let the values of $S$ at timestamps $t, t - 1, t - 2, \dotsc$ be $z_t, z_{t - 1}, z_{t - 2}, \dotsc$
Let $\tilde z_t, \tilde z_{t - 1}, \tilde z_{t - 2}, \dotsc$ be deviations from a constant mean level $\mu$:
:$\tilde z_t = z_t - \mu$
Let $a_t, a_{t - 1}, a_{t - 2}, \dotsc$ be a sequence of independent shocks at timestamps $t, t - 1, t - 2, \dotsc$
Let $M$ be an '''ARMA model''' on $S$ of order $p$:
:$\tilde z_t = \phi_1 \tilde z_{t - 1} + \phi_2 \tilde z_{t - 2} + \dotsb + \phi_p \tilde z_{t - p} + a_t - \theta_1 a_{t - 1} - \theta_2 a_{t - 2} - \dotsb - \theta_q a_{t - q}$
Using the autoregressive operator:
:$\map \phi B = 1 - \phi_1 B - \phi_2 B^2 - \dotsb - \phi_p B^p$
and the moving average operator:
:$\map \theta B = 1 - \theta_1 B - \theta_2 B^2 - \dotsb - \theta_q B^q$
the '''ARMA model''' can be written in the following compact manner:
:$\map \phi B \tilde z_t = \map \theta B a_t$
where $B$ denotes the backward shift operator.
Hence:
:$\tilde z_t = \map {\phi^{-1} } B \map \theta B a_t$
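For example, taking $p = q = 1$, the compact form reads:
:$\paren {1 - \phi_1 B} \tilde z_t = \paren {1 - \theta_1 B} a_t$
that is:
:$\tilde z_t = \phi_1 \tilde z_{t - 1} + a_t - \theta_1 a_{t - 1}$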
\end{definition}
|
23609
|
\begin{definition}[Definition:ARMA Model/Parameter]
Let $S$ be a stochastic process based on an equispaced time series.
Let the values of $S$ at timestamps $t, t - 1, t - 2, \dotsc$ be $z_t, z_{t - 1}, z_{t - 2}, \dotsc$
Let $\tilde z_t, \tilde z_{t - 1}, \tilde z_{t - 2}, \dotsc$ be deviations from a constant mean level $\mu$:
:$\tilde z_t = z_t - \mu$
Let $a_t, a_{t - 1}, a_{t - 2}, \dotsc$ be a sequence of independent shocks at timestamps $t, t - 1, t - 2, \dotsc$
Let $M$ be an '''ARMA model''' on $S$ of order $p$:
:$\tilde z_t = \phi_1 \tilde z_{t - 1} + \phi_2 \tilde z_{t - 2} + \dotsb + \phi_p \tilde z_{t - p} + a_t - \theta_1 a_{t - 1} - \theta_2 a_{t - 2} - \dotsb - \theta_q a_{t - q}$
The '''parameters''' of $M$ consist of:
:the constant mean level $\mu$
:the variance $\sigma_a^2$ of the underlying (usually white noise) process of the independent shocks $a_t$
:the coefficients $\phi_1$ to $\phi_p$
:the coefficients $\theta_1$ to $\theta_q$.
In practice, each of these '''parameters''' needs to be estimated from the data.
It is often the case that an ARMA model can be effectively used in real-world applications where $p$ and $q$ are no greater than $2$, and often less.
\end{definition}
|
23610
|
\begin{definition}[Definition:A Fortiori]
'''A fortiori''' knowledge arises from stronger facts already established.
An '''a fortiori''' argument is most commonly used by applying a general fact to a particular case.
\end{definition}
|
23611
|
\begin{definition}[Definition:A Posteriori]
'''A posteriori''' knowledge is the sort of knowledge which cannot be known without experience.
That is, it cannot be determined by using logic and reasoning from definitions alone.
\end{definition}
|
23612
|
\begin{definition}[Definition:A Priori]
'''A priori''' knowledge is the sort of knowledge which comes from reason alone.
That is, it does not require the exercise of experience to know it.
Thus the truth value of a statement can be decided by a logical argument whose premises are definitions.
\end{definition}
|
23613
|
\begin{definition}[Definition:Abacism]
'''Abacism''' means '''the process of doing arithmetic using an abacus'''.
\end{definition}
|
23614
|
\begin{definition}[Definition:Abacism/Abacist]
An '''abacist''' is an arithmetician who uses an abacus to do arithmetic, as opposed to an algorist who calculates using algorism.
\end{definition}
|
23615
|
\begin{definition}[Definition:Abacus]
An '''abacus''' (plural: '''abacuses''' or '''abaci''') is a tool for performing arithmetical calculations.
It consists of:
: a series of lines (for example: grooves in sand, or wires on a frame), upon which are:
: a number of items (for example: pebbles in the grooves, or beads on the wires),
which are manipulated by hand so as to represent numbers.
As such, it is the earliest known machine for mathematics, and can be regarded as the earliest ancestor of the electronic computer.
\end{definition}
|
23616
|
\begin{definition}[Definition:Abbreviation of WFFs of Propositional Logic]
The WFFs of propositional logic can be made more readable by allowing them to be abbreviated. The resulting strings are not actually WFFs as such, but can be translated back uniquely into full WFFs.
\end{definition}
|
23617
|
\begin{definition}[Definition:Abbreviation of WFFs of Propositional Logic/Standard Abbreviation]
The string obtained by applying as many of the rules for abbreviation of WFFs to a WFF $\mathbf A$ as possible is known as the '''standard abbreviation of $\mathbf A$'''.
Thus it can be seen that there may be several abbreviations of a WFF, but only one '''standard abbreviation'''.
\end{definition}
|
23618
|
\begin{definition}[Definition:Abel's Integral Equation]
'''Abel's integral equation''' is an integral equation whose purpose is to solve Abel's mechanical problem, which finds how long it will take a bead to slide down a wire.
The purpose of '''Abel's integral equation''' is to find the shape of the curve into which the wire is bent in order to yield that result:
Let $\map T y$ be a function which specifies the total time of descent for a given starting height.
:$\ds \map T {y_0} = \int_{y \mathop = y_0}^{y \mathop = 0} \rd t = \frac 1 {\sqrt {2 g} } \int_0^{y_0} \frac 1 {\sqrt {y_0 - y} } \frac {\d s} {\d y} \rd y$
where:
:$y$ is the height of the bead at time $t$
:$y_0$ is the height from which the bead is released
:$g$ is Acceleration Due to Gravity
:$\map s y$ is the distance along the curve as a function of height.
\end{definition}
|
23619
|
\begin{definition}[Definition:Abel Summation Method]
{{Help|It is difficult finding a concise and complete definition of exactly what the Abel Summation Method actually is. Any advice as to how to implement this adequately is requested. This is what is said in the Springer encyclopedia on the page "Abel summation method":}}
The series:
:$\ds \sum a_n$
can be summed by the Abel method ($A$-method) to the number $S$ if, for any real $x$ such that $0 < x < 1$, the series:
:$\ds \sum_{k \mathop = 0}^\infty a_k x^k$
is convergent and:
:$\ds \lim_{x \mathop \to 1^-} \sum_{k \mathop = 0}^\infty a_k x^k = S$
{{help|This is what we have on Wikipedia page {{WP|Divergent_series|Divergent series}}: }}
:$\ds \map f x = \sum_{n \mathop = 0}^\infty a_n e^{-n x} = \sum_{n \mathop = 0}^\infty a_n z^n$
where $z = \map \exp {-x}$.
Then the limit of $\map f x$ as $x$ approaches $0$ through positive reals is the limit of the power series for $\map f z$ as $z$ approaches $1$ from below through positive reals.
The '''Abel sum''' $\map A s$ is defined as:
:$\ds \map A s = \lim_{z \mathop \to 1^-} \sum_{n \mathop = 0}^\infty a_n z^n$
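For example, let $a_n = \paren {-1}^n$.
Then for $0 < z < 1$:
:$\ds \sum_{n \mathop = 0}^\infty \paren {-1}^n z^n = \frac 1 {1 + z}$
and:
:$\ds \lim_{z \mathop \to 1^-} \frac 1 {1 + z} = \frac 1 2$
Thus the divergent series $1 - 1 + 1 - \cdots$ has Abel sum $\dfrac 1 2$.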
{{NamedforDef|Niels Henrik Abel|cat = Abel}}
\end{definition}
|
23620
|
\begin{definition}[Definition:Abelian Category/Definition 1]
An '''abelian category''' is a pre-abelian category in which:
:every monomorphism is a kernel
:every epimorphism is a cokernel
\end{definition}
|
23621
|
\begin{definition}[Definition:Abelian Category/Definition 3]
An '''abelian category''' is a pre-abelian category in which
:for every morphism $f$, the canonical morphism from its coimage to its image $\map {\operatorname {coim} } f \to \Img f$ is an isomorphism.
\end{definition}
|
23622
|
\begin{definition}[Definition:Abelian Function]
An '''Abelian function''' is an inverse function of an Abelian integral.
{{NamedforDef|Niels Henrik Abel|cat = Abel}}
\end{definition}
|
23623
|
\begin{definition}[Definition:Abelian Group/Definition 1]
An '''abelian group''' is a group $G$ where:
: $\forall a, b \in G: a b = b a$
That is, every element in $G$ commutes with every other element in $G$.
\end{definition}
|
23624
|
\begin{definition}[Definition:Abelian Group/Definition 2]
An '''abelian group''' is a group $G$ {{iff}}:
: $G = \map Z G$
where $\map Z G$ is the center of $G$.
\end{definition}
|
23625
|
\begin{definition}[Definition:Abelian Integral]
An '''Abelian integral''' is a complex Riemann integral of the form
:$\ds \int_{z_0}^z \map R {x, w} \rd x$
where $\map R {x, w}$ is an arbitrary rational function of the two variables $x$ and $w$.
These variables are related by the equation:
:$\map F {x, w} = 0$
where $\map F {x, w}$ is an irreducible polynomial in $w$:
:$\map F {x, w} \equiv \map {\phi_n} x w^n + \cdots + \map {\phi_1} x w + \map {\phi_0} x$
whose coefficients $\map {\phi_j} x, j = 0, 1, \ldots, n$ are rational functions of $x$.
{{NamedforDef|Niels Henrik Abel|cat = Abel}}
\end{definition}
|
23626
|
\begin{definition}[Definition:Abelian Sheaf]
Let $X$ be a topological space.
\end{definition}
|
23627
|
\begin{definition}[Definition:Abelianization of Group]
Let $G$ be a group.
Its '''abelianization''' is the quotient by its commutator subgroup:
:$G^{\mathrm {ab} } = G / \sqbrk {G, G}$
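For example, the commutator subgroup of the symmetric group $S_3$ is the alternating group $A_3$, so the abelianization of $S_3$ is:
:$S_3^{\mathrm {ab} } = S_3 / A_3$
which is the cyclic group of order $2$.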
\end{definition}
|
23628
|
\begin{definition}[Definition:Abnormal Set]
Let $S$ be a set.
Then $S$ is an '''abnormal set''' {{iff}}:
:$S \in S$
That is, $S$ is an element of itself.
\end{definition}
|
23629
|
\begin{definition}[Definition:Abnormal Subgroup]
Let $G$ be a group.
Let $H$ be a subgroup of $G$.
Then $H$ is an '''abnormal subgroup''' {{iff}}:
:$\forall g \in G: g \in \gen {H, H^g}$
where $\gen {H, H^g}$ is the subgroup of $G$ generated by $H$ and the conjugate of $H$ by $g$.
\end{definition}
|
23630
|
\begin{definition}[Definition:Above]
In the context of real numbers, '''above''' means '''greater than'''.
\end{definition}
|
23631
|
\begin{definition}[Definition:Abscissa]
Consider the graph $y = \map f x$ of a real function $f$ embedded in a Cartesian plane.
The $x$ coordinate of a point $P = \tuple {x, y}$ on $f$ is known as the '''abscissa''' of $P$.
\end{definition}
|
23632
|
\begin{definition}[Definition:Abscissa of Convergence]
Let $\map f s$ be a Dirichlet series.
The '''abscissa of convergence''' of $f$ is the extended real number $\sigma_0 \in \overline \R$ defined by:
:$\ds \sigma_0 = \inf \set {\map \Re s: s \in \C, \map f s \text{ converges} }$
where $\inf \O = +\infty$.
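For example, the Dirichlet series $\ds \sum_{n \mathop = 1}^\infty \frac 1 {n^s}$ converges whenever $\map \Re s > 1$ but diverges at $s = 1$ (the harmonic series), so its abscissa of convergence is $\sigma_0 = 1$.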
\end{definition}
|
23633
|
\begin{definition}[Definition:Absolute Continuity/Complex Measure]
Let $\struct {X, \Sigma}$ be a measurable space.
Let $\mu$ be a measure on $\struct {X, \Sigma}$.
Let $\nu$ be a complex measure on $\struct {X, \Sigma}$.
Let $\size \nu$ be the variation of $\nu$.
We say that $\nu$ is '''absolutely continuous''' with respect to $\mu$ {{iff}}:
:$\size \nu$ is absolutely continuous with respect to $\mu$.
We write:
:$\nu \ll \mu$
\end{definition}
|
23634
|
\begin{definition}[Definition:Absolute Continuity/Measure]
Let $\struct {X, \Sigma}$ be a measurable space.
Let $\mu$ and $\nu$ be measures on $\struct {X, \Sigma}$.
We say that $\nu$ is '''absolutely continuous''' with respect to $\mu$ and write:
:$\nu \ll \mu$
{{iff}}:
:for all $A \in \Sigma$ with $\map \mu A = 0$, we have $\map \nu A = 0$.
\end{definition}
|
23635
|
\begin{definition}[Definition:Absolute Continuity/Real Function]
Let $I \subseteq \R$ be a real interval.
A real function $f: I \to \R$ is said to be '''absolutely continuous''' {{iff}} it satisfies the following property:
:For every $\epsilon > 0$ there exists $\delta > 0$ such that the following property holds:
::For every finite set of disjoint closed real intervals $\closedint {a_1} {b_1}, \dotsc, \closedint {a_n} {b_n} \subseteq I$ such that:
:::$\ds \sum_{i \mathop = 1}^n \size {b_i - a_i} < \delta$
::it holds that:
:::$\ds \sum_{i \mathop = 1}^n \size {\map f {b_i} - \map f {a_i} } < \epsilon$
\end{definition}
|
23636
|
\begin{definition}[Definition:Absolute Continuity/Signed Measure]
Let $\struct {X, \Sigma}$ be a measurable space.
Let $\mu$ be a measure on $\struct {X, \Sigma}$.
Let $\nu$ be a signed measure on $\struct {X, \Sigma}$.
Let $\size \nu$ be the variation of $\nu$.
We say that $\nu$ is '''absolutely continuous''' with respect to $\mu$ {{iff}}:
:$\size \nu$ is absolutely continuous with respect to $\mu$.
We write:
:$\nu \ll \mu$
\end{definition}
|
23637
|
\begin{definition}[Definition:Absolute Convergence of Product/Complex Numbers]
Let $\sequence {a_n}$ be a sequence in $\C$.
\end{definition}
|
23638
|
\begin{definition}[Definition:Absolute Convergence of Product/Complex Numbers/Definition 1]
Let $\sequence {a_n}$ be a sequence in $\C$.
The infinite product $\ds \prod_{n \mathop = 1}^\infty \paren {1 + a_n}$ is '''absolutely convergent''' {{iff}} $\ds \prod_{n \mathop = 1}^\infty \paren {1 + \size {a_n} }$ is convergent.
\end{definition}
|
23639
|
\begin{definition}[Definition:Absolute Convergence of Product/Complex Numbers/Definition 2]
Let $\sequence {a_n}$ be a sequence of complex numbers.
The infinite product $\ds \prod_{n \mathop = 1}^\infty \paren {1 + a_n}$ is '''absolutely convergent''' {{iff}} the series $\ds \sum_{n \mathop = 1}^\infty a_n$ is absolutely convergent.
\end{definition}
|
23640
|
\begin{definition}[Definition:Absolute Convergence of Product/Complex Numbers/Definition 3]
Let $\sequence {a_n}$ be a sequence in $\C$.
The infinite product $\ds \prod_{n \mathop = 1}^\infty \paren {1 + a_n}$ is '''absolutely convergent''' {{iff}} there exists $n_0 \in \N$ such that:
:$a_n \ne -1$ for $n > n_0$
:The series $\ds \sum_{n \mathop = n_0 + 1}^\infty \log \paren {1 + a_n}$ is absolutely convergent
where $\log$ denotes the complex logarithm.
\end{definition}
|
23641
|
\begin{definition}[Definition:Absolute Convergence of Product/General Definition]
Let $\struct {\mathbb K, \norm{\,\cdot\,} }$ be a valued field.
Let $\sequence {a_n}$ be a sequence in $\mathbb K$.
\end{definition}
|
23642
|
\begin{definition}[Definition:Absolute Convergence of Product/General Definition/Definition 1]
Let $\struct {\mathbb K, \norm {\,\cdot\,} }$ be a valued field.
Let $\sequence {a_n}$ be a sequence in $\mathbb K$.
The infinite product $\ds \prod_{n \mathop = 1}^\infty \paren {1 + a_n}$ is '''absolutely convergent''' {{iff}} $\ds \prod_{n \mathop = 1}^\infty \paren {1 + \norm {a_n} }$ is convergent.
\end{definition}
|
23643
|
\begin{definition}[Definition:Absolute Convergence of Product/General Definition/Definition 2]
Let $\struct {\mathbb K, \norm {\,\cdot\,} }$ be a valued field.
Let $\sequence {a_n}$ be a sequence in $\mathbb K$.
The infinite product $\ds \prod_{n \mathop = 1}^\infty \paren {1 + a_n}$ is '''absolutely convergent''' {{iff}} the series $\ds \sum_{n \mathop = 1}^\infty a_n$ is absolutely convergent.
\end{definition}
|
23644
|
\begin{definition}[Definition:Absolute Difference]
Let $a$ and $b$ be real numbers.
The '''absolute difference''' between $a$ and $b$ is defined and denoted as:
:$\size {a - b}$
where $\size {\, \cdot \,}$ is the absolute value function.
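For example, the absolute difference between $3$ and $7$ is:
:$\size {3 - 7} = \size {-4} = 4 = \size {7 - 3}$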
\end{definition}
|
23645
|
\begin{definition}[Definition:Absolute Galois Group]
Let $K$ be a field.
\end{definition}
|
23646
|
\begin{definition}[Definition:Absolute Galois Group/Definition 1]
Let $K$ be a field.
The '''absolute Galois group''' of $K$ is the Galois group $\Gal {K^{\operatorname{sep} } \mid K}$ of its separable closure.
Category:Definitions/Galois Theory
\end{definition}
|
23647
|
\begin{definition}[Definition:Absolute Galois Group/Definition 2]
Let $K$ be a field.
The '''absolute Galois group''' of $K$ is the automorphism group $\Aut {\overline K \mid K}$ of its algebraic closure.
Category:Definitions/Galois Theory
\end{definition}
|
23648
|
\begin{definition}[Definition:Absolute Geometry]
'''Absolute geometry''' is the study of '''Euclidean geometry''' without the parallel postulate.
\end{definition}
|
23649
|
\begin{definition}[Definition:Absolute Measure of Dispersion]
An '''absolute measure of dispersion''' is a measure of dispersion that indicates how spread out or scattered a set of observations is with respect to the actual values of those observations.
\end{definition}
|
23650
|
\begin{definition}[Definition:Absolute Number]
An '''absolute number''' is a number in an expression which has a single value.
It is either expressed using actual figures, in an agreed number system, or by a symbol which is understood to represent that specific number.
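For example, in the expression $2 \pi r + 7$, both $7$ and $\pi$ are absolute numbers, whereas $r$ is a variable.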
\end{definition}
|
23651
|
\begin{definition}[Definition:Absolute Real Vector Ordering]
Let $x$ and $y$ be elements of the real vector space $\R^n$.
The '''absolute real vector ordering''' is the partial ordering $\ge$ defined on the real vector space $\R^n$ as:
:$\forall x, y \in \R^n: x \ge y \iff \forall i \in \left\{ {1, 2, \ldots, n}\right\}: x_i \ge y_i$
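For example, in $\R^2$ we have $\tuple {3, 5} \ge \tuple {2, 5}$, since $3 \ge 2$ and $5 \ge 5$, whereas $\tuple {3, 1}$ and $\tuple {2, 5}$ are incomparable under $\ge$.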
\end{definition}
|
23652
|
\begin{definition}[Definition:Absolute Real Vector Strict Ordering]
Let $x$ and $y$ be elements of the real vector space $\R^n$.
The '''absolute real vector strict ordering''' is the strict partial ordering $>$ defined on the real vector space $\R^n$ as:
:$\forall x, y \in \R^n: x > y \iff \forall i \in \left\{ {1, 2, \ldots, n}\right\}: x_i > y_i$
\end{definition}
|
23653
|
\begin{definition}[Definition:Absolute Value/Definition 1]
Let $x \in \R$ be a real number.
The '''absolute value''' of $x$ is denoted $\size x$, and is defined using the usual ordering on the real numbers as follows:
:$\size x = \begin{cases}
x & : x > 0 \\
0 & : x = 0 \\
-x & : x < 0
\end{cases}$
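For example:
:$\size 5 = 5, \quad \size 0 = 0, \quad \size {-5} = 5$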
\end{definition}
|
23654
|
\begin{definition}[Definition:Absolute Value/Definition 2]
Let $x \in \R$ be a real number.
The '''absolute value''' of $x$ is denoted $\size x$, and is defined as:
:$\size x = +\sqrt {x^2}$
where $+\sqrt {x^2}$ is the positive square root of $x^2$.
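For example:
:$\size {-5} = +\sqrt {\paren {-5}^2} = +\sqrt {25} = 5$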
\end{definition}
|
23655
|
\begin{definition}[Definition:Absolute Value/Number Classes]
The absolute value function applies to the various number classes as follows:
: Natural numbers $\N$: All elements of $\N$ are greater than or equal to zero, so the concept is irrelevant.
: Integers $\Z$: As defined here.
: Rational numbers $\Q$: As defined here.
: Real numbers $\R$: As defined here.
: Complex numbers $\C$: As $\C$ is not an ordered set, the definition of the absolute value function based upon whether a complex number is greater than or less than zero cannot be applied.
The notation $\cmod z$, where $z \in \C$, is defined as the modulus of $z$ and has a different meaning.
{{expand|Incorporate Definition:Extended Absolute Value somewhere}}
Category:Definitions/Analysis
\end{definition}
|
23656
|
\begin{definition}[Definition:Absolute Value/Ordered Integral Domain]
Let $\struct {D, +, \times, \le}$ be an ordered integral domain whose zero is $0_D$.
Then for all $a \in D$, the '''absolute value''' of $a$ is defined as:
:$\size a = \begin{cases}
a & : 0_D \le a \\
-a & : a < 0_D
\end{cases}$
where $a < 0_D$ denotes that $\neg \paren {0_D \le a}$.
\end{definition}
|
23657
|
\begin{definition}[Definition:Absolute Value of Cut]
Let $\alpha$ be a cut.
The '''absolute value of $\alpha$''' is denoted and defined as:
:$\size \alpha = \begin {cases} \alpha & : \alpha \ge 0^* \\ -\alpha & : \alpha < 0^* \end {cases}$
where:
:$0^*$ denotes the rational cut associated with the (rational) number $0$
:$\ge$ denotes the ordering on cuts.
\end{definition}
|
23658
|
\begin{definition}[Definition:Absolute Value of Mapping]
Let $D$ be an ordered integral domain.
Let $\size {\, \cdot \,}_D$ denote the absolute value function on $D$.
Let $S$ be a set.
Let $f: S \to D$ be a mapping.
Then the '''absolute value of $f$''', denoted $\size f_D: S \to D$, is defined as:
:$\forall s \in S: \map {\size f_D} s := \size {\map f s}_D$
'''Absolute value''' thence is an instance of a pointwise operation on a mapping.
\end{definition}
|
23659
|
\begin{definition}[Definition:Absolute Value of Mapping/Extended Real-Valued Function]
Let $S$ be a set, and let $f: S \to \overline \R$ be an extended real-valued function.
Then the '''absolute value of $f$''', denoted $\size f: S \to \overline \R$, is defined as:
:$\forall s \in S: \map {\size f} s := \size {\map f s}$
where $\size {\map f s}$ denotes the extended absolute value function on $\overline \R$.
'''Absolute value''' thence is an instance of a pointwise operation on extended real-valued functions.
Since the extended absolute value coincides on $\R$ with the standard absolute value, this definition incorporates the definition for real-valued functions.
\end{definition}
|
23660
|
\begin{definition}[Definition:Absolute Value of Mapping/Real-Valued Function]
Let $S$ be a set.
Let $f: S \to \R$ be a real-valued function.
Then the '''absolute value of $f$''', denoted $\size f: S \to \R$, is defined as:
:$\forall s \in S: \map {\size f} s := \size {\map f s}$
where $\size {\map f s}$ denotes the absolute value function on $\R$.
'''Absolute value''' thence is an instance of a pointwise operation on real-valued functions.
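For example, if $f: \R \to \R$ is defined by $\map f x = x - 2$, then $\map {\size f} x = \size {x - 2}$ for all $x \in \R$.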
\end{definition}
|
23661
|
\begin{definition}[Definition:Absolute Zero]
'''Absolute zero''' is the lowest temperature which can theoretically be achieved.
It is the temperature where all motion due to thermal effects stops.
Before that temperature can be reached, quantum effects come into play.
\end{definition}
|
23662
|
\begin{definition}[Definition:Absolutely Convergent Series/Complex Numbers]
Let $S = \ds \sum_{n \mathop = 1}^\infty a_n$ be a series in the complex number field $\C$.
Then $S$ is '''absolutely convergent''' {{iff}}:
:$\ds \sum_{n \mathop = 1}^\infty \cmod {a_n}$ is convergent
where $\cmod {a_n}$ denotes the complex modulus of $a_n$.
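For example, the series $\ds \sum_{n \mathop = 1}^\infty \dfrac {i^n} {2^n}$ is absolutely convergent, since $\ds \sum_{n \mathop = 1}^\infty \cmod {\dfrac {i^n} {2^n} } = \sum_{n \mathop = 1}^\infty \dfrac 1 {2^n} = 1$.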
\end{definition}
|
23663
|
\begin{definition}[Definition:Absolutely Convergent Series/General]
Let $V$ be a normed vector space with norm $\norm {\, \cdot \,}$.
Let $\ds \sum_{n \mathop = 1}^\infty a_n$ be a series in $V$.
Then the series $\ds \sum_{n \mathop = 1}^\infty a_n$ in $V$ is '''absolutely convergent''' {{iff}} $\ds \sum_{n \mathop = 1}^\infty \norm {a_n}$ is a convergent series in $\R$.
\end{definition}
|
23664
|
\begin{definition}[Definition:Absolutely Convergent Series/Real Numbers]
Let $\ds \sum_{n \mathop = 1}^\infty a_n$ be a series in the real number field $\R$.
Then $\ds \sum_{n \mathop = 1}^\infty a_n$ is '''absolutely convergent''' {{iff}}:
:$\ds \sum_{n \mathop = 1}^\infty \size {a_n}$ is convergent
where $\size {a_n}$ denotes the absolute value of $a_n$.
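For example, $\ds \sum_{n \mathop = 1}^\infty \dfrac {\paren {-1}^{n + 1} } {n^2}$ is absolutely convergent, since $\ds \sum_{n \mathop = 1}^\infty \dfrac 1 {n^2}$ converges, whereas the alternating harmonic series $\ds \sum_{n \mathop = 1}^\infty \dfrac {\paren {-1}^{n + 1} } n$ converges but is not absolutely convergent.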
\end{definition}
|
23665
|
\begin{definition}[Definition:Absolutely Integrable Function]
Let $f$ be a real function.
$\map f x$ is '''absolutely integrable''' on $S \subseteq \R$ {{iff}} the absolute value of $f$ is an integrable function on $S$.
That is, {{iff}} the definite integral of the absolute value of $f$ over any interval $\openint \alpha \beta \subseteq S$ is bounded.
Category:Definitions/Integral Calculus
\end{definition}
|
23666
|
\begin{definition}[Definition:Absolutely Normal Real Number]
A real number $r$ is '''absolutely normal''' {{iff}} it is normal with respect to ''every'' number base $b$.
That is, {{iff}} its basis expansion in every number base $b$ is such that:
:no finite sequence of digits of $r$ of length $n$ occurs more frequently than any other such finite sequence of length $n$.
In particular, for every number base $b$, all digits of $r$ have the same natural density in the basis expansion of $r$.
\end{definition}
|
23667
|
\begin{definition}[Definition:Absorbent Set]
Let $V$ be a vector space over a field $K$.
Let $W \subseteq V$ be a subset of $V$.
Let $a \in K$.
Let the set $a \cdot W$ be defined as:
:$a \cdot W := \left\{{a \cdot y: y \in W} \right\}$
Then $W$ is an '''absorbent set in $V$''' {{iff}}:
:$\ds \bigcup_{a \mathop \in K} a \cdot W = V$
which symbolically can be represented as:
:$K \cdot W = V$
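For example, taking $V = \R$ as a vector space over $K = \R$, the open interval $W = \openint {-1} 1$ is an absorbent set in $\R$: every $x \in \R$ can be written as $x = a \cdot y$ with $a = 2 \size x + 1$ and $y = \dfrac x a \in W$.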
\end{definition}
|
23668
|
\begin{definition}[Definition:Absorbing State]
Let $\sequence {X_n}_{n \mathop \ge 0}$ be a Markov chain on a state space $S$.
Let $i \in S$ be an element of the state space $S$.
Then $i$ is an '''absorbing state''' of $\sequence {X_n}$ {{iff}}:
:$X_k = i \implies X_{k + 1} = i$
That is, it is an element of $S$ such that if $\sequence {X_n}$ reaches $i$, it stays there.
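For example, consider the Markov chain on $S = \set {1, 2}$ with transition probabilities $p_{1 1} = 1$, $p_{1 2} = 0$ and $p_{2 1} = p_{2 2} = \dfrac 1 2$: state $1$ is an absorbing state, while state $2$ is not.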
\end{definition}
|
23669
|
\begin{definition}[Definition:Absorption Law]
Let $\struct {S, \circ, *}$ be an algebraic structure.
Let both $\circ$ and $*$ be commutative.
Then '''$\circ$ absorbs $*$''' {{iff}}:
:$\forall a, b \in S: a \circ \paren {a * b} = a$
This equality is called the '''absorption law of $\circ$ for $*$'''.
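For example, in the power set $\powerset S$ of a set $S$, union $\cup$ absorbs intersection $\cap$:
:$\forall A, B \subseteq S: A \cup \paren {A \cap B} = A$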
Category:Definitions/Abstract Algebra
\end{definition}
|
23670
|
\begin{definition}[Definition:Abstract Algebra]
'''Abstract algebra''' is a branch of mathematics which studies algebraic structures and algebraic systems.
It can be roughly described as the study of sets equipped with operations.
\end{definition}
|
23671
|
\begin{definition}[Definition:Abstract Geometry]
Let $P$ be a set and $L$ be a set of subsets of $P$.
Then $\left({P, L}\right)$ is an '''abstract geometry''' {{iff}}:
{{begin-axiom}}
{{axiom | n = 1
| q = \forall A, B \in P
| m = \exists l \in L: A, B \in l
}}
{{axiom | n = 2
| q = \forall l \in L
| m = \exists A, B \in P: A, B \in l \land A \ne B
}}
{{end-axiom}}
The elements of $P$ are referred to as '''points'''.
The elements of $L$ are referred to as '''lines'''.
The above axioms thus can be phrased in natural language as:
:$(1):\quad$ For every two '''points''' $A, B \in P$ there is a '''line''' $l \in L$ such that $A, B \in l$
:$(2):\quad$ Every '''line''' has at least two '''points'''
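For example, the pair $\left({P, L}\right)$ with $P = \set {A, B}$, where $A \ne B$, and $L = \set {\set {A, B} }$ satisfies both axioms, and is thus a minimal abstract geometry.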
\end{definition}
|
23672
|
\begin{definition}[Definition:Abstract Machine]
An '''abstract machine''' is a hypothetical computing machine defined in terms of the operations it performs rather than its internal physical structure.
\end{definition}
|