passage: The higher energy (shortest wavelength) ranges of UV (called "vacuum UV") are absorbed by nitrogen and, at longer wavelengths, by simple diatomic oxygen in the air. Most of the UV in the mid-range of energy is blocked by the ozone layer, which absorbs strongly in the important 200–315 nm range, the lower energy part of which is too long for ordinary dioxygen in air to absorb. This leaves less than 3% of sunlight at sea level in UV, with all of this remainder at the lower energies. The remainder is UV-A, along with some UV-B. The very lowest energy range of UV between 315 nm and visible light (called UV-A) is not blocked well by the atmosphere, but does not cause sunburn and does less biological damage. However, it is not harmless and does create oxygen radicals, mutations and skin damage.
X-rays
After UV come X-rays, which, like the upper ranges of UV, are also ionizing. However, due to their higher energies, X-rays can also interact with matter by means of the Compton effect. Hard X-rays have shorter wavelengths than soft X-rays, and as they can pass through many substances with little absorption, they can be used to 'see through' objects with 'thicknesses' less than that equivalent to a few meters of water. One notable use is diagnostic X-ray imaging in medicine (a process known as radiography). X-rays are useful as probes in high-energy physics.
|
https://en.wikipedia.org/wiki/Electromagnetic_spectrum
|
passage: This includes those who are young without risk factors. In those at higher risk the evidence for screening with ECGs is inconclusive. Additionally, echocardiography, myocardial perfusion imaging, and cardiac stress testing are not recommended in those at low risk who do not have symptoms. Some biomarkers may add to conventional cardiovascular risk factors in predicting the risk of future cardiovascular disease; however, the value of some biomarkers is questionable. Ankle-brachial index (ABI), high-sensitivity C-reactive protein (hsCRP), and coronary artery calcium are also of unclear benefit in those without symptoms as of 2018.
The NIH recommends lipid testing in children beginning at the age of 2 if there is a family history of heart disease or lipid problems. It is hoped that early testing will improve lifestyle factors in those at risk such as diet and exercise.
Screening and selection for primary prevention interventions has traditionally been done through absolute risk using a variety of scores (e.g., the Framingham or Reynolds risk scores). This stratification has separated people who receive lifestyle interventions (generally lower and intermediate risk) from those who receive medication (higher risk). The number and variety of risk scores available for use has multiplied, but their efficacy according to a 2016 review was unclear due to lack of external validation or impact analysis. Risk stratification models often lack sensitivity for population groups and do not account for the large number of negative events among the intermediate and low risk groups. As a result, future preventative screening appears to shift toward applying prevention according to randomized trial results of each intervention rather than large-scale risk assessment.
|
https://en.wikipedia.org/wiki/Cardiovascular_disease
|
passage: For example, if a single photon strikes the electrons, only a single electron changes states in response to the photon; see Electron properties.
The energy of an emitted photon is proportional to its frequency, so these specific energy levels appear as distinct bands in the electromagnetic spectrum. Each element has a characteristic spectrum that can depend on the nuclear charge, subshells filled by electrons, the electromagnetic interactions between the electrons and other factors.
When a continuous spectrum of energy is passed through a gas or plasma, some of the photons are absorbed by atoms, causing electrons to change their energy level. Those excited electrons that remain bound to their atom spontaneously emit this energy as a photon, traveling in a random direction, and so drop back to lower energy levels. Thus the atoms behave like a filter that forms a series of dark absorption bands in the energy output. (An observer viewing the atoms from a view that does not include the continuous spectrum in the background, instead sees a series of emission lines from the photons emitted by the atoms.) Spectroscopic measurements of the strength and width of atomic spectral lines allow the composition and physical properties of a substance to be determined.
Close examination of the spectral lines reveals that some display a fine structure splitting. This occurs because of spin–orbit coupling, which is an interaction between the spin and motion of the outermost electron. When an atom is in an external magnetic field, spectral lines become split into three or more components, a phenomenon called the Zeeman effect.
|
https://en.wikipedia.org/wiki/Atom
|
passage: In this case, the vector bundle E is sometimes equipped with an additional piece of data besides its connection: a solder form. A solder form is a globally defined vector-valued one-form θ ∈ Ω1(M,E) such that the mapping
$$
\theta_x : T_xM \rightarrow E_x
$$
is a linear isomorphism for all x ∈ M. If a solder form is given, then it is possible to define the torsion of the connection (in terms of the exterior connection) as
$$
\Theta = D\theta.\,
$$
The torsion Θ is an E-valued 2-form on M.
A solder form and the associated torsion may both be described in terms of a local frame e of E. If θ is a solder form, then it decomposes into the frame components
$$
\theta = \sum_i \theta^i(\mathbf e) e_i.
$$
The components of the torsion are then
$$
\Theta^i(\mathbf e) = d\theta^i(\mathbf e) + \sum_j \omega_j^i(\mathbf e)\wedge \theta^j(\mathbf e).
$$
Much like the curvature, it can be shown that Θ behaves as a contravariant tensor under a change in frame:
$$
\Theta^i(\mathbf e\, g)=\sum_j g_j^i \Theta^j(\mathbf e).
$$
The frame-independent torsion may also be recovered from the frame components:
$$
\Theta = \sum_i e_i \Theta^i(\mathbf e).
$$
|
https://en.wikipedia.org/wiki/Connection_form
|
passage: ## Plotting a ternary plot
Cartesian coordinates are useful for plotting points in the triangle. Consider an equilateral ternary plot where a = 100% is placed at (x, y) = (0, 0) and b = 100% at (1, 0). Then c = 100% is
$$
(\frac{1}{2}, \frac{\sqrt{3}}{2}),
$$
and the triple (a, b, c) is
$$
\left(\frac{1}{2}\cdot\frac{2b+c}{a+b+c},\frac{\sqrt{3}}{2}\cdot\frac{c}{a+b+c}\right) \,.
$$
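The coordinate formula above can be applied directly in code. Here is a minimal sketch (my own addition, not from the article); the function name and the choice of which soil component maps to which corner are illustrative assumptions.
```python
def ternary_to_cartesian(a, b, c):
    """Map a composition (a, b, c) to Cartesian (x, y) on an equilateral
    triangle with a at (0, 0), b at (1, 0) and c at (1/2, sqrt(3)/2),
    following the formula above."""
    total = a + b + c
    x = 0.5 * (2 * b + c) / total
    y = (3 ** 0.5) / 2 * c / total
    return x, y

# The three soil samples from the example below (clay, silt, sand);
# assigning clay/silt/sand to the corners a/b/c is an arbitrary choice here.
samples = {"Sample 1": (50, 20, 30), "Sample 2": (10, 60, 30), "Sample 3": (10, 30, 60)}
for name, (clay, silt, sand) in samples.items():
    print(name, ternary_to_cartesian(clay, silt, sand))
```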
## Example
This example shows how this works for a hypothetical set of three soil samples:
{| class="wikitable" style="text-align:center;"
!Sample||Clay||Silt||Sand||Notes
|-
|Sample 1||50%||20%||30%||align="left"|Because clay and silt together make up 70% of this sample, the proportion of sand must be 30% for the components to sum to 100%.
|-
|Sample 2||10%||60%||30%||align="left"|The proportion of sand is 30% as in Sample 1, but as the proportion of silt rises by 40%, the proportion of clay decreases correspondingly.
|-
|Sample 3||10%||30%||60%||align="left"|This sample has the same proportion of clay as Sample 2, but the proportions of silt and sand are swapped; the plot is reflected about its vertical axis.
|}
|
https://en.wikipedia.org/wiki/Ternary_plot
|
passage: Instead they are normally affected by one or several of the following effects:
- focal blur caused by a finite depth-of-field and finite point spread function.
- penumbral blur caused by shadows created by light sources of non-zero radius.
- shading at a smooth object
A number of researchers have used a Gaussian smoothed step edge (an error function) as the simplest extension of the ideal step edge model for modeling the effects of edge blur in practical applications. W. Zhang and F. Bergholm (1997) "Multi-scale blur estimation and edge type classification for scene analysis", International Journal of Computer Vision, vol 24, issue 3, Pages: 219–250.
Thus, a one-dimensional image
$$
f
$$
that has exactly one edge placed at
$$
x = 0
$$
may be modeled as:
$$
f(x) = \frac{I_r - I_\ell}{2} \left( \operatorname{erf}\left(\frac{x}{\sqrt{2}\sigma}\right) + 1\right) + I_\ell.
$$
At the left side of the edge, the intensity is
$$
I_\ell = \lim_{x \rightarrow -\infty} f(x)
$$
, and right of the edge it is
$$
I_r = \lim_{x \rightarrow \infty} f(x)
$$
. The scale parameter
$$
\sigma
$$
is called the blur scale of the edge. Ideally this scale parameter should be adjusted based on the quality of image to avoid destroying true edges of the image.
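As a short illustration (my own sketch, not part of the article), the blurred step-edge model above can be evaluated with SciPy's error function; the parameter values are arbitrary.
```python
import numpy as np
from scipy.special import erf

def blurred_edge(x, i_left, i_right, sigma):
    """Gaussian-smoothed step edge at x = 0 with blur scale sigma,
    following f(x) = (I_r - I_l)/2 * (erf(x / (sqrt(2) * sigma)) + 1) + I_l."""
    return (i_right - i_left) / 2.0 * (erf(x / (np.sqrt(2) * sigma)) + 1.0) + i_left

x = np.linspace(-5.0, 5.0, 11)
f = blurred_edge(x, i_left=10.0, i_right=200.0, sigma=1.0)
print(f[0], f[-1])   # approaches I_l far to the left and I_r far to the right
```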
|
https://en.wikipedia.org/wiki/Edge_detection
|
passage: The stiffness matrix is for only one pair of contact springs. However, the global stiffness matrix is determined by summing up the stiffness matrices of individual pairs of springs around each element. Consequently, the developed stiffness matrix has total effects from all pairs of springs, according to the stress situation around the element. This technique can be used in both load and displacement control cases. The 3D stiffness matrix may be deduced similarly.
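The summation step described above can be pictured with a generic scatter-add sketch (my own simplification, not the AEM formulation itself): each pair of contact springs contributes a small local stiffness matrix that is added into the global matrix at the degrees of freedom it connects. The matrix sizes and the index map are assumed purely for illustration.
```python
import numpy as np

def assemble_global_stiffness(n_dof, spring_contributions):
    """Sum local stiffness matrices into a global matrix.

    spring_contributions: iterable of (dof_indices, k_local) pairs, where
    k_local is a square matrix over the listed degrees of freedom.
    """
    k_global = np.zeros((n_dof, n_dof))
    for dofs, k_local in spring_contributions:
        for i, gi in enumerate(dofs):
            for j, gj in enumerate(dofs):
                k_global[gi, gj] += k_local[i, j]
    return k_global

# Two 1-D axial springs of stiffness k joining DOFs 0-1 and 1-2 (toy example).
k = 5.0
k_local = k * np.array([[1.0, -1.0], [-1.0, 1.0]])
K = assemble_global_stiffness(3, [((0, 1), k_local), ((1, 2), k_local)])
print(K)
```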
## Applications
The applied element method is currently being used in the following applications:
- Structural vulnerability assessment
- Progressive collapse
- Blast analysis
- Impact analysis
- Seismic analysis
- Forensic engineering
- Performance based design
- Demolition analysis
- Glass performance analysis
- Visual effects
|
https://en.wikipedia.org/wiki/Applied_element_method
|
passage: Then for every
$$
n \in \mathbb{N}
$$
there exists a uniquely determined positive element
$$
b \in \mathcal{A}_+
$$
with
$$
b^n =a
$$
, i.e. a unique
$$
n
$$
-th root of a.
Proof. For each
$$
n \in \mathbb{N}
$$
, the root function
$$
f_n \colon \R_0^+ \to \R_0^+, x \mapsto \sqrt[n]x
$$
is a continuous function on σ(a) ⊆ [0, ∞). If
$$
b \; \colon = f_n (a)
$$
is defined using the continuous functional calculus, then
$$
b^n = (f_n(a))^n = (f_n^n)(a) = \operatorname{Id}_{\sigma(a)}(a)=a
$$
follows from the properties of the calculus. From the spectral mapping theorem follows
$$
\sigma(b) = \sigma(f_n(a)) = f_n(\sigma(a)) \subseteq [0,\infty)
$$
, i.e.
$$
b
$$
is positive. If
$$
c \in \mathcal{A}_+
$$
is another positive element with
$$
c^n = a = b^n
$$
, then
$$
c = f_n (c^n) = f_n(b^n) = b
$$
holds, as the root function on the positive real numbers is an inverse function of the power function x ↦ xⁿ.
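For a concrete finite-dimensional illustration (my own addition, not from the article): for a positive semidefinite matrix, the continuous functional calculus amounts to applying the root function to the eigenvalues.
```python
import numpy as np

def matrix_nth_root(a, n):
    """Unique positive n-th root of a positive semidefinite matrix,
    computed by applying x -> x**(1/n) to the eigenvalues."""
    eigvals, eigvecs = np.linalg.eigh(a)
    eigvals = np.clip(eigvals, 0.0, None)      # guard against tiny negative round-off
    return eigvecs @ np.diag(eigvals ** (1.0 / n)) @ eigvecs.T

a = np.array([[2.0, 1.0], [1.0, 2.0]])         # positive definite
b = matrix_nth_root(a, 3)
print(np.allclose(np.linalg.matrix_power(b, 3), a))   # True: b cubed recovers a
```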
|
https://en.wikipedia.org/wiki/Continuous_functional_calculus
|
passage: In digital imaging, a pixel (abbreviated px), pel, or picture element is the smallest addressable element in a raster image, or the smallest addressable element in a dot matrix display device. In most digital display devices, pixels are the smallest element that can be manipulated through software.
Each pixel is a sample of an original image; more samples typically provide more accurate representations of the original. The intensity of each pixel is variable. In color imaging systems, a color is typically represented by three or four component intensities such as red, green, and blue, or cyan, magenta, yellow, and black.
In some contexts (such as descriptions of camera sensors), pixel refers to a single scalar element of a multi-component representation (called a photosite in the camera sensor context, although sensel is sometimes used), while in yet other contexts (like MRI) it may refer to a set of component intensities for a spatial position.
Software on early consumer computers was necessarily rendered at a low resolution, with large pixels visible to the naked eye; graphics made under these limitations may be called pixel art, especially in reference to video games. Modern computers and displays, however, can easily render orders of magnitude more pixels than was previously possible, necessitating the use of large measurements like the megapixel (one million pixels).
## Etymology
The word pixel is a combination of pix (from "pictures", shortened to "pics") and el (for "element"); similar formations with 'el include the words voxel , and texel .
|
https://en.wikipedia.org/wiki/Pixel
|
passage: The restricted planes given in this manner more closely resemble the real projective plane.
## Perspectivity and projectivity
Given three non-collinear points, there are three lines connecting them, but with four points, no three collinear, there are six connecting lines and three additional "diagonal points" determined by their intersections. The science of projective geometry captures this surplus determined by four points through a quaternary relation and the projectivities which preserve the complete quadrangle configuration.
An harmonic quadruple of points on a line occurs when there is a complete quadrangle two of whose diagonal points are in the first and third position of the quadruple, and the other two positions are points on the lines joining two quadrangle points through the third diagonal point.
A spatial perspectivity of a projective configuration in one plane yields such a configuration in another, and this applies to the configuration of the complete quadrangle. Thus harmonic quadruples are preserved by perspectivity. If one perspectivity follows another the configurations follow along. The composition of two perspectivities is no longer a perspectivity, but a projectivity.
While corresponding points of a perspectivity all converge at a point, this convergence is not true for a projectivity that is not a perspectivity. In projective geometry the intersection of lines formed by corresponding points of a projectivity in a plane are of particular interest.
|
https://en.wikipedia.org/wiki/Projective_geometry
|
passage: ## Formulation
Generally, WKB theory is a method for approximating the solution of a differential equation whose highest derivative is multiplied by a small parameter ε. The method of approximation is as follows.
For a differential equation
$$
\varepsilon \frac{d^ny}{dx^n} + a(x)\frac{d^{n-1}y}{dx^{n-1}} + \cdots + k(x)\frac{dy}{dx} + m(x)y= 0,
$$
assume a solution of the form of an asymptotic series expansion
$$
y(x) \sim \exp\left[\frac{1}{\delta}\sum_{n=0}^{\infty} \delta^n S_n(x)\right]
$$
in the limit δ → 0. The asymptotic scaling of δ in terms of ε will be determined by the equation – see the example below.
Substituting the above ansatz into the differential equation and cancelling out the exponential terms allows one to solve for an arbitrary number of terms in the expansion.
WKB theory is a special case of multiple scale analysis.
## An example
This example comes from the text of Carl M. Bender and Steven Orszag. Consider the second-order homogeneous linear differential equation
$$
\epsilon^2 \frac{d^2 y}{dx^2} = Q(x) y,
$$
where
$$
Q(x) \neq 0
$$
.
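As a hedged illustration of the next step (standard WKB reasoning, not quoted from this excerpt): taking δ = ε, substituting the ansatz into this equation, and collecting powers of ε gives, at the two leading orders,
$$
\left(S_0'(x)\right)^2 = Q(x) \;\Rightarrow\; S_0(x) = \pm\int^{x}\sqrt{Q(t)}\,dt,
\qquad
2 S_0' S_1' + S_0'' = 0 \;\Rightarrow\; S_1(x) = -\tfrac{1}{4}\ln Q(x),
$$
so that, to this order, y(x) is a linear combination of Q(x)^{-1/4} exp(±(1/ε)∫√Q dx).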
|
https://en.wikipedia.org/wiki/WKB_approximation
|
passage: For example, with a "context window" of four words, they compute the spamicity of "Viagra is good for", instead of computing the spamicities of "Viagra", "is", "good", and "for". This method gives more sensitivity to context and eliminates the Bayesian noise better, at the expense of a bigger database.
#### Disadvantages
Depending on the implementation, Bayesian spam filtering may be susceptible to Bayesian poisoning, a technique used by spammers in an attempt to degrade the effectiveness of spam filters that rely on Bayesian filtering. A spammer practicing Bayesian poisoning will send out emails with large amounts of legitimate text (gathered from legitimate news or literary sources). Spammer tactics include insertion of random innocuous words that are not normally associated with spam, thereby decreasing the email's spam score, making it more likely to slip past a Bayesian spam filter. However, with (for example) Paul Graham's scheme only the most significant probabilities are used, so that padding the text out with non-spam-related words does not affect the detection probability significantly.
Words that normally appear in large quantities in spam may also be transformed by spammers. For example, «Viagra» would be replaced with «Viaagra» or «V!agra» in the spam message. The recipient of the message can still read the changed words, but each of these words is met more rarely by the Bayesian filter, which hinders its learning process.
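A small sketch (mine, not from the article) of the kind of per-word combination such filters use; the word probabilities below are made-up illustrations.
```python
from math import prod

def combined_spam_probability(word_spamicities):
    """Combine per-word spam probabilities p_i under the naive independence
    assumption: p = (p1...pn) / (p1...pn + (1-p1)...(1-pn))."""
    p_spam = prod(word_spamicities)
    p_ham = prod(1.0 - p for p in word_spamicities)
    return p_spam / (p_spam + p_ham)

# Hypothetical spamicities learned for the words of one message.
print(combined_spam_probability([0.99, 0.95, 0.2]))   # dominated by the strong spam words
```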
|
https://en.wikipedia.org/wiki/Naive_Bayes_classifier
|
passage: In mathematics, infinite compositions of analytic functions (ICAF) offer alternative formulations of analytic continued fractions, series, products and other infinite expansions, and the theory evolving from such compositions may shed light on the convergence/divergence of these expansions. Some functions can actually be expanded directly as infinite compositions. In addition, it is possible to use ICAF to evaluate solutions of fixed point equations involving infinite expansions. Complex dynamics offers another venue for iteration of systems of functions rather than a single function. For infinite compositions of a single function see Iterated function. For compositions of a finite number of functions, useful in fractal theory, see Iterated function system.
Although the title of this article specifies analytic functions, there are results for more general functions of a complex variable as well.
|
https://en.wikipedia.org/wiki/Infinite_compositions_of_analytic_functions
|
passage: Namely, we have a recurrence relation between the elementary symmetric polynomials and the power sum polynomials given as on this page by
$$
(-1)^{k}k e_k(x_1,\ldots,x_n) = \sum_{j=1}^k (-1)^{k-j-1} p_j(x_1,\ldots,x_n)e_{k-j}(x_1,\ldots,x_n),
$$
which in our situation equates to the limiting recurrence relation (or generating function convolution, or product) expanded as
$$
\frac{\pi^{2k}}{2}\cdot \frac{(2k) \cdot (-1)^k}{(2k+1)!} = -[x^{2k}] \frac{\sin(\pi x)}{\pi x} \times \sum_{i \geq 1} \zeta(2i) x^{2i}.
$$
Then by differentiation and rearrangement of the terms in the previous equation, we obtain that
$$
\zeta(2k) = [x^{2k}]\frac{1}{2}\left(1-\pi x\cot(\pi x)\right).
$$
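The closing identity can be checked symbolically for the first few values; here is a short SymPy sketch (my addition, not from the article).
```python
import sympy as sp

x = sp.symbols('x')
expr = (1 - sp.pi * x * sp.cot(sp.pi * x)) / 2
expansion = sp.series(expr, x, 0, 7).removeO()

for k in (1, 2, 3):
    coeff = expansion.coeff(x, 2 * k)                      # [x^(2k)] of the expansion
    print(k, coeff, sp.simplify(coeff - sp.zeta(2 * k)) == 0)   # matches zeta(2k)
```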
### Consequences of Euler's proof
By the above results, we can conclude that
$$
\zeta(2k)
$$
is always a rational multiple of
$$
\pi^{2k}
$$
. In particular, since
$$
\pi
$$
and integer powers of it are transcendental, we can conclude at this point that
$$
\zeta(2k)
$$
is irrational, and more precisely, transcendental for all
$$
k \geq 1
$$
.
|
https://en.wikipedia.org/wiki/Basel_problem
|
passage: It now seems that segmentation can appear and disappear much more easily in the course of evolution than was previously thought. The 2007 study also noted that the ladder-like nervous system, which is associated with segmentation, is less universal than previously thought in both annelids and arthropods.
The updated phylogenetic tree of the annelid phylum is comprised by a grade of basal groups of polychaetes: Palaeoannelida, Chaetopteriformia and the Amphinomida/Sipuncula/Lobatocerebrum clade. This grade is followed by Pleistoannelida, the clade containing nearly all of annelid diversity, divided into two highly diverse groups: Sedentaria and Errantia. Sedentaria contains the clitellates, pogonophorans, echiurans and some archiannelids, as well as several polychaete groups. Errantia contains the eunicid and phyllodocid polychaetes, and several archiannelids. Some small groups, such as the Myzostomida, are more difficult to place due to long branching, but belong to either one of these large groups.
### External relationships
Annelids are members of the protostomes, one of the two major superphyla of bilaterian animals – the other is the deuterostomes, which includes vertebrates. Within the protostomes, annelids used to be grouped with arthropods under the super-group Articulata ("jointed animals"), as segmentation is obvious in most members of both phyla. However, the genes that drive segmentation in arthropods do not appear to do the same in annelids.
|
https://en.wikipedia.org/wiki/Annelid
|
passage: $$
-\frac{1}{N} \frac{\partial^2\ln \mathcal{L} (\alpha, \beta, a, c\mid Y)}{\partial \beta^2} = \operatorname{var}[\ln (1-X)] = \psi_1(\beta) - \psi_1(\alpha + \beta) = \mathcal{I}_{\beta, \beta} = \operatorname{E} \left[- \frac{1}{N} \frac{\partial^2\ln \mathcal{L} (\alpha, \beta, a, c\mid Y)}{\partial \beta^2} \right] = \ln\left(\operatorname{var}_{G(1-X)}\right)
$$
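A numerical sanity check (my own sketch, written for the two-parameter beta distribution rather than the four-parameter case above): the trigamma expression should match the sample variance of ln(1 − X).
```python
import numpy as np
from scipy.special import polygamma

alpha, beta = 2.0, 5.0
rng = np.random.default_rng(0)
samples = rng.beta(alpha, beta, size=200_000)

analytic = polygamma(1, beta) - polygamma(1, alpha + beta)   # psi_1(beta) - psi_1(alpha + beta)
empirical = np.var(np.log(1.0 - samples))
print(analytic, empirical)    # the two values should agree closely
```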
|
https://en.wikipedia.org/wiki/Beta_distribution
|
passage: - Mathcad uses the argument order `atan2(x, y)`. `atan2(0, 0)` is undefined.
- For systems implementing signed zero, infinities, or Not a Number (for example, IEEE floating point), it is common to implement reasonable extensions which may extend the range of values produced to include −π and −0 when y = −0. These also may return NaN or raise an exception when given a NaN argument.
- In the Intel x86 Architecture assembler code, `atan2` is known as the `FPATAN` (floating-point partial arctangent) instruction. It can deal with infinities and results lie in the closed interval [−π, +π], e.g. `atan2(∞, x)` = +π/2 for finite x. Particularly, `FPATAN` is defined when both arguments are zero:
- : `atan2(+0, +0)` = +0;
- : `atan2(+0, −0)` = +π;
- : `atan2(−0, +0)` = −0;
- : `atan2(−0, −0)` = −π.
This definition is related to the concept of signed zero.
- In mathematical writings other than source code, such as in books and articles, the notations Arctan and Tan−1 have been utilized; these are capitalized variants of the regular arctan and tan−1. This usage is consistent with the complex argument notation, such that Arg(x + iy) = atan2(y, x).
- On HP calculators, treat the coordinates as a complex number and then take the `ARG`.
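Python's math.atan2 follows the same IEEE-style conventions, so the zero-argument cases listed above can be checked directly (a small sketch of mine, not from the article).
```python
import math

for y, x in [(0.0, 0.0), (0.0, -0.0), (-0.0, 0.0), (-0.0, -0.0)]:
    print(f"atan2({y!r}, {x!r}) = {math.atan2(y, x)!r}")
# atan2(0.0, 0.0)   ->  0.0
# atan2(0.0, -0.0)  ->  3.141592653589793   (+pi)
# atan2(-0.0, 0.0)  -> -0.0
# atan2(-0.0, -0.0) -> -3.141592653589793   (-pi)
```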
|
https://en.wikipedia.org/wiki/Atan2
|
passage: During starvation, PDK increases in amount in most tissues, including skeletal muscle, via increased gene transcription. Under the same conditions, the amount of PDP decreases. The resulting inhibition of PDC prevents muscle and other tissues from catabolizing glucose and gluconeogenesis precursors. Metabolism shifts toward fat utilization, while muscle protein breakdown to supply gluconeogenesis precursors is minimized, and available glucose is spared for use by the brain.
Calcium ions have a role in regulation of PDC in muscle tissue, because it activates PDP, stimulating glycolysis on its release into the cytosol - during muscle contraction. Some products of these transcriptions release H2 into the muscles. This can cause calcium ions to decay over time.
## Localization of pyruvate decarboxylation
In eukaryotic cells the pyruvate decarboxylation occurs inside the mitochondrial matrix, after transport of the substrate, pyruvate, from the cytosol. The transport of pyruvate into the mitochondria is via the transport protein pyruvate translocase.
|
https://en.wikipedia.org/wiki/Pyruvate_dehydrogenase_complex
|
passage: ### Descriptive set theory
Descriptive set theory is the study of subsets of the real line and, more generally, subsets of Polish spaces. It begins with the study of pointclasses in the Borel hierarchy and extends to the study of more complex hierarchies such as the projective hierarchy and the Wadge hierarchy. Many properties of Borel sets can be established in ZFC, but proving these properties hold for more complicated sets requires additional axioms related to determinacy and large cardinals.
The field of effective descriptive set theory is between set theory and recursion theory. It includes the study of lightface pointclasses, and is closely related to hyperarithmetical theory. In many cases, results of classical descriptive set theory have effective versions; in some cases, new results are obtained by proving the effective version first and then extending ("relativizing") it to make it more broadly applicable.
A recent area of research concerns Borel equivalence relations and more complicated definable equivalence relations. This has important applications to the study of invariants in many fields of mathematics.
### Fuzzy set theory
In set theory as Cantor defined and Zermelo and Fraenkel axiomatized, an object is either a member of a set or not. In fuzzy set theory this condition was relaxed by Lotfi A. Zadeh so an object has a degree of membership in a set, a number between 0 and 1. For example, the degree of membership of a person in the set of "tall people" is more flexible than a simple yes or no answer
|
https://en.wikipedia.org/wiki/Set_theory
|
passage: This led Boyd to consider the set of values
$$
L_n:=\bigl\{m(P(z_1,\dots,z_n)):P\in\mathbb{Z}[z_1,\dots,z_n]\bigr\},
$$
and the union
$$
{L}_\infty = \bigcup^\infty_{n=1}L_n
$$
. He made the far-reaching conjecture that the set
$$
{L}_\infty
$$
is a closed subset of
$$
\mathbb R
$$
. An immediate consequence of this conjecture would be the truth of Lehmer's conjecture, albeit without an explicit lower bound. As Smyth's result suggests that
$$
L_1\subsetneqq L_2
$$
, Boyd further conjectures that
$$
L_1\subsetneqq L_2\subsetneqq L_3\subsetneqq\ \cdots .
$$
### Mahler measure and entropy
An action
$$
\alpha_M
$$
of
$$
\mathbb{Z}^n
$$
by automorphisms of a compact metrizable abelian group may be associated via duality to any countable module
$$
N
$$
over the ring
$$
R=\mathbb{Z}[z_1^{\pm1},\dots,z_n^{\pm1}]
$$
.
|
https://en.wikipedia.org/wiki/Mahler_measure
|
passage: The sport underwent very rapid growth however, particularly in Europe after the sale of a sub-license sold to Ten Cate Sports in the Netherlands. In 1975 Ten Cate Sports sold 45,000 boards in Europe.
## Equipment
Windsurfing equipment has evolved in design over the years and is often classified as either shortboards or longboards. Longboards are usually longer than 3 meters, with a retractable daggerboard, and are optimized for lighter winds or course racing. Shortboards are less than 3 meters long and are designed for planing conditions.
While windsurfing is possible under a wide range of wind conditions, most intermediate and advanced recreational windsurfers prefer to sail in conditions that allow for consistent planing with multi-purpose, not overly specialized, free-ride equipment. Larger (100 to 140 liters) free-ride boards are capable of planing at wind speeds as low as if rigged with an adequate, well-tuned sail in the six to eight square meter range. The pursuit of planing in lower winds has driven the popularity of wider and shorter boards, with which planing is possible in wind as low as , if sails in the 10 to 12 square meter range are used.
Modern windsurfing boards can be classified into many categories: The original Windsurfer board had a body made out of polyethylene filled with PVC foam. Later, hollow glass-reinforced epoxy designs were used.
|
https://en.wikipedia.org/wiki/Windsurfing
|
passage: For
$$
S \subseteq X
$$
, let us figure out the left adjoint, which is defined via
$$
{\operatorname{Hom}}(\exists_f S,T)
\cong
{\operatorname{Hom}}(S,f^{*}T),
$$
which here just means
$$
\exists_f S\subseteq T
\leftrightarrow
S\subseteq f^{-1}[T]
$$
.
Consider
$$
f[S] \subseteq T
$$
. We see
$$
S\subseteq f^{-1}[f[S]]\subseteq f^{-1}[T]
$$
Conversely, if for an
$$
x\in S
$$
we also have
$$
x\in f^{-1}[T]
$$
, then clearly
$$
f(x)\in T
$$
. So
$$
S \subseteq f^{-1}[T]
$$
implies
$$
f[S] \subseteq T
$$
. We conclude that the left adjoint to the inverse image functor
$$
f^{*}
$$
is given by the direct image. Here is a characterization of this result, which matches the logical interpretation more closely: the image of
$$
S
$$
under
$$
\exists_f
$$
is the full set of
$$
y
$$
's, such that
$$
f^{-1} [\{y\}] \cap S
$$
is non-empty.
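For finite sets the adjunction ∃_f S ⊆ T ⟺ S ⊆ f⁻¹[T] can be checked exhaustively by brute force; here is a small sketch of mine (the particular sets and map are arbitrary illustrations).
```python
from itertools import combinations

X = {0, 1, 2, 3}
Y = {"a", "b"}
f = {0: "a", 1: "a", 2: "b", 3: "b"}        # an arbitrary map X -> Y

def subsets(s):
    s = list(s)
    return (set(c) for r in range(len(s) + 1) for c in combinations(s, r))

def image(S):            # the left adjoint: direct image of S under f
    return {f[x] for x in S}

def preimage(T):         # the inverse image functor f*
    return {x for x in X if f[x] in T}

# The two inclusions agree for every S and T, which is the adjunction condition.
print(all((image(S) <= T) == (S <= preimage(T))
          for S in subsets(X) for T in subsets(Y)))      # True
```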
|
https://en.wikipedia.org/wiki/Adjoint_functors
|
passage: ## First publication
Finger trees were first published in 1977 by Leonidas J. Guibas, and periodically refined since (e.g. a version using AVL trees, non-lazy finger trees, simpler 2–3 finger trees shown here, B-Trees and so on)
## Implementations
Finger trees have since been used in the Haskell core libraries (in the implementation of Data.Sequence), and an implementation in OCaml exists which was derived from a proven-correct Coq implementation. There is also a verified implementation in Isabelle (proof assistant) from which programs in Haskell and other (functional) languages can be generated. Finger trees can be implemented with or without lazy evaluation, but laziness allows for simpler implementations.
|
https://en.wikipedia.org/wiki/Finger_tree
|
passage: Weyl groups
Many, but not all, of these are Weyl groups, and every Weyl group can be realized as a Coxeter group. The Weyl groups are the families
$$
A_n, B_n,
$$
and
$$
D_n,
$$
and the exceptions
$$
E_6, E_7, E_8, F_4,
$$
and
$$
I_2(6),
$$
denoted in Weyl group notation as
$$
G_2.
$$
The non-Weyl ones are the exceptions
$$
H_3
$$
and
$$
H_4,
$$
and those members of the family
$$
I_2(p)
$$
that are not exceptionally isomorphic to a Weyl group (namely
$$
I_2(3) \cong A_2, I_2(4) \cong B_2,
$$
and
$$
I_2(6) \cong G_2
$$
).
This can be proven by comparing the restrictions on (undirected) Dynkin diagrams with the restrictions on Coxeter diagrams of finite groups: formally, the Coxeter graph can be obtained from the Dynkin diagram by discarding the direction of the edges, and replacing every double edge with an edge labelled 4 and every triple edge by an edge labelled 6. Also note that every finitely generated Coxeter group is an automatic group. Dynkin diagrams have the additional restriction that the only permitted edge labels are 2, 3, 4, and 6, which yields the above.
|
https://en.wikipedia.org/wiki/Coxeter_group
|
passage: In the general case where
$$
R_a,R_b
$$
and
$$
t_a,t_b
$$
are the respective rotations and translations of camera a and b,
$$
R=R_a R_b^T
$$
and the homography matrix
$$
H_{ab}
$$
becomes
$$
H_{ab} = R_a R_b^T - \frac{(-R_a R_b^T t_b + t_a)\, n^T}{d}
$$
where d is the distance of the camera b to the plane.
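A NumPy sketch of this formula (my own addition); the example poses, plane normal, and distance are arbitrary assumptions.
```python
import numpy as np

def homography_between_cameras(R_a, t_a, R_b, t_b, n, d):
    """Plane-induced homography H_ab = R_a R_b^T - ((-R_a R_b^T t_b + t_a) n^T) / d,
    where n is the plane normal and d is the distance of camera b to the plane."""
    R = R_a @ R_b.T
    return R - np.outer(-R @ t_b + t_a, n) / d

# Toy example: camera b rotated slightly and translated relative to camera a.
theta = np.deg2rad(5.0)
R_a, t_a = np.eye(3), np.zeros(3)
R_b = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0,            0.0,           1.0]])
t_b = np.array([0.1, 0.0, 0.0])
n, d = np.array([0.0, 0.0, 1.0]), 2.0
print(homography_between_cameras(R_a, t_a, R_b, t_b, n, d))
```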
## Affine homography
When the image region in which the homography is computed is small or the image has been acquired with a large focal length, an affine homography is a more appropriate model of image displacements. An affine homography is a special type of a general homography whose last row is fixed to
$$
h_{31}=h_{32}=0, \; h_{33}=1.
$$
|
https://en.wikipedia.org/wiki/Homography_%28computer_vision%29
|
passage: In statistical modeling (especially process modeling), polynomial functions and rational functions are sometimes used as an empirical technique for curve fitting.
## Polynomial function models
A polynomial function is one that has the form
$$
y = a_{n}x^{n} + a_{n-1}x^{n-1} + \cdots + a_{2}x^{2} + a_{1}x + a_{0}
$$
where n is a non-negative integer that defines the degree of the polynomial. A polynomial with a degree of 0 is simply a constant function; with a degree of 1 is a line; with a degree of 2 is a quadratic; with a degree of 3 is a cubic, and so on.
Historically, polynomial models are among the most frequently used empirical models for curve fitting.
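A minimal curve-fitting sketch (my own, using NumPy's polynomial fit on synthetic data; the degree and noise level are arbitrary choices).
```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 4.0, 50)
y = 1.5 + 0.8 * x - 0.3 * x**2 + rng.normal(scale=0.1, size=x.size)   # noisy quadratic

coeffs = np.polyfit(x, y, deg=2)        # highest-degree coefficient first
model = np.poly1d(coeffs)
print(coeffs)                           # roughly [-0.3, 0.8, 1.5]
print(model(2.0))                       # fitted value at x = 2
```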
### Advantages
These models are popular for the following reasons.
1. Polynomial models have a simple form.
1. Polynomial models have well known and understood properties.
1. Polynomial models have moderate flexibility of shapes.
1. Polynomial models are a closed family. Changes of location and scale in the raw data result in a polynomial model being mapped to a polynomial model. That is, polynomial models are not dependent on the underlying metric.
1. Polynomial models are computationally easy to use.
### Disadvantages
However, polynomial models also have the following limitations.
1. Polynomial models have poor interpolatory properties.
|
https://en.wikipedia.org/wiki/Polynomial_and_rational_function_modeling
|
passage: ### Social media
Twitter implemented TensorFlow to rank tweets by importance for a given user, and changed their platform to show tweets in order of this ranking. Previously, tweets were simply shown in reverse chronological order. The photo sharing app VSCO used TensorFlow to help suggest custom filters for photos.
### Search Engine
Google officially released RankBrain on October 26, 2015, backed by TensorFlow.
### Education
InSpace, a virtual learning platform, used TensorFlow to filter out toxic chat messages in classrooms. Liulishuo, an online English learning platform, utilized TensorFlow to create an adaptive curriculum for each student. TensorFlow was used to accurately assess a student's current abilities, and also helped decide the best future content to show based on those capabilities.
### Retail
The e-commerce platform Carousell used TensorFlow to provide personalized recommendations for customers. The cosmetics company ModiFace used TensorFlow to create an augmented reality experience for customers to test various shades of make-up on their face.
### Research
TensorFlow is the foundation for the automated image-captioning software DeepDream.
|
https://en.wikipedia.org/wiki/TensorFlow
|
passage: Since
$$
\omega
$$
has only spatial components, the Lie derivative can be simplified using Cartan's magic formula, to
$$
\mathcal{L}_\Psi \omega
= \mathcal{L}_{\mathbf v} \omega
+ \mathcal{L}_{\frac{\partial}{\partial t}} \omega
= i_{\mathbf v} d\omega
+ d i_{\mathbf v} \omega
+ i_{\frac{\partial}{\partial t}} d \omega
= i_{\mathbf v} d_{x} \omega
+ d i_{\mathbf v} \omega
+ \dot\omega
$$
which, after integrating over
$$
\Omega(t)
$$
and using generalized Stokes' theorem on the second term, reduces to the three desired terms.
## Measure theory statement
Let
$$
X
$$
be an open subset of
$$
\mathbf{R}
$$
, and
$$
\Omega
$$
be a measure space. Suppose
$$
f\colon X \times \Omega \to \mathbf{R}
$$
satisfies the following conditions:
1.
$$
f(x,\omega)
$$
is a Lebesgue-integrable function of
$$
\omega
$$
for each
$$
x \in X
$$
.
1.
|
https://en.wikipedia.org/wiki/Leibniz_integral_rule
|
passage: Subsample size is some constant fraction
$$
f
$$
of the size of the training set. When
$$
f = 1
$$
, the algorithm is deterministic and identical to the one described above. Smaller values of
$$
f
$$
introduce randomness into the algorithm and help prevent overfitting, acting as a kind of regularization. The algorithm also becomes faster, because regression trees have to be fit to smaller datasets at each iteration. Friedman found that
$$
0.5 \leq f \leq 0.8
$$
leads to good results for small and moderate sized training sets. Therefore,
$$
f
$$
is typically set to 0.5, meaning that one half of the training set is used to build each base learner.
Also, like in bagging, subsampling allows one to define an out-of-bag error of the prediction performance improvement by evaluating predictions on those observations which were not used in the building of the next base learner. Out-of-bag estimates help avoid the need for an independent validation dataset, but often underestimate actual performance improvement and the optimal number of iterations.
### Number of observations in leaves
Gradient tree boosting implementations often also use regularization by limiting the minimum number of observations in trees' terminal nodes. It is used in the tree building process by ignoring any splits that lead to nodes containing fewer than this number of training set instances.
Imposing this limit helps to reduce variance in predictions at leaves.
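For illustration, both regularization knobs discussed here (the subsample fraction f and a minimum number of observations per leaf) appear as parameters in scikit-learn's gradient boosting; this is a hedged sketch of mine, with hyperparameter values chosen arbitrarily.
```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=2000, n_features=10, noise=5.0, random_state=0)

model = GradientBoostingRegressor(
    n_estimators=300,
    learning_rate=0.05,
    subsample=0.5,          # stochastic gradient boosting: f = 0.5
    min_samples_leaf=10,    # minimum number of observations in terminal nodes
    random_state=0,
).fit(X, y)

# With subsample < 1, per-iteration out-of-bag improvements are tracked.
print(model.oob_improvement_[:5])
```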
|
https://en.wikipedia.org/wiki/Gradient_boosting
|
passage: A root system is irreducible if and only if its Dynkin diagram is connected. The possible connected diagrams are as indicated in the figure. The subscripts indicate the number of vertices in the diagram (and hence the rank of the corresponding irreducible root system).
If
$$
\Phi
$$
is a root system, the Dynkin diagram for the dual root system
$$
\Phi^\vee
$$
is obtained from the Dynkin diagram of
$$
\Phi
$$
by keeping all the same vertices and edges, but reversing the directions of all arrows. Thus, we can see from their Dynkin diagrams that
$$
B_n
$$
and
$$
C_n
$$
are dual to each other.
## Weyl chambers and the Weyl group
If
$$
\Phi\subset E
$$
is a root system, we may consider the hyperplane perpendicular to each root
$$
\alpha
$$
. Recall that
$$
\sigma_\alpha
$$
denotes the reflection about the hyperplane and that the Weyl group is the group of transformations of
$$
E
$$
generated by all the
$$
\sigma_\alpha
$$
's. The complement of the set of hyperplanes is disconnected, and each connected component is called a Weyl chamber.
|
https://en.wikipedia.org/wiki/Root_system
|
passage: divide-by-zero returns infinity exactly, which will typically then divide a finite number and so give zero, or else will give an invalid exception subsequently if not, and so can also typically be ignored. For example, the effective resistance of n resistors in parallel (see fig. 1) is given by
$$
R_\text{tot}=1/(1/R_1+1/R_2+\cdots+1/R_n)
$$
. If a short-circuit develops with
$$
R_1
$$
set to 0,
$$
1/R_1
$$
will return +infinity which will give a final
$$
R_{tot}
$$
of 0, as expected (see the continued fraction example of IEEE 754 design rationale for another example).
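The parallel-resistor example can be reproduced directly with IEEE semantics; here is a small NumPy sketch of mine, silencing the expected divide-by-zero warning.
```python
import numpy as np

def parallel_resistance(resistances):
    """R_tot = 1 / (1/R_1 + ... + 1/R_n), relying on IEEE rules: 1/0 -> inf
    and 1/inf -> 0, so a short-circuited branch gives R_tot = 0."""
    r = np.asarray(resistances, dtype=float)
    with np.errstate(divide="ignore"):
        return 1.0 / np.sum(1.0 / r)

print(parallel_resistance([10.0, 20.0]))   # about 6.67 ohms
print(parallel_resistance([0.0, 20.0]))    # 0.0: one branch is a short circuit
```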
Overflow and invalid exceptions can typically not be ignored, but do not necessarily represent errors: for example, a root-finding routine, as part of its normal operation, may evaluate a passed-in function at values outside of its domain, returning NaN and an invalid exception flag to be ignored until finding a useful start point.
## Accuracy problems
The fact that floating-point numbers cannot accurately represent all real numbers, and that floating-point operations cannot accurately represent true arithmetic operations, leads to many surprising situations. This is related to the finite precision with which computers generally represent numbers.
For example, the decimal numbers 0.1 and 0.01 cannot be represented exactly as binary floating-point numbers.
|
https://en.wikipedia.org/wiki/Floating-point_arithmetic
|
passage: ## Keyboard and mouse macros
Keyboard macros and mouse macros allow short sequences of keystrokes and mouse actions to transform into other, usually more time-consuming, sequences of keystrokes and mouse actions. In this way, frequently used or repetitive sequences of keystrokes and mouse movements can be automated. Separate programs for creating these macros are called macro recorders.
During the 1980s, macro programs – originally SmartKey, then SuperKey, KeyWorks, Prokey – were very popular, first as a means to automatically format screenplays, then for a variety of user-input tasks. These programs were based on the terminate-and-stay-resident mode of operation and applied to all keyboard input, no matter in which context it occurred. They have to some extent fallen into obsolescence following the advent of mouse-driven user interfaces and the availability of keyboard and mouse macros in applications, such as word processors and spreadsheets, making it possible to create application-sensitive keyboard macros.
Keyboard macros can be used in massively multiplayer online role-playing games (MMORPGs) to perform repetitive, but lucrative tasks, thus accumulating resources. As this is done without human effort, it can skew the economy of the game. For this reason, use of macros is a violation of the TOS or EULA of most MMORPGs, and their administrators spend considerable effort to suppress them.
### Application macros and scripting
Keyboard and mouse macros that are created using an application's built-in macro features are sometimes called application macros.
|
https://en.wikipedia.org/wiki/Macro_%28computer_science%29
|
passage: This is an example of a Boolean ring.
### Noncommutative rings
- For any ring R and any natural number n, the set of all square n-by-n matrices with entries from R forms a ring with matrix addition and matrix multiplication as operations. For n = 1, this matrix ring is isomorphic to R itself. For n > 1 (and R not the zero ring), this matrix ring is noncommutative (a small numeric illustration follows this list).
- If A is an abelian group, then the endomorphisms of A form a ring, the endomorphism ring End(A) of A. The operations in this ring are addition and composition of endomorphisms. More generally, if M is a left module over a ring R, then the set of all R-linear maps from M to M forms a ring, also called the endomorphism ring and denoted by End_R(M).
- The endomorphism ring of an elliptic curve. It is a commutative ring if the elliptic curve is defined over a field of characteristic zero.
- If G is a group and R is a ring, the group ring of G over R is a free module over R having G as basis. Multiplication is defined by the rules that the elements of G commute with the elements of R and multiply together as they do in the group G.
- The ring of differential operators (depending on the context). In fact, many rings that appear in analysis are noncommutative. For example, most Banach algebras are noncommutative.
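As mentioned in the first item above, a small numeric illustration (my addition): two 2-by-2 integer matrices generally do not commute.
```python
import numpy as np

A = np.array([[1, 1], [0, 1]])
B = np.array([[1, 0], [1, 1]])
print(A @ B)                           # [[2, 1], [1, 1]]
print(B @ A)                           # [[1, 1], [1, 2]]
print(np.array_equal(A @ B, B @ A))    # False: the matrix ring is noncommutative
```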
|
https://en.wikipedia.org/wiki/Ring_%28mathematics%29
|
passage: (a) means that the segmentation must be complete; that is, every pixel must be in a region.
(b) requires that points in a region must be connected in some predefined sense.
(c) indicates that the regions must be disjoint.
(d) deals with the properties that must be satisfied by the pixels in a segmented region. For example,
$$
P(R_{i})=\text{TRUE}
$$
if all pixels in
$$
R_{i}
$$
have the same grayscale.
(e) indicates that region
$$
R_{i}
$$
and
$$
R_{j}
$$
are different in the sense of predicate
$$
P
$$
.
## Basic concept of seed points
The first step in region growing is to select a set of seed points. Seed point selection is based on some user criterion (for example, pixels in a certain grayscale range, pixels evenly spaced on a grid, etc.). The initial region begins as the exact location of these seeds.
The regions are then grown from these seed points to adjacent points depending on a region membership criterion. The criterion could be, for example, pixel intensity, grayscale texture, or colour.
Since the regions are grown on the basis of the criterion, the image information itself is important. For example, if the criterion were a pixel intensity threshold value, knowledge of the histogram of the image would be of use, as one could use it to determine a suitable threshold value for the region membership criterion.
|
https://en.wikipedia.org/wiki/Region_growing
|
passage: The quantity
$$
\Delta Q(P(t_1,t_2))\
$$
is properly said to be a functional of the continuous joint progression
$$
P(t_1,t_2)\
$$
of
$$
V(t)\
$$
and
$$
T(t)\
$$
, but, in the mathematical definition of a function,
$$
\Delta Q(P(t_1,t_2))\
$$
is not a function of
$$
(V,T)\
$$
. Although the fluxion
$$
\dot Q(t)\
$$
is defined here as a function of time
$$
t\
$$
, the symbols
$$
Q\
$$
and
$$
Q(V,T)\
$$
respectively standing alone are not defined here.
### Physical scope of the above rules of calorimetry
The above rules refer only to suitable calorimetric materials. The terms 'rapidly' and 'very small' call for empirical physical checking of the domain of validity of the above rules.
The above rules for the calculation of heat belong to pure calorimetry. They make no reference to thermodynamics, and were mostly understood before the advent of thermodynamics. They are the basis of the 'thermo' contribution to thermodynamics. The 'dynamics' contribution is based on the idea of work, which is not used in the above rules of calculation.
## Experimentally conveniently measured coefficients
Empirically, it is convenient to measure properties of calorimetric materials under experimentally controlled conditions.
|
https://en.wikipedia.org/wiki/Calorimetry
|
passage: In mathematics, specifically in commutative algebra, the elementary symmetric polynomials are one type of basic building block for symmetric polynomials, in the sense that any symmetric polynomial can be expressed as a polynomial in elementary symmetric polynomials. That is, any symmetric polynomial is given by an expression involving only additions and multiplication of constants and elementary symmetric polynomials. There is one elementary symmetric polynomial of degree d in n variables for each positive integer d ≤ n, and it is formed by adding together all distinct products of d distinct variables.
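A short sketch of the definition (my own addition): e_d evaluated at given values is the sum of products over all d-element subsets of the variables.
```python
from itertools import combinations
from math import prod

def elementary_symmetric(values, d):
    """e_d at the given values: sum of products over all d-element subsets."""
    return sum(prod(c) for c in combinations(values, d))

xs = [1, 2, 3, 4]
print([elementary_symmetric(xs, d) for d in range(1, 5)])   # [10, 35, 50, 24]
```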
|
https://en.wikipedia.org/wiki/Elementary_symmetric_polynomial
|
passage: If there are no negative-weight cycles, then every shortest path visits each vertex at most once, so at step 3 no further improvements can be made. Conversely, suppose no improvement can be made. Then for any cycle with vertices v[0], ..., v[k−1],
`v[i].distance <= v[i-1 (mod k)].distance + v[i-1 (mod k)]v[i].weight`
Summing around the cycle, the v[i].distance and v[i−1 (mod k)].distance terms cancel, leaving
`0 <= sum from 1 to k of v[i-1 (mod k)]v[i].weight`
I.e., every cycle has nonnegative weight.
## Finding negative cycles
When the algorithm is used to find shortest paths, the existence of negative cycles is a problem, preventing the algorithm from finding a correct answer. However, since it terminates upon finding a negative cycle, the Bellman–Ford algorithm can be used for applications in which this is the target to be sought – for example in cycle-cancelling techniques in network flow analysis.
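A compact Bellman–Ford sketch (my own, not taken from the article) that relaxes |V| − 1 times and then reports whether a further improvement, i.e. a reachable negative-weight cycle, exists.
```python
def bellman_ford(n, edges, source):
    """edges: list of (u, v, weight). Returns (distances, has_negative_cycle)."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):                      # |V| - 1 rounds of relaxation
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One more pass: any further improvement implies a negative-weight cycle.
    has_negative_cycle = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, has_negative_cycle

edges = [(0, 1, 1.0), (1, 2, -1.0), (2, 1, -1.0), (1, 3, 4.0)]
print(bellman_ford(4, edges, 0))    # the 1 <-> 2 loop has total weight -2
```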
## Applications in routing
A distributed variant of the Bellman–Ford algorithm is used in distance-vector routing protocols, for example the Routing Information Protocol (RIP). The algorithm is distributed because it involves a number of nodes (routers) within an Autonomous system (AS), a collection of IP networks typically owned by an ISP.
It consists of the following steps:
1.
|
https://en.wikipedia.org/wiki/Bellman%E2%80%93Ford_algorithm
|
passage: Losing energy would be essential for object formation, because a particle that gains energy during compaction or falling "inward" under gravity, and cannot lose it any other way, will heat up and increase velocity and momentum. Dark matter appears to lack a means to lose energy, simply because it is not capable of interacting strongly in other ways except through gravity. The virial theorem suggests that such a particle would not stay bound to the gradually forming object – as the object began to form and compact, the dark matter particles within it would speed up and tend to escape.
It lacks a diversity of interactions needed to form structures
Ordinary matter interacts in many different ways, which allows the matter to form more complex structures. For example, stars form through gravity, but the particles within them interact and can emit energy in the form of neutrinos and electromagnetic radiation through fusion when they become energetic enough. Protons and neutrons can bind via the strong interaction and then form atoms with electrons largely through electromagnetic interaction. There is no evidence that dark matter is capable of such a wide variety of interactions, since it seems to only interact through gravity (and possibly through some means no stronger than the weak interaction, although until dark matter is better understood, this is only speculation).
## Detection of dark matter particles
If dark matter is made up of subatomic particles, then millions, possibly billions, of such particles must pass through every square centimeter of the Earth each second. Many experiments aim to test this hypothesis.
|
https://en.wikipedia.org/wiki/Dark_matter
|
passage: In a system that satisfies the homogeneity property, scaling the input always results in scaling the zero-state response by the same factor. In a system that satisfies the additivity property, adding two inputs always results in adding the corresponding two zero-state responses due to the individual inputs.
Mathematically, for a continuous-time system, given two arbitrary inputs
$$
\begin{align} x_1(t) \\ x_2(t) \end{align}
$$
as well as their respective zero-state outputs
$$
\begin{align}
y_1(t) &= H \left \{ x_1(t) \right \} \\
y_2(t) &= H \left \{ x_2(t) \right \}
\end{align}
$$
then a linear system must satisfy
$$
\alpha y_1(t) + \beta y_2(t) = H \left \{ \alpha x_1(t) + \beta x_2(t) \right \}
$$
for any scalar values α and β, for any input signals x₁(t) and x₂(t), and for all time t.
The system is then defined by the equation , where is some arbitrary function of time, and is the system state. Given and the system can be solved for
The behavior of the resulting system subjected to a complex input can be described as a sum of responses to simpler inputs. In nonlinear systems, there is no such relation.
This mathematical property makes the solution of modelling equations simpler than many nonlinear systems.
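A numeric sketch of mine illustrating the superposition property for a discrete-time system implemented as convolution with a fixed kernel (the kernel and signals are arbitrary).
```python
import numpy as np

h = np.array([0.5, 0.3, 0.2])                 # impulse response of an LTI system
H = lambda x: np.convolve(x, h)               # zero-state response

rng = np.random.default_rng(2)
x1, x2 = rng.normal(size=8), rng.normal(size=8)
alpha, beta = 2.0, -1.5

lhs = alpha * H(x1) + beta * H(x2)
rhs = H(alpha * x1 + beta * x2)
print(np.allclose(lhs, rhs))                  # True: the system is linear
```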
|
https://en.wikipedia.org/wiki/Linear_system
|
passage: Surveys on the multi-faceted research aspects of DE can be found in journal articles.
## Algorithm
A basic variant of the DE algorithm works by having a population of candidate solutions (called agents). These agents are moved around in the search-space by using simple mathematical formulae to combine the positions of existing agents from the population. If the new position of an agent is an improvement then it is accepted and forms part of the population, otherwise the new position is simply discarded. The process is repeated and by doing so it is hoped, but not guaranteed, that a satisfactory solution will eventually be discovered.
Formally, let
$$
f: \mathbb{R}^n \to \mathbb{R}
$$
be the fitness function which must be minimized (note that maximization can be performed by considering the function
$$
h := -f
$$
instead). The function takes a candidate solution as argument in the form of a vector of real numbers. It produces a real number as output which indicates the fitness of the given candidate solution. The gradient of
$$
f
$$
is not known. The goal is to find a solution
$$
\mathbf{m}
$$
for which
$$
f(\mathbf{m}) \leq f(\mathbf{p})
$$
for all
$$
\mathbf{p}
$$
in the search-space, which means that
$$
\mathbf{m}
$$
is the global minimum.
Let
$$
\mathbf{x} \in \mathbb{R}^n
$$
designate a candidate solution (agent) in the population.
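A minimal DE/rand/1/bin sketch (my own; the mutation factor F, crossover rate CR, and population size are common but assumed defaults, not values prescribed by the article).
```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=200, seed=0):
    """Minimize f over a box. bounds: sequence of (low, high) per dimension."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    dim = len(bounds)
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop_size, dim))
    fitness = np.array([f(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # Pick three distinct agents other than i and build the mutant vector.
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), bounds[:, 0], bounds[:, 1])
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True               # ensure at least one coordinate changes
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fitness[i]:                     # greedy selection
                pop[i], fitness[i] = trial, f_trial
    best = np.argmin(fitness)
    return pop[best], fitness[best]

sphere = lambda x: float(np.sum(x ** 2))
print(differential_evolution(sphere, [(-5, 5)] * 3))      # should approach the origin
```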
|
https://en.wikipedia.org/wiki/Differential_evolution
|
passage: Bulk micromachining has been essential in enabling high performance pressure sensors and accelerometers that changed the sensor industry in the 1980s and 1990s.
Surface micromachining uses layers deposited on the surface of a substrate as the structural materials, rather than using the substrate itself. Surface micromachining was created in the late 1980s to render micromachining of silicon more compatible with planar integrated circuit technology, with the goal of combining MEMS and integrated circuits on the same silicon wafer. The original surface micromachining concept was based on thin polycrystalline silicon layers patterned as movable mechanical structures and released by sacrificial etching of the underlying oxide layer. Interdigital comb electrodes were used to produce in-plane forces and to detect in-plane movement capacitively. This MEMS paradigm has enabled the manufacturing of low cost accelerometers for e.g. automotive air-bag systems and other applications where low performance and/or high g-ranges are sufficient. Analog Devices has pioneered the industrialization of surface micromachining and has realized the co-integration of MEMS and integrated circuits.
Wafer bonding involves joining two or more substrates (usually having the same diameter) to one another to form a composite structure.
|
https://en.wikipedia.org/wiki/MEMS
|
passage: A digital signature is a mathematical scheme for verifying the authenticity of digital messages or documents. A valid digital signature on a message gives a recipient confidence that the message came from a sender known to the recipient.
Digital signatures are a standard element of most cryptographic protocol suites, and are commonly used for software distribution, financial transactions, contract management software, and in other cases where it is important to detect forgery or tampering.
Digital signatures are often used to implement electronic signatures, which include any electronic data that carries the intent of a signature, but not all electronic signatures use digital signatures. National Archives of Australia Electronic signatures have legal significance in some countries, including Brazil, Canada, South Africa, Russia, the United States, Algeria, Turkey, India, Indonesia, Mexico, Saudi Arabia, Uruguay, Switzerland, Chile and the countries of the European Union.
Digital signatures employ asymmetric cryptography. In many instances, they provide a layer of validation and security to messages sent through a non-secure channel: Properly implemented, a digital signature gives the receiver reason to believe the message was sent by the claimed sender. Digital signatures are equivalent to traditional handwritten signatures in many respects, but properly implemented digital signatures are more difficult to forge than the handwritten type. Digital signature schemes, in the sense used here, are cryptographically based, and must be implemented properly to be effective. They can also provide non-repudiation, meaning that the signer cannot successfully claim they did not sign a message, while also claiming their private key remains secret.
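As a concrete sketch of signing with a private key and verifying with the corresponding public key (my own illustration, using the third-party Python `cryptography` package with Ed25519; this is one possible scheme, not the one the article singles out):
```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"transfer 100 units to account 42"
signature = private_key.sign(message)          # signing uses the private key

try:
    public_key.verify(signature, message)      # verification uses only the public key
    print("signature valid")
except InvalidSignature:
    print("signature invalid")

try:
    public_key.verify(signature, b"tampered message")
except InvalidSignature:
    print("tampering detected")
```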
|
https://en.wikipedia.org/wiki/Digital_signature
|
passage: The following is the formula for a moving charge; for the forces on an intrinsic dipole, see magnetic dipole.
When a charged particle moves through a magnetic field B, it feels a Lorentz force F given by the cross product:
$$
\mathbf{F} = q (\mathbf{v} \times \mathbf{B}) ,
$$
where
$$
q
$$
is the electric charge of the particle, and
v is the velocity vector of the particle.
Because this is a cross product, the force is perpendicular to both the motion of the particle and the magnetic field. The magnitude of the force is
$$
F=qvB\sin\theta\,
$$
where
$$
\theta
$$
is the angle between v and B.
One tool for determining the direction of the velocity vector of a moving charge, the magnetic field, and the force exerted is labeling the index finger "V", the middle finger "B", and the thumb "F" with your right hand. When making a gun-like configuration, with the middle finger crossing under the index finger, the fingers represent the velocity vector, magnetic field vector, and force vector, respectively. See also right-hand rule.
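A small NumPy sketch of the cross-product formula (my addition); the charge, velocity, and field values are arbitrary examples.
```python
import numpy as np

q = 1.602e-19                       # charge of a proton, in coulombs
v = np.array([1.0e5, 0.0, 0.0])     # velocity in m/s, along x
B = np.array([0.0, 0.0, 2.0])       # magnetic field in tesla, along z

F = q * np.cross(v, B)              # F = q (v x B)
print(F)                            # points along -y, perpendicular to both v and B

theta = np.pi / 2                   # v and B are perpendicular here
print(q * np.linalg.norm(v) * np.linalg.norm(B) * np.sin(theta))   # same magnitude, |F| = q v B sin(theta)
```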
## Magnetic dipoles
A very common source of magnetic field found in nature is a dipole, with a "South pole" and a "North pole", terms dating back to the use of magnets as compasses, interacting with the Earth's magnetic field to indicate North and South on the globe. Since opposite ends of magnets are attracted, the north pole of a magnet is attracted to the south pole of another magnet.
|
https://en.wikipedia.org/wiki/Magnetism
|
passage: This definition is sometimes used as a definition of uniform integrability. However, it differs from the definition of uniform integrability given above.
When
$$
\mu(X)<\infty
$$
, a set of functions
$$
\mathcal{F} \subset L^1(X,\mathcal{A},\mu)
$$
is uniformly integrable if and only if it is bounded in
$$
L^1(X,\mathcal{A},\mu)
$$
and has uniformly absolutely continuous integrals. If, in addition,
$$
\mu
$$
is atomless, then the uniform integrability is equivalent to the uniform absolute continuity of integrals.
## Finite measure case
Let
$$
(X,\mathcal{A},\mu)
$$
be a measure space with
$$
\mu(X)<\infty
$$
. Let
$$
(f_n)\subset L^p(X,\mathcal{A},\mu)
$$
and
$$
f
$$
be an
$$
\mathcal{A}
$$
-measurable function. Then, the following are equivalent :
1.
$$
f\in L^p(X,\mathcal{A},\mu)
$$
and
$$
(f_n)
$$
converges to
$$
f
$$
in
$$
L^p(X,\mathcal{A},\mu)
$$
;
1.
|
https://en.wikipedia.org/wiki/Vitali_convergence_theorem
|
passage: ### Pairings
A or pair over a field
$$
\mathbb{K}
$$
is a triple
$$
(X, Y, b),
$$
which may also be denoted by
$$
b(X, Y),
$$
consisting of two vector spaces
$$
X
$$
and
$$
Y
$$
over
$$
\mathbb{K}
$$
and a bilinear map
$$
b : X \times Y \to \mathbb{K}
$$
called the bilinear map associated with the pairing, or more simply called the pairing's map or its bilinear form. The examples here only describe when
$$
\mathbb{K}
$$
is either the real numbers or the complex numbers
$$
\Complex
$$
, but the mathematical theory is general.
For every
$$
x \in X
$$
, define
$$
\begin{alignat}{4}
b(x, \,\cdot\,) : \,& Y && \to &&\, \mathbb{K} \\
& y && \mapsto &&\, b(x, y)
\end{alignat}
$$
and for every
$$
y \in Y,
$$
define
$$
\begin{alignat}{4}
b(\,\cdot\,, y) : \,& X && \to &&\, \mathbb{K} \\
& x && \mapsto &&\, b(x, y)
\end{alignat}
$$
Every
$$
b(x, \,\cdot\,)
$$
is a linear functional on
$$
Y
$$
and every
$$
b(\,\cdot\,, y)
$$
is a linear functional on
$$
X
$$
.
|
https://en.wikipedia.org/wiki/Dual_system
|
passage: Then the stress will be localized to the specific area where the necking appears.
Additionally, we can derive various relations based on the true stress-strain curve.
1) The true stress-strain curve can be approximated by a linear relationship by taking the logarithm of the true stress and strain. The relation can be expressed as below:
$$
\sigma_T=K\times(\varepsilon_T)^n
$$
Where
$$
K
$$
is stress coefficient and
$$
n
$$
is the strain-hardening coefficient. Usually, the value of
$$
n
$$
has range around 0.02 to 0.5 at room temperature. If
$$
n
$$
is 1, we can express this material as perfect elastic material.
2) In reality, stress also depends strongly on the rate of strain. Thus, an empirical equation can be derived based on the strain-rate variation (a numerical sketch of both relations appears below).
$$
\sigma_T=K'\times(\dot{\varepsilon_T})^m
$$
Where
$$
K'
$$
is a constant related to the material flow stress.
$$
\dot{\varepsilon_T}
$$
denotes the time derivative of strain, also known as the strain rate.
$$
m
$$
is the strain-rate sensitivity. Moreover, the value of
$$
m
$$
is related to the resistance to necking. Usually, the value of
$$
m
$$
is in the range 0–0.1 at room temperature and can be as high as 0.8 at elevated temperatures.
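The following minimal sketch evaluates both empirical relations above (Python; the values of K, n, K′ and m are arbitrary illustrative numbers, not measured material data):

```python
# Illustrative parameter values only (not material constants).
K, n = 500.0, 0.2        # stress coefficient (MPa) and strain-hardening exponent
Kp, m = 300.0, 0.05      # flow-stress constant (MPa) and strain-rate sensitivity

eps_true = 0.1           # true strain
eps_rate = 1e-3          # true strain rate, 1/s

sigma_hardening = K * eps_true ** n      # sigma_T = K * (eps_T)^n
sigma_rate = Kp * eps_rate ** m          # sigma_T = K' * (d eps_T / dt)^m
print(sigma_hardening, sigma_rate)
```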
|
https://en.wikipedia.org/wiki/Deformation_%28engineering%29
|
passage: The size of the fetus's head is more closely matched to the pelvis than in other primates. The reason for this is not completely understood, but it contributes to a painful labor that can last 24 hours or more. The chances of a successful labor increased significantly during the 20th century in wealthier countries with the advent of new medical technologies. In contrast, pregnancy and natural childbirth remain hazardous ordeals in developing regions of the world, with maternal death rates approximately 100 times greater than in developed countries.
Both the mother and the father provide care for human offspring, in contrast to other primates, where parental care is mostly done by the mother. Helpless at birth, humans continue to grow for some years, typically reaching sexual maturity at 15 to 17 years of age. The human life span has been split into various stages ranging from three to twelve. Common stages include infancy, childhood, adolescence, adulthood and old age. The lengths of these stages have varied across cultures and time periods but is typified by an unusually rapid growth spurt during adolescence. Human females undergo menopause and become infertile at around the age of 50. It has been proposed that menopause increases a woman's overall reproductive success by allowing her to invest more time and resources in her existing offspring, and in turn their children (the grandmother hypothesis), rather than by continuing to bear children into old age.
The life span of an individual depends on two major factors, genetics and lifestyle choices. For various reasons, including biological/genetic causes, women live on average about four years longer than men.
|
https://en.wikipedia.org/wiki/Human
|
passage: then for all
$$
z,
$$
either
$$
x < z \text{ or } z < y
$$
or both.
- If
$$
x
$$
is incomparable with
$$
y
$$
then for all
$$
z
$$
, either (
$$
x < z \text{ and } y < z
$$
) or (
$$
z < x \text{ and } z < y
$$
) or (
$$
z
$$
is incomparable with
$$
x
$$
and
$$
z
$$
is incomparable with
$$
y
$$
).
### Total preorders
Strict weak orders are very closely related to total preorders or (non-strict) weak orders, and the same mathematical concepts that can be modeled with strict weak orderings can be modeled equally well with total preorders. A total preorder or weak order is a preorder in which any two elements are comparable. A total preorder
$$
\,\lesssim\,
$$
satisfies the following properties:
- Transitivity: For all
$$
x, y, \text{ and } z,
$$
if
$$
x \lesssim y \text{ and } y \lesssim z
$$
then
$$
x \lesssim z.
$$
- Totality (strong connectedness): For all
$$
x \text{ and } y,
$$
$$
x \lesssim y \text{ or } y \lesssim x.
$$
- Which implies reflexivity: for all
$$
x,
$$
$$
x \lesssim x.
$$
A total order is a total preorder which is antisymmetric, in other words, which is also a partial order.
|
https://en.wikipedia.org/wiki/Weak_ordering
|
passage: ### Projection and rejection
For any vector
$$
a
$$
and any invertible vector m,
$$
a = amm^{-1} = (a\cdot m + a \wedge m)m^{-1} = a_{\| m} + a_{\perp m} ,
$$
where the projection of
$$
a
$$
onto
$$
m
$$
(or the parallel part) is
$$
a_{\| m} = (a \cdot m)m^{-1}
$$
and the rejection of
$$
a
$$
from
$$
m
$$
(or the orthogonal part) is
$$
a_{\perp m} = a - a_{\| m} = (a\wedge m)m^{-1} .
$$
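For an ordinary Euclidean vector m, the geometric-algebra inverse is simply m⁻¹ = m/|m|², so the projection and rejection above reduce to the familiar vector formulas. A minimal numerical check (Python with NumPy; the vectors a and m are arbitrary examples):

```python
import numpy as np

a = np.array([2.0, 1.0, 3.0])
m = np.array([1.0, 1.0, 0.0])

# For a vector m, the inverse is m / |m|^2, so (a . m) m^{-1} is the
# usual projection of a onto m.
a_par = (a @ m) / (m @ m) * m     # projection of a onto m
a_perp = a - a_par                # rejection of a from m

print(a_par + a_perp, a)          # the two parts recompose a
print(a_perp @ m)                 # ~0: the rejection is orthogonal to m
```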
Using the concept of a blade as representing a subspace, and every multivector ultimately being expressed in terms of vectors, this generalizes to the projection of a general multivector onto any invertible blade B as
$$
\mathcal{P}_B (A) = (A \;\rfloor\; B) \;\rfloor\; B^{-1} ,
$$
with the rejection being defined as
$$
\mathcal{P}_B^\perp (A) = A - \mathcal{P}_B (A) .
$$
The projection and rejection generalize to null blades
$$
B
$$
by replacing the inverse
$$
B^{-1}
$$
with the pseudoinverse
$$
B^{+}
$$
with respect to the contractive product.
|
https://en.wikipedia.org/wiki/Geometric_algebra
|
passage: Computational costs are relatively low when compared to traditional methods, such as exchange only Hartree–Fock theory and its descendants that include electron correlation. Since then, DFT has become an important tool for methods of nuclear spectroscopy such as Mössbauer spectroscopy or perturbed angular correlation, in order to understand the origin of specific electric field gradients in crystals.
Despite recent improvements, there are still difficulties in using density functional theory to properly describe: intermolecular interactions (of critical importance to understanding chemical reactions), especially van der Waals forces (dispersion); charge transfer excitations; transition states, global potential energy surfaces, dopant interactions and some strongly correlated systems; and in calculations of the band gap and ferromagnetism in semiconductors. The incomplete treatment of dispersion can adversely affect the accuracy of DFT (at least when used alone and uncorrected) in the treatment of systems which are dominated by dispersion (e.g. interacting noble gas atoms) or where dispersion competes significantly with other effects (e.g. in biomolecules). The development of new DFT methods designed to overcome this problem, by alterations to the functional or by the inclusion of additive terms, is a current research topic.
## Classical density functional theory
Classical density functional theory uses a similar formalism to calculate the properties of non-uniform classical fluids.
Despite the current popularity of these alterations or of the inclusion of additional terms, they are reported to stray away from the search for the exact functional.
|
https://en.wikipedia.org/wiki/Density_functional_theory
|
passage: Computer graphics techniques such as normal and bump mapping have been designed to compensate for the decrease of polygons by making a low poly object appear to contain more detail than it does. This is done by altering the shading of polygons to contain internal detail which is not in the mesh.
## Polygon budget
A combination of the game engine or rendering method and the computer being used defines the polygon budget: the number of polygons which can appear in a scene and still be rendered at an acceptable frame rate. Therefore, the use of low poly meshes is mostly confined to computer games and other software in which a user must manipulate 3D objects in real time, because processing power is limited to that of a typical personal computer or games console and the frame rate must be high. Computer-generated imagery, for example for films or still images, has a higher polygon budget because rendering does not need to be done in real time, which would require higher frame rates. In addition, computer processing power in these situations is typically less limited, often using a large network of computers or what is known as a render farm. Each frame can take hours to create, despite the enormous computer power involved. A common example of the difference this makes is full motion video sequences in computer games which, because they can be pre-rendered, look much smoother than the games themselves.
## Aesthetic
Models that are said to be low poly often appear blocky and simple while still maintaining the basic shape of what they are meant to represent. With computer graphics getting more powerful, it has become increasingly computationally cheap to render low poly graphics.
|
https://en.wikipedia.org/wiki/Low_poly
|
passage: 1&0&0\\
0.5&1&0\\
0.5&0&1\end{bmatrix}\begin{bmatrix}
2.5\\
2.5\\
0\end{bmatrix}=\begin{bmatrix}
2.5\\
3.75\\
1.25\end{bmatrix}
$$
According to the definition of
$$
\mathbf{Y}
$$
, we have:
$$
P(Y_1=0)=\left(\sum_{n=0}^\infty \left(\frac{2.5}{15}\right)^n\right)^{-1}=\frac{5}{6}
$$
$$
P(Y_2=0)=\left(\sum_{n=0}^\infty \left(\frac{3.75}{12}\right)^n\right)^{-1}=\frac{11}{16}
$$
$$
P(Y_3=0)=\left(\sum_{n=0}^\infty \left(\frac{1.25}{10}\right)^n\right)^{-1}=\frac{7}{8}
$$
Hence the probability that there is one job at each node is:
$$
\pi(1,1,1)=\frac{5}{6}\cdot\frac{2.5}{15}\cdot\frac{11}{16}\cdot\frac{3.75}{12}\cdot\frac{7}{8}\cdot\frac{1.25}{10}\approx 0.00326
$$
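A quick numerical re-check of these figures (plain Python; the arrival rates λ = (2.5, 3.75, 1.25) and service rates μ = (15, 12, 10) are read off from the formulas above):

```python
lam = [2.5, 3.75, 1.25]           # effective arrival rates at the three nodes
mu = [15.0, 12.0, 10.0]           # service rates

rho = [l / m for l, m in zip(lam, mu)]
p_empty = [1.0 - r for r in rho]  # (sum_n rho^n)^{-1} = 1 - rho for rho < 1
print(p_empty)                    # [0.8333..., 0.6875, 0.875], i.e. 5/6, 11/16, 7/8

pi_111 = 1.0
for r, p0 in zip(rho, p_empty):
    pi_111 *= p0 * r              # each node contributes P(node empty) * rho
print(round(pi_111, 5))           # ~0.00326
```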
Since
|
https://en.wikipedia.org/wiki/Jackson_network
|
passage: ## Relationship to the Kronecker delta
The Kronecker delta is the quantity defined by
$$
\delta_{ij} = \begin{cases} 1 & i=j\\ 0 &i\not=j \end{cases}
$$
for all integers i, j. This function then satisfies the following analog of the sifting property: if a_i (for i in the set of all integers) is any doubly infinite sequence, then
$$
\sum_{i=-\infty}^\infty a_i \delta_{ik}=a_k.
$$
Similarly, for any real or complex valued continuous function f on ℝ, the Dirac delta satisfies the sifting property
$$
\int_{-\infty}^\infty f(x)\delta(x-x_0)\,dx=f(x_0).
$$
This exhibits the Kronecker delta function as a discrete analog of the Dirac delta function.
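Both sifting properties are easy to check numerically. The sketch below (Python with NumPy; the sequence a, the index k, the test function cos and the Gaussian width are arbitrary choices) uses a narrow Gaussian as a stand-in for the Dirac delta:

```python
import numpy as np

# Discrete sifting: sum_i a_i * delta_{ik} = a_k
a = np.array([3.0, 1.0, 4.0, 1.0, 5.0])
k = 2
delta = np.eye(len(a))                    # delta[i, j] is the Kronecker delta
print(np.sum(a * delta[:, k]), a[k])      # both are 4.0

# Continuous sifting, with the Dirac delta approximated by a narrow Gaussian
x = np.linspace(-5.0, 5.0, 200_001)
eps = 1e-2
delta_eps = np.exp(-(x - 1.0) ** 2 / (2 * eps ** 2)) / (eps * np.sqrt(2 * np.pi))
print(np.trapz(np.cos(x) * delta_eps, x), np.cos(1.0))   # approximately equal
```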
## Applications
### Probability theory
In probability theory and statistics, the Dirac delta function is often used to represent a discrete distribution, or a partially discrete, partially continuous distribution, using a probability density function (which is normally used to represent absolutely continuous distributions).
|
https://en.wikipedia.org/wiki/Dirac_delta_function
|
passage: Nagle's algorithm is a means of improving the efficiency of TCP/IP networks by reducing the number of packets that need to be sent over the network. It was defined by John Nagle while working for Ford Aerospace. It was published in 1984 as a Request for Comments (RFC) with title Congestion Control in IP/TCP Internetworks in .
The RFC describes what Nagle calls the "small-packet problem", where an application repeatedly emits data in small chunks, frequently only 1 byte in size. Since TCP packets have a 40-byte header (20 bytes for TCP, 20 bytes for IPv4), this results in a 41-byte packet for 1 byte of useful information, a huge overhead. This situation often occurs in Telnet sessions, where most keypresses generate a single byte of data that is transmitted immediately. Worse, over slow links, many such packets can be in transit at the same time, potentially leading to congestion collapse.
Nagle's algorithm works by combining a number of small outgoing messages and sending them all at once. Specifically, as long as there is a sent packet for which the sender has received no acknowledgment, the sender should keep buffering its output until it has a full packet's worth of output, thus allowing output to be sent all at once.
## Algorithm
The RFC defines the algorithm as
inhibit the sending of new TCP segments when new outgoing data arrives from the user if any previously transmitted data on the connection remains unacknowledged.
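A minimal, illustrative sketch of this buffering rule (Python; the class name NagleSender, the MSS value and the callback interface are assumptions for demonstration, not a real TCP implementation):

```python
class NagleSender:
    """Toy model of Nagle's algorithm: buffer small writes while data is unacknowledged."""
    MSS = 1460  # maximum segment size in bytes (a typical value, assumed here)

    def __init__(self, transmit):
        self.transmit = transmit      # callback that actually sends a segment
        self.buffer = bytearray()
        self.unacked = 0              # bytes sent but not yet acknowledged

    def send(self, data: bytes):
        self.buffer.extend(data)
        self._try_send()

    def ack(self, nbytes: int):
        self.unacked -= nbytes
        self._try_send()

    def _try_send(self):
        # Send immediately only if a full segment is available or nothing is in flight.
        while self.buffer and (len(self.buffer) >= self.MSS or self.unacked == 0):
            segment = bytes(self.buffer[:self.MSS])
            del self.buffer[:self.MSS]
            self.unacked += len(segment)
            self.transmit(segment)

sent = []
s = NagleSender(sent.append)
for ch in b"hello":          # five 1-byte writes, as in a Telnet session
    s.send(bytes([ch]))
print(sent)                  # only the first byte goes out immediately; the rest are buffered
s.ack(1)
print(sent)                  # once acknowledged, the buffered bytes are sent together
```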
|
https://en.wikipedia.org/wiki/Nagle%27s_algorithm
|
passage: When that is done for each possible key length, the highest average index of coincidence then corresponds to the most-likely key length. Such tests may be supplemented by information from the Kasiski examination.
### Frequency analysis
Once the length of the key is known, the ciphertext can be rewritten into that many columns, with each column corresponding to a single letter of the key. Each column consists of plaintext that has been encrypted by a single Caesar cipher. The Caesar key (shift) is just the letter of the Vigenère key that was used for that column. Using methods similar to those used to break the Caesar cipher, the letters in the ciphertext can be discovered.
An improvement to the Kasiski examination, known as Kerckhoffs' method, matches each column's letter frequencies to shifted plaintext frequencies to discover the key letter (Caesar shift) for that column. Once every letter in the key is known, all the cryptanalyst has to do is to decrypt the ciphertext and reveal the plaintext. Kerckhoffs' method is not applicable if the Vigenère table has been scrambled, rather than using normal alphabetic sequences, but Kasiski examination and coincidence tests can still be used to determine key length.
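A sketch of Kerckhoffs-style key-letter recovery for one column (Python; the letter-frequency table is an approximate English distribution, and the sample text, function name and chi-squared scoring are illustrative assumptions):

```python
from collections import Counter

# Approximate English letter frequencies in percent, for A..Z (rough values).
ENGLISH = [8.2, 1.5, 2.8, 4.3, 12.7, 2.2, 2.0, 6.1, 7.0, 0.15, 0.77, 4.0, 2.4,
           6.7, 7.5, 1.9, 0.095, 6.0, 6.3, 9.1, 2.8, 0.98, 2.4, 0.15, 2.0, 0.074]

def caesar_shift_of_column(column: str) -> int:
    """Try all 26 shifts; keep the one whose frequencies best match English."""
    counts = Counter(column)
    n = len(column)
    best_shift, best_chi2 = 0, float("inf")
    for shift in range(26):
        chi2 = 0.0
        for i, expected_pct in enumerate(ENGLISH):
            observed = counts[chr((i + shift) % 26 + ord("A"))]
            expected = expected_pct / 100.0 * n
            chi2 += (observed - expected) ** 2 / expected
        if chi2 < best_chi2:
            best_shift, best_chi2 = shift, chi2
    return best_shift  # 0 -> key letter 'A', 1 -> 'B', ...

# Example column encrypted with key letter 'D' (shift 3); uppercase letters only.
plain = "THEQUICKBROWNFOXJUMPSOVERTHELAZYDOGANDTHENSOMEMORETEXT"
cipher = "".join(chr((ord(c) - ord("A") + 3) % 26 + ord("A")) for c in plain)
print(chr(caesar_shift_of_column(cipher) + ord("A")))  # expected 'D' for long enough English text
```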
### Key elimination
The Vigenère cipher, with normal alphabets, essentially uses modulo arithmetic, which is commutative.
|
https://en.wikipedia.org/wiki/Vigen%C3%A8re_cipher
|
passage: A number of deep sea creatures are bioluminescent; this serves a variety of functions including predation, protection and social recognition. In general, the bodies of animals living at great depths are adapted to high pressure environments by having pressure-resistant biomolecules and small organic molecules present in their cells known as piezolytes, which give the proteins the flexibility they need. There are also unsaturated fats in their membranes which prevent them from solidifying at low temperatures.
Hydrothermal vents were first discovered in the ocean depths in 1977. They result from seawater becoming heated after seeping through cracks to places where hot magma is close to the seabed. The under-water hot springs may gush forth at temperatures of over and support unique communities of organisms in their immediate vicinity. The basis for this teeming life is chemosynthesis, a process by which microbes convert such substances as hydrogen sulfide or ammonia into organic molecules. These bacteria and Archaea are the primary producers in these ecosystems and support a diverse array of life. About 350 species of organism, dominated by molluscs, polychaete worms and crustaceans, had been discovered around hydrothermal vents by the end of the twentieth century, most of them being new to science and endemic to these habitat types.
Besides providing locomotion opportunities for winged animals and a conduit for the dispersal of pollen grains, spores and seeds, the atmosphere can be considered to be a habitat-type in its own right.
|
https://en.wikipedia.org/wiki/Habitat
|
passage: A classically truncated HOSVD is obtained by replacing step 2 in the classic computation by
- Compute a rank-
$$
\bar R_m
$$
truncated SVD
$$
\mathcal{A}_{[m]} \approx U_m \Sigma_m V^T_m
$$
, and store the top
$$
\bar R_m
$$
left singular vectors
$$
U_m \in F^{I_m \times \bar R_m}
$$
;
while a sequentially truncated HOSVD (or successively truncated HOSVD) is obtained by replacing step 2 in the interlaced computation by
- Compute a rank-
$$
\bar R_m
$$
truncated SVD
$$
\mathcal{A}_{[m]}^{m-1} \approx U_m \Sigma_m V^T_m
$$
, and store the top
$$
\bar R_m
$$
left singular vectors
$$
U_m \in F^{I_m \times \bar R_m}
$$
. Unfortunately, truncation does not result in an optimal solution for the best low multilinear rank optimization problem. However, both the classically and interleaved truncated HOSVD result in a quasi-optimal solution: if
$$
\mathcal{\bar A}_t
$$
denotes the classically or sequentially truncated HOSVD and
$$
\mathcal{\bar A}^*
$$
denotes the optimal solution to the best low multilinear rank approximation problem, then
$$
\| \mathcal{A} - \mathcal{\bar A}_t \|_F \le \sqrt{M} \| \mathcal{A} - \mathcal{\bar A}^* \|_F;
$$
in practice this means that if there exists an optimal solution with a small error, then a truncated HOSVD will for many intended purposes also yield a sufficiently good solution.
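A compact NumPy sketch of the classically truncated HOSVD described above (the helper names mode_unfold and mode_dot, the random tensor and the chosen ranks are illustrative assumptions; the sequentially truncated variant would instead take the SVD of the partially compressed tensor at each step):

```python
import numpy as np

def mode_unfold(T, mode):
    # Mode-m unfolding: move the chosen axis to the front and flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    # Multiply tensor T by matrix M along the given mode.
    Tm = np.moveaxis(T, mode, 0)
    out = (M @ Tm.reshape(Tm.shape[0], -1)).reshape((M.shape[0],) + Tm.shape[1:])
    return np.moveaxis(out, 0, mode)

def truncated_hosvd(A, ranks):
    # Classically truncated HOSVD: truncated SVD of each unfolding of A itself.
    factors = []
    for m, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(mode_unfold(A, m), full_matrices=False)
        factors.append(U[:, :r])
    core = A
    for m, U in enumerate(factors):
        core = mode_dot(core, U.conj().T, m)
    return core, factors

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 9, 10))
core, factors = truncated_hosvd(A, (4, 4, 4))

A_hat = core
for m, U in enumerate(factors):
    A_hat = mode_dot(A_hat, U, m)
print(np.linalg.norm(A - A_hat) / np.linalg.norm(A))  # relative truncation error
```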
|
https://en.wikipedia.org/wiki/Higher-order_singular_value_decomposition
|
passage: If, when a vertex is colored, there exists an edge connecting it to a previously-colored vertex with the same color, then this edge together with the paths in the breadth-first search forest connecting its two endpoints to their lowest common ancestor forms an odd cycle. If the algorithm terminates without finding an odd cycle in this way, then it must have found a proper coloring, and can safely conclude that the graph is bipartite.
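A minimal sketch of this procedure (Python; the adjacency-list input format, the function name bfs_bipartite and the return convention are assumptions for illustration). It either returns a proper 2-coloring or reconstructs an odd cycle from the conflicting edge and the BFS-forest paths to the lowest common ancestor:

```python
from collections import deque

def bfs_bipartite(adj):
    """Return ('bipartite', coloring) or ('odd cycle', list_of_vertices)."""
    color, parent = {}, {}
    for s in adj:
        if s in color:
            continue
        color[s], parent[s] = 0, None
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    parent[v] = u
                    q.append(v)
                elif color[v] == color[u]:
                    # Conflict edge (u, v): build the odd cycle through the LCA.
                    def path_to_root(x):
                        p = []
                        while x is not None:
                            p.append(x)
                            x = parent[x]
                        return p
                    pu, pv = path_to_root(u), path_to_root(v)
                    ancestors_u = set(pu)
                    lca = next(x for x in pv if x in ancestors_u)
                    cycle = pu[:pu.index(lca) + 1] + pv[:pv.index(lca)][::-1]
                    return ("odd cycle", cycle)
    return ("bipartite", color)

square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(bfs_bipartite(square))    # ('bipartite', {0: 0, 1: 1, 3: 1, 2: 0})
print(bfs_bipartite(triangle))  # ('odd cycle', [1, 0, 2])
```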
For the intersection graphs of
$$
n
$$
line segments or other simple shapes in the Euclidean plane, it is possible to test whether the graph is bipartite and return either a two-coloring or an odd cycle in time
$$
O(n\log n)
$$
, even though the graph itself may have up to
$$
O(n^2)
$$
edges.
### Odd cycle transversal
Odd cycle transversal is an NP-complete algorithmic problem that asks, given a graph G = (V,E) and a number k, whether there exists a set of k vertices whose removal from G would cause the resulting graph to be bipartite. The problem is fixed-parameter tractable, meaning that there is an algorithm whose running time can be bounded by a polynomial function of the size of the graph multiplied by a larger function of k. The name odd cycle transversal comes from the fact that a graph is bipartite if and only if it has no odd cycles. Hence, to delete vertices from a graph in order to obtain a bipartite graph, one needs to "hit all odd cycles", or find a so-called odd cycle transversal set.
|
https://en.wikipedia.org/wiki/Bipartite_graph
|
passage:
| Study | Input data | Study area | Method | Results |
|---|---|---|---|---|
| Landslide susceptibility zonation through ratings (Chauhan, S., Sharma, M., Arora, M. K., & Gupta, N. K. (2010). Landslide susceptibility zonation through ratings derived from artificial neural network. International Journal of Applied Earth Observation and Geoinformation, 12(5), 340–350.) | Spatial data layers with slope, aspect, relative relief, lithology, structural features, land use, land cover, drainage density | Parts of Chamoli and Rudraprayag districts of the State of Uttarakhand, India | Artificial neural network (ANN) | The AUC of this approach reaches 0.88; the approach generated an accurate assessment of landslide risks. |
| Regional landslide hazard analysis | Topographic slope, aspect, and curvature; distance from drainage, lithology, distance from lineament, land cover from TM satellite images, vegetation index (NDVI), precipitation data | Eastern Selangor state, Malaysia | Artificial neural network (ANN) | The approach achieved 82.92% accuracy of prediction. |
### Feature identification and detection
#### Discontinuity analyses
Discontinuities such as fault planes and bedding planes have important implications in civil engineering. Rock fractures can be recognized automatically by machine learning through photogrammetric analysis, even with the presence of interfering objects such as vegetation. In ML training for classifying images, data augmentation is a common practice to avoid overfitting and increase the training dataset size and variability.
|
https://en.wikipedia.org/wiki/Machine_learning_in_earth_sciences
|
passage: For example, compact operators on Banach spaces have many spectral properties similar to that of matrices.
## Physical background
The background in the physics of vibrations has been explained in this way:
Such physical ideas have nothing to do with the mathematical theory on a technical level, but there are examples of indirect involvement (see for example Mark Kac's question Can you hear the shape of a drum?). Hilbert's adoption of the term "spectrum" has been attributed to an 1897 paper of Wilhelm Wirtinger on Hill differential equation (by Jean Dieudonné), and it was taken up by his students during the first decade of the twentieth century, among them Erhard Schmidt and Hermann Weyl. The conceptual basis for Hilbert space was developed from Hilbert's ideas by Erhard Schmidt and Frigyes Riesz. It was almost twenty years later, when quantum mechanics was formulated in terms of the Schrödinger equation, that the connection was made to atomic spectra; a connection with the mathematical physics of vibration had been suspected before, as remarked by Henri Poincaré, but rejected for simple quantitative reasons, absent an explanation of the Balmer series. The later discovery in quantum mechanics that spectral theory could explain features of atomic spectra was therefore fortuitous, rather than being an object of Hilbert's spectral theory.
## A definition of spectrum
Consider a bounded linear transformation T defined everywhere over a general Banach space.
|
https://en.wikipedia.org/wiki/Spectral_theory
|
passage: There are many peer-to-peer network protocols for open-source distributed file systems for cloud or closed-source clustered file systems, e. g.: 9P, AFS, Coda, CIFS/SMB, DCE/DFS, WekaFS, Lustre, PanFS, Google File System, Mnet, Chord Project.
Examples
- Alluxio
- BeeGFS (Fraunhofer)
- CephFS (Inktank, Red Hat, SUSE)
- Windows Distributed File System (DFS) (Microsoft)
- Infinit (acquired by Docker)
- GfarmFS
- GlusterFS (Red Hat)
- GFS (Google Inc.)
- GPFS (IBM)
- HDFS (Apache Software Foundation)
- IPFS (Inter Planetary File System)
- iRODS
- LizardFS (Skytechnology)
- Lustre
- MapR FS
- MooseFS (Core Technology / Gemius)
- ObjectiveFS
- OneFS (EMC Isilon)
- OrangeFS (Clemson University, Omnibond Systems), formerly Parallel Virtual File System
- PanFS (Panasas)
- Parallel Virtual File System (Clemson University, Argonne National Laboratory, Ohio Supercomputer Center)
- RozoFS (Rozo Systems)
- SMB/CIFS
- Torus (CoreOS)
- WekaFS (WekaIO)
- XtreemFS
## Network-attached storage
Network-attached storage (NAS) provides both storage and a file system, like a shared disk file system on top of a storage area network (SAN).
|
https://en.wikipedia.org/wiki/Clustered_file_system
|
passage: So the uncountable
$$
2^{\mathbb N}
$$
is also not enumerable and it can also be mapped onto
$$
{\mathbb N}
$$
. Classically, the Schröder–Bernstein theorem is valid and says that any two sets which are in the injective image of one another are in bijection as well. Here, every unbounded subset of
$$
{\mathbb N}
$$
is then in bijection with
$$
{\mathbb N}
$$
itself, and every subcountable set (a property in terms of surjections) is then already countable, i.e. in the surjective image of
$$
{\mathbb N}
$$
. In this context the possibilities are then exhausted, making "
$$
\le
$$
" a non-strict partial order, or even a total order when assuming choice. The diagonal argument thus establishes that, although both sets under consideration are infinite, there are actually more infinite sequences of ones and zeros than there are natural numbers.
Cantor's result then also implies that the notion of the set of all sets is inconsistent: If
$$
S
$$
were the set of all sets, then
$$
{\mathcal P}(S)
$$
would at the same time be bigger than
$$
S
$$
and a subset of
$$
S
$$
.
|
https://en.wikipedia.org/wiki/Cantor%27s_diagonal_argument
|
passage: If is equal to the sum of its Taylor series for all in the complex plane, it is called entire. The polynomials, exponential function , and the trigonometric functions sine and cosine, are examples of entire functions. Examples of functions that are not entire include the square root, the logarithm, the trigonometric function tangent, and its inverse, arctan. For these functions the Taylor series do not converge if is far from . That is, the Taylor series diverges at if the distance between and is larger than the radius of convergence. The Taylor series can be used to calculate the value of an entire function at every point, if the value of the function, and of all of its derivatives, are known at a single point.
Uses of the Taylor series for analytic functions include:
1. The partial sums (the Taylor polynomials) of the series can be used as approximations of the function. These approximations are good if sufficiently many terms are included.
1. Differentiation and integration of power series can be performed term by term and is hence particularly easy.
1. An analytic function is uniquely extended to a holomorphic function on an open disk in the complex plane. This makes the machinery of complex analysis available.
1. The (truncated) series can be used to compute function values numerically, (often by recasting the polynomial into the Chebyshev form and evaluating it with the Clenshaw algorithm).
1.
|
https://en.wikipedia.org/wiki/Taylor_series
|
passage: For other bases, difficulties appear already with the apparently simple case of th roots, that is, of exponents
$$
1/n,
$$
where n is a positive integer. Although the general theory of exponentiation with non-integer exponents applies to nth roots, this case deserves to be considered first, since it does not need to use complex logarithms, and is therefore easier to understand.
### nth roots of a complex number
Every nonzero complex number may be written in polar form as
$$
z=\rho e^{i\theta}=\rho(\cos \theta +i \sin \theta),
$$
where
$$
\rho
$$
is the absolute value of z, and
$$
\theta
$$
is its argument. The argument is defined up to an integer multiple of 2π; this means that, if
$$
\theta
$$
is the argument of a complex number, then
$$
\theta +2k\pi
$$
is also an argument of the same complex number for every integer
$$
k
$$
.
The polar form of the product of two complex numbers is obtained by multiplying the absolute values and adding the arguments. It follows that the polar form of an nth root of a complex number can be obtained by taking the nth root of the absolute value and dividing its argument by n:
$$
\left(\rho e^{i\theta}\right)^\frac 1n=\sqrt[n]\rho \,e^\frac{i\theta}n.
$$
If
|
https://en.wikipedia.org/wiki/Exponentiation
|
passage: An application, applying a function
$$
M
$$
to an argument
$$
N
$$
. Both
$$
M
$$
and
$$
N
$$
are lambda terms.
The reduction operations include:
-
$$
(\lambda x.M[x])\rightarrow(\lambda y.M[y])
$$
: α-conversion, renaming the bound variables in the expression. Used to avoid name collisions.
-
$$
((\lambda x.M)\ N)\rightarrow (M[x:=N])
$$
: β-reduction, replacing the bound variables with the argument expression in the body of the abstraction.
If De Bruijn indexing is used, then α-conversion is no longer required as there will be no name collisions. If repeated application of the reduction steps eventually terminates, then by the Church–Rosser theorem it will produce a β-normal form.
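As a toy illustration of β-reduction (a sketch in Python; the tuple encoding of terms and the helper names substitute and beta_step are assumptions made here, and capture-avoiding α-conversion is deliberately omitted by assuming all bound variable names are distinct):

```python
# Terms: ('var', name) | ('lam', name, body) | ('app', func, arg)

def substitute(term, name, value):
    # Assumes all bound variable names are distinct (alpha-conversion already done).
    kind = term[0]
    if kind == 'var':
        return value if term[1] == name else term
    if kind == 'lam':
        _, x, body = term
        return term if x == name else ('lam', x, substitute(body, name, value))
    _, f, a = term
    return ('app', substitute(f, name, value), substitute(a, name, value))

def beta_step(term):
    # Perform one beta-reduction step at the root, or return the term unchanged.
    if term[0] == 'app' and term[1][0] == 'lam':
        _, (_, x, body), arg = term
        return substitute(body, x, arg)
    return term

# (lambda x. lambda y. x) applied to a and then to b reduces to a
K = ('lam', 'x', ('lam', 'y', ('var', 'x')))
t = ('app', ('app', K, ('var', 'a')), ('var', 'b'))
t = beta_step(('app', beta_step(t[1]), t[2]))   # reduce the inner redex, then the outer
print(t)                                        # ('var', 'a')
```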
Variable names are not needed if using a universal lambda function, such as Iota and Jot, which can create any function behavior by calling it on itself in various combinations.
## Explanation and applications
Lambda calculus is Turing complete, that is, it is a universal model of computation that can be used to simulate any Turing machine. Its namesake, the Greek letter lambda (λ), is used in lambda expressions and lambda terms to denote binding a variable in a function.
Lambda calculus may be untyped or typed.
|
https://en.wikipedia.org/wiki/Lambda_calculus
|
passage: ### Diffusivity constant
The diffusivity constant is often not present in mathematical studies of the heat equation, while its value can be very important in engineering. This is not a major difference, for the following reason. Let u be a function with
$$
\frac{\partial u}{\partial t}=\alpha\Delta u.
$$
Define a new function
$$
v(t,x)=u(t/\alpha,x)
$$
. Then, according to the chain rule, one has
$$
\frac{\partial v}{\partial t}(t,x) = \frac{1}{\alpha}\,\frac{\partial u}{\partial t}\left(\frac{t}{\alpha},x\right) = \Delta u\left(\frac{t}{\alpha},x\right) = \Delta v(t,x).
$$
Thus, there is a straightforward way of translating between solutions of the heat equation with a general value of α and solutions of the heat equation with α = 1. As such, for the sake of mathematical analysis, it is often sufficient to only consider the case α = 1.
Since
$$
\alpha>0
$$
there is another option to define a
$$
v
$$
satisfying
$$
\frac{\partial}{\partial t} v = \Delta v
$$
as in () above by setting
$$
v(t,x) = u(t, \alpha^{1/2} x)
$$
. Note that the two possible means of defining the new function
$$
v
$$
discussed here amount, in physical terms, to changing the unit of measure of time or the unit of measure of length.
|
https://en.wikipedia.org/wiki/Heat_equation
|
passage: When the line meets the oval in no point it is an exterior line (or passant), if it meets it in exactly one point it is a tangent line, and if it meets it in two points it is a secant line.
For finite planes (i.e. the set of points is finite) we have a more convenient characterization:
- For a finite projective plane of order n (i.e., any line contains n + 1 points) a set of points is an oval if and only if it consists of n + 1 points, no three of which are collinear (on a common line).
A set of points in an affine plane satisfying the above definition is called an affine oval.
An affine oval is always a projective oval in the projective closure (adding a line at infinity) of the underlying affine plane.
An oval can also be considered as a special quadratic set.
## Examples
### Conic sections
In any pappian projective plane there exist nondegenerate projective conic sections
and any nondegenerate projective conic section is an oval. This statement can be verified by a straightforward calculation for any of the conics (such as the parabola or hyperbola).
Non-degenerate conics are ovals with special properties:
- Pascal's Theorem and its various degenerations are valid.
- There are many projectivities which leave a conic invariant.
### Ovals, which are not conics
in the real plane
1. If one glues one half of a circle and a half of an ellipse smoothly together, one gets a non-conic oval.
1.
|
https://en.wikipedia.org/wiki/Oval_%28projective_plane%29
|
passage: ## Related distributions
- The Erlang distribution is the distribution of the sum of k independent and identically distributed random variables, each having an exponential distribution. The long-run rate at which events occur is the reciprocal of the expectation of
$$
X,
$$
that is,
$$
\lambda/k.
$$
The (age specific event) rate of the Erlang distribution is, for
$$
k>1,
$$
monotonic in
$$
x,
$$
increasing from 0 at
$$
x=0,
$$
to
$$
\lambda
$$
as
$$
x
$$
tends to infinity.
- That is: if
$$
X_i \sim \operatorname{Exponential}(\lambda),
$$
then
$$
\sum_{i=1}^k{X_i} \sim \operatorname{Erlang}(k, \lambda)
$$
- Because of the factorial function in the denominator of the PDF and CDF, the Erlang distribution is only defined when the parameter k is a positive integer. In fact, this distribution is sometimes called the Erlang-k distribution (e.g., an Erlang-2 distribution is an Erlang distribution with
$$
k=2
$$
). The gamma distribution generalizes the Erlang distribution by allowing k to be any positive real number, using the gamma function instead of the factorial function.
- That is: if k is an integer and
$$
X \sim \operatorname{Gamma}(k, \lambda),
$$
then
$$
X \sim \operatorname{Erlang}(k, \lambda)
$$
-
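A quick simulation check of the first bullet above (Python with NumPy; the shape k = 3, rate λ = 2, sample size and evaluation point x = 2 are arbitrary choices): the empirical distribution of a sum of k exponentials should match the Erlang CDF.

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(0)
k, lam = 3, 2.0                          # Erlang shape and rate (illustrative values)
n = 200_000

# Sum of k i.i.d. Exponential(lam) variables should be Erlang(k, lam) distributed.
samples = rng.exponential(scale=1.0 / lam, size=(n, k)).sum(axis=1)

x = 2.0
erlang_cdf = 1.0 - sum(exp(-lam * x) * (lam * x) ** i / factorial(i) for i in range(k))
print((samples <= x).mean(), erlang_cdf)  # the two values should be close
```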
|
https://en.wikipedia.org/wiki/Erlang_distribution
|
passage: In the C++ programming language, special member functions are functions which the compiler will automatically generate if they are used, but not declared explicitly by the programmer.
The automatically generated special member functions are:
- Default constructor if no other constructor is explicitly declared.
- Copy constructor if no move constructor and move assignment operator are explicitly declared.
If a destructor is declared generation of a copy constructor is deprecated (C++11, proposal N3242).
- Move constructor if no copy constructor, copy assignment operator, move assignment operator and destructor are explicitly declared.
- Copy assignment operator if no move constructor and move assignment operator are explicitly declared.
If a destructor is declared, generation of a copy assignment operator is deprecated.
- Move assignment operator if no copy constructor, copy assignment operator, move constructor and destructor are explicitly declared.
- Destructor
In these cases the compiler generated versions of these functions perform a memberwise operation. For example, the compiler generated destructor will destroy each sub-object (base class or member) of the object.
The compiler generated functions will be `public`, non-virtual and the copy constructor and assignment operators will receive `const&` parameters (and not be of the alternative legal forms).
## Example
The following example depicts two classes: for which all special member functions are explicitly declared and for which none are declared.
|
https://en.wikipedia.org/wiki/Special_member_functions
|
passage: One or both of these fields changes as the rotor turns. This is done by switching the poles on and off at the right time, or varying the strength of the pole.
Motors can be designed to operate on DC current, on AC current, or some types can work on either.
AC motors can be either asynchronous or synchronous.
### Synchronous motor
Synchronous motors require the rotor to turn at the same speed as the stator's rotating field. Asynchronous rotors relax this constraint.
A fractional-horsepower motor either has a rating below about 1 horsepower (0.746 kW), or is manufactured with a frame size smaller than a standard 1 HP motor. Many household and industrial motors are in the fractional-horsepower class.
|
https://en.wikipedia.org/wiki/Electric_motor
|
passage: Galois correspondence
Let
$$
X
$$
be a connected and locally simply connected space, then for every subgroup
$$
H\subseteq \pi_1(X)
$$
there exists a path-connected covering
$$
\alpha:X_H \rightarrow X
$$
with
$$
\alpha_{\#}(\pi_1(X_H))=H
$$
.
Let
$$
p_1:E \rightarrow X
$$
and
$$
p_2: E' \rightarrow X
$$
be two path-connected coverings, then they are equivalent iff the subgroups
$$
H = p_{1\#}(\pi_1(E))
$$
and
$$
H'=p_{2\#}(\pi_1(E'))
$$
are conjugate to each other.
|
https://en.wikipedia.org/wiki/Covering_space
|
passage: The total of all the dark matter out to the orbit of Neptune would add up to about 10¹⁷ kg, the same as a large asteroid.
Dark matter is not known to interact with ordinary baryonic matter and radiation except through gravity, making it difficult to detect in the laboratory. The most prevalent explanation is that dark matter is some as-yet-undiscovered subatomic particle, such as either weakly interacting massive particles (WIMPs) or axions. The other main possibility is that dark matter is composed of primordial black holes.
Dark matter is classified as "cold", "warm", or "hot" according to velocity (more precisely, its free streaming length). Recent models have favored a cold dark matter scenario, in which structures emerge by the gradual accumulation of particles.
Although the astrophysics community generally accepts the existence of dark matter, a minority of astrophysicists, intrigued by specific observations that are not well explained by ordinary dark matter, argue for various modifications of the standard laws of general relativity. These include modified Newtonian dynamics, tensor–vector–scalar gravity, or entropic gravity. So far none of the proposed modified gravity theories can describe every piece of observational evidence at the same time, suggesting that even if gravity has to be modified, some form of dark matter will still be required.
## History
### Early history
The hypothesis of dark matter has an elaborate history.
Wm.
|
https://en.wikipedia.org/wiki/Dark_matter
|
passage: The stretching energy is
$$
U_{E}=\frac{1}{2}\lambda \sum_{({\bf w}_i,{\bf w}_j) \in E} \|{\bf w}_i -{\bf w}_j\|^2
$$
,
The bending energy is
$$
U_G=\frac{1}{2}\mu \sum_{({\bf w}_i,{\bf w}_j,{\bf w}_k) \in G} \|{\bf w}_i -2{\bf w}_j+{\bf w}_k\|^2
$$
,
where
$$
\lambda
$$
and
$$
\mu
$$
are the stretching and bending moduli respectively. The stretching energy is sometimes referred to as the membrane, while the bending energy is referred to as the thin plate term.
For example, on the 2D rectangular grid the elastic edges are just vertical and horizontal edges (pairs of closest vertices) and the bending ribs are the vertical or horizontal triplets of consecutive (closest) vertices.
The total energy of the elastic map is thus
$$
U=D+U_E+U_G.
$$
The position of the nodes
$$
\{{\bf w}_j\}
$$
is determined by the mechanical equilibrium of the elastic map, i.e. its location is such that it minimizes the total energy
$$
U
$$
.
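A minimal numerical sketch of these energy terms for a short 1D chain of nodes (Python with NumPy; the node positions, the moduli λ and μ, and the edge/rib sets are arbitrary example data, and the data term D is omitted):

```python
import numpy as np

lam, mu = 1.0, 0.5                       # stretching and bending moduli (example values)
W = np.array([[0.0, 0.0], [1.2, 0.1], [1.9, -0.2], [3.1, 0.0]])  # node positions

edges = [(0, 1), (1, 2), (2, 3)]         # elastic edges: consecutive node pairs
ribs = [(0, 1, 2), (1, 2, 3)]            # bending ribs: consecutive node triplets

U_E = 0.5 * lam * sum(np.sum((W[i] - W[j]) ** 2) for i, j in edges)
U_G = 0.5 * mu * sum(np.sum((W[i] - 2 * W[j] + W[k]) ** 2) for i, j, k in ribs)
print(U_E, U_G, U_E + U_G)               # stretching, bending, and their sum
```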
## Expectation-maximization algorithm
|
https://en.wikipedia.org/wiki/Elastic_map
|
passage: The cecum, while similar to that of dogs, is not coiled.
The stomach of the cat can be divided into distinct regions of motor activity. The proximal end of the stomach relaxes when food is digested. While food is being digested this portion of the stomach either has rapid stationary contractions or a sustained tonic contraction of muscle. These different actions result in either the food being moved around or the food moving towards the distal portion of the stomach. The distal portion of the stomach undergoes rhythmic cycles of partial depolarization. This depolarization sensitizes muscle cells so they are more likely to contract. The stomach is not only a muscular structure, it also serves a chemical function by releasing hydrochloric acid and other digestive enzymes to break down food.
Food moves from the stomach into the small intestine. The first part of the small intestine is the duodenum. As food moves through the duodenum, it mixes with bile, a fluid that neutralizes stomach acid and emulsifies fat. The pancreas releases enzymes that aid in digestion so that nutrients can be broken down and pass through the intestinal mucosa into the blood and travel to the rest of the body. The pancreas does not produce starch processing enzymes because cats do not eat a diet high in carbohydrates. Since the cat digests low amounts of glucose, the pancreas uses amino acids to trigger insulin release instead.
Food then moves on to the jejunum. This is the most nutrient absorptive section of the small intestine.
|
https://en.wikipedia.org/wiki/Cat_anatomy
|
passage: Since there is a finite number of numbers m satisfying
$$
m<2^{n_1}
$$
, we may choose the same number of steps for all of them: there is a number
$$
m_1
$$
, such that
$$
T'
$$
halts after
$$
m_1
$$
steps precisely on those inputs
$$
m<2^{n_1}
$$
for which it halts at all.
Moving to prenex normal form, we get that the oracle machine halts on input
$$
n
$$
if and only if the following formula is satisfied:
$$
\varphi(n) =\exists n_1\exists m_1 \forall m_2 :(\psi_H(m,m_2)\rightarrow (O_m=1)) \land(\lnot\psi_H(m,m_1)\rightarrow (O_m=0))) \land {\varphi_O}_1(n,n_1)
$$
(informally, there is a "maximal number of steps"
$$
m_1
$$
such that every oracle that does not halt within the first
$$
m_1
$$
steps does not stop at all; however, for every
$$
m_2
$$
, each oracle that halts after
$$
m_2
$$
steps does halt).
Note that we may replace both
$$
n_1
$$
and
$$
m_1
$$
by a single number - their maximum - without changing the truth value of
$$
\varphi(n)
$$
. Thus we may write:
|
https://en.wikipedia.org/wiki/Post%27s_theorem
|
passage: The principle is straightforward, but in practice finding a reliable method of determining longitude took centuries and required the effort of some of the greatest scientific minds.
A location's north-south position along a meridian is given by its latitude, which is approximately the angle between the equatorial plane and the normal from the ground at that location.
Longitude is generally given using the geodetic normal or the gravity direction. The astronomical longitude can differ slightly from the ordinary longitude because of vertical deflection, small variations in Earth's gravitational field (see astronomical latitude).
## History
The concept of longitude was first developed by ancient Greek astronomers. Hipparchus (2nd century BCE) used a coordinate system that assumed a spherical Earth, and divided it into 360° as we still do today. His prime meridian passed through Alexandria. He also proposed a method of determining longitude by comparing the local time of a lunar eclipse at two different places, thus demonstrating an understanding of the relationship between longitude and time. Claudius Ptolemy (2nd century CE) developed a mapping system using curved parallels that reduced distortion. He also collected data for many locations, from Britain to the Middle East. He used a prime meridian through the Canary Islands, so that all longitude values would be positive. While Ptolemy's system was sound, the data he used were often poor, leading to a gross over-estimate (by about 70%) of the length of the Mediterranean.
After the fall of the Roman Empire, interest in geography greatly declined in Europe.
|
https://en.wikipedia.org/wiki/Longitude
|
passage: For degrees higher than three, most known results are theorems asserting that there are no solutions (for example Fermat's Last Theorem) or that the number of solutions is finite (for example Faltings's theorem).
For the degree three, there are general solving methods, which work on almost all equations that are encountered in practice, but no algorithm is known that works for every cubic equation.
### Degree two
Homogeneous Diophantine equations of degree two are easier to solve. The standard solving method proceeds in two steps. One has first to find one solution, or to prove that there is no solution. When a solution has been found, all solutions are then deduced.
For proving that there is no solution, one may reduce the equation modulo some integer. For example, the Diophantine equation
$$
x^2+y^2=3z^2,
$$
does not have any other solution than the trivial solution x = y = z = 0. In fact, by dividing x, y, and z by their greatest common divisor, one may suppose that they are coprime. The squares modulo 4 are congruent to 0 and 1. Thus the left-hand side of the equation is congruent to 0, 1, or 2, and the right-hand side is congruent to 0 or 3. Thus the equality may be obtained only if x, y, and z are all even, and are thus not coprime. Thus the only solution is the trivial solution x = y = z = 0. This shows that there is no rational point on a circle of radius
$$
\sqrt{3}
$$
, centered at the origin.
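A brute-force sanity check of this argument (plain Python; the search box of ±30 is an arbitrary choice) confirms that no coprime solution appears and that squares modulo 4 are only 0 or 1:

```python
from math import gcd

# Exhaustive check: x^2 + y^2 = 3 z^2 has no solution with gcd(x, y, z) = 1 in a small box.
found = []
for x in range(-30, 31):
    for y in range(-30, 31):
        for z in range(-30, 31):
            if x * x + y * y == 3 * z * z and gcd(gcd(abs(x), abs(y)), abs(z)) == 1:
                found.append((x, y, z))
print(found)                                   # [] -- only the excluded trivial solution exists
print(sorted({(n * n) % 4 for n in range(8)}))  # squares mod 4 are only 0 or 1
```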
|
https://en.wikipedia.org/wiki/Diophantine_equation
|
passage: ## Metric
A Riemann surface does not come equipped with any particular Riemannian metric. The Riemann surface's conformal structure does, however, determine a class of metrics: all those whose subordinate conformal structure is the given one. In more detail: The complex structure of the Riemann surface does uniquely determine a metric up to conformal equivalence. (Two metrics are said to be conformally equivalent if they differ by multiplication by a positive smooth function.) Conversely, any metric on an oriented surface uniquely determines a complex structure, which depends on the metric only up to conformal equivalence. Complex structures on an oriented surface are therefore in one-to-one correspondence with conformal classes of metrics on that surface.
Within a given conformal class, one can use conformal symmetry to find a representative metric with convenient properties. In particular, there is always a complete metric with constant curvature in any given conformal class.
In the case of the Riemann sphere, the Gauss–Bonnet theorem implies that a constant-curvature metric
$$
\gamma
$$
must have positive curvature
$$
K
$$
. It follows that the metric must be isometric to the sphere of radius
$$
1/\sqrt{K}
$$
in
$$
\mathbf{R}^3
$$
via stereographic projection.
|
https://en.wikipedia.org/wiki/Riemann_sphere
|
passage: Key electrical/electronic components might include:
- precision resistors and capacitors
- operational amplifiers
- multipliers
- potentiometers
- fixed-function generators
The core mathematical operations used in an electric analog computer are:
- addition
- integration with respect to time
- inversion
- multiplication
- exponentiation
- logarithm
- division
In some analog computer designs, multiplication is much preferred to division. Division is carried out with a multiplier in the feedback path of an Operational Amplifier.
Differentiation with respect to time is not frequently used, and in practice is avoided by redefining the problem when possible. It corresponds in the frequency domain to a high-pass filter, which means that high-frequency noise is amplified; differentiation also risks instability.
## Limitations
In general, analog computers are limited by non-ideal effects. An analog signal is composed of four basic components: DC and AC magnitudes, frequency, and phase. The real limits of range on these characteristics limit analog computers. Some of these limits include the operational amplifier offset, finite gain, and frequency response, noise floor, non-linearities, temperature coefficient, and parasitic effects within semiconductor devices. For commercially available electronic components, ranges of these aspects of input and output signals are always figures of merit.
## Decline
In the 1950s to 1970s, digital computers based on first vacuum tubes, transistors, integrated circuits and then micro-processors became more economical and precise. This led digital computers to largely replace analog computers.
|
https://en.wikipedia.org/wiki/Analog_computer
|
passage: This then allows the roots to generate over 0.05 MPa of pressure, which is capable of destroying the blockage and refilling the xylem with water, reconnecting the vascular system. If a plant is unable to generate enough pressure to eradicate the blockage, it must prevent the blockage from spreading with the use of pit pairs and then create new xylem that can re-connect the vascular system of the plant.
Scientists have begun using magnetic resonance imaging (MRI) to monitor the internal status of the xylem during transpiration, in a non invasive manner. This method of imaging allows for scientists to visualize the movement of water throughout the entirety of the plant. It also is capable of viewing what phase the water is in while in the xylem, which makes it possible to visualize cavitation events. Scientists were able to see that over the course of 20 hours of sunlight more than 10 xylem vessels began filling with gas particles becoming cavitated. MRI technology also made it possible to view the process by which these xylem structures are repaired in the plant. After three hours in darkness it was seen that the vascular tissue was resupplied with liquid water. This was possible because in darkness the stomates of the plant are closed and transpiration no longer occurs. When transpiration is halted the cavitation bubbles are destroyed by the pressure generated by the roots. These observations suggest that MRIs are capable of monitoring the functional status of xylem and allows scientists to view cavitation events for the first time.
## Effects on the environment
|
https://en.wikipedia.org/wiki/Transpiration
|
passage: The heat capacity of the reactants (and the vessel) are measured by introducing a known amount of heat using a heater element (voltage and current) and measuring the temperature change.
Adiabatic calorimeters are most commonly used in materials science research to study reactions that occur at a constant pressure and volume. They are particularly useful for determining the heat capacity of substances, measuring the enthalpy changes of chemical reactions, and studying the thermodynamic properties of materials.
Differential scanning calorimeter
In a differential scanning calorimeter (DSC), heat flow into a sample—usually contained in a small aluminium capsule or 'pan'—is measured differentially, i.e., by comparing it to the flow into an empty reference pan.
In a heat flux DSC, both pans sit on a small slab of material with a known (calibrated) heat resistance K. The temperature of the calorimeter is raised linearly with time (scanned), i.e., the heating rate
dT/dt = β
is kept constant. This time linearity requires good design and good (computerized) temperature control. Of course, controlled cooling and isothermal experiments are also possible.
Heat flows into the two pans by conduction. The flow of heat into the sample is larger because of its heat capacity Cp. The difference in flow dq/dt induces a small temperature difference ΔT across the slab. This temperature difference is measured using a thermocouple.
|
https://en.wikipedia.org/wiki/Calorimeter
|
passage: A structure that satisfies all the axioms of the formal system is known as a model of the logical system.
A logical system is:
- Sound, if each well-formed formula that can be inferred from the axioms is satisfied by every model of the logical system.
- Semantically complete, if each well-formed formula that is satisfied by every model of the logical system can be inferred from the axioms.
An example of a logical system is Peano arithmetic. The standard model of arithmetic sets the domain of discourse to be the nonnegative integers and gives the symbols their usual meaning. There are also non-standard models of arithmetic.
## History
Early logic systems includes Indian logic of Pāṇini, syllogistic logic of Aristotle, propositional logic of Stoicism, and Chinese logic of Gongsun Long (c. 325–250 BCE). In more recent times, contributors include George Boole, Augustus De Morgan, and Gottlob Frege. Mathematical logic was developed in 19th century Europe.
David Hilbert instigated a formalist movement called Hilbert’s program as a proposed solution to the foundational crisis of mathematics, that was eventually tempered by Gödel's incompleteness theorems. The QED manifesto represented a subsequent, as yet unsuccessful, effort at formalization of known mathematics.
|
https://en.wikipedia.org/wiki/Formal_system
|
passage: Using tensor notation, we can write all this more compactly. The term
$$
- \rho \phi (\mathbf{x},t) + \mathbf{j} \cdot \mathbf{A}
$$
is actually the inner product of two four-vectors. We package the charge density into the current 4-vector and the potential into the potential 4-vector. These two new vectors are
$$
j^\mu = (\rho,\mathbf{j})\quad\text{and}\quad A_\mu = (-\phi,\mathbf{A})
$$
We can then write the interaction term as
$$
- \rho \phi + \mathbf{j} \cdot \mathbf{A} = j^\mu A_\mu
$$
Additionally, we can package the E and B fields into what is known as the electromagnetic tensor
$$
F_{\mu\nu}
$$
.
|
https://en.wikipedia.org/wiki/Lagrangian_%28field_theory%29
|
passage: In mathematics, convergence tests are methods of testing for the convergence, conditional convergence, absolute convergence, interval of convergence or divergence of an infinite series
$$
\sum_{n=1}^\infty a_n
$$
.
## List of tests
### Limit of the summand
If the limit of the summand is undefined or nonzero, that is
$$
\lim_{n \to \infty}a_n \ne 0
$$
, then the series must diverge. In this sense, the partial sums are Cauchy only if this limit exists and is equal to zero. The test is inconclusive if the limit of the summand is zero. This is also known as the nth-term test, test for divergence, or the divergence test.
### Ratio test
This is also known as d'Alembert's criterion.
Consider two limits
$$
\ell=\liminf_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right|
$$
and
$$
L=\limsup_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right|
$$
. If
$$
\ell>1
$$
, the series diverges. If
$$
L<1
$$
then the series converges absolutely. If
$$
\ell\le1\le L
$$
then the test is inconclusive, and the series may converge absolutely, conditionally or diverge.
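The behaviour of the ratio can be explored numerically, though this only suggests, and does not prove, convergence. A small sketch (plain Python; the two example sequences are arbitrary choices):

```python
import math

def ratio_estimates(a, N=50):
    # Look at |a_{n+1} / a_n| for a few large n as a rough numerical hint.
    return [abs(a(n + 1) / a(n)) for n in range(N - 5, N)]

# a_n = 2^n / n!  -> ratios 2 / (n + 1) -> 0 < 1, so the series converges absolutely
print(ratio_estimates(lambda n: 2.0 ** n / math.factorial(n)))

# a_n = n + 1  -> ratios (n + 2) / (n + 1) -> 1, so the ratio test is inconclusive
print(ratio_estimates(lambda n: float(n + 1)))
```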
|
https://en.wikipedia.org/wiki/Convergence_tests
|
passage: For μ-almost all
$$
x \in X
$$
, one has
$$
\varphi(Tx) = S(\varphi x)
$$
.
The system
$$
(Y, \mathcal{B}, \nu, S)
$$
is then called a factor of
$$
(X, \mathcal{A}, \mu, T)
$$
.
The map
$$
\varphi\;
$$
is an isomorphism of dynamical systems if, in addition, there exists another mapping
$$
\psi:Y \to X
$$
that is also a homomorphism, which satisfies
1. for
$$
\mu
$$
-almost all
$$
x \in X
$$
, one has
$$
x = \psi(\varphi x)
$$
;
1. for
$$
\nu
$$
-almost all
$$
y \in Y
$$
, one has
$$
y = \varphi(\psi y)
$$
.
Hence, one may form a category of dynamical systems and their homomorphisms.
## Generic points
A point x ∈ X is called a generic point if the orbit of the point is distributed uniformly according to the measure.
## Symbolic names and generators
Consider a dynamical system
$$
(X, \mathcal{B}, T, \mu)
$$
, and let Q = {Q1, ..., Qk} be a partition of X into k measurable pair-wise disjoint sets. Given a point x ∈ X, clearly x belongs to only one of the Qi.
|
https://en.wikipedia.org/wiki/Measure-preserving_dynamical_system
|
passage: But if the program is in SSA form, both of these are immediate:
y1 := 1
y2 := 2
x1 := y2
Compiler optimization algorithms that are either enabled or strongly enhanced by the use of SSA include:
- Constant propagation – conversion of computations from runtime to compile time, e.g. treat the instruction `a=3*4+5;` as if it were `a=17;`
- Value range propagation – precompute the potential ranges a calculation could be, allowing for the creation of branch predictions in advance
- Sparse conditional constant propagation – range-check some values, allowing tests to predict the most likely branch
- Dead-code elimination – remove code that will have no effect on the results
- Global value numbering – replace duplicate calculations producing the same result
- Partial-redundancy elimination – removing duplicate calculations previously performed in some branches of the program
- Strength reduction – replacing expensive operations by less expensive but equivalent ones, e.g. replace integer multiply or divide by powers of 2 with the potentially less expensive shift left (for multiply) or shift right (for divide).
- Register allocation – optimize how the limited number of machine registers may be used for calculations
## Converting to SSA
Converting ordinary code into SSA form is primarily a matter of replacing the target of each assignment with a new variable, and replacing each use of a variable with the "version" of the variable reaching that point. For example, consider the following control-flow graph:
Changing the name on the left hand side of "x
$$
\leftarrow
$$
x - 3" and changing the following uses of x to that new name would leave the program unaltered.
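For straight-line code the conversion is just this renaming. A tiny Python illustration (the fresh names x1, x2, y1 are arbitrary choices made here; at control-flow join points a phi-function would additionally be required):

```python
# Before SSA: the variable x is written twice.
x = 1
x = x - 3
y = x * 2

# After SSA renaming: every assignment targets a fresh name, and every use
# refers to the version reaching that point (no phi-function is needed here,
# since there are no control-flow joins).
x1 = 1
x2 = x1 - 3
y1 = x2 * 2
assert y == y1   # the transformation leaves the program's behavior unchanged
```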
|
https://en.wikipedia.org/wiki/Static_single-assignment_form
|
passage: If ξ = 0 is chosen at the wave crest η(0) = η1 integration gives
with F(ψ|m) the incomplete elliptic integral of the first kind. The Jacobi elliptic functions cn and sn are inverses of F(ψ|m) given by
$$
\cos\, \psi = \operatorname{cn} \left( \begin{array}{c|c} \displaystyle \frac{\xi}{\Delta} & m \end{array} \right)
$$
and
$$
\sin\, \psi = \operatorname{sn} \left( \begin{array}{c|c} \displaystyle \frac{\xi}{\Delta} & m \end{array} \right).
$$
With the use of equation (), the resulting cnoidal-wave solution of the KdV equation is found
$$
\eta(\xi) =
BLOCK1$$
What remains, is to determine the parameters: η1, η2, Δ and m.
Relationships between the cnoidal-wave parameters
First, since η1 is the crest elevation and η2 is the trough elevation, it is convenient to introduce the wave height, defined as H = η1 − η2. Consequently, we find for m and for Δ:
$$
m = \frac{H}{\eta_1-\eta_3}
$$
and
$$
\frac{\Delta^2}{m}=\frac{4}{3\,H}
$$
|
https://en.wikipedia.org/wiki/Cnoidal_wave
|
passage: A digital radio switchover would maintain FM as a platform, while moving some services to DAB-only distribution.
DAB+ devices in the UK have been available to the public since 2010.
#### United States
The United States has opted for the proprietary HD Radio technology, a type of in-band on-channel (IBOC) technology. According to iBiquity, "HD Radio" is the company's trade name for its proprietary digital radio system, but the name does not imply either high definition or "hybrid digital" as it is commonly incorrectly referenced.
Transmissions use orthogonal frequency-division multiplexing, a technique which is also used for European terrestrial digital TV broadcast (DVB-T). HD Radio technology was developed and is licensed by iBiquity Digital Corporation. It is widely believed that a major reason for HD radio technology is to offer some limited digital radio services while preserving the relative "stick values" of the stations involved and to ensure that new programming services will be controlled by existing licensees.
The FM digital schemes in the U.S. provide audio at rates from 96 to 128 kilobits per second (kbit/s), with auxiliary "subcarrier" transmissions at up to 64 kbit/s. The AM digital schemes have data rates of about 48 kbit/s, with auxiliary services provided at a much lower data rate. Both the FM and AM schemes use lossy compression techniques to make the best use of the limited bandwidth.
|
https://en.wikipedia.org/wiki/Digital_radio
|
passage: For instance, in a multivariate normal distribution the covariance matrix
$$
\, \Sigma \,
$$
must be positive-definite; this restriction can be imposed by replacing
$$
\; \Sigma = \Gamma^{\mathsf{T}} \Gamma \;,
$$
where
$$
\Gamma
$$
is a real upper triangular matrix and
$$
\Gamma^{\mathsf{T}}
$$
is its transpose.
In practice, restrictions are usually imposed using the method of Lagrange which, given the constraints as defined above, leads to the restricted likelihood equations
$$
\frac{\partial \ell}{\partial \theta} - \frac{\partial h(\theta)^\mathsf{T}}{\partial \theta} \lambda = 0
$$
and
$$
h(\theta) = 0 \;,
$$
where
$$
~ \lambda = \left[ \lambda_{1}, \lambda_{2}, \ldots, \lambda_{r}\right]^\mathsf{T} ~
$$
is a column-vector of Lagrange multipliers and
$$
\; \frac{\partial h(\theta)^\mathsf{T}}{\partial \theta} \;
$$
is the Jacobian matrix of partial derivatives. Naturally, if the constraints are not binding at the maximum, the Lagrange multipliers should be zero.
|
https://en.wikipedia.org/wiki/Maximum_likelihood_estimation
|
passage: Many population genetics models assume that population sizes are constant. Variable population sizes, often in the absence of genetic variation, are treated by the field of population dynamics. Work in this area dates back to the 19th century, and even as far as 1798 when Thomas Malthus formulated the first principle of population dynamics, which later became known as the Malthusian growth model. The Lotka–Volterra predator-prey equations are another famous example. Population dynamics overlap with another active area of research in mathematical biology: mathematical epidemiology, the study of infectious disease affecting populations. Various models of the spread of infections have been proposed and analyzed, and provide important results that may be applied to health policy decisions.
In evolutionary game theory, developed first by John Maynard Smith and George R. Price, selection acts directly on inherited phenotypes, without genetic complications. This approach has been mathematically refined to produce the field of adaptive dynamics.
### Mathematical biophysics
The earlier stages of mathematical biology were dominated by mathematical biophysics, described as the application of mathematics in biophysics, often involving specific physical/mathematical models of biosystems and their components or compartments.
The following is a list of mathematical descriptions and their assumptions.
#### Deterministic processes (dynamical systems)
A fixed mapping between an initial state and a final state.
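The Lotka–Volterra predator–prey system mentioned earlier is a standard example of such a deterministic process; the sketch below (parameter values and initial populations are illustrative, not from the source) integrates it with SciPy:

```python
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, z, a=1.0, b=0.1, c=1.5, d=0.075):
    """Hypothetical rates: prey growth a, predation b, predator death c, conversion d."""
    prey, predators = z
    return [a * prey - b * prey * predators,
            -c * predators + d * prey * predators]

# The same initial state always yields the same trajectory: the mapping is deterministic.
sol = solve_ivp(lotka_volterra, (0.0, 50.0), [10.0, 5.0], dense_output=True)
print(sol.y[:, -1])   # prey and predator numbers at t = 50
```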
|
https://en.wikipedia.org/wiki/Mathematical_and_theoretical_biology
|
passage: The set of measurable functions is closed under algebraic operations, but more importantly it is closed under various kinds of point-wise sequential limits:
$$
\sup_{k \in \N} f_k, \quad \liminf_{k \in \N} f_k, \quad \limsup_{k \in \N} f_k
$$
are measurable if the original sequence $(f_k)$, where $k \in \N$, consists of measurable functions.
There are several approaches for defining an integral for measurable real-valued functions defined on $E$, and several notations are used to denote such an integral.
$$
\int_E f \,d\mu = \int_E f(x)\,d\mu(x) = \int_E f(x)\,\mu(dx).
$$
Following the identification in Distribution theory of measures with distributions of order $0$, or with Radon measures, one can also use a dual pair notation and write the integral with respect to $\mu$ in the form
$$
\langle \mu, f\rangle.
$$
## Definition
The theory of the Lebesgue integral requires a theory of measurable sets and measures on these sets, as well as a theory of measurable functions and integrals on these functions.
### Via simple functions
One approach to constructing the Lebesgue integral is to make use of so-called simple functions: finite, real linear combinations of indicator functions.
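As a rough numerical illustration of this idea (not from the cited article), the integral of a bounded non-negative function on [0, 1] can be approximated by slicing its range into levels; the resulting sum is the integral of a simple function lying just below f:

```python
import numpy as np

def lebesgue_style_integral(f, levels=1000, samples=100_000):
    """Approximate the integral of a non-negative f on [0, 1] by slicing its range."""
    x = np.linspace(0.0, 1.0, samples)          # sample points standing in for [0, 1]
    fx = f(x)
    ys = np.linspace(0.0, fx.max(), levels)     # horizontal slices of the range
    dy = ys[1] - ys[0]
    # measure({x : f(x) > y}) estimated by the fraction of sample points above y
    return sum(np.mean(fx > y) for y in ys) * dy

print(lebesgue_style_integral(lambda x: x**2))  # close to 1/3
```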
|
https://en.wikipedia.org/wiki/Lebesgue_integral
|
passage: The length of a finite resolution is the maximum index n labeling a nonzero module in the finite resolution.
### Free, projective, injective, and flat resolutions
In many circumstances conditions are imposed on the modules Ei resolving the given module M. For example, a free resolution of a module M is a left resolution in which all the modules Ei are free R-modules. Likewise, projective and flat resolutions are left resolutions such that all the Ei are projective and flat R-modules, respectively. Injective resolutions are right resolutions whose Ci are all injective modules.
Every R-module possesses a free left resolution. A fortiori, every module also admits projective and flat resolutions. The proof idea is to define E0 to be the free R-module generated by the elements of M, and then E1 to be the free R-module generated by the elements of the kernel of the natural map E0 → M etc. Dually, every R-module possesses an injective resolution. Projective resolutions (and, more generally, flat resolutions) can be used to compute Tor functors.
Projective resolution of a module M is unique up to a chain homotopy, i.e., given two projective resolutions P0 → M and P1 → M of M there exists a chain homotopy between them.
Resolutions are used to define homological dimensions. The minimal length of a finite projective resolution of a module M is called its projective dimension and denoted pd(M). For example, a module has projective dimension zero if and only if it is a projective module.
|
https://en.wikipedia.org/wiki/Resolution_%28algebra%29
|
passage: ### Magnetic dipole in a magnetic field
For a magnetic dipole moment
$$
\boldsymbol{\mu}
$$
in a uniform, magnetostatic field (time-independent)
$$
\mathbf{B}
$$
, positioned in one place, the potential is:
$$
V = -\boldsymbol{\mu}\cdot\mathbf{B}
$$
Since the particle is stationary, there is no translational kinetic energy of the dipole, so the Hamiltonian of the dipole is just the potential energy:
$$
\hat{H} = -\boldsymbol{\mu}\cdot\mathbf{B}
$$
For a spin-1/2 particle, the corresponding spin magnetic moment is:
$$
\boldsymbol{\mu}_S = \frac{g_s e}{2m} \mathbf{S}
$$
where
$$
g_s
$$
is the "spin g-factor" (not to be confused with the gyromagnetic ratio),
$$
e
$$
is the electron charge,
$$
\mathbf{S}
$$
is the spin operator vector, whose components are the Pauli matrices, hence
$$
\hat{H} = \frac{g_s e}{2m} \mathbf{S} \cdot\mathbf{B}
$$
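As a concrete numerical sketch (not part of the cited article; the field strength is an arbitrary 1 T and e is taken as the elementary charge magnitude), the two Zeeman levels follow from building S out of the Pauli matrices:

```python
import numpy as np

hbar = 1.054571817e-34   # J s
e = 1.602176634e-19      # C (elementary charge magnitude)
m_e = 9.1093837015e-31   # kg
g_s = 2.002319           # electron spin g-factor

# Pauli matrices sigma_x, sigma_y, sigma_z
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

S = [0.5 * hbar * s for s in sigma]        # spin operator components S = (hbar/2) sigma
B = np.array([0.0, 0.0, 1.0])              # hypothetical uniform field of 1 T along z

H = (g_s * e / (2.0 * m_e)) * sum(Bi * Si for Bi, Si in zip(B, S))
print(np.linalg.eigvalsh(H))               # two levels, separated by g_s * e * hbar * B / (2 m)
```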
### Charged particle in an electromagnetic field
For a particle with mass
$$
m
$$
and charge
$$
q
$$
in an electromagnetic field, described by the scalar potential
$$
\phi
$$
and vector potential
$$
\mathbf{A}
$$
, there are two parts to the Hamiltonian to substitute for. The canonical momentum operator
|
https://en.wikipedia.org/wiki/Hamiltonian_%28quantum_mechanics%29
|
passage: Rathke's pouch, a cavity of ectodermal cells of the oropharynx, forms between the fourth and fifth week of gestation and, upon full development, gives rise to the anterior pituitary gland. By seven weeks of gestation, the anterior pituitary vascular system begins to develop. During the first 12 weeks of gestation, the anterior pituitary undergoes cellular differentiation. At 20 weeks of gestation, the hypophyseal portal system has developed. Rathke's pouch grows towards the third ventricle and fuses with the diverticulum. This eliminates the lumen and the structure becomes Rathke's cleft. The posterior pituitary lobe is formed from the diverticulum. Portions of the pituitary tissue may remain in the nasopharyngeal midline. In rare cases this results in functioning ectopic hormone-secreting tumors in the nasopharynx.
The functional development of the anterior pituitary involves spatiotemporal regulation of transcription factors expressed in pituitary stem cells and dynamic gradients of local soluble factors. The coordination of the dorsal gradient of pituitary morphogenesis is dependent on neuroectodermal signals from the infundibular bone morphogenetic protein 4 (BMP4). This protein is responsible for the development of the initial invagination of the Rathke's pouch. Other essential proteins necessary for pituitary cell proliferation are Fibroblast growth factor 8 (FGF8), Wnt4, and Wnt5.
|
https://en.wikipedia.org/wiki/Endocrine_system
|
passage: Thus, if we let n be the total number of observations and k be the total number of bins, the histogram data mi meet the following conditions:
$$
n = \sum_{i=1}^k{m_i}.
$$
A histogram can be thought of as a simplistic kernel density estimation, which uses a kernel to smooth frequencies over the bins. This yields a smoother probability density function, which will in general more accurately reflect the distribution of the underlying variable. The density estimate could be plotted as an alternative to the histogram, and is usually drawn as a curve rather than a set of boxes. Histograms are nevertheless preferred in applications when their statistical properties need to be modeled. The correlated variation of a kernel density estimate is very difficult to describe mathematically, while it is simple for a histogram where each bin varies independently.
An alternative to kernel density estimation is the average shifted histogram,
which is fast to compute and gives a smooth curve estimate of the density without using kernels.
### Cumulative histogram
A cumulative histogram is a mapping that counts the cumulative number of observations in all of the bins up to the specified bin. That is, the cumulative histogram Mi of a histogram mj can be defined as:
$$
M_i = \sum_{j=1}^i{m_j}.
$$
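A minimal NumPy sketch of both quantities (the data and bin count are illustrative): the bin counts mi sum to n, and the cumulative histogram is their running sum, so its last entry is also n.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=1000)            # n = 1000 observations

m, edges = np.histogram(data, bins=20)  # bin counts m_i for k = 20 bins
M = np.cumsum(m)                        # cumulative histogram M_i

assert m.sum() == data.size             # sum of m_i equals n
assert M[-1] == data.size               # last cumulative count equals n
```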
|
https://en.wikipedia.org/wiki/Histogram
|
passage: The fine topology in this case is strictly finer than the usual topology, since there are discontinuous subharmonic functions.
Cartan observed in correspondence with Marcel Brelot that it is equally possible to develop the theory of the fine topology by using the concept of 'thinness'. In this development, a set
$$
U
$$
is thin at a point
$$
\zeta
$$
if there exists a subharmonic function
$$
v
$$
defined on a neighbourhood of
$$
\zeta
$$
such that
$$
v(\zeta)>\limsup_{z\to\zeta, z\in U} v(z).
$$
Then, a set
$$
U
$$
is a fine neighbourhood of
$$
\zeta
$$
if and only if the complement of
$$
U
$$
is thin at
$$
\zeta
$$
.
## Properties of the fine topology
The fine topology is in some ways much less tractable than the usual topology in euclidean space, as is evidenced by the following (taking
$$
n \ge 2
$$
):
- A set
$$
F
$$
in
$$
\R^n
$$
is fine compact if and only if
$$
F
$$
is finite.
- The fine topology on
$$
\R^n
$$
is not locally compact (although it is Hausdorff).
- The fine topology on
$$
\R^n
$$
is not first-countable, second-countable or metrisable.
The fine topology does at least have a few 'nicer' properties:
- The fine topology has the Baire property.
|
https://en.wikipedia.org/wiki/Fine_topology_%28potential_theory%29
|
passage: In general, in order to fully test a module, all execution paths through the module should be exercised. This implies a module with a high complexity number requires more testing effort than a module with a lower value since the higher complexity number indicates more pathways through the code. This also implies that a module with higher complexity is more difficult to understand since the programmer must understand the different pathways and the results of those pathways.
Unfortunately, it is not always practical to test all possible paths through a program. Considering the example above, each time an additional if-then-else statement is added, the number of possible paths grows by a factor of 2. As the program grows in this fashion, it quickly reaches the point where testing all of the paths becomes impractical.
One common testing strategy, espoused for example by the NIST Structured Testing methodology, is to use the cyclomatic complexity of a module to determine the number of white-box tests that are required to obtain sufficient coverage of the module. In almost all cases, according to such a methodology, a module should have at least as many tests as its cyclomatic complexity. In most cases, this number of tests is adequate to exercise all the relevant paths of the function.
As an example of a function that requires more than mere branch coverage to test accurately, reconsider the above function. However, assume that to avoid a bug occurring, any code that calls either `f1()` or `f3()` must also call the other. Assuming that the results of `c1()` and `c2()` are independent, the function as presented above contains a bug.
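The function in question is not reproduced in this passage; the sketch below is a hypothetical reconstruction of its shape, two independent if/else decisions giving cyclomatic complexity 3 and four execution paths, used only to show how a test set chosen for branch coverage can still miss the f1()/f3() interaction bug:

```python
def example(c1, c2, f1, f2, f3, f4):
    # Two independent decisions: cyclomatic complexity 3, four possible paths.
    if c1():
        f1()
    else:
        f2()
    if c2():
        f3()
    else:
        f4()

calls = []
example(lambda: True, lambda: False,
        lambda: calls.append("f1"), lambda: calls.append("f2"),
        lambda: calls.append("f3"), lambda: calls.append("f4"))
print(calls)   # ['f1', 'f4']: f1() ran without f3(), the interaction a path-based test must catch
```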
|
https://en.wikipedia.org/wiki/Cyclomatic_complexity
|
passage: When this is the case, the prior is called an improper prior. However, the posterior distribution need not be a proper distribution if the prior is improper. This is clear from the case where event B is independent of all of the Aj.
Statisticians sometimes use improper priors as uninformative priors. For example, if they need a prior distribution for the mean and variance of a random variable, they may assume p(m, v) ~ 1/v (for v > 0) which would suggest that any value for the mean is "equally likely" and that a value for the positive variance becomes "less likely" in inverse proportion to its value. Many authors (Lindley, 1973; De Groot, 1937; Kass and Wasserman, 1996) warn against the danger of over-interpreting those priors since they are not probability densities. The only relevance they have is found in the corresponding posterior, as long as it is well-defined for all observations. (The Haldane prior is a typical counterexample.)
By contrast, likelihood functions do not need to be integrated, and a likelihood function that is uniformly 1 corresponds to the absence of data (all models are equally likely, given no data): Bayes' rule multiplies a prior by the likelihood, and an empty product is just the constant likelihood 1. However, without starting with a prior probability distribution, one does not end up getting a posterior probability distribution, and thus cannot integrate or compute expected values or loss. See for details.
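As a small numerical illustration (not from the cited article; the observations are made up), with the improper flat prior p(μ) ∝ 1 for a normal mean with known variance, the posterior is simply the normalised likelihood and is a proper distribution once data are observed:

```python
import numpy as np

data = np.array([1.2, 0.7, 1.9, 1.1])   # hypothetical observations
sigma = 1.0                              # known standard deviation

mu = np.linspace(-5.0, 10.0, 2001)       # grid over the mean
log_like = -0.5 * ((data[:, None] - mu[None, :]) / sigma) ** 2
post = np.exp(log_like.sum(axis=0))      # flat prior: posterior proportional to likelihood

dmu = mu[1] - mu[0]
post /= post.sum() * dmu                 # normalise; the posterior is proper
print(mu[np.argmax(post)])               # mode close to the sample mean
```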
|
https://en.wikipedia.org/wiki/Prior_probability
|
passage: More generally, the complete n-types correspond to the prime ideals of the polynomial ring Q[x1,...,xn], in other words to the points of the prime spectrum of this ring. (The Stone space topology can in fact be viewed as the Zariski topology of a Boolean ring induced in a natural way from the Boolean algebra. While the Zariski topology is not in general Hausdorff, it is in the case of Boolean rings.) For example, if q(x,y) is an irreducible polynomial in two variables, there is a 2-type whose realizations are (informally) pairs (x,y) of elements with q(x,y)=0.
## Omitting types theorem
Given a complete n-type p one can ask if there is a model of the theory that omits p, in other words there is no n-tuple in the model that realizes p.
If p is an isolated point in the Stone space, i.e. if {p} is an open set, it is easy to see that every model realizes p (at least if the theory is complete). The omitting types theorem says that conversely if p is not isolated then there is a countable model omitting p (provided that the language is countable).
Example: In the theory of algebraically closed fields of characteristic 0, there is a 1-type represented by elements that are transcendental over the prime field Q. This is a non-isolated point of the Stone space (in fact, the only non-isolated point).
|
https://en.wikipedia.org/wiki/Type_%28model_theory%29
|
passage: For clarity, define
$$
\mathbf f_n = \mathbf f(\mathbf x_n),
$$
$$
\Delta \mathbf x_n = \mathbf x_n - \mathbf x_{n - 1},
$$
$$
\Delta \mathbf f_n = \mathbf f_n - \mathbf f_{n - 1},
$$
so the above may be rewritten as
$$
\mathbf J_n \Delta \mathbf x_n \simeq \Delta \mathbf f_n.
$$
The above equation is underdetermined when the dimension of $\mathbf x$ is greater than one. Broyden suggested using the most recent estimate of the Jacobian matrix, $\mathbf J_{n-1}$, and then improving upon it by requiring that the new form is a solution to the most recent secant equation, and that there is minimal modification to $\mathbf J_{n-1}$:
$$
\mathbf J_n = \mathbf J_{n - 1} + \frac{\Delta \mathbf f_n - \mathbf J_{n - 1} \Delta \mathbf x_n}{\|\Delta \mathbf x_n\|^2} \Delta \mathbf x_n^{\mathrm T}.
$$
This minimizes the Frobenius norm
$$
\|\mathbf J_n - \mathbf J_{n - 1}\|_{\rm F} .
$$
One then updates the variables using the approximate Jacobian, in what is called a quasi-Newton approach.
$$
\mathbf x_{n + 1} = \mathbf x_n - \alpha \mathbf J_n^{-1} \mathbf f(\mathbf x_n) .
$$
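A compact sketch of the resulting iteration (not from the cited article; the identity initial Jacobian, the test system and the tolerance are arbitrary choices):

```python
import numpy as np

def broyden(f, x0, alpha=1.0, tol=1e-10, max_iter=100):
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    J = np.eye(x.size)                               # crude initial Jacobian estimate
    for _ in range(max_iter):
        dx = -alpha * np.linalg.solve(J, fx)         # quasi-Newton step
        x, fx_old = x + dx, fx
        fx = f(x)
        df = fx - fx_old
        J += np.outer(df - J @ dx, dx) / (dx @ dx)   # Broyden rank-one update
        if np.linalg.norm(fx) < tol:
            break
    return x

# Hypothetical test system: a mild nonlinear perturbation of the identity map.
def F(x):
    return np.array([x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
                     0.5 * (x[1] - x[0])**3 + x[1]])

root = broyden(F, [0.0, 0.0])
print(root, F(root))   # the residual F(root) should be near zero
```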
If
$$
\alpha = 1
$$
|
https://en.wikipedia.org/wiki/Broyden%27s_method
|