passage: ## Basic properties
Many properties of the real logarithm also apply to the logarithmic derivative, even when the function does not take values in the positive reals. For example, since the logarithm of a product is the sum of the logarithms of the factors, we have
$$
(\log uv)' = (\log u + \log v)' = (\log u)' + (\log v)' .
$$
So for positive-real-valued functions, the logarithmic derivative of a product is the sum of the logarithmic derivatives of the factors. But we can also use the Leibniz law for the derivative of a product to get
$$
\frac{(uv)'}{uv} = \frac{u'v + uv'}{uv} = \frac{u'}{u} + \frac{v'}{v} .
$$
Thus, it is true for any function that the logarithmic derivative of a product is the sum of the logarithmic derivatives of the factors (when they are defined).
A corollary to this is that the logarithmic derivative of the reciprocal of a function is the negation of the logarithmic derivative of the function:
$$
\frac{(1/u)'}{1/u} = \frac{-u'/u^{2}}{1/u} = -\frac{u'}{u} ,
$$
just as the logarithm of the reciprocal of a positive real number is the negation of the logarithm of the number.
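As a quick numerical illustration of this product rule (the functions u, v, the evaluation point, and the finite-difference step below are arbitrary choices, not part of the article):

```python
import math

def log_derivative(f, x, h=1e-6):
    """Approximate f'(x)/f(x) with a central difference."""
    return (f(x + h) - f(x - h)) / (2 * h) / f(x)

u = lambda x: x**2 + 1.0          # sample functions; any smooth,
v = lambda x: math.exp(3.0 * x)   # nonvanishing choices work here
uv = lambda x: u(x) * v(x)

x0 = 0.7
print(log_derivative(uv, x0))                         # logarithmic derivative of the product
print(log_derivative(u, x0) + log_derivative(v, x0))  # sum of the factors' logarithmic derivatives
```

The two printed values agree up to the discretization error of the finite difference.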
|
https://en.wikipedia.org/wiki/Logarithmic_derivative
|
passage: In mathematics, random graph is the general term to refer to probability distributions over graphs. Random graphs may be described simply by a probability distribution, or by a random process which generates them. The theory of random graphs lies at the intersection between graph theory and probability theory. From a mathematical perspective, random graphs are used to answer questions about the properties of typical graphs. Its practical applications are found in all areas in which complex networks need to be modeled – many random graph models are thus known, mirroring the diverse types of complex networks encountered in different areas. In a mathematical context, random graph refers almost exclusively to the Erdős–Rényi random graph model. In other contexts, any graph model may be referred to as a random graph.
## Models
A random graph is obtained by starting with a set of n isolated vertices and adding successive edges between them at random. The aim of the study in this field is to determine at what stage a particular property of the graph is likely to arise. Different random graph models produce different probability distributions on graphs. Most commonly studied is the one proposed by Edgar Gilbert but often called the Erdős–Rényi model, denoted G(n,p). In it, every possible edge occurs independently with probability 0 < p < 1.
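A minimal sketch of sampling from G(n, p) under the convention just described (each of the n(n − 1)/2 possible edges is included independently with probability p); the function and parameter names are illustrative:

```python
import random

def gnp_random_graph(n, p, seed=None):
    """Sample an Erdős–Rényi / Gilbert G(n, p) graph, returned as an edge list."""
    rng = random.Random(seed)
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:      # each possible edge occurs independently
                edges.append((u, v))
    return edges

# On average about p * n * (n - 1) / 2 edges are present.
print(len(gnp_random_graph(100, 0.05, seed=1)))
```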
|
https://en.wikipedia.org/wiki/Random_graph
|
passage: All graduates have to pass the Nepal Medical Council Licensing Exam to become registered ophthalmologists in Nepal. The concurrent residency training is in the form of a PG student (resident) at a medical college, eye hospital, or institution according to the degree providing university's rules and regulations. Nepal Ophthalmic Society holds regular conferences and actively promotes continuing medical education.
### Ireland
In Ireland, the Royal College of Surgeons of Ireland grants membership (MRCSI (Ophth)) and fellowship (FRCSI (Ophth)) qualifications in conjunction with the Irish College of Ophthalmologists. Total postgraduate training involves an intern year, a minimum of three years of basic surgical training, and a further 4.5 years of higher surgical training. Clinical training takes place within public, Health Service Executive-funded hospitals in Dublin, Sligo, Limerick, Galway, Waterford, and Cork. A minimum of 8.5 years of training is required before eligibility to work in consultant posts. Some trainees take extra time to obtain MSc, MD or PhD degrees and to undertake clinical fellowships in the UK, Australia, and the United States.
### Pakistan
In Pakistan, after MBBS, a four-year full-time residency program leads to an exit-level FCPS examination in ophthalmology, held under the auspices of the College of Physicians and Surgeons, Pakistan. The tough examination is assessed by both highly qualified Pakistani and eminent international ophthalmic consultants.
|
https://en.wikipedia.org/wiki/Ophthalmology
|
passage: Since
$$
S'
$$
is simply connected,
$$
\varphi
$$
is a homeomorphism, and hence, a (global) isometry. Therefore,
$$
H
$$
and
$$
S'
$$
are globally isometric, and because
$$
H
$$
has an infinite area, then
$$
S'=T_p(S)
$$
has an infinite area, as well.
$$
\square
$$
Lemma 2: For each
$$
p\in S'
$$
there exists a parametrization
$$
x:U \subset \mathbb{R}^{2} \longrightarrow S', \qquad p \in x(U)
$$
, such that the coordinate curves of
$$
x
$$
are asymptotic curves of
$$
x(U) = V'
$$
and form a Tchebyshef net.
Lemma 3: Let
$$
V' \subset S'
$$
be a coordinate neighborhood of
$$
S'
$$
such that the coordinate curves are asymptotic curves in
$$
V'
$$
. Then the area A of any quadrilateral formed by the coordinate curves is smaller than
$$
2\pi
$$
.
The next goal is to show that
$$
x
$$
is a parametrization of
$$
S'
$$
.
Lemma 4: For a fixed
$$
t
$$
, the curve
$$
x(s,t), -\infty < s < +\infty
$$
, is an asymptotic curve with
$$
s
$$
as arc length.
|
https://en.wikipedia.org/wiki/Hilbert%27s_theorem_%28differential_geometry%29
|
passage: The demoscene took off on home computers such as the Commodore 64 and the Amiga, which had relatively advanced and very "hackable" custom chips and CPUs. Before the widespread use of advanced computer aided design for integrated circuits, chips were designed by hand and so often had many undocumented or unintended features. A lack of standardisation also meant that hardware design tended to reflect the designers' own ideas and creative flair. For this reason, most "old school" demo effects were based on the creative exploitation of the features of particular hardware. A lot of effort was put into the reverse-engineering of the hardware in order to find undocumented possibilities usable for new effects.
The IBM PC compatibles of the 1990s, however, lacked many of the special features typical for the home computers, instead using standard parts. This was compensated for with a greater general-purpose computing power. The possibility of advanced hardware trickery was also limited by the great variability of PC hardware. For these reasons, the PC democoders of the DOS era preferred to focus on pixel-level software rendering algorithms.
Democoders have often looked for challenge and respect by "porting" effects from one platform to another. For example, during the "golden age" of the Amiga demos, many well-known Amiga effects were remade with Atari ST, Commodore 64 and PC, some of which were considered inferior in the key features required in the effects in question. Since the mid-1990s, when the PC had become a major platform, demos for the Amiga and the C-64 started to feature PC-like "pixel effects" as well.
|
https://en.wikipedia.org/wiki/Demo_effect
|
passage: ### Complex domain
Circular functions:
$$
\tan(z)= \frac{e^{iz}-e^{-iz}}{i(e^{iz}+e^{-iz})}
$$
$$
\sin(z)= \frac{e^{iz}-e^{-iz}}{2i}
$$
Inverse circular functions:
$$
\arctan(z)= \int_0^z\frac{dt}{1+t^2}
$$
$$
\arcsin(z)= \int_0^z\frac{dt}{(1-t^2)^{1/2}}
$$
Hyperbolic functions:
$$
\tanh(z)= \frac{e^z-e^{-z}}{e^z+e^{-z}}
$$
$$
\sinh(z)= \frac{e^z-e^{-z}}{2}
$$
Inverse hyperbolic functions:
$$
\text{arctanh}(z)=\int_0^z\frac{dt}{1-t^2}
$$
$$
\text{arcsinh}(z)=\int_0^z\frac{dt}{(1+t^2)^{1/2}}
$$
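As an illustrative check, the exponential formulas above can be compared against Python's built-in complex functions; the test point z is an arbitrary choice:

```python
import cmath

z = 0.3 + 0.4j
tan_formula = (cmath.exp(1j * z) - cmath.exp(-1j * z)) / (1j * (cmath.exp(1j * z) + cmath.exp(-1j * z)))
sin_formula = (cmath.exp(1j * z) - cmath.exp(-1j * z)) / 2j
tanh_formula = (cmath.exp(z) - cmath.exp(-z)) / (cmath.exp(z) + cmath.exp(-z))

print(abs(tan_formula - cmath.tan(z)))    # ~1e-16
print(abs(sin_formula - cmath.sin(z)))    # ~1e-16
print(abs(tanh_formula - cmath.tanh(z)))  # ~1e-16
```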
## Reliability
The black-box character of neural networks in general and extreme learning machines (ELM) in particular is one of the major concerns that repels engineers from application in unsafe automation tasks. This particular issue was approached by means of several different techniques. One approach is to reduce the dependence on the random input.
|
https://en.wikipedia.org/wiki/Extreme_learning_machine
|
passage: In mathematics, the Hilbert cube, named after David Hilbert, is a topological space that provides an instructive example of some ideas in topology. Furthermore, many interesting topological spaces can be embedded in the Hilbert cube; that is, can be viewed as subspaces of the Hilbert cube (see below).
## Definition
The Hilbert cube is best defined as the topological product of the intervals
$$
[0, 1/n]
$$
for
$$
n = 1, 2, 3, 4, \ldots.
$$
That is, it is a cuboid of countably infinite dimension, where the lengths of the edges in each orthogonal direction form the sequence
$$
\left( 1/n \right)_{n \in \N}.
$$
The Hilbert cube is homeomorphic to the product of countably infinitely many copies of the unit interval
$$
[0, 1].
$$
In other words, it is topologically indistinguishable from the unit cube of countably infinite dimension. Some authors use the term "Hilbert cube" to mean this Cartesian product instead of the product of the
$$
\left[0, \tfrac{1}{n}\right]
$$
.
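A small sketch of the coordinate-wise scaling behind this identification, truncated to finitely many coordinates purely for illustration:

```python
# Scale the n-th coordinate of a point of [0,1]^N by 1/n to land in
# prod_n [0, 1/n]; the inverse multiplies by n. Both maps act coordinate-wise
# and are continuous, which is the substance of the homeomorphism claim.
def to_hilbert_cube(x):
    return [x_n / (n + 1) for n, x_n in enumerate(x)]

def from_hilbert_cube(y):
    return [y_n * (n + 1) for n, y_n in enumerate(y)]

x = [0.5, 1.0, 0.25, 0.8]      # a truncated point of [0,1]^N
y = to_hilbert_cube(x)         # lies in [0,1] x [0,1/2] x [0,1/3] x [0,1/4]
assert from_hilbert_cube(y) == x
```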
|
https://en.wikipedia.org/wiki/Hilbert_cube
|
passage: A desktop computer, often abbreviated as desktop, is a personal computer designed for regular use at a stationary location on or near a desk (as opposed to a portable computer) due to its size and power requirements. The most common configuration has a case that houses the power supply, motherboard (a printed circuit board with a microprocessor as the central processing unit, memory, bus, certain peripherals and other electronic components), disk storage (usually one or more hard disk drives, solid-state drives, optical disc drives, and in early models floppy disk drives); a keyboard and mouse for input; and a monitor, speakers, and, often, a printer for output. The case may be oriented horizontally or vertically and placed either underneath, beside, or on top of a desk.
Desktop computers with their cases oriented vertically are referred to as towers. As the majority of cases offered since the mid 1990s are in this form factor, the term desktop has been retronymically used to refer to modern cases offered in the traditional horizontal orientation.
## History
### Origins
Prior to the widespread use of microprocessors, a computer that could fit on a desk was considered remarkably small; the type of computers most commonly used were minicomputers, which, despite the name, were rather large and were "mini" only compared to the so-called "big iron". Early computers, and later the general purpose high throughput "mainframes", took up the space of a whole room.
|
https://en.wikipedia.org/wiki/Desktop_computer
|
passage: ### Parametric subspace decomposition methods
#### Eigenvector method
This subspace decomposition method separates the eigenvectors of the autocovariance matrix into those corresponding to signals and to clutter. The amplitude of the image at a point (
$$
\omega_x, \omega_y
$$
) is given by:
$$
\hat{\phi}_{EV}\left(\omega_x, \omega_y\right) = \frac{1}{\sum_{i} \frac{1}{\lambda_i} \left| \underline{v_i}^\mathsf{H} W\left(\omega_x, \omega_y\right) \right|^2}
$$
where
$$
\hat{\phi}_{EV}
$$
is the amplitude of the image at a point
$$
\left(\omega_x, \omega_y\right)
$$
,
$$
\underline{v_i}
$$
is an eigenvector of the coherency (autocovariance) matrix and
$$
\underline{v_i}^\mathsf{H}
$$
is its Hermitian (conjugate) transpose,
$$
\frac{1}{\lambda_i}
$$
is the inverse of the eigenvalues of the clutter subspace,
$$
W\left(\omega_x, \omega_y\right)
$$
are vectors defined as
$$
W\left(\omega_x, \omega_y\right) =
\left[1 \exp\left(-j\omega_x\right) \ldots \exp\left(-j(M - 1)\omega_x\right)\right] \otimes \left[1 \exp\left(-j\omega_y\right) \ldots \exp\left(-j(M - 1)\omega_y\right)\right]
$$
where ⊗ denotes the Kronecker product of the two vectors.
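The sketch below illustrates the general eigenvector-method recipe described above: eigendecompose a sample autocovariance matrix, keep the clutter-subspace eigenvectors weighted by their inverse eigenvalues, and form the 2-D steering vector as a Kronecker product as in the definition of W. The array sizes, the toy data, and the assumed number of signal eigenvectors are illustrative assumptions, not values from the article:

```python
import numpy as np

def ev_spectrum(R, n_signal, omega_x, omega_y, M):
    """Eigenvector-method amplitude estimate at (omega_x, omega_y).

    R        : (M*M, M*M) Hermitian sample autocovariance matrix
    n_signal : assumed number of signal eigenvectors excluded from the clutter sum
    """
    eigvals, eigvecs = np.linalg.eigh(R)       # eigenvalues in ascending order
    clutter_vals = eigvals[:-n_signal]         # clutter/noise subspace
    clutter_vecs = eigvecs[:, :-n_signal]
    a = np.exp(-1j * omega_x * np.arange(M))   # 1-D steering vectors
    b = np.exp(-1j * omega_y * np.arange(M))
    W = np.kron(a, b)                          # 2-D steering vector
    denom = np.sum(np.abs(clutter_vecs.conj().T @ W) ** 2 / clutter_vals)
    return 1.0 / denom

# Toy usage with a random Hermitian "covariance" (purely illustrative data).
rng = np.random.default_rng(0)
M = 4
A = rng.standard_normal((M * M, M * M)) + 1j * rng.standard_normal((M * M, M * M))
R = A @ A.conj().T / (M * M)
print(ev_spectrum(R, n_signal=2, omega_x=0.3, omega_y=1.1, M=M))
```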
|
https://en.wikipedia.org/wiki/Synthetic-aperture_radar
|
passage: It can then be proven that the probability of the gambler's eventual ruin tends to 1 even in the scenario where the game is fair or what mathematically is defined as a martingale.
## Reasons for the four results
Let
$$
d
$$
be the amount of money a gambler has at their disposal at any moment, and let
$$
N
$$
be any positive integer. Suppose that they raise their stake to
$$
\frac{d}{N}
$$
when they win, but do not reduce their stake when they lose (a not uncommon pattern among real gamblers). Under this betting scheme, it will take at most N losing bets in a row to bankrupt them. If their probability of winning each bet is less than 1 (if it is 1, then they are no gambler), they are virtually certain to eventually lose N bets in a row, however big N is. It is not necessary that they follow the precise rule, just that they increase their bet fast enough as they win. This is true even if the expected value of each bet is positive.
The gambler playing a fair game (with probability
$$
\frac{1}{2}
$$
of winning) will eventually either go broke or double their wealth. By symmetry, they have a
$$
\frac{1}{2}
$$
chance of going broke before doubling their money. If they double their money, they repeat this process and they again have a
$$
\frac{1}{2}
$$
chance of doubling their money before going broke.
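A small Monte Carlo illustration of this symmetry argument (the starting wealth, number of runs and random seed are arbitrary choices):

```python
import random

def broke_before_doubling(start, rng):
    """Fair unit-stake game: return True if ruin occurs before the wealth doubles."""
    wealth, target = start, 2 * start
    while 0 < wealth < target:
        wealth += 1 if rng.random() < 0.5 else -1
    return wealth == 0

rng = random.Random(42)
runs = 20_000
ruined = sum(broke_before_doubling(10, rng) for _ in range(runs))
print(ruined / runs)   # close to 0.5, as the symmetry argument predicts
```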
|
https://en.wikipedia.org/wiki/Gambler%27s_ruin
|
passage: The consequence of this is that, compared to the sampling-theory calculation, the Bayesian calculation puts more weight on larger values of σ2, properly taking into account (as the sampling-theory calculation cannot) that under this squared-loss function the consequence of underestimating large values of σ2 is more costly in squared-loss terms than that of overestimating small values of σ2.
The worked-out Bayesian calculation gives a scaled inverse chi-squared distribution with n − 1 degrees of freedom for the posterior probability distribution of σ2. The expected loss is minimised when cnS2 = <σ2>; this occurs when c = 1/(n − 3).
Even with an uninformative prior, therefore, a Bayesian calculation may not give the same expected-loss minimising result as the corresponding sampling-theory calculation.
|
https://en.wikipedia.org/wiki/Bias_of_an_estimator
|
passage: The day before the test of the Cordouan lens in Paris, a committee of the Academy of Sciences reported on Fresnel's memoir and supplements on double refraction—which, although less well known to modern readers than his earlier work on diffraction, struck a more decisive blow for the wave theory of light. Between the test and the reassembly at Cordouan, Fresnel submitted his papers on photoelasticity (16 September 1822), elliptical and circular polarization and optical rotation (9 December), and partial reflection and total internal reflection (7 January 1823), essentially completing his reconstruction of physical optics on the transverse wave hypothesis. Shortly after the Cordouan lens was lit, Fresnel started coughing up blood.
In May 1824, Fresnel was promoted to Secretary of the , becoming the first member of that body to draw a salary, albeit in the concurrent role of Engineer-in-Chief. Late that year, being increasingly ill, he curtailed his fundamental research and resigned his seasonal job as an examiner at the , in order to save his remaining time and energy for his lighthouse work.
In the same year he designed the first fixed lens—for spreading light evenly around the horizon while minimizing waste above or below. Ideally the curved refracting surfaces would be segments of toroids about a common vertical axis, so that the dioptric panel would look like a cylindrical drum.
|
https://en.wikipedia.org/wiki/Fresnel_lens
|
passage: As a result, he was dismissed from the Royal Military Academy but retained his post at Turin University.
In 1903 Peano announced his work on an international auxiliary language called Latino sine flexione ("Latin without inflexion," later called Interlingua, and the precursor of the Interlingua of the IALA). This was an important project for him (along with finding contributors for 'Formulario'). The idea was to use Latin vocabulary, since this was widely known, but simplify the grammar as much as possible and remove all irregular and anomalous forms to make it easier to learn. On 3 January 1908, he read a paper to the Academia delle Scienze di Torino in which he started speaking in Latin and, as he described each simplification, introduced it into his speech so that by the end he was talking in his new language.
The year 1908 was important for Peano. The fifth and final edition of the Formulario project, titled Formulario mathematico, was published. It contained 4200 formulae and theorems, all completely stated and most of them proved. The book received little attention since much of the content was dated by this time. However, it remains a significant contribution to mathematical literature. The comments and examples were written in Latino sine flexione.
Also in 1908, Peano took over the chair of higher analysis at Turin (this appointment was to last for only two years). He was elected the director of Academia pro Interlingua. Having previously created Idiom Neutral, the Academy effectively chose to abandon it in favour of Peano's Latino sine flexione.
|
https://en.wikipedia.org/wiki/Giuseppe_Peano
|
passage: This made AM radio broadcasting possible, which began in about 1920. Practical frequency modulation (FM) transmission was invented by Edwin Armstrong in 1933, who showed that it was less vulnerable to noise and static than AM. The first FM radio station was licensed in 1937. Experimental television transmission had been conducted by radio stations since the late 1920s, but practical television broadcasting didn't begin until the late 1930s. The development of radar during World War II motivated the evolution of high frequency transmitters in the UHF and microwave ranges, using new active devices such as the magnetron, klystron, and traveling wave tube.
The invention of the transistor allowed the development in the 1960s of small portable transmitters such as wireless microphones, garage door openers and walkie-talkies. The development of the integrated circuit (IC) in the 1970s made possible the current proliferation of wireless devices, such as cell phones and Wi-Fi networks, in which integrated digital transmitters and receivers (wireless modems) in portable devices operate automatically, in the background, to exchange data with wireless networks.
The need to conserve bandwidth in the increasingly congested radio spectrum is driving the development of new types of transmitters such as spread spectrum, trunked radio systems and cognitive radio. A related trend has been an ongoing transition from analog to digital radio transmission methods. Digital modulation can have greater spectral efficiency than analog modulation; that is, it can often transmit more information (data rate) in a given bandwidth than analog, using data compression algorithms.
|
https://en.wikipedia.org/wiki/Transmitter
|
passage: If p and 2p + 1 are both prime (meaning that p is a Sophie Germain prime), and p is congruent to 3 (mod 4), then 2p + 1 divides the Mersenne number M_p = 2^p − 1.
1. Example: 11 and 23 are both prime, and 11 ≡ 3 (mod 4), so 23 divides M_11 = 2047 (a short numerical check appears after this list).
1. Proof: Let q be 2p + 1. By Fermat's little theorem, 2^(2p) ≡ 1 (mod q), so either 2^p ≡ 1 (mod q) or 2^p ≡ −1 (mod q). Supposing the latter true, then 2^(p+1) = (2^((p+1)/2))^2 ≡ −2 (mod q), so −2 would be a quadratic residue mod q. However, since p is congruent to 3 (mod 4), q is congruent to 7 (mod 8) and therefore 2 is a quadratic residue mod q. Also since q is congruent to 3 (mod 4), −1 is a quadratic nonresidue mod q, so −2 is the product of a residue and a nonresidue and hence it is a nonresidue, which is a contradiction. Hence, the former congruence must be true and 2p + 1 divides M_p.
1. All composite divisors of prime-exponent Mersenne numbers are strong pseudoprimes to the base 2.
1. With the exception of 1, a Mersenne number cannot be a perfect power. That is, and in accordance with Mihăilescu's theorem, the equation 2^m − 1 = n^k has no solutions where m, n, and k are integers with m > 1 and k > 1.
1. The Mersenne number sequence is a member of the family of Lucas sequences. It is U_n(3, 2). That is, the Mersenne numbers satisfy M_n = 3M_{n−1} − 2M_{n−2} with M_0 = 0 and M_1 = 1.
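As referenced in the example above, here is a quick check of the divisibility statement using Python's modular exponentiation (the test primes are an arbitrary sample of Sophie Germain primes congruent to 3 mod 4):

```python
# For each listed p (prime, p = 3 mod 4, with 2p + 1 also prime), verify that
# q = 2p + 1 divides M_p = 2^p - 1, i.e. that 2^p = 1 (mod q).
for p in (11, 23, 83, 131, 179, 191, 239):
    q = 2 * p + 1
    assert p % 4 == 3
    assert pow(2, p, q) == 1, (p, q)
print("2p + 1 divides 2^p - 1 for every tested p")
```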
|
https://en.wikipedia.org/wiki/Mersenne_prime
|
passage: [table fragment: a sequence of angular positions in degrees, from 197° through 359°]
|
https://en.wikipedia.org/wiki/Gray_code
|
passage: Cell lists (also sometimes referred to as cell linked-lists) is a data structure in molecular dynamics simulations to find all atom pairs within a given cut-off distance of each other. These pairs are needed to compute the short-range non-bonded interactions in a system, such as Van der Waals forces or the short-range part of the electrostatic interaction when using Ewald summation.
## Algorithm
Cell lists work by subdividing the simulation domain into cells with an edge length greater than or equal to the cut-off radius of the interaction to be computed. The particles are sorted into these cells and the interactions are computed between particles in the same or neighbouring cells.
In its most basic form, the non-bonded interactions for a cut-off distance
$$
r_c
$$
are computed as follows:
for all neighbouring cell pairs
$$
(C_\alpha, C_\beta)
$$
do
for all
$$
p_\alpha \in C_\alpha
$$
do
for all
$$
p_\beta \in C_\beta
$$
do
$$
r^2 = \| \mathbf x[p_\alpha] - \mathbf x[p_\beta] \|_2^2
$$
if
$$
r^2 \le r_c^2
$$
then
Compute the interaction between
$$
p_\alpha
$$
and
$$
p_\beta
$$
.
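A compact Python rendering of the same loop structure for a cubic, non-periodic domain; the particle data, domain size and cut-off are illustrative, and each unordered pair is visited once by only processing a cell together with itself and with neighbouring cells of larger index:

```python
import itertools
import numpy as np

def cell_list_pairs(x, box, r_c):
    """Return each unordered particle pair within distance r_c exactly once.

    x   : (N, 3) array of positions in [0, box)^3
    box : edge length of the cubic, non-periodic domain
    r_c : cut-off radius
    """
    n_cells = max(1, int(box // r_c))                  # cell edge length >= r_c
    cell_size = box / n_cells
    cells = {}                                         # cell index -> particle indices
    for i, p in enumerate(x):
        idx = tuple(np.minimum((p // cell_size).astype(int), n_cells - 1))
        cells.setdefault(idx, []).append(i)

    pairs = []
    for ca, members_a in cells.items():
        for off in itertools.product((-1, 0, 1), repeat=3):
            cb = tuple(ca[d] + off[d] for d in range(3))
            if cb not in cells or cb < ca:             # visit each cell pair once
                continue
            for i in members_a:
                for j in cells[cb]:
                    if cb == ca and j <= i:            # within one cell: i < j only
                        continue
                    if np.sum((x[i] - x[j]) ** 2) <= r_c ** 2:
                        pairs.append((i, j))
    return pairs

# Illustrative usage with random positions.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 10.0, size=(200, 3))
print(len(cell_list_pairs(pts, box=10.0, r_c=1.5)))
```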
|
https://en.wikipedia.org/wiki/Cell_lists
|
passage: It has the following property, which defines it completely:
$$
\alpha \wedge ({\star} \beta) = \langle \alpha,\beta \rangle \,\omega
$$
for all k-vectors
$$
\alpha,\beta\in {\textstyle\bigwedge}^{\!k}V .
$$
Dually, in the space
$$
{\textstyle\bigwedge}^{\!n}V^*
$$
of n-forms (alternating n-multilinear functions on
$$
V^n
$$
), the dual to
$$
\omega
$$
is the volume form
$$
\det
$$
, the function whose value on
$$
v_1\wedge\cdots\wedge v_n
$$
is the determinant of the
$$
n\times n
$$
matrix assembled from the column vectors of
$$
v_j
$$
in
$$
e_i
$$
-coordinates.
|
https://en.wikipedia.org/wiki/Hodge_star_operator
|
passage: It is also very unstable, decaying into other particles almost immediately via several possible pathways.
The Higgs field is a scalar field, with two neutral and two electrically charged components that form a complex doublet of the weak isospin SU(2) symmetry. Unlike any other known quantum field, it has a sombrero potential. This shape means that below an extremely high cross-over temperature, such as the temperatures seen during the first picosecond (10^−12 s) of the Big Bang, the Higgs field in its ground state has less energy when it is nonzero, resulting in a nonzero vacuum expectation value. Therefore, in today's universe the Higgs field has a nonzero value everywhere (including in otherwise empty space). This nonzero value in turn breaks the weak isospin SU(2) symmetry of the electroweak interaction everywhere. (Technically the non-zero expectation value converts the Lagrangian's Yukawa coupling terms into mass terms.) When this happens, three components of the Higgs field are "absorbed" by the SU(2) and U(1) gauge bosons (the Higgs mechanism) to become the longitudinal components of the now-massive W and Z bosons of the weak force. The remaining electrically neutral component either manifests as a Higgs boson, or may couple separately to other particles known as fermions (via Yukawa couplings), causing these to acquire mass as well.
## Significance
Evidence for the Higgs field and its properties has been extremely significant for many reasons.
|
https://en.wikipedia.org/wiki/Higgs_boson
|
passage: Conversely, there are very low energy bands associated with the core orbitals (such as 1s electrons). These low-energy core bands are also usually disregarded since they remain filled with electrons at all times, and are therefore inert.
Likewise, materials have several band gaps throughout their band structure.
The most important bands and band gaps—those relevant for electronics and optoelectronics—are those with energies near the Fermi level.
The bands and band gaps near the Fermi level are given special names, depending on the material:
- In a semiconductor or band insulator, the Fermi level is surrounded by a band gap, referred to as the band gap (to distinguish it from the other band gaps in the band structure). The closest band above the band gap is called the conduction band, and the closest band beneath the band gap is called the valence band. The name "valence band" was coined by analogy to chemistry, since in semiconductors (and insulators) the valence band is built out of the valence orbitals.
- In a metal or semimetal, the Fermi level is inside of one or more allowed bands. In semimetals the bands are usually referred to as "conduction band" or "valence band" depending on whether the charge transport is more electron-like or hole-like, by analogy to semiconductors. In many metals, however, the bands are neither electron-like nor hole-like, and often just called "valence band" as they are made of valence orbitals.
|
https://en.wikipedia.org/wiki/Electronic_band_structure
|
passage: ### Coboundary monoidal categories
A coboundary or “cactus” monoidal category is a monoidal category
$$
(C, \otimes, \text{Id})
$$
together with a family of natural isomorphisms
$$
\gamma_{A,B}: A\otimes B \to B\otimes A
$$
with the following properties:
-
$$
\gamma_{B,A} \circ \gamma_{A,B} = \text{Id}
$$
for all pairs of objects
$$
A
$$
and
$$
B
$$
.
-
$$
\gamma_{B \otimes A, C} \circ (\gamma_{A,B} \otimes \text{Id}) = \gamma_{A, C \otimes B} \circ (\text{Id} \otimes \gamma_{B,C})
$$
The first property shows us that
$$
\gamma^{-1}_{A,B} = \gamma_{B,A}
$$
, thus allowing us to omit the analog to the second defining diagram of a braided monoidal category and ignore the associator maps as implied.
## Examples
- The category of representations of a group (or a Lie algebra) is a symmetric monoidal category where
$$
\gamma (v \otimes w) = w \otimes v
$$
.
-
|
https://en.wikipedia.org/wiki/Braided_monoidal_category
|
passage: Among these are normalized variants and generalizations to more than two variables.
### Metric
Many applications require a metric, that is, a distance measure between pairs of points. The quantity
$$
\begin{align}
d(X,Y) &= \Eta(X,Y) - \operatorname{I}(X;Y) \\
&= \Eta(X \mid Y) + \Eta(Y \mid X) \\
&= \Eta(X) + \Eta(Y) - 2\operatorname{I}(X;Y)
\end{align}
$$
satisfies the properties of a metric (triangle inequality, non-negativity, indiscernability and symmetry), where equality
$$
X=Y
$$
is understood to mean that
$$
X
$$
can be completely determined from
$$
Y
$$
.
This distance metric is also known as the variation of information.
If
$$
X, Y
$$
are discrete random variables then all the entropy terms are non-negative, so
$$
0 \le d(X,Y) \le \Eta(X,Y)
$$
and one can define a normalized distance
$$
D(X,Y) = \frac{d(X, Y)}{\Eta(X, Y)} \le 1.
$$
Plugging in the definitions shows that
$$
D(X,Y) = 1 - \frac{\operatorname{I}(X; Y)}{\Eta(X, Y)}.
$$
This is known as the Rajski Distance. In a set-theoretic interpretation of information (see the figure for Conditional entropy), this is effectively the Jaccard distance between
$$
X
$$
and
$$
Y
$$
.
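An illustrative computation of d(X, Y) and the normalized (Rajski) distance from empirical joint frequencies; the sample data are arbitrary:

```python
import math
from collections import Counter

def entropy(counts, n):
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def variation_of_information(xs, ys):
    n = len(xs)
    h_x = entropy(Counter(xs).values(), n)
    h_y = entropy(Counter(ys).values(), n)
    h_xy = entropy(Counter(zip(xs, ys)).values(), n)
    mi = h_x + h_y - h_xy                    # I(X;Y)
    d = h_xy - mi                            # = H(X|Y) + H(Y|X)
    D = d / h_xy if h_xy else 0.0            # normalized (Rajski) distance, <= 1
    return d, D

xs = [0, 0, 1, 1, 2, 2, 0, 1]
ys = [0, 0, 1, 1, 1, 2, 0, 2]
print(variation_of_information(xs, ys))
```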
|
https://en.wikipedia.org/wiki/Mutual_information
|
passage: Hence, it is usual to work with more definite statements, either asserting or denying, the existence of an infinite family of such zeros, such as in:
- Conjecture ("no Siegel zeros"): If β_D denotes the largest real zero of
$$
L(s,\chi_D)
$$
, then
$$
1-\beta_D \gg \frac{1}{\log|D|}.
$$
The possibility of existence or non-existence of Siegel zeros has a large impact in closely related subjects of number theory, with the "no Siegel zeros" conjecture serving as a weaker (although powerful, and sometimes fully sufficient) substitute for GRH (see below for an example involving Siegel–Tatsuzawa's Theorem and the idoneal number problem). An equivalent formulation of "no Siegel zeros" that does not reference zeros explicitly is the statement:
$$
\frac{L'}{L}(1,\chi_D) = O(\log|D|).
$$
The equivalence can be deduced for example by using the zero-free regions and classical estimates for the number of non-trivial zeros of
$$
L(s,\chi)
$$
up to a certain height.
## Landau–Siegel estimates
The first breakthrough in dealing with these zeros came from Landau, who showed that there exists an effectively computable constant
$$
B>0
$$
such that, for any
$$
\chi_D
$$
and
$$
\chi_{D'}
$$
real primitive characters to distinct moduli, if
$$
\beta, \beta'
$$
are real zeros of
$$
L(s,\chi_D), L(s,\chi_{D'})
$$
respectively, then
$$
\min\{\beta,\beta'\} < 1- \frac{B}{\log|DD'|}.
$$
This is saying that, if Siegel zeros exist, then they cannot be too numerous.
|
https://en.wikipedia.org/wiki/Siegel_zero
|
passage: I if and only if its group C*-algebra is type I.
However, if a C*-algebra has non-type I representations, then by results of James Glimm it also has representations of type II and type III. Thus for C*-algebras and locally compact groups, it is only meaningful to speak of type I and non type I properties.
## C*-algebras and quantum field theory
In quantum mechanics, one typically describes a physical system with a C*-algebra A with unit element; the self-adjoint elements of A (elements x with x* = x) are thought of as the observables, the measurable quantities, of the system. A state of the system is defined as a positive functional on A (a C-linear map φ : A → C with φ(u*u) ≥ 0 for all u ∈ A) such that φ(1) = 1. The expected value of the observable x, if the system is in state φ, is then φ(x).
This C*-algebra approach is used in the Haag–Kastler axiomatization of local quantum field theory, where every open set of Minkowski spacetime is associated with a C*-algebra.
|
https://en.wikipedia.org/wiki/C%2A-algebra
|
passage: Post-1998 science modifies these results slightly; for example, the modern estimate of a solar-mass black hole lifetime is 10^67 years.
The power emitted by a black hole in the form of Hawking radiation can be estimated for the simplest case of a nonrotating, non-charged Schwarzschild black hole of mass M. Combining the formulas for the Schwarzschild radius of the black hole, the Stefan–Boltzmann law of blackbody radiation, the above formula for the temperature of the radiation, and the formula for the surface area of a sphere (the black hole's event horizon), several equations can be derived.
The Hawking radiation temperature is:
$$
T_\mathrm{H} = \frac{\hbar c^3}{8 \pi G M k_\mathrm{B}}
$$
The Bekenstein–Hawking luminosity of a black hole, under the assumption of pure photon emission (i.e. that no other particles are emitted) and under the assumption that the horizon is the radiating surface is:
$$
P = \frac{\hbar c^6}{15360 \pi G^2 M^2}
$$
where P is the luminosity, i.e., the radiated power, ħ is the reduced Planck constant, c is the speed of light, G is the gravitational constant and M is the mass of the black hole. It is worth mentioning that the above formula has not yet been derived in the framework of semiclassical gravity.
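Evaluating the two formulas above for one solar mass, and additionally integrating the mass loss dM/dt = −P/c² to get a lifetime t = 5120 π G² M³ / (ħ c⁴) (a standard consequence of the luminosity formula, not stated in this excerpt), reproduces the 10^67-year order of magnitude quoted earlier; the constants are rounded CODATA-style values:

```python
import math

hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m / s
G    = 6.67430e-11       # m^3 / (kg s^2)
k_B  = 1.380649e-23      # J / K
M    = 1.989e30          # kg, one solar mass

T_H = hbar * c**3 / (8 * math.pi * G * M * k_B)        # Hawking temperature
P   = hbar * c**6 / (15360 * math.pi * G**2 * M**2)    # Bekenstein-Hawking luminosity
t   = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)     # evaporation time, seconds

print(f"T_H ~ {T_H:.2e} K")               # ~ 6e-8 K
print(f"P   ~ {P:.2e} W")                 # ~ 9e-29 W
print(f"t   ~ {t / 3.156e7:.1e} years")   # ~ 2e67 years
```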
|
https://en.wikipedia.org/wiki/Hawking_radiation
|
passage: The Tsirelson bounds are named after Boris S. Tsirelson (or Cirel'son, in a different transliteration), the author of the article in which the first one was derived.
## Bound for the CHSH inequality
The first Tsirelson bound was derived as an upper bound on the correlations measured in the CHSH inequality. It states that if we have four (Hermitian) dichotomic observables
$$
A_0
$$
,
$$
A_1
$$
,
$$
B_0
$$
,
$$
B_1
$$
(i.e., two observables for Alice and two for Bob) with outcomes
$$
+1, -1
$$
such that
$$
[A_i, B_j] = 0
$$
for all
$$
i, j
$$
, then
$$
\langle A_0 B_0 \rangle + \langle A_0 B_1 \rangle + \langle A_1 B_0 \rangle - \langle A_1 B_1 \rangle \le 2\sqrt{2}.
$$
For comparison, in the classical case (or local realistic case) the upper bound is 2, whereas if any arbitrary assignment of
$$
+1, -1
$$
is allowed, it is 4. The Tsirelson bound is attained already if Alice and Bob each make measurements on a qubit, the simplest non-trivial quantum system.
Several proofs of this bound exist, but perhaps the most enlightening one is based on the Khalfin–Tsirelson–Landau identity.
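A direct numerical confirmation that 2√2 is attained with single-qubit measurements: Alice measures σz and σx, Bob measures (σz ± σx)/√2, on the maximally entangled state (|00⟩ + |11⟩)/√2. This particular choice of state and observables is the standard textbook one, assumed here rather than taken from the excerpt:

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

A0, A1 = Z, X
B0 = (Z + X) / np.sqrt(2)
B1 = (Z - X) / np.sqrt(2)

psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

def corr(A, B):
    """Expectation value <psi| A (x) B |psi>."""
    return float(np.real(psi.conj() @ np.kron(A, B) @ psi))

S = corr(A0, B0) + corr(A0, B1) + corr(A1, B0) - corr(A1, B1)
print(S, 2 * np.sqrt(2))   # both ~ 2.828
```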
|
https://en.wikipedia.org/wiki/Tsirelson%27s_bound
|
passage: The ensemble
### Kalman filter
(EnKF) is a recursive filter suitable for problems with a large number of variables, such as discretizations of partial differential equations in geophysical models. The EnKF originated as a version of the Kalman filter for large problems (essentially, the covariance matrix is replaced by the sample covariance), and it is now an important data assimilation component of ensemble forecasting. EnKF is related to the particle filter (in this context, a particle is the same thing as an ensemble member) but the EnKF makes the assumption that all probability distributions involved are Gaussian; when it is applicable, it is much more efficient than the particle filter.
## Introduction
The ensemble Kalman filter (EnKF) is a Monte Carlo implementation of the Bayesian update problem: given a probability density function (PDF) of the state of the modeled system (the prior, called often the forecast in geosciences) and the data likelihood, Bayes' theorem is used to obtain the PDF after the data likelihood has been taken into account (the posterior, often called the analysis). This is called a Bayesian update. The Bayesian update is combined with advancing the model in time, incorporating new data from time to time. The original Kalman filter, introduced in 1960, assumes that all PDFs are Gaussian (the Gaussian assumption) and provides algebraic formulas for the change of the mean and the covariance matrix by the Bayesian update, as well as a formula for advancing the mean and covariance in time provided the system is linear.
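A minimal sketch of one analysis step of a stochastic ("perturbed observations") EnKF under the linear-Gaussian assumptions described above; the matrix shapes, the observation operator H and all numerical values are illustrative assumptions:

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """One stochastic EnKF update.

    X : (n, N) forecast ensemble (n state variables, N members)
    y : (m,)   observation vector
    H : (m, n) linear observation operator
    R : (m, m) observation-error covariance
    """
    N = X.shape[1]
    P = np.cov(X)                                    # sample covariance replaces the exact one
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)                       # analysis (posterior) ensemble

# Toy usage: 3 state variables, 40 members, 1 observed component.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(3, 40))
H = np.array([[1.0, 0.0, 0.0]])
R = np.array([[0.1]])
Xa = enkf_analysis(X, np.array([0.5]), H, R, rng)
print(Xa.mean(axis=1))   # ensemble mean pulled toward the observed value 0.5
```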
|
https://en.wikipedia.org/wiki/Ensemble_Kalman_filter
|
passage: ## Hardware exceptions
There is no clear consensus as to the exact meaning of an exception with respect to hardware. From the implementation point of view, it is handled identically to an interrupt: the processor halts execution of the current program, looks up the interrupt handler in the interrupt vector table for that exception or interrupt condition, saves state, and switches control.
## IEEE 754 floating-point exceptions
Exception handling in the IEEE 754 floating-point standard refers in general to exceptional conditions and defines an exception as "an event that occurs when an operation on some particular operands has no outcome suitable for every reasonable application. That operation might signal one or more exceptions by invoking the default or, if explicitly requested, a language-defined alternate handling. "
By default, an IEEE 754 exception is resumable and is handled by substituting a predefined value for different exceptions, e.g. infinity for a divide by zero exception, and providing status flags for later checking of whether the exception occurred (see C99 programming language for a typical example of handling of IEEE 754 exceptions). An exception-handling style enabled by the use of status flags involves: first computing an expression using a fast, direct implementation; checking whether it failed by testing status flags; and then, if necessary, calling a slower, more numerically robust, implementation.
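A sketch of that fast-then-robust pattern in Python, using NumPy's floating-point error state and a finiteness check as a stand-in for testing hardware IEEE 754 status flags (which would be done with C99's fenv facilities in the setting mentioned above); the norm computation and the inputs are illustrative:

```python
import numpy as np

def robust_norm(x):
    """Fast 2-norm first; if it overflowed, fall back to a scaled computation."""
    with np.errstate(over="ignore"):                  # let the fast path overflow quietly
        fast = np.sqrt(np.sum(np.square(x)))          # fast, direct implementation
    if np.isfinite(fast):                             # stands in for testing status flags
        return float(fast)
    m = np.max(np.abs(x))                             # slower, numerically robust path
    return float(m * np.sqrt(np.sum(np.square(x / m))))

print(robust_norm(np.array([3.0, 4.0])))        # 5.0 via the fast path
print(robust_norm(np.array([1e200, 1e200])))    # ~1.414e200 via the fallback
```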
The IEEE 754 standard uses the term "trapping" to refer to the calling of a user-supplied exception-handling routine on exceptional conditions, and is an optional feature of the standard.
|
https://en.wikipedia.org/wiki/Exception_handling
|
passage: Higher symmetries allow for nonlinear, aperiodic behaviour which manifest as a variety of complex non-equilibrium phenomena that do not arise in the linearised U(1) theory, such as multiple stable states, symmetry breaking, chaos and emergence.
What are called Maxwell's equations today, are in fact a simplified version of the original equations reformulated by Heaviside, FitzGerald, Lodge and Hertz. The original equations used Hamilton's more expressive quaternion notation, a kind of Clifford algebra, which fully subsumes the standard Maxwell vectorial equations largely used today. In the late 1880s there was a debate over the relative merits of vector analysis and quaternions. According to Heaviside the electromagnetic potential field was purely metaphysical, an arbitrary mathematical fiction, that needed to be "murdered". It was concluded that there was no need for the greater physical insights provided by the quaternions if the theory was purely local in nature. Local vector analysis has become the dominant way of using Maxwell's equations ever since. However, this strictly vectorial approach has led to a restrictive topological understanding in some areas of electromagnetism, for example, a full understanding of the energy transfer dynamics in Tesla's oscillator-shuttle-circuit can only be achieved in quaternionic algebra or higher SU(2) symmetries. It has often been argued that quaternions are not compatible with special relativity, but multiple papers have shown ways of incorporating relativity.
|
https://en.wikipedia.org/wiki/Zero-point_energy
|
passage: We use the previous paragraph (Perturbation of an implicit function) with somewhat different notations suited to eigenvalue perturbation; we introduce
$$
\tilde{f}: \R^{2n^2} \times \R^{n+1} \to \R^{n+1}
$$
, with
-
$$
\tilde{f} (K,M, \lambda,x)= \binom{f(K,M,\lambda,x)}{f_{n+1}(x)}
$$
with
$$
f(K,M, \lambda,x) =Kx -\lambda x, f_{n+1}(M,x)=x^T Mx -1
$$
. In order to use the Implicit function theorem, we study the invertibility of the Jacobian
$$
J_{\tilde{f};\lambda,x} (K,M;\lambda_{0i},x_{0i})
$$
with
$$
J_{\tilde{f};\lambda,x} (K,M;\lambda_i,x_i)(\delta \lambda,\delta x)=\binom{-Mx_i}{0} \delta \lambda +\binom{K-\lambda M}{2 x_i^T M} \delta x_i
$$
. Indeed, the solution of
$$
J_{\tilde{f};\lambda_{0i},x_{0i}} (K,M;\lambda_{0i},x_{0i})(\delta \lambda_i,\delta x_i) = \binom{y}{y_{n+1}}
$$
may be derived with computations similar to the derivation of the expansion.
|
https://en.wikipedia.org/wiki/Eigenvalue_perturbation
|
passage: Note that not every orthonormal discrete wavelet basis can be associated to a multiresolution analysis; for example, the Journe wavelet admits no multiresolution analysis.
From the mother and father wavelets one constructs the subspaces
$$
V_m=\operatorname{span}(\phi_{m,n}:n\in\Z),\text{ where }\phi_{m,n}(t)=2^{-m/2}\phi(2^{-m}t-n)
$$
$$
W_m=\operatorname{span}(\psi_{m,n}:n\in\Z),\text{ where }\psi_{m,n}(t)=2^{-m/2}\psi(2^{-m}t-n).
$$
The father wavelet
$$
V_{i}
$$
keeps the time domain properties, while the mother wavelets
$$
W_{i}
$$
keep the frequency domain properties.
From these it is required that the sequence
$$
\{0\}\subset\dots\subset V_{1}\subset V_{0}\subset V_{-1}\subset V_{-2}\subset\dots\subset L^2(\R)
$$
forms a multiresolution analysis of L2 and that the subspaces
$$
\dots,W_1,W_0,W_{-1},\dots
$$
are the orthogonal "differences" of the above sequence, that is, Wm is the orthogonal complement of Vm inside the subspace Vm−1,
$$
V_m\oplus W_m=V_{m-1}.
$$
In analogy to the sampling theorem one may conclude that the space Vm with sampling distance 2^m more or less covers the frequency baseband from 0 to 1/2^(m+1).
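A concrete illustration with the Haar wavelet: one analysis step splits a signal (in V_{m−1}) into coarse coefficients (V_m) and detail coefficients (W_m), and the two parts together reconstruct the signal exactly, mirroring V_m ⊕ W_m = V_{m−1}; the sample signal is arbitrary:

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform: V_{m-1} -> V_m (+) W_m."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # coarse (father-wavelet) coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (mother-wavelet) coefficients
    return a, d

def haar_inverse(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_step(x)
assert np.allclose(haar_inverse(a, d), x)   # V_m and W_m together recover V_{m-1}
```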
|
https://en.wikipedia.org/wiki/Wavelet
|
passage: The Coxeter groups of type Dn, E6, E7, and E8 are the symmetry groups of certain semiregular polytopes.
## Affine Coxeter groups
The affine Coxeter groups form a second important series of Coxeter groups. These are not finite themselves, but each contains a normal abelian subgroup such that the corresponding quotient group is finite. In each case, the quotient group is itself a Coxeter group, and the Coxeter graph of the affine Coxeter group is obtained from the Coxeter graph of the quotient group by adding another vertex and one or two additional edges. For example, for n ≥ 2, the graph consisting of n+1 vertices in a circle is obtained from An in this way, and the corresponding Coxeter group is the affine Weyl group of An (the affine symmetric group). For n = 2, this can be pictured as a subgroup of the symmetry group of the standard tiling of the plane by equilateral triangles.
In general, given a root system, one can construct the associated Stiefel diagram, consisting of the hyperplanes orthogonal to the roots along with certain translates of these hyperplanes. The affine Coxeter group (or affine Weyl group) is then the group generated by the (affine) reflections about all the hyperplanes in the diagram.
|
https://en.wikipedia.org/wiki/Coxeter_group
|
passage: In 1948, Wendell H. Furry proposed to use the form of the diffusion rates found in kinetic theory as a framework for the new phenomenological approach to diffusion in gases. This approach was developed further by F.A. Williams and S.H. Lam. For the diffusion velocities in multicomponent gases (N components) they used
$$
v_i=-\left(\sum_{j=1}^N D_{ij} \mathbf{d}_j + D_i^{(T)} \, \nabla (\ln T) \right)\, ;
$$
$$
\mathbf{d}_j=\nabla X_j + (X_j-Y_j)\,\nabla (\ln P) + \mathbf{g}_j\, ;
$$
$$
\mathbf{g}_j=\frac{\rho}{P} \left( Y_j \sum_{k=1}^N Y_k (f_k-f_j) \right)\, .
$$
Here,
$$
D_{ij}
$$
is the diffusion coefficient matrix,
$$
D_i^{(T)}
$$
is the thermal diffusion coefficient,
$$
f_i
$$
is the body force per unit mass acting on the ith species,
$$
X_i=P_i/P
$$
is the partial pressure fraction of the ith species (and
$$
P_i
$$
is the partial pressure),
$$
Y_i=\rho_i/\rho
$$
is the mass fraction of the ith species, and
$$
\sum_i X_i=\sum_i Y_i=1.
$$
|
https://en.wikipedia.org/wiki/Diffusion
|
passage: These results on interpretation have been suggested to be due to positron production in annihilation events of massive dark matter particles.
Cosmic ray antiprotons also have a much higher average energy than their normal-matter counterparts (protons). They arrive at Earth with a characteristic energy maximum of 2 GeV, indicating their production in a fundamentally different process from cosmic ray protons, which on average have only one-sixth of the energy.
There is no evidence of complex antimatter atomic nuclei, such as antihelium nuclei (i.e., anti-alpha particles), in cosmic rays. These are actively being searched for. A prototype of the AMS-02 designated AMS-01, was flown into space aboard the on STS-91 in June 1998. By not detecting any antihelium at all, the AMS-01 established an upper limit of for the antihelium to helium flux ratio.
Secondary cosmic rays
When cosmic rays enter the Earth's atmosphere, they collide with atoms and molecules, mainly oxygen and nitrogen. The interaction produces a cascade of lighter particles, a so-called air shower secondary radiation that rains down, including x-rays, protons, alpha particles, pions, muons, electrons, neutrinos, and neutrons. All of the secondary particles produced by the collision continue onward on paths within about one degree of the primary particle's original path.
Typical particles produced in such collisions are neutrons and charged mesons such as positive or negative pions and kaons.
|
https://en.wikipedia.org/wiki/Cosmic_ray
|
passage: ## In amorphous solids – glasses – optical fibers
Rayleigh scattering is an important component of the scattering of optical signals in optical fibers. Silica fibers are glasses, disordered materials with microscopic variations of density and refractive index. These give rise to energy losses due to the scattered light, with the following coefficient:
$$
\alpha_\text{scat} = \frac{8 \pi^3}{3 \lambda^4} n^8 p^2 k T_\text{f} \beta
$$
where n is the refraction index, p is the photoelastic coefficient of the glass, k is the Boltzmann constant, and β is the isothermal compressibility. Tf is a fictive temperature, representing the temperature at which the density fluctuations are "frozen" in the material.
## In porous materials
Rayleigh-type λ−4 scattering can also be exhibited by porous materials. An example is the strong optical scattering by nanoporous materials. The strong contrast in refractive index between pores and solid parts of sintered alumina results in very strong scattering, with light completely changing direction each five micrometers on average. The λ−4-type scattering is caused by the nanoporous structure (a narrow pore size distribution around ~70 nm) obtained by sintering monodispersive alumina powder.
|
https://en.wikipedia.org/wiki/Rayleigh_scattering
|
passage: Hensel lifting is a similar method that allows one to "lift" a factorization modulo p of a polynomial with integer coefficients to a factorization modulo
$$
p^n
$$
for large values of n. This is commonly used by polynomial factorization algorithms.
## Notation
There are several different conventions for writing p-adic expansions. So far this article has used a notation for p-adic expansions in which powers of p increase from right to left. With this right-to-left notation the 3-adic expansion of
$$
\tfrac15,
$$
for example, is written as
$$
\frac15 = \dots 121012102_3.
$$
When performing arithmetic in this notation, digits are carried to the left. It is also possible to write p-adic expansions so that the powers of p increase from left to right, and digits are carried to the right. With this left-to-right notation the 3-adic expansion of
$$
\tfrac15
$$
is
$$
\frac15 = 2.01210121\dots_3 \mbox{ or }
\frac1{15} = 20.1210121\dots_3.
$$
p-adic expansions may be written with other sets of digits instead of {0, 1, ..., p − 1}.
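An illustrative computation of the first few 3-adic digits of 1/5: the digits up to 3^k are the base-3 digits of the inverse of 5 modulo 3^k (Python 3.8+ is assumed for the modular inverse):

```python
def p_adic_digits(num, den, p, k):
    """First k base-p digits (lowest power first) of num/den as a p-adic number."""
    value = (num * pow(den, -1, p**k)) % p**k     # num/den reduced modulo p^k
    digits = []
    for _ in range(k):
        digits.append(value % p)
        value //= p
    return digits

# Lowest power first: [2, 0, 1, 2, 1, 0, 1, 2, 1], i.e. ...121012102 when read
# right to left, matching the expansion quoted above.
print(p_adic_digits(1, 5, 3, 9))
```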
|
https://en.wikipedia.org/wiki/P-adic_number
|
passage: For example, a massive supercomputer executing a scientific simulation may offer impressive performance, yet it is not executing a real-time computation. Conversely, once the hardware and software for an anti-lock braking system have been designed to meet its required deadlines, no further performance gains are obligatory or even useful. Furthermore, if a network server is highly loaded with network traffic, its response time may be slower, but will (in most cases) still succeed before it times out (hits its deadline). Hence, such a network server would not be considered a real-time system: temporal failures (delays, time-outs, etc.) are typically small and compartmentalized (limited in effect), but are not catastrophic failures. In a real-time system, such as the FTSE 100 Index, a slow-down beyond limits would often be considered catastrophic in its application context. The most important requirement of a real-time system is consistent output, not high throughput.
Some kinds of software, such as many chess-playing programs, can fall into either category. For instance, a chess program designed to play in a tournament with a clock will need to decide on a move before a certain deadline or lose the game, and is therefore a real-time computation, but a chess program that is allowed to run indefinitely before moving is not. In both of these cases, however, high performance is desirable: the more work a tournament chess program can do in the allotted time, the better its moves will be, and the faster an unconstrained chess program runs, the sooner it will be able to move.
|
https://en.wikipedia.org/wiki/Real-time_computing
|
passage: - Symmetry requires that, if the agents are permuted and the procedure is re-executed, then each agent receives the same value as in the original execution. This is weaker than anonymity; currently, a symmetric and proportional procedure is known for any number of agents, and it takes O(n3) queries. A symmetric and envy-free procedure is known for any number of agents, but it takes much longer – it requires n! executions of an existing envy-free procedure.
- Aristotelianity requires that, if two agents have an identical value-measure, then they receive the same value. This is weaker than symmetry; it is satisfied by any envy-free procedure. Moreover, an aristotelian and proportional procedure is known for any number of agents, and it takes O(n3) queries.
See symmetric fair cake-cutting for details and references.
A third family of procedural requirements is monotonicity: when a division procedure is re-applied with a smaller/larger cake and a smaller/larger set of agents, the utility of all agents should change in the same direction. See resource monotonicity for more details.
## Efficiency requirements
In addition to justice, it is also common to consider the economic efficiency of the division; see efficient cake-cutting. There are several levels of efficiency:
- The weaker notion is Pareto efficiency. It can be easily satisfied by just giving the entire cake to a single person; the challenge is to satisfy it in conjunction with fairness. See Efficient envy-free division.
|
https://en.wikipedia.org/wiki/Fair_cake-cutting
|
passage: But in measuring an infinitely "wiggly" fractal curve such as the Koch snowflake, one would never find a small enough straight segment to conform to the curve, because the jagged pattern would always re-appear, at arbitrarily small scales, essentially pulling a little more of the tape measure into the total length measured each time one attempted to fit it tighter and tighter to the curve. The result is that one must need infinite tape to perfectly cover the entire curve, i.e. the snowflake has an infinite perimeter.
## History
The history of fractals traces a path from chiefly theoretical studies to modern applications in computer graphics, with several notable people contributing canonical fractal forms along the way.
A common theme in traditional African architecture is the use of fractal scaling, whereby small parts of the structure tend to look similar to larger parts, such as a circular village made of circular houses.
According to Pickover, the mathematics behind fractals began to take shape in the 17th century when the mathematician and philosopher Gottfried Leibniz pondered recursive self-similarity (although he made the mistake of thinking that only the straight line was self-similar in this sense).
In his writings, Leibniz used the term "fractional exponents", but lamented that "Geometry" did not yet know of them.
|
https://en.wikipedia.org/wiki/Fractal
|
passage: 4, 5, 6, 7, 8, 9, A, B, C, D, E, 78, 88, C3A, D87, 1774, E819, E829, 7995C, 829BB, A36BC, ... 203 16 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F, 156, 173, 208, 248, 285, 4A5, 5B0, 5B1, 60B, 64B, 8C0, 8C1, 99A, AA9, AC3, CA8, E69, EA0, EA1, B8D2, 13579, 2B702, 2B722, 5A07C, 5A47C, C00E0, C00E1, C04E0, C04E1, C60E7, C64E7, C80E0, C80E1, C84E0, C84E1, ... 294
|
https://en.wikipedia.org/wiki/Narcissistic_number
|
passage: Central limit theorem : Under regularity conditions, for a sufficiently large sample,
$$
\sqrt{n}\{M_f(X_1, \dots, X_n) - f^{-1}(E_f(X_1, \dots, X_n))\}
$$
is approximately normal.
A similar result is available for Bajraktarević means and deviation means, which are generalizations of quasi-arithmetic means.
Scale-invariance: The quasi-arithmetic mean is invariant with respect to offsets and scaling of
$$
f
$$
:
$$
\forall a\ \forall b\ne0 ((\forall t\ g(t)=a+b\cdot f(t)) \Rightarrow \forall x\ M_f (x) = M_g (x))
$$
.
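A short numerical check of this invariance (the choice f = log, the affine parameters and the data are arbitrary):

```python
import math

def quasi_arithmetic_mean(xs, f, f_inv):
    """M_f(x) = f^{-1}( (1/n) * sum_i f(x_i) )."""
    return f_inv(sum(f(x) for x in xs) / len(xs))

xs = [1.0, 2.0, 4.0, 8.0]
a, b = 5.0, -3.0                                   # arbitrary offset and nonzero scale

m_f = quasi_arithmetic_mean(xs, math.log, math.exp)
m_g = quasi_arithmetic_mean(xs, lambda t: a + b * math.log(t),
                            lambda s: math.exp((s - a) / b))
print(m_f, m_g)   # identical (the geometric mean, ~2.83), as the invariance predicts
```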
## Characterization
There are several different sets of properties that characterize the quasi-arithmetic mean (i.e., each function that satisfies these properties is an f-mean for some function f).
- Mediality is essentially sufficient to characterize quasi-arithmetic means.
- Self-distributivity is essentially sufficient to characterize quasi-arithmetic means.
- Replacement: Kolmogorov proved that the five properties of symmetry, fixed-point, monotonicity, continuity, and replacement fully characterize the quasi-arithmetic means.
- Continuity is superfluous in the characterization of two variables quasi-arithmetic means. See [10] for the details.
- Balancing:
|
https://en.wikipedia.org/wiki/Quasi-arithmetic_mean
|
passage: ## Early commercial fully transistorized large-scale computers
The Philco Transac models S-1000 scientific computer and S-2000 electronic data processing computer were early commercially produced large-scale all-transistor computers; they were announced in 1957 but did not ship until sometime after the fall of 1958. The Philco computer name "Transac" stands for Transistor-Automatic-Computer. Both of these Philco computer models used the surface-barrier transistor in their circuitry designs, the world's first high-frequency transistor suitable for high-speed computers. Chicago Tribune, March 23, 1958, Article: "All Transistor Computer Put on Market by Philco", page A11 The surface-barrier transistor was developed by Philco in 1953.
RCA shipped the RCA 501 its first all-transistor computer in 1958.
In Italy, Olivetti's first commercial fully transistorized computer was the Olivetti Elea 9003, sold from 1959.
IBM
IBM, which dominated the data processing industry through most of the 20th century, introduced its first commercial transistorized computers beginning in 1958, with the IBM 7070, a ten-digit-word decimal machine. It was followed in 1959 by the IBM 7090, a 36-bit scientific machine, the highly popular IBM 1401 designed to replace punched card tabulating machines, and the desk-sized 1620, a variable length decimal machine.
|
https://en.wikipedia.org/wiki/Transistor_computer
|
passage: Researchers have established scenarios that demonstrate the threat of biohacking, such as a hacker reaching a biological sample by hiding malicious DNA on common surfaces, such as lab coats, benches, or rubber gloves, which would then contaminate the genetic data.
However, the threat of biohacking may be mitigated by using similar techniques that are used to prevent conventional injection attacks. Clinicians and researchers may mitigate a bio-hack by extracting genetic information from biological samples, and comparing the samples to identify unknown materials. Studies have shown that comparing genetic information with biological samples, to identify bio-hacking code, has been up to 95% effective in detecting malicious DNA inserts in bio-hacking attacks.
### Genetic samples as personal data
Privacy concerns in genomic research have arisen around the notion of whether or not genomic samples contain personal data, or should be regarded as physical matter. Moreover, concerns arise as some countries recognize genomic data as personal data (and apply data protection rules) while other countries regard the samples in terms of physical matter and do not apply the same data protection laws to genomic samples. The forthcoming General Data Protection Regulation (GDPR) has been cited as a potential legal instrument that may better enforce privacy regulations in bio-banking and genomic research.
However, ambiguity surrounding the definition of "personal data" in the text of the GDPR, especially regarding biological data, has led to doubts on whether regulation will be enforced for genetic samples.
|
https://en.wikipedia.org/wiki/Biological_data
|
passage: For instance, the two-dimensional sphere of radius 1 in three-dimensional Euclidean space R3 could be defined as the set of all points
$$
(x, y, z)
$$
with
$$
x^2+y^2+z^2-1=0.\,
$$
A "slanted" circle in R3 can be defined as the set of all points
$$
(x, y, z)
$$
which satisfy the two polynomial equations
$$
x^2+y^2+z^2-1=0,\,
$$
$$
x+y+z=0.\,
$$
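A tiny membership check against these two defining equations (the test points are arbitrary):

```python
import math

def on_slanted_circle(x, y, z, tol=1e-12):
    """True if (x, y, z) satisfies both polynomial equations above."""
    on_sphere = abs(x**2 + y**2 + z**2 - 1.0) < tol
    on_plane = abs(x + y + z) < tol
    return on_sphere and on_plane

s = 1.0 / math.sqrt(2.0)
print(on_slanted_circle(s, -s, 0.0))     # True: on the sphere and on the plane
print(on_slanted_circle(1.0, 0.0, 0.0))  # False: on the sphere but not on the plane
```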
### Affine varieties
First we start with a field k. In classical algebraic geometry, this field was always the complex numbers C, but many of the same results are true if we assume only that k is algebraically closed. We consider the affine space of dimension n over k, denoted An(k) (or more simply An, when k is clear from the context). When one fixes a coordinate system, one may identify An(k) with kn. The purpose of not working with kn is to emphasize that one "forgets" the vector space structure that kn carries.
A function f : An → A1 is said to be polynomial (or regular) if it can be written as a polynomial, that is, if there is a polynomial p in k[x1,...,xn] such that f(M) = p(t1,...,tn) for every point M with coordinates (t1,...,tn) in An.
|
https://en.wikipedia.org/wiki/Algebraic_geometry
|
passage: Historically anesthesia providers were almost solely utilized during surgery to administer general anesthesia in which a person is placed in a pharmacologic coma. This is performed to permit surgery without the individual responding to pain (analgesia) during surgery or remembering (amnesia) the surgery.
## Investigations
Effective practice of anesthesiology requires several areas of knowledge by the practitioner, some of which are:
- Pharmacology of commonly used drugs including inhalational anaesthetics, topical anesthetics, and vasopressors as well as numerous other drugs used in association with anesthetics (e.g., ondansetron, glycopyrrolate)
- Monitors: electrocardiography, electroencephalography, electromyography, entropy monitoring, neuromuscular monitoring, cortical stimulation mapping, and neuromorphology
- Mechanical ventilation
- Anatomical knowledge of the nervous system for nerve blocks, etc.
- Other areas of medicine (e.g., cardiology, pulmonology, obstetrics) to assess the risk of anesthesia to adequately have informed consent, and knowledge of anesthesia regarding how it affects certain age groups (neonates, pediatrics, geriatrics)
## Treatments
Many procedures or diagnostic tests do not require "general anesthesia" and can be performed using various forms of sedation or regional anesthesia, which can be performed to induce analgesia in a region of the body. For example, epidural administration of a local anesthetic is commonly performed on the mother during childbirth to reduce labor pain while permitting the mother to be awake and active in labor and delivery.
In the
|
https://en.wikipedia.org/wiki/Anesthesiology
|
passage: Considering A_n as a subgroup of the symmetric group S_n, conjugation by any odd permutation is an outer automorphism of A_n, or more precisely "represents the class of the (non-trivial) outer automorphism of A_n", but the outer automorphism does not correspond to conjugation by any particular odd element, and all conjugations by odd elements are equivalent up to conjugation by an even element.
## Structure
The Schreier conjecture asserts that Out(G) is always a solvable group when G is a finite simple group. This result is now known to be true as a corollary of the classification of finite simple groups, although no simpler proof is known.
## As dual of the center
The outer automorphism group is dual to the center in the following sense: conjugation by an element of G is an automorphism, yielding a map σ : G → Aut(G). The kernel of the conjugation map is the center, while the cokernel is the outer automorphism group (and the image is the inner automorphism group). This can be summarized by the exact sequence
$$
Z(G) \hookrightarrow G \, \overset{\sigma}{\longrightarrow} \, \mathrm{Aut}(G) \twoheadrightarrow \mathrm{Out}(G)
$$
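As a minimal numeric illustration of this sequence (a sketch assuming sympy's permutation groups, not part of the original text), one can compute the first two terms for G = S3:
```python
# Sketch: computing |Z(G)| and |Inn(G)| = |G| / |Z(G)| for G = S3 with sympy.
from sympy.combinatorics.named_groups import SymmetricGroup

G = SymmetricGroup(3)
Z = G.center()
print(G.order(), Z.order())     # 6 1  (the center is trivial)
print(G.order() // Z.order())   # 6: the order of the inner automorphism group
```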
## Applications
The outer automorphism group of a group acts on conjugacy classes, and accordingly on the character table. See details at character table: outer automorphisms.
|
https://en.wikipedia.org/wiki/Outer_automorphism_group
|
passage: Bremer support (also known as branch support) is simply the difference in number of steps between the score of the MPT(s), and the score of the most parsimonious tree that does not contain a particular clade (node, branch). It can be thought of as the number of steps you have to add to lose that clade; implicitly, it is meant to suggest how great the error in the estimate of the score of the MPT must be for the clade to no longer be supported by the analysis, although this is not necessarily what it does. Branch support values are often fairly low for modestly-sized data sets (one or two steps being typical), but they often appear to be proportional to bootstrap percentages. As data matrices become larger, branch support values often continue to increase as bootstrap values plateau at 100%. Thus, for large data matrices, branch support values may provide a more informative means to compare support for strongly-supported branches. However, interpretation of decay values is not straightforward, and they seem to be preferred by authors with philosophical objections to the bootstrap (although many morphological systematists, especially paleontologists, report both). Double-decay analysis is a decay counterpart to reduced consensus that evaluates the decay index for all possible subtree relationships (n-taxon statements) within a tree.
## Problems with maximum parsimony phylogenetic inference
### Statistical inconsistency: long-branch attraction
Maximum parsimony is an epistemologically straightforward approach that makes few mechanistic assumptions, and is popular for this reason.
|
https://en.wikipedia.org/wiki/Maximum_parsimony
|
passage: ## Stability
The stability of numerical methods for solving stiff equations is indicated by their region of absolute stability. For the BDF methods, these regions are shown in the plots below.
Ideally, the region contains the left half of the complex plane, in which case the method is said to be A-stable. However, linear multistep methods with an order greater than 2 cannot be A-stable. The stability region of the higher-order BDF methods contain a large part of the left half-plane and in particular the whole of the negative real axis. The BDF methods are the most efficient linear multistep methods of this kind.
## Further reading
- BDF Methods at the SUNDIALS wiki (SUNDIALS is a library implementing BDF methods and similar algorithms).
|
https://en.wikipedia.org/wiki/Backward_differentiation_formula
|
passage: In algebraic geometry, the Chow groups (named after Wei-Liang Chow) of an algebraic variety over any field are algebro-geometric analogs of the homology of a topological space. The elements of the Chow group are formed out of subvarieties (so-called algebraic cycles) in a similar way to how simplicial or cellular homology groups are formed out of subcomplexes. When the variety is smooth, the Chow groups can be interpreted as cohomology groups (compare Poincaré duality) and have a multiplication called the intersection product. The Chow groups carry rich information about an algebraic variety, and they are correspondingly hard to compute in general.
## Rational equivalence and Chow groups
For what follows, define a variety over a field
$$
k
$$
to be an integral scheme of finite type over
$$
k
$$
. For any scheme
$$
X
$$
of finite type over
$$
k
$$
, an algebraic cycle on
$$
X
$$
means a finite linear combination of subvarieties of
$$
X
$$
with integer coefficients. (Here and below, subvarieties are understood to be closed in
$$
X
$$
, unless stated otherwise.) For a natural number
$$
i
$$
, the group
$$
Z_i(X)
$$
of
$$
i
$$
-dimensional cycles (or
$$
i
$$
-cycles, for short) on
$$
X
$$
is the free abelian group on the set of
$$
i
$$
-dimensional subvarieties of
$$
X
$$
.
|
https://en.wikipedia.org/wiki/Chow_group
|
passage: Otherwise, A is called undecidable. A problem is called partially decidable, semi-decidable, solvable, or provable if A is a recursively enumerable set.
## Example: the halting problem in computability theory
In computability theory, the halting problem is a decision problem which can be stated as follows:
Given the description of an arbitrary program and a finite input, decide whether the program finishes running or will run forever.
Alan Turing proved in 1936 that a general algorithm running on a Turing machine that solves the halting problem for all possible program-input pairs necessarily cannot exist. Hence, the halting problem is undecidable for Turing machines.
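The heart of the proof can be sketched in a few lines of code. Assuming, purely hypothetically, a total decider halts(program, input), one constructs a program whose behavior contradicts the decider's answer:
```python
# Informal sketch of the diagonal argument (halts() is hypothetical and cannot exist).
def halts(program, argument):
    """Hypothetically decides whether program(argument) terminates."""
    raise NotImplementedError("no such total, correct decider can exist")

def paradox(program):
    # Do the opposite of what the decider predicts for program run on itself.
    if halts(program, program):
        while True:      # loop forever
            pass
    else:
        return           # halt immediately

# Feeding paradox to itself: if halts(paradox, paradox) returned True,
# paradox(paradox) would loop forever; if it returned False, it would halt.
# Either answer is wrong, so no such halts() can exist.
```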
## Relationship with Gödel's incompleteness theorem
The concepts raised by Gödel's incompleteness theorems are very similar to those raised by the halting problem, and the proofs are quite similar. In fact, a weaker form of the First Incompleteness Theorem is an easy consequence of the undecidability of the halting problem. This weaker form differs from the standard statement of the incompleteness theorem by asserting that an axiomatization of the natural numbers that is both complete and sound is impossible. The "sound" part is the weakening: it means that we require the axiomatic system in question to prove only true statements about natural numbers. Since soundness implies consistency, this weaker form can be seen as a corollary of the strong form.
|
https://en.wikipedia.org/wiki/Undecidable_problem
|
passage: The coefficients for many variations of the exponential approximations and bounds up to have been released to open access as a comprehensive dataset.
A tight approximation of the complementary error function for is given by Karagiannidis & Lioumpas (2007) who showed for the appropriate choice of parameters that
They determined , which gave a good approximation for all . Alternative coefficients are also available for tailoring accuracy for a specific application or transforming the expression into a tight bound.
A single-term lower bound is
where the parameter can be picked to minimize error on the desired interval of approximation.
Another approximation is given by Sergei Winitzki using his "global Padé approximations":
where
This is designed to be very accurate in a neighborhood of 0 and a neighborhood of infinity, and the relative error is less than 0.00035 for all real . Using the alternate value reduces the maximum relative error to about 0.00013.
This approximation can be inverted to obtain an approximation for the inverse error function:
An approximation with a maximal error of for any real argument is:
with
and
An approximation of with a maximum relative error less than in absolute value is:
for
and for
A simple approximation for real-valued arguments could be done through Hyperbolic functions:
which keeps the absolute difference
Since the error function and the Gaussian Q-function are closely related through the identity Q(x) = ½ erfc(x/√2), or equivalently erfc(x) = 2 Q(√2 x), bounds developed for the Q-function can be adapted to approximate the complementary error function.
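As a quick numerical confirmation of this relationship (a sketch assuming NumPy and SciPy, not part of the original text):
```python
# Sketch: verifying Q(x) = 0.5 * erfc(x / sqrt(2)) numerically.
import numpy as np
from scipy.special import erfc
from scipy.stats import norm

x = np.linspace(-3, 3, 7)
q_from_erfc = 0.5 * erfc(x / np.sqrt(2))
q_direct = norm.sf(x)            # Gaussian tail probability Q(x) = 1 - Phi(x)
print(np.max(np.abs(q_from_erfc - q_direct)))  # ~1e-16, i.e. the two agree
```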
|
https://en.wikipedia.org/wiki/Error_function
|
passage: ### Octonion representations
It can be noted that a generation of 16 fermions can be put into the form of an octonion with each element of the octonion being an 8-vector. If the 3 generations are then put in a 3x3 hermitian matrix with certain additions for the diagonal elements then these matrices form an exceptional (Grassmann) Jordan algebra, which has the symmetry group of one of the exceptional Lie groups (, , , or ) depending on the details.
$$
\psi=
\begin{bmatrix}
a & e & \mu \\
\overline{e} & b & \tau \\
\overline{\mu} & \overline{\tau} & c
\end{bmatrix}
$$
$$
\ [\psi_A,\psi_B] \subset \mathrm{J}_3(\mathbb{O})\
$$
Because they are fermions the anti-commutators of the Jordan algebra become commutators. It is known that has subgroup and so is big enough to include the Standard Model. An gauge group, for example, would have 8 neutral bosons, 120 charged bosons and 120 charged anti-bosons. To account for the 248 fermions in the lowest multiplet of , these would either have to include anti-particles (and so have baryogenesis), have new undiscovered particles, or have gravity-like (spin connection) bosons affecting elements of the particles spin direction. Each of these possesses theoretical problems.
|
https://en.wikipedia.org/wiki/Grand_Unified_Theory
|
passage: A diffraction grating can be considered to be a multiple-beam interferometer; since the peaks which it produces are generated by interference between the light transmitted by each of the elements in the grating; see interference vs. diffraction for further discussion.
## Complex valued wave functions
Mechanical and gravity waves can be directly observed: they are real-valued wave functions; optical and matter waves cannot be directly observed: they are complex valued wave functions. Some of the differences between real valued and complex valued wave interference include:
The interference involves different types of mathematical functions: A classical wave is a real function representing the displacement from an equilibrium position; an optical or quantum wavefunction is a complex function. A classical wave at any point can be positive or negative; the quantum probability function is non-negative.
Any two different real waves in the same medium interfere; complex waves must be coherent to interfere. In practice this means the waves must come from the same source and have similar frequencies.
Real wave interference is obtained simply by adding the displacements from equilibrium (or amplitudes) of the two waves; In complex wave interference, we measure the modulus of the wavefunction squared.
### Optical wave interference
Because the frequency of light waves (~1014 Hz) is too high for currently available detectors to detect the variation of the electric field of the light, it is possible to observe only the intensity of an optical interference pattern. The intensity of the light at a given point is proportional to the square of the average amplitude of the wave.
|
https://en.wikipedia.org/wiki/Wave_interference
|
passage: The fluctuating velocity field gives rise to fluctuating stresses (both tangential and normal) that act on the air-water interface. The normal stress, or fluctuating pressure acts as a forcing term (much like pushing a swing introduces a forcing term). If the frequency and wavenumber
$$
\scriptstyle\left(\omega,k\right)
$$
of this forcing term match a mode of vibration of the capillary-gravity wave (as derived above), then there is a resonance, and the wave grows in amplitude. As with other resonance effects, the amplitude of this wave grows linearly with time.
The air-water interface is now endowed with a surface roughness due to the capillary-gravity waves, and a second phase of wave growth takes place. A wave established on the surface either spontaneously as described above, or in laboratory conditions, interacts with the turbulent mean flow in a manner described by Miles. This is the so-called critical-layer mechanism. A critical layer forms at a height where the wave speed c equals the mean turbulent flow U. As the flow is turbulent, its mean profile is logarithmic, and its second derivative is thus negative. This is precisely the condition for the mean flow to impart its energy to the interface through the critical layer. This supply of energy to the interface is destabilizing and causes the amplitude of the wave on the interface to grow in time. As in other examples of linear instability, the growth rate of the disturbance in this phase is exponential in time.
|
https://en.wikipedia.org/wiki/Gravity_wave
|
passage: ## Software implementations
Several heat map software implementations are freely available:
- R, a free software environment for statistical computing and graphics, contains several functions to trace heat maps,
- Gnuplot, a universal and free command-line plotting program, can trace 2D and 3D heat maps.
- Google Fusion Tables can generate a heat map from a Google Sheets spreadsheet limited to 1000 points of geographic data.
- Dave Green's 'cubehelix' colour scheme provides resources for a colour scheme that prints as a monotonically increasing greyscale on black and white postscript devices.
- Openlayers3 can render a heat map layer of a selected property of all geographic features in a vector layer.
- D3.js, AnyChart and Highcharts are JavaScript libraries for data visualization that provide the ability to create interactive heat map charts, from basic to highly customized, as part of their solutions.
- Python, a widely used language for data analysis and visualization, supports several libraries for creating heat maps:
- Matplotlib’s `imshow()` function visualizes 2D numerical arrays as color-coded images, with control over color mapping and axes.
- Seaborn’s `heatmap()` function provides an aesthetically refined heat map with minimal code, often used with Pandas DataFrames.
- Plotly’s `go.Heatmap()` function creates interactive HTML-based heat maps. It allows for x- and y-axis labels, 2D matrices, custom color scales, and detailed hover information.
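A minimal sketch of the Matplotlib and Seaborn calls mentioned above (the data here is random and purely illustrative):
```python
# Sketch: the same random matrix rendered as a heat map with Matplotlib and Seaborn.
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

data = np.random.rand(8, 12)            # illustrative 8x12 matrix

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
im = ax1.imshow(data, cmap="viridis")   # Matplotlib: color-coded image of the array
fig.colorbar(im, ax=ax1)
sns.heatmap(data, ax=ax2, cmap="viridis", cbar=True)  # Seaborn: heat map with minimal code
plt.tight_layout()
plt.show()
```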
|
https://en.wikipedia.org/wiki/Heat_map
|
passage: Mammals are also hunted and raced for sport, kept as pets and working animals of various types, and are used as model organisms in science. Mammals have been depicted in art since Paleolithic times, and appear in literature, film, mythology, and religion. Decline in numbers and extinction of many mammals is primarily driven by human poaching and habitat destruction, primarily deforestation.
## Classification
Mammal classification has been through several revisions since Carl Linnaeus initially defined the class, and at present, no classification system is universally accepted. McKenna & Bell (1997) and Wilson & Reeder (2005) provide useful recent compendiums. Simpson (1945) provides systematics of mammal origins and relationships that had been taught universally until the end of the 20th century.
However, since 1945, a large amount of new and more detailed information has gradually been found: The paleontological record has been recalibrated, and the intervening years have seen much debate and progress concerning the theoretical underpinnings of systematisation itself, partly through the new concept of cladistics. Though fieldwork and lab work progressively outdated Simpson's classification, it remains the closest thing to an official classification of mammals, despite its known issues.
Most mammals, including the six most species-rich orders, belong to the placental group. The three largest orders in numbers of species are Rodentia: mice, rats, porcupines, beavers, capybaras, and other gnawing mammals; Chiroptera: bats; and Eulipotyphla: shrews, moles, and solenodons.
|
https://en.wikipedia.org/wiki/Mammal
|
passage: The notion of setwise convergence formalizes the assertion that the measure of each measurable set should converge:
$$
\mu_n(A) \to \mu(A)
$$
Again, no uniformity over the set is required.
Intuitively, considering integrals of 'nice' functions, this notion provides more uniformity than weak convergence. As a matter of fact, when considering sequences of measures with uniformly bounded
variation on a Polish space, setwise convergence implies the convergence
$$
\int f\, d\mu_n \to \int f\, d\mu
$$
for any bounded measurable function f.
As before, this convergence is non-uniform in f.
The notion of total variation convergence formalizes the assertion that the measure of all measurable sets should converge uniformly, i.e. for every ε > 0 there exists N such that
$$
|\mu_n(A) - \mu(A)| < \varepsilon
$$
for every n > N and for every measurable set A. As before, this implies convergence of integrals against bounded measurable functions, but this time convergence is uniform over all functions bounded by any fixed constant.
## Total variation convergence of measures
This is the strongest notion of convergence shown on this page and is defined as follows. Let
$$
(X, \mathcal{F})
$$
be a measurable space.
|
https://en.wikipedia.org/wiki/Convergence_of_measures%23Weak_convergence_of_measures
|
passage: The Dirac matrices are a representation of , showing the equivalence with matrix representations used by physicists.
### Homogeneous models
Homogeneous models generally refer to a projective representation in which the elements of the one-dimensional subspaces of a vector space represent points of a geometry.
In a geometric algebra of a space of
$$
n
$$
dimensions, the rotors represent a set of transformations with
$$
n(n-1)/2
$$
degrees of freedom, corresponding to rotations – for example,
$$
3
$$
when
$$
n=3
$$
and
$$
6
$$
when n = 4. Geometric algebra is often used to model a projective space, i.e. as a homogeneous model: a point, line, plane, etc. is represented by an equivalence class of elements of the algebra that differ by an invertible scalar factor.
The rotors in a space of dimension
$$
n+1
$$
have
$$
n(n-1)/2+n
$$
degrees of freedom, the same as the number of degrees of freedom in the rotations and translations combined for an n-dimensional space.
This is the case in Projective Geometric Algebra (PGA), which is used to represent Euclidean isometries in Euclidean geometry (thereby covering the large majority of engineering applications of geometry). In this model, a degenerate dimension is added to the three Euclidean dimensions to form the algebra .
|
https://en.wikipedia.org/wiki/Geometric_algebra
|
passage: ### Erlang distribution
For Z with Erlang distribution (which is the sum of n exponential distributions) we use the fact that the probability distribution of the sum of independent random variables is equal to the convolution of their probability distributions. So if
$$
Z = Y_1 + \cdots + Y_n
$$
with the Yi independent then
$$
\widetilde Z(s) = \widetilde Y_1(s) \cdots \widetilde Y_n(s)
$$
therefore in the case where Z has an Erlang distribution,
$$
\widetilde Z(s) = \left( \frac{\lambda}{\lambda+s} \right)^n.
$$
### Uniform distribution
For U with uniform distribution on the interval (a,b), the transform is given by
$$
\widetilde U(s) = \int_a^b e^{-st} \frac{1}{b-a} dt = \frac{e^{-sa}-e^{-sb}}{s(b-a)}.
$$
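These closed forms can be verified symbolically; the sketch below assumes the sympy library (an assumption, not part of the original text):
```python
# Sketch: checking the uniform and exponential transforms by direct integration.
from sympy import symbols, exp, integrate, simplify, oo

s, t, a, b, lam = symbols('s t a b lambda', positive=True)

# Uniform(a, b): integral of e^(-s t)/(b - a) over (a, b) vs. the closed form.
uniform = integrate(exp(-s*t) / (b - a), (t, a, b))
closed = (exp(-s*a) - exp(-s*b)) / (s*(b - a))
print(simplify(uniform - closed))                      # 0

# Exponential(lambda): transform lambda/(lambda + s); its n-th power is the Erlang case.
expo = integrate(lam * exp(-lam*t) * exp(-s*t), (t, 0, oo))
print(simplify(expo - lam/(lam + s)))                  # 0
```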
|
https://en.wikipedia.org/wiki/Laplace%E2%80%93Stieltjes_transform
|
passage: Typically the array's size is adjusted by manipulating a beginning and ending index. The algorithm exhibits a logarithmic order of growth because it essentially divides the problem domain in half with each pass.
Example implementation of binary search in C:
```C
/*
Call binary_search with proper initial conditions.
INPUT:
data is an array of integers SORTED in ASCENDING order,
toFind is the integer to search for,
count is the total number of elements in the array
OUTPUT:
result of binary_search
*/
int search(int *data, int toFind, int count)
{
// Start = 0 (beginning index)
// End = count - 1 (top index)
return binary_search(data, toFind, 0, count-1);
}
/*
Binary Search Algorithm.
INPUT:
data is an array of integers SORTED in ASCENDING order,
toFind is the integer to search for,
start is the minimum array index,
end is the maximum array index
OUTPUT:
position of the integer toFind within array data,
-1 if not found
*/
int binary_search(int *data, int toFind, int start, int end)
{
//Get the midpoint.
int mid = start + (end - start)/2; //Integer division
if (start > end) //Stop condition (base case)
return -1;
else if (data[mid] == toFind) //Found, return index
return mid;
else if (data[mid] > toFind) //Data is greater than toFind, search lower half
return binary_search(data, toFind, start, mid-1);
else //Data is less than toFind, search upper half
return binary_search(data, toFind, mid+1, end);
}
```
|
https://en.wikipedia.org/wiki/Recursion_%28computer_science%29
|
passage: The Cartan distribution is spanned by all tangent planes to graphs of holonomic sections; that is, sections of the form jrφ for φ a section of π.
The annihilator of the Cartan distribution is a space of differential one-forms called contact forms, on Jr(π). The space of differential one-forms on Jr(π) is denoted by and the space of contact forms is denoted by . A one form is a contact form provided its pullback along every prolongation is zero. In other words, is a contact form if and only if
for all local sections σ of π over M.
The Cartan distribution is the main geometrical structure on jet spaces and plays an important role in the geometric theory of partial differential equations. The Cartan distributions are completely non-integrable. In particular, they are not involutive. The dimension of the Cartan distribution grows with the order of the jet space. However, on the space of infinite jets J∞ the Cartan distribution becomes involutive and finite-dimensional: its dimension coincides with the dimension of the base manifold M.
Example
Consider the case (E, π, M), where E ≃ R2 and M ≃ R. Then, (J1(π), π, M) defines the first jet bundle, and may be coordinated by (x, u, u1), where
for all p ∈ M and σ in Γp(π).
|
https://en.wikipedia.org/wiki/Jet_bundle
|
passage: - Plywood, 3400 BC, by the Ancient Mesopotamians; gluing wood at different angles gives better properties than natural wood.
- Cartonnage, layers of linen or papyrus soaked in plaster dates to the First Intermediate Period of Egypt c. 2181–2055 BC and was used for death masks.
- Cob mud bricks, or mud walls, (using mud (clay) with straw or gravel as a binder) have been used for thousands of years.
- Concrete was described by Vitruvius, writing around 25 BC in his Ten Books on Architecture, distinguished types of aggregate appropriate for the preparation of lime mortars. For structural mortars, he recommended pozzolana, which were volcanic sands from the sandlike beds of Pozzuoli brownish-yellow-gray in colour near Naples and reddish-brown at Rome. Vitruvius specifies a ratio of 1 part lime to 3 parts pozzolana for cements used in buildings and a 1:2 ratio of lime to pulvis Puteolanus for underwater work, essentially the same ratio mixed today for concrete used at sea. Natural cement-stones, after burning, produced cements used in concretes from post-Roman times into the 20th century, with some properties superior to manufactured Portland cement.
- Papier-mâché, a composite of paper and glue, has been used for hundreds of years.
-
|
https://en.wikipedia.org/wiki/Composite_material
|
passage: u, v are both independent, the coefficients of du, dv must be zero. So we can write out equations for the coefficients:
$$
\begin{align}
\frac{\partial F}{\partial x} \frac{\partial x}{\partial u} +\frac{\partial F}{\partial y} \frac{\partial y}{\partial u} & = -\frac{\partial F}{\partial u} \\[6pt]
\frac{\partial G}{\partial x} \frac{\partial x}{\partial u} +\frac{\partial G}{\partial y} \frac{\partial y}{\partial u} & = -\frac{\partial G}{\partial u} \\[6pt]
\frac{\partial F}{\partial x} \frac{\partial x}{\partial v} +\frac{\partial F}{\partial y} \frac{\partial y}{\partial v} & = -\frac{\partial F}{\partial v} \\[6pt]
\frac{\partial G}{\partial x} \frac{\partial x}{\partial v} +\frac{\partial G}{\partial y} \frac{\partial y}{\partial v} & = -\frac{\partial G}{\partial v}.
\end{align}
$$
Now, by Cramer's rule, we see that:
$$
|
https://en.wikipedia.org/wiki/Cramer%27s_rule
|
passage: G. H. Hardy in A Mathematician's Apology expressed the belief that the aesthetic considerations are, in themselves, sufficient to justify the study of pure mathematics. He also identified other criteria such as significance, unexpectedness, and inevitability, which contribute to mathematical aesthetics. Paul Erdős expressed this sentiment more ironically by speaking of "The Book", a supposed divine collection of the most beautiful proofs. The 1998 book Proofs from THE BOOK, inspired by Erdős, is a collection of particularly succinct and revelatory mathematical arguments. Some examples of particularly elegant results included are Euclid's proof that there are infinitely many prime numbers and the fast Fourier transform for harmonic analysis.
Some feel that to consider mathematics a science is to downplay its artistry and history in the seven traditional liberal arts. One way this difference of viewpoint plays out is in the philosophical debate as to whether mathematical results are created (as in art) or discovered (as in science). The popularity of recreational mathematics is another sign of the pleasure many find in solving mathematical questions.
## Cultural impact
### Artistic expression
Notes that sound well together to a Western ear are sounds whose fundamental frequencies of vibration are in simple ratios. For example, an octave doubles the frequency and a perfect fifth multiplies it by
$$
\frac{3}{2}
$$
.
Humans, as well as some other animals, find symmetric patterns to be more beautiful.
|
https://en.wikipedia.org/wiki/Mathematics
|
passage: Rapidly changing requirements demanded shorter product life-cycles, and often clashed with traditional methods of software development.
The Chrysler Comprehensive Compensation System (C3) started in order to determine the best way to use object technologies, using the payroll systems at Chrysler as the object of research, with Smalltalk as the language and GemStone as the data access layer. Chrysler brought in Kent Beck, a prominent Smalltalk practitioner, to do performance tuning on the system, but his role expanded as he noted several problems with the development process. He took this opportunity to propose and implement some changes in development practices - based on his work with his frequent collaborator, Ward Cunningham. Beck describes the early conception of the methods:
Beck invited Ron Jeffries to the project to help develop and refine these methods. Jeffries thereafter acted as a coach to instill the practices as habits in the C3 team.
Information about the principles and practices behind XP disseminated to the wider world through discussions on the original wiki, Cunningham's WikiWikiWeb. Various contributors discussed and expanded upon the ideas, and some spin-off methodologies resulted (see agile software development). Also, XP concepts have been explained, for several years, using a hypertext system map on the XP website at http://www.extremeprogramming.org .
Beck edited a series of books on XP, beginning with his own Extreme Programming Explained (1999, ), spreading his ideas to a much larger audience. Authors in the series went through various aspects attending XP and its practices. The series included a book critical of the practices.
|
https://en.wikipedia.org/wiki/Extreme_programming
|
passage: It follows that the dimensionless constant, replacing the dimensions by the corresponding dimensioned variables, may be written:
$$
\pi = d^{-1}t^1v^1 = tv/d.
$$
Since the kernel is only defined to within a multiplicative constant, the above dimensionless constant raised to any arbitrary power yields another (equivalent) dimensionless constant.
Dimensional analysis has thus provided a general equation relating the three physical variables:
$$
F(\pi)=0,
$$
or, letting
$$
C
$$
denote a zero of function
$$
F,
$$
$$
\pi=C,
$$
which can be written in the desired form (which recall was
$$
t = \operatorname{Duration}(v, d)
$$
) as
$$
t = C\frac{d}{v}.
$$
The actual relationship between the three variables is simply
$$
d = vt.
$$
In other words, in this case
$$
F
$$
has one physically relevant root, and it is unity. The fact that only a single value of
$$
C
$$
will do and that it is equal to 1 is not revealed by the technique of dimensional analysis.
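The exponents of the dimensionless group can also be found mechanically as a null vector of the dimension matrix; the sketch below (assuming sympy, with illustrative variable names) does this for the variables t, v, d.
```python
# Sketch: recovering the dimensionless group from the dimension matrix of (t, v, d).
from sympy import Matrix

# Rows: dimensions (L, T); columns: variables (t, v, d).
#            t   v   d
M = Matrix([[0,  1,  1],    # powers of length L
            [1, -1,  0]])   # powers of time  T

null = M.nullspace()[0]     # exponents (a, b, c) making t^a v^b d^c dimensionless
print(null.T)               # Matrix([[-1, -1, 1]]), i.e. the group d/(t*v)
# Any nonzero power of this group is an equivalent dimensionless constant;
# negating the exponents gives pi = t*v/d as in the text.
print((-null).T)            # Matrix([[1, 1, -1]])
```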
### The simple pendulum
We wish to determine the period
$$
T
$$
of small oscillations in a simple pendulum. It will be assumed that it is a function of the length
$$
L,
$$
the mass
$$
M,
$$
and the acceleration due to gravity on the surface of the Earth
$$
g,
$$
which has dimensions of length divided by time squared.
|
https://en.wikipedia.org/wiki/Buckingham_%CF%80_theorem
|
passage: If you attempt to insert the element 45, then you get into a cycle, and fail. In the last row of the table we find the same initial situation as at the beginning again.
$$
h\left(45\right)=45\bmod 11=1
$$
$$
h'\left(45\right)=\left\lfloor\frac{45}{11}\right\rfloor\bmod 11=4
$$
| Table 1 | Table 2 |
|---|---|
| 45 replaces 67 in cell 1 | 67 replaces 75 in cell 6 |
| 75 replaces 53 in cell 9 | 53 replaces 50 in cell 4 |
| 50 replaces 105 in cell 6 | 105 replaces 100 in cell 9 |
| 100 replaces 45 in cell 1 | 45 replaces 53 in cell 4 |
| 53 replaces 75 in cell 9 | 75 replaces 67 in cell 6 |
| 67 replaces 100 in cell 1 | 100 replaces 105 in cell 9 |
| 105 replaces 50 in cell 6 | 50 replaces 45 in cell 4 |
| 45 replaces 67 in cell 1 | 67 replaces 75 in cell 6 |
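The displacement chain in the table can be replayed with a short simulation (a sketch: the hash functions and table size follow the example, while the code itself and the restriction to only the visited cells are illustrative assumptions):
```python
# Sketch: replaying the failed insertion of 45 with h(k) = k % 11 and h'(k) = (k // 11) % 11.
# Only the cells visited by the cycle are modelled; the rest of the tables is omitted.
def cuckoo_insert(t1, t2, key, max_kicks=16):
    h  = lambda k: k % 11            # hash for table 1
    hp = lambda k: (k // 11) % 11    # hash for table 2
    for kick in range(max_kicks):
        table, cell = (t1, h(key)) if kick % 2 == 0 else (t2, hp(key))
        if cell not in table:                    # empty cell: insertion succeeds
            table[cell] = key
            return True
        key, table[cell] = table[cell], key      # evict the current occupant
        print(f"{table[cell]} replaces {key} in cell {cell}")
    return False                                 # kick budget exhausted: rehash needed

t1 = {1: 67, 9: 53, 6: 105}   # initial occupants of the visited cells in table 1
t2 = {6: 75, 4: 50, 9: 100}   # initial occupants of the visited cells in table 2
print(cuckoo_insert(t1, t2, 45))   # prints the displacement chain above, then False
```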
## Variations
Several variations of cuckoo hashing have been studied, primarily with the aim of improving its space usage by increasing the load factor that it can tolerate to a number greater than the 50% threshold of the basic algorithm. Some of these methods can also be used to reduce the failure rate of cuckoo hashing, causing rebuilds of the data structure to be much less frequent.
Generalizations of cuckoo hashing that use more than two alternative hash functions can be expected to utilize a larger part of the capacity of the hash table efficiently while sacrificing some lookup and insertion speed. Using just three hash functions increases the load to 91%.
|
https://en.wikipedia.org/wiki/Cuckoo_hashing
|
passage: Cluster sampling is an approach to non-probability sampling; this is an approach in which each member of the population is assigned to a group (cluster), and then clusters are randomly selected, and all members of selected clusters are included in the sample. Often combined with stratification techniques (in which case it is called multistage sampling), cluster sampling is the approach most often used by epidemiologists. In areas of forced migration, there is more significant sampling error. Thus cluster sampling is not the ideal choice.
## Mortality statistics
Causes of death vary greatly between developed and less developed countries; see also list of causes of death by rate for worldwide statistics.
World historical and predicted crude death rates (1950–2050), UN medium variant, 2012 revision:

| Years | CDR | Years | CDR |
|---|---|---|---|
| 1950–1955 | 19.1 | 2000–2005 | 8.4 |
| 1955–1960 | 17.3 | 2005–2010 | 8.1 |
| 1960–1965 | 16.2 | 2010–2015 | 8.1 |
| 1965–1970 | 12.9 | 2015–2020 | 8.1 |
| 1970–1975 | 11.6 | 2020–2025 | 8.1 |
| 1975–1980 | 10.6 | 2025–2030 | 8.3 |
| 1980–1985 | 10.0 | 2030–2035 | 8.6 |
| 1985–1990 | 9.4 | 2035–2040 | 9.0 |
| 1990–1995 | 9.1 | 2040–2045 | 9.4 |
| 1995–2000 | 8.8 | 2045–2050 | 9.7 |
According to Jean Ziegler (the United Nations Special Rapporteur on the Right to Food for 2000 to March 2008), mortality due to malnutrition accounted for 58% of the total mortality in 2006: "In the world, approximately 62 million people, all causes of death combined, die each year. In 2006, more than 36 million died of hunger or diseases due to deficiencies in micronutrients".
|
https://en.wikipedia.org/wiki/Mortality_rate
|
passage: While the ability of nuclear metabolism to image disease processes from differences in metabolism is unsurpassed, it is not unique. Certain techniques such as fMRI image tissues (particularly cerebral tissues) by blood flow and thus show metabolism. Also, contrast-enhancement techniques in both CT and MRI show regions of tissue that are handling pharmaceuticals differently, due to an inflammatory process.
Diagnostic tests in nuclear medicine exploit the way that the body handles substances differently when there is disease or pathology present. The radionuclide introduced into the body is often chemically bound to a complex that acts characteristically within the body; this is commonly known as a tracer. In the presence of disease, a tracer will often be distributed around the body and/or processed differently. For example, the ligand methylene-diphosphonate (MDP) can be preferentially taken up by bone. By chemically attaching technetium-99m to MDP, radioactivity can be transported and attached to bone via the hydroxyapatite for imaging. Any increased physiological function, such as due to a fracture in the bone, will usually mean increased concentration of the tracer. This often results in the appearance of a "hot spot", which is a focal increase in radio accumulation or a general increase in radio accumulation throughout the physiological system. Some disease processes result in the exclusion of a tracer, resulting in the appearance of a "cold spot". Many tracer complexes have been developed to image or treat many different organs, glands, and physiological processes.
|
https://en.wikipedia.org/wiki/Nuclear_medicine
|
passage: A packrat parser
is a form of parser similar to a recursive descent parser in construction, except that during the parsing process it memoizes the intermediate results of all invocations of the mutually recursive parsing functions, ensuring that each parsing function is only invoked at most once at a given input position. Because of this memoization, a packrat parser has the ability to parse many context-free grammars and any parsing expression grammar (including some that do not represent context-free languages) in linear time. Examples of memoized recursive descent parsers are known from at least as early as 1993.
This analysis of the performance of a packrat parser assumes that enough memory is available to hold all of the memoized results; in practice, if there is not enough memory, some parsing functions might have to be invoked more than once at the same input position, and consequently the parser could take more than linear time.
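A minimal sketch of the memoization idea (illustrative code, not tied to any particular parser library) for the parenthesis grammar S ← '(' S ')' S | ε:
```python
# Sketch: a memoized ("packrat"-style) recursive descent parser for
#   S <- '(' S ')' S  |  empty
from functools import lru_cache

def parse(text):
    @lru_cache(maxsize=None)          # memoize the result of S at each input position
    def S(pos):
        # Alternative 1: '(' S ')' S
        if pos < len(text) and text[pos] == '(':
            mid = S(pos + 1)
            if mid < len(text) and text[mid] == ')':
                return S(mid + 1)
        # Alternative 2 (ordered choice): the empty string, consuming nothing
        return pos
    return S(0) == len(text)          # success iff the whole input is consumed

print(parse("(()())()"))   # True
print(parse("(()"))        # False
```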
It is also possible to build LL parsers and LR parsers from parsing expression grammars, with better worst-case performance than a recursive descent parser without memoization, but the unlimited lookahead capability of the grammar formalism is then lost. Therefore, not all languages that can be expressed using parsing expression grammars can be parsed by LL or LR parsers.
### Bottom-up PEG parsing
A pika parser uses dynamic programming to apply PEG rules bottom-up and right to left, which is the inverse of the normal recursive descent order of top-down, left to right.
|
https://en.wikipedia.org/wiki/Parsing_expression_grammar
|
passage: Thus
$$
p=\sqrt{a^2+b^2-2ab\cos{B}}=\sqrt{c^2+d^2-2cd\cos{D}}
$$
and
$$
q=\sqrt{a^2+d^2-2ad\cos{A}}=\sqrt{b^2+c^2-2bc\cos{C}}.
$$
Other, more symmetric formulas for the lengths of the diagonals, are
$$
p=\sqrt{\frac{(ac+bd)(ad+bc)-2abcd(\cos{B}+\cos{D})}{ab+cd}}
$$
and
$$
q=\sqrt{\frac{(ab+cd)(ac+bd)-2abcd(\cos{A}+\cos{C})}{ad+bc}}.
$$
### Generalizations of the parallelogram law and Ptolemy's theorem
In any convex quadrilateral ABCD, the sum of the squares of the four sides is equal to the sum of the squares of the two diagonals plus four times the square of the line segment connecting the midpoints of the diagonals. Thus
$$
a^2 + b^2 + c^2 + d^2 = p^2 + q^2 + 4x^2
$$
where x is the distance between the midpoints of the diagonals. This is sometimes known as Euler's quadrilateral theorem and is a generalization of the parallelogram law.
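A quick numerical check of Euler's quadrilateral theorem (a sketch; the vertex coordinates are made up for illustration):
```python
# Sketch: verifying a^2 + b^2 + c^2 + d^2 = p^2 + q^2 + 4x^2 for one convex quadrilateral ABCD.
from math import dist

A, B, C, D = (0, 0), (4, 0), (5, 3), (1, 4)       # illustrative vertices, taken in order
a, b, c, d = dist(A, B), dist(B, C), dist(C, D), dist(D, A)   # sides
p, q = dist(A, C), dist(B, D)                                  # diagonals
MAC = ((A[0] + C[0]) / 2, (A[1] + C[1]) / 2)       # midpoint of diagonal AC
MBD = ((B[0] + D[0]) / 2, (B[1] + D[1]) / 2)       # midpoint of diagonal BD
x = dist(MAC, MBD)

print(a*a + b*b + c*c + d*d)       # left-hand side: 60.0
print(p*p + q*q + 4*x*x)           # right-hand side: 60.0 (equal up to rounding)
```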
|
https://en.wikipedia.org/wiki/Quadrilateral
|
passage: Trackways have also confirmed parental behavior among ornithopods from the Isle of Skye in northwestern Scotland.
However, there is ample evidence of precociality or superprecociality among many dinosaur species, particularly theropods. For instance, non-ornithuromorph birds have been abundantly demonstrated to have had slow growth rates, megapode-like egg burying behavior and the ability to fly soon after birth. Both Tyrannosaurus and Troodon had juveniles with clear superprecociality and likely occupying different ecological niches than the adults. Superprecociality has been inferred for sauropods.
Genital structures are unlikely to fossilize as they lack scales that may allow preservation via pigmentation or residual calcium phosphate salts. In 2021, the best preserved specimen of a dinosaur's cloacal vent exterior was described for Psittacosaurus, demonstrating lateral swellings similar to crocodylian musk glands used in social displays by both sexes and pigmented regions which could also reflect a signalling function. However, this specimen on its own does not offer enough information to determine whether this dinosaur had sexual signalling functions; it only supports the possibility. Cloacal visual signalling can occur in either males or females in living birds, making it unlikely to be useful to determine sex for extinct dinosaurs.
### Physiology
Because both modern crocodilians and birds have four-chambered hearts (albeit modified in crocodilians), it is likely that this is a trait shared by all archosaurs, including all dinosaurs.
|
https://en.wikipedia.org/wiki/Dinosaur
|
passage: The search for ever larger primes has generated interest outside mathematical circles, through the Great Internet Mersenne Prime Search and other distributed computing projects., p. 245. The idea that prime numbers had few applications outside of pure mathematics was shattered in the 1970s when public-key cryptography and the RSA cryptosystem were invented, using prime numbers as their basis.
The increased practical importance of computerized primality testing and factorization led to the development of improved methods capable of handling large numbers of unrestricted form. The mathematical theory of prime numbers also moved forward with the Green–Tao theorem (2004) that there are arbitrarily long arithmetic progressions of prime numbers, and Yitang Zhang's 2013 proof that there exist infinitely many prime gaps of bounded size.
### Primality of one
Most early Greeks did not even consider 1 to be a number, so they could not consider its primality. A few scholars in the Greek and later Roman tradition, including Nicomachus, Iamblichus, Boethius, and Cassiodorus, also considered the prime numbers to be a subdivision of the odd numbers, so they did not consider to be prime either. However, Euclid and a majority of the other Greek mathematicians considered as prime. The medieval Islamic mathematicians largely followed the Greeks in viewing 1 as not being a number. By the Middle Ages and Renaissance, mathematicians began treating 1 as a number, and by the 17th century some of them included it as the first prime number.
|
https://en.wikipedia.org/wiki/Prime_number
|
passage: A simple infinite series for is the Gregory–Leibniz series:
$$
\pi = \frac{4}{1} - \frac{4}{3} + \frac{4}{5} - \frac{4}{7} + \frac{4}{9} - \frac{4}{11} + \frac{4}{13} - \cdots
$$
As individual terms of this infinite series are added to the sum, the total gradually gets closer to , and – with a sufficient number of terms – can get as close to as desired. It converges quite slowly, though – after 500,000 terms, it produces only five correct decimal digits of .
An infinite series for (published by Nilakantha in the 15th century) that converges more rapidly than the Gregory–Leibniz series is:
$$
\pi = 3 + \frac{4}{2\times3\times4} - \frac{4}{4\times5\times6} + \frac{4}{6\times7\times8} - \frac{4}{8\times9\times10} + \cdots
$$
The following table compares the convergence rates of these two series:
| Infinite series for π | After 1st term | After 2nd term | After 3rd term | After 4th term | After 5th term | Converges to |
|---|---|---|---|---|---|---|
| Gregory–Leibniz series | 4.0000 | 2.6666 ... | 3.4666 ... | 2.8952 ... | 3.3396 ... | π = 3.1415 ... |
| Nilakantha series | 3.0000 | 3.1666 ... | 3.1333 ... | 3.1452 ... | 3.1396 ... | π = 3.1415 ... |
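The partial sums in the table can be reproduced with a few lines of code (a sketch, not from the original text):
```python
# Sketch: partial sums of the Gregory–Leibniz and Nilakantha series for pi.
def gregory_leibniz(n_terms):
    return sum((-1) ** k * 4 / (2 * k + 1) for k in range(n_terms))

def nilakantha(n_terms):
    total = 3.0
    for k in range(1, n_terms):                    # terms after the leading 3
        sign = (-1) ** (k + 1)
        total += sign * 4 / ((2 * k) * (2 * k + 1) * (2 * k + 2))
    return total

for n in range(1, 6):
    print(n, round(gregory_leibniz(n), 4), round(nilakantha(n), 4))
```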
|
https://en.wikipedia.org/wiki/Pi
|
passage: After Google's acquisition of YouTube, the CEO role was retained. Salar Kamangar took over Hurley's position and kept the job until 2014. He was replaced by Susan Wojcicki, who later resigned in 2023. The current CEO is Neal Mohan, who was appointed on February 16, 2023.
## Features
YouTube offers different features based on user verification, such as standard or basic features like uploading videos, creating playlists, and using YouTube Music, with limits based on daily activity (verification via phone number or channel history increases feature availability and daily usage limits); intermediate or additional features like longer videos (over 15 minutes), live streaming, custom thumbnails, and creating podcasts; advanced features like content ID appeals, embedding live streams, applying for monetization, clickable links, adding chapters, and pinning comments on videos or posts.
## Videos
In January 2012, it was estimated that visitors to YouTube spent an average of 15 minutes a day on the site, in contrast to the four or five hours a day spent by a typical US citizen watching television. In 2017, viewers on average watched YouTube on mobile devices for more than an hour every day.
In December 2012, two billion views were removed from the view counts of Universal and Sony music videos on YouTube, prompting a claim by The Daily Dot that the views had been deleted due to a violation of the site's terms of service, which ban the use of automated processes to inflate view counts. That was disputed by Billboard, which said that the two billion views had been moved to Vevo, since the videos were no longer active on YouTube.
|
https://en.wikipedia.org/wiki/YouTube
|
passage: And thus, it provides evidence about the possibility for high-risk of ASD infants to learn and respond to action-based treatment interventions. Another study investigates how teaching methods can benefit from embodiment and proposes that a professor's movements and gestures contribute to learning by growing students' embodied experiences in the classroom, leading to an increased capacity to recall.
The action-based language theory (ABL) states that aspects of embodiment are also relevant for language learning and acquisition. ABL proposes that the brain exploits the same mechanisms used in motor control for language learning. When adults, for example, call attention to an object and an infant follows the lead and attends to said object, canonical neurons are activated and affordances of an object become available to the infant. Simultaneously, hearing the articulation of the object's name leads to the activation of speech mirror mechanisms in infants. This chain of events allows for Hebbian learning of the meaning of verbal labels by linking the speech and action controllers, which get activated in this scenario.
The role of gestures in learning is another example of the importance of embodiment for cognition. Gestures can aid, facilitate and enhance learning performance, or compromise it when the gestures are restricted or meaningless to the content that is being transmitted. In a study using the Tower of Hanoi (TOH) puzzle, participants were divided into two groups. In the first part of the experiment, the smallest disks used in TOH were the lightest and could be moved using just one hand.
|
https://en.wikipedia.org/wiki/Embodied_cognition
|
passage: The human ovum measures approximately in diameter.
In humans, recombination rates differ between maternal and paternal DNA:
- Maternal DNA: Recombines approximately 42 times on average.
- Paternal DNA: Recombines approximately 27 times on average.
### Ooplasm
Ooplasm is like the yolk of the ovum, a cell substance at its center, which contains its nucleus, named the germinal vesicle, and the nucleolus, called the germinal disc.
The ooplasm consists of the cytoplasm of the ordinary animal cell with its spongioplasm and hyaloplasm, often called the formative yolk; and the nutritive yolk or deutoplasm, made of rounded granules of fatty and albuminoid substances imbedded in the cytoplasm.
Mammalian ova contain only a tiny amount of the nutritive yolk, for nourishing the embryo in the early stages of its development only. In contrast, bird eggs contain enough to supply the chick with nutriment throughout the whole period of incubation.
### Ova development in oviparous animals
In the oviparous animals (all birds, most fish, amphibians and reptiles), the ova develop protective layers and pass through the oviduct to the outside of the body. They are fertilized by male sperm either inside the female body (as in birds), or outside (as in many fish). After fertilization, an embryo develops, nourished by nutrients contained in the egg. It then hatches from the egg, outside the mother's body. See egg for a discussion of eggs of oviparous animals.
|
https://en.wikipedia.org/wiki/Egg_cell
|
passage: Email, calendar entries, contacts, tasks, and memos kept on the company's server are automatically synchronized with the BlackBerry.
## Operating systems of PDAs
The most common operating systems pre-installed on PDAs are:
- Palm OS
- Microsoft Windows Mobile (Pocket PC) with a Windows CE kernel
Other, rarely used operating systems:
- EPOC, then Symbian OS (in mobile phone + PDA combinations)
- Linux (e.g. VR3, iPAQ, Sharp Zaurus PDA, Opie, GPE, Familiar Linux etc.)
- Newton
- QNX (also on iPAQ)
## Automobile navigation
Some PDAs include Global Positioning System (GPS) receivers. Other PDAs are compatible with external GPS-receiver add-ons that use the PDA's processor and screen to display location information. PDAs with GPS functionality can be used for automotive navigation. Integrated PDAs were fitted as standard on new cars throughout the 2000s. PDA-based GPS can also display traffic conditions, perform dynamic routing, and show known locations of roadside mobile radar guns. TomTom, Garmin, and iGO offered GPS navigation software for PDAs.
## Ruggedized
Some businesses and government organizations rely upon rugged PDAs, sometimes known as enterprise digital assistants (EDAs) or mobile computers, for mobile data applications. These PDAs have features that make them more robust and able to handle inclement weather, jolts, and moisture. EDAs often have extra features for data capture, such as barcode readers, radio-frequency identification (RFID) readers, magnetic stripe card readers, or smart card readers.
|
https://en.wikipedia.org/wiki/Personal_digital_assistant
|
passage: The remainder should equal zero if there are no detectable errors.
```
11010011101100 100 <--- input with check value
1011 <--- divisor
01100011101100 100 <--- result
1011 <--- divisor ...
00111011101100 100
......
00000000001110 100
1011
00000000000101 100
101 1
00000000000000 000 <--- remainder
```
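The same check can be carried out programmatically; the sketch below performs the bitwise (XOR) long division shown above and reports the remainder.
```python
# Sketch: CRC check by polynomial (XOR) long division, as in the worked example above.
def crc_remainder(bits: str, divisor: str) -> str:
    bits = list(bits)
    n = len(divisor)
    for i in range(len(bits) - n + 1):
        if bits[i] == '1':                       # only divide where the leading bit is 1
            for j in range(n):
                bits[i + j] = str(int(bits[i + j]) ^ int(divisor[j]))
    return ''.join(bits[-(n - 1):])              # the last n-1 bits are the remainder

print(crc_remainder("11010011101100100", "1011"))   # '000' -> no detectable error
```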
|
https://en.wikipedia.org/wiki/Cyclic_redundancy_check
|
passage: The meaning of the formula above is that the derivative with respect to the appropriate component of and gives the matrix element of . This is exactly analogous to the bosonic path integration formula for a Gaussian integral of a complex bosonic field:
$$
\int e^{\phi^* M \phi + h^* \phi + \phi^* h } \,D\phi^*\, D\phi = \frac{e^{h^* M^{-1} h} }{ \mathrm{Det}(M)}
$$
$$
\left\langle\phi^* \phi\right\rangle = \frac{1}{Z} \frac{\partial}{\partial h} \frac{\partial}{\partial h^*}Z |_{h=h^*=0} = M^{-1} \,.
$$
So that the propagator is the inverse of the matrix in the quadratic part of the action in both the Bose and Fermi case.
For real Grassmann fields, for Majorana fermions, the path integral is a Pfaffian times a source quadratic form, and the formulas give the square root of the determinant, just as they do for real Bosonic fields. The propagator is still the inverse of the quadratic part.
|
https://en.wikipedia.org/wiki/Feynman_diagram
|
passage: For lines with slope greater than 1, we reverse the role of x and y i.e. we sample at dy=1 and calculate consecutive x values as
$$
x_{k+1} = x_k + \frac{1}{m}
$$
$$
y_{k+1} = y_k + 1
$$
Similar calculations are carried out to determine pixel positions along a line with negative slope. Thus, if the absolute value of the slope is less than 1, we set dx=1 if
$$
x_{\rm start}<x_{\rm end}
$$
i.e. the starting extreme point is at the left.
## Program
DDA algorithm program in C++:
```cpp
#include <graphics.h>
#include <iostream.h>
#include <math.h>
#include <dos.h>
#include <conio.h>
void main()
{
float x, y, x1, y1, x2, y2, dx, dy, step;
int i, gd = DETECT, gm;
initgraph(&gd, &gm, "C:\\TURBOC3\\BGI");
cout << "Enter the value of x1 and y1: ";
cin >> x1 >> y1;
cout << "Enter the value of x2 and y2: ";
cin >> x2 >> y2;
dx = (x2 - x1);
dy = (y2 - y1);
if (abs(dx) >= abs(dy))
step = abs(dx);
else
step = abs(dy);
dx = dx / step;
dy = dy / step;
x = x1;
y = y1;
i = 0;
while (i <= step) {
putpixel(round(x), round(y), 5);
x = x + dx;
y = y + dy;
i = i + 1;
delay(100);
}
getch();
closegraph();
}
```
|
https://en.wikipedia.org/wiki/Digital_differential_analyzer_%28graphics_algorithm%29
|
passage: the converse, the proof of the triangle inequality from the reverse triangle inequality works in two cases:
If
$$
\|u +v\| - \|u\| \geq 0,
$$
then by the reverse triangle inequality,
$$
\|u +v\| - \|u\| = {\big|}\|u + v\|-\|u\|{\big|} \leq \|(u + v) - u\| = \|v\| \Rightarrow \|u + v\| \leq \|u\| + \|v\|
$$
,
and if
$$
\|u +v\| - \|u\| < 0,
$$
then trivially
$$
\|u\| +\|v\| \geq \|u\| > \|u + v\|
$$
by the nonnegativity of the norm.
Thus, in both cases, we find that
$$
\|u\| + \|v\| \geq \|u + v\|
$$
.
|
https://en.wikipedia.org/wiki/Triangle_inequality
|
passage: $$
- Nonsummable diminishing step lengths, i.e.
$$
\alpha_k = \gamma_k/\lVert g^{(k)} \rVert_2,
$$
where
$$
\gamma_k \geq 0,\qquad \lim_{k\to\infty} \gamma_k = 0,\qquad \sum_{k=1}^\infty \gamma_k = \infty.
$$
For all five rules, the step-sizes are determined "off-line", before the method is iterated; the step-sizes do not depend on preceding iterations.
|
https://en.wikipedia.org/wiki/Subgradient_method
|
passage: Both tetrahedral and octahedral molecules are often shown with their atoms inscribed in the apices or faces of cubes and might be considered as a single "cubic" system. Every point group in this system contains the simple tetrahedral rotational group as a subgroup. Methane (CH4) is often used as an example and, although often described as a tetrahedral molecule because of the very visible rotational symmetry, it really belongs to the octahedral symmetry class. Considering methane first as a tetrahedral molecule, the 12 operations of group T are {E, 3 x c, 4 x b, 4 x b3}, where c is a 180 degree rotation about the x, y and z axes and b is a 120 degree rotation about the apices of a cube. Character tables under these four headings exhibit the corresponding four irreps A, E+1, E-1 and T, and it is not difficult to convert the transformations of atoms during the symmetry operations to reducible matrices and thence to molecular irreps, but this is not necessary. Methane has two sets of equivalent atoms that are transformed into each other during operations: a single carbon atom and 4 hydrogen atoms. A single atom can only ever be transformed into itself and therefore always contributes the most symmetrical irrep to the end total irrep count. Additionally, there is a rule of group theory that the most symmetrical irrep must occur once and only once in the irreps of any equivalent atom set, so the five dimensions of irreps being sought contain 2A and three others.
|
https://en.wikipedia.org/wiki/Molecular_symmetry
|
passage: Both expressions for are proportional to , reflecting that the derivation is independent of the type of forces considered. Similarly, one can derive an equivalent formula for identical charged particles of charge in a uniform electric field of magnitude , where is replaced with the electrostatic force . Equating these two expressions yields the Einstein relation for the diffusivity, independent of or or other such forces:
$$
\frac{\mathbb{E}{\left[x^2\right]}}{2t} = D = \mu k_\text{B} T
|
https://en.wikipedia.org/wiki/Brownian_motion
|
passage: That added detail is used because the function may then be defined by an inverse Mellin transform.
Formally, we may define Π_0(x) by
$$
\Pi_0(x) = \frac{1}{2} \left( \sum_{p^n < x} \frac{1}{n} + \sum_{p^n \le x} \frac{1}{n} \right)\
$$
where the variable p in each sum ranges over all primes within the specified limits.
We may also write
$$
\ \Pi_0(x) = \sum_{n=2}^x \frac{\Lambda(n)}{\log n} - \frac{\Lambda(x)}{2\log x} = \sum_{n=1}^\infty \frac 1 n \pi_0\left(x^{1/n}\right)
$$
where Λ(n) is the von Mangoldt function and
$$
\pi_0(x) = \lim_{\varepsilon \to 0} \frac{\pi(x-\varepsilon) + \pi(x+\varepsilon)}{2}.
$$
The Möbius inversion formula then gives
$$
\pi_0(x) = \sum_{n=1}^\infty \frac{\mu(n)}{n}\ \Pi_0\left(x^{1/n}\right),
$$
where μ(n) is the Möbius function.
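As a numeric sanity check of this inversion (a sketch assuming sympy; x = 100 is chosen so that none of the roots x^(1/n) is a prime, making the half-term corrections in π_0 and Π_0 irrelevant):
```python
# Sketch: checking pi(x) = sum_n mu(n)/n * Pi(x^(1/n)) at x = 100.
from fractions import Fraction
from sympy import primepi, factorint

def mobius(n):
    f = factorint(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

def Pi(x):                         # Riemann's Pi(x) = sum_{n>=1} pi(x^(1/n)) / n
    total, n = Fraction(0), 1
    while x ** (1.0 / n) >= 2:
        total += Fraction(int(primepi(int(x ** (1.0 / n)))), n)
        n += 1
    return total

x = 100
recovered = sum(Fraction(mobius(n), n) * Pi(x ** (1.0 / n))
                for n in range(1, 8) if x ** (1.0 / n) >= 2)
print(float(recovered), primepi(x))   # 25.0 and 25
```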
|
https://en.wikipedia.org/wiki/Prime-counting_function
|
passage: In computational complexity and optimization the no free lunch theorem is a result that states that for certain types of mathematical problems, the computational cost of finding a solution, averaged over all problems in the class, is the same for any solution method. The name alludes to the saying "no such thing as a free lunch", that is, no method offers a "short cut". This is under the assumption that the search space is a probability density function. It does not apply to the case where the search space has underlying structure (e.g., is a differentiable function) that can be exploited more efficiently (e.g., Newton's method in optimization) than random search or even has closed-form solutions (e.g., the extrema of a quadratic polynomial) that can be determined without search at all. For such probabilistic assumptions, the outputs of all procedures solving a particular type of problem are statistically identical. A colourful way of describing such a circumstance, introduced by David Wolpert and William G. Macready in connection with the problems of search and optimization,
is to say that there is no free lunch. Wolpert had previously derived no free lunch theorems for machine learning (statistical inference).
Before Wolpert's article was published, Cullen Schaffer independently proved a restricted version of one of Wolpert's theorems and used it to critique the current state of machine learning research on the problem of induction.
|
https://en.wikipedia.org/wiki/No_free_lunch_in_search_and_optimization
|
passage: Other instruments using flow cytometry include cell sorters which physically separate and thereby purify cells of interest based on their optical properties.
## History
The first impedance-based flow cytometry device, using the Coulter principle, was disclosed in U.S. Patent 2,656,508, issued in 1953, to Wallace H. Coulter. Mack Fulwyler was the inventor of the forerunner to today's flow cytometers – particularly the cell sorter. Fulwyler developed this in 1965 with his publication in Science. The first fluorescence-based flow cytometry device (ICP 11) was developed in 1968 by Wolfgang Göhde from the University of Münster, filed for patent on 18 December 1968 and first commercialized in 1968/69 by German developer and manufacturer Partec through Phywe AG in Göttingen. At that time, absorption methods were still widely favored by other scientists over fluorescence methods. Soon after, flow cytometry instruments were developed, including the Cytofluorograph (1971) from Bio/Physics Systems Inc. (later: Ortho Diagnostics), the PAS 8000 (1973) from Partec, the first FACS (fluorescence-activated cell sorting) instrument from Becton Dickinson (1974), the ICP 22 (1975) from Partec/Phywe and the Epics from Coulter (1977/78). The first label-free high-frequency impedance flow cytometer based on a patented microfluidic "lab-on-chip", Ampha Z30, was introduced by Amphasys (2012).
|
https://en.wikipedia.org/wiki/Flow_cytometry
|
passage: In July, Allergan and Editas Medicine announced phase I/II clinical trial of AGN-151587 for the treatment of Leber congenital amaurosis 10. This is one of the first studies of a CRISPR-based in vivo human gene editing therapy, where the editing takes place inside the human body. The first injection of the CRISPR-Cas System was confirmed in March 2020.
Exagamglogene autotemcel, a CRISPR-based human gene editing therapy, was used for sickle cell and thalassemia in clinical trials.
### 2020s
2020
In May, onasemnogene abeparvovec (Zolgensma) was approved by the European Union for the treatment of spinal muscular atrophy in people who either have clinical symptoms of SMA type 1 or who have no more than three copies of the SMN2 gene, irrespective of body weight or age.
In August, Audentes Therapeutics reported that three out of 17 children with X-linked myotubular myopathy participating in the clinical trial of an AAV8-based gene therapy treatment, AT132, had died. It was suggested that the treatment, whose dosage is based on body weight, exerts a disproportionately toxic effect on heavier patients, since the three patients who died were heavier than the others. The trial has been put on clinical hold.
|
https://en.wikipedia.org/wiki/Gene_therapy
|
passage: Without loss of generality, let's suppose we may order the
$$
k_i
$$
such that:
$$
k_1 \leq k_2 \leq ... \leq k_n
$$
Now, there exists a prefix code if and only if at each step
$$
j
$$
there is at least one codeword to choose that does not contain any of the previous
$$
j-1
$$
codewords as a prefix. For each codeword already chosen at a previous step
$$
i<j
$$
, exactly
$$
s^{k_j-k_i}
$$
of the candidate words of length
$$
k_j
$$
are forbidden, as they contain
$$
\sigma_i
$$
as a prefix. It follows that in general a prefix code exists if and only if:
$$
\forall j \geq 2, s^{k_j} > \sum_{i=1}^{j-1} s^{k_j - k_i}
$$
Since both sides are integers, this strict inequality is equivalent to
$$
s^{k_j} \geq 1 + \sum_{i=1}^{j-1} s^{k_j - k_i} .
$$
Dividing both sides by
$$
s^{k_j}
$$
and taking
$$
j = n
$$
, we find:
$$
\sum_{i=1}^n s^{-k_i} \leq 1
$$
QED.
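As a concrete illustration of the argument (my own sketch with hypothetical helper names, not part of the source), the functions below evaluate the Kraft sum for a list of codeword lengths and build a prefix code over an s-symbol alphabet by exactly the greedy step-by-step choice used in the proof.
```python
from itertools import product

def kraft_sum(lengths, s=2):
    """Left-hand side of the inequality: sum_i s**(-k_i)."""
    return sum(s ** (-k) for k in lengths)

def build_prefix_code(lengths, s=2):
    """For each k_j in nondecreasing order, pick a word of length k_j that has
    no previously chosen word as a prefix; mirrors the proof step by step."""
    alphabet = "0123456789"[:s]
    code = []
    for k in sorted(lengths):
        for candidate in ("".join(w) for w in product(alphabet, repeat=k)):
            if not any(candidate.startswith(c) for c in code):
                code.append(candidate)
                break
        else:
            return None  # no admissible word remains, so the Kraft sum exceeds 1
    return code

lengths = [1, 2, 3, 3]
print(kraft_sum(lengths))            # 1.0, so a prefix code exists
print(build_prefix_code(lengths))    # ['0', '10', '110', '111']
print(build_prefix_code([1, 1, 2]))  # None: 1/2 + 1/2 + 1/4 > 1
```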
## History
Solomonoff invented the concept of algorithmic probability with its associated invariance theorem around 1960, publishing a report on it: "A Preliminary Report on a General Theory of Inductive Inference." He clarified these ideas more fully in 1964 with "A Formal Theory of Inductive Inference," Part I and Part II.
In terms of practical implications and applications, the study of bias in empirical data related to Algorithmic Probability emerged in the early 2010s.
|
https://en.wikipedia.org/wiki/Algorithmic_probability
|
passage: ### Direct sum of modules
The direct sum of modules is a construction that combines several modules into a new module.
The most familiar examples of that construction occur in considering vector spaces, which are modules over a field. The construction may also be extended to Banach spaces and Hilbert spaces.
### Direct sum in categories
An additive category is an abstraction of the properties of the category of modules. In such a category, finite products and coproducts agree, and the direct sum is either of them: cf. biproduct.
General case:
In category theory the direct sum is often but not always the coproduct in the category of the mathematical objects in question. For example, in the category of abelian groups, the direct sum is a coproduct. That is also true in the category of modules.
#### Direct sums versus coproducts in category of groups
However, the direct sum
$$
S_3 \oplus \Z_2
$$
(defined identically to the direct sum of abelian groups) is not a coproduct of the groups
$$
S_3
$$
and
$$
\Z_2
$$
in the category of groups. Therefore, for that category, a categorical direct sum is often called simply a coproduct to avoid any possible confusion.
### Direct sum of group representations
The direct sum of group representations generalizes the direct sum of the underlying modules by adding a group action.
|
https://en.wikipedia.org/wiki/Direct_sum
|
passage: The translate of
$$
A
$$
by
$$
T_{\mathbf{v}}
$$
is often written as
$$
A+\mathbf{v}
$$
.
### Application in classical physics
In classical physics, translational motion is movement that changes the position of an object, as opposed to rotation. For example, according to Whittaker:
A translation is the operation changing the positions of all points
$$
(x, y, z)
$$
of an object according to the formula
$$
(x,y,z) \to (x+\Delta x,y+\Delta y, z+\Delta z)
$$
where
$$
(\Delta x,\ \Delta y,\ \Delta z)
$$
is the same vector for each point of the object. The translation vector
$$
(\Delta x,\ \Delta y,\ \Delta z)
$$
common to all points of the object describes a particular type of displacement of the object, usually called a linear displacement to distinguish it from displacements involving rotation, called angular displacements.
When considering spacetime, a change of time coordinate is considered to be a translation.
## As an operator
The translation operator turns a function of the original position,
$$
f(\mathbf{v})
$$
, into a function of the final position,
$$
f(\mathbf{v}+\mathbf{\delta})
$$
.
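As a minimal sketch (function names are illustrative, assuming NumPy), the same formula can be applied to an array of points, and the operator view can be written as a higher-order function that sends f(v) to f(v + δ):
```python
import numpy as np

def translate(points, v):
    """Apply (x, y, z) -> (x + dx, y + dy, z + dz) to every row of `points`."""
    return np.asarray(points, dtype=float) + np.asarray(v, dtype=float)

def translation_operator(delta):
    """Return T_delta, mapping a function f to the function v -> f(v + delta)."""
    delta = np.asarray(delta, dtype=float)
    return lambda f: (lambda v: f(np.asarray(v, dtype=float) + delta))

points = [[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]]
print(translate(points, [1.0, -1.0, 0.5]))
# [[ 1.  -1.   0.5]
#  [ 2.   1.   3.5]]

f = lambda v: float(np.dot(v, v))          # f(v) = |v|^2
T = translation_operator([1.0, 0.0, 0.0])
print(T(f)([0.0, 0.0, 0.0]))               # f(1, 0, 0) = 1.0
```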
|
https://en.wikipedia.org/wiki/Translation_%28geometry%29
|
passage: A tessellation or tiling is the covering of a surface, often a plane, using one or more geometric shapes, called tiles, with no overlaps and no gaps.
## In mathematics
Tessellation can be generalized to higher dimensions and a variety of geometries.
A periodic tiling has a repeating pattern. Some special kinds include regular tilings with regular polygonal tiles all of the same shape, and semiregular tilings with regular tiles of more than one shape and with every corner identically arranged. The patterns formed by periodic tilings can be categorized into 17 wallpaper groups. A tiling that lacks a repeating pattern is called "non-periodic". An aperiodic tiling uses a small set of tile shapes that cannot form a repeating pattern (an aperiodic set of prototiles). A tessellation of space, also known as a space filling or honeycomb, can be defined in the geometry of higher dimensions.
A real physical tessellation is a tiling made of materials such as cemented ceramic squares or hexagons. Such tilings may be decorative patterns, or may have functions such as providing durable and water-resistant pavement, floor, or wall coverings. Historically, tessellations were used in Ancient Rome and in Islamic art such as in the Moroccan architecture and decorative geometric tiling of the Alhambra palace. In the twentieth century, the work of M. C. Escher often made use of tessellations, both in ordinary Euclidean geometry and in hyperbolic geometry, for artistic effect.
|
https://en.wikipedia.org/wiki/Tessellation
|
passage: Three important types are pollination, cleaning symbiosis, and zoochory.
In pollination, a plant trades food resources in the form of nectar or pollen for the service of pollen dispersal. However, daciniphilous Bulbophyllum orchid species trade sex pheromone precursor or booster components via floral synomones/attractants in a true mutualistic interaction with males of Dacini fruit flies (Diptera: Tephritidae: Dacinae).
Phagophiles feed (resource) on ectoparasites, thereby providing anti-pest service, as in cleaning symbiosis.
Elacatinus and Gobiosoma, genera of gobies, feed on ectoparasites of their clients while cleaning them.
Zoochory is the dispersal of the seeds of plants by animals. This is similar to pollination in that the plant produces food resources (for example, fleshy fruit, overabundance of seeds) for animals that disperse the seeds (service). Plants may advertise these resources using colour and a variety of other fruit characteristics, e.g., scent. Fruit of the aardvark cucumber (Cucumis humifructus) is buried so deeply that the plant is solely reliant upon the aardvark's keen sense of smell to detect its ripened fruit, extract, consume and then scatter its seeds; C. humifructus's geographical range is thus restricted to that of the aardvark.
|
https://en.wikipedia.org/wiki/Mutualism_%28biology%29
|
passage: He wrote on the link between continued fractions and Pell's equation.
- First steps towards analytic number theory. In his work of sums of four squares, partitions, pentagonal numbers, and the distribution of prime numbers, Euler pioneered the use of what can be seen as analysis (in particular, infinite series) in number theory. Since he lived before the development of complex analysis, most of his work is restricted to the formal manipulation of power series. He did, however, do some very notable (though not fully rigorous) early work on what would later be called the Riemann zeta function.
- Quadratic forms. Following Fermat's lead, Euler did further research on the question of which primes can be expressed in the form
$$
x^2 + N y^2
$$
, some of it prefiguring quadratic reciprocity.
- Diophantine equations. Euler worked on some Diophantine equations of genus 0 and 1. In particular, he studied Diophantus's work; he tried to systematise it, but the time was not yet ripe for such an endeavour—algebraic geometry was still in its infancy. He did notice there was a connection between Diophantine problems and elliptic integrals, whose study he had himself initiated.
|
https://en.wikipedia.org/wiki/Number_theory
|
passage: There is a natural linear map from
$$
\mathfrak{g}
$$
into
$$
U(\mathfrak{g})
$$
obtained by restricting the quotient map of
$$
T \to U(\mathfrak{g})
$$
to degree one piece. The PBW theorem implies that the canonical map is actually injective. Thus, every Lie algebra
$$
\mathfrak{g}
$$
can be embedded into an associative algebra
$$
A=U(\mathfrak{g})
$$
in such a way that the bracket on
$$
\mathfrak{g}
$$
is given by
$$
[X,Y]=XY-YX
$$
in
$$
A
$$
.
If
$$
\mathfrak{g}
$$
is abelian, then
$$
U(\mathfrak{g})
$$
is the symmetric algebra of the vector space
$$
\mathfrak{g}
$$
.
Since
$$
\mathfrak{g}
$$
is a module over itself via adjoint representation, the enveloping algebra
$$
U(\mathfrak{g})
$$
becomes a
$$
\mathfrak{g}
$$
-module by extending the adjoint representation.
|
https://en.wikipedia.org/wiki/Lie_algebra_representation
|
passage: Structure theorem
The structure theorem is of central importance to TDA; as commented by G. Carlsson, "what makes homology useful as a discriminator between topological spaces is the fact that there is a classification theorem for finitely generated abelian groups". (see the fundamental theorem of finitely generated abelian groups).
The main argument used in the proof of the original structure theorem is the standard structure theorem for finitely generated modules over a principal ideal domain. However, this argument fails if the indexing set is
$$
(\mathbb{R},\leq)
$$
.
In general, not every persistence module can be decomposed into intervals. Many attempts have been made at relaxing the restrictions of the original structure theorem. The case for pointwise finite-dimensional persistence modules indexed by a locally finite subset of
$$
\mathbb{R}
$$
is solved based on the work of Webb. The most notable result is done by Crawley-Boevey, which solved the case of
$$
\mathbb{R}
$$
. Crawley-Boevey's theorem states that any pointwise finite-dimensional persistence module is a direct sum of interval modules.
To understand the definition of his theorem, some concepts need introducing.
|
https://en.wikipedia.org/wiki/Topological_data_analysis
|
passage: ### Forked-line method
The forked-line method (also known as the tree method and the branching system) can also solve dihybrid and multi-hybrid crosses. A problem is converted to a series of monohybrid crosses, and the results are combined in a tree. However, a tree produces the same result as a Punnett square in less time and with more clarity. The example below assesses another double-heterozygote cross using RrYy x RrYy. As stated above, the phenotypic ratio is expected to be 9:3:3:1 if crossing unlinked genes from two double-heterozygotes. The genotypic ratio was obtained in the diagram below; this diagram will have more branches than one that only analyzes the phenotypic ratio.
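For illustration (a sketch with hypothetical helper names, not taken from the article), the forked-line idea can be reproduced directly: treat each gene as an independent monohybrid cross and multiply the per-gene counts.
```python
from collections import Counter
from itertools import product

def monohybrid(parent1, parent2):
    """Genotype counts for one gene, e.g. 'Rr' x 'Rr' -> {'RR': 1, 'Rr': 2, 'rr': 1}."""
    return Counter("".join(sorted(a + b)) for a, b in product(parent1, parent2))

def dihybrid(p1_gene1, p2_gene1, p1_gene2, p2_gene2):
    """Combine two monohybrid results by multiplying counts (the 'forked line')."""
    g1 = monohybrid(p1_gene1, p2_gene1)
    g2 = monohybrid(p1_gene2, p2_gene2)
    return Counter({a + b: m * n for a, m in g1.items() for b, n in g2.items()})

genotypes = dihybrid("Rr", "Rr", "Yy", "Yy")
print(genotypes)          # 9 genotype classes, counts summing to 16

# A genotype shows the dominant trait when it carries at least one uppercase allele.
phenotypes = Counter()
for g, n in genotypes.items():
    r = "R_" if "R" in g[:2] else "rr"
    y = "Y_" if "Y" in g[2:] else "yy"
    phenotypes[r + y] += n
print(phenotypes)         # 9:3:3:1 -> R_Y_ 9, R_yy 3, rrY_ 3, rryy 1
```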
|
https://en.wikipedia.org/wiki/Punnett_square
|
passage: As an example, moving target indication can interact with Doppler to produce signal cancellation at certain radial velocities, which degrades performance.
Sea-based radar systems, semi-active radar homing, active radar homing, weather radar, military aircraft, and radar astronomy rely on the Doppler effect to enhance performance. This produces information about target velocity during the detection process. This also allows small objects to be detected in an environment containing much larger nearby slow moving objects.
Doppler shift depends upon whether the radar configuration is active or passive. Active radar transmits a signal that is reflected back to the receiver. Passive radar depends upon the object sending a signal to the receiver.
The Doppler frequency shift for active radar is as follows, where
$$
F_D
$$
is Doppler frequency,
$$
F_T
$$
is transmit frequency,
$$
V_R
$$
is radial velocity, and
$$
C
$$
is the speed of light:
$$
F_D = 2 \times F_T \times \left (\frac {V_R}{C} \right)
$$
.
Passive radar is applicable to electronic countermeasures and radio astronomy as follows:
$$
F_D = F_T \times \left (\frac {V_R}{C} \right)
$$
.
Only the radial component of the velocity is relevant. When the reflector is moving at a right angle to the radar beam, it has no relative velocity.
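A small numeric sketch of both formulas (values and function names are illustrative), including the projection of speed onto the radial direction mentioned above:
```python
import math

C = 299_792_458.0  # speed of light in m/s

def doppler_shift(f_transmit, v_radial, active=True):
    """F_D = 2 * F_T * V_R / C for active radar, F_T * V_R / C for passive."""
    factor = 2.0 if active else 1.0
    return factor * f_transmit * v_radial / C

def radial_velocity(speed, angle_deg):
    """Project the target speed onto the radar beam; 90 degrees gives ~0."""
    return speed * math.cos(math.radians(angle_deg))

f_t = 10e9                                     # 10 GHz transmit frequency
v_r = radial_velocity(300.0, 0.0)              # head-on target at 300 m/s
print(doppler_shift(f_t, v_r))                 # ~20 kHz (active)
print(doppler_shift(f_t, v_r, active=False))   # ~10 kHz (passive)
print(radial_velocity(300.0, 90.0))            # ~0: crossing target, no shift
```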
|
https://en.wikipedia.org/wiki/Radar
|
passage: Using the characteristic function representation for the wrapped normal distribution in the left side of the integral:
$$
f_{WN}(\theta;\mu,\sigma) =\frac{1}{2\pi}\sum_{n=-\infty}^\infty q^{n^2/2}\,z^n
$$
the entropy may be written:
$$
H = -\ln\left(\frac{\phi(q)}{2\pi}\right)+\frac{1}{2\pi}\int_\Gamma \left( \sum_{n=-\infty}^\infty\sum_{k=1}^\infty \frac{(-1)^k}{k} \frac{q^{(n^2+k)/2}}{1-q^k}\left(z^{n+k}+z^{n-k}\right) \right)\,d\theta
$$
which may be integrated to yield:
$$
H = -\ln\left(\frac{\phi(q)}{2\pi}\right)+2\sum_{k=1}^\infty \frac{(-1)^k}{k}\, \frac{q^{(k^2+k)/2}}{1-q^k}
$$
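The series can be evaluated numerically. The sketch below is an assumption-laden illustration, not part of the article: it takes q = exp(-σ²), z = exp(i(θ − μ)) and φ(q) = ∏_{k≥1}(1 − q^k) (the Euler function), which is how these symbols are usually defined for the wrapped normal distribution, and it prints the closed-form series next to a direct numerical evaluation of −∫ f ln f for comparison rather than asserting the identity.
```python
import numpy as np

def wrapped_normal_pdf(theta, mu, sigma, n_terms=60):
    """Density from the series above, assuming q = exp(-sigma**2), z = exp(i(theta-mu)):
    f = (1/2pi) * (1 + 2 * sum_{n>=1} q**(n**2/2) * cos(n*(theta - mu)))."""
    q = np.exp(-sigma ** 2)
    n = np.arange(1, n_terms + 1)
    series = (q ** (n ** 2 / 2.0) * np.cos(np.outer(theta - mu, n))).sum(axis=1)
    return (1.0 + 2.0 * series) / (2.0 * np.pi)

def entropy_series(sigma, k_terms=200):
    """H = -ln(phi(q)/(2*pi)) + 2 * sum_k (-1)**k / k * q**((k**2+k)/2) / (1 - q**k)."""
    q = np.exp(-sigma ** 2)
    k = np.arange(1, k_terms + 1)
    phi = np.prod(1.0 - q ** k)                     # Euler function phi(q)
    tail = 2.0 * np.sum((-1.0) ** k / k * q ** ((k ** 2 + k) / 2.0) / (1.0 - q ** k))
    return -np.log(phi / (2.0 * np.pi)) + tail

def entropy_numeric(sigma, grid=4096):
    """Direct evaluation of -integral f*ln(f) over one period, for comparison."""
    theta = np.linspace(-np.pi, np.pi, grid, endpoint=False)
    f = wrapped_normal_pdf(theta, 0.0, sigma)
    return -np.sum(f * np.log(f)) * (2.0 * np.pi / grid)

for sigma in (0.5, 1.0, 2.0):
    print(sigma, entropy_series(sigma), entropy_numeric(sigma))
```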
|
https://en.wikipedia.org/wiki/Wrapped_normal_distribution
|