passage: A tree that does not have any terminal nodes is called pruned.
## Relation to other types of trees
In graph theory, a rooted tree is a directed graph in which every vertex except for a special root vertex has exactly one outgoing edge, and in which the path formed by following these edges from any vertex eventually leads to the root vertex.
If $T$ is a tree in the descriptive set theory sense, then it corresponds to a graph with one vertex for each sequence in $T$, and an outgoing edge from each nonempty sequence that connects it to the shorter sequence formed by removing its last element. This graph is a tree in the graph-theoretic sense. The root of the tree is the empty sequence.
In order theory, a different notion of a tree is used: an order-theoretic tree is a partially ordered set with one minimal element in which each element has a well-ordered set of predecessors.
Every tree in descriptive set theory is also an order-theoretic tree, using a partial ordering in which two sequences $s$ and $u$ are ordered by $s < u$ if and only if $s$ is a proper prefix of $u$. The empty sequence is the unique minimal element, and each element has a finite and well-ordered set of predecessors (the set of all of its prefixes).
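The three equivalent views (a prefix-closed set of finite sequences, a graph whose edges drop the last element, and the proper-prefix order) can be sketched in Python; the tuple encoding and helper names below are my own illustration, not from the article.

```python
# Sketch (my own, not from the article) of a descriptive-set-theoretic tree
# as a prefix-closed set of tuples, its graph-theoretic parent map, and the
# order-theoretic proper-prefix comparison.

def parent(seq):
    """Graph-theoretic edge: a nonempty sequence connects to the shorter
    sequence formed by removing its last element."""
    return seq[:-1]

def is_proper_prefix(s, u):
    """Order-theoretic comparison: s < u iff s is a proper prefix of u."""
    return len(s) < len(u) and u[:len(s)] == s

# A small tree: prefix-closed, rooted at the empty sequence ().
T = {(), (0,), (1,), (0, 0), (0, 1)}
assert all(parent(s) in T for s in T if s)      # every parent edge stays in T
assert is_proper_prefix((0,), (0, 1))           # (0) < (0, 1) in the prefix order
assert min(T, key=len) == ()                    # the root / unique minimal element
```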
|
https://en.wikipedia.org/wiki/Tree_%28descriptive_set_theory%29
|
passage: However, since Ada 2012, functions are not required to be pure and may mutate their suitably declared parameters or the global state.
Example:
Package specification (example.ads)
```ada
package Example is
   type Number is range 1 .. 11;
   procedure Print_and_Increment (j : in out Number);
end Example;
```
Package body (example.adb)
```ada
with Ada.Text_IO;

package body Example is

   i : Number := Number'First;

   procedure Print_and_Increment (j : in out Number) is

      function Next (k : in Number) return Number is
      begin
         return k + 1;
      end Next;

   begin
      Ada.Text_IO.Put_Line ("The total is: " & Number'Image (j));
      j := Next (j);
   end Print_and_Increment;

-- package initialization executed when the package is elaborated
begin
   while i < Number'Last loop
      Print_and_Increment (i);
   end loop;
end Example;
```
This program can be compiled, e.g., by using the freely available open-source compiler GNAT, by executing
```bash
gnatmake -z example.adb
```
Packages, procedures and functions can nest to any depth, and each can also be the logical outermost block.
Each package, procedure or function can have its own declarations of constants, types, variables, and other procedures, functions and packages, which can be declared in any order.
### Pragmas
A pragma is a compiler directive that conveys information to the compiler to allow specific manipulation of the compiled output.
|
https://en.wikipedia.org/wiki/Ada_%28programming_language%29
|
passage: If the signs differ, then the sequence is concave. In this example, the polygon is negatively oriented, but the determinant for the points F-G-H is positive, and so the sequence F-G-H is concave.
The following table illustrates rules for determining whether a sequence of points is convex, concave, or flat:
| | Negatively oriented polygon (clockwise) | Positively oriented polygon (counterclockwise) |
|---|---|---|
| determinant of orientation matrix for local points is negative | convex sequence of points | concave sequence of points |
| determinant of orientation matrix for local points is positive | concave sequence of points | convex sequence of points |
| determinant of orientation matrix for local points is 0 | collinear sequence of points | collinear sequence of points |
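The table can be applied mechanically. The following Python sketch (helper names are my own illustration, not from the article) computes the orientation determinant for three consecutive points and reads off the classification.

```python
# Hypothetical helpers (not from the article) implementing the table above.

def orientation_det(p, q, r):
    """Determinant of the 3x3 orientation matrix
    [[1, px, py], [1, qx, qy], [1, rx, ry]]."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])

def classify(p, q, r, polygon_ccw):
    """Combine the sign of the determinant with the polygon's orientation."""
    d = orientation_det(p, q, r)
    if d == 0:
        return "collinear"
    if polygon_ccw:
        return "convex" if d > 0 else "concave"
    return "convex" if d < 0 else "concave"

# Counterclockwise square: each corner is a convex sequence of points.
print(classify((0, 0), (1, 0), (1, 1), polygon_ccw=True))   # convex
```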
|
https://en.wikipedia.org/wiki/Curve_orientation
|
passage: This behavior hinges on the roughness factor (Rf), defining the ratio of solid-liquid area to its projection, influencing contact angles. On rough surfaces, non-wetting liquids give rise to composite solid-liquid-air interfaces, their contact angles determined by the distribution of wet and air-pocket areas. The achievement of superliquiphobicity involves increasing the fractional flat geometrical area (fLA) and Rf, leading to surfaces that actively repel liquids.
The inspiration for crafting such surfaces draws from nature's ingenuity, illustrated by the "lotus effect". Leaves of water-repellent plants, like the lotus, exhibit inherent hierarchical structures featuring nanoscale wax-coated formations. Other natural surfaces with these capabilities include beetle carapaces and cactus spines, which may exhibit rough features at multiple size scales. These structures lead to superhydrophobicity, where water droplets perch on trapped air bubbles, resulting in high contact angles and minimal contact angle hysteresis. This natural example guides the development of superliquiphobic surfaces, capitalizing on re-entrant geometries that can repel low surface tension liquids and achieve near-zero contact angle hysteresis.
Creating superliquiphobic surfaces involves pairing re-entrant geometries with low surface energy materials, such as fluorinated substances or liquid-like silicones. These geometries include overhangs that widen beneath the surface, enabling repellency even for minimal contact angles.
|
https://en.wikipedia.org/wiki/Biomimetics
|
passage: It can be negative and also greater than 1 in modulus.
The relationship between quantity rotation, N, and unit turns, tr, can be expressed as:
$$
N = \frac{\varphi}{\text{tr}} = \{ \varphi \}_\text{tr}
$$
where $\{ \varphi \}_\text{tr}$ is the numerical value of the angle $\varphi$ in units of turns.
In the ISQ/SI, rotation is used to derive rotational frequency (the rate of change of rotation with respect to time), denoted by n:
$$
n = \frac{\mathrm{d}N}{\mathrm{d}t}
$$
The SI unit of rotational frequency is the reciprocal second (s⁻¹). Common related units of frequency are the hertz (Hz), cycles per second (cps), and revolutions per minute (rpm).
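A minimal sketch of this unit bookkeeping, using illustrative function names of my own:

```python
# Rotational-frequency unit bookkeeping (my own sketch, not from the article):
# s^-1, cps and rpm all count rotations per unit time.

def rpm_to_per_second(rpm):
    """Revolutions per minute -> rotations per second (s^-1)."""
    return rpm / 60.0

def rotations(n_per_second, seconds):
    """Integrate n = dN/dt for constant n: total quantity rotation N."""
    return n_per_second * seconds

assert rpm_to_per_second(3000) == 50.0   # 3000 rpm = 50 s^-1
assert rotations(50.0, 2.0) == 100.0     # 100 turns in two seconds
```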
The superseded version ISO 80000-3:2006 defined "revolution" as a special name for the dimensionless unit "one",
which also received other special names, such as the radian.
Despite their dimensional homogeneity, these two specially named dimensionless units are applicable for non-comparable kinds of quantity: rotation and angle, respectively.
"Cycle" is also mentioned in ISO 80000-3, in the definition of period.
|
https://en.wikipedia.org/wiki/Turn_%28angle%29
|
passage: To preserve their businesses, natural gas utilities in the United States have been lobbying for laws preventing local electrification ordinances, and are promoting renewable natural gas and hydrogen fuel.
### Other pollutants
Although natural gas produces far lower amounts of sulfur dioxide and nitrogen oxides (NOx) than other fossil fuels, pollutants from burning natural gas in homes can be a health hazard.
### Radionuclides
Natural gas extraction also produces radioactive isotopes of polonium (Po-210), lead (Pb-210) and radon (Rn-222). Radon is a gas with initial activity from 5 to 200,000 becquerels per cubic meter of gas. It decays rapidly to Pb-210, which can build up as a thin film in gas extraction equipment.
## Safety concerns
The natural gas extraction workforce faces unique health and safety challenges.
### Production
Some gas fields yield sour gas containing hydrogen sulfide (H₂S), a toxic compound when inhaled. Amine gas treating, an industrial-scale process which removes acidic gaseous components, is often used to remove hydrogen sulfide from natural gas.
Extraction of natural gas (or oil) leads to a decrease in pressure in the reservoir. Such a decrease in pressure may in turn result in subsidence, the sinking of the ground above. Subsidence may affect ecosystems, waterways, sewer and water supply systems, foundations, and so on.
### Fracking
Releasing natural gas from subsurface porous rock formations may be accomplished by a process called hydraulic fracturing or "fracking".
|
https://en.wikipedia.org/wiki/Natural_gas
|
passage: Male physicians aged 40–59 years have been found to be more likely to have been reported for sexual misconduct; women aged 20–39 have been found to make up a significant portion of reported victims of sexual misconduct. Doctors who enter into sexual relationships with patients face the threats of losing their medical license and prosecution. In the early 1990s, it was estimated that 2–9% of doctors had violated this rule. Sexual relationships between physicians and patients' relatives may also be prohibited in some jurisdictions, although this prohibition is highly controversial.
## Futility
In some hospitals, medical futility is referred to as treatment that is unable to benefit the patient. An important part of practicing good medical ethics is attempting to avoid futility by practicing non-maleficence. What should be done if there is no chance that a patient will survive or benefit from a potential treatment but the family members insist on advanced care? Previously, some articles defined futility as the patient having less than a one percent chance of surviving. Some of these cases are examined in court.
Advance directives include living wills and durable powers of attorney for health care. (See also do not resuscitate and cardiopulmonary resuscitation.) In many cases, the "expressed wishes" of the patient are documented in these directives, and this provides a framework to guide family members and health care professionals in the decision-making process when the patient is incapacitated. Undocumented expressed wishes can also help guide decisions in the absence of advance directives, as in the Quinlan case in New Jersey.
|
https://en.wikipedia.org/wiki/Medical_ethics
|
passage: ### Crusher bucket
Hardness: soft to very hard. Abrasion limit: no limit. Moisture content: dry or wet and sticky. Reduction ratio: 3/1 to 5/1. Main uses: heavy mining, quarried materials, sand & gravel, recycling.
### Jaw crusher
A jaw crusher uses compressive force for breaking particles. This mechanical pressure is achieved by the two jaws of the crusher, of which one is fixed while the other reciprocates. A jaw or toggle crusher consists of a set of vertical jaws: one jaw is kept stationary and is called the fixed jaw, while the other, called the swing jaw, moves back and forth relative to it, driven by a cam or pitman mechanism, acting like a class II lever or a nutcracker. The volume or cavity between the two jaws is called the crushing chamber. The movement of the swing jaw can be quite small, since complete crushing is not performed in one stroke. The inertia required to crush the material is provided by a flywheel that moves a shaft creating an eccentric motion that causes the closing of the gap.
Jaw crushers are heavy duty machines and hence need to be robustly constructed. The outer frame is generally made of cast iron or steel. The jaws themselves are usually constructed from cast steel. They are fitted with replaceable liners made of manganese steel or Ni-hard (a Ni-Cr alloyed cast iron). Jaw crushers are usually constructed in sections to ease transportation if they are to be taken underground for carrying out operations.
|
https://en.wikipedia.org/wiki/Crusher
|
passage: EEG is most sensitive to a particular set of post-synaptic potentials: those generated in superficial layers of the cortex, on the crests of gyri directly abutting the skull and radial to the skull. Dendrites which are deeper in the cortex, inside sulci, in midline or deep structures (such as the cingulate gyrus or hippocampus), or producing currents that are tangential to the skull, make far less contribution to the EEG signal.
EEG recordings do not directly capture axonal action potentials. An action potential can be accurately represented as a current quadrupole, meaning that the resulting field decreases more rapidly than the ones produced by the current dipole of post-synaptic potentials. In addition, since EEGs represent averages of thousands of neurons, a large population of cells in synchronous activity is necessary to cause a significant deflection on the recordings. Action potentials are very fast and, as a consequence, the chances of field summation are slim. However, neural backpropagation, as a typically longer dendritic current dipole, can be picked up by EEG electrodes and is a reliable indication of the occurrence of neural output.
Not only do EEGs capture dendritic currents almost exclusively as opposed to axonal currents, they also show a preference for activity on populations of parallel dendrites and transmitting current in the same direction at the same time. Pyramidal neurons of cortical layers II/III and V extend apical dendrites to layer I. Currents moving up or down these processes underlie most of the signals produced by electroencephalography.
|
https://en.wikipedia.org/wiki/Electroencephalography
|
passage: In the parlance of differential forms, this is saying that $f(x)\,dx$ is the exterior derivative of the 0-form, i.e. function, $F$: in other words, that $dF = f\,dx$. The general Stokes theorem applies to higher degree differential forms $\omega$ instead of just 0-forms such as $F$.
- A closed interval $[a,b]$ is a simple example of a one-dimensional manifold with boundary. Its boundary is the set consisting of the two points $a$ and $b$. Integrating $f$ over the interval may be generalized to integrating forms on a higher-dimensional manifold. Two technical conditions are needed: the manifold has to be orientable, and the form has to be compactly supported in order to give a well-defined integral.
- The two points $a$ and $b$ form the boundary of the closed interval. More generally, Stokes' theorem applies to oriented manifolds $M$ with boundary. The boundary $\partial M$ of $M$ is itself a manifold and inherits a natural orientation from that of $M$. For example, the natural orientation of the interval gives an orientation of the two boundary points.
|
https://en.wikipedia.org/wiki/Generalized_Stokes_theorem
|
passage: When only finitely many of the angles $\theta_i$ are nonzero then only finitely many of the terms on the right side are nonzero because all but finitely many sine factors vanish. Furthermore, in each term all but finitely many of the cosine factors are unity.
### Tangents and cotangents of sums
Let $e_k$ (for $k = 0, 1, 2, 3, \ldots$) be the $k$th-degree elementary symmetric polynomial in the variables $x_i = \tan \theta_i$ for $i = 0, 1, 2, 3, \ldots,$ that is,
$$
\begin{align}
e_0 &= 1 \\[6pt]
e_1 &= \sum_i x_i &&= \sum_i \tan\theta_i \\[6pt]
e_2 &= \sum_{i<j} x_i x_j &&= \sum_{i<j} \tan\theta_i \tan\theta_j \\[6pt]
e_3 &= \sum_{i<j<k} x_i x_j x_k &&= \sum_{i<j<k} \tan\theta_i \tan\theta_j \tan\theta_k \\
&\ \ \vdots \\
\end{align}
$$
Then
$$
\begin{align}
{\tan}\Bigl(\sum_i \theta_i\Bigr) &= \frac{e_1 - e_3 + e_5 - \cdots}{e_0 - e_2 + e_4 - \cdots}.
\end{align}
$$
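This identity can be checked numerically for a handful of angles. The sketch below is my own (not from the article), assuming the standard alternating-sum form $\tan\bigl(\sum_i \theta_i\bigr) = (e_1 - e_3 + e_5 - \cdots)/(e_0 - e_2 + e_4 - \cdots)$, and compares against direct evaluation of the tangent.

```python
import math
from itertools import combinations

# Numerical check (my own sketch) of the tangent-of-sums identity in terms of
# elementary symmetric polynomials e_k of the x_i = tan(theta_i).

def elementary_symmetric(xs, k):
    """e_k(xs): sum of all products of k distinct variables; e_0 = 1."""
    return sum(math.prod(c) for c in combinations(xs, k)) if k else 1.0

def tan_of_sum(thetas):
    """tan(sum theta_i) = (e1 - e3 + e5 - ...) / (e0 - e2 + e4 - ...)."""
    xs = [math.tan(t) for t in thetas]
    num = sum((-1) ** (j // 2) * elementary_symmetric(xs, j)
              for j in range(1, len(xs) + 1, 2))
    den = sum((-1) ** (j // 2) * elementary_symmetric(xs, j)
              for j in range(0, len(xs) + 1, 2))
    return num / den

thetas = [0.3, 0.5, 0.7]
assert abs(tan_of_sum(thetas) - math.tan(sum(thetas))) < 1e-9
```

For two angles this reduces to the familiar $\tan(\theta_1+\theta_2) = (x_1+x_2)/(1-x_1 x_2)$.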
|
https://en.wikipedia.org/wiki/List_of_trigonometric_identities
|
passage: ### Direct search methods
Direct search methods depend on evaluations of the objective function at a variety of parameter values and do not use derivatives at all. They offer alternatives to the use of numerical derivatives in the Gauss–Newton method and gradient methods.
- Alternating variable search. Each parameter is varied in turn by adding a fixed or variable increment to it and retaining the value that brings about a reduction in the sum of squares. The method is simple and effective when the parameters are not highly correlated. It has very poor convergence properties, but may be useful for finding initial parameter estimates.
- Nelder–Mead (simplex) search. A simplex in this context is a polytope of n + 1 vertices in n dimensions; a triangle on a plane, a tetrahedron in three-dimensional space and so forth. Each vertex corresponds to a value of the objective function for a particular set of parameters. The shape and size of the simplex is adjusted by varying the parameters in such a way that the value of the objective function at the highest vertex always decreases. Although the sum of squares may initially decrease rapidly, it can converge to a nonstationary point on quasiconvex problems, as shown by an example due to M. J. D. Powell.
More detailed descriptions of these and other methods are available in Numerical Recipes, together with computer code in various languages.
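As an illustration of the simplest direct search described above, here is a minimal alternating-variable search; the objective, step schedule, and all names are my own sketch, not code from Numerical Recipes.

```python
# Minimal alternating-variable search (my own sketch): vary each parameter in
# turn, keep any change that reduces the sum of squares, shrink the increment
# when no single-parameter move helps.

def alternating_variable_search(f, x, step=0.5, shrink=0.5, tol=1e-6):
    x = list(x)
    while step > tol:
        improved = False
        for i in range(len(x)):            # vary each parameter in turn
            for delta in (step, -step):
                trial = x[:]
                trial[i] += delta
                if f(trial) < f(x):        # retain any reduction in the objective
                    x = trial
                    improved = True
        if not improved:
            step *= shrink                 # no move helped: refine the increment
    return x

# Illustrative sum-of-squares objective with minimum at (3, -2).
sse = lambda p: (p[0] - 3) ** 2 + (p[1] + 2) ** 2
x = alternating_variable_search(sse, [0.0, 0.0])
```

On this uncorrelated quadratic the search walks straight to the minimum; with highly correlated parameters it would zigzag slowly, which is the poor convergence behavior the passage notes.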
|
https://en.wikipedia.org/wiki/Non-linear_least_squares
|
passage: Wang found algorithms to enumerate the tilesets that cannot tile the plane, and the tilesets that tile it periodically; by this he showed that such a decision algorithm exists if every finite set of prototiles that admits a tiling of the plane also admits a periodic tiling. In 1964, Robert Berger found an aperiodic set of prototiles from which he demonstrated that the tiling problem is in fact not decidable. This first such set, used by Berger in his proof of undecidability, required 20,426 Wang tiles. Berger later reduced his set to 104, and Hans Läuchli subsequently found an aperiodic set requiring only 40 Wang tiles. A smaller set, of six aperiodic tiles (based on Wang tiles), was discovered by Raphael M. Robinson in 1971. Roger Penrose discovered three more sets in 1973 and 1974, reducing the number of tiles needed to two, and Robert Ammann discovered several new sets in 1977. The number of tiles required was reduced to one in 2023 by David Smith, Joseph Samuel Myers, Craig S. Kaplan, and Chaim Goodman-Strauss.
The aperiodic Penrose tilings can be generated not only by an aperiodic set of prototiles, but also by a substitution and by a cut-and-project method. After the discovery of quasicrystals, aperiodic tilings became intensively studied by physicists and mathematicians. The cut-and-project method of N. G. de Bruijn for Penrose tilings eventually turned out to be an instance of the theory of Meyer sets.
|
https://en.wikipedia.org/wiki/Aperiodic_tiling
|
passage: Of the four Maxwell's equations, two—Faraday's law and Ampère's law—can be compactly expressed using curl. Faraday's law states that the curl of an electric field is equal to the opposite of the time rate of change of the magnetic field, while Ampère's law relates the curl of the magnetic field to the current and the time rate of change of the electric field.
## Identities
In general curvilinear coordinates (not only in Cartesian coordinates), the curl of a cross product of vector fields $\mathbf{v}$ and $\mathbf{F}$ can be shown to be
$$
\nabla \times \left( \mathbf{v \times F} \right) = \Big( \left( \mathbf{ \nabla \cdot F } \right) + \mathbf{F \cdot \nabla} \Big) \mathbf{v}- \Big( \left( \mathbf{ \nabla \cdot v } \right) + \mathbf{v \cdot \nabla} \Big) \mathbf{F} \ .
$$
Interchanging the vector field and operator, we arrive at the cross product of a vector field with curl of a vector field:
$$
\mathbf{v \ \times } \left( \mathbf{ \nabla \times F} \right) =\nabla_\mathbf{F} \left( \mathbf{v \cdot F } \right) - \left( \mathbf{v \cdot \nabla } \right) \mathbf{F} \ ,
$$
where $\nabla_\mathbf{F}$ is the Feynman subscript notation, which considers only the variation due to the vector field $\mathbf{F}$ (i.e., in this case, $\mathbf{v}$ is treated as being constant in space).
|
https://en.wikipedia.org/wiki/Curl_%28mathematics%29
|
passage: Self-leadership is a way toward more effectively leading other people.
### Biology and evolution of leadership
Mark van Vugt and Anjana Ahuja in Naturally Selected: The Evolutionary Science of Leadership present cases of leadership in non-human animals, from ants and bees to baboons and chimpanzees. They suggest that leadership has a long evolutionary history and that the same mechanisms underpinning leadership in humans appear in other social species, too. They also suggest that the evolutionary origins of leadership differ from those of dominance. In one study, van Vugt and his team looked at the relation between basal testosterone and leadership versus dominance. They found that testosterone correlates with dominance but not with leadership. This was replicated in a sample of managers in which there was no relation between hierarchical position and testosterone level.
Richard Wrangham and Dale Peterson, in Demonic Males: Apes and the Origins of Human Violence, present evidence that only humans and chimpanzees, among all the animals living on Earth, share a similar tendency for a cluster of behaviors: violence, territoriality, and competition for uniting behind the one chief male of the land. This position is contentious. Many animals apart from apes are territorial, compete, exhibit violence, and have a social structure controlled by a dominant male (lions, wolves, etc.), suggesting Wrangham and Peterson's evidence is not empirical.
|
https://en.wikipedia.org/wiki/Leadership
|
passage: Calculating the next 'instruction' address (i.e. table entry) can even be performed as an optional additional action of every individual table entry, allowing loops and/or jump instructions at any stage.
## Monitoring control table execution
The interpreter program can optionally save the program counter (and other relevant details depending upon instruction type) at each stage to record a full or partial trace of the actual program flow for debugging purposes, hot spot detection, code coverage analysis and performance analysis (see examples CT3 & CT4 above).
## Advantages
- clarity – information tables are ubiquitous and mostly inherently understood even by the general public (especially fault diagnostic tables in product guides)
- portability – can be designed to be 100% language independent (and platform independent – except for the interpreter)
- flexibility – ability to execute either primitives or subroutines transparently and be custom designed to suit the problem
- compactness – table usually shows condition/action pairing side-by-side (without the usual platform/language implementation dependencies), often also resulting in
- binary file – reduced in size through less duplication of instructions
- source file – reduced in size through elimination of multiple conditional statements
- improved program load (or download) speeds
- maintainability – tables often reduce the number of source lines needed to be maintained v. multiple compares
- locality of reference – compact tables structures result in tables remaining in cache
- code re-use – the "interpreter" is usually reusable.
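A toy interpreter makes the execution model concrete. The table layout below (condition, action, optional jump target) and all names are my own illustration, not a standard control-table format.

```python
# Toy control-table interpreter (my own illustration). Each entry is a
# (condition, action, jump) triple: when the condition holds, the action runs
# and control transfers to `jump` (None = fall through), so the table itself
# encodes loops and jumps. A trace is recorded, as described under
# "Monitoring control table execution".

def run(table, state, max_steps=100):
    pc = 0          # "program counter": index of the current table entry
    trace = []      # execution trace for debugging / hot-spot detection
    for _ in range(max_steps):
        if pc >= len(table):
            break
        condition, action, jump = table[pc]
        trace.append(pc)
        if condition(state):
            action(state)
            pc = pc + 1 if jump is None else jump
        else:
            pc += 1     # condition false: fall through to the next entry
    return state, trace

# Count i up to 3 (entry 0 jumps back to itself), then mark completion.
table = [
    (lambda s: s["i"] < 3, lambda s: s.update(i=s["i"] + 1), 0),
    (lambda s: True,       lambda s: s.update(done=True),    None),
]
state, trace = run(table, {"i": 0})
```

The recorded `trace` ([0, 0, 0, 0, 1] here) is exactly the kind of flow record the monitoring section describes.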
|
https://en.wikipedia.org/wiki/Control_table
|
passage: The helicases remain associated for the remainder of the replication process. Peter Meister et al. directly observed replication sites in budding yeast by monitoring green fluorescent protein (GFP)-tagged DNA polymerases α. They detected DNA replication of pairs of tagged loci spaced symmetrically from a replication origin and found that the distance between the pairs decreased markedly over time. This finding suggests that the mechanism of DNA replication goes with DNA factories. That is, couples of replication factories are loaded on replication origins and the factories are associated with each other. Also, template DNAs move into the factories, which bring extrusion of the template ssDNAs and new DNAs. Meister's finding is the first direct evidence of the replication factory model. Subsequent research has shown that DNA helicases form dimers in many eukaryotic cells and that bacterial replication machineries stay in a single intranuclear location during DNA synthesis.
Replication factories disentangle sister chromatids. The disentanglement is essential for distributing the chromatids into daughter cells after DNA replication. Because sister chromatids are held together by cohesin rings after DNA replication, replication itself is the only chance for disentanglement. Fixing replication machineries as replication factories can improve the success rate of DNA replication. If replication forks were to move freely in chromosomes, catenation of nuclei would be aggravated and would impede mitotic segregation.
### Termination
|
https://en.wikipedia.org/wiki/DNA_replication
|
passage: It is stated as follows:
Subdivide a triangle arbitrarily into a triangulation consisting of smaller triangles meeting edge to edge. Then a Sperner coloring of the triangulation is defined as an assignment of three colors to the vertices of the triangulation such that
1. Each of the three vertices A, B, and C of the initial triangle has a distinct color
1. The vertices that lie along any edge of triangle ABC have only two colors, the two colors at the endpoints of the edge. For example, each vertex on AB must have the same color as A or B.
Then every Sperner coloring of every triangulation has at least one "rainbow triangle", a smaller triangle in the triangulation that has its vertices colored with all three different colors. More precisely, there must be an odd number of rainbow triangles.
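The statement can be checked on a tiny triangulation. The example below is my own construction: triangle ABC is split into four triangles at its edge midpoints, the vertices are colored legally, and rainbow triangles are counted.

```python
# Checking Sperner's conclusion on a concrete triangulation (my own example):
# the big triangle ABC, colors 1/2/3, split into four triangles via midpoints.

def rainbow_count(triangles, color):
    """Number of triangles whose vertices carry all three colors."""
    return sum(1 for t in triangles if {color[v] for v in t} == {1, 2, 3})

# Corners get distinct colors; each midpoint uses a color from its edge's ends.
color = {"A": 1, "B": 2, "C": 3,
         "mAB": 1,   # on edge AB: must use the color of A or B
         "mBC": 2,   # on edge BC: must use the color of B or C
         "mCA": 3}   # on edge CA: must use the color of C or A
triangles = [("A", "mAB", "mCA"), ("B", "mAB", "mBC"),
             ("C", "mBC", "mCA"), ("mAB", "mBC", "mCA")]

n = rainbow_count(triangles, color)
assert n % 2 == 1     # Sperner's lemma: an odd number of rainbow triangles
```

Here only the central triangle (mAB, mBC, mCA) is rainbow, so the count is 1, which is odd as the lemma requires.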
### Multidimensional case
In the general case the lemma refers to an $n$-dimensional simplex:
$$
\mathcal{A}=A_1 A_2 \ldots A_{n+1}.
$$
Consider any triangulation $T$, a disjoint division of $\mathcal{A}$ into smaller $n$-dimensional simplices, again meeting face-to-face. Denote the coloring function as:
$$
f:S\to\{1,2,3,\dots,n,n+1\},
$$
where $S$ is the set of vertices of $T$. A coloring function defines a Sperner coloring when:
1. The vertices of the large simplex are colored with different colors, that is, without loss of generality, $f(A_i) = i$ for $1 \le i \le n+1$.
|
https://en.wikipedia.org/wiki/Sperner%27s_lemma
|
passage: For an explanation of the symbols used in this article, refer to the table of mathematical symbols.
## Definition
The intersection of two sets $A$ and $B,$ denoted by $A \cap B$, is the set of all objects that are members of both the sets $A$ and $B.$ In symbols:
$$
A \cap B = \{ x: x \in A \text{ and } x \in B\}.
$$
That is, $x$ is an element of the intersection $A \cap B$ if and only if $x$ is both an element of $A$ and an element of $B.$
For example:
- The intersection of the sets {1, 2, 3} and {2, 3, 4} is {2, 3}.
- The number 9 is not in the intersection of the set of prime numbers {2, 3, 5, 7, 11, ...} and the set of odd numbers {1, 3, 5, 7, 9, 11, ...}, because 9 is not prime.
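The definition corresponds directly to set operations in, e.g., Python (this snippet is illustrative, not from the article):

```python
# The set-builder definition {x : x in A and x in B} is exactly Python's
# built-in set intersection.
primes = {2, 3, 5, 7, 11}
odds = {1, 3, 5, 7, 9, 11}

both = {x for x in primes if x in odds}   # element-by-element definition
assert both == primes & odds == {3, 5, 7, 11}
assert 9 not in both                      # 9 is odd but not prime
```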
### Intersecting and disjoint sets
We say that $A$ intersects (meets) $B$ if there exists some $x$ that is an element of both $A$ and $B,$ in which case we also say that $A$ intersects $B$ at $x$.
|
https://en.wikipedia.org/wiki/Intersection_%28set_theory%29
|
passage: Approximately of water falls as precipitation each year: over oceans and over land. Given the Earth's surface area, that means the globally averaged annual precipitation is , but over land it is only . Climate classification systems such as the Köppen climate classification system use average annual rainfall to help differentiate between differing climate regimes. Global warming is already causing changes to weather, increasing precipitation in some geographies, and reducing it in others, resulting in additional extreme weather.
Precipitation may occur on other celestial bodies. Saturn's largest satellite, Titan, hosts methane precipitation as a slow-falling drizzle, which has been observed as rain puddles at its equator and polar regions.
## Types
Mechanisms of producing precipitation include convective, stratiform, and orographic rainfall. Convective processes involve strong vertical motions that can cause the overturning of the atmosphere in that location within an hour and cause heavy precipitation, while stratiform processes involve weaker upward motions and less intense precipitation. Precipitation can be divided into three categories, based on whether it falls as liquid water, liquid water that freezes on contact with the surface, or ice. Mixtures of different types of precipitation, including types in different categories, can fall simultaneously. Liquid forms of precipitation include rain and drizzle. Rain or drizzle that freezes on contact within a subfreezing air mass is called "freezing rain" or "freezing drizzle".
|
https://en.wikipedia.org/wiki/Precipitation
|
passage: Firstly, we define
$$
\begin{align}
c_0 &= \sup \{|f(a)|:a\in A\}\\
E_0 &= \{a\in A:f(a)\geq c_0/3\}\\
F_0 &=\{a\in A:f(a)\leq -c_0/3\}.
\end{align}
$$
Observe that $E_0$ and $F_0$ are closed and disjoint subsets of $A$. By taking a linear combination of the function obtained from the proof of Urysohn's lemma, there exists a continuous function $g_0: X \to \mathbb{R}$ such that
$$
\begin{align}
g_0 &= \frac{c_0}{3}\text{ on }E_0\\
g_0 &= -\frac{c_0}{3}\text{ on }F_0
\end{align}
$$
and furthermore $-\frac{c_0}{3} \leq g_0 \leq \frac{c_0}{3}$ on $X$. In particular, it follows that
$$
\begin{align}
|g_0| &\leq \frac{c_0}{3}\\
|f-g_0| &\leq \frac{2c_0}{3}
\end{align}
$$
on $A$.
|
https://en.wikipedia.org/wiki/Tietze_extension_theorem
|
passage: One relaxes the condition that the transversals be open, relatively compact subsets of Rq, allowing the transverse coordinates yα to take their values in some more general topological space Z. The plaques are still open, relatively compact subsets of Rp, the change of transverse coordinate formula yα(yβ) is continuous and xα(xβ,yβ) is of class Cr in the coordinates xβ and its mixed xβ partials of orders ≤ r are continuous in the coordinates (xβ,yβ). One usually requires M and Z to be locally compact, second countable and metrizable. This may seem like a rather wild generalization, but there are contexts in which it is useful.
## Holonomy
Let $(M, \mathcal{F})$ be a foliated manifold. If $L$ is a leaf of $\mathcal{F}$ and $s$ is a path in $L$, one is interested in the behavior of the foliation in a neighborhood of $s$ in $M$. Intuitively, an inhabitant of the leaf walks along the path $s$, keeping an eye on all of the nearby leaves. As they (hereafter denoted by $s(t)$) proceed, some of these leaves may "peel away", getting out of visual range, others may suddenly come into range and approach $L$ asymptotically, others may follow along in a more or less parallel fashion or wind around $L$ laterally, etc.
|
https://en.wikipedia.org/wiki/Foliation
|
passage: ### Special orthogonal group
By choosing an orthonormal basis of a Euclidean vector space $E$, the orthogonal group can be identified with the group (under matrix multiplication) of orthogonal matrices, which are the matrices $Q$ such that
$$
Q Q^\mathsf{T} = I.
$$
It follows from this equation that the square of the determinant of $Q$ equals $1$, and thus the determinant of $Q$ is either $1$ or $-1$. The orthogonal matrices with determinant $1$ form a subgroup called the special orthogonal group, denoted $\mathrm{SO}(n)$, consisting of all direct isometries of $E$, which are those that preserve the orientation of the space.
$\mathrm{SO}(n)$ is a normal subgroup of $\mathrm{O}(n)$, as being the kernel of the determinant, which is a group homomorphism whose image is the multiplicative group $\{-1, 1\}$. This implies that the orthogonal group is an internal semidirect product of $\mathrm{SO}(n)$ and any subgroup formed with the identity and a reflection.
The group $\{I, -I\}$ with two elements (where $I$ is the identity matrix) is a normal subgroup and even a characteristic subgroup of $\mathrm{O}(n)$, and, if $n$ is even, also of $\mathrm{SO}(n)$. If $n$ is odd, $\mathrm{O}(n)$ is the internal direct product of $\mathrm{SO}(n)$ and $\{I, -I\}$.
The group $\mathrm{SO}(2)$ is abelian (whereas $\mathrm{SO}(n)$ is not abelian when $n > 2$). Its finite subgroups are the cyclic group $C_k$ of $k$-fold rotations, for every positive integer $k$. All these groups are normal subgroups of $\mathrm{SO}(2)$ and $\mathrm{O}(2)$.
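These matrix facts are easy to check numerically. The 2×2 sketch below (helper names my own) verifies $Q Q^\mathsf{T} = I$ for a rotation and a reflection and compares their determinants.

```python
import math

# Numeric illustration (my own, not from the article): a 2x2 rotation lies in
# SO(2) (determinant 1), a reflection lies in O(2) with determinant -1.

def matmul2(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

t = math.pi / 3
rotation = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]
reflection = [[1.0, 0.0], [0.0, -1.0]]

for q in (rotation, reflection):
    qt = [[q[j][i] for j in range(2)] for i in range(2)]   # transpose
    prod = matmul2(q, qt)                                  # Q Q^T should be I
    assert all(abs(prod[i][j] - (i == j)) < 1e-12
               for i in range(2) for j in range(2))

assert abs(det2(rotation) - 1.0) < 1e-12       # rotation: in SO(2)
assert abs(det2(reflection) + 1.0) < 1e-12     # reflection: determinant -1
```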
|
https://en.wikipedia.org/wiki/Orthogonal_group
|
passage: The importance of this traditional use is diminishing with competition from polypropylene and the development of other haymaking techniques, while new higher-valued sisal products have been developed.
Apart from ropes, twines, and general cordage, sisal is used in low-cost and specialty paper, dartboards, buffing cloth, filters, geotextiles, mattresses, carpets, handicrafts, wire rope cores, and macramé. Sisal has been used as an environmentally friendly strengthening agent to replace asbestos and fiberglass in composite materials in various uses, including the automobile industry. The lower-grade fiber is processed by the paper industry because of its high content of cellulose and hemicelluloses. The medium-grade fiber is used in the cordage industry for making ropes and baler and binder twine. Ropes and twines are widely employed for marine, agricultural, and general industrial use. The higher-grade fiber after treatment is converted into yarns and used by the carpet industry.
Other products developed from sisal fiber include spa products, cat-scratching posts, lumbar support belts, rugs, slippers, cloths, and disc buffers. Sisal wall covering meets the abrasion and tearing resistance standards of the American Society for Testing and Materials and of the National Fire Protection Association.
Sisal walls were used very frequently in the construction of Mormon meetinghouses built between 1985 and 2010. Because of its frequent use, it has become a meme in Mormon culture.
|
https://en.wikipedia.org/wiki/Sisal
|
passage: Furthermore, with the relatively elementary technique of the Grzegorczyk hierarchy, it can be shown that every primitive recursive strictly decreasing infinite sequence of ordinals
can be "slowed down" so that it can be transformed to a Goodstein sequence where
$$
b_n = n+1
$$
, thus giving an alternative proof to the same result Kirby and Paris proved.
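The base-bumping step behind Goodstein sequences can be sketched in a few lines; the helper names `hereditary` and `goodstein` below are illustrative, not from the sources discussed:

```python
def hereditary(n, b, new_b):
    """Write n in hereditary base-b notation, then replace every b by new_b."""
    if n == 0:
        return 0
    total, power = 0, 0
    while n:
        n, digit = divmod(n, b)
        if digit:
            total += digit * new_b ** hereditary(power, b, new_b)
        power += 1
    return total

def goodstein(m, steps):
    """First `steps` terms of the Goodstein sequence of m (base starts at 2):
    bump the base by one, then subtract 1."""
    seq, b = [m], 2
    while len(seq) < steps and seq[-1] > 0:
        seq.append(hereditary(seq[-1], b, b + 1) - 1)
        b += 1
    return seq

print(goodstein(3, 10))  # [3, 3, 3, 2, 1, 0]  -- terminates quickly
print(goodstein(4, 5))   # [4, 26, 41, 60, 83] -- grows for a very long time
```

Even the sequence starting at 4 terminates eventually, but only after an astronomically long run, which is why the termination proof needs ordinals below epsilon_0.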
## Sequence length as a function of the starting value
The Goodstein function,
$$
\mathcal{G}: \mathbb{N} \to \mathbb{N}
$$
, is defined such that
$$
\mathcal{G}(n)
$$
is the length of the Goodstein sequence that starts with n. (This is a total function since every Goodstein sequence terminates.) The extremely high growth rate of
$$
\mathcal{G}
$$
can be calibrated by relating it to various standard ordinal-indexed hierarchies of functions, such as the functions
$$
H_\alpha
$$
in the Hardy hierarchy, and the functions
$$
f_\alpha
$$
in the fast-growing hierarchy of Löb and Wainer:
- Kirby and Paris (1982) proved that
$$
\mathcal{G}
$$
has approximately the same growth-rate as
$$
H_{\epsilon_0}
$$
(which is the same as that of
$$
f_{\epsilon_0}
$$
); more precisely,
$$
\mathcal{G}
$$
dominates
$$
H_\alpha
$$
for every
$$
\alpha < \epsilon_0
$$
, and
|
https://en.wikipedia.org/wiki/Goodstein%27s_theorem
|
passage: Properties shared between equivalent rings are called Morita invariant properties. For example, a ring R is semisimple if and only if all of its modules are semisimple, and since semisimple modules are preserved under Morita equivalence, an equivalent ring S must also have all of its modules semisimple, and therefore be a semisimple ring itself.
Sometimes it is not immediately obvious why a property should be preserved. For example, using one standard definition of von Neumann regular ring (for all a in R, there exists x in R such that a = axa) it is not clear that an equivalent ring should also be von Neumann regular. However another formulation is: a ring is von Neumann regular if and only if all of its modules are flat. Since flatness is preserved across Morita equivalence, it is now clear that von Neumann regularity is Morita invariant.
The following properties are Morita invariant:
- simple, semisimple
- von Neumann regular
- right (or left) Noetherian, right (or left) Artinian
- right (or left) self-injective
- quasi-Frobenius
- prime, right (or left) primitive, semiprime, semiprimitive
- right (or left) (semi-)hereditary
- right (or left) nonsingular
- right (or left) coherent
- semiprimary, right (or left) perfect, semiperfect
- semilocal
Examples of properties which are not Morita invariant include commutative, local, reduced, domain, right (or left) Goldie, Frobenius, invariant basis number, and Dedekind finite.
|
https://en.wikipedia.org/wiki/Morita_equivalence
|
passage: ## Amateur astronomy
Astronomy is one of the sciences to which amateurs can contribute the most.
Collectively, amateur astronomers observe a variety of celestial objects and phenomena sometimes with consumer-level equipment or equipment that they build themselves. Common targets of amateur astronomers include the Sun, the Moon, planets, stars, comets, meteor showers, and a variety of deep-sky objects such as star clusters, galaxies, and nebulae. Astronomy clubs are located throughout the world and many have programs to help their members set up and complete observational programs including those to observe all the objects in the Messier (110 objects) or Herschel 400 catalogues of points of interest in the night sky. One branch of amateur astronomy, astrophotography, involves the taking of photos of the night sky. Many amateurs like to specialize in the observation of particular objects, types of objects, or types of events that interest them.
Most amateurs work at visible wavelengths, but many experiment with wavelengths outside the visible spectrum. This includes the use of infrared filters on conventional telescopes, and also the use of radio telescopes. The pioneer of amateur radio astronomy was Karl Jansky, who started observing the sky at radio wavelengths in the 1930s. A number of amateur astronomers use either homemade telescopes or use radio telescopes which were originally built for astronomy research but which are now available to amateurs (e.g. the One-Mile Telescope).
|
https://en.wikipedia.org/wiki/Astronomy
|
passage: Thus Cayley–Menger determinants give a computational way to prove affine independence.
If
$$
k < n
$$
, then the points must be affinely dependent, thus
$$
\operatorname{CM}(A_0, \ldots, A_n) = 0
$$
. Cayley's 1841 paper studied the special case of
$$
k = 3, n = 4
$$
, that is, any five points
$$
A_0, \ldots, A_4
$$
in 3-dimensional space must have
$$
\operatorname{CM}(A_0, \ldots, A_4) = 0
$$
.
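A sketch of the bordered-determinant computation with NumPy (the function name is illustrative): four affinely independent points in R^3 give a nonzero determinant, while any five points give zero, matching Cayley's special case:

```python
import itertools
import numpy as np

def cayley_menger(points):
    """Cayley-Menger determinant: squared pairwise distances bordered
    by a row and column of ones (and a zero in the corner)."""
    pts = np.asarray(points, dtype=float)
    m = len(pts)
    D = np.zeros((m + 1, m + 1))
    D[0, 1:] = D[1:, 0] = 1.0
    for i, j in itertools.combinations(range(m), 2):
        D[i + 1, j + 1] = D[j + 1, i + 1] = np.sum((pts[i] - pts[j]) ** 2)
    return np.linalg.det(D)

# Four affinely independent points in R^3: nonzero determinant.
tet = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(cayley_menger(tet) != 0)  # True

# Any five points in R^3 are affinely dependent, so the determinant vanishes.
print(round(abs(cayley_menger(tet + [(1, 2, 3)])), 6))  # 0.0
```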
## History
The first result in distance geometry is Heron's formula, from the 1st century AD, which gives the area of a triangle from the distances between its 3 vertices. Brahmagupta's formula, from the 7th century AD, generalizes it to cyclic quadrilaterals. Tartaglia, in the 16th century, generalized it to give the volume of a tetrahedron from the distances between its 4 vertices.
The modern theory of distance geometry began with Arthur Cayley and Karl Menger. Cayley published the Cayley determinant in 1841, which is a special case of the general Cayley–Menger determinant. Menger proved in 1928 a characterization theorem of all semimetric spaces that are isometrically embeddable in the n-dimensional Euclidean space
$$
\mathbb{R}^n
$$
. In 1931, Menger used distance relations to give an axiomatic treatment of Euclidean geometry.
|
https://en.wikipedia.org/wiki/Distance_geometry
|
passage: It is considered clockwise circularly polarized because, from the point of view of the source, looking in the same direction of the wave's propagation, the field rotates in the clockwise direction. The second animation is that of left-handed or anti-clockwise light, using this same convention.
This convention is in conformity with the Institute of Electrical and Electronics Engineers (IEEE) standard and, as a result, it is generally used in the engineering community (Electromagnetic Waves & Antennas – S. J. Orfanidis, footnote p. 45: "most engineering texts use the IEEE convention and most physics texts, the opposite convention.").
Quantum physicists also use this convention of handedness because it is consistent with their convention of handedness for a particle's spin.
Radio astronomers also use this convention in accordance with an International Astronomical Union (IAU) resolution made in 1973.
### From the point of view of the receiver
In this alternative convention, polarization is defined from the point of view of the receiver. Using this convention, left- or right-handedness is determined by pointing one's left or right thumb toward the source, against the direction of propagation, and then matching the curling of one's fingers to the temporal rotation of the field.
When using this convention, in contrast to the other convention, the defined handedness of the wave matches the handedness of the screw type nature of the field in space.
|
https://en.wikipedia.org/wiki/Circular_polarization
|
passage: It reaches its minimum (zero) when all cases in the node fall into a single target category.
For a set of items with
$$
J
$$
classes and relative frequencies
$$
p_i
$$
,
$$
i \in \{1, 2, ...,J\}
$$
, the probability of choosing an item with label
$$
i
$$
is
$$
p_i
$$
, and the probability of miscategorizing that item is
$$
\sum_{k \ne i} p_k = 1-p_i
$$
. The Gini impurity is computed by summing pairwise products of these probabilities for each class label:
$$
\operatorname{I}_G(p) = \sum_{i=1}^J \left( p_i \sum_{k\neq i} p_k \right)
= \sum_{i=1}^J p_i (1-p_i)
= \sum_{i=1}^J (p_i - p_i^2)
= \sum_{i=1}^J p_i - \sum_{i=1}^J p_i^2
= 1 - \sum^J_{i=1} p_i^2.
$$
The Gini impurity is also an information theoretic measure and corresponds to Tsallis Entropy with deformation coefficient
$$
q=2
$$
, which in physics is associated with the lack of information in out-of-equilibrium, non-extensive, dissipative and quantum systems.
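The summation above reduces to 1 − Σ p_i², which a few lines of Python can illustrate (`gini_impurity` is an illustrative name, not a standard library function):

```python
from collections import Counter

def gini_impurity(labels):
    """1 - sum_i p_i^2 over the class frequencies of `labels`."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

print(gini_impurity(["a", "a", "a", "a"]))  # 0.0  (pure node: the minimum)
print(gini_impurity(["a", "a", "b", "b"]))  # 0.5  (two-class maximum)
# p = (0.5, 0.25, 0.25): 1 - (0.25 + 0.0625 + 0.0625) = 0.625
print(gini_impurity(["a", "a", "b", "c"]))  # 0.625
```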
|
https://en.wikipedia.org/wiki/Decision_tree_learning
|
passage: These terms distinguish those plants with hidden sexual organs (cryptogamae) from those with visible ones (phanerogamae).
## Description
The extant spermatophytes form five divisions, the first four of which are classified as gymnosperms, plants that have unenclosed, "naked seeds":
- Cycadophyta, the cycads, a subtropical and tropical group of plants,
- Ginkgophyta, which includes a single living species of tree in the genus Ginkgo,
- Pinophyta, the conifers, which are cone-bearing trees and shrubs, and
- Gnetophyta, the gnetophytes, various woody plants in the relict genera Ephedra, Gnetum, and Welwitschia.
The fifth extant division is the flowering plants, also known as angiosperms or magnoliophytes, the largest and most diverse group of spermatophytes:
- Angiosperms, the flowering plants, possess seeds enclosed in a fruit, unlike gymnosperms.
In addition to the five living taxa listed above, the fossil record contains evidence of many extinct taxa of seed plants, among those:
- Pteridospermae, the so-called "seed ferns", were one of the earliest successful groups of land plants, and forests dominated by seed ferns were prevalent in the late Paleozoic.
- Glossopteris was the most prominent tree genus in the ancient southern supercontinent of Gondwana during the Permian period.
|
https://en.wikipedia.org/wiki/Seed_plant
|
passage: Low-cost DAB radio receivers are now available from various Japanese manufacturers, and WorldSpace has worked with Thomson Broadcast to introduce a village communications center known as a Telekiosk to bring communications services to rural areas. The Telekiosks are self-contained and are available as fixed or mobile units
## Two-way digital radio standards
The key breakthrough or key feature in digital radio transmission systems is that they allow lower transmission power, they can provide robustness to noise and cross-talk and other forms of interference, and thus allow the same radio frequency to be reused at a shorter distance. Consequently, the spectral efficiency (the number of phone calls per MHz and base station, or the number of bit/s per Hz and transmitter, etc.) may be sufficiently increased. Digital radio transmission can also carry any kind of information whatsoever, just as long as it has been expressed digitally. Earlier radio communication systems had to be made expressly for a given form of communications: telephone, telegraph, or television, for example.
|
https://en.wikipedia.org/wiki/Digital_radio
|
passage: Then the series
$$
\sum_{n=1}^{\infty} f_n (x)
$$
converges absolutely and uniformly on A.
### Extensions to the ratio test
The ratio test may be inconclusive when the limit of the ratio is 1. Extensions to the ratio test, however, sometimes allows one to deal with this case.
#### Raabe–Duhamel's test
Let { an } be a sequence of positive numbers.
Define
$$
b_n=n\left(\frac{a_n}{a_{n+1}}-1 \right).
$$
If
$$
L=\lim_{n\to\infty}b_n
$$
exists there are three possibilities:
- if L > 1 the series converges (this includes the case L = ∞)
- if L < 1 the series diverges
- and if L = 1 the test is inconclusive.
An alternative formulation of this test is as follows. Let {a_n} be a series of real numbers. Then if b > 1 and K (a natural number) exist such that
$$
\left|\frac{a_{n+1}}{a_n}\right|\le 1-\frac{b}{n}
$$
for all n > K, then the series {a_n} is convergent.
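A numerical illustration of Raabe's quantity at a large n (names are illustrative): for a_n = 1/n² the plain ratio test is inconclusive, but b_n tends to 2 > 1, while the harmonic series sits exactly at the inconclusive boundary:

```python
def raabe_b(a, n):
    """b_n = n * (a_n / a_{n+1} - 1) from Raabe-Duhamel's test."""
    return n * (a(n) / a(n + 1) - 1)

# a_n = 1/n^2: the ratio a_{n+1}/a_n -> 1 (ratio test inconclusive), but
# b_n = n*((n+1)^2/n^2 - 1) = 2 + 1/n -> 2 > 1, so the series converges.
print(round(raabe_b(lambda n: 1 / n ** 2, 10 ** 6), 3))  # 2.0

# a_n = 1/n: b_n = 1 exactly, the inconclusive boundary case
# (and indeed the harmonic series diverges).
print(round(raabe_b(lambda n: 1 / n, 10 ** 6), 3))  # 1.0
```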
#### Bertrand's test
Let { an } be a sequence of positive numbers.
|
https://en.wikipedia.org/wiki/Convergence_tests
|
passage: In statistics, a central tendency (or measure of central tendency) is a central or typical value for a probability distribution.
Colloquially, measures of central tendency are often called averages. The term central tendency dates from the late 1920s.
The most common measures of central tendency are the arithmetic mean, the median, and the mode. A middle tendency can be calculated for either a finite set of values or for a theoretical distribution, such as the normal distribution. Occasionally authors use central tendency to denote "the tendency of quantitative data to cluster around some central value" (Dodge, Y. (2003), The Oxford Dictionary of Statistical Terms, OUP for the International Statistical Institute, entry for "central tendency").
The central tendency of a distribution is typically contrasted with its dispersion or variability; dispersion and central tendency are the often characterized properties of distributions. Analysis may judge whether data has a strong or a weak central tendency based on its dispersion.
## Measures
The following may be applied to one-dimensional data. Depending on the circumstances, it may be appropriate to transform the data before calculating a central tendency. Examples are squaring the values or taking logarithms. Whether a transformation is appropriate and what it should be, depend heavily on the data being analyzed.
Arithmetic mean (or simply, mean): the sum of all measurements divided by the number of observations in the data set.
Median: the middle value that separates the higher half from the lower half of the data set.
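These measures are available in Python's standard statistics module (the data set here is an arbitrary example); the geometric mean illustrates computing a central tendency after a log transform:

```python
import statistics

data = [1, 2, 2, 4, 16]

print(statistics.mean(data))    # 5
print(statistics.median(data))  # 2
print(statistics.mode(data))    # 2

# Averaging after taking logarithms gives the geometric mean, one common
# transformation for skewed data: (1*2*2*4*16)**(1/5) = 2**1.6.
print(round(statistics.geometric_mean(data), 3))  # 3.031
```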
|
https://en.wikipedia.org/wiki/Central_tendency
|
passage: Indeed, Winograd showed that the DFT can be computed with only
$$
O(n)
$$
irrational multiplications, leading to a proven achievable lower bound on the number of multiplications for power-of-two sizes; this comes at the cost of many more additions, a tradeoff no longer favorable on modern processors with hardware multipliers. In particular, Winograd also makes use of the PFA as well as an algorithm by Rader for FFTs of prime sizes.
Rader's algorithm, exploiting the existence of a generator for the multiplicative group modulo a prime n, expresses a DFT of prime size n as a cyclic convolution of (composite) size n − 1, which can then be computed by a pair of ordinary FFTs via the convolution theorem (although Winograd uses other convolution methods). Another prime-size FFT is due to L. I. Bluestein, and is sometimes called the chirp-z algorithm; it also re-expresses a DFT as a convolution, but this time of the same size (which can be zero-padded to a power of two and evaluated by radix-2 Cooley–Tukey FFTs, for example), via the identity
$$
nk = -\frac{(k-n)^2} 2 + \frac{n^2} 2 + \frac{k^2} 2.
$$
Hexagonal fast Fourier transform (HFFT) aims at computing an efficient FFT for the hexagonally-sampled data by using a new addressing scheme for hexagonal grids, called Array Set Addressing (ASA).
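Bluestein's re-expression can be sketched with NumPy (the function name and the power-of-two padding choice are illustrative); the chirp factors come directly from the identity nk = −(k−n)²/2 + n²/2 + k²/2:

```python
import numpy as np

def bluestein_dft(x):
    """DFT of arbitrary length n via the chirp-z identity, turning the DFT
    into a convolution evaluated with zero-padded power-of-two FFTs."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    chirp = np.exp(-1j * np.pi * np.arange(n) ** 2 / n)  # w^(k^2/2), w = e^(-2*pi*i/n)
    m = 1 << (2 * n - 1).bit_length()                    # power of two >= 2n - 1
    b = np.zeros(m, dtype=complex)
    b[:n] = np.conj(chirp)                               # kernel w^(-(k-n)^2/2) ...
    b[m - n + 1:] = np.conj(chirp[1:][::-1])             # ... wrapped for negative lags
    conv = np.fft.ifft(np.fft.fft(x * chirp, m) * np.fft.fft(b))
    return chirp * conv[:n]

x = np.random.default_rng(1).standard_normal(7)  # prime length: no radix-2 split
print(np.allclose(bluestein_dft(x), np.fft.fft(x)))  # True
```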
|
https://en.wikipedia.org/wiki/Fast_Fourier_transform
|
passage: For example, software libraries for linear algebra, such as BLAS, typically provide options to specify that certain matrices are to be interpreted in transposed order to avoid the necessity of data movement.
However, there remain a number of circumstances in which it is necessary or desirable to physically reorder a matrix in memory to its transposed ordering. For example, with a matrix stored in row-major order, the rows of the matrix are contiguous in memory and the columns are discontiguous. If repeated operations need to be performed on the columns, for example in a fast Fourier transform algorithm, transposing the matrix in memory (to make the columns contiguous) may improve performance by increasing memory locality.
Ideally, one might hope to transpose a matrix with minimal additional storage. This leads to the problem of transposing an n × m matrix in-place, with O(1) additional storage or at most storage much less than mn. For n ≠ m, this involves a complicated permutation of the data elements that is non-trivial to implement in-place. Therefore, efficient in-place matrix transposition has been the subject of numerous research publications in computer science, starting in the late 1950s, and several algorithms have been developed.
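The permutation involved can be sketched by cycle-following (names are illustrative); this version spends O(nm) extra bits on a visited bitset for clarity, whereas the published in-place algorithms replace it with cleverer cycle-leader tests to get by with much less storage:

```python
def transpose_inplace(flat, rows, cols):
    """Transpose a rows x cols matrix stored row-major in the list `flat`,
    following the cycles of the index permutation k -> (k * rows) % (N - 1)."""
    n = rows * cols
    if n <= 2:
        return flat
    visited = [False] * n
    for start in range(1, n - 1):          # indices 0 and n-1 are fixed points
        if visited[start]:
            continue
        k, val = start, flat[start]
        while True:
            dest = (k * rows) % (n - 1)    # target index of the element at k
            flat[dest], val = val, flat[dest]
            visited[dest] = True
            k = dest
            if k == start:
                break
    return flat

# 2 x 3 row-major [[1,2,3],[4,5,6]] -> 3 x 2 row-major [[1,4],[2,5],[3,6]].
print(transpose_inplace([1, 2, 3, 4, 5, 6], 2, 3))  # [1, 4, 2, 5, 3, 6]
```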
## Transposes of linear maps and bilinear forms
As the main use of matrices is to represent linear maps between finite-dimensional vector spaces, the transpose is an operation on matrices that may be seen as the representation of some operation on linear maps.
|
https://en.wikipedia.org/wiki/Transpose
|
passage: In December 2023, two gene therapies were approved for sickle cell disease, exagamglogene autotemcel and lovotibeglogene autotemcel.
2024
In November 2024, FDA granted accelerated approval for eladocagene exuparvovec-tneq (Kebilidi, PTC Therapeutics), a direct-to-brain gene therapy for aromatic L-amino acid decarboxylase deficiency. It uses a recombinant adeno-associated virus serotype 2 (rAAV2) to deliver a functioning DOPA decarboxylase (DDC) gene directly into the putamen, increasing the AADC enzyme and restoring dopamine production. It is administered through a stereotactic surgical procedure.
List of gene therapies
- Gene therapy for color blindness
- Gene therapy for epilepsy
- Gene therapy for osteoarthritis
- Gene therapy in Parkinson's disease
- Gene therapy of the human retina
- List of gene therapies
|
https://en.wikipedia.org/wiki/Gene_therapy
|
passage: The base distribution is the expected value of the process, i.e., the Dirichlet process draws distributions "around" the base distribution the way a normal distribution draws real numbers around its mean. However, even if the base distribution is continuous, the distributions drawn from the Dirichlet process are almost surely discrete. The scaling parameter specifies how strong this discretization is: in the limit of
$$
\alpha\rightarrow 0
$$
, the realizations are all concentrated at a single value, while in the limit of
$$
\alpha\rightarrow\infty
$$
the realizations become continuous. Between the two extremes the realizations are discrete distributions with less and less concentration as
$$
\alpha
$$
increases.
The Dirichlet process can also be seen as the infinite-dimensional generalization of the Dirichlet distribution. In the same way as the Dirichlet distribution is the conjugate prior for the categorical distribution, the Dirichlet process is the conjugate prior for infinite, nonparametric discrete distributions. A particularly important application of Dirichlet processes is as a prior probability distribution in infinite mixture models.
The Dirichlet process was formally introduced by Thomas S. Ferguson in 1973.
It has since been applied in data mining and machine learning, among others for natural language processing, computer vision and bioinformatics.
## Introduction
Dirichlet processes are usually used when modelling data that tends to repeat previous values in a so-called "rich get richer" fashion.
|
https://en.wikipedia.org/wiki/Dirichlet_process
|
passage: More importantly, the estimator for
$$
P (y=1\mid x)
$$
becomes inconsistent, too. To deal with this problem, the original model needs to be transformed to be homoskedastic. For instance, in the same example,
$$
1[\beta_0+\beta_1 x_1+\varepsilon>0]
$$
can be rewritten as
$$
1[\beta_0/x_1+\beta_1+\varepsilon/x_1>0]
$$
, where
$$
\varepsilon/x_1\mid x\sim N(0,1)
$$
. Therefore,
$$
P(y=1\mid x) = \Phi (\beta_1 + \beta_0/x_1)
$$
and running probit on
$$
(1, 1/x_1)
$$
generates a consistent estimator for the conditional probability
$$
P(y=1\mid x).
$$
When the assumption that
$$
\varepsilon
$$
is normally distributed fails to hold, then a functional form misspecification issue arises: if the model is still estimated as a probit model, the estimators of the coefficients
$$
\beta
$$
are inconsistent. For instance, if
$$
\varepsilon
$$
follows a logistic distribution in the true model, but the model is estimated by probit, the estimates will be generally smaller than the true value.
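The rescaling argument for the heteroskedastic case can be checked by Monte Carlo (parameter values and names here are arbitrary illustrations): with a latent error of standard deviation x1, dividing the index by x1 restores a unit-variance error, so the response probability is Phi(b1 + b0/x1):

```python
import math
import random

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

b0, b1, x1 = 0.5, 1.0, 2.0
rng = random.Random(42)

# Heteroskedastic latent model: y = 1[b0 + b1*x1 + eps > 0], eps ~ N(0, x1^2).
n = 200_000
mc = sum(b0 + b1 * x1 + rng.gauss(0.0, x1) > 0 for _ in range(n)) / n

# The rescaled index gives P(y=1 | x) = Phi(b1 + b0/x1) = Phi(1.25).
print(round(phi(b1 + b0 / x1), 3))         # 0.894
print(abs(mc - phi(b1 + b0 / x1)) < 0.01)  # True
```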
|
https://en.wikipedia.org/wiki/Probit_model
|
passage: ## Example
Using "playfair example" as the key (assuming that I and J are interchangeable), the table becomes:

P L A Y F
I R E X M
B C D G H
K N O Q S
T U V W Z
The first step of encrypting the message "hide the gold in the tree stump" is to convert it to the pairs of letters "HI DE TH EG OL DI NT HE TR EX ES TU MP" (with the null "X" used to separate the repeated "E"s). Then:
1. The pair HI forms a rectangle, replace it with BM
2. The pair DE is in a column, replace it with OD
3. The pair TH forms a rectangle, replace it with ZB
4. The pair EG forms a rectangle, replace it with XD
5. The pair OL forms a rectangle, replace it with NA
6. The pair DI forms a rectangle, replace it with BE
7. The pair NT forms a rectangle, replace it with KU
8. The pair HE forms a rectangle, replace it with DM
9. The pair TR forms a rectangle, replace it with UI
10. The pair EX (X inserted to split EE) is in a row, replace it with XM
11. The pair ES forms a rectangle, replace it with MO
12. The pair TU is in a row, replace it with UV
13. The pair MP forms a rectangle, replace it with IF
Thus the message "hide the gold in the tree stump" becomes "BM OD ZB XD NA BE KU DM UI XM MO UV IF".
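The rules applied above can be sketched as a short implementation (the helper names `build_table`, `digraphs`, and `encrypt` are illustrative); it reproduces the table and the digraph substitutions from this example:

```python
def build_table(key):
    """5x5 Playfair table from a key, merging J into I."""
    seen = []
    for ch in (key + "abcdefghiklmnopqrstuvwxyz").upper():
        ch = "I" if ch == "J" else ch
        if ch.isalpha() and ch not in seen:
            seen.append(ch)
    return [seen[i:i + 5] for i in range(0, 25, 5)]

def digraphs(text):
    """Split plaintext into pairs, inserting X between repeated letters
    and padding an odd-length message with a trailing X."""
    letters = [("I" if c == "J" else c) for c in text.upper() if c.isalpha()]
    pairs, i = [], 0
    while i < len(letters):
        a = letters[i]
        b = letters[i + 1] if i + 1 < len(letters) else "X"
        if a == b:
            pairs.append(a + "X")
            i += 1
        else:
            pairs.append(a + b)
            i += 2
    return pairs

def encrypt(text, key):
    table = build_table(key)
    pos = {table[r][c]: (r, c) for r in range(5) for c in range(5)}
    out = []
    for a, b in digraphs(text):
        (ra, ca), (rb, cb) = pos[a], pos[b]
        if ra == rb:            # same row: take the letters to the right
            out.append(table[ra][(ca + 1) % 5] + table[rb][(cb + 1) % 5])
        elif ca == cb:          # same column: take the letters below
            out.append(table[(ra + 1) % 5][ca] + table[(rb + 1) % 5][cb])
        else:                   # rectangle: same row, the other letter's column
            out.append(table[ra][cb] + table[rb][ca])
    return " ".join(out)

print(encrypt("hide the gold in the tree stump", "playfair example"))
# BM OD ZB XD NA BE KU DM UI XM MO UV IF
```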
|
https://en.wikipedia.org/wiki/Playfair_cipher
|
passage:
The use of the mean as a measure of the central tendency for the ordinal type is still debatable among those who accept Stevens's typology. Many behavioural scientists use the mean for ordinal data anyway. This is often justified on the basis that the ordinal type in behavioural science is in fact somewhere between the true ordinal and interval types; although the interval difference between two ordinal ranks is not constant, it is often of the same order of magnitude.
For example, applications of measurement models in educational contexts often indicate that total scores have a fairly linear relationship with measurements across the range of an assessment. Thus, some argue that so long as the unknown interval difference between ordinal scale ranks is not too variable, interval scale statistics such as means can meaningfully be used on ordinal scale variables. Statistical analysis software such as SPSS requires the user to select the appropriate measurement class for each variable. This ensures that subsequent user errors cannot inadvertently perform meaningless analyses (for example correlation analysis with a variable on a nominal level).
L. L. Thurstone made progress toward developing a justification for obtaining the interval type, based on the law of comparative judgment. A common application of the law is the analytic hierarchy process. Further progress was made by Georg Rasch (1960), who developed the probabilistic Rasch model that provides a theoretical basis and justification for obtaining interval-level measurements from counts of observations such as total scores on assessments.
### Other proposed typologies
Typologies aside from Stevens's typology have been proposed.
|
https://en.wikipedia.org/wiki/Level_of_measurement
|
passage: Applying Laplace transformation results in the transformed PID controller equation
$$
u(s) = K_P \, e(s) + K_I \, \frac{1}{s} \, e(s) + K_D \, s \, e(s)
$$
$$
u(s) = \left(K_P + K_I \, \frac{1}{s} + K_D \, s\right) e(s)
$$
with the PID controller transfer function
$$
C(s) = \left(K_P + K_I \, \frac{1}{s} + K_D \, s\right).
$$
As an example of tuning a PID controller in the closed-loop system, consider a first-order plant given by
$$
P(s) = \frac{A}{1 + sT_P}
$$
where A and T_P are some constants. The plant output is fed back through
$$
F(s) = \frac{1}{1 + sT_F}
$$
where T_F is also a constant. Now if we set
$$
K_P=K\left(1+\frac{T_D}{T_I}\right)
$$
,
$$
K_D=K T_D
$$
, and
$$
K_I=\frac{K}{T_I}
$$
, we can express the PID controller transfer function in series form as
$$
C(s) = K \left(1 + \frac{1}{sT_I}\right)(1 + sT_D)
$$
Plugging P(s), F(s), and C(s) into the closed-loop transfer function, we find that by setting
$$
K = \frac{1}{A}, T_I = T_F, T_D = T_P
$$
.
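Rather than working in the Laplace domain, the effect of the individual terms can be sketched with a forward-Euler time-domain simulation of the same first-order plant (the values A = 2, T_P = 1, the gains, and the function name are arbitrary illustrative choices):

```python
def simulate_pid(kp, ki, kd, a=2.0, tp=1.0, dt=0.001, t_end=10.0, setpoint=1.0):
    """Forward-Euler loop for u = Kp*e + Ki*int(e) + Kd*de/dt driving the
    first-order plant P(s) = A/(1 + s*T_P), i.e. T_P * y' = -y + A*u."""
    y, integral, prev_e = 0.0, 0.0, setpoint
    for _ in range(int(t_end / dt)):
        e = setpoint - y
        integral += e * dt
        derivative = (e - prev_e) / dt
        u = kp * e + ki * integral + kd * derivative
        y += dt * (-y + a * u) / tp    # plant state update
        prev_e = e
    return y

# Integral action removes the steady-state error; pure proportional control
# settles at A*Kp/(1 + A*Kp) = 2/3 of the setpoint instead.
print(round(simulate_pid(kp=1.0, ki=2.0, kd=0.0), 3))  # 1.0
print(round(simulate_pid(kp=1.0, ki=0.0, kd=0.0), 3))  # 0.667
```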
|
https://en.wikipedia.org/wiki/Closed-loop_controller
|
passage: then the inequalities above become equalities (with
$$
\limsup_{n\to\infty} a_n
$$
or
$$
\liminf_{n\to\infty} a_n
$$
being replaced by
$$
a
$$
).
- For any two sequences of non-negative real numbers
$$
(a_n), (b_n),
$$
the inequalities
$$
\limsup_{n\to\infty}\, (a_n b_n) \leq \left(\limsup_{n\to\infty} a_n \!\right) \!\!\left(\limsup_{n\to\infty} b_n \!\right)
$$
and
$$
\liminf_{n\to\infty}\, (a_n b_n) \geq \left(\liminf_{n\to\infty} a_n \right)\!\!\left(\liminf_{n\to\infty} b_n\right)
$$
hold whenever the right-hand side is not of the form
$$
0 \cdot \infty.
$$
If
$$
\lim_{n\to\infty} a_n = A
$$
exists (including the case
$$
A = +\infty
$$
)
|
https://en.wikipedia.org/wiki/Limit_inferior_and_limit_superior
|
passage: ### Myerson–Satterthwaite theorem
show there is no efficient way for two parties to trade a good when they each have secret and probabilistically varying valuations for it, without the risk of forcing one party to trade at a loss. It is among the most remarkable negative results in economics—a kind of negative mirror to the fundamental theorems of welfare economics.
### Shapley value
Phillips and Marden (2018) proved that for cost-sharing games with concave cost functions, the optimal cost-sharing rule that firstly optimizes the worst-case inefficiencies in a game (the price of anarchy), and then secondly optimizes the best-case outcomes (the price of stability), is precisely the Shapley value cost-sharing rule. A symmetrical statement is similarly valid for utility-sharing games with convex utility functions.
### Price discrimination
introduces a setting in which the transfer function t() is easy to solve for. Due to its relevance and tractability it is a common setting in the literature. Consider a single-good, single-agent setting in which the agent has quasilinear utility with an unknown type parameter
and in which the principal has a prior CDF over the agent's type . The principal can produce goods at a convex marginal cost c(x) and wants to maximize the expected profit from the transaction
subject to IC and IR conditions
The principal here is a monopolist trying to set a profit-maximizing price scheme in which it cannot identify the type of the customer. A common example is an airline setting fares for business, leisure and student travelers.
|
https://en.wikipedia.org/wiki/Mechanism_design
|
passage: The calculus has applications in, for example, stochastic filtering.
## Overview and history
Malliavin introduced Malliavin calculus to provide a stochastic proof that Hörmander's condition implies the existence of a density for the solution of a stochastic differential equation; Hörmander's original proof was based on the theory of partial differential equations. His calculus enabled Malliavin to prove regularity bounds for the solution's density. The calculus has been applied to stochastic partial differential equations.
## Gaussian probability space
Consider a Wiener functional
$$
F
$$
(a functional from the classical Wiener space) and consider the task of finding a derivative for it. The natural idea would be to use the Gateaux derivative
$$
D_g F:=\left.\frac{d}{d\tau}F[f+\tau g]\right|_{\tau=0},
$$
however this does not always exist. Therefore it makes sense to find a new differential calculus for such spaces by limiting the directions.
The toy model of Malliavin calculus is an irreducible Gaussian probability space
$$
X=(\Omega,\mathcal{F},P,\mathcal{H})
$$
.
|
https://en.wikipedia.org/wiki/Malliavin_calculus
|
passage: &&= 2^{-\frac n 2}\cdot\sum_{k=0}^n \binom{n}{k} H_{n-k}\left(x\sqrt 2\right) H_k\left(y\sqrt 2\right).
\end{align}
$$
These umbral identities are self-evident and included in the differential operator representation detailed below,
$$
\begin{align}
\operatorname{He}_n(x) &= e^{-\frac{D^2}{2}} x^n, \\
H_n(x) &= 2^n e^{-\frac{D^2}{4}} x^n.
\end{align}
$$
In consequence, for the th derivatives the following relations hold:
$$
\begin{align}
\operatorname{He}_n^{(m)}(x) &= \frac{n!}{(n-m)!} \operatorname{He}_{n-m}(x)
&&= m! \binom{n}{m} \operatorname{He}_{n-m}(x), \\
H_n^{(m)}(x) &= 2^m \frac{n!}{(n-m)!} H_{n-m}(x)
&&= 2^m m! \binom{n}{m} H_{n-m}(x).
\end{align}
$$
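The derivative relation He_n^(m) = (n!/(n−m)!) He_{n−m} can be checked with NumPy's probabilists'-Hermite basis (a small sketch; n = 6 and m = 2 are arbitrary choices):

```python
from math import factorial
from numpy.polynomial.hermite_e import HermiteE

n, m = 6, 2

# He_n as a probabilists'-Hermite basis element: coefficient 1 on degree n.
he_n = HermiteE([0] * n + [1])

# m-th derivative versus (n!/(n-m)!) * He_{n-m} = 30 * He_4.
lhs = he_n.deriv(m)
rhs = HermiteE([0] * (n - m) + [factorial(n) // factorial(n - m)])
print(lhs == rhs)  # True
```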
It follows that the Hermite polynomials also satisfy the recurrence relation
$$
\begin{align}
|
https://en.wikipedia.org/wiki/Hermite_polynomials
|
passage: In geography, regions, otherwise referred to as areas, zones, lands or territories, are portions of the Earth's surface that are broadly divided by physical characteristics (physical geography), human impact characteristics (human geography), and the interaction of humanity and the environment (environmental geography). Geographic regions and sub-regions are mostly described by their imprecisely defined, and sometimes transitory boundaries, except in human geography, where jurisdiction areas such as national borders are defined in law. More confined or well bounded portions are called locations or places.
Apart from the global continental regions, there are also hydrospheric and atmospheric regions that cover the oceans, and discrete climates above the land and water masses of the planet. The land and water global regions are divided into subregions geographically bounded by large geological features that influence large-scale ecologies, such as plains and features.
As a way of describing spatial areas, the concept of regions is important and widely used among the many branches of geography, each of which can describe areas in regional terms. For example, ecoregion is a term used in environmental geography, cultural region in cultural geography, bioregion in biogeography, and so on. The field of geography that studies regions themselves is called regional geography. A region is an area or division, especially part of a country or the world, having definable characteristics but not always fixed boundaries.
|
https://en.wikipedia.org/wiki/Region
|
passage: For small values of capacitance (microfarads and less), ceramic disks use metallic coatings, with wire leads bonded to the coating. Larger values can be made by multiple stacks of plates and disks. Larger value capacitors usually use a metal foil or metal film layer deposited on the surface of a dielectric film to make the plates, and a dielectric film of impregnated paper or plasticthese are rolled up to save space. To reduce the series resistance and inductance for long plates, the plates and dielectric are staggered so that connection is made at the common edge of the rolled-up plates, not at the ends of the foil or metalized film strips that comprise the plates.
The assembly is encased to prevent moisture entering the dielectric; early radio equipment used a cardboard tube sealed with wax. Modern paper or film dielectric capacitors are dipped in a hard thermoplastic. Large capacitors for high-voltage use may have the roll form compressed to fit into a rectangular metal case, with bolted terminals and bushings for connections. The dielectric in larger capacitors is often impregnated with a liquid to improve its properties.
Capacitors may have their connecting leads arranged in many configurations, for example axially or radially. "Axial" means that the leads are on a common axis, typically the axis of the capacitor's cylindrical body; the leads extend from opposite ends. Radial leads are rarely aligned along radii of the body's circle, so the term is conventional.
|
https://en.wikipedia.org/wiki/Capacitor
|
passage: Now if each pair of the eigenvalues
$$
(a_n, b_n)
$$
uniquely specifies a state vector of this basis, we claim to have formed a CSCO: the set
$$
\{A, B\}
$$
. The degeneracy in
$$
\hat{A}
$$
is completely removed.
It may so happen, nonetheless, that the degeneracy is not completely lifted. That is, there exists at least one pair
$$
(a_n, b_n)
$$
which does not uniquely identify one eigenvector. In this case, we repeat the above process by adding another observable
$$
C
$$
, which is compatible with both
$$
A
$$
and
$$
B
$$
. If the basis of common eigenfunctions of
$$
\hat{A}
$$
,
$$
\hat{B}
$$
and
$$
\hat{C}
$$
is unique, that is, uniquely specified by the set of eigenvalues
$$
(a_n, b_n, c_n)
$$
, then we have formed a CSCO:
$$
\{A, B, C\}
$$
. If not, we add one more compatible observable and continue the process till a CSCO is obtained.
The same vector space may have distinct complete sets of commuting operators.
Suppose we are given a finite CSCO
$$
\{A, B, C,...,\}
$$
.
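The labeling procedure described above can be illustrated numerically. The sketch below is an assumed example (the matrices A and B are not from the text): A has a degenerate eigenvalue, B commutes with A, and the joint eigenvalue pairs label a common eigenbasis uniquely, so {A, B} is a CSCO.

```python
# Illustrative check that {A, B} is a CSCO on a 3-dimensional space.
# A has the degenerate eigenvalue 1; B commutes with A and splits that eigenspace.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matvec(X, v):
    return [sum(X[i][k] * v[k] for k in range(len(v))) for i in range(len(v))]

A = [[1, 0, 0], [0, 1, 0], [0, 0, 2]]   # eigenvalues 1, 1, 2 (degenerate)
B = [[0, 1, 0], [1, 0, 0], [0, 0, 5]]   # acts within the eigenspaces of A

# Compatibility: the commutator [A, B] vanishes.
AB, BA = matmul(A, B), matmul(B, A)
assert AB == BA

# A common eigenbasis, chosen by hand for this example.
basis = [(1, 1, 0), (1, -1, 0), (0, 0, 1)]

def eigenvalue(M, v):
    """Return the eigenvalue of M on eigenvector v (and verify it is one)."""
    w = matvec(M, list(v))
    i = next(i for i, c in enumerate(v) if c != 0)
    ratio = w[i] / v[i]
    assert all(abs(w[j] - ratio * v[j]) < 1e-12 for j in range(3))
    return ratio

pairs = [(eigenvalue(A, v), eigenvalue(B, v)) for v in basis]
# pairs == [(1.0, 1.0), (1.0, -1.0), (2.0, 5.0)] -- each (a_n, b_n) is distinct
assert len(set(pairs)) == 3
```

Because every pair (a_n, b_n) is distinct, the degeneracy of A alone is completely removed, matching the criterion in the text.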
|
https://en.wikipedia.org/wiki/Complete_set_of_commuting_observables
|
passage: Nyquist and Bode built on Black's work to develop a theory of amplifier stability.
Early researchers in the area of cybernetics subsequently generalized the idea of negative feedback to cover any goal-seeking or purposeful behavior.
Cybernetics pioneer Norbert Wiener helped to formalize the concepts of feedback control, defining feedback in general as "the chain of the transmission and return of information", and negative feedback as the case when:
While the view of feedback as any "circularity of action" helped to keep the theory simple and consistent, Ashby noted that, although it may clash with definitions that require a "materially evident" connection, "the exact definition of feedback is nowhere important". He also pointed out the limitations of the concept of "feedback":
To reduce confusion, later authors have suggested alternative terms such as degenerative, self-correcting, balancing, or discrepancy-reducing in place of "negative".
|
https://en.wikipedia.org/wiki/Negative_feedback
|
passage: Such disorders include cryptorchidism (undescended testes), congenital abnormalities of the genitourinary tract, enuresis, underdeveloped genitalia (due to delayed growth or delayed puberty, often an endocrinological problem), and vesicoureteral reflux.
### Andrology
Andrology is the medical specialty that deals with male health, particularly relating to the problems of the male reproductive system and urological problems that are unique to men such as prostate cancer, male fertility problems, and surgery of the male reproductive system. It is the counterpart to gynaecology, which deals with medical issues that are specific to female health, especially reproductive and urologic health.
### Reconstructive urology
Reconstructive urology is a highly specialized field of male urology that restores both structure and function to the genitourinary tract. Prostate procedures, full or partial hysterectomies, trauma (auto accidents, gunshot wounds, industrial accidents, straddle injuries, etc.), disease, obstructions, blockages (e.g., urethral strictures), and occasionally, childbirth, can necessitate reconstructive surgery. The urinary bladder, ureters (the tubes that lead from the kidneys to the urinary bladder) and genitalia are other examples of reconstructive urology.
### Female urology
Female urology is a branch of urology dealing with overactive bladder, pelvic organ prolapse, and urinary incontinence. Many of these physicians also practice neurourology and reconstructive urology as mentioned above.
|
https://en.wikipedia.org/wiki/Urology
|
passage: ### In deformable bodies and fluids
#### Conservation in a continuum
In fields such as fluid dynamics and solid mechanics, it is not feasible to follow the motion of individual atoms or molecules. Instead, the materials must be approximated by a continuum in which, at each point, there is a particle or fluid parcel that is assigned the average of the properties of atoms in a small region nearby. In particular, it has a density ρ and velocity v that depend on time t and position r. The momentum per unit volume is ρv.
Consider a column of water in hydrostatic equilibrium. All the forces on the water are in balance and the water is motionless. On any given drop of water, two forces are balanced. The first is gravity, which acts directly on each atom and molecule inside. The gravitational force per unit volume is ρg, where g is the gravitational acceleration. The second force is the sum of all the forces exerted on its surface by the surrounding water. The force from below is greater than the force from above by just the amount needed to balance gravity. The normal force per unit area is the pressure p. The average surface force per unit volume inside the droplet is −∇p, the negative of the pressure gradient, so the force balance equation is
$$
-\nabla p +\rho \mathbf{g} = 0\,.
$$
If the forces are not balanced, the droplet accelerates. This acceleration is not simply the partial derivative ∂v/∂t, because the fluid in a given volume changes with time.
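The force balance above can be checked with a quick finite-difference calculation. This is a sketch with standard water values (ρ = 1000 kg/m³, g = 9.81 m/s²) and an assumed 10 m column; the linear pressure profile is the hydrostatic solution being verified.

```python
# Finite-difference check of hydrostatic balance  -dp/dz + rho*g_z = 0
# for a water column, with z measured upward so g_z = -g.

rho, g = 1000.0, 9.81            # density of water (kg/m^3), gravity (m/s^2)
p0, H, n = 101325.0, 10.0, 1000  # surface pressure (Pa), column height (m), grid
dz = H / n
z = [i * dz for i in range(n + 1)]
p = [p0 + rho * g * (H - zi) for zi in z]    # hydrostatic pressure profile

# Residual of -dp/dz + rho*g_z at interior points (central differences).
residuals = [-(p[i + 1] - p[i - 1]) / (2 * dz) + rho * (-g)
             for i in range(1, n)]
assert all(abs(r) < 1e-6 for r in residuals)
```

The residual vanishes (up to rounding) at every interior point, confirming that the pressure gradient exactly balances gravity.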
|
https://en.wikipedia.org/wiki/Momentum
|
passage: To see that no two of the numbers can occupy the same position (as a single number), suppose to the contrary that
$$
j/r = k/s
$$
for some j and k. Then
$$
r/s
$$
=
$$
j/k
$$
, a rational number, but also,
$$
r/s = r(1 - 1/r) = r - 1,
$$
not a rational number. Therefore, no two of the numbers occupy the same position.
For any
$$
j/r
$$
, there are
$$
j
$$
positive integers
$$
i
$$
such that
$$
i/r \le j/r
$$
and
$$
\lfloor js/r \rfloor
$$
positive integers
$$
k
$$
such that
$$
k/s \le j/r
$$
, so that the position of
$$
j/r
$$
in the list is
$$
j + \lfloor js/r \rfloor
$$
. The equation
$$
1/r + 1/s = 1
$$
implies
$$
j + \lfloor js/r \rfloor = j + \lfloor j(s - 1) \rfloor = \lfloor js \rfloor.
$$
Likewise, the position of
$$
k/s
$$
in the list is
$$
\lfloor kr \rfloor
$$
.
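The partition argument above is easy to check computationally. The sketch below uses r = √2 as an illustrative irrational (any irrational r > 1 works), with s = r/(r − 1) so that 1/r + 1/s = 1, and also spot-checks the position formula ⌊js⌋.

```python
import math

# Check that the Beatty sequences floor(k*r) and floor(k*s), with
# 1/r + 1/s = 1 and r irrational, partition the positive integers.
r = math.sqrt(2)        # illustrative irrational r > 1
s = r / (r - 1)         # complementary value: 1/r + 1/s = 1

N = 1000
beatty_r = {math.floor(k * r) for k in range(1, N)}
beatty_s = {math.floor(k * s) for k in range(1, N)}

limit = math.floor((N - 1) * r)          # both sequences surely reach here
covered = set(range(1, limit + 1))
assert beatty_r & beatty_s == set()      # disjoint
assert (beatty_r | beatty_s) >= covered  # together they cover 1..limit

# The position of j/r in the merged sorted list of all i/r and k/s
# is floor(j*s), as derived above.
values = sorted([i / r for i in range(1, N)] + [k / s for k in range(1, N)])
j = 7
pos = values.index(j / r) + 1            # 1-based position
assert pos == math.floor(j * s)
```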
|
https://en.wikipedia.org/wiki/Beatty_sequence
|
passage: Therefore, the problem of mapping inputs to outputs can be reduced to an optimization problem of finding a function that will produce the minimal error.
However, the output of a neuron depends on the weighted sum of all its inputs:
$$
y=x_1w_1 + x_2w_2,
$$
where
$$
w_1
$$
and
$$
w_2
$$
are the weights on the connection from the input units to the output unit. Therefore, the error also depends on the incoming weights to the neuron, which is ultimately what needs to be changed in the network to enable learning.
In this example, upon injecting the training data
$$
(1, 1, 0)
$$
, the loss function becomes
$$
E = (t-y)^2 = y^2 = (x_1w_1 + x_2w_2)^2 = (w_1 + w_2)^2.
$$
Then, the loss function
$$
E
$$
takes the form of a parabolic cylinder with its base directed along
$$
w_1 = -w_2
$$
. Since all sets of weights that satisfy
$$
w_1 = -w_2
$$
minimize the loss function, in this case additional constraints are required to converge to a unique solution. Additional constraints could either be generated by setting specific conditions to the weights, or by injecting additional training data.
One commonly used algorithm to find the set of weights that minimizes the error is gradient descent. By backpropagation, the direction of steepest descent of the loss function with respect to the present synaptic weights is calculated.
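The parabolic-cylinder loss surface described above can be explored in a few lines. In this sketch the learning rate and starting weights are arbitrary choices; the point is that gradient descent reaches the minimum but the final weights depend on where it started, since every point on w1 = −w2 minimizes E.

```python
# Gradient descent on E(w1, w2) = (w1 + w2)^2, the loss from the example.

def loss(w1, w2):
    return (w1 + w2) ** 2

def grad(w1, w2):
    # dE/dw1 = dE/dw2 = 2 * (w1 + w2)
    g = 2 * (w1 + w2)
    return g, g

w1, w2 = 0.9, 0.4      # arbitrary initial weights
lr = 0.1               # arbitrary learning rate
for _ in range(100):
    g1, g2 = grad(w1, w2)
    w1 -= lr * g1
    w2 -= lr * g2

assert loss(w1, w2) < 1e-12    # converged to the minimum...
assert abs(w1 - w2) > 0.1      # ...but not to a unique point: w1 != -w2 = w2
```

Note that w1 − w2 is unchanged by the updates (both gradients are equal), which is exactly the non-uniqueness the text attributes to the degenerate loss surface.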
|
https://en.wikipedia.org/wiki/Backpropagation
|
passage: Consequently, if
$$
t_n = 0
$$
then
$$
t_{n+1} = 0
$$
and
$$
(t_{n+1} < f \leq t_n) = \varnothing
$$
so that
$$
f_n = \frac{1}{r_n} \, f \,\mathbf{1}_{(t_{n+1} < f \leq t_n)}
$$
is identically equal to
$$
0
$$
(in particular, the division
$$
\tfrac{1}{r_n}
$$
by
$$
r_n = 0
$$
causes no issues).
|
https://en.wikipedia.org/wiki/Lp_space
|
passage: ## Outline of the proof of the Calabi conjecture
Calabi transformed the Calabi conjecture into a non-linear partial differential equation of complex Monge–Ampère type, and showed that this equation has at most one solution, thus establishing the uniqueness of the required Kähler metric.
Yau proved the Calabi conjecture by constructing a solution of this equation using the continuity method. This involves first solving an easier equation, and then showing that a solution to the easy equation can be continuously deformed to a solution of the hard equation. The hardest part of Yau's solution is proving certain a priori estimates for the derivatives of solutions.
### Transformation of the Calabi conjecture to a differential equation
Suppose that
$$
M
$$
is a complex compact manifold with a Kähler form
$$
\omega
$$
.
By the -lemma, any other Kähler form in the same de Rham cohomology class is of the form
$$
\omega+dd'\varphi
$$
for some smooth function
$$
\varphi
$$
on
$$
M
$$
, unique up to addition of a constant. The Calabi conjecture is therefore equivalent to the following problem:
Let
$$
F=e^f
$$
be a positive smooth function on
$$
M
$$
with average value 1.
|
https://en.wikipedia.org/wiki/Calabi_conjecture
|
passage: Substituting equations (2) and (5) in the
$$
\beta
$$
equation and equations (3) and (6) in the
$$
\gamma
$$
equation gives
$$
h=4\overline{AB}\sin\beta\cdot\frac{\overline{DX}}{\overline{XE}}\cdot\frac{\overline{AC}}{\overline{AY}}\sin\gamma
$$
and
$$
h=4\overline{AC}\sin\gamma\cdot\frac{\overline{DX}}{\overline{XF}}\cdot\frac{\overline{AB}}{\overline{AZ}}\sin\beta
$$
Since the numerators are equal
$$
\overline{XE}\cdot\overline{AY}=\overline{XF}\cdot\overline{AZ}
$$
or
$$
\frac{\overline{XE}}{\overline{XF}}=\frac{\overline{AZ}}{\overline{AY}}.
$$
Since angle
$$
EXF
$$
and angle
$$
ZAY
$$
are equal and the sides forming these angles are in the same ratio, triangles
$$
XEF
$$
and
$$
AZY
$$
are similar.
Similar angles
$$
AYZ
$$
and
$$
XFE
$$
equal
$$
(60^\circ+\gamma)
$$
, and similar angles
$$
AZY
$$
and
$$
XEF
$$
equal
$$
(60^\circ+\beta).
$$
Similar arguments yield the base angles of triangles
$$
BXZ
$$
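The conclusion of Morley's theorem, that the pairwise intersections of adjacent angle trisectors form an equilateral triangle, can be verified numerically. The triangle below is an arbitrary choice (vertices listed counterclockwise, which fixes the rotation signs used).

```python
import cmath

# Numeric check of Morley's theorem on one arbitrary triangle.
A, B, C = 0 + 0j, 4 + 0j, 1.2 + 2.8j    # counterclockwise vertices

def angle_at(P, Q, R):
    """Interior angle of triangle PQR at vertex P."""
    return abs(cmath.phase((Q - P) / (R - P)))

def intersect(p, dp, q, dq):
    """Intersection of the lines p + t*dp and q + u*dq."""
    t = ((q - p).conjugate() * dq).imag / (dp.conjugate() * dq).imag
    return p + t * dp

def morley_vertex(P, aP, Q, aQ):
    """Meet of the trisectors from P and Q adjacent to side PQ."""
    dP = (Q - P) * cmath.exp(1j * aP / 3)    # rotate P->Q into the interior
    dQ = (P - Q) * cmath.exp(-1j * aQ / 3)   # rotate Q->P into the interior
    return intersect(P, dP, Q, dQ)

alpha, beta, gamma = angle_at(A, B, C), angle_at(B, C, A), angle_at(C, A, B)
X = morley_vertex(A, alpha, B, beta)    # adjacent to side AB
Y = morley_vertex(B, beta, C, gamma)    # adjacent to side BC
Z = morley_vertex(C, gamma, A, alpha)   # adjacent to side CA

sides = [abs(X - Y), abs(Y - Z), abs(Z - X)]
assert max(sides) - min(sides) < 1e-9   # equilateral up to rounding
```

The three side lengths agree to machine precision, as the theorem predicts for any triangle.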
|
https://en.wikipedia.org/wiki/Morley%27s_trisector_theorem
|
passage: Let
$$
\mathcal{B}
$$
be the category of finite sets, with the morphisms of the category being the bijections between these sets. A species is a functor
$$
F\colon \mathcal{B} \to \mathcal{B}.
$$
For each finite set A in
$$
\mathcal{B}
$$
, the finite set F[A] is called the set of F-structures on A, or the set of structures of species F on A. Further, by the definition of a functor, if φ is a bijection between sets A and B, then F[φ] is a bijection between the sets of F-structures F[A] and F[B], called transport of F-structures along φ.
For example, the "species of permutations" maps each finite set A to the set S[A] of all permutations of A (all ways of ordering A into a list), and each bijection f from A to another set B naturally induces a bijection (a relabeling) taking each permutation of A to a corresponding permutation of B, namely a bijection
$$
S[f]:S[A]\to S[B]
$$
. Similarly, the "species of partitions" can be defined by assigning to each finite set the set of all its partitions, and the "power set species" assigns to each finite set its power set.
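The species of permutations and its transport of structures can be sketched directly. The function names below are illustrative, not standard notation; the check confirms functoriality on a bijection φ: transported F-structures on A are exactly the F-structures on B.

```python
from itertools import permutations

# The species of permutations S: a finite set A is sent to the set S[A] of
# orderings of A; a bijection phi: A -> B is sent to the relabeling S[phi].

def S(A):
    """S[A]: all orderings (permutations) of the finite set A, as tuples."""
    return set(permutations(A))

def transport(phi, structure):
    """S[phi]: relabel a permutation of A along the bijection phi."""
    return tuple(phi[x] for x in structure)

A = {1, 2, 3}
B = {'a', 'b', 'c'}
phi = {1: 'a', 2: 'b', 3: 'c'}    # a bijection A -> B

# Transport carries S[A] bijectively onto S[B].
image = {transport(phi, s) for s in S(A)}
assert image == S(B)
assert len(image) == len(S(A)) == 6
```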
|
https://en.wikipedia.org/wiki/Combinatorial_species
|
passage: Such relations are used in social choice theory or microeconomics.
Proposition: If R is a univalent, then R;RT is transitive.
proof: Suppose
$$
x R;R^T y R;R^T z.
$$
Then there are a and b such that
$$
x R a R^T y R b R^T z .
$$
Since aRTy means yRa, and R is univalent, yRa and yRb imply a=b. Therefore xRaRTz, hence xR;RTz, and R;RT is transitive.
Corollary: If R is univalent, then R;RT is an equivalence relation on the domain of R.
proof: R;RT is symmetric and reflexive on its domain. With univalence of R, the transitive requirement for equivalence is fulfilled.
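The proposition and corollary can be checked on a small finite relation. The relation R below is an arbitrary univalent example (each element related to at most one element); the composition and converse operations are implemented directly from their definitions.

```python
# Check: R univalent implies R;R^T is transitive, and an equivalence
# relation on the domain of R.

R = {(1, 'a'), (2, 'a'), (3, 'b'), (4, 'c')}    # univalent: one image each

def converse(rel):
    return {(y, x) for (x, y) in rel}

def compose(rel1, rel2):
    """x (rel1;rel2) z  iff  x rel1 y and y rel2 z for some y."""
    return {(x, z) for (x, y1) in rel1 for (y2, z) in rel2 if y1 == y2}

E = compose(R, converse(R))    # R;R^T

domain = {x for (x, _) in R}
assert all((x, x) in E for x in domain)              # reflexive on domain
assert all((y, x) in E for (x, y) in E)              # symmetric
assert all((x, z) in E                               # transitive
           for (x, y) in E for (y2, z) in E if y == y2)
```

Here E relates 1 and 2 (both map to 'a') and fixes 3 and 4, so its equivalence classes are {1, 2}, {3}, {4}.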
|
https://en.wikipedia.org/wiki/Transitive_relation
|
passage: For this reason, commutative superalgebras are often called supercommutative in order to avoid confusion.
## Sign conventions
When the Z2 grading arises as a "rollup" of a Z- or N-graded algebra into even and odd components, then two distinct (but essentially equivalent) sign conventions can be found in the literature. These can be called the "cohomological sign convention" and the "super sign convention". They differ in how the antipode (exchange of two elements) behaves. In the first case, one has an exchange map
$$
xy\mapsto (-1)^{mn+pq} yx
$$
where
$$
m=\deg x
$$
is the degree (Z- or N-grading) of
$$
x
$$
and
$$
p
$$
the parity. Likewise,
$$
n=\deg y
$$
is the degree of
$$
y
$$
and with parity
$$
q.
$$
This convention is commonly seen in conventional mathematical settings, such as differential geometry and differential topology. The other convention is to take
$$
xy\mapsto (-1)^{pq} yx
$$
with the parities given as
$$
p=m\bmod 2
$$
and
$$
q=n\bmod 2
$$
the parity. This is more often seen in physics texts, and requires a parity functor to be judiciously employed to track isomorphisms. Detailed arguments are provided by Pierre Deligne
|
https://en.wikipedia.org/wiki/Superalgebra
|
passage: This property follows from the fact that all pieces have the same continuity properties, within their individual range of support, at the knots.
Expressions for the polynomial pieces can be derived by means of the Cox–de Boor recursion formula
$$
B_{i,0}(x) := \begin{cases}
1 & \text{if } t_i \leq x < t_{i+1}, \\
0 & \text{otherwise}.
\end{cases}
$$
$$
B_{i,k}(x) := \frac{x - t_i}{t_{i+k} - t_i} B_{i,k-1}(x) + \frac{t_{i+k+1} - x}{t_{i+k+1} - t_{i+1}} B_{i+1,k-1}(x).
$$
That is,
$$
B_{j,0}(x)
$$
is piecewise constant one or zero indicating which knot span x is in (zero if knot span j is repeated). The recursion equation is in two parts:
$$
\frac{x - t_i}{t_{i+k} - t_i}
$$
ramps from zero to one as x goes from
$$
t_i
$$
to
$$
t_{i+k}
$$
, and
$$
\frac{t_{i+k+1} - x}{t_{i+k+1} - t_{i+1}}
$$
ramps from one to zero as x goes from
$$
t_{i+1}
$$
to
$$
t_{i+k+1}
$$
.
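The Cox–de Boor recursion translates directly into code. The sketch below guards against the repeated-knot case (where a denominator vanishes, the corresponding term is taken as zero, matching the "zero if knot span j is repeated" convention) and checks the partition-of-unity property on an assumed uniform knot vector.

```python
def bspline_basis(i, k, x, t):
    """Cox-de Boor recursion for the B-spline basis function B_{i,k}(x)
    over the knot vector t, following the formulas above."""
    if k == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = right = 0.0
    if t[i + k] != t[i]:            # skip terms whose denominator vanishes
        left = (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, x, t)
    if t[i + k + 1] != t[i + 1]:
        right = ((t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1])
                 * bspline_basis(i + 1, k - 1, x, t))
    return left + right

# Uniform knots; the cubic (k = 3) basis functions sum to one on [t_3, t_4],
# where a full set of overlapping functions is defined (partition of unity).
t = [0, 1, 2, 3, 4, 5, 6, 7]
x = 3.5
total = sum(bspline_basis(i, 3, x, t) for i in range(len(t) - 4))
assert abs(total - 1.0) < 1e-12
```

At x = 3.5 the four cubic basis values are 1/48, 23/48, 23/48, 1/48, the familiar uniform cubic B-spline weights.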
|
https://en.wikipedia.org/wiki/B-spline
|
passage: For example, this happens near for the continuous function defined by for and otherwise. Whenever this happens, the above expression is undefined because it involves division by zero. To work around this, introduce a function
$$
Q
$$
as follows:
$$
Q(y) = \begin{cases}
\displaystyle\frac{f(y) - f(g(a))}{y - g(a)}, & y \neq g(a), \\
f'(g(a)), & y = g(a).
\end{cases}
$$
We will show that the difference quotient for is always equal to:
$$
Q(g(x)) \cdot \frac{g(x) - g(a)}{x - a}.
$$
Whenever g(x) is not equal to g(a), this is clear because the factors of g(x) − g(a) cancel. When g(x) equals g(a), then the difference quotient for f ∘ g is zero because f(g(x)) equals f(g(a)), and the above product is zero because it equals f′(g(a)) times zero. So the above product is always equal to the difference quotient, and to show that the derivative of f ∘ g at a exists and to determine its value, we need only show that the limit as x goes to a of the above product exists and determine its value.
To do this, recall that the limit of a product exists if the limits of its factors exist. When this happens, the limit of the product of these two factors will equal the product of the limits of the factors. The two factors are Q(g(x)) and (g(x) − g(a))/(x − a). The latter is the difference quotient for g at a, and because g is differentiable at a by assumption, its limit as x tends to a exists and equals g′(a).
|
https://en.wikipedia.org/wiki/Chain_rule
|
passage: $$
ultimately gives:
$$
\left( \frac{\partial \left( \ln X_2 \right) } {\partial T} \right) = \frac {\Delta H^\circ_\text{fus}} {RT^2}
$$
or:
$$
\mathrm{d} \ln X_2 = \frac {\Delta H^\circ_\text{fus}} {RT^2} \,\mathrm{d}T
$$
and with integration:
$$
\int^{X_2=x_2}_{X_2 = 1} \mathrm{d} \ln X_2 = \ln x_2 = \int_{T_\text{fus}}^T \frac {\Delta H^\circ_\text{fus}} {RT^2} \,\mathrm{d}T
$$
the result is obtained:
$$
\ln x_2 = - \frac {\Delta H^\circ_\text{fus}} {R}\left(\frac{1}{T}- \frac{1}{T_\text{fus}}\right)
$$
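The closed form can be cross-checked against a direct numerical integration of the differential relation. The data values below are illustrative assumptions (roughly naphthalene: ΔH°fus ≈ 19.1 kJ/mol, Tfus ≈ 353.4 K), and ΔH°fus is taken constant over the range, as the derivation assumes.

```python
import math

# Compare the closed form  ln x2 = -(dHfus/R)(1/T - 1/Tfus)  with a
# trapezoid-rule integration of  d(ln X2) = dHfus/(R T'^2) dT'.

R = 8.314            # gas constant, J/(mol K)
dHfus = 19.1e3       # enthalpy of fusion, J/mol (illustrative, held constant)
T_fus, T = 353.4, 298.15

ln_x2 = -(dHfus / R) * (1.0 / T - 1.0 / T_fus)    # closed form

n = 100000
h = (T - T_fus) / n
ts = [T_fus + i * h for i in range(n + 1)]
f = [dHfus / (R * tt ** 2) for tt in ts]
integral = h * (sum(f) - 0.5 * (f[0] + f[-1]))    # trapezoid rule

assert abs(integral - ln_x2) < 1e-8
x2 = math.exp(ln_x2)       # ideal solubility (mole fraction) at T
assert 0.2 < x2 < 0.4      # about 0.30 with these inputs
```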
|
https://en.wikipedia.org/wiki/Enthalpy_of_fusion
|
passage: The FETI-DP method is hybrid between a dual and a primal method.
Non-overlapping domain decomposition methods are also called iterative substructuring methods.
Mortar methods are discretization methods for partial differential equations, which use separate discretization on nonoverlapping subdomains. The meshes on the subdomains do not match on the interface, and the equality of the solution is enforced by Lagrange multipliers, judiciously chosen to preserve the accuracy of the solution. In the engineering practice in the finite element method, continuity of solutions between non-matching subdomains is implemented by multiple-point constraints.
Finite element simulations of moderate size models require solving linear systems with millions of unknowns. Several hours per time step is an average sequential run time, therefore, parallel computing is a necessity. Domain decomposition methods embody large potential for a parallelization of the finite element methods, and serve a basis for distributed, parallel computations.
## Example 1: 1D Linear BVP
$$
u''(x)-u(x)=0
$$
$$
u(0)=0, u(1)=1
$$
The exact solution is:
$$
u(x)=\frac{e^x-e^{-x}}{e^{1}-e^{-1}}
$$
Subdivide the domain into two subdomains, one from
$$
\left[0,\frac{1}{2}\right]
$$
and another from
$$
\left[\frac{1}{2},1\right]
$$
.
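A minimal domain decomposition solve of this boundary value problem can be sketched as follows. The text's subdomains touch at 1/2; the classical alternating Schwarz iteration needs overlap, so this sketch widens them to [0, 0.6] and [0.4, 1] for illustration, solving each subdomain problem by finite differences (Thomas algorithm).

```python
import math

# Alternating Schwarz for u'' - u = 0, u(0) = 0, u(1) = 1, on the
# overlapping subdomains [0, 0.6] and [0.4, 1].

def solve_fd(xl, xr, ul, ur, n=200):
    """Finite-difference solve of u'' - u = 0 with Dirichlet data."""
    h = (xr - xl) / n
    # Interior unknowns u_1..u_{n-1}: (u_{i-1} - 2u_i + u_{i+1})/h^2 - u_i = 0
    a = [1.0 / h**2] * (n - 1)            # sub-diagonal
    b = [-2.0 / h**2 - 1.0] * (n - 1)     # diagonal
    c = [1.0 / h**2] * (n - 1)            # super-diagonal
    d = [0.0] * (n - 1)
    d[0] -= ul / h**2
    d[-1] -= ur / h**2
    for i in range(1, n - 1):             # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    u = [0.0] * (n - 1)
    u[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):        # back substitution
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    xs = [xl + (i + 1) * h for i in range(n - 1)]
    return [xl] + xs + [xr], [ul] + u + [ur]

def interp(xs, us, x):
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return (1 - t) * us[i] + t * us[i + 1]

exact = lambda x: (math.exp(x) - math.exp(-x)) / (math.e - 1 / math.e)

g = 0.0                                   # initial guess for u(0.6)
for _ in range(30):                       # alternating Schwarz sweeps
    x1, u1 = solve_fd(0.0, 0.6, 0.0, g)
    x2, u2 = solve_fd(0.4, 1.0, interp(x1, u1, 0.4), 1.0)
    g = interp(x2, u2, 0.6)

assert abs(g - exact(0.6)) < 1e-3         # matches the exact solution
```

Each sweep passes interface values between the subdomains, and the iterates converge geometrically to the exact solution given above.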
|
https://en.wikipedia.org/wiki/Domain_decomposition_methods
|
passage: The children of a coefficient are only scanned if the coefficient was found to be significant, or if the coefficient was an isolated zero. The subordinate pass emits one bit (the most significant bit of each coefficient not so far emitted) for each coefficient which has been found significant in the previous significance passes. The subordinate pass is therefore similar to bit-plane coding.
There are several important features to note. Firstly, it is possible to stop the compression algorithm at any time and obtain an approximation of the original image; the greater the number of bits received, the better the approximation. Secondly, because the compression algorithm is structured as a series of decisions, the same algorithm can be run at the decoder to reconstruct the coefficients, with the decisions taken according to the incoming bit stream. In practical implementations, it is usual to use an entropy code such as an arithmetic code to further improve the performance of the dominant pass. Bits from the subordinate pass are usually random enough that entropy coding provides no further coding gain.
The coding performance of EZW has since been exceeded by SPIHT and its many derivatives.
## Introduction
The embedded zerotree wavelet (EZW) algorithm, as developed by J. Shapiro in 1993, enables scalable image transmission and decoding.
|
https://en.wikipedia.org/wiki/Embedded_zerotrees_of_wavelet_transforms
|
passage: New techniques such as proton beam therapy and carbon ion radiotherapy which aim to reduce dose to healthy tissues will lower these risks. It starts to occur 4–6 years following treatment, although some haematological malignancies may develop within 3 years. In the vast majority of cases, this risk is greatly outweighed by the reduction in risk conferred by treating the primary cancer even in pediatric malignancies which carry a higher burden of secondary malignancies.
Cardiovascular disease
Radiation can increase the risk of heart disease and death as observed in previous breast cancer RT regimens. Therapeutic radiation increases the risk of a subsequent cardiovascular event (i.e., heart attack or stroke) by 1.5 to 4 times a person's normal rate, aggravating factors included. The increase is dose dependent, related to the RT's dose strength, volume and location. Use of concomitant chemotherapy, e.g. anthracyclines, is an aggravating risk factor. The occurrence rate of RT induced cardiovascular disease is estimated between 10 and 30%.
Cardiovascular late side effects have been termed radiation-induced heart disease (RIHD) and radiation-induced cardiovascular disease (RIVD). Symptoms are dose dependent and include cardiomyopathy, myocardial fibrosis, valvular heart disease, coronary artery disease, heart arrhythmia and peripheral artery disease. Radiation-induced fibrosis, vascular cell damage and oxidative stress can lead to these and other late side effect symptoms.
|
https://en.wikipedia.org/wiki/Radiation_therapy
|
passage: ## Notation in vector calculus
Vector calculus concerns differentiation and integration of vector or scalar fields. Several notations specific to the case of three-dimensional Euclidean space are common.
Assume that is a given Cartesian coordinate system, that A is a vector field with components
$$
\mathbf{A} = (A_x, A_y, A_z)
$$
, and that
$$
\varphi = \varphi(x,y,z)
$$
is a scalar field.
The differential operator introduced by William Rowan Hamilton, written ∇ and called del or nabla, is symbolically defined in the form of a vector,
$$
\nabla = \left( \frac{\partial}{\partial x}, \frac{\partial}{\partial y}, \frac{\partial}{\partial z} \right)\!,
$$
where the terminology symbolically reflects that the operator ∇ will also be treated as an ordinary vector.
- Gradient: The gradient
$$
\mathrm{grad\,} \varphi
$$
of the scalar field
$$
\varphi
$$
is a vector, which is symbolically expressed by the multiplication of ∇ and scalar field ,
$$
\begin{align}
\operatorname{grad} \varphi &= \left( \frac{\partial \varphi}{\partial x}, \frac{\partial \varphi}{\partial y}, \frac{\partial \varphi}{\partial z} \right) = \nabla \varphi
\end{align}
$$
|
https://en.wikipedia.org/wiki/Notation_for_differentiation
|
passage: In the case of systems that insist certain columns should not be empty, one may work around the problem by designating a value that indicates "unknown" or "missing", but the supplying of default values does not imply that the data has been made complete.)
- Consistency: The degree to which a set of measures are equivalent across systems. (See also Consistency). Inconsistency occurs when two data items in the data set contradict each other: e.g., a customer is recorded in two different systems as having two different current addresses, and only one of them can be correct. Fixing inconsistency is not always possible: it requires a variety of strategies - e.g., deciding which data were recorded more recently, which data source is likely to be most reliable (the latter knowledge may be specific to a given organization), or simply trying to find the truth by testing both data items (e.g., calling up the customer).
- Uniformity: The degree to which a set of data measures are specified using the same units of measure in all systems. (See also Unit of measurement.) In datasets pooled from different locales, weight may be recorded either in pounds or kilos and must be converted to a single measure using an arithmetic transformation.
The term integrity encompasses accuracy, consistency and some aspects of validation (see also Data integrity) but is rarely used by itself in data-cleansing contexts because it is insufficiently specific. (For example, "referential integrity" is a term used to refer to the enforcement of foreign-key constraints above.)
|
https://en.wikipedia.org/wiki/Data_cleansing
|
passage: It is further noted, however, that whatever inspiration Turing might be able to lend in this direction depends upon the preservation of his original vision, which is to say, further, that the promulgation of a "standard interpretation" of the Turing test—i.e., one which focuses on a discursive intelligence only—must be regarded with some caution.
## Weaknesses
Turing did not explicitly state that the Turing test could be used as a measure of "intelligence", or any other human quality. He wanted to provide a clear and understandable alternative to the word "think", which he could then use to reply to criticisms of the possibility of "thinking machines" and to suggest ways that research might move forward.
Nevertheless, the Turing test has been proposed as a measure of a machine's "ability to think" or its "intelligence". This proposal has received criticism from both philosophers and computer scientists. The interpretation makes the assumption that an interrogator can determine if a machine is "thinking" by comparing its behaviour with human behaviour. Every element of this assumption has been questioned: the reliability of the interrogator's judgement, the value of comparing the machine with a human, and the value of comparing only behaviour. Because of these and other considerations, some AI researchers have questioned the relevance of the test to their field.
### Naïveté of interrogators
In practice, the test's results can easily be dominated not by the computer's intelligence, but by the attitudes, skill, or naïveté of the questioner.
|
https://en.wikipedia.org/wiki/Turing_test
|
passage: Finally the third method, known as k-refinement (without a counterpart in FEA), derives from the preceding two techniques, i.e. combines the order elevation with the insertion of a unique knot in
$$
\Xi
$$
.
## References
## External links
- GeoPDEs: a free software tool for Isogeometric Analysis based on Octave
- MIG(X)FEM: a free Matlab code for IGA (FEM and extended FEM)
- PetIGA: A framework for high-performance Isogeometric Analysis based on PETSc
- G+Smo (Geometry plus Simulation modules): a C++ library for isogeometric analysis, aiming at the seamless integration of Computer-aided Design (CAD) and Finite Element Analysis (FEA), maintained by an open-source community of contributors.
- FEAP: a general purpose finite element analysis program which is designed for research and educational use, developed at University of California, Berkeley
- Bembel: An open-source isogeometric boundary element library for Laplace, Helmholtz, and Maxwell problems written in C++
- T.J.R. Hughes, J.A. Cottrell, Y. Bazilevs: "Isogeometric analysis: CAD, finite elements, NURBS, exact geometry and mesh refinement", Computer Methods in Applied Mechanics and Engineering, Elsevier, 2005, 194 (39-41), pp.4135-4195.
Category:Finite element method
Category:Computer-aided design
|
https://en.wikipedia.org/wiki/Isogeometric_analysis
|
passage: Andersson and Reimers (2014) found that employees often do not see themselves as part of the organization Information Security "effort" and often take actions that ignore organizational information security best interests. Research shows information security culture needs to be improved continuously. In Information Security Culture from Analysis to Change, authors commented, "It's a never ending process, a cycle of evaluation and change or maintenance." To manage the information security culture, five steps should be taken: pre-evaluation, strategic planning, operative planning, implementation, and post-evaluation.
- Pre-evaluation: to identify the awareness of information security within employees and to analyze current security policy
- Strategic planning: to come up with a better awareness program, we need to set clear targets. Clustering people is helpful to achieve them
- Operative planning: create a good security culture based on internal communication, management buy-in, security awareness, and training programs
- Implementation: should feature commitment of management, communication with organizational members, courses for all organizational members, and commitment of the employees
- Post-evaluation: to better gauge the effectiveness of the prior steps and build on continuous improvement
|
https://en.wikipedia.org/wiki/Information_security
|
passage: To recap, GCC supports both basic and extended assembly. The former simply passes text verbatim to the assembler, while the latter performs some substitutions for register locations.
```c
extern int errno;
int syscall3(int num, int arg1, int arg2, int arg3)
{
int res;
__asm__ (
"int $0x80" /* make the request to the OS */
: "=a" (res), /* return result in eax ("a") */
"+b" (arg1), /* pass arg1 in ebx ("b") [as a "+" output because the syscall may change it] */
"+c" (arg2), /* pass arg2 in ecx ("c") [ditto] */
"+d" (arg3) /* pass arg3 in edx ("d") [ditto] */
: "a" (num) /* pass system call number in eax ("a") */
: "memory", "cc", /* announce to the compiler that the memory and condition codes have been modified */
"esi", "edi", "ebp"); /* these registers are clobbered [changed by the syscall] too */
/* The operating system will return a negative value on error;
- wrappers return -1 on error and set the errno global variable */
if (-125 <= res && res < 0) {
errno = -res;
res = -1;
}
return res;
}
```
|
https://en.wikipedia.org/wiki/Inline_assembler
|
passage: In this definition, x is not necessarily a real number, but can in general be an element of any vector space. A more special definition of linear function, not coinciding with the definition of linear map, is used in elementary mathematics (see below).
Additivity alone implies homogeneity for rational α, since
$$
f(x+x)=f(x)+f(x)
$$
implies
$$
f(nx)=n f(x)
$$
for any natural number n by mathematical induction, and then
$$
n f(x) = f(nx)=f(m\tfrac{n}{m}x)= m f(\tfrac{n}{m}x)
$$
implies
$$
f(\tfrac{n}{m}x) = \tfrac{n}{m} f(x)
$$
. The density of the rational numbers in the reals implies that any additive continuous function is homogeneous for any real number α, and is therefore linear.
The concept of linearity can be extended to linear operators. Important examples of linear operators include the derivative considered as a differential operator, and other operators constructed from it, such as del and the Laplacian. When a differential equation can be expressed in linear form, it can generally be solved by breaking the equation up into smaller pieces, solving each of those pieces, and summing the solutions.
### Linear polynomials
In a different usage to the above definition, a polynomial of degree 1 is said to be linear, because the graph of a function of that form is a straight line.
|
https://en.wikipedia.org/wiki/Linearity
|
passage: At high frequencies, capacitance approaches a constant value, equal to "geometric" capacitance, determined by the terminals' geometry and dielectric content in the device.
A paper by Steven Laux presents a review of numerical techniques for capacitance calculation. In particular, capacitance can be calculated by a Fourier transform of a transient current in response to a step-like voltage excitation:
$$
C(\omega) = \frac{1}{\Delta V} \int_0^\infty [i(t)-i(\infty)] \cos (\omega t) dt.
$$
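The transform formula can be sanity-checked on an ideal series RC circuit, an assumed example not from the text: the step response is i(t) = (ΔV/R)·exp(−t/(RC)) with i(∞) = 0, for which the cosine transform should reproduce C(ω) = C / (1 + (ωRC)²). Component values below are arbitrary.

```python
import math

# Numerically evaluate C(w) = (1/dV) * integral_0^inf i(t) cos(w t) dt
# for an ideal RC step response and compare with the analytic result.

R, C, dV = 1e3, 1e-6, 1.0        # ohms, farads, volts (arbitrary values)
tau = R * C

def C_of_omega(w, n=200000):
    t_max = 20 * tau             # i(t) has decayed to ~exp(-20) by t_max
    h = t_max / n
    total = 0.0
    for k in range(n + 1):       # trapezoid rule
        t = k * h
        f = (dV / R) * math.exp(-t / tau) * math.cos(w * t)
        total += 0.5 * f if k in (0, n) else f
    return total * h / dV

w = 2 * math.pi * 500            # test frequency: 500 Hz
approx = C_of_omega(w)
expected = C / (1 + (w * tau) ** 2)
assert abs(approx - expected) / expected < 1e-3
```

At low frequency (ωRC ≪ 1) the formula returns the geometric capacitance C itself, consistent with the limit described earlier.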
## Negative capacitance in semiconductor devices
Usually, capacitance in semiconductor devices is positive. However, in some devices and under certain conditions (temperature, applied voltages, frequency, etc.), capacitance can become negative. Non-monotonic behavior of the transient current in response to a step-like excitation has been proposed as the mechanism of negative capacitance. Negative capacitance has been demonstrated and explored in many different types of semiconductor devices.
## Measuring capacitance
A capacitance meter is a piece of electronic test equipment used to measure capacitance, mainly of discrete capacitors. For most purposes and in most cases the capacitor must be disconnected from circuit.
Many DVMs (digital volt meters) have a capacitance-measuring function.
|
https://en.wikipedia.org/wiki/Capacitance
|
passage: However, it is analytic (all Borel sets are also analytic), and complete in the class of analytic sets. For more details see descriptive set theory and the book by A. S. Kechris (see References), especially Exercise (27.2) on page 209, Definition (22.9) on page 169, Exercise (3.4)(ii) on page 14, and on page 196.
It is important to note that, while the Zermelo–Fraenkel axioms (ZF) are sufficient to formalize the construction of
$$
A
$$
, it cannot be proven in ZF alone that
$$
A
$$
is non-Borel. In fact, it is consistent with ZF that
$$
\mathbb{R}
$$
is a countable union of countable sets, so that any subset of
$$
\mathbb{R}
$$
is a Borel set.
Another non-Borel set is an inverse image
$$
f^{-1}[0]
$$
of an infinite parity function
$$
f\colon \{0, 1\}^{\omega} \to \{0, 1\}
$$
. However, this is a proof of existence (via the axiom of choice), not an explicit example.
## Alternative non-equivalent definitions
According to Paul Halmos, a subset of a locally compact Hausdorff topological space is called a Borel set if it belongs to the smallest σ-ring containing all compact sets.
|
https://en.wikipedia.org/wiki/Borel_set
|
passage: About 98% of this dissipation is by marine tidal movement. Dissipation arises as basin-scale tidal flows drive smaller-scale flows which experience turbulent dissipation. This tidal drag creates torque on the moon that gradually transfers angular momentum to its orbit, and a gradual increase in Earth–moon separation. The equal and opposite torque on the Earth correspondingly decreases its rotational velocity. Thus, over geologic time, the moon recedes from the Earth, at about /year, lengthening the terrestrial day.
Day length has increased by about 2 hours in the last 600 million years. Assuming (as a crude approximation) that the deceleration rate has been constant, this would imply that 70 million years ago, day length was on the order of 1% shorter with about 4 more days per year.
### Bathymetry
The shape of the shoreline and the ocean floor changes the way that tides propagate, so there is no simple, general rule that predicts the time of high water from the Moon's position in the sky. Coastal characteristics such as underwater bathymetry and coastline shape mean that individual location characteristics affect tide forecasting; actual high water time and height may differ from model predictions due to the coastal morphology's effects on tidal flow. However, for a given location the relationship between lunar altitude and the time of high or low tide (the lunitidal interval) is relatively constant and predictable, as is the time of high or low tide relative to other points on the same coast.
|
https://en.wikipedia.org/wiki/Tide
|
passage: In modern terms, the Kurosh subgroup theorem is a straightforward corollary of the basic structural results of Bass–Serre theory about groups acting on trees.
## Statement of the theorem
Let
$$
G = A*B
$$
be the free product of groups A and B and let
$$
H \le G
$$
be a subgroup of G. Then there exist a family
$$
(A_i)_{i\in I}
$$
of subgroups
$$
A_i \le A
$$
, a family
$$
(B_j)_{j\in J}
$$
of subgroups
$$
B_j \le B
$$
, families
$$
g_i, i\in I
$$
and
$$
f_j, j\in J
$$
of elements of G, and a subset
$$
X\subseteq G
$$
such that
$$
H=F(X)*(*_{i\in I} g_i A_ig_i^{-1})* (*_{j\in J} f_jB_jf_j^{-1}).
$$
This means that X freely generates a subgroup of G isomorphic to the free group F(X) with free basis X and that, moreover, giAigi−1, fjBjfj−1 and X generate H in G as a free product of the above form.
There is a generalization of this to the case of free products with arbitrarily many factors. Its formulation is:
|
https://en.wikipedia.org/wiki/Kurosh_subgroup_theorem
|
passage: $$
Consider now the finite approximations to the Wallis product, obtained by taking the first
$$
k
$$
terms in the product
$$
p_k = \prod_{n=1}^{k} \frac{2n}{2n - 1}\frac{2n}{2n + 1},
$$
where
$$
p_k
$$
can be written as
$$
\begin{align}
p_k &= \frac{1}{2k + 1} \prod_{n=1}^{k} \frac{(2n)^4}{[(2n)(2n - 1)]^2} \\[6pt]
&= \frac{1}{2k + 1} \cdot \frac{2^{4k} \, (k!)^4}{[(2k)!]^2}.
\end{align}
$$
Substituting Stirling's approximation in this expression (both for
$$
k!
$$
and
$$
(2k)!
$$
) one can deduce (after a short calculation) that
$$
p_k
$$
converges to
$$
\frac{\pi}{2}
$$
as
$$
k \rightarrow \infty
$$
.
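The convergence can be checked directly; a small Python sketch evaluates the finite product p_k (the error decays only like O(1/k), so convergence is slow):

```python
import math

def wallis_partial(k):
    """Finite Wallis product p_k = prod_{n=1}^{k} (2n/(2n-1)) * (2n/(2n+1))."""
    p = 1.0
    for n in range(1, k + 1):
        p *= (2 * n) / (2 * n - 1) * ((2 * n) / (2 * n + 1))
    return p

# p_1 = (2/1)(2/3) = 4/3; large k approaches pi/2 ~ 1.5708.
p = wallis_partial(100_000)
```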
|
https://en.wikipedia.org/wiki/Wallis_product
|
passage: ### Server-side versus DOM-based vulnerabilities
XSS vulnerabilities were originally found in applications that performed all data processing on the server side. User input (including an XSS vector) would be sent to the server, and then sent back to the user as a web page. The need for an improved user experience resulted in popularity of applications that had a majority of the presentation logic (maybe written in JavaScript) working on the client-side that pulled data, on-demand, from the server using AJAX.
As the JavaScript code was also processing user input and rendering it in the web page content, a new sub-class of reflected XSS attacks started to appear that was called DOM-based cross-site scripting. In a DOM-based XSS attack, the malicious data does not touch the web server. Rather, it is being reflected by the JavaScript code, fully on the client side.
An example of a DOM-based XSS vulnerability is the bug found in 2011 in a number of jQuery plugins. Prevention strategies for DOM-based XSS attacks include very similar measures to traditional XSS prevention strategies but implemented in JavaScript code and contained in web pages (i.e. input validation and escaping). Some JavaScript frameworks have built-in countermeasures against this and other types of attack — for example AngularJS.
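The escaping countermeasure is language-agnostic; as an illustration outside the browser, a minimal Python sketch (the `render_comment` helper is hypothetical, not from any framework) shows untrusted input being neutralized before interpolation into markup:

```python
from html import escape

def render_comment(user_input: str) -> str:
    """Treat user input as data, never as markup: escape before interpolation."""
    return f"<p>{escape(user_input, quote=True)}</p>"

# A classic injection payload is rendered inert: angle brackets and
# quotes become HTML entities and the browser never sees a tag.
payload = '<img src=x onerror="alert(1)">'
safe = render_comment(payload)
```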
### Self-XSS
Self-XSS is a form of XSS vulnerability that relies on social engineering in order to trick the victim into executing malicious JavaScript code in their browser.
|
https://en.wikipedia.org/wiki/Cross-site_scripting
|
passage: Key printing methods are extrusion- and inkjet-based printing, stereolithography, selective laser sintering, direct ink writing, and vat photopolymerization. The diversity of soft materials for 3D/4D printing includes elastomers, hydrogels, bio-inspired polymers, conductive and flexible materials, and inkjet-based biomimetic materials, with applications in biomedical engineering, soft robotics, wearable devices, textiles, food technology, and pharmaceuticals. Challenges and limitations remain in geometric design complexity, cost, resolution, material compatibility, scalability, and regulatory concerns.
Foams can naturally occur, such as the head on a beer, or be created intentionally, such as by fire extinguishers. The physical properties available to foams have resulted in applications which can be based on their viscosity, with more rigid and self-supporting forms of foams being used as insulation or cushions, and foams that exhibit the ability to flow being used in the cosmetic industry as shampoos or makeup. Foams have also found biomedical applications in tissue engineering as scaffolds and biosensors.
Historically the problems considered in the early days of soft matter science were those pertaining to the biological sciences. As such, an important application of soft matter research is biophysics, with a major goal of the discipline being the reduction of the field of cell biology to the concepts of soft matter physics. Applications of soft matter characteristics are used to understand biologically relevant topics such as membrane mobility, as well as the rheology of blood.
|
https://en.wikipedia.org/wiki/Soft_matter
|
passage: The above pairwise algorithm corresponds to b = 2 for every stage except for the last stage which is b = N.
Dalton, Wang & Blainey (2014) describe a iterative, "shift-reduce" formulation for pairwise summation. It can be unrolled and sped up using SIMD instructions. The non-unrolled version is:
```c
double shift_reduce_sum(double *x, size_t n) {
    double stack[64], v;
    size_t p = 0;
    for (size_t i = 0; i < n; ++i) {
        v = x[i];                                /* shift */
        for (size_t b = 1; i & b; b <<= 1, --p)  /* reduce */
            v += stack[p - 1];
        stack[p++] = v;
    }
    double sum = 0.0;
    while (p)
        sum += stack[--p];
    return sum;
}
```
## Accuracy
Suppose that one is summing n values xi, for i = 1, ..., n. The exact sum is:
$$
S_n = \sum_{i=1}^n x_i
$$
(computed with infinite precision).
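For comparison, a Python transliteration of the shift-reduce scheme above (a sketch, not Dalton et al.'s implementation) can be checked against `math.fsum`, which computes the exact rounded sum:

```python
import math

def shift_reduce_sum(xs):
    """Pairwise ("shift-reduce") summation: each element is shifted onto a
    stack, and completed subtrees are reduced following the binary
    representation of the index, mirroring the C version above."""
    stack = []
    for i, v in enumerate(xs):
        b = 1
        while i & b:                 # reduce completed subtrees
            v += stack.pop()
            b <<= 1
        stack.append(v)              # shift
    return sum(reversed(stack))      # fold the remaining partial sums

xs = [0.1] * 10**5
pairwise = shift_reduce_sum(xs)
naive = 0.0
for v in xs:                         # sequential summation, error ~ O(n*eps)
    naive += v
# pairwise error grows only ~ O(log n * eps)
```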
|
https://en.wikipedia.org/wiki/Pairwise_summation
|
passage: ## Scope of specialty
As for many medical specialties, patients are most likely to be referred by family physicians (i.e., GP) or by physicians from different disciplines. The reasons might be:
- Drug overdose. Paracetamol overdose is common.
- Gastrointestinal bleeding from portal hypertension related to liver damage
- Abnormal blood test suggesting liver disease
- Enzyme defects leading to an enlarged liver in children, commonly termed storage diseases of the liver
- Jaundice / Hepatitis virus positivity in blood, perhaps discovered on screening blood tests
- Ascites or swelling of abdomen from fluid accumulation, commonly due to liver disease but can be from other diseases like heart failure
- All patients with advanced liver disease e.g. cirrhosis should be under specialist care
- To undergo ERCP for diagnosing diseases of biliary tree or their management
- Fever with other features suggestive of infection involving mentioned organs. Some exotic tropical diseases like hydatid cyst, kala-azar or schistosomiasis may be suspected. Microbiologists would be involved as well
- Systemic diseases affecting liver and biliary tree e.g. haemochromatosis
- Follow-up of liver transplant
- Pancreatitis - commonly due to alcohol or gallstone
- Cancer of above organs. Usually multi-disciplinary approach is undertaken with involvement of oncologist and other experts.
## History
Evidence from autopsies on Egyptian mummies suggests that liver damage from the parasitic infection bilharziasis was widespread in the ancient society.
|
https://en.wikipedia.org/wiki/Hepatology
|
passage:
```Lisp
(let ((phone-book (make-hash-table :test #'equal)))
(setf (gethash "Sally Smart" phone-book) "555-9999")
(setf (gethash "John Doe" phone-book) "555-1212")
(setf (gethash "J. Random Hacker" phone-book) "553-1337"))
```
The `gethash` function permits obtaining the value associated with a key.
```Lisp
(gethash "John Doe" phone-book)
```
Additionally, a default value for the case of an absent key may be specified.
```Lisp
(gethash "Incognito" phone-book 'no-such-key)
```
An invocation of `gethash` actually returns two values: the value or substitute value for the key and a boolean indicator, returning `T` if the hash table contains the key and `NIL` to signal its absence.
```Lisp
(multiple-value-bind (value contains-key) (gethash "Sally Smart" phone-book)
(if contains-key
(format T "~&The associated value is: ~s" value)
(format T "~&The key could not be found.")))
```
Use `remhash` for deleting the entry associated with a key.
```Lisp
(remhash "J. Random Hacker" phone-book)
```
`clrhash` completely empties the hash table.
```Lisp
(clrhash phone-book)
```
The dedicated `maphash` function specializes in iterating hash tables.
|
https://en.wikipedia.org/wiki/Comparison_of_programming_languages_%28associative_array%29
|
passage: ## Applied to the Earth
The acceleration affecting the motion of air "sliding" over the Earth's surface is the horizontal component of the Coriolis term
$$
-2 \, \boldsymbol{\omega} \times \mathbf{v}
$$
This component is orthogonal to the velocity over the Earth surface and is given by the expression
$$
2 \, \omega \, v \, \sin \phi
$$
where
-
$$
\omega
$$
is the spin rate of the Earth
-
$$
\phi
$$
is the latitude, positive in the northern hemisphere and negative in the southern hemisphere
In the northern hemisphere, where the latitude is positive, this acceleration, as viewed from above, is to the right of the direction of motion. Conversely, it is to the left in the southern hemisphere.
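A minimal Python sketch of this horizontal component, using Earth's rotation rate ω ≈ 7.2921×10⁻⁵ rad/s (the wind speed and latitude below are illustrative):

```python
import math

def coriolis_horizontal(v, lat_deg, omega=7.2921e-5):
    """Horizontal Coriolis acceleration 2*omega*v*sin(phi) in m/s^2.
    Positive latitude (northern hemisphere) gives deflection to the right
    of the motion; negative latitude gives deflection to the left."""
    return 2.0 * omega * v * math.sin(math.radians(lat_deg))

# Air moving at 10 m/s at 45 deg N: about 1.03e-3 m/s^2.
a = coriolis_horizontal(10.0, 45.0)
```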
### Rotating sphere
Consider a location with latitude φ on a sphere that is rotating around the north–south axis. A local coordinate system is set up with the x axis horizontally due east, the y axis horizontally due north and the z axis vertically upwards.
|
https://en.wikipedia.org/wiki/Coriolis_force
|
passage: The width of a bounded convex set can be defined in the same way as for curves, by the distance between pairs of parallel lines that touch the set without crossing it, and a convex set is a body of constant width when this distance is nonzero and does not depend on the direction of the lines. Every body of constant width has a curve of constant width as its boundary, and every curve of constant width has a body of constant width as its convex hull.
Another equivalent way to define the width of a compact curve or of a convex set is by looking at its orthogonal projection onto a line. In both cases, the projection is a line segment, whose length equals the distance between support lines that are perpendicular to the line. So, a curve or a convex set has constant width when all of its orthogonal projections have the same length.
## Examples
Circles have constant width, equal to their diameter. On the other hand, squares do not: supporting lines parallel to two opposite sides of the square are closer together than supporting lines parallel to a diagonal. More generally, no polygon can have constant width. However, there are other shapes of constant width. A standard example is the Reuleaux triangle, the intersection of three circles, each centered where the other two circles cross. Its boundary curve consists of three arcs of these circles, meeting at 120° angles, so it is not smooth, and in fact these angles are the sharpest possible for any curve of constant width.
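The projection characterization above is easy to test numerically. A small Python sketch computes the width of a square in two directions (for a convex polygon, the orthogonal projection of the vertex set determines the projection of the whole set):

```python
import math

def width(points, theta):
    """Width of a convex polygon (given by its vertices) in direction theta:
    the length of its orthogonal projection onto a line at angle theta."""
    d = (math.cos(theta), math.sin(theta))
    proj = [p[0] * d[0] + p[1] * d[1] for p in points]
    return max(proj) - min(proj)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
w_side = width(square, 0.0)           # supporting lines parallel to a side: 1
w_diag = width(square, math.pi / 4)   # supporting lines parallel to a diagonal: sqrt(2)
```

The two values differ, confirming that the square is not of constant width.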
|
https://en.wikipedia.org/wiki/Curve_of_constant_width
|
passage: However, sexual reproduction involving meiosis is also a primitive characteristic of eukaryotes. Thus meiosis and mitosis may both have evolved, in parallel, from ancestral prokaryotic processes.
While in bacterial cell division, after duplication of DNA, two circular chromosomes are attached to a special region of the cell membrane, eukaryotic mitosis is usually characterized by the presence of many linear chromosomes, whose kinetochores attaches to the microtubules of the spindle. In relation to the forms of mitosis, closed intranuclear pleuromitosis seems to be the most primitive type, as it is more similar to bacterial division.
## Gallery
Mitotic cells can be visualized microscopically by staining them with fluorescent antibodies and dyes.
|
https://en.wikipedia.org/wiki/Mitosis
|
passage: This upper bound is not necessarily reached. For example, the unit sphere in
$$
\R^d
$$
has Carathéodory's number equal to 2, since any point inside the sphere is the convex sum of two points on the sphere.
With additional assumptions on
$$
P\subset \R^d
$$
, upper bounds strictly lower than
$$
d+1
$$
can be obtained.
### Dimensionless variant
Recently, Adiprasito, Barany, Mustafa and Terpai proved a variant of Caratheodory's theorem that does not depend on the dimension of the space.
### Colorful Carathéodory theorem
Let X1, ..., Xd+1 be sets in Rd and let x be a point contained in the intersection of the convex hulls of all these d+1 sets.
Then there is a set T = {x1, ..., xd+1}, where , such that the convex hull of T contains the point x.
By viewing the sets X1, ..., Xd+1 as different colors, the set T is made by points of all colors, hence the "colorful" in the theorem's name. The set T is also called a rainbow simplex, since it is a d-dimensional simplex in which each corner has a different color.
This theorem has a variant in which the convex hull is replaced by the conical hull. Let X1, ..., Xd be sets in Rd and let x be a point contained in the intersection of the conical hulls of all these d sets.
|
https://en.wikipedia.org/wiki/Carath%C3%A9odory%27s_theorem_%28convex_hull%29
|
passage: ### Metabolism
Modern non-avian reptiles exhibit some form of cold-bloodedness (i.e. some mix of poikilothermy, ectothermy, and bradymetabolism) so that they have limited physiological means of keeping the body temperature constant and often rely on external sources of heat. Due to a less stable core temperature than birds and mammals, reptilian biochemistry requires enzymes capable of maintaining efficiency over a greater range of temperatures than in the case for warm-blooded animals. The optimum body temperature range varies with species, but is typically below that of warm-blooded animals; for many lizards, it falls in the range, while extreme heat-adapted species, like the American desert iguana Dipsosaurus dorsalis, can have optimal physiological temperatures in the mammalian range, between . While the optimum temperature is often encountered when the animal is active, the low basal metabolism makes body temperature drop rapidly when the animal is inactive.
As in all animals, reptilian muscle action produces heat. In large reptiles, like leatherback turtles, the low surface-to-volume ratio allows this metabolically produced heat to keep the animals warmer than their environment even though they do not have a warm-blooded metabolism. This form of homeothermy is called gigantothermy; it has been suggested as having been common in large dinosaurs and other extinct large-bodied reptiles.
The benefit of a low resting metabolism is that it requires far less fuel to sustain bodily functions.
|
https://en.wikipedia.org/wiki/Reptile
|
passage: [exp(727.95133239), exp(727.95133920)] (assuming RH) 6.6587 (without RH) 1.2741 (assuming RH)2010 2.2 Zegowitz [exp(727.951324783), exp(727.951346802)] (without RH) [exp(727.951332973), exp(727.951338612)] (assuming RH) 6.695531258 (without RH) 1.15527413 (assuming RH)
Rigorously, proved that there are no crossover points below
$$
x = 10^8
$$
, improved by to
$$
8\times 10^{10}
$$
, by to
$$
10^{14}
$$
, by to
$$
1.39\times 10^{17}
$$
, and by to
$$
10^{19}
$$
.
There is no explicit value
$$
x
$$
known for certain to have the property
$$
\pi(x) > \operatorname{li}(x),
$$
though computer calculations suggest some explicit numbers that are quite likely to satisfy this.
Even though the natural density of the positive integers for which
$$
\pi(x) > \operatorname{li}(x)
$$
does not exist, showed that the logarithmic density of these positive integers does exist and is positive. showed that this proportion is about , which is surprisingly large given how far one has to go to find the first example.
|
https://en.wikipedia.org/wiki/Skewes%27s_number
|
passage: Since DNA polymerase cannot add bases in the 3′→5′ direction complementary to the template strand, DNA is synthesized ‘backward’ in short fragments moving away from the replication fork, known as Okazaki fragments. Unlike in the leading strand, this method results in the repeated starting and stopping of DNA synthesis, requiring multiple RNA primers. Along the DNA template, primase intersperses RNA primers that DNA polymerase uses to synthesize DNA from in the 5′→3′ direction.
Another example of primers being used to enable DNA synthesis is reverse transcription. Reverse transcriptase is an enzyme that uses a template strand of RNA to synthesize a complementary strand of DNA. The DNA polymerase component of reverse transcriptase requires an existing 3' end to begin synthesis.
### Primer removal
After the insertion of Okazaki fragments, the RNA primers are removed (the mechanism of removal differs between prokaryotes and eukaryotes) and replaced with new deoxyribonucleotides that fill the gaps where the RNA primer was present. DNA ligase then joins the fragmented strands together, completing the synthesis of the lagging strand.
In prokaryotes, DNA polymerase I synthesizes the Okazaki fragment until it reaches the previous RNA primer. Then the enzyme simultaneously acts as a 5′→3′ exonuclease, removing primer ribonucleotides in front and adding deoxyribonucleotides behind.
|
https://en.wikipedia.org/wiki/Primer_%28molecular_biology%29
|
passage: The Enuma anu enlil, written during the Neo-Assyrian period in the 7th century BC, comprises a list of omens and their relationships with various celestial phenomena including the motions of the planets. The inferior planets Venus and Mercury and the superior planets Mars, Jupiter, and Saturn were all identified by Babylonian astronomers. These would remain the only known planets until the invention of the telescope in early modern times.
#### Greco-Roman astronomy
The ancient Greeks initially did not attach as much significance to the planets as the Babylonians. In the 6th and 5th centuries BC, the Pythagoreans appear to have developed their own independent planetary theory, which consisted of the Earth, Sun, Moon, and planets revolving around a "Central Fire" at the center of the Universe. Pythagoras or Parmenides is said to have been the first to identify the evening star (Hesperos) and morning star (Phosphoros) as one and the same (Aphrodite, Greek corresponding to Latin Venus), though this had long been known in Mesopotamia. In the 3rd century BC, Aristarchus of Samos proposed a heliocentric system, according to which Earth and the planets revolved around the Sun. The geocentric system remained dominant until the Scientific Revolution.
By the 1st century BC, during the Hellenistic period, the Greeks had begun to develop their own mathematical schemes for predicting the positions of the planets.
|
https://en.wikipedia.org/wiki/Planet
|
passage: The names of relationships between nodes model the kinship terminology of family relations. The gender-neutral names "parent" and "child" have largely displaced the older "father" and "son" terminology. The term "uncle" is still widely used for other nodes at the same level as the parent, although it is sometimes replaced with gender-neutral terms like "ommer".
- A node's "parent" is a node one step higher in the hierarchy (i.e. closer to the root node) and lying on the same branch.
- "Sibling" ("brother" or "sister") nodes share the same parent node.
- A node's "uncles" (sometimes "ommers") are siblings of that node's parent.
- A node that is connected to all lower-level nodes is called an "ancestor". The connected lower-level nodes are "descendants" of the ancestor node.
In the example, "encyclopedia" is the parent of "science" and "culture", its children. "Art" and "craft" are siblings, and children of "culture", which is their parent and thus one of their ancestors. Also, "encyclopedia", as the root of the tree, is the ancestor of "science", "culture", "art" and "craft". Finally, "science", "art" and "craft", as leaves, are ancestors of no other node.
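A minimal Python sketch of these relationships on the example hierarchy (the class and method names are illustrative, not a standard API):

```python
class Node:
    """Minimal rooted-tree node illustrating the kinship terminology."""
    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []
        if parent:
            parent.children.append(self)

    def ancestors(self):
        """All nodes on the path up to the root."""
        n, out = self.parent, []
        while n:
            out.append(n)
            n = n.parent
        return out

    def siblings(self):
        """Other children of the same parent."""
        if self.parent is None:
            return []
        return [c for c in self.parent.children if c is not self]

    def uncles(self):
        """Siblings of the parent (sometimes called "ommers")."""
        return self.parent.siblings() if self.parent else []

# The example hierarchy from the text:
enc = Node("encyclopedia")
science = Node("science", enc)
culture = Node("culture", enc)
art = Node("art", culture)
craft = Node("craft", culture)
```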
|
https://en.wikipedia.org/wiki/Tree_structure
|
passage: Similarly, theories based on the generative view of language pioneered by Noam Chomsky see language mostly as an innate faculty that is largely genetically encoded, whereas functionalist theories see it as a system that is largely cultural, learned through social interaction.
Continuity-based theories are held by a majority of scholars, but they vary in how they envision this development. Those who see language as being mostly innate, such as psychologist Steven Pinker, hold the precedents to be animal cognition, whereas those who see language as a socially learned tool of communication, such as psychologist Michael Tomasello, see it as having developed from animal communication in primates: either gestural or vocal communication to assist in cooperation. Other continuity-based models see language as having developed from music, a view already espoused by Rousseau, Herder, Humboldt, and Charles Darwin. A prominent proponent of this view is archaeologist Steven Mithen. Stephen Anderson states that the age of spoken languages is estimated at 60,000 to 100,000 years and that: Researchers on the evolutionary origin of language generally find it plausible to suggest that language was invented only once, and that all modern spoken languages are thus in some way related, even if that relation can no longer be recovered ... because of limitations on the methods available for reconstruction.
Because language emerged in the early prehistory of man, before the existence of any written records, its early development has left no historical traces, and it is believed that no comparable processes can be observed today.
|
https://en.wikipedia.org/wiki/Language
|
passage: $$
where:
$$
\frac{\partial \ln \Beta(\alpha,\beta)}{\partial \alpha} = -\frac{\partial \ln \Gamma(\alpha+\beta)}{\partial \alpha}+ \frac{\partial \ln \Gamma(\alpha)}{\partial \alpha}+ \frac{\partial \ln \Gamma(\beta)}{\partial \alpha}=-\psi(\alpha + \beta) + \psi(\alpha) + 0
$$
$$
\frac{\partial \ln \Beta(\alpha,\beta)}{\partial \beta}= - \frac{\partial \ln \Gamma(\alpha+\beta)}{\partial \beta}+ \frac{\partial \ln \Gamma(\alpha)}{\partial \beta} + \frac{\partial \ln \Gamma(\beta)}{\partial \beta}=-\psi(\alpha + \beta) + 0 + \psi(\beta)
$$
since the digamma function denoted ψ(α) is defined as the logarithmic derivative of the gamma function:
$$
\psi(\alpha) =\frac {\partial\ln \Gamma(\alpha)}{\partial \alpha}
$$
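The identity ∂ln B(α,β)/∂α = ψ(α) − ψ(α+β) can be verified numerically. A Python sketch using `math.lgamma` and central differences (the step size h and the test values of α, β are arbitrary choices):

```python
import math

def digamma(x, h=1e-6):
    """Central-difference approximation of psi(x) = d/dx ln Gamma(x)."""
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def ln_beta(a, b):
    """ln B(a, b) = ln Gamma(a) + ln Gamma(b) - ln Gamma(a + b)."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

a, b, h = 2.5, 3.5, 1e-6
lhs = (ln_beta(a + h, b) - ln_beta(a - h, b)) / (2 * h)  # d ln B / d alpha
rhs = digamma(a) - digamma(a + b)                        # psi(a) - psi(a+b)
```

The two sides agree to several digits; psi(1) should also match −γ ≈ −0.5772.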
|
https://en.wikipedia.org/wiki/Beta_distribution
|
passage: Additionally, the integration of OCBA with real-time digital twin-based optimization has further advanced its application in predictive simulation learning, enabling dynamic adjustments to resource allocation in healthcare settings. Furthermore, a contextual ranking and selection method for personalized medicine leverages OCBA to optimize resource allocation in treatments tailored to individual patient profiles, demonstrating its potential in personalized healthcare.
Sequential Allocation using Machine-learning Predictions as Light-weight Estimates (SAMPLE): SAMPLE is an extension of OCBA that presents a new opportunity for the integration of machine learning with digital twins for real-time simulation optimization and decision-making. Current methods for applying machine learning on simulation data may not produce the optimal solution due to errors encountered during the predictive learning phase since training data can be limited. SAMPLE overcomes this issue by leveraging lightweight machine learning models, which are easy to train and interpret, then running additional simulations once the real-world context is captured through the digital twin.
## References
## External links
- Optimal Computing Budget Allocation (OCBA) for Simulation-based Decision Making Under Uncertainty (Simulation Optimization)
Category:Stochastic optimization
|
https://en.wikipedia.org/wiki/Optimal_computing_budget_allocation
|
passage: Instead, we can consider the derivative with respect to
$$
\tau
$$
at
$$
\tau=0
$$
:
$$
\frac{\partial}{\partial\tau}\bigg|_{\tau=0}d(\gamma_0(t),\gamma_\tau(t))=|J(t)|=\sin t.
$$
Notice that we still detect the intersection of the geodesics at
$$
t=\pi
$$
. Notice further that to calculate this derivative we do not actually need to know
$$
d(\gamma_0(t),\gamma_\tau(t)) \,
$$
,
rather, we need only solve the equation
$$
y''+y=0 \,
$$
,
for some given initial data.
Jacobi fields give a natural generalization of this phenomenon to arbitrary Riemannian manifolds.
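On the sphere, the Jacobi equation y'' + y = 0 can indeed be integrated directly from the given initial data. A Python sketch using classical RK4 (step count chosen arbitrarily) recovers |J(t)| = sin t and the conjugate point at t = π:

```python
import math

def jacobi_norm(t_end, steps=10_000):
    """Integrate y'' + y = 0 with y(0) = 0, y'(0) = 1 (the sphere's Jacobi
    equation) by classical RK4; the exact solution is y(t) = sin t."""
    f = lambda y, yp: (yp, -y)   # first-order system: (y, y')' = (y', -y)
    h = t_end / steps
    y, yp = 0.0, 1.0
    for _ in range(steps):
        k1 = f(y, yp)
        k2 = f(y + h/2 * k1[0], yp + h/2 * k1[1])
        k3 = f(y + h/2 * k2[0], yp + h/2 * k2[1])
        k4 = f(y + h * k3[0], yp + h * k3[1])
        y  += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        yp += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return y

# J vanishes again at t = pi: the conjugate point where the geodesics meet.
conjugate_value = jacobi_norm(math.pi)
```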
## Solving the Jacobi equation
Let
$$
e_1(0)=\dot\gamma(0)/|\dot\gamma(0)|
$$
and complete this to get an orthonormal basis
$$
\big\{e_i(0)\big\}
$$
at
$$
T_{\gamma(0)}M
$$
. Parallel transport it to get a basis
$$
\{e_i(t)\}
$$
all along
$$
\gamma
$$
.
This gives an orthonormal basis with
$$
e_1(t)=\dot\gamma(t)/|\dot\gamma(t)|
$$
.
|
https://en.wikipedia.org/wiki/Jacobi_field
|
passage: Consequently, complex differentiability has much stronger implications than real differentiability. For instance, holomorphic functions are infinitely differentiable, whereas the existence of the nth derivative need not imply the existence of the (n + 1)th derivative for real functions. Furthermore, all holomorphic functions satisfy the stronger condition of analyticity, meaning that the function is, at every point in its domain, locally given by a convergent power series. In essence, this means that functions holomorphic on
$$
\Omega
$$
can be approximated arbitrarily well by polynomials in some neighborhood of every point in
$$
\Omega
$$
. This stands in sharp contrast to differentiable real functions; there are infinitely differentiable real functions that are nowhere analytic; see .
Most elementary functions, including the exponential function, the trigonometric functions, and all polynomial functions, extended appropriately to complex arguments as functions are holomorphic over the entire complex plane, making them entire functions, while rational functions
$$
p/q
$$
, where p and q are polynomials, are holomorphic on domains that exclude points where q is zero. Such functions that are holomorphic everywhere except a set of isolated points are known as meromorphic functions. On the other hand, the functions and
$$
z\mapsto \bar{z}
$$
are not holomorphic anywhere on the complex plane, as can be shown by their failure to satisfy the Cauchy–Riemann conditions (see below).
|
https://en.wikipedia.org/wiki/Complex_analysis
|
passage: - There is or was a small flashlight-shaped handheld sonar for divers, that merely displays range.
- For the INSS (integrated navigation sonar system)
### Upward looking sonar
An upward looking sonar (ULS) is a sonar device pointed upwards looking towards the surface of the sea. It is used for similar purposes as downward looking sonar, but has some unique applications such as measuring sea ice thickness, roughness and concentration, or measuring air entrainment from bubble plumes during rough seas. Often it is moored on the bottom of the ocean or floats on a taut line mooring at a constant depth of perhaps 100 m. They may also be used by submarines, AUVs, and floats such as the Argo float.
Passive sonar
Passive sonar listens without transmitting. It is often employed in military settings, although it is also used in science applications, e.g., detecting fish for presence/absence studies in various aquatic environments – see also passive acoustics and passive radar. In the very broadest usage, this term can encompass virtually any analytical technique involving remotely generated sound, though it is usually restricted to techniques applied in an aquatic environment.
### Identifying sound sources
Passive sonar has a wide variety of techniques for identifying the source of a detected sound. For example, U.S. vessels usually operate 60 Hertz (Hz) alternating current power systems.
|
https://en.wikipedia.org/wiki/Sonar
|
passage: Otherwise, the time taken in this step is
$$
O(|A| + \log |B| \log \log |B|)
$$
. Finally, making a heap of the subtree
$$
T
$$
takes
$$
O(|A|)
$$
time. This amounts to a total running time for shadow merging of
$$
O(|A| + \min\{\log |A| \log |B|, \log |B| \log \log |B|\})
$$
.
## Structure
A shadow heap
$$
H
$$
consists of threshold function
$$
f(H)
$$
, and an array for which the usual array-implemented binary heap property is upheld in its first entries, and for which the heap property is not necessarily upheld in the other entries. Thus, the shadow heap is essentially a binary heap
$$
B
$$
adjacent to an array
$$
A
$$
. To add an element to the shadow heap, place it in the array
$$
A
$$
. If the array becomes too large according to the specified threshold, we first build a heap out of
$$
A
$$
using Floyd's algorithm for heap construction, and then merge this heap with
$$
B
$$
using shadow merge. Finally, the merging of shadow heaps is simply done through sequential insertion of one heap into the other using the above insertion procedure.
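As a rough illustration only (not the exact structure above: the buffer here is a separate list rather than the tail of one array, the threshold function is made up, and Floyd's construction plus shadow merge are collapsed into a single heapify), a Python sketch of threshold-triggered merging:

```python
import heapq

class ShadowHeap:
    """Sketch of a shadow heap: a binary heap plus an unordered buffer.
    Inserts go into the buffer; once the buffer exceeds the threshold f,
    it is merged into the heap with a full O(n) heapify."""
    def __init__(self, threshold=lambda n: max(4, n // 4)):
        self.heap, self.buffer, self.f = [], [], threshold

    def insert(self, x):
        self.buffer.append(x)
        if len(self.buffer) > self.f(len(self.heap)):
            self.heap += self.buffer      # append buffer to the heap array
            heapq.heapify(self.heap)      # rebuild (Floyd-style) in O(n)
            self.buffer = []

    def find_min(self):
        # Minimum is either the heap root or somewhere in the buffer.
        return min(self.heap[:1] + self.buffer)

h = ShadowHeap()
for x in [5, 1, 9, 3, 7, 2]:
    h.insert(x)
```

Deferring heap maintenance this way keeps individual inserts cheap while bounding how disordered the structure can get.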
|
https://en.wikipedia.org/wiki/Shadow_heap
|
passage: Since any non-negative matrix can be obtained as a limit of positive matrices, one obtains the existence of an eigenvector with non-negative components; the corresponding eigenvalue will be non-negative and greater than or equal, in absolute value, to all other eigenvalues. However, for the example , the maximum eigenvalue r = 1 has the same absolute value as the other eigenvalue −1; while for , the maximum eigenvalue is r = 0, which is not a simple root of the characteristic polynomial, and the corresponding eigenvector (1, 0) is not strictly positive.
However, Frobenius found a special subclass of non-negative matrices — irreducible matrices — for which a non-trivial generalization is possible. For such a matrix, although the eigenvalues attaining the maximal absolute value might not be unique, their structure is under control: they have the form , where
$$
r
$$
is a real strictly positive eigenvalue, and ranges over the complex h th roots of 1 for some positive integer h called the period of the matrix.
The eigenvector corresponding to has strictly positive components (in contrast with the general case of non-negative matrices, where components are only non-negative). Also all such eigenvalues are simple roots of the characteristic polynomial.
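The strictly positive Perron eigenvector and its eigenvalue r can be approximated by power iteration. A Python sketch on a small positive matrix (chosen arbitrarily, with known eigenvalues 3 and 1):

```python
def power_iteration(A, iters=200):
    """Estimate the Perron root r and a positive eigenvector of a positive
    matrix A (given as nested lists) by power iteration with infinity-norm
    normalization."""
    n = len(A)
    v = [1.0] * n
    r = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        r = max(abs(x) for x in w)     # norm; converges to the Perron root
        v = [x / r for x in w]
    return r, v

A = [[2.0, 1.0], [1.0, 2.0]]           # positive matrix; eigenvalues 3 and 1
r, v = power_iteration(A)
```

The iterate converges to r = 3 with eigenvector proportional to (1, 1), all components strictly positive as the theorem predicts.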
### Further properties
are described below.
#### Classification of matrices
Let A be a n × n square matrix over field F.
The matrix A is irreducible if any of the following equivalent properties
holds. Definition 1 :
|
https://en.wikipedia.org/wiki/Perron%E2%80%93Frobenius_theorem
|