passage: Plant viruses cannot infect humans and other animals because they can reproduce only in living plant cells.
Originally from Peru, the potato has become a staple crop worldwide. The potato virus Y causes disease in potatoes and related species including tomatoes and peppers. In the 1980s, this virus acquired economic importance when it proved difficult to control in seed potato crops. Transmitted by aphids, this virus can reduce crop yields by up to 80 per cent, causing significant economic losses.
Plants have elaborate and effective defence mechanisms against viruses. One of the most effective is the presence of so-called resistance (R) genes. Each R gene confers resistance to a particular virus by triggering localised areas of cell death around the infected cell, which can often be seen with the unaided eye as large spots. This stops the infection from spreading. RNA interference is also an effective defence in plants. When they are infected, plants often produce natural disinfectants that kill viruses, such as salicylic acid, nitric oxide, and reactive oxygen molecules.
Plant virus particles or virus-like particles (VLPs) have applications in both biotechnology and nanotechnology. The capsids of most plant viruses are simple and robust structures and can be produced in large quantities either by the infection of plants or by expression in a variety of heterologous systems.
|
https://en.wikipedia.org/wiki/Virus
|
passage: #### Cocycle
A q-cochain is called a q-cocycle if it is in the kernel of
$$
\delta
$$
, hence
$$
Z^q(\mathcal{U}, \mathcal{F}) := \ker ( \delta_q) \subseteq C^q(\mathcal U, \mathcal F)
$$
is the set of all q-cocycles.
Thus a (q−1)-cochain
$$
f
$$
is a cocycle if for all q-simplices
$$
\sigma
$$
the cocycle condition
$$
\sum_{j=0}^{q} (-1)^j \mathrm{res}^{|\partial_j \sigma|}_{|\sigma|} f (\partial_j \sigma) = 0
$$
holds.
A 0-cocycle
$$
f
$$
is a collection of local sections of
$$
\mathcal{F}
$$
satisfying a compatibility relation on every intersecting
$$
A,B\in \mathcal{U}
$$
$$
f(A)|_{A \cap B} = f(B)|_{A \cap B}
$$
A 1-cocycle
$$
f
$$
satisfies for every non-empty
$$
U = A\cap B \cap C
$$
with
$$
A,B,C \in \mathcal{U}
$$
$$
f(B \cap C)|_U - f(A \cap C)|_U + f(A \cap B)|_U = 0
$$
|
https://en.wikipedia.org/wiki/%C4%8Cech_cohomology
|
passage: On 2 April 2025, Linus Media Group and Kioxia computed π to 300 trillion digits, also using y-cruncher.
## Practical approximations
Depending on the purpose of a calculation, π can be approximated by using fractions for ease of calculation. The most notable such approximations are 22/7 (relative error of about 4·10⁻⁴) and 355/113 (relative error of about 8·10⁻⁸).
In Chinese mathematics, the fractions 22/7 and 355/113 are known as Yuelü (约率) and Milü (密率).
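The quoted relative errors are easy to reproduce; a quick sketch in Python (the fractions come from the passage, the code itself is illustrative):

```python
import math

for num, den in ((22, 7), (355, 113)):
    rel_err = abs(num / den - math.pi) / math.pi
    print(f"{num}/{den}: relative error ~ {rel_err:.1e}")
# 22/7: relative error ~ 4.0e-04
# 355/113: relative error ~ 8.5e-08
```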
## Non-mathematical "definitions" of π
Of some notability are legal or historical texts purportedly "defining π" to have some rational value, such as the "Indiana Pi Bill" of 1897, which stated "the ratio of the diameter and circumference is as five-fourths to four" (which would imply "π = 3.2") and a passage in the Hebrew Bible that implies that π = 3.
### Indiana bill
The so-called "Indiana Pi Bill" from 1897 has often been characterized as an attempt to "legislate the value of Pi". Rather, the bill dealt with a purported solution to the problem of geometrically "squaring the circle".
The bill was nearly passed by the Indiana General Assembly in the U.S., and has been claimed to imply a number of different values for π, although the closest it comes to explicitly asserting one is the wording "the ratio of the diameter and circumference is as five-fourths to four", which would make π = 3.2, a discrepancy of nearly 2 percent.
|
https://en.wikipedia.org/wiki/Approximations_of_%CF%80
|
passage: Typically siblings have an order, with the first one conventionally drawn on the left. Some definitions allow a tree to have no nodes at all, in which case it is called empty.
An internal node (also known as an inner node, inode for short, or branch node) is any node of a tree that has child nodes. Similarly, an external node (also known as an outer node, leaf node, or terminal node) is any node that does not have child nodes.
The height of a node is the length of the longest downward path to a leaf from that node. The height of the root is the height of the tree. The depth of a node is the length of the path to its root (i.e., its root path). Thus the root node has depth zero, leaf nodes have height zero, and a tree with only a single node (hence both a root and leaf) has depth and height zero. Conventionally, an empty tree (tree with no nodes, if such are allowed) has height −1.
Each non-root node can be treated as the root node of its own subtree, which includes that node and all its descendants.
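A minimal Python sketch of these definitions (the Node class and names are illustrative, not from the passage):

```python
class Node:
    def __init__(self, value, children=None):
        self.value = value
        self.parent = None
        self.children = children or []
        for child in self.children:
            child.parent = self

    def height(self):
        # Length of the longest downward path from this node to a leaf.
        return 0 if not self.children else 1 + max(c.height() for c in self.children)

    def depth(self):
        # Length of the path from this node up to the root.
        return 0 if self.parent is None else 1 + self.parent.depth()

leaf = Node("leaf")
root = Node("root", [Node("inner", [leaf])])
print(root.height(), root.depth(), leaf.depth())  # 2 0 2
```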
Other terms used with trees:
|
https://en.wikipedia.org/wiki/Tree_%28abstract_data_type%29
|
passage: In Spain, the overhead line crossing pylons in the Spanish bay of Cádiz have a particularly interesting construction. The main crossing towers are tall, with one crossarm atop a frustum framework construction. The longest overhead line spans are the crossing of the Norwegian Sognefjord Span (between two masts) and the Ameralik Span in Greenland. In Germany, the overhead line of the EnBW AG crossing of the Eyachtal has the longest span in the country.
In order to drop overhead lines into steep, deep valleys, inclined towers are occasionally used. These are utilized at the Hoover Dam, located in the United States, to descend the cliff walls of the Black Canyon of the Colorado. In Switzerland, a pylon inclined around 20 degrees to the vertical is located near Sargans, St. Gallen. Highly sloping masts are used on two 380 kV pylons in Switzerland, the top 32 meters of one of them being bent by 18 degrees to the vertical.
Power station chimneys are sometimes equipped with crossbars for fixing conductors of the outgoing lines. Because of possible problems with corrosion by flue gases, such constructions are very rare.
However, there are also roof-mounted support structures for high-voltage. Some thermal power plants in Poland like Połaniec Power Station and in the former Soviet Union like Lukoml Power Station use portal pylons on the roof of the power station building for the high voltage line from the machine transformer to the switchyard. Also other industrial buildings may have a rooftop powerline support structure.
|
https://en.wikipedia.org/wiki/Transmission_tower
|
passage: The term was taken up by Michael Faraday in connection with electromagnetic induction in the 1820s. However, a clear definition of voltage and method of measuring it had not been developed at this time. Volta distinguished electromotive force (emf) from tension (potential difference): the observed potential difference at the terminals of an electrochemical cell when it was open circuit must exactly balance the emf of the cell so that no current flowed.
|
https://en.wikipedia.org/wiki/Voltage
|
passage: One of Gauss's sketches of this kind was a drawing of a tessellation of the unit disk by "equilateral" hyperbolic triangles with all angles equal to
$$
\pi/4
$$
.
An example of Gauss's insight in analysis is the cryptic remark that the principles of circle division by compass and straightedge can also be applied to the division of the lemniscate curve, which inspired Abel's theorem on lemniscate division. Another example is his publication "Summatio quarundam serierum singularium" (1811) on the determination of the sign of quadratic Gauss sums, in which he solved the main problem by introducing q-analogs of binomial coefficients and manipulating them by several original identities that seem to stem from his work on elliptic function theory; however, Gauss cast his argument in a formal way that does not reveal its origin in elliptic function theory, and only the later work of mathematicians such as Jacobi and Hermite has exposed the crux of his argument.
In the "Disquisitiones generales circa series infinitam..." (1813), he provides the first systematic treatment of the general hypergeometric function
$$
F(\alpha,\beta,\gamma,x)
$$
, and shows that many of the functions known at the time are special cases of the hypergeometric function. This work is the first exact inquiry into convergence of infinite series in the history of mathematics. Furthermore, it deals with infinite continued fractions arising as ratios of hypergeometric functions, which are now called Gauss continued fractions.
|
https://en.wikipedia.org/wiki/Carl_Friedrich_Gauss
|
passage: ## One-point third-order iterative method: Halley's formula
The origin of the interpolation with rational functions can be found in the previous work done by Edmond Halley. Halley's formula is known as a one-point third-order iterative method to solve
$$
\,f(x)=0
$$
by means of an approximating rational function defined by
$$
h(z)=\frac{a}{z+b}+c.
$$
We can determine a, b, and c so that
$$
h^{(i)}(x)=f^{(i)}(x), \qquad i=0,1,2.
$$
Then solving
$$
\,h(z)=0
$$
yields the iteration
$$
x_{n+1}=x_{n}-\frac{f(x_n)}{f'(x_n)} \left({\frac{1}{1-\frac{f(x_n)f''(x_n)}{2(f'(x_n))^2}}}\right).
$$
This is referred to as Halley's formula.
This geometrical interpretation via
$$
h(z)
$$
was derived by Gander (1978), who also derived the equivalent iteration by applying Newton's method to
$$
g(x)=\frac{f(x)}{\sqrt{f'(x)}}=0.
$$
We call this the algebraic interpretation,
$$
g(x)
$$
, of Halley's formula.
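A small runnable sketch of the iteration above, assuming analytic first and second derivatives are available (the test function and starting point are illustrative):

```python
def halley(f, df, d2f, x, tol=1e-12, max_iter=50):
    # x_{n+1} = x_n - (f/f') * 1 / (1 - f f'' / (2 f'^2))
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        step = (fx / dfx) / (1.0 - fx * d2f(x) / (2.0 * dfx ** 2))
        x -= step
        if abs(step) < tol:
            break
    return x

# Solve t^3 - 2 = 0: the cube root of 2.
root = halley(lambda t: t ** 3 - 2, lambda t: 3 * t ** 2, lambda t: 6 * t, x=1.0)
print(root, 2 ** (1 / 3))  # both ~1.259921...
```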
|
https://en.wikipedia.org/wiki/Simple_rational_approximation
|
passage: Therefore, the smaller the significance level, the lower the probability of committing type I error.
Some problems are usually associated with this framework (See criticism of hypothesis testing):
- A difference that is highly statistically significant can still be of no practical significance, but it is possible to properly formulate tests to account for this. One response involves going beyond reporting only the significance level to include the p-value when reporting whether a hypothesis is rejected or accepted. The p-value, however, does not indicate the size or importance of the observed effect and can also seem to exaggerate the importance of minor differences in large studies. A better and increasingly common approach is to report confidence intervals. Although these are produced from the same calculations as those of hypothesis tests or p-values, they describe both the size of the effect and the uncertainty surrounding it.
- Fallacy of the transposed conditional, aka prosecutor's fallacy: criticisms arise because the hypothesis testing approach forces one hypothesis (the null hypothesis) to be favored, since what is being evaluated is the probability of the observed result given the null hypothesis and not probability of the null hypothesis given the observed result. An alternative to this approach is offered by Bayesian inference, although it requires establishing a prior probability.
- Rejecting the null hypothesis does not automatically prove the alternative hypothesis.
- Like everything in inferential statistics, this framework relies on sample size; therefore, under fat tails, p-values may be seriously miscomputed.
##### Examples
Some well-known statistical tests and procedures are:
|
https://en.wikipedia.org/wiki/Statistics
|
passage: It doesn't involve any actual movement of charge, but it nevertheless has an associated magnetic field, as if it were an actual current. Some authors apply the name displacement current to only this contribution.
The second term on the right hand side is the displacement current as originally conceived by Maxwell, associated with the polarization of the individual molecules of the dielectric material.
Maxwell's original explanation for displacement current focused upon the situation that occurs in dielectric media. In the modern post-aether era, the concept has been extended to apply to situations with no material media present, for example, to the vacuum between the plates of a charging vacuum capacitor. The displacement current is justified today because it serves several requirements of an electromagnetic theory: correct prediction of magnetic fields in regions where no free current flows; prediction of wave propagation of electromagnetic fields; and conservation of electric charge in cases where charge density is time-varying. For greater discussion see Displacement current.
## Extending the original law: the Ampère–Maxwell equation
Next, the circuital equation is extended by including the polarization current, thereby remedying the limited applicability of the original circuital law.
|
https://en.wikipedia.org/wiki/Amp%C3%A8re%27s_circuital_law
|
passage: For example, the predecessor function can be defined as:
PRED := λn.λf.λx.n (λg.λh.h (g f)) (λu.x) (λu.u)
which can be verified by showing inductively that n (λg.λh.h (g f)) (λu.x) is the add n − 1 function for n > 0.
### Pairs
A pair (2-tuple) can be defined in terms of TRUE and FALSE, by using the Church encoding for pairs. For example, PAIR encapsulates the pair (x,y), FIRST returns the first element of the pair, and SECOND returns the second.
A linked list can be defined as either NIL for the empty list, or the CONS of an element and a smaller list. The predicate NULL tests for the value NIL. (Alternatively, with NIL := FALSE, the construct obviates the need for an explicit NULL test).
As an example of the use of pairs, the shift-and-increment function that maps (m, n) to (n, n + 1) can be defined as
Φ := λx.PAIR (SECOND x) (SUCC (SECOND x))
which allows us to give perhaps the most transparent version of the predecessor function:
PRED := λn.FIRST (n Φ (PAIR 0 0))
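These encodings translate directly into any language with first-class functions; here is an illustrative Python sketch of Church pairs and the shift-and-increment predecessor (names follow the passage, everything else is an assumption):

```python
PAIR = lambda x: lambda y: lambda f: f(x)(y)
FIRST = lambda p: p(lambda x: lambda y: x)
SECOND = lambda p: p(lambda x: lambda y: y)

# Church numerals: n applies a function n times.
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))

def church(k):          # build the Church numeral for the Python int k
    n = ZERO
    for _ in range(k):
        n = SUCC(n)
    return n

def to_int(n):          # read a Church numeral back as a Python int
    return n(lambda m: m + 1)(0)

# Shift-and-increment: maps (m, n) to (n, n + 1).
PHI = lambda p: PAIR(SECOND(p))(SUCC(SECOND(p)))

# Predecessor: iterate PHI n times from (0, 0) and take the first component.
PRED = lambda n: FIRST(n(PHI)(PAIR(ZERO)(ZERO)))

assert to_int(PRED(church(5))) == 4
assert to_int(PRED(church(0))) == 0   # PRED 0 = 0 by convention
```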
## Additional programming techniques
There is a considerable body of programming idioms for lambda calculus. Many of these were originally developed in the context of using lambda calculus as a foundation for programming language semantics, effectively using lambda calculus as a low-level programming language. Because several programming languages include the lambda calculus (or something very similar) as a fragment, these techniques also see use in practical programming, but may then be perceived as obscure or foreign.
### Named constants
In lambda calculus, a library would take the form of a collection of previously defined functions, which as lambda-terms are merely particular constants.
|
https://en.wikipedia.org/wiki/Lambda_calculus
|
passage: Error function
$$
\sqrt{\pi}x e^{x^2}\operatorname{erfc}(x) \sim 1+\sum_{n=1}^\infty (-1)^n \frac{(2n-1)!!}{n!(2x^2)^n} \ (x \to \infty)
$$
where (2n − 1)!! is the double factorial.
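As a quick numerical sanity check of this expansion (a sketch; the choice of x and the SciPy dependency are assumptions, not from the passage):

```python
import math
from scipy.special import erfc

def double_factorial(k):
    return math.prod(range(k, 0, -2)) if k > 0 else 1

def partial_sum(x, terms):
    # 1 + sum_{n=1}^{terms} (-1)^n (2n-1)!! / (n! (2 x^2)^n)
    s = 1.0
    for n in range(1, terms + 1):
        s += (-1) ** n * double_factorial(2 * n - 1) / (math.factorial(n) * (2 * x * x) ** n)
    return s

x = 3.0
exact = math.sqrt(math.pi) * x * math.exp(x * x) * erfc(x)
for terms in (1, 2, 4, 8):
    print(terms, abs(partial_sum(x, terms) - exact))  # error shrinks over the first several terms
```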
### Worked example
Asymptotic expansions often occur when an ordinary series is used in a formal expression that forces the taking of values outside of its domain of convergence. For example, we might start with the ordinary series
$$
\frac{1}{1-w}=\sum_{n=0}^\infty w^n
$$
The expression on the left is valid on the entire complex plane
$$
w \ne 1
$$
, while the right hand side converges only for
$$
|w|< 1
$$
. Multiplying by
$$
e^{-w/t}
$$
and integrating both sides yields
$$
\int_0^\infty \frac{e^{-\frac{w}{t}}}{1 - w} \, dw = \sum_{n=0}^\infty t^{n+1} \int_0^\infty e^{-u} u^n \, du
$$
The integral on the left hand side can be expressed in terms of the exponential integral. The integral on the right hand side, after the substitution
$$
u=w/t
$$
, may be recognized as the gamma function.
|
https://en.wikipedia.org/wiki/Asymptotic_analysis
|
passage: The green algae (singular: green alga) are a group of chlorophyll-containing autotrophic eukaryotes consisting of the phylum Prasinodermophyta and its unnamed sister group that contains the Chlorophyta and Charophyta/Streptophyta. The land plants (Embryophytes) have emerged deep within the charophytes as a sister of the Zygnematophyceae. Since the realization that the Embryophytes emerged within the green algae, some authors are starting to include them. The completed clade that includes both green algae and embryophytes is monophyletic and is referred to as the clade Viridiplantae and as the kingdom Plantae. The green algae include unicellular and colonial flagellates, most with two flagella per cell, as well as various colonial, coccoid (spherical), and filamentous forms, and macroscopic, multicellular seaweeds. There are about 22,000 species of green algae, many of which live most of their lives as single cells, while other species form coenobia (colonies), long filaments, or highly differentiated macroscopic seaweeds.
A few other organisms rely on green algae to conduct photosynthesis for them. The chloroplasts in dinoflagellates of the genus Lepidodinium, euglenids and chlorarachniophytes were acquired from ingested endosymbiont green algae, and in the latter retain a nucleomorph (vestigial nucleus).
|
https://en.wikipedia.org/wiki/Green_algae
|
passage: Euler's homogeneous function theorem is a characterization of positively homogeneous differentiable functions, which may be considered as the fundamental theorem on homogeneous functions.
## Examples
### Simple example
The function
$$
f(x, y) = x^2 + y^2
$$
is homogeneous of degree 2:
$$
f(tx, ty) = (tx)^2 + (ty)^2 = t^2 \left(x^2 + y^2\right) = t^2 f(x, y).
$$
### Absolute value and norms
The absolute value of a real number is a positively homogeneous function of degree 1, which is not homogeneous, since
$$
|sx|=s|x|
$$
if
$$
s>0,
$$
and
$$
|sx|=-s|x|
$$
if
$$
s<0.
$$
The absolute value of a complex number is a positively homogeneous function of degree
$$
1
$$
over the real numbers (that is, when considering the complex numbers as a vector space over the real numbers). It is not homogeneous, over the real numbers as well as over the complex numbers.
More generally, every norm and seminorm is a positively homogeneous function of degree 1 which is not a homogeneous function. As for the absolute value, if the norm or semi-norm is defined on a vector space over the complex numbers, this vector space has to be considered as a vector space over the real numbers for applying the definition of a positively homogeneous function.
|
https://en.wikipedia.org/wiki/Homogeneous_function
|
passage: The values P(n) of the Padovan sequence for n = −20, …, 2 are 7, −7, 4, 0, −3, 4, −3, 1, 1, −2, 2, −1, 0, 1, −1, 1, 0, 0, 1, 0, 1, 1, 1.
## Sums of terms
The sum of the first n terms in the Padovan sequence is 2 less than P(n + 5), i.e.
$$
\sum_{m=0}^n P(m)=P(n+5)-2.
$$
Sums of alternate terms, sums of every third term and sums of every fifth term are also related to other terms in the sequence:
$$
\sum_{m=0}^n P(2m)=P(2n+3)-1
$$
$$
\sum_{m=0}^n P(2m+1)=P(2n+4)-1
$$
$$
\sum_{m=0}^n P(3m)=P(3n+2)
$$
$$
\sum_{m=0}^n P(3m+1)=P(3n+3)-1
$$
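These identities are easy to verify numerically; a short Python sketch (the recurrence P(n) = P(n − 2) + P(n − 3) with P(0) = P(1) = P(2) = 1 is the standard definition):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def padovan(k):
    # P(0) = P(1) = P(2) = 1, P(k) = P(k - 2) + P(k - 3)
    return 1 if k in (0, 1, 2) else padovan(k - 2) + padovan(k - 3)

for n in range(20):
    assert sum(padovan(m) for m in range(n + 1)) == padovan(n + 5) - 2
    assert sum(padovan(2 * m) for m in range(n + 1)) == padovan(2 * n + 3) - 1
    assert sum(padovan(2 * m + 1) for m in range(n + 1)) == padovan(2 * n + 4) - 1
    assert sum(padovan(3 * m) for m in range(n + 1)) == padovan(3 * n + 2)
print("all four identities hold for n = 0..19")
```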
|
https://en.wikipedia.org/wiki/Padovan_sequence
|
passage: However, at energies too low to excite water vapor, the atmosphere becomes transparent again, allowing free transmission of most microwave and radio waves.
Finally, at radio wavelengths longer than 10 m or so (about 30 MHz), the air in the lower atmosphere remains transparent to radio, but plasma in certain layers of the ionosphere begins to interact with radio waves (see skywave). This property allows some longer wavelengths (100 m or 3 MHz) to be reflected and results in shortwave radio beyond line-of-sight. However, certain ionospheric effects begin to block incoming radiowaves from space, when their frequency is less than about 10 MHz (wavelength longer than about 30 m).
## Thermal and electromagnetic radiation as a form of heat
The basic structure of matter involves charged particles bound together. When electromagnetic radiation impinges on matter, it causes the charged particles to oscillate and gain energy. The ultimate fate of this energy depends on the context. It could be immediately re-radiated and appear as scattered, reflected, or transmitted radiation. It may get dissipated into other microscopic motions within the matter, coming to thermal equilibrium and manifesting itself as thermal energy, or even kinetic energy, in the material.
|
https://en.wikipedia.org/wiki/Electromagnetic_radiation
|
passage: In eukaryotes, RNA is stabilised by certain post-transcriptional modifications, particularly the 5′ cap and poly-adenylated tail.
Intentional degradation of mRNA is used not just as a defence mechanism from foreign RNA (normally from viruses) but also as a route of mRNA destabilisation. If an mRNA molecule has a complementary sequence to a small interfering RNA then it is targeted for destruction via the RNA interference pathway.
### Three prime untranslated regions and microRNAs
Three prime untranslated regions (3′UTRs) of messenger RNAs (mRNAs) often contain regulatory sequences that post-transcriptionally influence gene expression. Such 3′-UTRs often contain both binding sites for microRNAs (miRNAs) as well as for regulatory proteins. By binding to specific sites within the 3′-UTR, miRNAs can decrease gene expression of various mRNAs by either inhibiting translation or directly causing degradation of the transcript. The 3′-UTR also may have silencer regions that bind repressor proteins that inhibit the expression of a mRNA.
The 3′-UTR often contains microRNA response elements (MREs). MREs are sequences to which miRNAs bind. These are prevalent motifs within 3′-UTRs. Among all regulatory motifs within the 3′-UTRs (e.g. including silencer regions), MREs make up about half of the motifs.
|
https://en.wikipedia.org/wiki/Gene_expression
|
passage: - Q is the set of states,
- Σ is the input alphabet,
- Δ is the tape alphabet,
- δ is the transition function,
- q0 is the initial state,
- Qa and Qr are the sets of accepting and rejecting states.
Since we are dealing with the problem of halting on an empty input we may assume w.l.o.g. that Δ={0,1} and that 0 represents a blank, while 1 represents some tape symbol. We define τ so that we can represent computations:
τ := {<, min, T0 (⋅,⋅), T1 (⋅,⋅), (Hq(⋅,⋅))(q ∈ Q)}
Where:
- < is a linear order and min is a constant symbol for the minimal element with respect to < (our finite domain will be associated with an initial segment of the natural numbers).
- T0 and T1 are tape predicates. Ti(s,t) indicates that position s at time t contains i, where i ∈ {0,1}.
- Hq's are head predicates. Hq(s,t) indicates that at time t the machine is in state q, and its head is in position s.
The sentence φM states that (i) <, min, Ti's and Hq's are interpreted as above and (ii) that the machine eventually halts. The halting condition is equivalent to saying that Hq∗(s, t) holds for some s, t and q∗ ∈ Qa ∪ Qr and after that state, the configuration of the machine does not change.
|
https://en.wikipedia.org/wiki/Trakhtenbrot%27s_theorem
|
passage: Attention weights are calculated using the query and key vectors: the attention weight
$$
a_{ij}
$$
from token
$$
i
$$
to token
$$
j
$$
is the dot product between
$$
q_i
$$
and
$$
k_j
$$
. The attention weights are divided by the square root of the dimension of the key vectors,
$$
\sqrt{d_k}
$$
, which stabilizes gradients during training, and passed through a softmax which normalizes the weights. The fact that
$$
W^Q
$$
and
$$
W^K
$$
are different matrices allows attention to be non-symmetric: if token
$$
i
$$
attends to token
$$
j
$$
(i.e.
$$
q_i\cdot k_j
$$
is large), this does not necessarily mean that token
$$
j
$$
will attend to token
$$
i
$$
(i.e.
$$
q_j\cdot k_i
$$
could be small). The output of the attention unit for token
$$
i
$$
is the weighted sum of the value vectors of all tokens, weighted by
$$
a_{ij}
$$
, the attention from token
$$
i
$$
to each token.
The attention calculation for all tokens can be expressed as one large matrix calculation using the softmax function, which is useful for training because optimized matrix-multiplication routines compute it quickly.
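A minimal NumPy sketch of the computation described above (dimensions, the random data, and matrix names such as W_Q are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_tokens, d_model, d_k, d_v = 4, 8, 8, 8

X = rng.normal(size=(n_tokens, d_model))   # one embedding row per token
W_Q = rng.normal(size=(d_model, d_k))      # distinct query/key projections,
W_K = rng.normal(size=(d_model, d_k))      #   so attention need not be symmetric
W_V = rng.normal(size=(d_model, d_v))

Q, K, V = X @ W_Q, X @ W_K, X @ W_V        # query, key and value vectors

scores = Q @ K.T / np.sqrt(d_k)            # q_i . k_j, scaled by sqrt(d_k)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax: a_ij

output = weights @ V                       # weighted sum of value vectors
print(np.allclose(weights, weights.T))     # False in general: non-symmetric
```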
|
https://en.wikipedia.org/wiki/Transformer_%28deep_learning_architecture%29
|
passage: siftDown(a, start, count)
(after sifting down the root all nodes/elements are in heap order)

(Repair the heap whose root element is at index 'start', assuming the heaps rooted at its children are valid)
procedure siftDown(a, root, end) is
    while iLeftChild(root) < end do    (While the root has at least one child)
        child ← iLeftChild(root)    (Left child of root)
        (If there is a right child and that child is greater)
        if child+1 < end and a[child] < a[child+1] then
            child ← child + 1
        if a[root] < a[child] then
            swap(a[root], a[child])
            root ← child    (repeat to continue sifting down the child now)
        else
            (The root holds the largest element. Since we may assume the heaps rooted
             at the children are valid, this means that we are done.)
            return
The heap-building procedure operates by building small heaps and repeatedly merging them using siftDown.
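For reference, a direct Python transcription of the pseudocode, with a minimal heapsort driver added for context (the driver is an assumed completion, not part of the excerpt):

```python
def sift_down(a, root, end):
    """Repair the max-heap rooted at `root`, assuming its child heaps are valid."""
    while 2 * root + 1 < end:            # while the root has at least one child
        child = 2 * root + 1             # left child of root
        # If there is a right child and that child is greater, use it instead.
        if child + 1 < end and a[child] < a[child + 1]:
            child += 1
        if a[root] < a[child]:
            a[root], a[child] = a[child], a[root]
            root = child                 # continue sifting down the child
        else:
            return                       # root holds the largest element: done

def heapsort(a):
    n = len(a)
    for start in range(n // 2 - 1, -1, -1):   # build the heap (heapify)
        sift_down(a, start, n)
    for end in range(n - 1, 0, -1):           # repeatedly extract the maximum
        a[0], a[end] = a[end], a[0]
        sift_down(a, 0, end)

data = [5, 3, 8, 1, 9, 2]
heapsort(data)
print(data)  # [1, 2, 3, 5, 8, 9]
```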
|
https://en.wikipedia.org/wiki/Heapsort
|
passage: At the same time, in 1975, the American Medical Association, which played the central role in fighting quackery in the United States, abolished its quackery committee and closed down its Department of Investigation. By the early to mid 1970s the expression "alternative medicine" came into widespread use, and the expression became mass marketed as a collection of "natural" and effective treatment "alternatives" to science-based biomedicine. By 1983, mass marketing of "alternative medicine" was so pervasive that the British Medical Journal (BMJ) pointed to "an apparently endless stream of books, articles, and radio and television programmes urge on the public the virtues of (alternative medicine) treatments ranging from meditation to drilling a hole in the skull to let in more oxygen".
An analysis of trends in the criticism of complementary and alternative medicine (CAM) in five prestigious American medical journals during the period of reorganization within medicine (1965–1999) was reported as showing that the medical profession had responded to the growth of CAM in three phases, and that in each phase, changes in the medical marketplace had influenced the type of response in the journals. Changes included relaxed medical licensing, the development of managed care, rising consumerism, and the establishment of the USA Office of Alternative Medicine (later National Center for Complementary and Alternative Medicine, currently National Center for Complementary and Integrative Health).
### Medical education
Mainly as a result of reforms following the Flexner Report of 1910 medical education in established medical schools in the US has generally not included alternative medicine as a teaching topic.
|
https://en.wikipedia.org/wiki/Alternative_medicine
|
passage: The construction of this differential on an exterior algebra makes sense for any Lie algebra, so it is used to define Lie algebra cohomology for all Lie algebras. More generally one uses a similar construction to define Lie algebra cohomology with coefficients in a module.
If
$$
G
$$
is a simply connected noncompact Lie group, the Lie algebra cohomology of the associated Lie algebra
$$
\mathfrak g
$$
does not necessarily reproduce the de Rham cohomology of
$$
G
$$
. The reason for this is that the passage from the complex of all differential forms to the complex of left-invariant differential forms uses an averaging process that only makes sense for compact groups.
## Definition
Let
$$
\mathfrak g
$$
be a Lie algebra over a commutative ring R with universal enveloping algebra
$$
U\mathfrak g
$$
, and let M be a representation of
$$
\mathfrak g
$$
(equivalently, a
$$
U\mathfrak g
$$
-module). Considering R as a trivial representation of
$$
\mathfrak g
$$
, one defines the cohomology groups
$$
\mathrm{H}^n(\mathfrak{g}; M) := \mathrm{Ext}^n_{U\mathfrak{g}}(R, M)
$$
(see Ext functor for the definition of Ext).
|
https://en.wikipedia.org/wiki/Lie_algebra_cohomology
|
passage: ### Work patterns
Patterns vary by country and region.
In the United States, the employment arrangements of emergency physician practices are either private (with a co-operative group of doctors staffing an emergency department under contract), institutional (physicians with or without an independent contractor relationship with the hospital), corporate (physicians with an independent contractor relationship with a third-party staffing company that services multiple emergency departments), or governmental (for example, when working within military services, public health services, veterans' benefit systems or other government agencies).
In the United Kingdom, all consultants in emergency medicine work in the National Health Service, and there is little scope for private emergency practice. In other countries like Australia, New Zealand, or Turkey, emergency medicine specialists are almost always salaried employees of government health departments and work in public hospitals, with pockets of employment in private or non-government aeromedical rescue or transport services, as well as some private hospitals with emergency departments; they may be supplemented or backed by non-specialist medical officers, and visiting general practitioners. Rural emergency departments are sometimes run by general practitioners alone, sometimes with non-specialist qualifications in emergency medicine.
## History
During the French Revolution, after seeing the speed with which the carriages of the French flying artillery maneuvered across the battlefields, French military surgeon Dominique Jean Larrey applied the idea of ambulances, or "flying carriages", for rapid transport of wounded soldiers to a central place where medical care was more accessible and practical.
|
https://en.wikipedia.org/wiki/Emergency_medicine
|
passage: This eventually allows the program to determine how the object will be lit.
### Shadow Maps
3D shadow mapping is another method that creates approximate, very diffuse shadows from a set position; these may not be entirely accurate.
### Radiosity Normal Mapping
Chris Green of Valve, a video game maker, says that because bump map data is derived from geometric descriptions of the object's surface, significant lighting cues due to lighting occlusion by surface details are not calculated. A common fix is to use an additional texture channel to create an ambient occlusion field. This only provides a darkening effect that is not connected to the direction of the light source acting on the surface.
## History
Shadow volume was proposed by Frank Crow in 1977. The advantage of a shadow volume was that it could be used to shadow everything, including itself.
|
https://en.wikipedia.org/wiki/Self-shadowing
|
passage: The innermost layer is the ovarian medulla. It can be hard to distinguish between the cortex and medulla, but follicles are usually not found in the medulla.
Follicular cells are flat epithelial cells that originate from surface epithelium covering the ovary. They are surrounded by granulosa cells that have changed from flat to cuboidal and proliferated to produce a stratified epithelium.
The ovary also contains blood vessels and lymphatics.
## Function
At puberty, the ovary begins to secrete increasing levels of hormones. Secondary sex characteristics begin to develop in response to the hormones. The ovary changes structure and function beginning at puberty. Since the ovaries are able to regulate hormones, they also play an important role in pregnancy and fertility. When egg cells (oocytes) are released from the fallopian tube, a variety of feedback mechanisms stimulate the endocrine system, which cause hormone levels to change. These feedback mechanisms are controlled by the hypothalamus and pituitary glands. Messages or signals from the hypothalamus are sent to the pituitary gland. In turn, the pituitary gland releases hormones to the ovaries. From this signaling, the ovaries release their own hormones.
### Gamete production
The ovaries are the site of production and periodical release of egg cells, the female gametes. In the ovaries, the developing egg cells (or oocytes) mature in the fluid-filled follicles.
|
https://en.wikipedia.org/wiki/Ovary
|
passage: In parallel, recent advancements in fabrication techniques, computational imaging, and simulation tools have opened up new possibilities to mimic nature across different architectural scales. As a result, there has been a rapid growth in devising innovative design approaches and solutions to counter energy problems. Biomimetic architecture is one of these multi-disciplinary approaches to sustainable design that follows a set of principles rather than stylistic codes, going beyond using nature as inspiration for the aesthetic components of built form but instead seeking to use nature to solve problems of the building's functioning and saving energy.
#### Characteristics
The term biomimetic architecture refers to the study and application of construction principles which are found in natural environments and species, and are translated into the design of sustainable solutions for architecture. Biomimetic architecture uses nature as a model, measure and mentor for providing architectural solutions across scales, which are inspired by natural organisms that have solved similar problems in nature. Using nature as a measure refers to using an ecological standard of measuring sustainability, and efficiency of man-made innovations, while the term mentor refers to learning from natural principles and using biology as an inspirational source.
Biomorphic architecture, also referred to as bio-decoration, on the other hand, refers to the use of formal and geometric elements found in nature, as a source of inspiration for aesthetic properties in designed architecture, and may not necessarily have non-physical, or economic functions. A historic example of biomorphic architecture dates back to Egyptian, Greek and Roman cultures, using tree and plant forms in the ornamentation of structural columns.
|
https://en.wikipedia.org/wiki/Biomimetics
|
passage: If
$$
Q
$$
sets of statistical moments are known:
$$
(\gamma_{0,q},\mu_{q},\sigma^2_{q},\alpha_{3,q},\alpha_{4,q})
\quad
$$
for
$$
q=1,2,\ldots,Q
$$
, then each
$$
\gamma_n
$$
can
be expressed in terms of the equivalent
$$
n
$$
raw moments:
$$
\gamma_{n,q}= m_{n,q} \gamma_{0,q} \qquad \quad \textrm{for} \quad n=1,2,3,4 \quad \text{ and } \quad q = 1,2, \dots ,Q
$$
where
$$
\gamma_{0,q}
$$
is generally taken to be the duration of the
$$
q^{th}
$$
time-history, or the number of points if
$$
\Delta t
$$
is constant.
The benefit of expressing the statistical moments in terms of
$$
\gamma
$$
is that the
$$
Q
$$
sets can be combined by addition, and there is no upper limit on the value of
$$
Q
$$
.
$$
\gamma_{n,c}= \sum_{q=1}^Q \gamma_{n,q} \quad \quad \text{for } n=0,1,2,3,4
$$
where the subscript
$$
_c
$$
represents the concatenated time-history or combined
$$
\gamma
$$
.
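A small Python sketch of this combine-by-addition bookkeeping for the simplest case (unit sample spacing, so γ₀ is the count and γₙ is the n-th power sum; the helper names are illustrative):

```python
import numpy as np

def gammas(x):
    """Return [gamma_0, ..., gamma_4] for one time-history (unit sample spacing)."""
    return np.array([len(x)] + [np.sum(x ** n) for n in range(1, 5)], dtype=float)

def stats_from_gammas(g):
    """Recover mean and (population) variance from the combined accumulators."""
    count, s1, s2 = g[0], g[1], g[2]
    mean = s1 / count
    var = s2 / count - mean ** 2
    return mean, var

rng = np.random.default_rng(1)
segments = [rng.normal(size=n) for n in (100, 250, 37)]   # Q = 3 time-histories

combined = sum(gammas(x) for x in segments)               # combine by addition
mean, var = stats_from_gammas(combined)

all_data = np.concatenate(segments)                       # check against one pass
print(np.isclose(mean, all_data.mean()), np.isclose(var, all_data.var()))
```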
|
https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance
|
passage: ## Applications
Processing methods and application areas include storage, data compression, music information retrieval, speech processing, localization, acoustic detection, transmission, noise cancellation, acoustic fingerprinting, sound recognition, synthesis, and enhancement (e.g. equalization, filtering, level compression, echo and reverb removal or addition, etc.).
### Audio broadcasting
Audio signal processing is used when broadcasting audio signals in order to enhance their fidelity or optimize for bandwidth or latency. In this domain, the most important audio processing takes place just before the transmitter. The audio processor here must prevent or minimize overmodulation, compensate for non-linear transmitters (a potential issue with medium wave and shortwave broadcasting), and adjust overall loudness to the desired level.
### Active noise control
Active noise control is a technique designed to reduce unwanted sound. By creating a signal that is identical to the unwanted noise but with the opposite polarity, the two signals cancel out due to destructive interference.
### Audio synthesis
Audio synthesis is the electronic generation of audio signals. A musical instrument that accomplishes this is called a synthesizer. Synthesizers can either imitate sounds or generate new ones. Audio synthesis is also used to generate human speech using speech synthesis.
### Audio effects
Audio effects alter the sound of a musical instrument or other audio source.
|
https://en.wikipedia.org/wiki/Audio_signal_processing
|
passage: The functional is lower semi-continuous:
to see this, choose a Cauchy sequence of BV-functions converging to . Then, since all the functions of the sequence and their limit function are integrable and by the definition of lower limit
Now considering the supremum on the set of functions such that then the following inequality holds true
which is exactly the definition of lower semicontinuity.
### BV(Ω) is a Banach space
By definition is a subset of , while linearity follows from the linearity properties of the defining integral i.e.
for all therefore for all , and
for all , therefore for all , and all . The proved vector space properties imply that is a vector subspace of . Consider now the function defined as
where is the usual norm: it is easy to prove that this is a norm on . To see that is complete with respect to it, i.e. that it is a Banach space, consider a Cauchy sequence in . By definition it is also a Cauchy sequence in and therefore has a limit in : since is bounded in for each , then by lower semicontinuity of the variation , therefore is a BV function. Finally, again by lower semicontinuity, choosing an arbitrarily small positive number . From this we deduce that is continuous because it is a norm.
### BV(Ω) is not separable
To see this, it is sufficient to consider the following example belonging to the space: for each 0 < α < 1, define
as the characteristic function of the left-closed interval .
|
https://en.wikipedia.org/wiki/Bounded_variation
|
passage: As a result, use of the term normal to mean "orthogonal" is often avoided. The word "normal" also has a different meaning in probability and statistics.
A vector space with a bilinear form generalizes the case of an inner product. When the bilinear form applied to two vectors results in zero, then they are orthogonal. The case of a pseudo-Euclidean plane uses the term hyperbolic orthogonality. In the diagram, axes x′ and t′ are hyperbolic-orthogonal for any given
$$
\phi
$$
.
## Euclidean vector spaces
In Euclidean space, two vectors are orthogonal if and only if their dot product is zero, i.e. they make an angle of 90° (
$$
\frac{\pi}{2}
$$
radians), or one of the vectors is zero. Hence orthogonality of vectors is an extension of the concept of perpendicular vectors to spaces of any dimension.
The orthogonal complement of a subspace is the space of all vectors that are orthogonal to every vector in the subspace. In a three-dimensional Euclidean vector space, the orthogonal complement of a line through the origin is the plane through the origin perpendicular to it, and vice versa.
Note that the geometric concept of two planes being perpendicular does not correspond to the orthogonal complement, since in three dimensions a pair of vectors, one from each of a pair of perpendicular planes, might meet at any angle.
|
https://en.wikipedia.org/wiki/Orthogonality_%28mathematics%29
|
passage: We impose the Robin boundary condition
$$
-\sum_{k,l}\nu_k a^{kl}\frac{\partial u}{\partial x_l} = c(u-g),
$$
where ν_k is the component of the unit outward normal vector in the k-th direction. The system to be solved is
$$
\sum_j\left(\sum_{k,l}\int_\Omega a^{kl}\frac{\partial\varphi_i}{\partial x_k}\frac{\partial\varphi_j}{\partial x_l}dx+\int_{\partial\Omega}c\varphi_i\varphi_j\, ds\right)u_j = \int_\Omega\varphi_i f\, dx+\int_{\partial\Omega}c\varphi_i g\, ds,
$$
as can be shown using an analogue of Green's identity. The coefficients are still found by solving a system of linear equations, but the matrix representing the system is markedly different from that for the ordinary Poisson problem.
In general, to each scalar elliptic operator of order 2k, there is associated a bilinear form B on the Sobolev space H^k, so that the weak formulation of the equation is
$$
B[u,v] = (f,v)
$$
for all functions in . Then the stiffness matrix for this problem is
$$
\mathbf A_{ij} = B[\varphi_j,\varphi_i].
$$
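As a concrete instance of A_ij = B[φ_j, φ_i], here is a sketch assembling the stiffness matrix for the 1-D Laplace operator with piecewise-linear (P1) hat functions on a uniform mesh; this simplest model problem is an assumed example, not taken from the passage:

```python
import numpy as np

n_elements = 4
nodes = np.linspace(0.0, 1.0, n_elements + 1)
n_nodes = len(nodes)

A = np.zeros((n_nodes, n_nodes))
for e in range(n_elements):
    h = nodes[e + 1] - nodes[e]
    # Local stiffness of one element: integral of phi_i' phi_j' over the element.
    local = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    idx = [e, e + 1]
    for i in range(2):
        for j in range(2):
            A[idx[i], idx[j]] += local[i, j]

print(A)   # tridiagonal: 2/h on the interior diagonal, -1/h off-diagonal
```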
|
https://en.wikipedia.org/wiki/Stiffness_matrix
|
passage: ### Five-part rules
Substituting the second cosine rule into the first and simplifying gives:
$$
\begin{align}
\cos a &= (\cos a \,\cos c + \sin a \,\sin c \,\cos B) \cos c + \sin b \,\sin c \,\cos A \\[4pt]
\cos a \,\sin^2 c &= \sin a \,\cos c \,\sin c \,\cos B + \sin b \,\sin c \,\cos A
\end{align}
$$
Cancelling the factor of sin c gives
$$
\cos a \sin c = \sin a \,\cos c \,\cos B + \sin b \,\cos A
$$
Similar substitutions in the other cosine and supplementary cosine formulae give a large variety of 5-part rules. They are rarely used.
### Cagnoli's Equation
Multiplying the first cosine rule by cos A gives
$$
\cos a \cos A = \cos b \,\cos c \,\cos A + \sin b \,\sin c - \sin b \,\sin c \,\sin^2 A.
$$
Similarly, multiplying the first supplementary cosine rule by cos a yields
$$
\cos a \cos A = -\cos B \,\cos C \,\cos a + \sin B \,\sin C - \sin B \,\sin C \,\sin^2 a.
$$
|
https://en.wikipedia.org/wiki/Spherical_trigonometry
|
passage: For linear non-isotropic materials, ε becomes a matrix; even more generally, ε may be replaced by a tensor, which may depend upon the electric field itself, or may exhibit frequency dependence (hence dispersion).
For a linear isotropic dielectric, the polarization is given by:
$$
\mathbf{P} = \varepsilon_0 \chi_\mathrm{e} \, \mathbf{E} = \varepsilon_0 (\varepsilon_\mathrm{r} - 1) \, \mathbf{E} ~,
$$
where
$$
\chi_\mathrm{e}
$$
is known as the susceptibility of the dielectric to electric fields. Note that
$$
\varepsilon = \varepsilon_\mathrm{r} \, \varepsilon_0 = \left( 1 + \chi_\mathrm{e} \right) \, \varepsilon_0 ~.
$$
## Necessity
Some implications of the displacement current follow, which agree with experimental observation, and with the requirements of logical consistency for the theory of electromagnetism.
### Generalizing Ampère's circuital law
#### Current in capacitors
An example illustrating the need for the displacement current arises in connection with capacitors with no medium between the plates. Consider the charging capacitor in the figure. The capacitor is in a circuit that causes equal and opposite charges to appear on the left plate and the right plate, charging the capacitor and increasing the electric field between its plates. No actual charge is transported through the vacuum between its plates.
|
https://en.wikipedia.org/wiki/Displacement_current
|
passage: In fluid theory, in which the plasma is modeled as a dispersive dielectric medium, the energy of Langmuir waves is known: field energy multiplied by the Brillouin factor
$$
\partial_\omega(\omega\epsilon)
$$
.
But damping cannot be derived in this model. To calculate energy exchange of the wave with resonant electrons, Vlasov plasma theory has to be expanded to second order and problems about suitable initial conditions and secular terms arise.
In Ref. these problems are studied. Because calculations for an infinite wave are deficient in second order, a wave packet is analysed. Second-order initial conditions are found that suppress secular behavior and excite a wave packet of which the energy agrees with fluid theory. The figure shows the energy density of a wave packet traveling at the group velocity, its energy being carried away by electrons moving at the phase velocity. Total energy, the area under the curves, is conserved.
### The Cauchy problem for perturbative solutions
The rigorous mathematical theory is based on solving the Cauchy problem for the evolution equation (here the partial differential Vlasov–Poisson equation) and proving estimates on the solution.
First a rather complete linearized mathematical theory has been developed since Landau.
Going beyond the linearized equation and dealing with the nonlinearity has been a longstanding problem in the mathematical theory of Landau damping.
|
https://en.wikipedia.org/wiki/Landau_damping
|
passage: The closest pair of points problem or closest pair problem is a problem of computational geometry: given
$$
n
$$
points in metric space, find a pair of points with the smallest distance between them. The closest pair problem for points in the Euclidean plane was among the first geometric problems that were treated at the origins of the systematic study of the computational complexity of geometric algorithms.
## Time bounds
Randomized algorithms that solve the problem in linear time are known, in Euclidean spaces whose dimension is treated as a constant for the purposes of asymptotic analysis. This is significantly faster than the
$$
O(n^2)
$$
time (expressed here in big O notation) that would be obtained by a naive algorithm of finding distances between all pairs of points and selecting the smallest.
It is also possible to solve the problem without randomization, in random-access machine models of computation with unlimited memory that allow the use of the floor function, in near-linear
$$
O(n\log\log n)
$$
time. In even more restricted models of computation, such as the algebraic decision tree, the problem can be solved in the somewhat slower
$$
O(n\log n)
$$
time bound, and this is optimal for this model, by a reduction from the element uniqueness problem. Both sweep line algorithms and divide-and-conquer algorithms with this slower time bound are commonly taught as examples of these algorithm design techniques.
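For contrast with the fast algorithms, here is a sketch of the naive quadratic method the passage mentions (the sample points are illustrative):

```python
from itertools import combinations
from math import dist, inf

def closest_pair_naive(points):
    # Check all n*(n-1)/2 pairs and keep the smallest Euclidean distance.
    best, best_pair = inf, None
    for p, q in combinations(points, 2):
        d = dist(p, q)
        if d < best:
            best, best_pair = d, (p, q)
    return best, best_pair

points = [(0.0, 0.0), (3.0, 4.0), (1.0, 1.0), (0.5, 0.2)]
print(closest_pair_naive(points))   # (~0.539, ((0.0, 0.0), (0.5, 0.2)))
```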
|
https://en.wikipedia.org/wiki/Closest_pair_of_points_problem
|
passage: ### Langevin equation
The diffusion equation yields an approximation of the time evolution of the probability density function associated with the position of the particle going under a Brownian movement under the physical definition. The approximation becomes valid on timescales much larger than the timescale of individual atomic collisions, since it does not include a term to describe the acceleration of particles during collision. The time evolution of the position of the Brownian particle over all time scales is described by the Langevin equation, an equation that involves a random force field representing the effect of the thermal fluctuations of the solvent on the particle. At longer time scales, where acceleration is negligible, individual particle dynamics can be approximated using Brownian dynamics in place of Langevin dynamics.
### Astrophysics: star motion within galaxies
In stellar dynamics, a massive body (star, black hole, etc.) can experience Brownian motion as it responds to gravitational forces from surrounding stars. The rms velocity V of the massive object, of mass M, is related to the rms velocity
$$
v_\star
$$
of the background stars by
$$
MV^2 \approx m v_\star^2
$$
where
$$
m \ll M
$$
is the mass of the background stars. The gravitational force from the massive object causes nearby stars to move faster than they otherwise would, increasing both
$$
v_\star
$$
and V.
|
https://en.wikipedia.org/wiki/Brownian_motion
|
passage: One feature of the bundle of densities (again assuming orientability) L is that Ls is well-defined for real number values of s; this can be read from the transition functions, which take strictly positive real values. This means for example that we can take a half-density, the case where s = 1/2. In general we can take sections of W, the tensor product of V with Ls, and consider tensor density fields with weight s.
Half-densities are applied in areas such as defining integral operators on manifolds, and geometric quantization.
## Flat case
When M is a Euclidean space and all the fields are taken to be invariant by translations by the vectors of M, we get back to a situation where a tensor field is synonymous with a tensor 'sitting at the origin'. This does no great harm, and is often used in applications. As applied to tensor densities, it does make a difference. The bundle of densities cannot seriously be defined 'at a point'; and therefore a limitation of the contemporary mathematical treatment of tensors is that tensor densities are defined in a roundabout fashion.
## Cocycles and chain rules
As an advanced explanation of the tensor concept, one can interpret the chain rule in the multivariable case, as applied to coordinate changes, also as the requirement for self-consistent concepts of tensor giving rise to tensor fields.
Abstractly, we can identify the chain rule as a 1-cocycle. It gives the consistency required to define the tangent bundle in an intrinsic way.
|
https://en.wikipedia.org/wiki/Tensor_field
|
passage: Polytopes also began to be studied in non-Euclidean spaces such as hyperbolic space.
An important milestone was reached in 1948 with H. S. M. Coxeter's book Regular Polytopes, summarizing work to date and adding new findings of his own.
Meanwhile, the French mathematician Henri Poincaré had developed the topological idea of a polytope as the piecewise decomposition (e.g. CW-complex) of a manifold. Branko Grünbaum published his influential work on Convex Polytopes in 1967.
In 1952 Geoffrey Colin Shephard generalised the idea as complex polytopes in complex space, where each real dimension has an imaginary one associated with it. Coxeter developed the theory further.
The conceptual issues raised by complex polytopes, non-convexity, duality and other phenomena led Grünbaum and others to the more general study of abstract combinatorial properties relating vertices, edges, faces and so on. A related idea was that of incidence complexes, which studied the incidence or connection of the various elements with one another. These developments led eventually to the theory of abstract polytopes as partially ordered sets, or posets, of such elements. Peter McMullen and Egon Schulte published their book Abstract Regular Polytopes in 2002.
Enumerating the uniform polytopes, convex and nonconvex, in four or more dimensions remains an outstanding problem. The convex uniform 4-polytopes were fully enumerated by John Conway and Michael Guy using a computer in 1965; in higher dimensions this problem was still open as of 1997.
|
https://en.wikipedia.org/wiki/Polytope
|
passage: It has nilpotency class 2 with central series 1, Z(H), H.
- The multiplicative group of invertible upper triangular n × n matrices over a field F is not in general nilpotent, but is solvable.
- Any nonabelian group G such that G/Z(G) is abelian has nilpotency class 2, with central series {1}, Z(G), G.
The natural numbers k for which any group of order k is nilpotent have been characterized.
## Explanation of term
Nilpotent groups are called so because the "adjoint action" of any element is nilpotent, meaning that for a nilpotent group
$$
G
$$
of nilpotence degree
$$
n
$$
and an element
$$
g
$$
, the function
$$
\operatorname{ad}_g \colon G \to G
$$
defined by
$$
\operatorname{ad}_g(x) := [g,x]
$$
(where
$$
[g,x]=g^{-1} x^{-1} g x
$$
is the commutator of
$$
g
$$
and
$$
x
$$
) is nilpotent in the sense that the
$$
n
$$
th iteration of the function is trivial:
$$
\left(\operatorname{ad}_g\right)^n(x)=e
$$
for all
$$
x
$$
in
$$
G
$$
.
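A concrete illustration (an assumed example, not from the passage): in the Heisenberg group of 3×3 upper unitriangular integer matrices, which has nilpotency class 2, the second iterate of ad_g is always trivial:

```python
import numpy as np

def heis(a, b, c):
    # Upper unitriangular matrix [[1, a, c], [0, 1, b], [0, 0, 1]].
    return np.array([[1, a, c], [0, 1, b], [0, 0, 1]])

def inv(m):
    # Closed-form inverse of an upper unitriangular 3x3 matrix.
    a, b, c = m[0, 1], m[1, 2], m[0, 2]
    return heis(-a, -b, a * b - c)

def ad(g, x):
    # The commutator [g, x] = g^{-1} x^{-1} g x.
    return inv(g) @ inv(x) @ g @ x

g, x = heis(1, 2, 3), heis(4, 5, 6)
once = ad(g, x)        # lies in the center Z(H)
twice = ad(g, once)    # (ad_g)^2 (x)
print(np.array_equal(twice, np.eye(3, dtype=int)))  # True: the identity element
```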
This is not a defining characteristic of nilpotent groups: groups for which
$$
\operatorname{ad}_g
$$
is nilpotent of degree
$$
n
$$
(in the sense above) are called n-Engel groups, and need not be nilpotent in general.
|
https://en.wikipedia.org/wiki/Nilpotent_group
|
passage: In order for the function to approach zero as x approaches negative infinity (as the CDF must do), an integration constant of 1/2 must be added. This gives for the CDF of Voigt:
$$
F(x;\mu,\sigma)=\operatorname{Re}\left[\frac{1}{2}+
\frac{\operatorname{erf}(z)}{2}
+\frac{iz^2}{\pi}\,_2F_2\left(1,1;\frac{3}{2},2;-z^2\right)\right].
$$
### The uncentered Voigt profile
If the Gaussian profile is centered at
$$
\mu_G
$$
and the Lorentzian profile is centered at
$$
\mu_L
$$
, the convolution is centered at
$$
\mu_V = \mu_G+\mu_L
$$
and the characteristic function is:
$$
\varphi_f(t;\sigma,\gamma,\mu_\mathrm{G},\mu_\mathrm{L})= e^{i(\mu_\mathrm{G}+\mu_\mathrm{L})t-\sigma^2t^2/2 - \gamma |t|}.
$$
The probability density function is simply offset from the centered profile by
$$
\mu_V
$$
:
$$
V(x;\mu_V,\sigma,\gamma)=\frac{\operatorname{Re}[w(z)]}{\sigma\sqrt{2 \pi}},
$$
where:
$$
z= \frac{x-\mu_V+i \gamma}{\sigma\sqrt{2}}
$$
The mode and median are both located at
$$
\mu_V
$$
.
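In practice the profile is evaluated through the Faddeeva function w(z); a sketch using SciPy's scipy.special.wofz (parameter values are illustrative):

```python
import numpy as np
from scipy.special import wofz   # the Faddeeva function w(z)

def voigt(x, mu_v, sigma, gamma):
    z = (x - mu_v + 1j * gamma) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

x = np.linspace(-5.0, 5.0, 101)
v = voigt(x, mu_v=0.5, sigma=1.0, gamma=0.5)
print(x[np.argmax(v)])   # ~0.5: the mode sits at mu_V, as stated above
```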
|
https://en.wikipedia.org/wiki/Voigt_profile
|
passage: For any number of fields:
$$
\dot{\phi}_i = \frac{\delta H}{\delta \pi_i}\,, \qquad \dot{\pi}_i = -\frac{\delta H}{\delta \phi_i}\,,
$$
where again the overdots are partial time derivatives, the variational derivative with respect to the fields
$$
\frac{\delta H}{\delta \phi_i} = \frac{\partial\mathcal{H}}{\partial \phi_i} - \nabla\cdot \frac{\partial \mathcal{H}}{\partial (\nabla \phi_i)} \,,
$$
with · the dot product, must be used instead of simply partial derivatives.
## Phase space
The fields and conjugates form an infinite dimensional phase space, because fields have an infinite number of degrees of freedom.
## Poisson bracket
For two functions which depend on the fields and , their spatial derivatives, and the space and time coordinates,
$$
A = \int d^3 x \mathcal{A}\left(\phi_1,\phi_2,\ldots,\pi_1,\pi_2,\ldots,\nabla\phi_1,\nabla\phi_2,\ldots,\nabla\pi_1,\nabla\pi_2,\ldots,\mathbf{x},t\right)\,,
$$
|
https://en.wikipedia.org/wiki/Hamiltonian_field_theory
|
passage: A monoid homomorphism from the free monoid to any other monoid (M,•) is a function f such that
- f(x1...xn) = f(x1) • ... • f(xn)
- f() = e
where e is the identity on M. Computationally, every such homomorphism corresponds to a map operation applying f to all the elements of a list, followed by a fold operation which combines the results using the binary operator •. This computational paradigm (which can be generalized to non-associative binary operators) has inspired the MapReduce software framework.
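A tiny Python sketch of this map-then-fold correspondence, taking the free monoid of strings and the target monoid (ℤ, +, 0) as an assumed example:

```python
from functools import reduce

def hom(word, f, op, identity):
    # Map f over the letters, then fold with the monoid operation.
    return reduce(op, map(f, word), identity)

# f sends every letter to 1; the induced homomorphism is the length function.
length = lambda w: hom(w, lambda ch: 1, lambda a, b: a + b, 0)

assert length("") == 0                # f() = e: the empty word maps to the identity
assert length("monoid") == 6
assert length("ab" + "cd") == length("ab") + length("cd")   # homomorphism law
```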
## Endomorphisms
An endomorphism of A∗ is a morphism from A∗ to itself. The identity map I is an endomorphism of A∗, and the endomorphisms form a monoid under composition of functions.
An endomorphism f is prolongable if there is a letter a such that f(a) = as for a non-empty string s.
### String projection
The operation of string projection is an endomorphism.
|
https://en.wikipedia.org/wiki/Free_monoid
|
passage: This general definition recovers well-known cohomology theories:
- The cohomology of a Lie algebroid over a point (i.e. of a Lie algebra) coincides with its Chevalley-Eilenberg cohomology as a Lie algebra.
- The cohomology of a tangent Lie algebroid TM coincides with the de Rham cohomology of M.
- The cohomology of a foliation Lie algebroid coincides with the leafwise cohomology of the foliation.
- The cohomology of the cotangent Lie algebroid associated to a Poisson structure coincides with the Poisson cohomology of the underlying Poisson manifold.
## Lie groupoid-Lie algebroid correspondence
The standard construction which associates a Lie algebra to a Lie group generalises to this setting: to every Lie groupoid one can canonically associate a Lie algebroid defined as follows:
- the vector bundle is
$$
\mathrm{Lie}(G) = A:=u^*T^sG
$$
, where T^sG is the vertical bundle of the source fibre and u is the groupoid unit map;
- the sections of A are identified with the right-invariant vector fields on G, so that A inherits a Lie bracket;
- the anchor map is the differential
$$
\rho := dt_{\mid A}: A \to TM
$$
of the target map t.
|
https://en.wikipedia.org/wiki/Lie_algebroid
|
passage: Having done so, these same centers execute their second function: The transfer of that energy by resonance energy transfer to a specific chlorophyll pair in the reaction center of the photosystems.
1. This specific pair performs the final function of chlorophylls: charge separation, which produces the unbound protons (H+) and electrons (e−) that separately propel biosynthesis.
The two currently accepted photosystem units are photosystem I and photosystem II, which have their own distinct reaction centres, named P700 and P680, respectively. These centres are named after the wavelength (in nanometers) of their red-peak absorption maximum. The identity, function and spectral properties of the types of chlorophyll in each photosystem are distinct and determined by each other and the protein structure surrounding them.
The function of the reaction center of chlorophyll is to absorb light energy and transfer it to other parts of the photosystem. The absorbed energy of the photon is transferred to an electron in a process called charge separation. The removal of the electron from the chlorophyll is an oxidation reaction. The chlorophyll donates the high energy electron to a series of molecular intermediates called an electron transport chain. The charged reaction center of chlorophyll (P680+) is then reduced back to its ground state by accepting an electron stripped from water. The electron that reduces P680+ ultimately comes from the oxidation of water into O2 and H+ through several intermediates.
|
https://en.wikipedia.org/wiki/Chlorophyll
|
passage: Reversing this is accomplished by the j-invariant j(E), which can be used to determine τ and hence a torus.
## Classification of Riemann surfaces
The set of all Riemann surfaces can be divided into three subsets: hyperbolic, parabolic and elliptic Riemann surfaces. Geometrically, these correspond to surfaces with negative, vanishing or positive constant sectional curvature. That is, every connected Riemann surface X admits a unique complete 2-dimensional real Riemann metric with constant curvature equal to −1, 0 or 1 that belongs to the conformal class of Riemannian metrics determined by its structure as a Riemann surface. This can be seen as a consequence of the existence of isothermal coordinates.
In complex analytic terms, the Poincaré–Koebe uniformization theorem (a generalization of the Riemann mapping theorem) states that every simply connected Riemann surface is conformally equivalent to one of the following:
- The Riemann sphere C ∪ {∞}, which is isomorphic to P1(C);
- The complex plane C;
- The open disk D = {z ∈ C : |z| < 1}, which is isomorphic to the upper half-plane H = {z ∈ C : Im(z) > 0}.
A Riemann surface is elliptic, parabolic or hyperbolic according to whether its universal cover is isomorphic to P1(C), C or D. The elements in each class admit a more precise description.
|
https://en.wikipedia.org/wiki/Riemann_surface
|
passage: The symmetry groups of the Platonic solids are a special class of three-dimensional point groups known as polyhedral groups. The high degree of symmetry of the Platonic solids can be interpreted in a number of ways. Most importantly, the vertices of each solid are all equivalent under the action of the symmetry group, as are the edges and faces. One says the action of the symmetry group is transitive on the vertices, edges, and faces. In fact, this is another way of defining regularity of a polyhedron: a polyhedron is regular if and only if it is vertex-uniform, edge-uniform, and face-uniform.
There are only three symmetry groups associated with the Platonic solids rather than five, since the symmetry group of any polyhedron coincides with that of its dual. This is easily seen by examining the construction of the dual polyhedron. Any symmetry of the original must be a symmetry of the dual and vice versa. The three polyhedral groups are:
- the tetrahedral group T,
- the octahedral group O (which is also the symmetry group of the cube), and
- the icosahedral group I (which is also the symmetry group of the dodecahedron).
The orders of the proper (rotation) groups are 12, 24, and 60 respectively – precisely twice the number of edges in the respective polyhedra. The orders of the full symmetry groups are twice as much again (24, 48, and 120). See (Coxeter 1973) for a derivation of these facts.
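A quick arithmetic check of the edge counts behind these orders (a small illustrative script):

```python
# rotation group order = 2 * edges; full symmetry group order = 4 * edges
edges = {"tetrahedral T": 6, "octahedral O": 12, "icosahedral I": 30}
for group, e in edges.items():
    print(f"{group}: proper order {2 * e}, full order {4 * e}")
# tetrahedral T: 12/24, octahedral O: 24/48, icosahedral I: 60/120
```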
|
https://en.wikipedia.org/wiki/Platonic_solid
|
passage: The benefit of the expansion in terms of the real harmonic functions
$$
Y_{\ell m}
$$
is that for real functions
$$
f:S^2 \to \R
$$
the expansion coefficients
$$
f_{\ell m}
$$
are guaranteed to be real, whereas their coefficients
$$
f_{\ell}^m
$$
in their expansion in terms of the
$$
Y_{\ell}^m
$$
(considering them as functions
$$
f: S^2 \to \Complex \supset \R
$$
) do not have that property.
## Spectrum analysis
### Power spectrum in signal processing
The total power of a function f is defined in the signal processing literature as the integral of the function squared, divided by the area of its domain.
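For a sampled signal this definition reduces to a mean square; a minimal sketch, with a unit-period sine as an assumed test function:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 4096, endpoint=False)
f = np.sin(2 * np.pi * t)
# integral of f^2 over [0, 1) divided by the domain length, i.e. the mean square
print(np.mean(f**2))   # ~0.5, the total power of a unit-amplitude sinusoid
```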
|
https://en.wikipedia.org/wiki/Spherical_harmonics
|
passage: To wit, for each point p, α determines a function α_p defined on tangent vectors at p so that the following linearity condition holds for all tangent vectors X_p and Y_p, and all real numbers a and b:
$$
\alpha_p \left(aX_p + bY_p\right) = a\alpha_p \left(X_p\right) + b\alpha_p \left(Y_p\right)\,.
$$
As p varies, α is assumed to be a smooth function in the sense that
$$
p \mapsto \alpha_p \left(X_p\right)
$$
is a smooth function of p for any smooth vector field X.
Any covector field α has components in the basis of vector fields f = (X_1, ..., X_n). These are determined by
$$
\alpha_i = \alpha \left(X_i\right)\,,\quad i = 1, 2, \dots, n\,.
$$
Denote the row vector of these components by
$$
\alpha[\mathbf{f}] = \big\lbrack\begin{array}{cccc} \alpha_1 & \alpha_2 & \dots & \alpha_n \end{array}\big\rbrack \,.
$$
Under a change of the frame f by a matrix A, the row vector of components α[f] changes by the rule
$$
\alpha[\mathbf{f}A] = \alpha[\mathbf{f}]A \,.
$$
That is, the row vector of components transforms as a covariant vector.
|
https://en.wikipedia.org/wiki/Metric_tensor
|
passage: ### Circumradius theorem
Denoting the altitude from one side of a triangle as , the other two sides as and , and the triangle's circumradius (radius of the triangle's circumscribed circle) as , the altitude is given by
$$
h_a=\frac{bc}{2R}.
$$
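A numeric sanity check of this identity, using the 13-14-15 triangle as an assumed example:

```python
import math

a, b, c = 14.0, 13.0, 15.0
s = (a + b + c) / 2
area = math.sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula: 84
R = a * b * c / (4 * area)                          # circumradius
h_a = 2 * area / a                                  # altitude to side a
print(h_a, b * c / (2 * R))                         # both print 12.0
```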
### Interior point
If are the perpendicular distances from any point to the sides, and are the altitudes to the respective sides, then
$$
\frac{p_1}{h_1} +\frac{p_2}{h_2} + \frac{p_3}{h_3} = 1.
$$
### Area theorem
Denoting the altitudes of any triangle from sides respectively as , and denoting the semi-sum of the reciprocals of the altitudes as
$$
H = \tfrac{h_a^{-1} + h_b^{-1} + h_c^{-1}}{2}
$$
we have
$$
\mathrm{Area}^{-1} = 4 \sqrt{H(H-h_a^{-1})(H-h_b^{-1})(H-h_c^{-1})}.
$$
### General point on an altitude
If is any point on an altitude of any triangle , then
$$
\overline{AC}^2 + \overline{EB}^2 = \overline{AB}^2 + \overline{CE}^2.
$$
|
https://en.wikipedia.org/wiki/Altitude_%28triangle%29
|
passage: The following pictures show curves with winding numbers between −2 and 3:
## Formal definition
Let
$$
\gamma:[0,1] \to \Complex \setminus \{a\}
$$
be a continuous closed path on the plane minus one point. The winding number of
$$
\gamma
$$
around
$$
a
$$
is the integer
$$
\text{wind}(\gamma,a) = s(1) - s(0),
$$
where
$$
(\rho,s)
$$
is the path written in polar coordinates, i.e. the lifted path through the covering map
$$
p:\Reals_{>0} \times \Reals \to \Complex \setminus \{a\}: (\rho_0,s_0) \mapsto a+\rho_0 e^{i2\pi s_0}.
$$
The winding number is well defined because of the existence and uniqueness of the lifted path (given the starting point in the covering space) and because all the fibers of
$$
p
$$
are of the form
$$
\rho_0 \times (s_0 + \Z)
$$
(so the above expression does not depend on the choice of the starting point). It is an integer because the path is closed.
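The lifted-path definition suggests a direct numerical scheme: accumulate the small angle increments along a sampled closed path and divide by 2π. A sketch (the sampling density is an assumption; each step must stay below π):

```python
import math

def winding_number(points, a):
    total = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:] + points[:1]):
        dt = math.atan2(y1 - a[1], x1 - a[0]) - math.atan2(y0 - a[1], x0 - a[0])
        # unwrap so each step is the small increment of the lifted angle s
        if dt > math.pi:
            dt -= 2 * math.pi
        elif dt < -math.pi:
            dt += 2 * math.pi
        total += dt
    return round(total / (2 * math.pi))

# a circle traversed twice counterclockwise has winding number 2 about the origin
N = 200
path = [(math.cos(4 * math.pi * k / N), math.sin(4 * math.pi * k / N)) for k in range(N)]
print(winding_number(path, (0.0, 0.0)))   # 2
```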
## Alternative definitions
Winding number is often defined in different ways in various parts of mathematics. All of the definitions below are equivalent to the one given above:
|
https://en.wikipedia.org/wiki/Winding_number
|
passage: In mathematics, a periodic travelling wave (or wavetrain) is a periodic function of one-dimensional space that moves with constant speed. Consequently, it is a special type of spatiotemporal oscillation that is a periodic function of both space and time.
Periodic travelling waves play a fundamental role in many mathematical equations, including self-oscillatory systems,I. S. Aranson, L. Kramer (2002) "The world of the complex Ginzburg–Landau equation", Rev. Mod. Phys. 74: 99–143. DOI:10.1103/RevModPhys.74.99
excitable systems and
reaction–diffusion–advection systems.
Equations of these types are widely used as mathematical models of biology, chemistry and physics, and many examples in phenomena resembling periodic travelling waves have been found empirically.
The mathematical theory of periodic travelling waves is most fully developed for partial differential equations, but these solutions also occur in a number of other types of mathematical system, including integrodifferential equations,P. Ashwin, M. V. Bartuccelli, T. J. Bridges, S. A. Gourley (2002) "Travelling fronts for the KPP equation with spatio-temporal delay", Z. Angew. Math. Phys. 53: 103–122.
DOI:0010-2571/02/010103-20
integrodifference equations,
coupled map lattices
and cellular automata.
|
https://en.wikipedia.org/wiki/Periodic_travelling_wave
|
passage: If the means are not known at the time of calculation, it may be more efficient to use the expanded version of the
$$
\widehat\alpha\text{ and }\widehat\beta
$$
equations. These expanded equations may be derived from the more general polynomial regression equations by defining the regression polynomial to be of order 1, as follows.
$$
\begin{bmatrix}
n & \sum_{i=1 }^nx_i \\
\sum_{i=1}^nx_i & \sum_{i=1}^nx_i^{2}
\end{bmatrix}
\begin{bmatrix}
\widehat\alpha \\
\widehat\beta
\end{bmatrix}
=
\begin{bmatrix}
\sum_{ i=1 }^ny_i \\
\sum_{ i=1 }^ny_ix_i
\end{bmatrix}
$$
The above system of linear equations may be solved directly, or stand-alone equations for
$$
\widehat\alpha\text{ and }\widehat\beta
$$
may be derived by expanding the matrix equations above.
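A minimal sketch that solves this 2×2 system directly by Cramer's rule (the data values are illustrative):

```python
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

n   = len(xs)
Sx  = sum(xs)
Sy  = sum(ys)
Sxx = sum(x * x for x in xs)
Sxy = sum(x * y for x, y in zip(xs, ys))

# [[n, Sx], [Sx, Sxx]] @ [alpha, beta] = [Sy, Sxy]
det   = n * Sxx - Sx * Sx
alpha = (Sy * Sxx - Sx * Sxy) / det
beta  = (n * Sxy - Sx * Sy) / det
print(alpha, beta)   # intercept 0.05, slope 1.99
```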
|
https://en.wikipedia.org/wiki/Simple_linear_regression
|
passage: Reactions of electrochemical cells.
1. Behaviour of microscopic systems using quantum mechanics and macroscopic systems using statistical thermodynamics.
1. Calculation of the energy of electron movement in molecules and metal complexes.
## Key concepts
The key concepts of physical chemistry are the ways in which pure physics is applied to chemical problems.
One of the key concepts in classical chemistry is that all chemical compounds can be described as groups of atoms bonded together and chemical reactions can be described as the making and breaking of those bonds. Predicting the properties of chemical compounds from a description of atoms and how they bond is one of the major goals of physical chemistry. To describe the atoms and bonds precisely, it is necessary to know both where the nuclei of the atoms are, and how electrons are distributed around them.
## Disciplines
Quantum chemistry, a subfield of physical chemistry especially concerned with the application of quantum mechanics to chemical problems, provides tools to determine how strong and what shape bonds are, how nuclei move, and how light can be absorbed or emitted by a chemical compound. Spectroscopy is the related sub-discipline of physical chemistry which is specifically concerned with the interaction of electromagnetic radiation with matter.
Another set of important questions in chemistry concerns what kind of reactions can happen spontaneously and which properties are possible for a given chemical mixture.
|
https://en.wikipedia.org/wiki/Physical_chemistry
|
passage: Renaming the dummy indices:
$$
\ddot \gamma^q \frac{\partial}{\partial x^q}+\dot \gamma^i \dot \gamma^m \Gamma^{q}_{im} \frac{\partial}{\partial x^q}=0
$$
We finally arrive at the geodesic equation:
$$
\ddot \gamma^q +\dot \gamma^i \dot \gamma^m \Gamma^{q}_{im}=0
$$
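As an illustration, the geodesic equation can be integrated numerically once the Christoffel symbols are supplied. The sketch below assumes the unit 2-sphere (coordinates θ, φ) with illustrative initial data, and checks that the speed is conserved along the solution, as it must be for a geodesic.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Nonzero Christoffel symbols on the unit sphere:
# Gamma^theta_{phi phi} = -sin(theta) cos(theta), Gamma^phi_{theta phi} = cot(theta)
def geodesic(t, y):
    th, ph, dth, dph = y
    ddth = np.sin(th) * np.cos(th) * dph**2
    ddph = -2.0 * (np.cos(th) / np.sin(th)) * dth * dph
    return [dth, dph, ddth, ddph]

y0 = [np.pi / 2, 0.0, 0.5, 0.5]            # start on the equator
sol = solve_ivp(geodesic, (0.0, 10.0), y0, rtol=1e-9)
th, dth, dph = sol.y[0], sol.y[2], sol.y[3]
speed2 = dth**2 + np.sin(th)**2 * dph**2   # g_ij x'^i x'^j
print(np.ptp(speed2))                      # ~0: constant along the geodesic
```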
|
https://en.wikipedia.org/wiki/Geodesics_in_general_relativity
|
passage: In group theory, the symmetry group of a geometric object is the group of all transformations under which the object is invariant, endowed with the group operation of composition. Such a transformation is an invertible mapping of the ambient space which takes the object to itself, and which preserves all the relevant structure of the object. A frequent notation for the symmetry group of an object X is G = Sym(X).
For an object in a metric space, its symmetries form a subgroup of the isometry group of the ambient space. This article mainly considers symmetry groups in Euclidean geometry, but the concept may also be studied for more general types of geometric structure.
## Introduction
We consider the "objects" possessing symmetry to be geometric figures, images, and patterns, such as a wallpaper pattern. For symmetry of physical objects, one may also take their physical composition as part of the pattern. (A pattern may be specified formally as a scalar field, a function of position with values in a set of colors or substances; as a vector field; or as a more general function on the object.) The group of isometries of space induces a group action on objects in it, and the symmetry group Sym(X) consists of those isometries which map X to itself (as well as mapping any further pattern to itself). We say X is invariant under such a mapping, and the mapping is a symmetry of X.
|
https://en.wikipedia.org/wiki/Symmetry_group
|
passage: Reverse Polish notation (RPN), also known as reverse Łukasiewicz notation, Polish postfix notation or simply postfix notation, is a mathematical notation in which operators follow their operands, in contrast to prefix or Polish notation (PN), in which operators precede their operands. The notation does not need any parentheses for as long as each operator has a fixed number of operands.
The term postfix notation describes the general scheme in mathematics and computer sciences, whereas the term reverse Polish notation typically refers specifically to the method used to enter calculations into hardware or software calculators, which often have additional side effects and implications depending on the actual implementation involving a stack. The description "Polish" refers to the nationality of logician Jan Łukasiewicz, who invented Polish notation in 1924.
The first computer to use postfix notation, though it long remained essentially unknown outside of Germany, was Konrad Zuse's Z3 in 1941 as well as his Z4 in 1945. The reverse Polish scheme was again proposed in 1954 by Arthur Burks, Don Warren, and Jesse Wright and was independently reinvented by Friedrich L. Bauer and Edsger W. Dijkstra in the early 1960s to reduce computer memory access and use the stack to evaluate expressions. The algorithms and notation for this scheme were extended by the philosopher and computer scientist Charles L. Hamblin in the mid-1950s.
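The stack discipline is easy to state in code; a minimal evaluator for the four binary arithmetic operators (token handling kept deliberately simple):

```python
def eval_rpn(tokens):
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()          # operands come off in reverse order
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

print(eval_rpn("3 4 + 2 *".split()))   # (3 + 4) * 2 = 14.0
```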
During the 1970s and 1980s,
|
https://en.wikipedia.org/wiki/Reverse_Polish_notation
|
passage: Eduard Čech, Topological Spaces, revised by Zdenek Frolík and Miroslav Katetov, John Wiley & Sons, 1966.
## The continuum hypothesis
The continuum hypothesis (CH) states that there are no cardinals strictly between
$$
\aleph_0
$$
and
$$
2^{\aleph_0}.
$$
The latter cardinal number is also often denoted by
$$
\mathfrak{c}
$$
; it is the cardinality of the continuum (the set of real numbers). In this case
$$
2^{\aleph_0} = \aleph_1.
$$
Similarly, the generalized continuum hypothesis (GCH) states that for every infinite cardinal
$$
\kappa
$$
, there are no cardinals strictly between
$$
\kappa
$$
and
$$
2^\kappa
$$
. Both the continuum hypothesis and the generalized continuum hypothesis have been proved to be independent of the usual axioms of set theory, the Zermelo–Fraenkel axioms together with the axiom of choice (ZFC).
Indeed, Easton's theorem shows that, for regular cardinals
$$
\kappa
$$
, the only restrictions ZFC places on the cardinality of
$$
2^\kappa
$$
are that
$$
\kappa < \operatorname{cf}(2^\kappa)
$$
, and that the exponential function is non-decreasing.
|
https://en.wikipedia.org/wiki/Cardinal_number
|
passage: `Property Bar() As typeGetinstructionsReturn valueEnd GetSet (ByVal Value As type)instructionsEnd SetEnd Property` `ReadOnly Property Bar() As typeGetinstructionsReturn valueEnd GetEnd Property` `WriteOnly Property Bar() As typeSet (ByVal Value As type)instructionsEnd SetEnd Property`
Xojo: `ComputedProperty Bar() As typeGetinstructionsReturn valueEnd GetSet (ByVal Value As type)instructionsEnd SetEnd ComputedProperty` `ComputedProperty Bar() As typeGetinstructionsReturn valueEnd GetEnd ComputedProperty` `ComputedProperty Bar() As typeSet (value As type)instructionsEnd SetEnd ComputedProperty`
PHP: `function __get($property) { switch ($property) { case Bar : instructions ... return value; } }function __set($property, $value) { switch ($property) { case Bar : instructions } }` `function __get($property) { switch ($property) { case Bar : instructions ... return value; } }` `function __set($property, $value) { switch ($property) { case Bar : instructions } }`
Perl: `sub Bar { my $self = shift;
|
https://en.wikipedia.org/wiki/Comparison_of_programming_languages_%28object-oriented_programming%29
|
passage: Implementations that fully or nearly comply with the Haskell 98 standard include:
- The Glasgow Haskell Compiler (GHC) compiles to native code on many different processor architectures, and to ANSI C, via one of two intermediate languages: C--, or in more recent versions, LLVM (formerly Low Level Virtual Machine) bitcode. GHC has become the de facto standard Haskell dialect. There are libraries (e.g., bindings to OpenGL) that work only with GHC. GHC was also distributed with the Haskell platform. GHC features an asynchronous runtime that also schedules threads across multiple CPU cores similar to the Go runtime.
- Jhc, a Haskell compiler written by John Meacham, emphasizes speed and efficiency of generated programs and exploring new program transformations.
- Ajhc is a fork of Jhc.
- The Utrecht Haskell Compiler (UHC) is a Haskell implementation from Utrecht University. It supports almost all Haskell 98 features plus many experimental extensions. It is implemented using attribute grammars and is primarily used for research on generated type systems and language extensions.
Implementations no longer actively maintained include:
- The Haskell User's Gofer System (Hugs) is a bytecode interpreter. It was once one of the implementations used most widely, alongside the GHC compiler, but has now been mostly replaced by GHCi. It also comes with a graphics library.
- HBC is an early implementation supporting Haskell 1.4.
|
https://en.wikipedia.org/wiki/Haskell
|
passage: ### Physiological monitoring
This could be used to detect a user's affective state by monitoring and analyzing their physiological signs. These signs range from changes in heart rate and skin conductance to minute contractions of the facial muscles and changes in facial blood flow. This area is gaining momentum and we are now seeing real products that implement the techniques. The four main physiological signs that are usually analyzed are blood volume pulse, galvanic skin response, facial electromyography, and facial color patterns.
#### Blood volume pulse
#####
##### Overview
A subject's blood volume pulse (BVP) can be measured by a process called photoplethysmography, which produces a graph indicating blood flow through the extremities. The peaks of the waves indicate a cardiac cycle where the heart has pumped blood to the extremities. If the subject experiences fear or is startled, their heart usually 'jumps' and beats quickly for some time, causing the amplitude of the cardiac cycle to increase. This can clearly be seen on a photoplethysmograph when the distance between the trough and the peak of the wave has decreased. As the subject calms down, and as the body's inner core expands, allowing more blood to flow back to the extremities, the cycle will return to normal.
#####
##### Methodology
Infra-red light is shone on the skin by special sensor hardware, and the amount of light reflected is measured. The amount of reflected and transmitted light correlates to the BVP as light is absorbed by hemoglobin which is found richly in the bloodstream.
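A toy sketch of the idea: locate the cardiac-cycle peaks in a (here synthetic) BVP-like waveform and convert the inter-peak spacing into a pulse rate. The waveform, sample rate, and crude peak detector are all simplifying assumptions.

```python
import numpy as np

fs = 100.0                               # sample rate in Hz
t = np.arange(0.0, 10.0, 1 / fs)
bvp = np.sin(2 * np.pi * 1.2 * t)        # stand-in signal at 1.2 Hz (~72 bpm)

# local maxima as a crude peak detector
peaks = np.flatnonzero((bvp[1:-1] > bvp[:-2]) & (bvp[1:-1] > bvp[2:])) + 1
rate_bpm = 60.0 * (len(peaks) - 1) / ((peaks[-1] - peaks[0]) / fs)
print(round(rate_bpm))                   # 72
```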
|
https://en.wikipedia.org/wiki/Affective_computing
|
passage: In addition, this can be used to compute the Grothendieck group
$$
K(\mathbb{P}^n)
$$
by observing it is a projective bundle over the field
$$
\mathbb{F}
$$
.
### K0 of singular spaces and spaces with isolated quotient singularities
One recent technique for computing the Grothendieck group of spaces with minor singularities comes from evaluating the difference between
$$
K^0(X)
$$
and
$$
K_0(X)
$$
, which comes from the fact every vector bundle can be equivalently described as a coherent sheaf. This is done using the Grothendieck group of the Singularity category
$$
D_{sg}(X)
$$
from derived noncommutative algebraic geometry. It gives a long exact sequence starting with
$$
\cdots \to K^0(X) \to K_0(X) \to K_{sg}(X) \to 0
$$
where the higher terms come from higher K-theory. Note that vector bundles on a singular
$$
X
$$
are given by vector bundles
$$
E \to X_{sm}
$$
on the smooth locus
$$
X_{sm} \hookrightarrow X
$$
. This makes it possible to compute the Grothendieck group on weighted projective spaces since they typically have isolated quotient singularities.
|
https://en.wikipedia.org/wiki/K-theory
|
passage: It celebrates their uniqueness and differences, who are from seven to ten percent of the world's population. Thousands of left-handed people in today's society have to adapt to use right-handed tools and objects. Again according to the club, "in the U.K. alone there were over 20 regional events to mark the day in 2001—including left-v-right sports matches, a left-handed tea party, pubs using left-handed corkscrews where patrons drank and played pub games with the left hand only, and nationwide 'Lefty Zones' where left-handers' creativity, adaptability and sporting prowess were celebrated, whilst right-handers were encouraged to try out everyday left-handed objects to see just how awkward it can feel using the wrong equipment. "
## In other animals
Kangaroos and other macropod marsupials show a left-hand preference for everyday tasks in the wild. 'True' handedness is unexpected in marsupials however, because unlike placental mammals, they lack a corpus callosum. Left-handedness was particularly apparent in the red kangaroo (Macropus rufus) and the eastern gray kangaroo (Macropus giganteus). Red-necked (Bennett's) wallabies (Macropus rufogriseus) preferentially use their left hand for behaviours that involve fine manipulation, but the right for behaviours that require more physical strength. There was less evidence for handedness in arboreal species.
|
https://en.wikipedia.org/wiki/Handedness
|
passage: The numerical approach
For a transition involving a nonlinear change in perturbation variable or time-dependent coupling between the diabatic states, the equations of motion for the system dynamics cannot be solved analytically. The diabatic transition probability can still be obtained using one of the wide varieties of numerical solution algorithms for ordinary differential equations.
The equations to be solved can be obtained from the time-dependent Schrödinger equation:
$$
i\hbar\dot{\underline{c}}^A(t) = \mathbf{H}_A(t)\underline{c}^A(t) ,
$$
where
$$
\underline{c}^A(t)
$$
is a vector containing the adiabatic state amplitudes,
$$
\mathbf{H}_A(t)
$$
is the time-dependent adiabatic Hamiltonian, and the overdot represents a time derivative.
Comparison of the initial conditions used with the values of the state amplitudes following the transition can yield the diabatic transition probability. In particular, for a two-state system:
$$
P_D = |c^A_2(t_1)|^2
$$
for a system that began with
$$
|c^A_1(t_0)|^2 = 1
$$
.
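A self-contained sketch of this numerical approach for a Landau-Zener-type two-state system (ħ = 1; the sweep rate, coupling, and time window are illustrative). The amplitudes are propagated in the diabatic basis with a fixed-step Runge-Kutta scheme, so the diabatic transition probability is the survival probability of the initial state, which can be compared against the Landau-Zener formula.

```python
import numpy as np

alpha, delta = 1.0, 0.25                 # sweep rate and diabatic coupling

def rhs(t, c):
    H = np.array([[alpha * t, delta], [delta, -alpha * t]])
    return -1j * H @ c                   # i c' = H(t) c  =>  c' = -i H c

c = np.array([1.0 + 0j, 0.0 + 0j])       # |c_1(t_0)|^2 = 1
t, dt = -40.0, 1e-3
while t < 40.0:                          # classic RK4 steps
    k1 = rhs(t, c)
    k2 = rhs(t + dt / 2, c + dt / 2 * k1)
    k3 = rhs(t + dt / 2, c + dt / 2 * k2)
    k4 = rhs(t + dt, c + dt * k3)
    c += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

P_D = abs(c[0]) ** 2                     # probability of the diabatic passage
print(P_D, np.exp(-np.pi * delta**2 / alpha))   # close to the Landau-Zener value
```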
|
https://en.wikipedia.org/wiki/Adiabatic_theorem
|
passage: ### Hilbert's paradox
The von Neumann universe satisfies the following two properties:
-
$$
\mathcal{P}(x) \in V
$$
for every set
$$
x \in V
$$
.
-
$$
\bigcup x \in V
$$
for every subset
$$
x \subseteq V
$$
.
Indeed, if
$$
x \in V
$$
, then
$$
x \in V_\alpha
$$
for some ordinal
$$
\alpha
$$
. Any stage is a transitive set, hence every
$$
y \in x
$$
is already
$$
y \in V_\alpha
$$
, and so every subset of
$$
x
$$
is a subset of
$$
V_\alpha
$$
. Therefore,
$$
\mathcal{P}(x) \subseteq V_{\alpha+1}
$$
and
$$
\mathcal{P}(x) \in V_{\alpha+2} \subseteq V
$$
. For unions of subsets, if
$$
x \subseteq V
$$
, then for every
$$
y \in x
$$
, let
$$
\beta_y
$$
be the smallest ordinal for which
$$
y \in V_{\beta_y}
$$
. Because by assumption
$$
x
$$
is a set, we can form the limit
$$
\alpha = \sup \{ \beta_y : y \in x \}
$$
.
|
https://en.wikipedia.org/wiki/Von_Neumann_universe
|
passage: In storing such a number, the base (10) need not be stored, since it will be the same for the entire range of supported numbers, and can thus be inferred.
Symbolically, this final value is:
$$
\frac{s}{b^{\,p-1}} \times b^e,
$$
where s is the significand (ignoring any implied decimal point), p is the precision (the number of digits in the significand), b is the base (in our example, this is the number ten), and e is the exponent.
Historically, several number bases have been used for representing floating-point numbers, with base two (binary) being the most common, followed by base ten (decimal floating point), and other less common varieties, such as base sixteen (hexadecimal floating point), base eight (octal floating point), base four (quaternary floating point), base three (balanced ternary floating point) and even base 256 and base 65536.
A floating-point number is a rational number, because it can be represented as one integer divided by another; for example 1.45×10^3 is (145/100)×1000 or 145000/100. The base determines the fractions that can be represented; for instance, 1/5 cannot be represented exactly as a floating-point number using a binary base, but 1/5 can be represented exactly using a decimal base (0.2, or 2×10^−1). However, 1/3 cannot be represented exactly by either binary (0.010101...) or decimal (0.333...), but in base 3 it is trivial (0.1, or 1×3^−1). The occasions on which infinite expansions occur depend on the base and its prime factors.
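This is easy to see from inside a language with exact rationals; a short sketch:

```python
from fractions import Fraction
from decimal import Decimal

# In binary floating point, 1/5 is stored as the nearest dyadic rational:
print(Fraction(0.2))            # 3602879701896397/18014398509481984, not 1/5
print(0.1 + 0.2 == 0.3)         # False: binary rounding errors accumulate
# In a decimal base the same values are exact:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))   # True
```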
|
https://en.wikipedia.org/wiki/Floating-point_arithmetic
|
passage: Human genetic clustering
The similarity of genetic data is used in clustering to infer population structures.
### Medicine
Medical imaging
On PET scans, cluster analysis can be used to differentiate between different types of tissue in a three-dimensional image for many different purposes.
Analysis of antimicrobial activity
Cluster analysis can be used to analyse patterns of antibiotic resistance, to classify antimicrobial compounds according to their mechanism of action, to classify antibiotics according to their antibacterial activity.
IMRT segmentation
Clustering can be used to divide a fluence map into distinct regions for conversion into deliverable fields in MLC-based Radiation Therapy.
### Business and marketing
Market research
Cluster analysis is widely used in market research when working with multivariate data from surveys and test panels. Market researchers use cluster analysis to partition the general population of consumers into market segments and to better understand the relationships between different groups of consumers/potential customers, and for use in market segmentation, product positioning, new product development and selecting test markets.
Grouping of shopping items
Clustering can be used to group all the shopping items available on the web into a set of unique products. For example, all the items on eBay can be grouped into unique products (eBay does not have the concept of a SKU).
### World Wide Web
Social network analysis
In the study of social networks, clustering may be used to recognize communities within large groups of people.
Search result grouping
In the process of intelligent grouping of the files and websites, clustering may be used to create a more relevant set of search results compared to normal search engines like Google.
|
https://en.wikipedia.org/wiki/Cluster_analysis
|
passage: In this case, the Lebesgue measure of
$$
[a, b]
$$
need not be unity. However, by integration by substitution, the interval can be rescaled so that it has measure unity. Then Jensen's inequality can be applied to get
$$
\varphi\left(\frac{1}{b-a}\int_a^b f(x)\, dx\right) \le \frac{1}{b-a} \int_a^b \varphi(f(x)) \,dx.
$$
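A quick numerical check of this inequality, with the assumed choices φ(y) = e^y and f(x) = x on [0, 2]:

```python
import math

a, b, N = 0.0, 2.0, 100_000
xs = [a + (i + 0.5) * (b - a) / N for i in range(N)]   # midpoint Riemann sum
avg_f = sum(xs) / N                                    # (1/(b-a)) * integral of f
avg_phi_f = sum(math.exp(x) for x in xs) / N           # (1/(b-a)) * integral of phi(f)
print(math.exp(avg_f), "<=", avg_phi_f)                # 2.718... <= 3.194...
```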
### Probabilistic form
The same result can be equivalently stated in a probability theory setting, by a simple change of notation. Let
$$
(\Omega, \mathfrak{F},\operatorname{P})
$$
be a probability space, X an integrable real-valued random variable and
$$
\varphi
$$
a convex function.
|
https://en.wikipedia.org/wiki/Jensen%27s_inequality
|
passage: If one takes
$$
x_i = x - \lambda_i
$$
as a local coordinate near a pole λiof order
$$
r_i+1
$$
, one can then solve term-by-term for a holomorphic gauge transformation g such that locally, the system looks like
$$
\frac{d(g_i^{-1}Z_i)}{dx_i} = \left(\sum_{j=1}^{r_i} \frac{(-j)T^{(i)}_j}{x_i^{j+1}}+\frac{M^{(i)}}{x_i}\right)(g_i^{-1}Z_i)
$$
where
$$
M^{(i)}
$$
and the
$$
T^{(i)}_j
$$
are diagonal matrices. If this were valid, it would be extremely useful, because then (at least locally), one has decoupled the system into n scalar differential equations which one can easily solve to find that (locally):
$$
Z_i = g_i \exp\left(M^{(i)} \log(x_i)+\sum_{j=1}^{r_i}\frac{T^{(i)}_j}{x_i^{j}}\right).
$$
However, this does not work - because the power series solved term-for-term for g will not, in general, converge.
Jimbo, Miwa and Ueno showed that this approach nevertheless provides canonical solutions near the singularities, and can therefore be used to define extended monodromy data.
|
https://en.wikipedia.org/wiki/Isomonodromic_deformation
|
passage: Murata's constant :
$$
\prod_{p} \left(1 + \frac{1}{\left(p-1\right)^2}\right) = 2.826419...
$$
- The strongly carefree constant :
$$
\prod_{p} \left(1 - \frac{1}{\left(p+1\right)^2}\right) = 0.775883...
$$
- Artin's constant :
$$
\prod_{p} \left(1 - \frac{1}{p(p-1)}\right) = 0.373955...
$$
- Landau's totient constant :
$$
\prod_{p} \left(1 + \frac{1}{p(p-1)}\right) = \frac{315}{2\pi^4}\zeta(3) = 1.943596...
$$
- The carefree constant :
$$
\prod_{p} \left(1 - \frac{1}{p(p+1)}\right) = 0.704442...
$$
and its reciprocal :
$$
\prod_{p} \left(1 + \frac{1}{p^2+p-1}\right) = 1.419562...
$$
- The Feller–Tornier constant :
$$
\frac{1}{2}+\frac{1}{2} \prod_{p} \left(1 - \frac{2}{p^2}\right) = 0.661317...
$$
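Such constants are straightforward to approximate by truncating the Euler product over primes; a sketch for Artin's constant (the truncation bound is chosen for illustration):

```python
from sympy import primerange

artin = 1.0
for p in primerange(2, 10**6):
    artin *= 1 - 1 / (p * (p - 1))
print(artin)   # ~0.373955, matching the value quoted above
```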
|
https://en.wikipedia.org/wiki/Euler_product
|
passage: To prove that every epimorphism in Top is surjective, we proceed exactly as in Set, giving {0,1} the indiscrete topology, which ensures that all considered maps are continuous.
- HComp: compact Hausdorff spaces and continuous functions. If f: X → Y is not surjective, let y ∈ Y \ fX. Since fX is closed, by Urysohn's Lemma there is a continuous function g1: Y → [0, 1] such that g1 is 0 on fX and 1 on y. We compose f with both g1 and the zero function Y → [0, 1].
However, there are also many concrete categories of interest where epimorphisms fail to be surjective. A few examples are:
- In the category of monoids, Mon, the inclusion map N → Z is a non-surjective epimorphism. To see this, suppose that g1 and g2 are two distinct maps from Z to some monoid M. Then for some n in Z, g1(n) ≠ g2(n), so g1(−n) ≠ g2(−n). Either n or −n is in N, so the restrictions of g1 and g2 to N are unequal.
- In the category of algebras over commutative ring R, take R[N] → R[Z], where R[G] is the monoid ring of the monoid G and the morphism is induced by the inclusion N → Z as in the previous example. This follows from the observation that 1 generates the algebra R[Z] (note that the unit in R[Z] is given by 0 of Z), and the inverse of the element represented by n in Z is just the element represented by −n.
|
https://en.wikipedia.org/wiki/Epimorphism
|
passage: The complements to interfacial angles of external crystal faces can, on the other hand, be directly measured from a zone-axis diffraction pattern or from the Fourier transform of a high resolution TEM image that shows crossed lattice fringes.
### Lattice matching (3D)
Lattice parameters of unknown crystal phases can be obtained from X-ray, neutron, or electron diffraction data. Single-crystal diffraction experiments supply orientation matrices, from which lattice parameters can be deduced. Alternatively, lattice parameters can be obtained from powder or polycrystal diffraction data via profile fitting without structural model (so-called 'Le Bail method').
Arbitrarily defined unit cells can be transformed to a standard setting and, from there, further reduced to a primitive smallest cell. Sophisticated algorithms compare such reduced cells with corresponding database entries. More powerful algorithms also consider derivative super- and subcells. The lattice-matching process can be further sped up by precalculating and storing reduced cells for all entries. The algorithm searches for matches within a certain range of the lattice parameters. More accurate lattice parameters allow a narrower range and, thus, a better match.
Lattice matching is useful in identifying crystal phases in the early stages of single-crystal
diffraction experiments and, thus, avoiding unnecessary full data collection and structure determination procedures for already known crystal structures. The method is particularly important for single-crystalline samples that need to be preserved.
|
https://en.wikipedia.org/wiki/Crystallographic_database
|
passage: According to fitness they are then selected to reproduce with modification. The genetic operators are exactly the same that are used in a conventional unigenic system, for example, mutation, inversion, transposition, and recombination.
Decision trees with both nominal and numeric attributes are also easily induced with gene expression programming using the framework described above for dealing with random numerical constants. The chromosomal architecture includes an extra domain for encoding random numerical constants, which are used as thresholds for splitting the data at each branching node. For example, the gene below with a head size of 5 (the Dc starts at position 16):
`012345678901234567890`
`WOTHabababbbabba46336`
encodes the decision tree shown below:
In this system, every node in the head, irrespective of its type (numeric attribute, nominal attribute, or terminal), has associated with it a random numerical constant, which for simplicity in the example above is represented by a numeral 0–9. These random numerical constants are encoded in the Dc domain and their expression follows a very simple scheme: from top to bottom and from left to right, the elements in Dc are assigned one-by-one to the elements in the decision tree. So, for the following array of RNCs:
C = {62, 51, 68, 83, 86, 41, 43, 44, 9, 67}
the decision tree above results in:
which can also be represented more colorfully as a conventional decision tree:
## Criticism
GEP has been criticized for not being a major improvement over other genetic programming techniques.
|
https://en.wikipedia.org/wiki/Gene_expression_programming
|
passage: Whereas if the index increases with increasing wavelength (which is typically the case in the ultraviolet), the medium is said to have anomalous dispersion.
At the interface of such a material with air or vacuum (index of ~1), Snell's law predicts that light incident at an angle θ to the normal will be refracted at an angle arcsin(sin(θ)/n). Thus, blue light, with a higher refractive index, will be bent more strongly than red light, resulting in the well-known rainbow pattern.
## Group-velocity dispersion
Beyond simply describing a change in the phase velocity over wavelength, a more serious consequence of dispersion in many applications is termed group-velocity dispersion (GVD). While phase velocity v is defined as v = c/n, this describes only one frequency component. When different frequency components are combined, as when considering a signal or a pulse, one is often more interested in the group velocity, which describes the speed at which a pulse or information superimposed on a wave (modulation) propagates. In the accompanying animation, it can be seen that the wave itself (orange-brown) travels at a phase velocity much faster than the speed of the envelope (black), which corresponds to the group velocity. This pulse might be a communications signal, for instance, and its information only travels at the group velocity rate, even though it consists of wavefronts advancing at a faster rate (the phase velocity).
|
https://en.wikipedia.org/wiki/Dispersion_%28optics%29
|
passage: It is a foolish vendor who diverts funds from a 'cash cow' when these are needed to extend the life of that 'product'. Although it is necessary to recognize a 'dog' when it appears (at least before it bites you) it would be foolish in the extreme to create one in order to balance up the picture. The vendor, who has most of their products in the 'cash cow' quadrant, should consider themselves fortunate indeed, and an excellent marketer, although they might also consider creating a few stars as an insurance policy against unexpected future developments and, perhaps, to add some extra growth. There is also a common misconception that 'dogs' are a waste of resources. In many markets 'dogs' can be considered loss-leaders that while not themselves profitable will lead to increased sales in other profitable areas.
### Alternatives
As with most marketing techniques, there are a number of alternative offerings vying with the growth–share matrix although this appears to be the most widely used. The next most widely reported technique is that developed by McKinsey and General Electric, which is a three-cell by three-cell matrix—using the dimensions of 'industry attractiveness' and 'business strengths'. This approaches some of the same issues as the growth–share matrix but from a different direction and in a more complex way (which may be why it is used less, or is at least less widely taught). Both growth-share matrix and Industry Attractiveness-Business Strength matrix developed by McKinsey and General Electric, are criticized for being static as they portray businesses as they exist at one point in time.
|
https://en.wikipedia.org/wiki/Growth%E2%80%93share_matrix
|
passage: $$
\begin{align}
\ddot{\mathbf{O}} &= \left[\ddot r \hat {\mathbf r} + \dot r \dot \theta \hat {\boldsymbol \theta}\right] + \left[\dot r \dot \theta \hat {\boldsymbol \theta} + r \ddot \theta \hat {\boldsymbol \theta} - r \dot \theta^2 \hat {\mathbf r}\right] \\
&= \left(\ddot r - r \dot \theta^2\right) \hat {\mathbf r} + \left(r \ddot \theta + 2 \dot r \dot \theta\right) \hat {\boldsymbol \theta}
\end{align}
$$
The coefficients of
$$
\hat{\mathbf{r}}
$$
and
$$
\hat{\boldsymbol \theta}
$$
give the accelerations in the radial and transverse directions.
|
https://en.wikipedia.org/wiki/Orbit
|
passage: This situation changed with the discovery of p-adic numbers by Hensel in 1897; and now it is standard to consider all of the various possible embeddings of a number field
$$
K
$$
into its various topological completions
$$
K_{\mathfrak{p}}
$$
at once.
A place of a number field
$$
K
$$
is an equivalence class of absolute values on
$$
K
$$
Essentially, an absolute value is a notion to measure the size of elements
$$
x
$$
of K. Two such absolute values are considered equivalent if they give rise to the same notion of smallness (or proximity). The equivalence relation between absolute values
$$
|\cdot|_0 \sim |\cdot|_1
$$
is given by some
$$
\lambda \in \mathbb{R}_{>0}
$$
such that
$$
|\cdot|_0 = |\cdot|_1^{\lambda}
$$
meaning we take the value of the norm
$$
|\cdot|_1
$$
to the
$$
\lambda
$$
-th power.
In general, the types of places fall into three regimes. Firstly (and mostly irrelevant), the trivial absolute value | |0, which takes the value
$$
1
$$
on all non-zero elements of K. The second and third classes are Archimedean places and non-Archimedean (or ultrametric) places.
|
https://en.wikipedia.org/wiki/Algebraic_number_field
|
passage: If and is a scalar then where if is Hausdorff, then equality holds: In particular, every non-zero scalar multiple of a closed set is closed.
If and if is a set of scalars such that neither contain zero then
If then is convex.
If then and so consequently, if is closed then so is
If is a real TVS and then where the left hand side is independent of the topology on moreover, if is a convex neighborhood of the origin then equality holds.
For any subset where is any neighborhood basis at the origin for
However, and it is possible for this containment to be proper (for example, if and is the rational numbers). It follows that for every neighborhood of the origin in
Closed hulls
In a locally convex space, convex hulls of bounded sets are bounded. This is not true for TVSs in general.
- The closed convex hull of a set is equal to the closure of the convex hull of that set; that is, equal to
- The closed balanced hull of a set is equal to the closure of the balanced hull of that set; that is, equal to
- The closed disked hull of a set is equal to the closure of the disked hull of that set; that is, equal to
If and the closed convex hull of one of the sets or is compact then
If each have a closed convex hull that is compact (that is, and are compact) then
Hulls and compactness
In a general TVS, the closed convex hull of a compact set may fail to be compact.
|
https://en.wikipedia.org/wiki/Topological_vector_space
|
passage: In 1934, Chowla showed that the generalized Riemann hypothesis implies that the first prime in the arithmetic progression a mod m is at most Km2log(m)2 for some fixed constant K.
- In 1967, Hooley showed that the generalized Riemann hypothesis implies Artin's conjecture on primitive roots.
- In 1973, Weinberger showed that the generalized Riemann hypothesis implies that Euler's list of idoneal numbers is complete.
- showed that the generalized Riemann hypothesis for the zeta functions of all algebraic number fields implies that any number field with class number 1 is either Euclidean or an imaginary quadratic number field of discriminant −19, −43, −67, or −163.
- In 1976, G. Miller showed that the generalized Riemann hypothesis implies that one can test if a number is prime in polynomial time via the Miller test (see the sketch after this list). In 2002, Manindra Agrawal, Neeraj Kayal and Nitin Saxena proved this result unconditionally using the AKS primality test.
- discussed how the generalized Riemann hypothesis can be used to give sharper estimates for discriminants and class numbers of number fields.
- showed that the generalized Riemann hypothesis implies that Ramanujan's integral quadratic form represents all integers that it represents locally, with exactly 18 exceptions.
- In 2021, Alexander (Alex) Dunn and Maksym Radziwill proved Patterson's conjecture on cubic Gauss sums, under the assumption of the GRH.
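The GRH-conditional Miller test mentioned above is short enough to sketch: under GRH, every odd composite n has a Miller witness a ≤ 2(ln n)² (Bach's bound), so a deterministic scan of that range decides primality. A minimal, unoptimized sketch:

```python
import math

def is_prime_miller_grh(n: int) -> bool:
    """Deterministic Miller test; correctness of the 2*(ln n)^2 witness
    bound (Bach) assumes the generalized Riemann hypothesis."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in range(2, min(n - 1, int(2 * math.log(n) ** 2)) + 1):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False   # a witnesses that n is composite
    return True

assert is_prime_miller_grh(2_147_483_647)       # the Mersenne prime 2^31 - 1
assert not is_prime_miller_grh(3_215_031_751)   # strong pseudoprime to bases 2, 3, 5, 7
```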
### Excluded middle
Some consequences of the RH are also consequences of its negation, and are thus theorems.
|
https://en.wikipedia.org/wiki/Riemann_hypothesis
|
passage: is
$$
\operatorname{dim}P
= \frac{\text{energy}}{\text{time}}
= \frac{\mathsf{T}^{-2}\mathsf{L}^2\mathsf{M}}{\mathsf{T}}
= \mathsf{T}^{-3}\mathsf{L}^2\mathsf{M} .
$$
The dimension of the physical quantity electric charge is
$$
\operatorname{dim}Q
= \text{current} \times \text{time}
= \mathsf{T}\mathsf{I} .
$$
The dimension of the physical quantity voltage is
$$
\operatorname{dim}V
= \frac{\text{power}}{\text{current}}
= \frac{\mathsf{T}^{-3}\mathsf{L}^2\mathsf{M}}{\mathsf{I}}
= \mathsf{T^{-3}}\mathsf{L}^2\mathsf{M} \mathsf{I}^{-1} .
$$
The dimension of the physical quantity capacitance is
$$
\operatorname{dim}C
= \frac{\text{electric charge}}{\text{electric potential difference}}
= \frac {\mathsf{T}\mathsf{I}}{\mathsf{T}^{-3}\mathsf{L}^2\mathsf{M}\mathsf{I}^{-1}}
= \mathsf{T^4}\mathsf{L^{-2}}\mathsf{M^{-1}}\mathsf{I^2} .
$$
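Such derivations can be mechanized by treating a dimension as a vector of exponents over the base dimensions T, L, M, I; a small sketch reproducing the results above:

```python
# dimensions as exponent maps over the base dimensions T, L, M, I
def dmul(a, b):
    return {k: a.get(k, 0) + b.get(k, 0) for k in set(a) | set(b)}

def ddiv(a, b):
    return dmul(a, {k: -v for k, v in b.items()})

time    = {"T": 1}
energy  = {"T": -2, "L": 2, "M": 1}
current = {"I": 1}

power   = ddiv(energy, time)          # {'T': -3, 'L': 2, 'M': 1}
charge  = dmul(current, time)         # {'T': 1, 'I': 1}
voltage = ddiv(power, current)        # {'T': -3, 'L': 2, 'M': 1, 'I': -1}
print(ddiv(charge, voltage))          # capacitance: T^4 L^-2 M^-1 I^2
```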
|
https://en.wikipedia.org/wiki/Dimensional_analysis
|
passage: The set
$$
\sigma_{\mathrm{ess},4}(T)
$$
gives the part of the spectrum that is independent of compact perturbations, that is,
$$
\sigma_{\mathrm{ess},4}(T) = \bigcap_{K \in B_0(X)} \sigma(T+K),
$$
where
$$
B_0(X)
$$
denotes the set of compact operators on
$$
X
$$
(D.E. Edmunds and W.D. Evans, 1987).
The spectrum of a closed, densely defined operator
$$
T
$$
can be decomposed into a disjoint union
$$
\sigma(T)=\sigma_{\mathrm{ess},5}(T)\bigsqcup\sigma_{\mathrm{disc}}(T)
$$
,
where
$$
\sigma_{\mathrm{disc}}(T)
$$
is the discrete spectrum of
$$
T
$$
.
|
https://en.wikipedia.org/wiki/Essential_spectrum
|
passage: This was first proposed by Gérard de Vaucouleurs in 1948. The choice of using 50% was arbitrary, but proved to be useful in further works by R. A. Fish in 1963, where he established a luminosity concentration law that relates the brightnesses of elliptical galaxies and their respective Re, and by José Luis Sérsic in 1968 that defined a mass-radius relation in galaxies.
In defining Re, it is necessary that the overall brightness flux of the galaxy be captured; a method employed by Bershady in 2000 suggests measuring twice the size at which the local flux at an arbitrarily chosen radius, divided by the overall average flux, equals 0.2. Using half-light radius allows a rough estimate of a galaxy's size, but is not particularly helpful in determining its morphology.
Variations of this method exist. In particular, in the ESO-Uppsala Catalogue of Galaxies values of 50%, 70%, and 90% of the total blue light (the light detected through a B-band specific filter) had been used to calculate a galaxy's diameter.
### Petrosian magnitude
First described by Vahe Petrosian in 1976, a modified version of this method has been used by the Sloan Digital Sky Survey (SDSS). This method employs a mathematical model on a galaxy whose radius is determined by the azimuthally (horizontal) averaged profile of its brightness flux.
|
https://en.wikipedia.org/wiki/Galaxy
|
passage: This was then disproved by Haselgrove, who showed that T(n) takes negative values infinitely often. A confirmation of this positivity conjecture would have led to a proof of the Riemann hypothesis, as was shown by Pál Turán.
### Generalizations
More generally, we can consider the weighted summatory functions over the Liouville function defined for any
$$
\alpha \in \mathbb{R}
$$
as follows for positive integers x where (as above) we have the special cases
$$
L(x) := L_0(x)
$$
and
$$
T(x) = L_1(x)
$$
$$
L_{\alpha}(x) := \sum_{n \leq x} \frac{\lambda(n)}{n^{\alpha}}.
$$
These
$$
\alpha^{-1}
$$
-weighted summatory functions are related to the Mertens function, or weighted summatory functions of the Moebius function. In fact, we have that the so-termed non-weighted, or ordinary function
$$
L(x)
$$
precisely corresponds to the sum
$$
L(x) = \sum_{d^2 \leq x} M\left(\frac{x}{d^2}\right) = \sum_{d^2 \leq x} \sum_{n \leq \frac{x}{d^2}} \mu(n).
$$
Moreover, these functions satisfy similar bounding asymptotic relations.
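A brute-force sketch of these summatory functions (sympy's factorization supplies Ω(n); the bounds are illustrative):

```python
from sympy import factorint

def liouville(n: int) -> int:
    """lambda(n) = (-1)^Omega(n), with Omega counting prime factors
    with multiplicity."""
    return -1 if sum(factorint(n).values()) % 2 else 1

def L(x: int, alpha: float = 0.0) -> float:
    """Weighted summatory function L_alpha(x) = sum_{n<=x} lambda(n)/n^alpha."""
    return sum(liouville(n) / n**alpha for n in range(1, x + 1))

print(L(100))        # the ordinary L(x)
print(L(100, 1.0))   # T(x) = L_1(x)
```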
|
https://en.wikipedia.org/wiki/Liouville_function
|
passage: SNP detection: Identifying single nucleotide polymorphism among alleles within or between populations. Several applications of microarrays make use of SNP detection, including genotyping, forensic analysis, measuring predisposition to disease, identifying drug-candidates, evaluating germline mutations in individuals or somatic mutations in cancers, assessing loss of heterozygosity, or genetic linkage analysis.
Alternative splicing detection: An exon junction array design uses probes specific to the expected or potential splice sites of predicted exons for a gene. It is of intermediate density, or coverage, to a typical gene expression array (with 1–3 probes per gene) and a genomic tiling array (with hundreds or thousands of probes per gene). It is used to assay the expression of alternative splice forms of a gene. Exon arrays have a different design, employing probes designed to detect each individual exon for known or predicted genes, and can be used for detecting different splicing isoforms.
Fusion genes microarray: A fusion gene microarray can detect fusion transcripts, e.g. from cancer specimens. The principle behind this is building on the alternative splicing microarrays. The oligo design strategy enables combined measurements of chimeric transcript junctions with exon-wise measurements of individual fusion partners.
Tiling array: Genome tiling arrays consist of overlapping probes designed to densely represent a genomic region of interest, sometimes as large as an entire human chromosome.
|
https://en.wikipedia.org/wiki/DNA_microarray
|
passage: In this case a reasonable approximation to B(n, p) is given by the normal distribution
$$
\mathcal{N}(np,\,np(1-p)),
$$
and this basic approximation can be improved in a simple way by using a suitable continuity correction.
The basic approximation generally improves as n increases (at least 20) and is better when p is not near to 0 or 1. Various rules of thumb may be used to decide whether n is large enough, and p is far enough from the extremes of zero or one:
- One rule is that for n > 5 the normal approximation is adequate if the absolute value of the skewness is strictly less than 0.3; that is, if
- :
$$
\frac{|1-2p|}{\sqrt{np(1-p)}}=\frac1{\sqrt{n}}\left|\sqrt{\frac{1-p}p}-\sqrt{\frac{p}{1-p}}\,\right|<0.3.
$$
This can be made precise using the Berry–Esseen theorem.
- A stronger rule states that the normal approximation is appropriate only if everything within 3 standard deviations of its mean is within the range of possible values; that is, only if
- :
$$
\mu\pm3\sigma=np\pm3\sqrt{np(1-p)}\in(0,n).
$$
This 3-standard-deviation rule is equivalent to the following conditions, which also imply the first rule above.
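The quality of the approximation is easy to inspect directly; a sketch comparing the exact binomial CDF with the normal approximation plus continuity correction (parameters chosen for illustration):

```python
from math import comb, erf, sqrt

def binom_cdf(k, n, p):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def normal_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

n, p, k = 50, 0.3, 12
mu, sigma = n * p, sqrt(n * p * (1 - p))
print(binom_cdf(k, n, p))                   # exact
print(normal_cdf((k + 0.5 - mu) / sigma))   # approximation with continuity correction
```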
|
https://en.wikipedia.org/wiki/Binomial_distribution
|
passage: At 7,000 iterations (5–10 minutes of training), their method achieves comparable quality to InstantNGP and Plenoxels.
For synthetic bounded scenes (Blender dataset), they achieved state-of-the-art results even with random initialization, starting from 100,000 uniformly random Gaussians.
### Limitations
Some limitations of the method include:
- Elongated artifacts or "splotchy" Gaussians in some areas.
- Occasional popping artifacts due to large Gaussians created by the optimization, especially in regions with view-dependent appearance.
- Higher memory consumption compared to NeRF-based solutions, though still more compact than previous point-based approaches.
- May require hyperparameter tuning (e.g., reducing position learning rate) for very large scenes.
- Peak GPU memory consumption during training can be high (over 20 GB) in the current unoptimized prototype.
The authors note that some of these limitations could potentially be addressed through future improvements like better culling approaches, antialiasing, regularization, and compression techniques.
## 3D Temporal Gaussian splatting
Extending 3D Gaussian splatting to dynamic scenes, 3D Temporal Gaussian splatting incorporates a time component, allowing for real-time rendering of dynamic scenes with high resolutions. It represents and renders dynamic scenes by modeling complex motions while maintaining efficiency. The method uses a HexPlane to connect adjacent Gaussians, providing an accurate representation of position and shape deformations.
|
https://en.wikipedia.org/wiki/Gaussian_splatting
|
passage: ## Von Neumann equation for time evolution
Just as the Schrödinger equation describes how pure states evolve in time, the von Neumann equation (also known as the Liouville–von Neumann equation) describes how a density operator evolves in time. The von Neumann equation dictates that
$$
i \hbar \frac{d}{dt} \rho = [H, \rho]~,
$$
where the brackets denote a commutator.
This equation only holds when the density operator is taken to be in the Schrödinger picture, even though this equation seems at first glance to emulate the Heisenberg equation of motion in the Heisenberg picture, with a crucial sign difference:
$$
i \hbar \frac{d}{dt} A_\text{H} = -[H, A_\text{H}]~,
$$
where
$$
A_\text{H}(t)
$$
is some Heisenberg picture operator; but in this picture the density matrix is not time-dependent, and the relative sign ensures that the time derivative of the expected value
$$
\langle A \rangle
$$
comes out the same as in the Schrödinger picture.
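A numerical consequence worth checking: the commutator on the right-hand side is traceless, so the von Neumann flow preserves tr ρ. A sketch for a qubit with an illustrative Hamiltonian (ħ = 1):

```python
import numpy as np

H = np.array([[1.0, 0.5], [0.5, -1.0]])                      # illustrative
rho = np.array([[0.75, 0.25], [0.25, 0.25]], dtype=complex)  # a valid density matrix

drho_dt = -1j * (H @ rho - rho @ H)      # d(rho)/dt = -(i/hbar)[H, rho]
print(np.trace(drho_dt))                 # ~0: [H, rho] is traceless
print(np.trace(rho + 1e-3 * drho_dt))    # still 1 after an Euler step
```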
|
https://en.wikipedia.org/wiki/Density_matrix
|
passage: $$
\left. f^{(2n)}(x) \right|_{x = \frac{m-1/2}{M}}\,,
$$
where
$$
f^{(2n)}
$$
denotes even derivative.
For a function
$$
g(t)
$$
defined over interval
$$
(a,b)
$$
, its integral is
$$
\int_a^b g(t) \, dt = \int_0^{b-a} g(\tau+a) \, d\tau= (b-a) \int_0^1 g((b-a)x+a) \, dx.
$$
Therefore, we can apply this generalized midpoint integration formula by assuming that
$$
f(x) = (b-a) \, g((b-a)x+a)
$$
. This formula is particularly efficient for the numerical integration when the integrand
$$
f(x)
$$
is a highly oscillating function.
### Trapezoidal rule
For the trapezoidal rule, the function is approximated by the average of its values at the left and right endpoints of the subintervals.
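A side-by-side sketch of the midpoint and trapezoidal rules on an oscillatory integrand (the integrand and subdivision counts are illustrative):

```python
import math

def midpoint(g, a, b, M):
    h = (b - a) / M
    return h * sum(g(a + (m - 0.5) * h) for m in range(1, M + 1))

def trapezoid(g, a, b, M):
    h = (b - a) / M
    return h * (0.5 * g(a) + sum(g(a + m * h) for m in range(1, M)) + 0.5 * g(b))

f = lambda t: math.cos(40 * t)
exact = math.sin(40) / 40                 # integral of cos(40 t) over [0, 1]
for M in (10, 100, 1000):
    print(M, midpoint(f, 0, 1, M) - exact, trapezoid(f, 0, 1, M) - exact)
```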
|
https://en.wikipedia.org/wiki/Riemann_sum
|
passage: The natural frequency and damping ratio are not only important in free vibration, but also characterize how a system behaves under forced vibration.
Both the damped and undamped natural frequencies can be estimated, when the mode shapes are not known, using the Rayleigh quotient.
### Forced vibration with damping
The behavior of the spring mass damper model varies with the addition of a harmonic force. A force of this type could, for example, be generated by a rotating imbalance.
$$
F= F_0 \sin(2 \pi f t). \!
$$
Summing the forces on the mass results in the following ordinary differential equation:
$$
m \ddot{x} + c\dot{x} + k x = F_0 \sin(2 \pi f t).
$$
The steady state solution of this problem can be written as:
$$
x(t)= X \sin(2 \pi f t +\phi). \!
$$
The result states that the mass will oscillate at the same frequency, f, of the applied force, but with a phase shift
$$
\phi.
$$
The amplitude of the vibration “X” is defined by the following formula.
$$
X= {F_0 \over k} {1 \over \sqrt{(1-r^2)^2 + (2 \zeta r)^2}}.
$$
Where “r” is defined as the ratio of the harmonic force frequency over the undamped natural frequency of the mass–spring–damper model.
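A short sketch tabulating this normalized amplitude X·k/F0 against the frequency ratio r for a few damping ratios ζ (values chosen for illustration); the peak near r = 1 is the familiar resonance:

```python
import numpy as np

def amplitude_ratio(r, zeta):
    return 1.0 / np.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)

r = np.linspace(0.0, 2.0, 9)
for zeta in (0.1, 0.3, 0.7):
    print(zeta, np.round(amplitude_ratio(r, zeta), 2))   # peaks at r = 1: 1/(2*zeta)
```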
|
https://en.wikipedia.org/wiki/Vibration
|
passage: & = \| x \|_{\infty} \sum_{k=-\infty}^\infty \left|h[n-k]\right| \\
& = \| x \|_{\infty} \sum_{k=-\infty}^\infty \left|h[k]\right|
\end{align}
$$
If
$$
h[n]
$$
is absolutely summable, then
$$
\sum_{k=-\infty}^{\infty}{\left|h[k]\right|} = \| h \|_1 \in \mathbb{R}
$$
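Numerically, the BIBO criterion is just a check that the ℓ¹ norm of the impulse response is finite; a sketch for the assumed example h[n] = aⁿu[n]:

```python
a = 0.9                                      # |a| < 1: stable
l1 = sum(abs(a) ** n for n in range(10_000))
print(l1, 1 / (1 - abs(a)))                  # both ~10: finite l1 norm
# for |a| >= 1 the partial sums grow without bound and the system is not BIBO stable
```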
|
https://en.wikipedia.org/wiki/BIBO_stability
|
passage: if
$$
|s| \leq 1
$$
then
$$
s x \in C.
$$
If
$$
\mathbb{K} = \R,
$$
this means that if
$$
x \in C,
$$
then
$$
C
$$
contains the line segment between
$$
x
$$
and
$$
-x.
$$
For
$$
\mathbb{K} = \Complex,
$$
it means for any
$$
x \in C,
$$
$$
C
$$
contains the disk with
$$
x
$$
on its boundary, centred on the origin, in the one-dimensional complex subspace generated by
$$
x.
$$
Equivalently, a balanced set is a "circled cone". Note that in the TVS
$$
\mathbb R^2 ,
$$
the point
$$
x=(1,1)
$$
belongs to the ball C centered at the origin of radius
$$
\sqrt2 ,
$$
but
$$
2x=(2,2)
$$
does not belong; indeed, C is balanced, but not a cone.
1. A cone (when the underlying field is ordered) if for all
$$
x \in C
$$
and
$$
t \geq 0 ,
$$
$$
t x \in C .
$$
1. Absorbent or absorbing if for every
$$
x \in X,
$$
there exists
$$
r > 0
$$
such that
$$
x \in t C
$$
|
https://en.wikipedia.org/wiki/Locally_convex_topological_vector_space
|
passage: This analysis applies only to the far field (Fraunhofer diffraction), that is, at a distance much larger than the width of the slit.
From the intensity profile above, if d ≪ λ, the intensity will have little dependency on θ, hence the wavefront emerging from the slit would resemble a cylindrical wave with azimuthal symmetry; if d ≫ λ, only
$$
\theta \approx 0
$$
would have appreciable intensity, hence the wavefront emerging from the slit would resemble that of geometrical optics.
When the incident angle
$$
\theta_\text{i}
$$
of the light onto the slit is non-zero (which causes a change in the path length), the intensity profile in the Fraunhofer regime (i.e. far field) becomes:
$$
I(\theta) = I_0 \, \operatorname{sinc}^2 \left[ \frac{d \pi}{\lambda} (\sin\theta \pm \sin\theta_\text{i})\right]
$$
The choice of plus/minus sign depends on the definition of the incident angle θi.
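A direct evaluation of this profile at normal incidence, with illustrative slit width and wavelength (note that numpy's sinc is the normalized sin(πx)/(πx), so the argument is rescaled accordingly):

```python
import numpy as np

wavelength = 500e-9                        # 500 nm
d = 5e-6                                   # 5 micrometre slit
theta = np.linspace(-0.3, 0.3, 7)
I = np.sinc(d / wavelength * np.sin(theta)) ** 2   # I/I0 with theta_i = 0
print(np.round(I, 4))
```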
### Diffraction grating
A diffraction grating is an optical component with a regular pattern.
|
https://en.wikipedia.org/wiki/Diffraction
|
passage: Also, we have the inequality
$$
e^x \ge x + 1
$$
for all real x, with equality if and only if x = 0. Furthermore, e is the unique base of the exponential a^x for which the inequality a^x ≥ x + 1 holds for all x. This is a limiting case of Bernoulli's inequality.
### Exponential-like functions
Steiner's problem asks to find the global maximum for the function
$$
f(x) = x^\frac{1}{x} .
$$
This maximum occurs precisely at x = e. (One can check that the derivative of f is zero only for this value of x.)
Similarly, x = 1/e is where the global minimum occurs for the function
$$
f(x) = x^x .
$$
The infinite tetration
$$
x^{x^{x^{\cdot^{\cdot^{\cdot}}}}}
$$
or
$$
{^\infty}x
$$
converges if and only if e^(−e) ≤ x ≤ e^(1/e), shown by a theorem of Leonhard Euler.
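The convergence claim is easy to probe by iterating t ↦ x^t, which realizes the tetration tower; a sketch with an assumed interior point of the range:

```python
import math

def tetration_limit(x, iterations=1000):
    t = 1.0
    for _ in range(iterations):
        t = x ** t
    return t

print(tetration_limit(math.sqrt(2)))   # 2.0, since sqrt(2) < e^(1/e) ~ 1.4447
# for x just above e^(1/e) the iterates grow without bound (eventually overflow)
```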
### Number theory
The real number e is irrational. Euler proved this by showing that its simple continued fraction expansion does not terminate. (See also Fourier's proof that e is irrational.)
Furthermore, by the Lindemann–Weierstrass theorem, is transcendental, meaning that it is not a solution of any non-zero polynomial equation with rational coefficients.
|
https://en.wikipedia.org/wiki/E_%28mathematical_constant%29
|
passage: Inductive step
Assume that the relation holds for n − 1. Then we need to see the same relation holding true for n. Substituting the values of A_n and B_n into it, we obtain:
$$
\begin{align}
&=b_n A_{n-1} B_{n-1} + a_n A_{n-1} B_{n-2} - b_n A_{n-1} B_{n-1} - a_n A_{n-2} B_{n-1} \\
&=a_n(A_{n-1}B_{n-2} - A_{n-2} B_{n-1})
\end{align}
$$
which is true because of our induction hypothesis.
$$
A_{n-1}B_n - A_nB_{n-1} = \left(-1\right)^na_1a_2\cdots a_n = \prod_{i=1}^n (-a_i)\,
$$
Specifically, if neither B_n nor B_{n−1} is zero (n = 1, 2, 3, ...) we can express the difference between the (n−1)th and nth convergents like this:
$$
x_{n-1} - x_n = \frac{A_{n-1}}{B_{n-1}} - \frac{A_n}{B_n} =
\left(-1\right)^n \frac{a_1a_2\cdots a_n}{B_nB_{n-1}} = \frac{\prod_{i=1}^n (-a_i)}{B_nB_{n-1}}.\,
$$
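A short Python sketch that verifies the determinant formula numerically (the coefficients are illustrative; the recurrences $A_n = b_nA_{n-1} + a_nA_{n-2}$, $B_n = b_nB_{n-1} + a_nB_{n-2}$ with seeds $A_{-1}=1$, $B_{-1}=0$, $A_0=b_0$, $B_0=1$ are the standard ones for this formula):

```python
from fractions import Fraction
from functools import reduce

def convergents(b, a):
    """Yield (A_n, B_n) for the continued fraction b0 + a1/(b1 + a2/(b2 + ...))."""
    A_prev, A = Fraction(1), Fraction(b[0])   # A_{-1}, A_0
    B_prev, B = Fraction(0), Fraction(1)      # B_{-1}, B_0
    yield A, B
    for a_n, b_n in zip(a, b[1:]):
        A_prev, A = A, b_n * A + a_n * A_prev
        B_prev, B = B, b_n * B + a_n * B_prev
        yield A, B

b = [3, 1, 4, 1, 5]   # illustrative partial denominators
a = [2, 7, 1, 8]      # illustrative partial numerators

convs = list(convergents(b, a))
for n in range(1, len(convs)):
    A_prev, B_prev = convs[n - 1]
    A, B = convs[n]
    lhs = A_prev * B - A * B_prev
    rhs = reduce(lambda p, x: p * -x, a[:n], Fraction(1))  # product of (-a_i)
    assert lhs == rhs, (n, lhs, rhs)
print("determinant formula verified for all computed convergents")
```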
|
https://en.wikipedia.org/wiki/Continued_fraction
|
passage: The NMR of paramagnetic species can provide important structural information. Proton (1H) NMR is also important because the light hydrogen nucleus is not easily detected by X-ray crystallography.
- Infrared spectroscopy: Mostly for absorptions from carbonyl ligands
- Electron nuclear double resonance (ENDOR) spectroscopy
- Mössbauer spectroscopy
- Electron-spin resonance: ESR (or EPR) allows for the measurement of the environment of paramagnetic metal centres.
- Electrochemistry: Cyclic voltammetry and related techniques probe the redox characteristics of compounds.
## Synthetic inorganic chemistry
Although some inorganic species can be obtained in pure form from nature, most are synthesized in chemical plants and in the laboratory.
Inorganic synthetic methods can be classified roughly according to the volatility or solubility of the component reactants. Soluble inorganic compounds are prepared using methods of organic synthesis. For metal-containing compounds that are reactive toward air, Schlenk line and glove box techniques are followed. Volatile compounds and gases are manipulated in "vacuum manifolds" consisting of glass piping interconnected through valves, the entirety of which can be evacuated to 0.001 mm Hg or less. Compounds are condensed using liquid nitrogen (b.p. 77 K) or other cryogens.
|
https://en.wikipedia.org/wiki/Inorganic_chemistry
|
passage: The Fourier transform is given by
$$
\hat{f}\left(t\right)=\varphi_x\left(-2\pi t\right)= e^{\frac{-4\pi^2\sigma^2 t^2}{2}- i2\pi \mu t}\left[1-\Phi\left(-\frac{\mu}{\sigma}-i2\pi \sigma t \right) \right]+ e^{-\frac{4\pi^2 \sigma^2 t^2}{2}+i2\pi\mu t}\left[1-\Phi\left(\frac{\mu}{\sigma}-i2\pi \sigma t \right) \right]
$$
.
## Related distributions
- When $\mu = 0$, the distribution of $Y$ is a half-normal distribution.
- The random variable $(Y/\sigma)^2$ has a noncentral chi-squared distribution with 1 degree of freedom and noncentrality equal to $(\mu/\sigma)^2$.
- The folded normal distribution can also be seen as the limit of the folded non-standardized t distribution as the degrees of freedom go to infinity.
- There is a bivariate version developed by Psarakis and Panaretos (2001) as well as a multivariate version developed by Chakraborty and Chatterjee (2013).
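A quick Monte Carlo sanity check in Python (a sketch, assuming $Y = |X|$ with $X \sim N(\mu, \sigma^2)$ and illustrative parameter values; the comparison uses the known closed-form mean $\mu_Y = \sigma\sqrt{2/\pi}\,e^{-\mu^2/2\sigma^2} + \mu\,\operatorname{erf}(\mu/\sqrt{2}\sigma)$):

```python
import math
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.5, 2.0

# Fold a large normal sample: Y = |X| with X ~ N(mu, sigma^2).
y = np.abs(rng.normal(mu, sigma, size=1_000_000))

# Closed-form mean of the folded normal for comparison.
mean_exact = (sigma * math.sqrt(2 / math.pi) * math.exp(-mu**2 / (2 * sigma**2))
              + mu * math.erf(mu / (math.sqrt(2) * sigma)))
print(y.mean(), mean_exact)  # agree to roughly three decimals

# With mu = 0 the fold reduces to the half-normal distribution:
z = np.abs(rng.normal(0.0, sigma, size=1_000_000))
print(z.mean(), sigma * math.sqrt(2 / math.pi))
```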
|
https://en.wikipedia.org/wiki/Folded_normal_distribution
|
passage: When the multiple parties on the other side all benefit fairly equally from the results of the negotiations, then each of the parties has the incentive to free-ride, to withhold their payments and withdraw from the negotiations because they can still receive the benefits regardless of whether or not they contribute financially. In 2016, Ellingsen and Paltseva modelled contract negotiation games and showed that the only way to avoid the free-rider problem in situations with multiple parties is to enforce mandatory participation such as through the use of court orders.
In 2009, in their seminal JEI article, Hahnel and Sheeran highlight several major misinterpretations and common assumptions, which when accounted for substantially reduce the applicability of Coase's theorem to real world policy and economic problems. First, they recognize that the solution between a single polluter and single victim is a negotiation—not a market. As such, it is subject to the extensive work on bargaining games, negotiation, and game theory (specifically a "divide the pie" game under incomplete information). This typically yields a broad range of potential negotiated solutions, making it unlikely that the efficient outcome will be the one selected. Rather it is more likely to be determined by a host of factors including the structure of the negotiations, discount rates and other factors of relative bargaining strength (cf. Ariel Rubinstein).
If the negotiation is not a single shot game, then reputation effects may also occur, which can dramatically distort outcomes and may lead to failed negotiation (cf. David M. Kreps, also the chainstore paradox).
|
https://en.wikipedia.org/wiki/Coase_theorem
|
passage: Since nothing is said in the definition of contraposition with regard to the predicate of the inferred proposition, it is permissible that it could be the original subject or its contradictory, and the predicate term of the resulting type "A" proposition is again undistributed. This results in two contrapositives, one where the predicate term is distributed, and another where the predicate term is undistributed.
Contraposition is a type of immediate inference in which from a given categorical proposition another categorical proposition is inferred which has as its subject the contradictory of the original predicate. This is in contradistinction to the form of the propositions of transposition, which may be material implication, or a hypothetical statement. The difference is that in its application to categorical propositions the result of contraposition is two contrapositives, each being the obvert of the other, i.e. "No non-P is S" and "All non-P is non-S". The distinction between the two contrapositives is absorbed and eliminated in the principle of transposition, which presupposes the "mediate inferences" of contraposition and is also referred to as the "law of contraposition".
## Proof by contrapositive
Because the contrapositive of a statement always has the same truth value (truth or falsity) as the statement itself, it can be a powerful tool for proving mathematical theorems (especially if the truth of the contrapositive is easier to establish than the truth of the statement itself).
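As a standard illustration (a classic example added here, not from the original text): to prove "if $n^2$ is even, then $n$ is even," it suffices to prove the contrapositive "if $n$ is odd, then $n^2$ is odd":
$$
n = 2k + 1 \implies n^2 = 4k^2 + 4k + 1 = 2\left(2k^2 + 2k\right) + 1,
$$
which is odd. Since the contrapositive is established directly, the original statement holds with the same truth value.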
|
https://en.wikipedia.org/wiki/Contraposition
|
passage: Every cyclic group of prime order is a simple group, which cannot be broken down into smaller groups. In the classification of finite simple groups, one of the three infinite classes consists of the cyclic groups of prime order. The cyclic groups of prime order are thus among the building blocks from which all groups can be built.
## Definition and notation
For any element
$$
g
$$
in any group
$$
G
$$
, one can form the subgroup that consists of all its integer powers:
$$
\langle g\rangle = \{g^k\mid k\in\Z\}
$$
, called the cyclic subgroup generated by
$$
g
$$
. The order of
$$
g
$$
is
$$
|\langle g\rangle|
$$
, the number of elements in
$$
\langle g\rangle
$$
, conventionally abbreviated as
$$
|g|
$$
, or more rarely
$$
\operatorname{ord}(g)
$$
or
$$
o(g)
$$
. That is, the order of an element is equal to the order of the cyclic subgroup that it generates.
A cyclic group is a group which is equal to one of its cyclic subgroups:
$$
G=\langle g\rangle
$$
for some element
$$
g
$$
, called a generator of
$$
G
$$
.
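As a concrete illustration, here is a minimal Python sketch (a hypothetical helper, using the additive group $\Z_n$) that builds the cyclic subgroup generated by an element:

```python
from math import gcd

def cyclic_subgroup(g, n):
    """Return <g>, the cyclic subgroup of Z_n (addition mod n) generated by g."""
    elems, x = [], 0
    while True:
        elems.append(x)
        x = (x + g) % n
        if x == 0:
            return elems

subgroup = cyclic_subgroup(8, 12)
print(sorted(subgroup))                  # [0, 4, 8]
print(len(subgroup), 12 // gcd(8, 12))   # order of g is n / gcd(n, g): 3, 3
```

In $\Z_{12}$, for instance, $8$ generates a subgroup of order $3$, matching the statement above that the order of an element equals the order of the cyclic subgroup it generates.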
For a finite cyclic group
$$
G
$$
of order
$$
n
$$
|
https://en.wikipedia.org/wiki/Cyclic_group
|
passage: In other words, the Lie derivative of one coordinate with respect to another is zero.
## Application to differential forms
The Clairaut-Schwarz theorem is the key fact needed to prove that for every
$$
C^\infty
$$
(or at least twice differentiable) differential form
$$
\omega\in\Omega^k(M)
$$
, the second exterior derivative vanishes:
$$
d^2\omega := d(d\omega) = 0
$$
. This implies that every differentiable exact form (i.e., a form
$$
\alpha
$$
such that
$$
\alpha = d\omega
$$
for some form
$$
\omega
$$
) is closed (i.e.,
$$
d\alpha = 0
$$
), since
$$
d\alpha = d(d\omega) = 0
$$
.
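As a concrete instance of this argument, for a twice continuously differentiable function $f$ (a 0-form) on the plane,
$$
d(df) = d\left(\frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy\right)
= \left(\frac{\partial^2 f}{\partial x\,\partial y} - \frac{\partial^2 f}{\partial y\,\partial x}\right) dx \wedge dy,
$$
which vanishes precisely because the two mixed partial derivatives agree.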
In the middle of the 18th century, the theory of differential forms was first studied in the simplest case of 1-forms in the plane, i.e.
$$
A\,dx + B\,dy
$$
, where
$$
A
$$
and
$$
B
$$
are functions in the plane. The study of 1-forms and the differentials of functions began with Clairaut's papers in 1739 and 1740. At that stage his investigations were interpreted as ways of solving ordinary differential equations.
|
https://en.wikipedia.org/wiki/Symmetry_of_second_derivatives
|
passage: but if
$$
X
$$
is Hausdorff and
$$
K
$$
has more than one point then this prefilter has no limit points; the same is true of the filter
$$
\{K\}^{\uparrow X}
$$
that this prefilter generates.
However, every cluster point of an ultra prefilter is a limit point. Consequently, the limit points of an ultra prefilter
$$
\mathcal{B}
$$
are the same as its cluster points:
$$
\operatorname{lim}_X \mathcal{B} = \operatorname{cl}_X \mathcal{B};
$$
that is to say, a given point is a cluster point of an ultra prefilter
$$
\mathcal{B}
$$
if and only if
$$
\mathcal{B}
$$
converges to that point.
Although a cluster point of a filter need not be a limit point, there will always exist a finer filter that does converge to it; in particular, if
$$
\mathcal{B}
$$
clusters at
$$
x
$$
then
$$
\mathcal{B} \,(\cap)\, \mathcal{N}(x) = \{B \cap N : B \in \mathcal{B}, N \in \mathcal{N}(x)\}
$$
is a filter subbase whose generated filter converges to
$$
x.
$$
|
https://en.wikipedia.org/wiki/Filters_in_topology
|