passage: the product of its conjugates (which is still non-zero), we would get that p divides $$ \ell^p(p-1)!d_i^p $$ , which is false. So $$ J_i $$ is a non-zero algebraic integer divisible by (p − 1)!. Now $$ J_i=-\sum_{j=0}^{np-1}\sum_{t=1}^r c(t)\left(f_i^{(j)}(\alpha_{n_{t-1}+1}) + \cdots + f_i^{(j)}(\alpha_{n_t})\right). $$ Since each $$ f_i(x) $$ is obtained by dividing a fixed polynomial with integer coefficients by $$ (x-\alpha_i) $$ , it is of the form $$ f_i(x)=\sum_{m=0}^{np-1}g_m(\alpha_i)x^m, $$ where $$ g_m $$ is a polynomial (with integer coefficients) independent of i. The same holds for the derivatives $$ f_i^{(j)}(x) $$ .
https://en.wikipedia.org/wiki/Lindemann%E2%80%93Weierstrass_theorem
passage: A top tree is a data structure based on a binary tree for unrooted dynamic trees that is used mainly for various path-related operations. It allows simple divide-and-conquer algorithms. It has since been augmented to maintain dynamically various properties of a tree such as diameter, center and median. A top tree $$ \Re $$ is defined for an underlying tree and a set $$ \partial{T} $$ of at most two vertices called External Boundary Vertices.

## Glossary

### Boundary Node
See Boundary Vertex.

### Boundary Vertex
A vertex in a connected subtree is a Boundary Vertex if it is connected to a vertex outside the subtree by an edge.

### External Boundary Vertices
Up to a pair of vertices in the top tree $$ \Re $$ can be called External Boundary Vertices; they can be thought of as Boundary Vertices of the cluster which represents the entire top tree.

### Cluster
A cluster is a connected subtree with at most two Boundary Vertices. The set of Boundary Vertices of a given cluster $$ \mathcal{C} $$ is denoted as $$ \partial{C}. $$ With each cluster the user may associate some meta information $$ I(\mathcal{C}), $$ and give methods to maintain it under the various internal operations.

#### Path Cluster
If $$ \pi(\mathcal{C}) $$ contains at least one edge then $$ \mathcal{C} $$ is called a Path Cluster.

#### Point Cluster
See
https://en.wikipedia.org/wiki/Top_tree
passage: For example, famous problems in the analysis of several complex variables preceding the introduction of modern definitions are the Cousin problems, asking precisely when local meromorphic data may be glued to obtain a global meromorphic function. These old problems can be simply solved after the introduction of sheaves and cohomology groups. Special examples of sheaves used in complex geometry include holomorphic line bundles (and the divisors associated to them), holomorphic vector bundles, and coherent sheaves. Since sheaf cohomology measures obstructions in complex geometry, one technique that is used is to prove vanishing theorems. Examples of vanishing theorems in complex geometry include the Kodaira vanishing theorem for the cohomology of line bundles on compact Kähler manifolds, and Cartan's theorems A and B for the cohomology of coherent sheaves on affine complex varieties. Complex geometry also makes use of techniques arising out of differential geometry and analysis. For example, the Hirzebruch-Riemann-Roch theorem, a special case of the Atiyah-Singer index theorem, computes the holomorphic Euler characteristic of a holomorphic vector bundle in terms of characteristic classes of the underlying smooth complex vector bundle. ## Classification in complex geometry One major theme in complex geometry is classification. Due to the rigid nature of complex manifolds and varieties, the problem of classifying these spaces is often tractable.
https://en.wikipedia.org/wiki/Complex_geometry
passage: Linear speed referred to the central point is simply the product of the distance $$ r $$ and the angular speed $$ \omega $$ versus the point: $$ v=r\omega, $$ another moment. Hence, angular momentum contains a double moment: $$ L = rmr \omega. $$ Simplifying slightly, $$ L = r^2 m\omega, $$ the quantity $$ r^2m $$ is the particle's moment of inertia, sometimes called the second moment of mass. It is a measure of rotational inertia. The above analogy of the translational momentum and rotational momentum can be expressed in vector form: - $$ \mathbf p = m\mathbf v $$ for linear motion - $$ \mathbf L = I\boldsymbol\omega $$ for rotation The direction of momentum is related to the direction of the velocity for linear movement. The direction of angular momentum is related to the angular velocity of the rotation. Because moment of inertia is a crucial part of the spin angular momentum, the latter necessarily includes all of the complications of the former, which is calculated by multiplying elementary bits of the mass by the squares of their distances from the center of rotation. Therefore, the total moment of inertia, and the angular momentum, is a complex function of the configuration of the matter about the center of rotation and the orientation of the rotation for the various bits.
https://en.wikipedia.org/wiki/Angular_momentum
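To make the "double moment" identity concrete, here is a minimal Python check, not from the article and with arbitrary illustrative values, that L = r·m·(rω) and L = Iω with I = mr² agree for a point mass:

```python
m = 2.0      # mass (kg); illustrative value
r = 0.5      # distance from the rotation axis (m)
omega = 3.0  # angular speed (rad/s)

v = r * omega                 # linear speed referred to the central point
L_double_moment = r * m * v   # L = r m (r omega), the "double moment"
I = m * r**2                  # moment of inertia of a point mass
L_rotational = I * omega      # L = I omega

assert abs(L_double_moment - L_rotational) < 1e-12  # both give 1.5 kg*m^2/s
```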
passage: The education requirements are the same for both degrees; however, the dissertation required is different. The PhD requires a standard research dissertation, whereas the doctor of engineering focuses on a practical dissertation. In present undergraduate engineering education, the emphasis on linear systems develops a way of thinking that dismisses nonlinear dynamics as spurious oscillations. The linear systems approach oversimplifies the dynamics of nonlinear systems. Hence, undergraduate students and teachers should recognize the educational value of chaotic dynamics. Practicing engineers will also gain more insight into nonlinear circuits and systems by having an exposure to chaotic phenomena. After graduation, continuing education courses may be needed to keep a government-issued professional engineer (PE) license valid, to keep skills fresh, to expand skills, or to keep up with new technology. ## Caribbean ### Trinidad and Tobago Engineering degree education in Trinidad and Tobago is not regulated by the Board of Professional Engineers of Trinidad and Tobago (BOETT) or the local engineering association (APETT). Professional Engineers registered with BOETT are given the credentials "R.Eng.". ## South America ### Argentina Engineering education programs at universities in Argentina span a variety of disciplines and typically require five to six years of study to complete. Most degree programs begin with foundational courses in mathematics, statistics, and the physical sciences during the first and second years, then move on to courses specific to the students' plan of study. After receiving a degree, an engineering student will go on to complete an external evaluation in order to become accredited as an engineer.
https://en.wikipedia.org/wiki/Engineering_education
passage: ### United States (Food and Drug Administration) Section 201(h) of the Federal Food Drug & Cosmetic (FD&C) Act defines a device as an "instrument, apparatus, implement, machine, contrivance, implant, in vitro reagent, or other similar or related article, including a component part, or accessory which is: - recognized in the official National Formulary, or the United States Pharmacopoeia, or any supplement to them - Intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease, in man or other animals, or - Intended to affect the structure or any function of the body of man or other animals, and which does not achieve its primary intended purposes through chemical action within or on the body of man or other animals and which is not dependent upon being metabolized for the achievement of its primary intended purposes. The term 'device' does not include software functions excluded pursuant to section 520(o). " ### European Union
https://en.wikipedia.org/wiki/Medical_device
passage: Mollusca is a phylum of protostomic invertebrate animals, whose members are known as molluscs or mollusks (). Around 76,000 extant species of molluscs are recognized, making it the second-largest animal phylum after Arthropoda. The number of additional fossil species is estimated between 60,000 and 100,000, and the proportion of undescribed species is very high. Many taxa remain poorly studied. Molluscs are the largest marine phylum, comprising about 23% of all the named marine organisms. They are highly diverse, not just in size and anatomical structure, but also in behaviour and habitat, as numerous groups are freshwater and even terrestrial species. The phylum is typically divided into 7 or 8 taxonomic classes, of which two are entirely extinct. Cephalopod molluscs, such as squid, cuttlefish, and octopuses, are among the most neurologically advanced of all invertebrates—and either the giant squid or the colossal squid is the largest known extant invertebrate species. The gastropods (snails, slugs and abalone) are by far the most diverse class and account for 80% of the total classified molluscan species. The four most universal features defining modern molluscs are a soft body composed almost entirely of muscle, a mantle with a significant cavity used for breathing and excretion, the presence of a radula (except for bivalves), and the structure of the nervous system.
https://en.wikipedia.org/wiki/Mollusca
passage: To reduce confusion, this article will adhere to the following notational conventions: Lower case letters for elements Upper case letters for subsets Upper case calligraphy letters for subsets (or equivalently, for elements such as prefilters). Upper case double-struck letters for subsets For every $$ S \subseteq X, $$ let $$ \mathbb{O}(S) := \left\{\mathcal{B} \in \mathbb{P} ~:~ S \in \mathcal{B}^{\uparrow X}\right\} $$ where $$ \mathbb{O}(X) = \mathbb{P} \text{ and } \mathbb{O}(\varnothing) = \varnothing. $$ These sets will be the basic open subsets of the Stone topology. If $$ R \subseteq S \subseteq X $$ then $$ \left\{\mathcal{B} \in \wp(\wp(X)) ~:~ R \in \mathcal{B}^{\uparrow X}\right\} ~\subseteq~ \left\{\mathcal{B} \in \wp(\wp(X)) ~:~ S \in \mathcal{B}^{\uparrow X}\right\}. $$ From this inclusion, it is possible to deduce all of the subset inclusions displayed below with the exception of $$
https://en.wikipedia.org/wiki/Filters_in_topology
passage: Via the Curry-Howard isomorphism, there is a one-to-one correspondence between the systems in the lambda cube and logical systems, namely:

| System of the cube | Logical System |
|---|---|
| λ→ | (Zeroth-order) Propositional Calculus |
| λ2 | Second-order Propositional Calculus |
| λω̲ | Weakly Higher Order Propositional Calculus |
| λω | Higher Order Propositional Calculus |
| λP | (First order) Predicate Logic |
| λP2 | Second-order Predicate Calculus |
| λPω̲ | Weak Higher Order Predicate Calculus |
| λC | Calculus of Constructions |

All the logics are implicative (i.e. the only connectives are $$ \to $$ and $$ \forall $$ ), however one can define other connectives such as $$ \wedge $$ or $$ \neg $$ in an impredicative way in second and higher order logics. In the weak higher order logics, there are variables for higher order predicates, but no quantification on those can be done.
https://en.wikipedia.org/wiki/Lambda_cube
passage: #### Scrambling To prevent short repeating sequences (e.g., runs of 0s or 1s) from forming spectral lines that may complicate symbol tracking at the receiver or interfere with other transmissions, the data bit sequence is combined with the output of a linear-feedback register before modulation and transmission. This scrambling is removed at the receiver after demodulation. When the LFSR runs at the same bit rate as the transmitted symbol stream, this technique is referred to as scrambling. When the LFSR runs considerably faster than the symbol stream, the LFSR-generated bit sequence is called chipping code. The chipping code is combined with the data using exclusive or before transmitting using binary phase-shift keying or a similar modulation method. The resulting signal has a higher bandwidth than the data, and therefore this is a method of spread-spectrum communication. When used only for the spread-spectrum property, this technique is called direct-sequence spread spectrum; when used to distinguish several signals transmitted in the same channel at the same time and frequency, it is called code-division multiple access. Neither scheme should be confused with encryption or encipherment; scrambling and spreading with LFSRs do not protect the information from eavesdropping. They are instead used to produce equivalent streams that possess convenient engineering properties to allow robust and efficient modulation and demodulation.
https://en.wikipedia.org/wiki/Linear-feedback_shift_register
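As an illustration of the scrambling case (LFSR running at the bit rate, combined with the data by exclusive or), here is a minimal Python sketch; the 7-bit register and its taps are chosen for the example (x^7 + x^6 + 1, a maximal-length polynomial), not taken from the passage. Descrambling at the receiver is the same XOR with an identically seeded register:

```python
def lfsr_bits(state, taps, n):
    """Yield n output bits from a Fibonacci LFSR.

    state: initial register contents as a nonzero int
    taps:  1-based bit positions XORed together to form the feedback
    """
    width = max(taps)
    for _ in range(n):
        yield state & 1                      # output the low bit
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1     # feedback from the tap positions
        state = (state >> 1) | (fb << (width - 1))

data = [1] * 8 + [0] * 8                     # long runs, bad for symbol tracking
key = list(lfsr_bits(0b1010011, taps=(7, 6), n=len(data)))
scrambled = [d ^ k for d, k in zip(data, key)]       # transmitted bits
descrambled = [s ^ k for s, k in zip(scrambled, key)]
assert descrambled == data                   # the receiver recovers the data
```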
passage: In graph theory, an isomorphism of graphs G and H is a bijection between the vertex sets of G and H $$ f \colon V(G) \to V(H) $$ such that any two vertices u and v of G are adjacent in G if and only if $$ f(u) $$ and $$ f(v) $$ are adjacent in H. This kind of bijection is commonly described as "edge-preserving bijection", in accordance with the general notion of isomorphism being a structure-preserving bijection. If an isomorphism exists between two graphs, then the graphs are called isomorphic and denoted as $$ G\simeq H $$ . In the case when the isomorphism is a mapping of a graph onto itself, i.e., when G and H are one and the same graph, the isomorphism is called an automorphism of G. Graph isomorphism is an equivalence relation on graphs and as such it partitions the class of all graphs into equivalence classes. A set of graphs isomorphic to each other is called an isomorphism class of graphs. The question of whether graph isomorphism can be determined in polynomial time is a major unsolved problem in computer science, known as the graph isomorphism problem. The two graphs shown below are isomorphic, despite their different looking drawings. [Figure: Graph G and Graph H, with an isomorphism between G and H given by f(a) = 1, f(b) = 6, f(c) = 8, …]
https://en.wikipedia.org/wiki/Graph_isomorphism
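A brute-force check of the definition is easy to write, though it runs in factorial time; the following Python sketch (illustrative, with made-up toy graphs) searches for an edge-preserving bijection:

```python
from itertools import permutations

def find_isomorphism(g, h):
    """Return an edge-preserving bijection f: V(G) -> V(H), or None.

    g, h: dicts mapping each vertex to the set of its neighbours.
    """
    vg, vh = sorted(g), sorted(h)
    if len(vg) != len(vh):
        return None
    edges_g = {frozenset((u, v)) for u in g for v in g[u]}
    edges_h = {frozenset((u, v)) for u in h for v in h[u]}
    for perm in permutations(vh):
        f = dict(zip(vg, perm))
        # u ~ v in G must hold exactly when f(u) ~ f(v) in H
        if {frozenset((f[u], f[v])) for u, v in edges_g} == edges_h:
            return f
    return None

g = {1: {2}, 2: {1, 3}, 3: {2}}                # path 1 - 2 - 3
h = {'x': {'y'}, 'y': {'x', 'z'}, 'z': {'y'}}  # path x - y - z
print(find_isomorphism(g, h))                  # {1: 'x', 2: 'y', 3: 'z'}
```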
passage: The equation can be split into a linear part, $$ {\partial A_D \over \partial z} = - {i\beta_2 \over 2} {\partial^2 A \over \partial t^2} = \hat D A, $$ and a nonlinear part, $$ {\partial A_N \over \partial z} = i \gamma | A |^2 A = \hat N A. $$ Both the linear and the nonlinear parts have analytical solutions, but the nonlinear Schrödinger equation containing both parts does not have a general analytical solution. However, if only a 'small' step $$ h $$ is taken along $$ z $$ , then the two parts can be treated separately with only a 'small' numerical error. One can therefore first take a small nonlinear step, $$ A_N(t, z+h) = \exp\left[i \gamma |A(t, z)|^2 h \right] A(t, z), $$ using the analytical solution. Note that this ansatz imposes $$ |A(z)|^2=\text{const}. $$ and consequently $$ \gamma \in \mathbb{R} $$ .
https://en.wikipedia.org/wiki/Split-step_method
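A compact numerical sketch of one such alternating scheme, assuming the usual Fourier treatment of the dispersive part (all parameter values below are illustrative, not from the passage):

```python
import numpy as np

beta2, gamma, h = -1.0, 1.0, 1e-3      # dispersion, nonlinearity, step size
n, T = 1024, 40.0
t = np.linspace(-T / 2, T / 2, n, endpoint=False)
omega = 2 * np.pi * np.fft.fftfreq(n, d=T / n)   # angular frequency grid
A = 1.0 / np.cosh(t)                              # initial pulse envelope

def split_step(A, h):
    # nonlinear part, solved exactly: A -> exp(i*gamma*|A|^2*h) * A
    A = np.exp(1j * gamma * np.abs(A) ** 2 * h) * A
    # linear part, solved exactly in the Fourier domain (d^2/dt^2 -> -omega^2)
    return np.fft.ifft(np.fft.fft(A) * np.exp(1j * (beta2 / 2) * omega**2 * h))

for _ in range(1000):      # propagate from z = 0 to z = 1 in 'small' steps
    A = split_step(A, h)
print(np.max(np.abs(A)))   # pulse peak after propagation
```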
passage: This may be a valid approach in pregnancy, in which the other modalities would increase the risk of birth defects in the unborn child. However, a negative scan does not rule out PE, and low-radiation dose scanning may be required if the mother is deemed at high risk of having a pulmonary embolism. The main use of ultrasonography of the legs is therefore in those with clinical symptoms suggestive of deep vein thrombosis. #### Fluoroscopic pulmonary angiography Historically, the gold standard for diagnosis was pulmonary angiography by fluoroscopy, but this has fallen into disuse with the increased availability of non-invasive techniques that offer similar diagnostic accuracy. ### Electrocardiogram The primary use of the ECG is to rule out other causes of chest pain. An electrocardiogram (ECG) is routinely done on people with chest pain to quickly diagnose myocardial infarctions (heart attacks), an important differential diagnosis in an individual with chest pain. While certain ECG changes may occur with PE, none are specific enough to confirm or sensitive enough to rule out the diagnosis. An ECG may show signs of right heart strain or acute cor pulmonale in cases of large PEs – the classic signs are a large S wave in lead I, a large Q wave in lead III, and an inverted T wave in lead III (S1Q3T3), which occurs in 12–50% of people with the diagnosis, yet also occurs in 12% without the diagnosis.
https://en.wikipedia.org/wiki/Pulmonary_embolism
passage: Pointer doubling operates on an array `successor` with an entry for every vertex in the graph. Each `successor[i]` is initialized with the parent index of vertex `i` if that vertex is not a root or to `i` itself if that vertex is a root. At each iteration, each successor is updated to its successor's successor. The root is found when the successor's successor points to itself. The following pseudocode demonstrates the algorithm.

algorithm
    Input: An array parent representing a forest of trees. parent[i] is the parent of vertex i or itself for a root
    Output: An array containing the root ancestor for every vertex

    for i ← 1 to length(parent) do in parallel
        successor[i] ← parent[i]
    while true
        for i ← 1 to length(successor) do in parallel
            successor_next[i] ← successor[successor[i]]
        if successor_next = successor then break
        for i ← 1 to length(successor) do in parallel
            successor[i] ← successor_next[i]
    return successor

The following image provides an example of using pointer jumping on a small forest. On each iteration the successor points to the vertex following one more successor. After two iterations, every vertex points to its root node.

## History and examples

Although the name pointer jumping would come later, JáJá attributes the first uses of the technique in early parallel graph algorithms and list ranking.
https://en.wikipedia.org/wiki/Pointer_jumping
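For reference, a sequential Python rendering of the pseudocode above (the "do in parallel" loops become ordinary loops; the six-vertex forest is a made-up example):

```python
def pointer_jumping(parent):
    """Sequential simulation of the parallel pointer-doubling algorithm.

    parent[i] is the parent of vertex i, or i itself when i is a root.
    Returns a list mapping every vertex to the root of its tree.
    """
    successor = list(parent)
    while True:
        # each successor jumps to its successor's successor
        successor_next = [successor[successor[i]] for i in range(len(successor))]
        if successor_next == successor:
            return successor
        successor = successor_next

# a forest with two trees: 0 <- 1 <- 2 <- 3 and 4 <- 5
parent = [0, 0, 1, 2, 4, 4]
print(pointer_jumping(parent))  # [0, 0, 0, 0, 4, 4]
```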
passage: In other words, it is enough that there is a null set $$ N $$ such that the sequence $$ \{f_n(x)\} $$ non-decreases for every $$ {x\in X\setminus N}. $$ To see why this is true, we start with an observation that allowing the sequence $$ \{ f_n \} $$ to pointwise non-decrease almost everywhere causes its pointwise limit $$ f $$ to be undefined on some null set $$ N $$ . On that null set, $$ f $$ may then be defined arbitrarily, e.g. as zero, or in any other way that preserves measurability. To see why this will not affect the outcome of the theorem, note that since $$ {\mu(N)=0}, $$ we have, for every $$ k, $$ $$ \int_X f_k \,d\mu = \int_{X \setminus N} f_k \,d\mu $$ and $$ \int_X f \,d\mu = \int_{X \setminus N} f \,d\mu, $$ provided that $$ f $$ is $$ (\Sigma,\operatorname{\mathcal B}_{\R_{\geq 0}}) $$ -measurable. (These equalities follow directly from the definition of the Lebesgue integral for a non-negative function). Remark 4. The proof below does not use any properties of the Lebesgue integral except those established here.
https://en.wikipedia.org/wiki/Monotone_convergence_theorem
passage: Major figures in contemporary linguistics include Ferdinand de Saussure and Noam Chomsky. Language is thought to have gradually diverged from earlier primate communication systems when early hominins acquired the ability to form a theory of mind and shared intentionality. This development is sometimes thought to have coincided with an increase in brain volume, and many linguists see the structures of language as having evolved to serve specific communicative and social functions. Language is processed in many different locations in the human brain, but especially in Broca's and Wernicke's areas. Humans acquire language through social interaction in early childhood, and children generally speak fluently by approximately three years old. Language and culture are codependent. Therefore, in addition to its strictly communicative uses, language has social uses such as signifying group identity, social stratification, as well as use for social grooming and entertainment. Languages evolve and diversify over time, and the history of their evolution can be reconstructed by comparing modern languages to determine which traits their ancestral languages must have had in order for the later developmental stages to occur. A group of languages that descend from a common ancestor is known as a language family; in contrast, a language that has been demonstrated not to have any living or non-living relationship with another language is called a language isolate. There are also many unclassified languages whose relationships have not been established, and spurious languages may have not existed at all. Academic consensus holds that between 50% and 90% of languages spoken at the beginning of the 21st century will probably have become extinct by the year 2100.
https://en.wikipedia.org/wiki/Language
passage: The set of all 1×1 unitary matrices coincides with the circle group; the unitary condition is equivalent to the condition that its elements have absolute value 1. Therefore, the circle group is canonically isomorphic to the first unitary group, i.e., $$ \mathbb T \cong \mbox{U}(1). $$ The exponential function gives rise to a map $$ \exp : \R \to \mathbb T $$ from the additive real numbers to the circle group known as Euler's formula $$ \theta \mapsto e^{i\theta} = \cos\theta + i \sin \theta, $$ where $$ \theta \in \mathbb{R} $$ corresponds to the angle (in radians) on the unit circle as measured counterclockwise from the positive x-axis. The property $$ e^{i\theta_1} e^{i\theta_2} = e^{i(\theta_1+\theta_2)}, \quad \forall \theta_1 ,\theta_2 \in \mathbb{R}, $$ makes $$ \exp : \R \to \mathbb T $$ a group homomorphism. While the map is surjective, it is not injective and therefore not an isomorphism. The kernel of this map is the set of all integer multiples of $$ 2\pi $$.
https://en.wikipedia.org/wiki/Circle_group
passage: However, it was not until the 1960s that researchers started to investigate packet switching — a technology that allows chunks of data to be sent between different computers without first passing through a centralized mainframe. A four-node network emerged on 5 December 1969. This network soon became the ARPANET, which by 1981 would consist of 213 nodes. ARPANET's development centered around the Request for Comment process and on 7 April 1969, RFC 1 was published. This process is important because ARPANET would eventually merge with other networks to form the Internet, and many of the communication protocols that the Internet relies upon today were specified through the Request for Comment process. In September 1981, RFC 791 introduced the Internet Protocol version 4 (IPv4) and RFC 793 introduced the Transmission Control Protocol (TCP) — thus creating the TCP/IP protocol that much of the Internet relies upon today. ### Optical fiber Optical fiber can be used as a medium for telecommunication and computer networking because it is flexible and can be bundled into cables. It is especially advantageous for long-distance communications, because light propagates through the fiber with little attenuation compared to electrical cables. This allows long distances to be spanned with few repeaters. In 1966 Charles K. Kao and George Hockham proposed optical fibers at STC Laboratories (STL) at Harlow, England, when they showed that the losses of 1000 dB/km in existing glass (compared to 5-10 dB/km in coaxial cable) were due to contaminants, which could potentially be removed.
https://en.wikipedia.org/wiki/Telecommunications_engineering
passage: ## Other applications

- Cayley's formula can be strengthened to prove the following claim: The number of spanning trees in a complete graph $$ K_n $$ with a degree $$ d_i $$ specified for each vertex $$ i $$ is equal to the multinomial coefficient $$ \binom{n-2}{d_1-1,\,d_2-1,\,\dots,\,d_n-1}=\frac{(n-2)!}{(d_1-1)!(d_2-1)!\cdots(d_{n}-1)!}. $$ The proof follows by observing that in the Prüfer sequence number $$ i $$ appears exactly $$ d_i-1 $$ times.
- Cayley's formula can be generalized: a labeled tree is in fact a spanning tree of the labeled complete graph. By placing restrictions on the enumerated Prüfer sequences, similar methods can give the number of spanning trees of a complete bipartite graph. If $$ K_{n_1,n_2} $$ is the complete bipartite graph with vertices 1 to $$ n_1 $$ in one partition and vertices $$ n_1+1 $$ to $$ n $$ in the other partition, the number of labeled spanning trees of $$ K_{n_1,n_2} $$ is $$ n_1^{n_2-1} n_2^{n_1-1} $$, where $$ n = n_1 + n_2 $$.
- Generating uniformly distributed random Prüfer sequences and converting them into the corresponding trees is a straightforward method of generating uniformly distributed random labelled trees; a sketch of this method follows below.
https://en.wikipedia.org/wiki/Pr%C3%BCfer_sequence
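A Python sketch of that random-tree method, assuming the standard decoding of a Prüfer sequence (repeatedly join the smallest remaining leaf to the current sequence element):

```python
import random
from collections import Counter

def prufer_to_tree(seq, n):
    """Decode a Prüfer sequence of length n-2 into the edge list of a
    labelled tree on vertices 1..n."""
    degree = Counter({v: 1 for v in range(1, n + 1)})
    for v in seq:
        degree[v] += 1                 # v appears deg(v) - 1 times in seq
    edges = []
    for v in seq:
        leaf = min(u for u in range(1, n + 1) if degree[u] == 1)
        edges.append((leaf, v))        # attach the smallest current leaf
        degree[leaf] -= 1
        degree[v] -= 1
    u, w = [x for x in range(1, n + 1) if degree[x] == 1]
    edges.append((u, w))               # the two remaining vertices
    return edges

# a uniformly random labelled tree on n vertices via a uniform Prüfer sequence
n = 6
seq = [random.randint(1, n) for _ in range(n - 2)]
print(prufer_to_tree(seq, n))
```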
passage: Draft Report on the Algorithmic Language ALGOL 68 – Edited by: Adriaan van Wijngaarden, Barry J. Mailloux, John Peck and Cornelis H. A. Koster. - October 1968: Penultimate Draft Report on the Algorithmic Language ALGOL 68 — Chapters 1-9 Chapters 10-12 — Edited by: A. van Wijngaarden, B.J. Mailloux, J. E. L. Peck and C. H. A. Koster. - December 1968: Report on the Algorithmic Language ALGOL 68 — Offprint from Numerische Mathematik, 14, 79-218 (1969); Springer-Verlag. — Edited by: A. van Wijngaarden, B. J. Mailloux, J. E. L. Peck and C. H. A. Koster. - March 1970: Minority report, ALGOL Bulletin AB31.1.1 — signed by Edsger Dijkstra, Fraser Duncan, Jan Garwick, Tony Hoare, Brian Randell, Gerhard Seegmüller, Wlad Turski, and Mike Woodger. - September 1973: Revised Report on the Algorithmic Language Algol 68 — Springer-Verlag 1976 — Edited by: A. van Wijngaarden, B. Mailloux, J. Peck, K. Koster, Michel Sintzoff, Charles H. Lindsey, Lambert Meertens and Richard G. Fisker. - other WG 2.1 members active in ALGOL 68 design: Friedrich L. Bauer • Hans Bekic • Gerhard Goos • Peter Zilahy Ingerman • Peter Landin • John McCarthy • Jack Merner • Peter Naur • Manfred Paul •
https://en.wikipedia.org/wiki/ALGOL_68
passage: A soap bubble (commonly referred to as simply a bubble) is an extremely thin film of soap or detergent and water enclosing air that forms a hollow sphere with an iridescent surface. Soap bubbles usually last for only a few seconds before bursting, either on their own or on contact with another object. They are often used for children's enjoyment, but they are also used in artistic performances. Assembling many bubbles results in foam. When light shines onto a bubble it appears to change colour. Unlike those seen in a rainbow, which arise from differential refraction, the colours seen in a soap bubble arise from light wave interference, reflecting off the front and back surfaces of the thin soap film. Depending on the thickness of the film, different colours interfere constructively and destructively. ## Mathematics Soap bubbles are physical examples of the complex mathematical problem of minimal surface. They will assume the shape of least surface area possible containing a given volume. A true minimal surface is more properly illustrated by a soap film, which has equal pressure on both sides, becoming a surface with zero mean curvature. A soap bubble is a closed soap film: due to the difference in outside and inside pressure, it is a surface of constant mean curvature.
https://en.wikipedia.org/wiki/Soap_bubble
passage: In this example, the time derivative of $$ q $$ is the velocity, and so the first Hamilton equation means that the particle's velocity equals the derivative of its kinetic energy with respect to its momentum. The time derivative of the momentum equals the Newtonian force, and so the second Hamilton equation means that the force equals the negative gradient of potential energy. ## Example A spherical pendulum consists of a mass m moving without friction on the surface of a sphere. The only forces acting on the mass are the reaction from the sphere and gravity. Spherical coordinates are used to describe the position of the mass in terms of $$ (r, \theta, \varphi) $$, where $$ r $$ is fixed, $$ r = \ell $$.
https://en.wikipedia.org/wiki/Hamiltonian_mechanics
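To see the two Hamilton equations in action, here is a small Python sketch (not from the article; the harmonic potential and all constants are illustrative) integrating dq/dt = ∂H/∂p and dp/dt = −∂H/∂q with the semi-implicit Euler method:

```python
# H = p^2/(2m) + V(q) with V(q) = k q^2 / 2, a unit-mass harmonic oscillator
m, k_spring = 1.0, 4.0

def V_prime(q):
    return k_spring * q            # dV/dq

q, p, dt = 1.0, 0.0, 1e-3
for _ in range(10_000):            # integrate to t = 10
    p -= V_prime(q) * dt           # dp/dt = -dV/dq   (force)
    q += (p / m) * dt              # dq/dt = p/m      (velocity = dT/dp)

energy = p**2 / (2 * m) + k_spring * q**2 / 2
print(q, p, energy)                # energy stays near the initial value 2.0
```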
passage: This solution is obtained from the preceding formula as applied to the data f(x, t) suitably extended to R × [0,∞), so as to be an odd function of the variable x, that is, letting f(−x, t) := −f(x, t) for all x and t. Correspondingly, the solution of the inhomogeneous problem on (−∞,∞) is an odd function with respect to the variable x for all values of t, and in particular it satisfies the homogeneous Dirichlet boundary conditions u(0, t) = 0. Problem on (0,∞) with homogeneous Neumann boundary conditions and initial conditions $$ \begin{cases} u_{t} = ku_{xx}+f(x,t) & (x, t) \in [0, \infty) \times (0, \infty) \\ u(x,0)=0 & \text{IC} \\ u_x(0,t)=0 & \text{BC} \end{cases} $$ $$ u(x,t)=\int_{0}^{t}\int_{0}^{\infty} \frac{1}{\sqrt{4\pi k(t-s)}} \left(\exp\left(-\frac{(x-y)^2}{4k(t-s)}\right)+\exp\left(-\frac{(x+y)^2}{4k(t-s)}\right)\right) f(y,s)\,dy\,ds $$ Comment.
https://en.wikipedia.org/wiki/Heat_equation
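The Neumann formula works because the mirrored kernel on (0, ∞) carries the same total mass as the free-space kernel on the whole line; consequently, for f ≡ 1 the displayed solution reduces to u(x, t) = t. A quick numerical check of that mass identity (values are arbitrary; not from the article):

```python
import numpy as np
from scipy.integrate import quad

k, x, tau = 0.7, 1.3, 0.25    # diffusivity, spatial point, elapsed time t - s
c = 4 * k * tau

def kernel_pair(y):
    # heat kernel at x - y plus its mirror image at x + y
    return (np.exp(-(x - y)**2 / c) + np.exp(-(x + y)**2 / c)) / np.sqrt(np.pi * c)

val, err = quad(kernel_pair, 0, np.inf)
print(val)   # ~1.0: same total mass as the full-line kernel
```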
passage: This is true for any base. As an example, to find the sixth element of the above sequence, we'd write 6 = 1*2^2 + 1*2^1 + 0*2^0 = 110 in binary, which can be inverted and placed after the decimal point to give 0.011 in binary, i.e. 0*2^-1 + 1*2^-2 + 1*2^-3 = 3/8. So the sequence above is the same as 0.1, 0.01, 0.11, 0.001, 0.101, 0.011, 0.111, 0.0001, 0.1001,... To generate the sequence for 3 for the other dimension, we divide the interval (0,1) in thirds, then ninths, twenty-sevenths, etc., which generates 1/3, 2/3, 1/9, 4/9, 7/9, 2/9, 5/9, 8/9, 1/27,... When we pair them up, we get a sequence of points in a unit square: (1/2, 1/3), (1/4, 2/3), (3/4, 1/9), (1/8, 4/9), (5/8, 7/9), (3/8, 2/9), (7/8, 5/9), (1/16, 8/9), (9/16, 1/27). Even though standard Halton sequences perform very well in low dimensions, correlation problems have been noted between sequences generated from higher primes. For example, if we started with the primes 17 and 19, the first 16 pairs of points: (1/17, 1/19), (2/17, 2/19), (3/17, 3/19) ... (16/17, 16/19) would have perfect linear correlation. To avoid this, it is common to drop the first 20 entries, or some other predetermined quantity depending on the primes chosen. Several other methods have also been proposed.
https://en.wikipedia.org/wiki/Halton_sequence
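A short Python generator for this construction, assuming the standard digit-reversal formulation (mirror the base-b digits of the index about the radix point):

```python
def halton(index, base):
    """Return element `index` (1-based) of the base-`base` van der Corput
    sequence, one coordinate of a Halton point."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)   # next digit, mirrored past the point
        index //= base
    return result

# 2-D Halton points pair the base-2 and base-3 sequences
points = [(halton(i, 2), halton(i, 3)) for i in range(1, 10)]
print(points[0])     # (0.5, 0.333...), i.e. (1/2, 1/3)
print(halton(6, 2))  # 0.375 = 3/8, matching the worked example above
```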
passage: Thus, a sequence $$ f_n $$ converges to f uniformly if for all hyperreal x in the domain of $$ f^* $$ and all infinite n, $$ f_n^*(x) $$ is infinitely close to $$ f^*(x) $$ (see microcontinuity for a similar definition of uniform continuity). In contrast, pointwise continuity requires this only for real x. ## Examples For $$ x \in [0,1) $$ , a basic example of uniform convergence can be illustrated as follows: the sequence $$ (1/2)^{x+n} $$ converges uniformly, while $$ x^n $$ does not. Specifically, assume $$ \varepsilon=1/4 $$ . Each function $$ (1/2)^{x+n} $$ is less than or equal to $$ 1/4 $$ when $$ n \geq 2 $$ , regardless of the value of $$ x $$ . On the other hand, $$ x^n $$ is only less than or equal to $$ 1/4 $$ at ever increasing values of $$ n $$ when values of $$ x $$ are selected closer and closer to 1 (explained more in depth further below).
https://en.wikipedia.org/wiki/Uniform_convergence
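A quick numeric illustration of the contrast (a grid check, not a proof; the grid is arbitrary): the supremum of (1/2)^(x+n) over [0, 1) falls below ε = 1/4 once n ≥ 2, while the supremum of x^n stays near 1 for every n:

```python
import numpy as np

xs = np.linspace(0, 1, 1000, endpoint=False)   # grid on [0, 1)
for n in (2, 5, 10):
    print(n,
          np.max((1 / 2) ** (xs + n)),   # <= 1/4 for every x once n >= 2
          np.max(xs ** n))               # stays near 1: no uniform bound
```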
passage: To address this, the National Science Foundation (NSF) launched the Arabidopsis 2010 Project, which aimed to characterize the function of every gene in Arabidopsis by 2010. The project was largely successful and significantly advanced the functional annotation of most genes. However, some genes remained uncharacterized due to their redundancy or subtle phenotypic effects. Since then, research has continued to expand our understanding of the Arabidopsis genome. To date, 27,655 coding genes and 5,178 non-coding genes have been identified, with research continuing today. Arabidopsis is now the most well-known plant both genetically and in terms of function and has played a huge role in furthering molecular biology, medicine and genetic technology. One of the most notable applications of Arabidopsis is in Agrobacterium-mediated transformation, a technique widely used in plant biotechnology. Arabidopsis is particularly well-suited for this method, as its petals can be simply dipped in a liquid suspension of Agrobacterium, allowing for efficient genetic transformation. This approach has made Arabidopsis a cornerstone of genetic engineering since the petal dipping technique was refined in 2006. Since then, Agrobacterium-mediated transformation has contributed to advancements in many biological and medical contexts. Due to extensive research conducted on Arabidopsis thaliana, a comprehensive database called The Arabidopsis Information Resource (TAIR) has been established as a central repository for various datasets and information on the species.
https://en.wikipedia.org/wiki/Plant_genetics
passage: ### Definition One way of constructing a torus is as the quotient space $$ \mathbb{T^2} = \Reals^2 / \Z^2 $$ of a two-dimensional real vector space by the additive subgroup of integer vectors, with the corresponding projection $$ \pi : \Reals^2 \to \mathbb{T^2}. $$ Each point in the torus has as its preimage one of the translates of the square lattice $$ \Z^2 $$ in $$ \Reals^2, $$ and $$ \pi $$ factors through a map that takes any point in the plane to a point in the unit square $$ [0, 1)^2 $$ given by the fractional parts of the original point's Cartesian coordinates. Now consider a line in $$ \Reals^2 $$ given by the equation $$ y = k x. $$ If the slope $$ k $$ of the line is rational, it can be represented by a fraction and a corresponding lattice point of $$ \Z^2. $$ It can be shown that then the projection of this line is a simple closed curve on a torus. If, however, $$ k $$ is irrational, it will not cross any lattice points except 0, which means that its projection on the torus will not be a closed curve, and the restriction of $$ \pi $$ on this line is injective.
https://en.wikipedia.org/wiki/Linear_flow_on_the_torus
passage: ##### Cuckoo hashing Cuckoo hashing is a form of open addressing collision resolution technique which guarantees $$ O(1) $$ worst-case lookup complexity and constant amortized time for insertions. The collision is resolved through maintaining two hash tables, each having its own hashing function: a collided slot gets replaced with the given item, and the pre-occupied element of the slot gets displaced into the other hash table. The process continues until every key has its own spot in the empty buckets of the tables; if the procedure enters into an infinite loop—which is identified through maintaining a threshold loop counter—both hash tables get rehashed with newer hash functions and the procedure continues. ##### Hopscotch hashing Hopscotch hashing is an open addressing based algorithm which combines the elements of cuckoo hashing, linear probing and chaining through the notion of a neighbourhood of buckets—the subsequent buckets around any given occupied bucket, also called a "virtual" bucket. The algorithm is designed to deliver better performance when the load factor of the hash table grows beyond 90%; it also provides high throughput in concurrent settings, thus well suited for implementing a resizable concurrent hash table. The neighbourhood characteristic of hopscotch hashing guarantees a property that, the cost of finding the desired item from any given bucket within the neighbourhood is very close to the cost of finding it in the bucket itself; the algorithm attempts to move an item into its neighbourhood—with a possible cost involved in displacing other items.
https://en.wikipedia.org/wiki/Hash_table
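A toy Python sketch of the cuckoo scheme described above, assuming two tables with independent hash functions; the displacement loop is bounded by a threshold counter, after which both tables are rehashed into a larger size. This is illustrative, not a production design:

```python
class CuckooHash:
    """Toy cuckoo hash set with two tables and two hash functions."""

    def __init__(self, size=11):
        self.size = size
        self.tables = [[None] * size, [None] * size]

    def _hash(self, which, key):
        return hash((which, key)) % self.size

    def lookup(self, key):
        # at most two slots are ever inspected, hence O(1) worst case
        return any(self.tables[i][self._hash(i, key)] == key for i in (0, 1))

    def insert(self, key):
        if self.lookup(key):
            return
        for _ in range(self.size):            # threshold loop counter
            for i in (0, 1):
                slot = self._hash(i, key)
                if self.tables[i][slot] is None:
                    self.tables[i][slot] = key
                    return
            # both candidate slots occupied: displace the occupant of table 0
            slot = self._hash(0, key)
            key, self.tables[0][slot] = self.tables[0][slot], key
        self._rehash()                        # possible cycle: rebuild larger
        self.insert(key)

    def _rehash(self):
        old = [k for t in self.tables for k in t if k is not None]
        self.size = 2 * self.size + 1
        self.tables = [[None] * self.size, [None] * self.size]
        for k in old:
            self.insert(k)

t = CuckooHash()
for word in ["alpha", "beta", "gamma", "delta"]:
    t.insert(word)
assert t.lookup("gamma") and not t.lookup("omega")
```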
passage: At the application layer, the TCP/IP model distinguishes between user protocols and support protocols. Support protocols provide services to a system of network infrastructure. User protocols are used for actual user applications. For example, FTP is a user protocol and DNS is a support protocol. Although the applications are usually aware of key qualities of the transport layer connection such as the endpoint IP addresses and port numbers, application layer protocols generally treat the transport layer (and lower) protocols as black boxes which provide a stable network connection across which to communicate. The transport layer and lower-level layers are unconcerned with the specifics of application layer protocols. Routers and switches do not typically examine the encapsulated traffic, rather they just provide a conduit for it. However, some firewall and bandwidth throttling applications use deep packet inspection to interpret application data. An example is the Resource Reservation Protocol (RSVP). It is also sometimes necessary for applications affected by NAT to consider the application payload. ## Layering evolution and representations in the literature The Internet protocol suite evolved through research and development funded over a period of time. In this process, the specifics of protocol components and their layering changed. In addition, parallel research and commercial interests from industry associations competed with design features. In particular, efforts in the International Organization for Standardization led to a similar goal, but with a wider scope of networking in general. Efforts to consolidate the two principal schools of layering, which were superficially similar, but diverged sharply in detail, led independent textbook authors to formulate abridging teaching tools. The following table shows various such networking models.
https://en.wikipedia.org/wiki/Internet_protocol_suite
passage: The components of the extra vector fields (A) in the D-brane actions can be thought of as extra coordinates (X) in disguise. However, the known symmetries including supersymmetry currently restrict the spinors to 32-components—which limits the number of dimensions to 11 (or 12 if you include two time dimensions.) Some physicists (e.g., John Baez et al.) have speculated that the exceptional Lie groups E6, E7 and E8 having maximum orthogonal subgroups SO(10), SO(12) and SO(16) may be related to theories in 10, 12 and 16 dimensions; 10 dimensions corresponding to string theory and the 12 and 16 dimensional theories being yet undiscovered but would be theories based on 3-branes and 7-branes, respectively. However, this is a minority view within the string community. Since E7 is in some sense F4 quaternified and E8 is F4 octonified, the 12 and 16 dimensional theories, if they did exist, may involve the noncommutative geometry based on the quaternions and octonions, respectively. From the above discussion, it can be seen that physicists have many ideas for extending superstring theory beyond the current 10 dimensional theory, but so far all have been unsuccessful. ### Kac–Moody algebras Since strings can have an infinite number of modes, the symmetry used to describe string theory is based on infinite dimensional Lie algebras. Some Kac–Moody algebras that have been considered as symmetries for M-theory have been E10 and E11 and their supersymmetric extensions.
https://en.wikipedia.org/wiki/Superstring_theory
passage: If a morphism has both left-inverse and right-inverse, then the two inverses are equal, so f is an isomorphism, and g is called simply the inverse of f. Inverse morphisms, if they exist, are unique. The inverse g is also an isomorphism, with inverse f. Two objects with an isomorphism between them are said to be isomorphic or equivalent. While every isomorphism is a bimorphism, a bimorphism is not necessarily an isomorphism. For example, in the category of commutative rings the inclusion $$ \Z \to \Q $$ is a bimorphism that is not an isomorphism. However, any morphism that is both an epimorphism and a split monomorphism, or both a monomorphism and a split epimorphism, must be an isomorphism. A category, such as Set, in which every bimorphism is an isomorphism is known as a balanced category. ### Endomorphisms and automorphisms A morphism $$ f : X \to X $$ (that is, a morphism with identical source and target) is an endomorphism of X. An idempotent endomorphism f is a split endomorphism if it admits a decomposition $$ f = h \circ g $$ with $$ g \circ h = \operatorname{id}. $$ In particular, the Karoubi envelope of a category splits every idempotent morphism. An automorphism is a morphism that is both an endomorphism and an isomorphism. In every category, the automorphisms of an object always form a group, called the automorphism group of the object.
https://en.wikipedia.org/wiki/Morphism
passage: Then $$ \sum^\infty_{n=1} {\frac{1}{n(n+k)}} = \frac{H_k}{k} $$ where Hk is the kth harmonic number. - Let k and m with k $$ \neq $$ m be positive integers. Then $$ \sum^\infty_{n=1} {\frac{1}{(n+k)(n+k+1)\dots(n+m-1)(n+m)}} = \frac{1}{m-k} \cdot \frac{k!}{m!} $$ where $$ ! $$ denotes the factorial operation. - Many trigonometric functions also admit representation as differences, which may reveal telescopic canceling between the consecutive terms.
https://en.wikipedia.org/wiki/Telescoping_series
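A quick numerical confirmation of the first identity above (an illustrative check, not from the article), using exact rational arithmetic for the harmonic number and a float partial sum whose tail is O(1/N):

```python
from fractions import Fraction

k = 3
H_k = sum(Fraction(1, j) for j in range(1, k + 1))   # H_3 = 11/6
limit = H_k / k                                       # 11/18

partial = sum(1.0 / (n * (n + k)) for n in range(1, 100_001))
print(float(limit), partial)   # 0.6111... and ~0.61110..., agreeing to ~1e-5
```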
passage: These points are arbitrarily defined. They are used where GSSPs have not yet been established. Research is ongoing to define GSSPs for the base of all units that are currently defined by GSSAs. The standard international units of the geologic time scale are published by the International Commission on Stratigraphy on the International Chronostratigraphic Chart; however, regional terms are still in use in some areas. The numeric values on the International Chronostratigraphic Chart are represented by the unit Ma (megaannum, for 'million years'). For example, 201.4 Ma, the lower boundary of the Jurassic Period, is defined as 201,400,000 years old with an uncertainty of 200,000 years. Other SI prefix units commonly used by geologists are Ga (gigaannum, billion years), and ka (kiloannum, thousand years), with the latter often represented in calibrated units (before present). ## Naming of geologic time The names of geologic time units are defined for chronostratigraphic units with the corresponding geochronologic unit sharing the same name with a change to the suffix (e.g. Phanerozoic Eonothem becomes the Phanerozoic Eon). Names of erathems in the Phanerozoic were chosen to reflect major changes in the history of life on Earth: Paleozoic (old life), Mesozoic (middle life), and Cenozoic (new life).
https://en.wikipedia.org/wiki/Geologic_time_scale
passage: Since the aberration angle depends on the relationship between the velocity of the receiver and the speed of the incident light, passage of the incident light through a refractive medium should change the aberration angle. In 1810, Arago used this expected phenomenon in a failed attempt to measure the speed of light, and in 1870, George Airy tested the hypothesis using a water-filled telescope, finding that, against expectation, the measured aberration was identical to the aberration measured with an air-filled telescope. A "cumbrous" attempt to explain these results used the hypothesis of partial aether-drag, but was incompatible with the results of the Michelson–Morley experiment, which apparently demanded complete aether-drag. Assuming inertial frames, the relativistic expression for the aberration of light is applicable to both the receiver moving and source moving cases. A variety of trigonometrically equivalent formulas have been published. Expressed in terms of the variables in Fig. 5-2, these include $$ \cos \theta ' = \frac{ \cos \theta + v/c}{ 1 + (v/c)\cos \theta} $$   OR   $$ \sin \theta ' = \frac{\sin \theta}{\gamma [ 1 + (v/c) \cos \theta ]} $$   OR   $$ \tan \frac{\theta '}{2} = \left( \frac{c - v}{c + v} \right)^{1/2} \tan \frac {\theta}{2} $$
https://en.wikipedia.org/wiki/Special_relativity
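Since the three published forms are stated to be trigonometrically equivalent, a quick numeric cross-check is easy (the sample angle and speed are arbitrary, not from the passage):

```python
import math

v_over_c = 0.5
theta = 1.2                       # radians, in (0, pi)
gamma = 1 / math.sqrt(1 - v_over_c**2)

cos_tp = (math.cos(theta) + v_over_c) / (1 + v_over_c * math.cos(theta))
sin_tp = math.sin(theta) / (gamma * (1 + v_over_c * math.cos(theta)))
tan_half = math.sqrt((1 - v_over_c) / (1 + v_over_c)) * math.tan(theta / 2)

theta_p = math.acos(cos_tp)       # aberrated angle from the first formula
assert abs(math.sin(theta_p) - sin_tp) < 1e-12
assert abs(math.tan(theta_p / 2) - tan_half) < 1e-12
```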
passage: This quantity is undefined for parabolic and hyperbolic trajectories, as they are non-periodic. - Standard gravitational parameter ($$ \mu $$) — quantity equal to the mass of the central body times the gravitational constant $$ G $$. This quantity is often used instead of mass, as it can be easier to measure with precision than either mass or $$ G $$, and will need to be calculated in any case in order to find the acceleration due to gravity. It is also often not included as part of orbital element lists, as it can be assumed to be known based on the central body. - Mass of the central body ($$ M $$) — the mass of only the central body can be used, as in most cases the mass of the orbiting body is insignificant and does not meaningfully influence the trajectory. However, when this is not the case (e.g. binary stars), the mass of the 2-body system can be used instead. ## Relations between elements This section contains the common relations between the set of orbital elements described above, but more relations can be derived through manipulations of one or more of these equations. The variable names used here are consistent with the ones described above. Mean motion $$ n $$ can be calculated using the standard gravitational parameter $$ \mu $$ and the semi-major axis $$ a $$ of the orbit: $$ n=\sqrt{\mu/a^3} $$ ($$ GM $$ can be substituted for $$ \mu $$). This equation returns the mean motion in radians, and will need to be converted if a different unit is desired.
https://en.wikipedia.org/wiki/Orbital_elements
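For instance, a minimal Python sketch evaluating n = sqrt(μ/a³) for a roughly 400 km Earth orbit (the constants are textbook values chosen for illustration, not taken from the passage):

```python
import math

mu_earth = 3.986004418e14   # standard gravitational parameter of Earth, m^3/s^2
a = 6_778_000.0             # semi-major axis of a ~400 km circular orbit, m

n = math.sqrt(mu_earth / a**3)   # mean motion, rad/s
period = 2 * math.pi / n         # orbital period, s
print(n)                         # ~1.13e-3 rad/s
print(period / 60)               # ~92.6 minutes, typical of low Earth orbit
```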
passage: Holdridge uses the four axes to define 30 so-called "humidity provinces", which are clearly visible in his diagram. While this scheme largely ignores soil and sun exposure, Holdridge acknowledged that these were important. ### Allee (1949) biome-types The principal biome-types by Allee (1949): - Tundra - Taiga - Deciduous forest - Grasslands - Desert - High plateaus - Tropical forest - Minor terrestrial biomes ### Kendeigh (1961) biomes The principal biomes of the world by Kendeigh (1961): - Terrestrial - Temperate deciduous forest - Coniferous forest - Woodland - Chaparral - Tundra - Grassland - Desert - Tropical savanna - Tropical forest - Marine - Oceanic plankton and nekton - Balanoid-gastropod-thallophyte - Pelecypod-annelid - Coral reef ### Whittaker (1962, 1970, 1975) biome-types Whittaker classified biomes using two abiotic factors: precipitation and temperature. His scheme can be seen as a simplification of Holdridge's; more readily accessible, but missing Holdridge's greater specificity. Whittaker based his approach on theoretical assertions and empirical sampling. He had previously compiled a review of biome classifications. #### Key definitions for understanding Whittaker's scheme - Physiognomy: sometimes referring to the plants' appearance; or the biome's apparent characteristics, outward features, or appearance of ecological communities or species – including plants.
https://en.wikipedia.org/wiki/Biome
passage: \end{align} $$ For $$ n = 4 $$: $$ \begin{align} e_1(X_1,X_2,X_3,X_4) &= X_1 + X_2 + X_3 + X_4,\\ e_2(X_1,X_2,X_3,X_4) &= X_1X_2 + X_1X_3 + X_1X_4 + X_2X_3 + X_2X_4 + X_3X_4,\\ e_3(X_1,X_2,X_3,X_4) &= X_1X_2X_3 + X_1X_2X_4 + X_1X_3X_4 + X_2X_3X_4,\\ e_4(X_1,X_2,X_3,X_4) &= X_1X_2X_3X_4.\,\\ \end{align} $$
https://en.wikipedia.org/wiki/Elementary_symmetric_polynomial
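A direct Python evaluation of these polynomials at a sample point, using the defining sum over all k-element subsets (the values 1, 2, 3, 4 are illustrative):

```python
from itertools import combinations
from math import prod

def elementary_symmetric(k, xs):
    """e_k(xs): sum over all k-element subsets of the product of their entries."""
    return sum(prod(c) for c in combinations(xs, k))

xs = (1, 2, 3, 4)
print([elementary_symmetric(k, xs) for k in range(1, 5)])
# [10, 35, 50, 24]: e_1 = 1+2+3+4, ..., e_4 = 1*2*3*4
```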
passage: One can consider the many-body wave function for the composite particles. If all the constituent elementary particles in one composite are simultaneously exchanged with those in another, the resulting sign change of the wave function is determined by the number of fermions within each composite. In such systems, the total spin of the composite particle arises from the quantum mechanical addition of the angular momenta of its constituents: if the number of constituent fermions is even, the composite has integer spin and behaves as a boson with a symmetric wave function; if the number is odd, the spin is half-integer and the composite behaves as a fermion with an antisymmetric wave function. Hadrons are composite subatomic particles made of quarks bound together by the strong interaction. Quarks are fermions with spin of 1/2. Hadrons fall into two main categories: baryons, which consist of an odd number of quarks (typically three), and mesons, which consist of an even number of quarks (typically a quark and an antiquark). Baryons, such as protons and neutrons, are fermions due to their odd number of constituent quarks. Mesons, like pions, are bosons because they contain an even number of quarks. The effect that quantum statistics have on composite particles is evident in the superfluid properties of the two helium isotopes, helium-3 and helium-4.
https://en.wikipedia.org/wiki/Spin%E2%80%93statistics_theorem
passage: φ is the axial coordinate in a spherical coordinate system on Sn−1. The end result of such a procedure is $$ Y_{\ell_1, \dots \ell_{n-1}} (\theta_1, \dots \theta_{n-1}) = \frac{1}{\sqrt{2\pi}} e^{i \ell_1 \theta_1} \prod_{j = 2}^{n-1} {}_j \bar{P}^{\ell_{j-1}}_{\ell_j} (\theta_j) $$ where the indices satisfy $$ |\ell_1| \leq \ell_2 \leq \cdots \leq \ell_{n-1} $$ and the eigenvalue is $$ -\ell_{n-1}(\ell_{n-1} + n - 2) $$. The functions in the product are defined in terms of the Legendre function $$ {}_j \bar{P}^\ell_{L} (\theta) = \sqrt{\frac{2L+j-1}{2} \frac{(L+\ell+j-2)!}{(L-\ell)!}} \sin^{\frac{2-j}{2}} (\theta) P^{-\left(\ell + \frac{j-2}{2}\right)}_{L+\frac{j-2}{2}} (\cos \theta) \,. $$ ## Connection with representation theory The space of spherical harmonics of degree $$ \ell $$ is a representation of the symmetry group of rotations around a point (SO(3)) and its double-cover SU(2).
https://en.wikipedia.org/wiki/Spherical_harmonics
passage: 3. Repeat steps 1 and 2 until all of the data is in sorted 100 MB chunks (there are 900 MB / 100 MB = 9 chunks), which now need to be merged into one single output file. 4. Read the first 10 MB (= 100 MB / (9 chunks + 1)) of each sorted chunk into input buffers in main memory and allocate the remaining 10 MB for an output buffer. (In practice, it might provide better performance to make the output buffer larger and the input buffers slightly smaller.) 5. Perform a 9-way merge and store the result in the output buffer. Whenever the output buffer fills, write it to the final sorted file and empty it. Whenever any of the 9 input buffers empties, fill it with the next 10 MB of its associated 100 MB sorted chunk until no more data from the chunk is available. The merge pass is key to making external merge sort work externally. The merge algorithm only makes one pass through each chunk, so chunks do not have to be loaded all at once; rather, sequential parts of the chunk are loaded as needed. And as long as the blocks read are relatively large (like the 10 MB in this example), the reads can be relatively efficient even on media with low random-read performance, like hard drives. Historically, instead of a sort, sometimes a replacement-selection algorithm was used to perform the initial distribution, to produce on average half as many output chunks of double the length. ### Additional passes The previous example is a two-pass sort: first sort, then merge.
https://en.wikipedia.org/wiki/External_sorting
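The structure of the merge pass can be sketched with Python's heapq.merge, which performs exactly this kind of k-way merge of sorted runs; the buffering below is a schematic stand-in for the 10 MB input buffers, with toy sizes throughout:

```python
import heapq

def k_way_merge(chunks, buffer_size):
    """Merge pre-sorted chunks, consuming each in buffer_size pieces to
    mimic the bounded input buffers of an external merge pass."""
    def buffered(chunk):
        for start in range(0, len(chunk), buffer_size):
            yield from chunk[start:start + buffer_size]   # one buffer refill
    return list(heapq.merge(*(buffered(c) for c in chunks)))

chunks = [sorted([9, 3, 7]), sorted([1, 8, 2]), sorted([6, 4, 5])]
print(k_way_merge(chunks, buffer_size=2))   # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```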
passage: The rationality of the zeta function follows immediately. The functional equation for the zeta function follows from Poincaré duality for ℓ-adic cohomology, and the relation with complex Betti numbers of a lift follows from a comparison theorem between ℓ-adic and ordinary cohomology for complex varieties. More generally, Grothendieck proved a similar formula for the zeta function (or "generalized L-function") of a sheaf F0: $$ Z(X_0, F_0, t) = \prod_{x\in |X_0|}\det(1-F^*_xt^{\deg(x)}\mid F_0)^{-1} $$ as a product over cohomology groups: $$ Z(X_0, F_0, t) = \prod_i \det(1-F^* t\mid H^i_c(F))^{(-1)^{i+1}} $$ The special case of the constant sheaf gives the usual zeta function. ## Deligne's first proof of the Riemann hypothesis conjecture Several authors gave expository accounts of the first proof, and much of the background in ℓ-adic cohomology is described in the literature. Deligne's first proof of the remaining third Weil conjecture (the "Riemann hypothesis conjecture") used the following steps:
https://en.wikipedia.org/wiki/Weil_conjectures
passage: Global air pollution deaths due to fossil fuels have been estimated at over 8 million people in 2018 (nearly 1 in 5 deaths worldwide), at 10.2 million in 2019, and at 5.13 million excess deaths from ambient air pollution from fossil fuel use in 2023. While all energy sources inherently have adverse effects, the data show that fossil fuels cause the highest levels of greenhouse gas emissions and are the most dangerous for human health. In contrast, modern renewable energy sources appear to be safer for human health and cleaner. The death rates from accidents and air pollution in the EU are as follows per terawatt-hour (TWh):

| Energy source | Deaths per TWh | Greenhouse gas emissions (thousand tonnes/TWh) |
|---|---|---|
| Coal | 24.6 | 820 |
| Oil | 18.4 | 720 |
| Natural gas | 2.8 | 490 |
| Biomass | 4.6 | 78–230 |
| Hydropower | 0.02 | 34 |
| Nuclear energy | 0.07 | 3 |
| Wind | 0.04 | 4 |
| Solar | 0.02 | 5 |

As the data shows, coal, oil, natural gas, and biomass cause higher death rates and higher levels of greenhouse gas emissions than hydropower, nuclear energy, wind, and solar power. Scientists propose that 1.8 million lives have been saved by replacing fossil fuel sources with nuclear power.

## Phase-out
### Just transition
### Divestment

## Industrial sector
In 2019, Saudi Aramco was listed and it reached a US$2 trillion valuation on its second day of trading, after the world's largest initial public offering.
### Subsidies
### Lobbying activities
https://en.wikipedia.org/wiki/Fossil_fuel
passage: Wozniak called the system "my most incredible experience at Apple and the finest job I did". Later, the design of the floppy drive controller was modified to allow a byte on disk to contain up to one pair of zero bits in a row. This allowed each eight-bit byte to hold six bits of useful data, and allowed 16 sectors per track. This scheme is known as 6-and-2 encoding, and was used on Apple Pascal, Apple DOS 3.3 and ProDOS, and later with Apple FileWare drives in the Apple Lisa and the 400K and 800K 3½-inch disks on the Macintosh and Apple II. Apple did not originally call this scheme "GCR", but the term was later applied to it to distinguish it from IBM PC floppies which used the MFM encoding scheme.

6-and-2 encoding table (excerpt):

| 6-bit value (hex) | 6-bit value (bin) | GCR code (bin) | GCR code (hex) |
|---|---|---|---|
| 0x00 | 00.0000 | 1001.0110 | 0x96 |
| 0x01 | 00.0001 | 1001.0111 | 0x97 |
| 0x02 | 00.0010 | 1001.1010 | 0x9A |
| 0x03 | 00.0011 | 1001.1011 | 0x9B |
| 0x04 | 00.0100 | 1001.1101 | 0x9D |
| 0x05 | 00.0101 | 1001.1110 | 0x9E |
| 0x06 | 00.0110 | 1001.1111 | 0x9F |
https://en.wikipedia.org/wiki/Group_coded_recording
passage: ### Comparison with classical GFSR In order to achieve the $$ 2^{nw-r}-1 $$ theoretical upper limit of the period in a TGFSR, $$ \phi_{B}(t) $$ must be a primitive polynomial, $$ \phi_{B}(t) $$ being the characteristic polynomial of $$ B = \begin{pmatrix} 0 & I_w & \cdots & 0 & 0 \\ \vdots & & & & \\ I_w & \vdots & \ddots & \vdots & \vdots \\ \vdots & & & & \\ 0 & 0 & \cdots & I_w & 0 \\ 0 & 0 & \cdots & 0 & I_{w - r} \\ S & 0 & \cdots & 0 & 0 \end{pmatrix} \begin{matrix} \\ \\ \leftarrow m\text{-th row} \\ \\ \\ \\ \end{matrix} $$ $$ S = \begin{pmatrix} 0 & I_r \\ I_{w - r} & 0 \end{pmatrix} A $$ The twist transformation improves the classical GFSR with the following key properties: - The period reaches the theoretical upper limit $$ 2^{nw-r}-1 $$ (except if initialized with 0) - Equidistribution in n dimensions (e.g. linear congruential generators can at best manage reasonable distribution in five dimensions) ## Variants CryptMT is a stream cipher and cryptographically secure pseudorandom number generator which uses Mersenne Twister internally.
https://en.wikipedia.org/wiki/Mersenne_Twister
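For illustration, here is the MT19937 twist written out in Python with the standard published constants (w = 32, n = 624, m = 397, r = 31, a = 0x9908B0DF); this sketches the recurrence that the block matrix above encodes, and omits the tempering stage, so it is not a complete validated generator:

```python
N, M = 624, 397
MATRIX_A = 0x9908B0DF                    # the matrix A packed into a row vector
UPPER, LOWER = 0x80000000, 0x7FFFFFFF    # masks splitting off the top r = 31 bits

def twist(mt):
    """Apply the twist transformation in place to a 624-word state."""
    for i in range(N):
        y = (mt[i] & UPPER) | (mt[(i + 1) % N] & LOWER)
        mt[i] = mt[(i + M) % N] ^ (y >> 1)
        if y & 1:              # multiplying by A reduces to a conditional XOR
            mt[i] ^= MATRIX_A

# state initialisation from a 32-bit seed, per the reference recurrence
mt = [5489]
for i in range(1, N):
    mt.append((1812433253 * (mt[-1] ^ (mt[-1] >> 30)) + i) & 0xFFFFFFFF)
twist(mt)
print(hex(mt[0]))   # first twisted (untempered) state word
```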
passage: as the sample size n grows. Theorem (Donsker, Skorokhod, Kolmogorov) The sequence of Gn(x), as random elements of the Skorokhod space $$ \mathcal{D}(-\infty,\infty) $$ , converges in distribution to a Gaussian process G with zero mean and covariance given by $$ \operatorname{cov}[G(s), G(t)] = E[G(s) G(t)] = \min\{F(s), F(t)\} - F(s)F(t). $$ The process G(x) can be written as B(F(x)) where B is a standard Brownian bridge on the unit interval. ### Proof sketch For continuous probability distributions, it reduces to the case where the distribution is uniform on $$ [0, 1] $$ by the inverse transform. Given any finite sequence of times $$ 0 < t_1 < t_2 < \dots < t_n < 1 $$ , we have that $$ N F_N(t_1) $$ is distributed as a binomial distribution with mean $$ Nt_1 $$ and variance $$ Nt_1(1-t_1) $$ . Similarly, the joint distribution of $$ F_N(t_1), F_N(t_2), \dots, F_N(t_n) $$ is a multinomial distribution.
https://en.wikipedia.org/wiki/Donsker%27s_theorem
passage: $$ as the Lie algebra of left-invariant vector fields on G, the bracket on $$ \mathfrak g $$ is given as: for left-invariant vector fields X, Y, $$ [X, Y] = \lim_{t \to 0} {1 \over t}(d \varphi_{-t}(Y) - Y) $$ where $$ \varphi_t: G \to G $$ denotes the flow generated by X. As it turns out, $$ \varphi_t(g) = g\varphi_t(e) $$ , roughly because both sides satisfy the same ODE defining the flow.
https://en.wikipedia.org/wiki/Adjoint_representation
passage: The number of solutions for small values of $$ k $$, starting with $$ k=5 $$, forms the sequence 2, 5, 18, 96. Presently, a few solutions are known for $$ k=9 $$ and $$ k=10 $$, but it is unclear how many solutions remain undiscovered for those values of $$ k $$. However, there are infinitely many solutions if $$ k $$ is not fixed: it has been shown that there are at least 39 solutions for each $$ k\ge 12 $$, improving earlier results proving the existence of fewer solutions; it has been conjectured that the number of solutions for each value of $$ k $$ grows monotonically with $$ k $$. It is unknown whether there are any solutions to Znám's problem using only odd numbers. With one exception, all known solutions start with 2. If all numbers in a solution to Znám's problem or the improper Znám problem are prime, their product is a primary pseudoperfect number; it is unknown whether infinitely many solutions of this type exist.
https://en.wikipedia.org/wiki/Zn%C3%A1m%27s_problem
passage: From the point of view of algebra, the real numbers form a field, which ensures the validity of the distributive law. ### Matrices The distributive law is valid for matrix multiplication. More precisely, $$ (A + B) \cdot C = A \cdot C + B \cdot C $$ for all $$ l \times m $$ -matrices $$ A, B $$ and $$ m \times n $$ -matrices $$ C, $$ as well as $$ A \cdot (B + C) = A \cdot B + A \cdot C $$ for all $$ l \times m $$ -matrices $$ A $$ and $$ m \times n $$ -matrices $$ B, C. $$ Because the commutative property does not hold for matrix multiplication, the second law does not follow from the first law. In this case, they are two different laws. ### Other examples - Multiplication of ordinal numbers, in contrast, is only left-distributive, not right-distributive. - The cross product is left- and right-distributive over vector addition, though not commutative. - The union of sets is distributive over intersection, and intersection is distributive over union. - Logical disjunction ("or") is distributive over logical conjunction ("and"), and vice versa. -
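Both matrix distributive laws, and the failure of commutativity that keeps them independent, are easy to demonstrate numerically; the shapes below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
A, B = rng.integers(-5, 5, (2, 3)), rng.integers(-5, 5, (2, 3))
C = rng.integers(-5, 5, (3, 4))
# Right distributivity: (A + B)C = AC + BC
print(np.array_equal((A + B) @ C, A @ C + B @ C))  # True

D = rng.integers(-5, 5, (2, 2))
E, F = rng.integers(-5, 5, (2, 2)), rng.integers(-5, 5, (2, 2))
# Left distributivity: D(E + F) = DE + DF
print(np.array_equal(D @ (E + F), D @ E + D @ F))  # True
# But multiplication is not commutative, so neither law implies the other:
print(np.array_equal(D @ E, E @ D))                # generally False
```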
https://en.wikipedia.org/wiki/Distributive_property
passage: Likewise instead of $$ - \frac{\zeta '(s)}{\zeta(s)} $$ the function $$ \Phi(s) = \sum_{p} \log p\,\, p^{-s} $$ is used, which is obtained by dropping some terms in the series for $$ - \frac{\zeta '(s)}{\zeta(s)} $$ . The functions $$ \Phi(s) $$ and $$ -\zeta'(s)/\zeta(s) $$ differ by a function holomorphic on $$ \Re s = 1 $$ . Since, as was shown in the previous section, $$ \zeta(s) $$ has no zeroes on the line $$ \Re s = 1 $$ , $$ \Phi(s) - \frac 1{s-1} $$ has no singularities on $$ \Re s = 1 $$ . One further piece of information needed in Newman's proof, and which is the key to the estimates in his simple method, is that $$ \vartheta(x)/x $$ is bounded. This is proved using an ingenious and easy method due to Chebyshev. Integration by parts shows how $$ \vartheta(x) $$ and $$ \Phi(s) $$ are related.
https://en.wikipedia.org/wiki/Prime_number_theorem
passage: Birds and reptiles have relatively few skin glands, although there may be a few structures for specific purposes, such as pheromone-secreting cells in some reptiles, or the uropygial gland of most birds. ## Development Cutaneous structures arise from the epidermis and include a variety of features such as hair, feathers, claws and nails. During embryogenesis, the epidermis splits into two layers: the periderm (which is lost) and the basal layer. The basal layer is a stem cell layer and through asymmetrical divisions, becomes the source of skin cells throughout life. It is maintained as a stem cell layer through an autocrine signal, TGF alpha, and through paracrine signaling from FGF7 (keratinocyte growth factor) produced by the dermis below the basal cells. In mice, over-expression of these factors leads to an overproduction of granular cells and thick skin. It is believed that the mesoderm defines the pattern. The epidermis instructs the mesodermal cells to condense and then the mesoderm instructs the epidermis of what structure to make through a series of reciprocal inductions. Transplantation experiments involving frog and newt epidermis indicated that the mesodermal signals are conserved between species but the epidermal response is species-specific meaning that the mesoderm instructs the epidermis of its position and the epidermis uses this information to make a specific structure. ## Functions Skin performs the following functions: 1.
https://en.wikipedia.org/wiki/Skin
passage: Perpendicular bisectors are drawn to the line joining any two stations. This results in the formation of polygons around the stations. The area $$ (A_i) $$ touching station point is known as influence area of the station. The average precipitation is calculated by the formula $$ \bar{P}=\frac{\sum A_i P_i}{\sum A_i} $$ ### Humanities and social sciences - In classical archaeology, specifically art history, the symmetry of statue heads is analyzed to determine the type of statue a severed head may have belonged to. An example of this that made use of Voronoi cells was the identification of the Sabouroff head, which made use of a high-resolution polygon mesh. - In dialectometry, Voronoi cells are used to indicate a supposed linguistic continuity between survey points. - In political science, Voronoi diagrams have been used to study multi-dimensional, multi-party competition. ### Natural sciences - In biology, Voronoi diagrams are used to model a number of different biological structures, including cells and bone microarchitecture. Indeed, Voronoi tessellations work as a geometrical tool to understand the physical constraints that drive the organization of biological tissues. - In hydrology, Voronoi diagrams are used to calculate the rainfall of an area, based on a series of point measurements. In this usage, they are generally referred to as Thiessen polygons. -
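The Thiessen-polygon average at the start of this passage is just an area-weighted mean, as the short sketch below shows; the station areas and precipitation values are made up for illustration.

```python
# P_bar = sum(A_i * P_i) / sum(A_i), with A_i the influence area of station i.
areas = [12.5, 8.0, 20.3, 15.2]    # influence areas (km^2), illustrative
precip = [30.0, 45.0, 25.0, 38.0]  # precipitation at each station (mm), illustrative

p_bar = sum(a * p for a, p in zip(areas, precip)) / sum(areas)
print(f"area-weighted mean precipitation: {p_bar:.2f} mm")
```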
https://en.wikipedia.org/wiki/Voronoi_diagram
passage: Other inverse special functions are sometimes prefixed with the prefix "inv", if the ambiguity of the notation should be avoided. ## Examples ### Squaring and square root functions The function given by $$ f(x)=x^2 $$ is not injective because $$ (-x)^2=x^2 $$ for all $$ x\in\R $$ . Therefore, $$ f $$ is not invertible. If the domain of the function is restricted to the nonnegative reals, that is, we take the function $$ f\colon [0,\infty)\to [0,\infty);\ x\mapsto x^2 $$ with the same rule as before, then the function is bijective and so, invertible. The inverse function here is called the (positive) square root function and is denoted by $$ x\mapsto\sqrt x $$ . ### Standard inverse functions The following table shows several standard functions and their inverses:

+Inverse arithmetic functions

| Function | Inverse | Notes |
|---|---|---|
| trigonometric functions | inverse trigonometric functions | various restrictions (see table below) |
| hyperbolic functions | inverse hyperbolic functions | various restrictions |

### Formula for the inverse Many functions given by algebraic formulas possess a formula for their inverse.
https://en.wikipedia.org/wiki/Inverse_function
passage: Applications of cryptography include ATM cards, computer passwords, and electronic commerce. Cryptography prior to the modern age was effectively synonymous with encryption, the conversion of information from a readable state to apparent nonsense. The originator of an encrypted message shared the decoding technique needed to recover the original information only with intended recipients, thereby precluding unwanted persons from doing the same. Since World War I and the advent of the computer, the methods used to carry out cryptology have become increasingly complex and its application more widespread. Modern cryptography is heavily based on mathematical theory and computer science practice; cryptographic algorithms are designed around computational hardness assumptions, making such algorithms hard to break in practice by any adversary. It is theoretically possible to break such a system, but it is infeasible to do so by any known practical means. These schemes are therefore termed computationally secure; theoretical advances, e.g., improvements in integer factorization algorithms, and faster computing technology require these solutions to be continually adapted. There exist information-theoretically secure schemes that cannot be broken even with unlimited computing power—an example is the one-time pad—but these schemes are more difficult to implement than the best theoretically breakable but computationally secure mechanisms.

Line coding

A line code (also called digital baseband modulation or digital baseband transmission method) is a code chosen for use within a communications system for baseband transmission purposes. Line coding is often used for digital data transport.
https://en.wikipedia.org/wiki/Coding_theory
passage: Applying l'Hôpital's rule a single time still results in an indeterminate form. In this case, the limit may be evaluated by applying the rule three times: $$ \begin{align} \lim_{x\to 0}{\frac{2\sin(x)-\sin(2x)}{x-\sin(x)}} & \ \stackrel{\mathrm{H}}{=}\ \lim_{x\to 0}{\frac{2\cos(x)-2\cos(2x)}{1-\cos(x)}} \\[4pt] & \ \stackrel{\mathrm{H}}{=}\ \lim_{x\to 0}{\frac{-2\sin(x)+4\sin(2x)}{\sin(x)}} \\[4pt] & \ \stackrel{\mathrm{H}}{=}\ \lim_{x\to 0}{\frac{-2\cos(x)+8\cos(2x)}{\cos(x)}} ={\frac{-2+8}{1}} =6. \end{align} $$ - Here is an example involving : $$ \lim_{x\to\infty}x^n\cdot e^{-x} =\lim_{x\to\infty}{\frac{x^n}{e^x}} \ \stackrel{\mathrm{H}}{=}\ \lim_{x\to\infty}{\frac{nx^{n-1}}{e^x}} =n\cdot \lim_{x\to\infty}{\frac{x^{n-1}}{e^x}}. $$ Repeatedly apply l'Hôpital's rule until the exponent is zero (if is an integer) or negative (if is fractional) to conclude that the limit is zero.
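Both limits computed above are easy to confirm with a computer algebra system; the sketch below uses SymPy (an assumed tool choice, not one the passage prescribes), with the exponent fixed at n = 5 for the second limit.

```python
import sympy as sp

x = sp.symbols('x')

# Triple application of l'Hopital's rule gave 6:
print(sp.limit((2*sp.sin(x) - sp.sin(2*x)) / (x - sp.sin(x)), x, 0))  # 6

# Repeated application drives x^n * e^(-x) to 0 as x -> oo (here n = 5):
print(sp.limit(x**5 * sp.exp(-x), x, sp.oo))  # 0
```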
https://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule
passage: This gives the vector potential for a plane wave mode of the field. The condition for shows that there are infinitely many such modes. The linearity of Maxwell's equations allows us to write: $$ \mathbf{A}(\mathbf{r}t)=\sum_{\mathbf{k}\lambda}\sqrt{\frac{2\pi\hbar c^2}{\omega_k V}}\left[a_{\mathbf{k}\lambda}(0)e^{i\mathbf{k}\cdot\mathbf{r}}+a_{\mathbf{k}\lambda}^\dagger(0)e^{-i\mathbf{k}\cdot\mathbf{r}}\right]e_{\mathbf{k}\lambda} $$ for the total vector potential in free space.
https://en.wikipedia.org/wiki/Zero-point_energy
passage: Ishikawa diagrams (also called fishbone diagrams, herringbone diagrams, cause-and-effect diagrams) are causal diagrams created by Kaoru Ishikawa that show the potential causes of a specific event. Common uses of the Ishikawa diagram are product design and quality defect prevention to identify potential factors causing an overall effect. Each cause or reason for imperfection is a source of variation. Causes are usually grouped into major categories to identify and classify these sources of variation. ## Overview The defect, or the problem to be solved, is shown as the fish's head, facing to the right, with the causes extending to the left as fishbones; the ribs branch off the backbone for major causes, with sub-branches for root-causes, to as many levels as required. Ishikawa diagrams were popularized in the 1960s by Kaoru Ishikawa, who pioneered quality management processes in the Kawasaki shipyards, and in the process became one of the founding fathers of modern management. The basic concept was first used in the 1920s, and is considered one of the seven basic tools of quality control. It is known as a fishbone diagram because of its shape, similar to the side view of a fish skeleton. Mazda Motors famously used an Ishikawa diagram in the development of the Miata (MX5) sports car. ## Advantages of the Ishikawa Diagram 1. Visual and easy to understand: its fishbone-like structure allows for a clear and organized graphical representation of the causes of a problem. This makes it easy to understand even for people without technical experience. 2. Encourages teamwork
https://en.wikipedia.org/wiki/Ishikawa_diagram
passage: Gouesbet and Letellier used a multivariate polynomial approximation and least squares to reconstruct their vector field. This method was applied to the Rössler system and the Lorenz system, as well as to thermal lens oscillations. The Rössler system, the Lorenz system, and thermal lens oscillations follow the differential equations in the standard system as X'=Y, Y'=Z and Z'=F(X,Y,Z), where F(X,Y,Z) is known as the standard function. ## Implementation issues In some situations the model is not very efficient and difficulties can arise if the model has a large number of coefficients and demonstrates a divergent solution. For example, nonautonomous differential equations give the previously described results. In this case, the modification of the standard approach in application gives a better way of further development of global vector reconstruction. Usually, the system being modelled in this way is a chaotic dynamical system, because chaotic systems explore a large part of the phase space and the estimate of the global dynamics based on the local dynamics will be better than with a system exploring only a small part of the space. Frequently, one has only a single scalar time series measurement from a system known to have more than one degree of freedom. The time series may not even be from a system variable, but may instead be a function of all the variables, such as temperature in a stirred tank reactor using several chemical species.
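A minimal sketch of the polynomial least-squares step: sample states (X, Y, Z) of the standard system together with the observed derivative Z' = F(X, Y, Z), build monomial features, and solve for the coefficients. The "true" F below is a toy polynomial chosen for illustration, not one of the systems named in the passage, and in practice Z' would come from numerically differentiating a measured time series.

```python
import numpy as np

rng = np.random.default_rng(42)

def F_true(X, Y, Z):
    return 1.0 - 2.0 * X - 0.3 * Z + 0.5 * X * Y  # toy standard function

# Sampled states and the corresponding Z' values.
X, Y, Z = rng.uniform(-1, 1, (3, 500))
Zdot = F_true(X, Y, Z)

# Monomial feature matrix: [1, X, Y, Z, XY, XZ, YZ].
Phi = np.column_stack([np.ones_like(X), X, Y, Z, X*Y, X*Z, Y*Z])
coef, *_ = np.linalg.lstsq(Phi, Zdot, rcond=None)
print(np.round(coef, 3))  # recovers [1, -2, 0, -0.3, 0.5, 0, 0]
```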
https://en.wikipedia.org/wiki/Vector_field_reconstruction
passage: In this case it is straightforward to prove that: - Jx is also a bounded element, denoted x*, and λ(x*) = λ(x)*; - a → ax is given by the bounded operator ρ(x) = Jλ(x*)J on H; - M ' is generated by the ρ(x)'s with x bounded; - λ(x) and ρ(y) commute for x, y bounded. The commutation theorem follows immediately from the last assertion. In particular $$ M = \lambda(\mathfrak{B})''. $$ The space of all bounded elements $$ \mathfrak{B} $$ forms a Hilbert algebra containing $$ \mathfrak{A} $$ as a dense *-subalgebra. It is said to be completed or full because any element in H bounded relative to $$ \mathfrak{B} $$ must actually already lie in $$ \mathfrak{B} $$ . The functional τ on M+ defined by $$ \tau(x) = (a,a) $$ if x = λ(a)*λ(a) and ∞ otherwise, yields a faithful semifinite trace on M with $$ M_0 = \mathfrak{B}. $$ Thus: there is a one-one correspondence between von Neumann algebras on H with faithful semifinite trace and full Hilbert algebras with Hilbert space completion H.
https://en.wikipedia.org/wiki/Commutation_theorem_for_traces
passage: ## Overview Ordinary electrical cables suffice to carry low frequency alternating current (AC), such as mains power, which reverses direction 100 to 120 times per second, and audio signals. However, they are not generally used to carry currents in the radio frequency range, above about 30 kHz, because the energy tends to radiate off the cable as radio waves, causing power losses. Radio frequency currents also tend to reflect from discontinuities in the cable such as connectors and joints, and travel back down the cable toward the source. These reflections act as bottlenecks, preventing the signal power from reaching the destination. Transmission lines use specialized construction, and impedance matching, to carry electromagnetic signals with minimal reflections and power losses. The distinguishing feature of most transmission lines is that they have uniform cross sectional dimensions along their length, giving them a uniform impedance, called the characteristic impedance, to prevent reflections. Types of transmission line include parallel line (ladder line, twisted pair), coaxial cable, and planar transmission lines such as stripline and microstrip. The higher the frequency of electromagnetic waves moving through a given cable or medium, the shorter the wavelength of the waves. Transmission lines become necessary when the transmitted frequency's wavelength is sufficiently short that the length of the cable becomes a significant part of a wavelength. At frequencies of microwave and higher, power losses in transmission lines become excessive, and waveguides are used instead, which function as "pipes" to confine and guide the electromagnetic waves.
https://en.wikipedia.org/wiki/Transmission_line
passage: In mathematics, computable numbers are the real numbers that can be computed to within any desired precision by a finite, terminating algorithm. They are also known as the recursive numbers, effective numbers, computable reals, or recursive reals. The concept of a computable real number was introduced by Émile Borel in 1912, using the intuitive notion of computability available at the time. Equivalent definitions can be given using μ-recursive functions, Turing machines, or λ-calculus as the formal representation of algorithms. The computable numbers form a real closed field and can be used in the place of real numbers for many, but not all, mathematical purposes. ## Informal definition In the following, Marvin Minsky defines the numbers to be computed in a manner similar to those defined by Alan Turing in 1936; i.e., as "sequences of digits interpreted as decimal fractions" between 0 and 1: The key notions in the definition are (1) that some n is specified at the start, (2) for any n the computation only takes a finite number of steps, after which the machine produces the desired output and terminates. An alternate form of (2) – the machine successively prints all n of the digits on its tape, halting after printing the nth – emphasizes Minsky's observation: (3) That by use of a Turing machine, a finite definition – in the form of the machine's state table – is being used to define what is a potentially infinite string of decimal digits. This is however not the modern definition which only requires the result be accurate to within any given accuracy.
https://en.wikipedia.org/wiki/Computable_number
passage: In this case, $$ \hat{g}(a) = 2^{n/2}(-1)^{f(a)} $$ , so f and g are considered dual functions. Every bent function has a Hamming weight (number of times it takes the value 1) of $$ 2^{n-1} \pm 2^{n/2-1} $$ , and in fact agrees with any affine function at one of those two numbers of points. So the nonlinearity of f (minimum number of times it equals any affine function) is $$ 2^{n-1} - 2^{n/2-1} $$ , the maximum possible. Conversely, any Boolean function with nonlinearity $$ 2^{n-1} - 2^{n/2-1} $$ is bent. The degree of f in algebraic normal form (called the nonlinear order of f) is at most $$ n/2 $$ (for $$ n > 2 $$ ). Although bent functions are vanishingly rare among Boolean functions of many variables, they come in many different kinds. There has been detailed research into special classes of bent functions, such as the homogeneous ones or those arising from a monomial over a finite field, but so far the bent functions have defied all attempts at a complete enumeration or classification. ## Constructions There are several types of constructions for bent functions. - Combinatorial constructions: iterative constructions, Maiorana–McFarland construction, partial spreads, Dillon's and Dobbertin's bent functions, minterm bent functions, bent iterative functions - Algebraic constructions: monomial bent functions with exponents of Gold, Dillon, Kasami, Canteaut–Leander and Canteaut–Charpin–Kuyreghyan; Niho bent functions, etc.
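Bentness is easy to test numerically: f is bent exactly when every Walsh–Hadamard coefficient of $$ (-1)^{f} $$ has absolute value $$ 2^{n/2} $$ . The sketch below checks this for the two-variable inner product $$ f(x_1,x_2)=x_1x_2 $$ , the smallest bent function; the implementation details are illustrative choices.

```python
def walsh_hadamard(signs):
    """Fast Walsh-Hadamard transform of a +/-1 sequence (length a power of 2)."""
    a, h = list(signs), 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

n = 2
f = lambda x: (x & 1) & ((x >> 1) & 1)  # f(x1, x2) = x1 * x2
spectrum = walsh_hadamard([(-1) ** f(x) for x in range(2 ** n)])

print(spectrum)                                        # [2, 2, 2, -2]
print(all(abs(c) == 2 ** (n // 2) for c in spectrum))  # True: f is bent
```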
https://en.wikipedia.org/wiki/Bent_function
passage: Commercial production of niobium–titanium supermagnet wire immediately commenced at Westinghouse Electric Corporation and at Wah Chang Corporation. Although niobium–titanium boasts less-impressive superconducting properties than those of niobium–tin, niobium–titanium became the most widely used "workhorse" supermagnet material, in large measure a consequence of its very high ductility and ease of fabrication. However, both niobium–tin and niobium–titanium found wide application in MRI medical imagers, bending and focusing magnets for enormous high-energy-particle accelerators, and other applications. Conectus, a European superconductivity consortium, estimated that in 2014, global economic activity for which superconductivity was indispensable amounted to about five billion euros, with MRI systems accounting for about 80% of that total. ### Josephson effect In 1962, Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator. This phenomenon, now called the Josephson effect, is exploited by superconducting devices such as SQUIDs. It is used in the most accurate available measurements of the magnetic flux quantum Φ0 = h/(2e), where h is the Planck constant. Coupled with the quantum Hall resistivity, this leads to a precise measurement of the Planck constant. Josephson was awarded the Nobel Prize for this work in 1973.
https://en.wikipedia.org/wiki/Superconductivity
passage: Although torque (N·m) and energy (J) are dimensionally equivalent, torques are never expressed in units of energy. In the CGS system, there are several different sets of electromagnetism units, of which the main ones are ESU, Gaussian, and EMU. Among these, there are two alternative (non-equivalent) units of magnetic dipole moment: $$ 1 \; \mathrm{statA {\cdot} {cm}^2} \equiv 3.33564095 \times 10^{-14} \mathrm{A {\cdot} m^2} ~~ \text{ (ESU)} $$ $$ 1 \; \mathrm{\frac{erg}{G}} \equiv 10^{-3} \mathrm{A {\cdot} m^2} ~~ \text{ (Gaussian and EMU),} $$ where statA is statamperes, cm is centimeters, erg is ergs, and G is gauss. The ratio of these two non-equivalent CGS units (EMU/ESU) is equal to the speed of light in free space, expressed in cm⋅s−1. All formulae in this article are correct in SI units; they may need to be changed for use in other unit systems. For example, in SI units, a loop of current with current and area has magnetic moment (see below), but in Gaussian units the magnetic moment is . Other units for measuring the magnetic dipole moment include the Bohr magneton and the nuclear magneton.
https://en.wikipedia.org/wiki/Magnetic_moment
passage: Functions have access to the scope they were created in:

```r
a <- 1
f <- function() {
  message(a)
}
f()
# 1
```

Variables created or modified within a function stay there:

```r
a <- 1
f <- function() {
  message(a)
  a <- 2
  message(a)
}
f()
# 1
# 2
message(a)
# 1
```

Variables created or modified within a function stay there unless assignment to the enclosing scope is explicitly requested:

```r
a <- 1
f <- function() {
  message(a)
  a <<- 2
  message(a)
}
f()
# 1
# 2
message(a)
# 2
```

Although R has lexical scope by default, function scopes can be changed:

```r
a <- 1
f <- function() {
  message(a)
}
my_env <- new.env()
my_env$a <- 2
f()
# 1
environment(f) <- my_env
f()
# 2
```
https://en.wikipedia.org/wiki/Scope_%28computer_science%29
passage: ### Chronic Proprioception, a sense vital for rapid and proper body coordination, can be permanently lost or impaired as a result of genetic conditions, disease, viral infections, and injuries. For instance, patients with joint hypermobility or Ehlers–Danlos syndromes, genetic conditions that result in weak connective tissue throughout the body, have chronic impairments to proprioception. Autism spectrum disorder and Parkinson's disease can also cause chronic disorder of proprioception. In regards to Parkinson's disease, it remains unclear whether the proprioceptive-related decline in motor function occurs due to disrupted proprioceptors in the periphery or signaling in the spinal cord or brain. In rare cases, viral infections result in a loss of proprioception. Ian Waterman and Charles Freed are two such people that lost their sense of proprioception from the neck down from supposed viral infections (i.e. gastric flu and a rare viral infection). After losing their sense of proprioception, Ian and Charles could move their lower body, but could not coordinate their movements. However, both individuals regained some control of their limbs and body by consciously planning their movements and relying solely on visual feedback. Interestingly, both individuals can still sense pain and temperature, indicating that they specifically lost proprioceptive feedback, but not tactile and nociceptive feedback. The impact of losing the sense of proprioception on daily life is perfectly illustrated when Ian Waterman stated, "What is an active brain without mobility". Proprioception is also permanently lost in people who lose a limb or body part through injury or amputation.
https://en.wikipedia.org/wiki/Proprioception
passage: This formula is often called Stevin's law. One could also arrive at the above formula by considering the first particular case of the equation for a conservative body force field: in fact the body force field of uniform intensity and direction: $$ \rho \mathbf{g}(x,y,z) = - \rho g \hat k $$ is conservative, so one can write the body force density as: $$ \rho \mathbf{g} = \nabla (- \rho g z) $$ Then the body force density has a simple scalar potential: $$ \phi(z) = - \rho g z $$ And the pressure difference again follows Stevin's law: $$ \Delta p = - \Delta \phi = \rho g \Delta z $$ The reference point should lie at or below the surface of the liquid. Otherwise, one has to split the integral into two (or more) terms with the constant and . For example, the absolute pressure compared to vacuum is $$ p = \rho g \Delta z + p_\mathrm{0}, $$ where $$ \Delta z $$ is the total height of the liquid column above the test area to the surface, and $$ p_\mathrm{0} $$ is the atmospheric pressure, i.e., the pressure calculated from the remaining integral over the air column from the liquid surface to infinity. This can easily be visualized using a pressure prism. Hydrostatic pressure has been used in the preservation of foods in a process called pascalization.
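As a worked instance of Stevin's law and the absolute-pressure formula just given, the sketch below evaluates the pressure at the bottom of a water column; the numbers (fresh water, standard gravity, sea-level atmosphere, 10 m depth) are ordinary illustrative values.

```python
rho = 1000.0    # water density (kg/m^3)
g = 9.81        # gravitational acceleration (m/s^2)
p0 = 101_325.0  # atmospheric pressure at the surface (Pa)
dz = 10.0       # height of liquid column above the test area (m)

p_gauge = rho * g * dz  # Stevin's law: pressure difference across the column
p_abs = p_gauge + p0    # absolute pressure, adding the air column term p0

print(f"gauge pressure at {dz} m depth: {p_gauge / 1000:.1f} kPa")  # ~98.1 kPa
print(f"absolute pressure:              {p_abs / 1000:.1f} kPa")    # ~199.4 kPa
```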
https://en.wikipedia.org/wiki/Hydrostatics
passage: Its kernel is a homogeneous ideal and this defines an isomorphism of graded algebra between $$ R_n/I $$ and . Thus, the graded algebras generated by elements of degree 1 are exactly, up to an isomorphism, the quotients of polynomial rings by homogeneous ideals. Therefore, the remainder of this article will be restricted to the quotients of polynomial rings by ideals. ## Properties of Hilbert series ### Additivity Hilbert series and Hilbert polynomial are additive relatively to exact sequences. More precisely, if $$ 0 \;\rightarrow\; A\;\rightarrow\; B\;\rightarrow\; C \;\rightarrow\; 0 $$ is a short exact sequence of graded or filtered modules, then we have $$ HS_B=HS_A+HS_C $$ and $$ HP_B=HP_A+HP_C. $$ This follows immediately from the same property for the dimension of vector spaces. ### Quotient by a non-zero divisor Let be a graded algebra and a homogeneous element of degree in which is not a zero divisor.
https://en.wikipedia.org/wiki/Hilbert_series_and_Hilbert_polynomial
passage: The windings may be water-cooled, using a chilled water supply in order to facilitate the removal of the high thermal duty. ### Apertures Apertures are annular metallic plates, through which electrons that are further than a fixed distance from the optic axis may be excluded. These consist of a small metallic disc that is sufficiently thick to prevent electrons from passing through the disc, whilst permitting axial electrons. This permission of central electrons in a TEM causes two effects simultaneously: firstly, apertures decrease the beam intensity as electrons are filtered from the beam, which may be desired in the case of beam sensitive samples. Secondly, this filtering removes electrons that are scattered to high angles, which may be due to unwanted processes such as spherical or chromatic aberration, or due to diffraction from interaction within the sample. Apertures are either a fixed aperture within the column, such as at the condenser lens, or are a movable aperture, which can be inserted or withdrawn from the beam path, or moved in the plane perpendicular to the beam path. Aperture assemblies are mechanical devices which allow for the selection of different aperture sizes, which may be used by the operator to trade off intensity and the filtering effect of the aperture. Aperture assemblies are often equipped with micrometers to move the aperture, required during optical calibration.
https://en.wikipedia.org/wiki/Transmission_electron_microscopy
passage: $$ or, omitting ∇ and writing multiplication as juxtaposition, $$ (x_1 \otimes x_2)(y_1 \otimes y_2) = x_1 y_1 \otimes x_2 y_2 $$ ; similarly, (K, Δ0, ε0) is a coalgebra in an obvious way and B ⊗ B is a coalgebra with counit and comultiplication $$ \epsilon_2 := (\epsilon \otimes \epsilon) : (B \otimes B) \to K \otimes K \equiv K $$ $$ \Delta_2 := (id \otimes \tau \otimes id) \circ (\Delta \otimes \Delta) : (B \otimes B) \to (B \otimes B) \otimes (B \otimes B) $$ .
https://en.wikipedia.org/wiki/Bialgebra
passage: The exception to this behavior applies when collapsing a parent of the current directory, in which case the selection is refocused on the collapsed parent directory, thus altering the list in the Contents pane. The process of moving from one location to another need not open a new window. Several instances of the file manager can be opened simultaneously and communicate with each other via drag-and-drop and clipboard operations, so it is possible to view several directories simultaneously and perform cut-and-paste operations between instances. File operations are based on drag-and-drop and editor metaphors: users can select and copy files or directories onto the clipboard and then paste them in a different place in the filesystem or even in a different instance of the file manager. Notable examples of navigational file managers include: - Directory Opus - Dolphin in KDE - DOS Shell in MS-DOS/PC DOS - File Manager in Windows - macOS Finder - Nautilus in GNOME (default since v2.30) - File Explorer (Windows Explorer) - PC Shell in PC Tools - ViewMAX in DR DOS - XTree / ZTreeWin ## Spatial file manager Spatial file managers use a spatial metaphor to represent files and directories as if they were actual physical objects. A spatial file manager imitates the way people interact with physical objects. Some ideas behind the concept of a spatial file manager are: 1. A single window represents each opened directory 2. Each window is unambiguously and irrevocably tied to a particular directory. 3.
https://en.wikipedia.org/wiki/File_manager
passage: If the chain's samples are highly correlated, the sum of autocorrelations is large, leading to a much bigger variance for $$ \bar{X}_N $$ than in the independent case. ### Effective sample size (ESS) The effective sample size $$ N_{\text{eff}} $$ is a useful diagnostic that translates the autocorrelation in a chain into an equivalent number of independent samples. It is defined by the formula: $$ N_{\text{eff}} = \frac{N}{1 + 2 \sum_{k=1}^{\infty} \rho_k} $$ so that $$ N_{\text{eff}} $$ is the number of independent draws that would yield the same estimation precision as the $$ N $$ dependent draws from the Markov chain. For example, if $$ 1 + 2\sum_{k=1}^{\infty} \rho_k = 5 $$ , then $$ N_{\text{eff}} = N/5 $$ , meaning the chain of length $$ N $$ carries information equivalent to $$ N/5 $$ independent samples. In an ideal scenario with no correlation, $$ \rho_k=0 $$ and thus $$ N_{\text{eff}}\approx N $$ . But in a poorly mixing chain with strong autocorrelation, $$ N_{\text{eff}} $$ can be much smaller than $$ N $$ .
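The definition of $$ N_{\text{eff}} $$ translates directly into code once the autocorrelations are estimated from the chain; the sketch below does this for a simulated AR(1) chain, truncating the sum at the first non-positive autocorrelation (one common convention, which the passage itself does not fix).

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1) chain x_t = phi * x_{t-1} + noise has autocorrelation rho_k = phi^k.
phi, N = 0.9, 100_000
x = np.empty(N)
x[0] = rng.normal()
for t in range(1, N):
    x[t] = phi * x[t - 1] + rng.normal()

def effective_sample_size(chain, max_lag=200):
    c = chain - chain.mean()
    var = c @ c
    rho = np.array([(c[:len(c) - k] @ c[k:]) / var for k in range(max_lag)])
    k = np.argmax(rho <= 0)       # truncate at the first non-positive lag
    tau = 1 + 2 * rho[1:k].sum()  # 1 + 2 * sum of autocorrelations
    return len(chain) / tau

# Theory for AR(1): tau = (1 + phi) / (1 - phi) = 19, so N_eff ~ N / 19 ~ 5300.
print(round(effective_sample_size(x)))
```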
https://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo
passage: Two-dimensional infrared spectroscopy has become a valuable method to investigate the structures of flexible peptides and proteins that cannot be studied with other methods. A more qualitative picture of protein structure is often obtained by proteolysis, which is also useful to screen for more crystallizable protein samples. Novel implementations of this approach, including fast parallel proteolysis (FASTpp), can probe the structured fraction and its stability without the need for purification. Once a protein's structure has been experimentally determined, further detailed studies can be done computationally, using molecular dynamic simulations of that structure. ## Protein structure databases A protein structure database is a database that is modeled around the various experimentally determined protein structures. The aim of most protein structure databases is to organize and annotate the protein structures, providing the biological community access to the experimental data in a useful way. Data included in protein structure databases often includes 3D coordinates as well as experimental information, such as unit cell dimensions and angles for x-ray crystallography determined structures. Though most instances, in this case either proteins or a specific structure determinations of a protein, also contain sequence information and some databases even provide means for performing sequence based queries, the primary attribute of a structure database is structural information, whereas sequence databases focus on sequence information, and contain no structural information for the majority of entries. Protein structure databases are critical for many efforts in computational biology such as structure based drug design, both in developing the computational methods used and in providing a large experimental dataset used by some methods to provide insights about the function of a protein.
https://en.wikipedia.org/wiki/Protein_structure
passage: Microsoft security-engineers introduced the term "cross-site scripting" in January 2000. The expression "cross-site scripting" originally referred to the act of loading the attacked, third-party web application from an unrelated attack-site, in a manner that executes a fragment of JavaScript prepared by the attacker in the security context of the targeted domain (taking advantage of a reflected or non-persistent XSS vulnerability). The definition gradually expanded to encompass other modes of code injection, including persistent and non-JavaScript vectors (including ActiveX, Java, VBScript, Flash, or even HTML scripts), causing some confusion to newcomers to the field of information security. XSS vulnerabilities have been reported and exploited since the 1990s. Prominent sites affected in the past include the social-networking sites Twitter and Facebook. Cross-site scripting flaws have since surpassed buffer overflows to become the most common publicly reported security vulnerability, with some researchers in 2007 estimating as many as 68% of websites are likely open to XSS attacks. ## Types There is no single, standardized classification of cross-site scripting flaws, but most experts distinguish between at least two primary flavors of XSS flaws: non-persistent and persistent. Some sources further divide these two groups into traditional (caused by server-side code flaws) and DOM-based (in client-side code). ### Non-persistent (reflected) The non-persistent (or reflected) cross-site scripting vulnerability is by far the most basic type of web vulnerability.
https://en.wikipedia.org/wiki/Cross-site_scripting
passage: Alan J. Heeger, Alan MacDiarmid, and Hideki Shirakawa were awarded the 2000 Nobel Prize in Chemistry for the development of polyacetylene and related conductive polymers. Polyacetylene itself did not find practical applications, but organic light-emitting diodes (OLEDs) emerged as one application of conducting polymers. Teaching and research programs in polymer chemistry were introduced in the 1940s. An Institute for Macromolecular Chemistry was founded in 1940 in Freiburg, Germany under the direction of Staudinger. In America, a Polymer Research Institute (PRI) was established in 1941 by Herman Mark at the Polytechnic Institute of Brooklyn (now Polytechnic Institute of NYU). ## Polymers and their properties Polymers are high molecular mass compounds formed by polymerization of monomers. They are synthesized by the polymerization process and can be modified by the additive of monomers. The additives of monomers change polymers mechanical property, processability, durability and so on. The simple reactive molecule from which the repeating structural units of a polymer are derived is called a monomer. A polymer can be described in many ways: its degree of polymerisation, molar mass distribution, tacticity, copolymer distribution, the degree of branching, by its end-groups, crosslinks, crystallinity and thermal properties such as its glass transition temperature and melting temperature. Polymers in solution have special characteristics with respect to solubility, viscosity, and gelation.
https://en.wikipedia.org/wiki/Polymer_chemistry
passage: ### Biological computers A biological computer refers to an engineered biological system that can perform computer-like operations, which is a dominant paradigm in synthetic biology. Researchers built and characterized a variety of logic gates in a number of organisms, and demonstrated both analog and digital computation in living cells. They demonstrated that bacteria can be engineered to perform both analog and/or digital computation. In 2007, in human cells, research demonstrated a universal logic evaluator that operates in mammalian cells. Subsequently, researchers utilized this paradigm to demonstrate a proof-of-concept therapy that uses biological digital computation to detect and kill human cancer cells in 2011. In 2016, another group of researchers demonstrated that principles of computer engineering can be used to automate digital circuit design in bacterial cells. In 2017, researchers demonstrated the 'Boolean logic and arithmetic through DNA excision' (BLADE) system to engineer digital computation in human cells. In 2019, researchers implemented a perceptron in biological systems opening the way for machine learning in these systems. ### Cell transformation Cells use interacting genes and proteins, which are called gene circuits, to implement diverse function, such as responding to environmental signals, decision making and communication. Three key components are involved: DNA, RNA and Synthetic biologist designed gene circuits that can control gene expression from several levels including transcriptional, post-transcriptional and translational levels. Traditional metabolic engineering has been bolstered by the introduction of combinations of foreign genes and optimization by directed evolution.
https://en.wikipedia.org/wiki/Synthetic_biology
passage: Because of that, $$ b $$ and $$ c $$ may have lost some factors that were in $$ j $$ and $$ r $$ . This can be remedied by rerunning the quantum order-finding subroutine an arbitrary number of times, to produce a list of fraction approximations $$ \frac{b_1}{c_1}, \frac{b_2}{c_2}, \ldots, \frac{b_s}{c_s}, $$ where $$ s $$ is the number of times the subroutine was run. Each $$ c_k $$ will have different factors taken out of it because the circuit will (likely) have measured multiple different possible values of $$ j $$ . To recover the actual $$ r $$ value, we can take the least common multiple of each $$ c_k $$ : $$ \operatorname{lcm}(c_1, c_2, \ldots, c_s). $$ The least common multiple will be the order $$ r $$ of the original integer $$ a $$ with high probability. In practice, a single run of the quantum order-finding subroutine is in general enough if more advanced post-processing is used.
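A sketch of this classical post-processing step: each run yields a fraction $$ j/2^m $$ close to some $$ k/r $$ , a continued-fraction step (here via Fraction.limit_denominator) recovers a reduced $$ b_k/c_k $$ , and the least common multiple of the denominators recovers $$ r $$ . The measurement outcomes below are fabricated to mimic an order $$ r=12 $$ , and the denominator bound is an illustrative choice.

```python
from fractions import Fraction
from math import lcm

m = 11        # phase-register size, so measurements are fractions over 2^m = 2048
r_true = 12   # the order we pretend the circuit is sampling

# Simulated outcomes j ~ round(k * 2^m / r): k/r reduces to 1/3, 1/4, 5/6,
# so each individual denominator has lost factors of 12.
js = [round(k * 2**m / r_true) for k in (4, 3, 10)]

cs = [Fraction(j, 2**m).limit_denominator(32).denominator for j in js]
print(cs)        # [3, 4, 6]: each c_k misses some factors of r
print(lcm(*cs))  # 12: the lcm of the denominators recovers the order
```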
https://en.wikipedia.org/wiki/Shor%27s_algorithm
passage: φc = 0.61467(2), pc = 0.037428(2), 0.03745(2)
- 6x6 touching lattice squares* (220): pc = 0.02663(1)
- 10x10 touching lattice squares* (436): φc = 0.63609(2), pc = 0.0100576(5)
- within 11 x 11 square (r=5) (120): pc = 0.01048079(6)
- within 15 x 15 square (r=7) (224): pc = 0.005287692(22)
- 20x20 touching lattice squares* (1676): φc = 0.65006(2), pc = 0.0026215(3)
- within 31 x 31 square (r=15) (960): pc = 0.001131082(5)
- 100x100 touching lattice squares* (40396): φc = 0.66318(2), pc = 0.000108815(12)
- 1000x1000 touching lattice squares* (4003996): φc = 0.66639(1), pc = 1.09778(6)E-06

Here NN = nearest neighbor, 2NN = second nearest neighbor (or next nearest neighbor), 3NN = third nearest neighbor (or next-next nearest neighbor), etc. These are also called 2N, 3N, 4N respectively in some papers.
- For overlapping or touching squares, $$ p_c $$ (site) given here is the net fraction of sites occupied $$ \phi_c $$ similar to the $$ \phi_c $$ in continuum percolation.
https://en.wikipedia.org/wiki/Percolation_threshold
passage: For any $$ k>0 $$ and $$ \varepsilon>0 $$ , $$ O(n^c(\log n)^k) $$ is a subset of $$ O(n^{c+\varepsilon}) $$ , so may be considered as a polynomial with some bigger order. ## Related asymptotic notations Big O is widely used in computer science. Together with some other related notations, it forms the family of Bachmann–Landau notations. ### Little-o notation Intuitively, the assertion " is " (read " is little-o of " or " is of inferior order to ") means that grows much faster than , or equivalently grows much slower than .
https://en.wikipedia.org/wiki/Big_O_notation
passage: Added across all equivalence classes in $$ [x]_Q $$ , the numerator above represents the total number of objects which – based on attribute set $$ P $$ – can be positively categorized according to the classification induced by attributes $$ Q $$ . The dependency ratio therefore expresses the proportion (within the entire universe) of such classifiable objects. The dependency $$ \gamma_{P}(Q) $$ "can be interpreted as a proportion of such objects in the information system for which it suffices to know the values of attributes in $$ P $$ to determine the values of attributes in $$ Q $$ ". Another, intuitive, way to consider dependency is to take the partition induced by $$ Q $$ as the target class $$ C $$ , and consider $$ P $$ as the attribute set we wish to use in order to "re-construct" the target class $$ C $$ . If $$ P $$ can completely reconstruct $$ C $$ , then $$ Q $$ depends totally upon $$ P $$ ; if $$ P $$ results in a poor and perhaps a random reconstruction of $$ C $$ , then $$ Q $$ does not depend upon $$ P $$ at all. Thus, this measure of dependency expresses the degree of functional (i.e., deterministic) dependency of attribute set $$ Q $$ on attribute set $$ P $$ ; it is not symmetric.
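The dependency degree translates directly into a short computation: partition the universe by the values on $$ P $$ and on $$ Q $$ , count the objects whose $$ P $$ -class lies entirely inside a single $$ Q $$ -class (the positive region), and divide by the size of the universe. The toy information system below is invented for illustration.

```python
from collections import defaultdict

def partition(universe, table, attrs):
    """Equivalence classes induced by equal values on `attrs`."""
    classes = defaultdict(set)
    for obj in universe:
        classes[tuple(table[obj][a] for a in attrs)].add(obj)
    return list(classes.values())

def dependency(universe, table, P, Q):
    q_classes = partition(universe, table, Q)
    positive = 0
    for p_class in partition(universe, table, P):
        # A P-class is positively classifiable if it sits inside one Q-class.
        if any(p_class <= q for q in q_classes):
            positive += len(p_class)
    return positive / len(universe)

# Toy system: condition attributes a, b; decision attribute d.
table = {
    1: {"a": 0, "b": 0, "d": "no"},
    2: {"a": 0, "b": 1, "d": "yes"},
    3: {"a": 1, "b": 0, "d": "yes"},
    4: {"a": 1, "b": 0, "d": "no"},  # indiscernible from object 3 on {a, b}
}
U = list(table)
print(dependency(U, table, ["a", "b"], ["d"]))  # 0.5
```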
https://en.wikipedia.org/wiki/Rough_set
passage: If successful through the stages of clinical development, the vaccine licensing process is followed by a Biologics License Application which must provide a scientific review team (from diverse disciplines, such as physicians, statisticians, microbiologists, chemists) and comprehensive documentation for the vaccine candidate having efficacy and safety throughout its development. Also during this stage, the proposed manufacturing facility is examined by expert reviewers for GMP compliance, and the label must have a compliant description to enable health care providers' definition of vaccine-specific use, including its possible risks, to communicate and deliver the vaccine to the public. After licensure, monitoring of the vaccine and its production, including periodic inspections for GMP compliance, continue as long as the manufacturer retains its license, which may include additional submissions to the FDA of tests for potency, safety, and purity for each vaccine manufacturing step. ### India In India, the Drugs Controller General, the head of department of the Central Drugs Standard Control Organization, India's national regulatory body for cosmetics, pharmaceuticals and medical devices, is responsible for the approval of licences for specified categories of drugs such as vaccines and other medicinal items, such as blood or blood products, IV fluids, and sera. ### Postmarketing surveillance Until a vaccine is in use amongst the general population, all potential adverse events from the vaccine may not be known, requiring manufacturers to conduct Phase IV studies for postmarketing surveillance of the vaccine while it is used widely in the public. The WHO works with UN member states to implement post-licensing surveillance.
https://en.wikipedia.org/wiki/Vaccine
passage: Given its generality, the inequality appears in many forms depending on the context, some of which are presented below. In its simplest form the inequality states that the convex transformation of a mean is less than or equal to the mean applied after convex transformation (or equivalently, the opposite inequality for concave transformations). Jensen's inequality generalizes the statement that the secant line of a convex function lies above the graph of the function, which is Jensen's inequality for two points: the secant line consists of weighted means of the convex function (for t ∈ [0,1]), $$ t f(x_1) + (1-t) f(x_2), $$ while the graph of the function is the convex function of the weighted means, $$ f(t x_1 + (1-t) x_2). $$ Thus, Jensen's inequality in this case is $$ f(t x_1 + (1-t) x_2) \leq t f(x_1) + (1-t) f(x_2). $$ In the context of probability theory, it is generally stated in the following form: if X is a random variable and is a convex function, then $$ \varphi(\operatorname{E}[X]) \leq \operatorname{E} \left[\varphi(X)\right]. $$ The difference between the two sides of the inequality, $$ \operatorname{E} \left[\varphi(X)\right] - \varphi\left(\operatorname{E}[X]\right) $$ , is called the Jensen gap.
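A quick numerical illustration of the probabilistic form and the Jensen gap, taking $$ \varphi(x)=e^x $$ (convex) and X uniform on [0, 1]; the distribution and sample size are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random(1_000_000)  # X ~ Uniform(0, 1)

lhs = np.exp(X.mean())     # phi(E[X]) ~ e^0.5 ~ 1.6487
rhs = np.exp(X).mean()     # E[phi(X)] ~ e - 1 ~ 1.7183

print(lhs <= rhs)                # True: Jensen's inequality
print("Jensen gap:", rhs - lhs)  # ~ 0.0696
```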
https://en.wikipedia.org/wiki/Jensen%27s_inequality
passage: The correlations constructed in this manner are of order two, that is, polarities. #### Algebraic formulation We shall describe this polarity algebraically by following the above construction in the case that is the unit circle (i.e., ) centered at the origin. An affine point , other than the origin, with Cartesian coordinates has as its inverse in the unit circle the point with coordinates, $$ \left ( \frac{a}{a^2 + b^2}, \frac{b}{a^2 + b^2} \right). $$ The line passing through that is perpendicular to the line has equation . Switching to homogeneous coordinates using the embedding , the extension to the real projective plane is obtained by permitting the last coordinate to be 0. Recalling that point coordinates are written as column vectors and line coordinates as row vectors, we may express this polarity by: $$ \pi : \mathbb{R}P^2 \rightarrow \mathbb{R}P^2 $$ such that $$ \pi \left ( (x,y,z)^{\mathsf{T}} \right ) = (x, y, -z). $$ Or, using the alternate notation, .
https://en.wikipedia.org/wiki/Duality_%28projective_geometry%29
passage: There are several results in this area: - the contraction principle tells one how a large deviation principle on one space "pushes forward" (via the pushforward of a probability measure) to a large deviation principle on another space via a continuous function; - the Dawson-Gärtner theorem tells one how a sequence of large deviation principles on a sequence of spaces passes to the projective limit. - the tilted large deviation principle gives a large deviation principle for integrals of exponential functionals. - exponentially equivalent measures have the same large deviation principles. ## History and basic development The notion of a rate function emerged in the 1930s with the Swedish mathematician Harald Cramér's study of a sequence of i.i.d. random variables $$ (Z_i)_{i\in\mathbb{N}} $$ . Namely, among some considerations of scaling, Cramér studied the behavior of the distribution of the average $$ X_n = \frac 1 n \sum_{i=1}^n Z_i $$ as n→∞. He found that the tails of the distribution of Xn decay exponentially as e−nλ(x) where the factor λ(x) in the exponent is the Legendre–Fenchel transform (a.k.a. the convex conjugate) of the cumulant-generating function $$ \Psi_Z(t)=\log \operatorname E e^{tZ}. $$ For this reason this particular function λ(x) is sometimes called the Cramér function.
https://en.wikipedia.org/wiki/Rate_function
passage: The prepolar or absolute prepolar of a subset $$ B $$ of $$ Y $$ is the set: $$ {}^{\circ} B = \left\{x \in X ~:~ \sup_{b \in B} |\langle x, b \rangle| \leq 1\right\} = \{x \in X ~:~ \sup |\langle x, B \rangle| \leq 1\} $$ Very often, the prepolar of a subset $$ B $$ of $$ Y $$ is also called the polar or absolute polar of $$ B $$ and denoted by $$ B^{\circ} $$ ; in practice, this reuse of notation and of the word "polar" rarely causes any issues (such as ambiguity) and many authors do not even use the word "prepolar". The bipolar of a subset $$ A $$ of $$ X, $$ often denoted by $$ A^{\circ \circ}, $$ is the set $$ {}^{\circ}\left(A^{\circ}\right) $$ ; that is, $$ A^{\circ \circ} := {}^{\circ}\left(A^{\circ}\right) = \left\{x \in X ~:~ \sup_{y \in A^{\circ}} |\langle x, y \rangle| \leq 1\right\}. $$
https://en.wikipedia.org/wiki/Polar_set
passage: = Pń − Pń-1; ΔB = UB − LB = Pń − P0 = ΔńP = ŃΔ1P. ## The primary difference quotient (Ń = 1) $$ \frac{\Delta F(P_0)}{\Delta P}=\frac{F(P_{\acute{n}})-F(P_0)}{\Delta_{\acute{n}}P}=\frac{F(P_1)-F(P_0)}{\Delta _1P}=\frac{F(P_1)-F(P_0)}{P_1-P_0}.\,\! $$ ### As a derivative The difference quotient as a derivative needs no explanation, other than to point out that, since P0 essentially equals P1 = P2 = ... = Pń (as the differences are infinitesimal), the Leibniz notation and derivative expressions do not distinguish P to P0 or Pń: $$ \frac{dF(P)}{dP}=\frac{F(P_1)-F(P_0)}{dP}=F'(P)=G(P).\,\! $$ There are other derivative notations, but these are the most recognized, standard designations. ### As a divided difference A divided difference, however, does require further elucidation, as it equals the average derivative between and including LB and UB:
https://en.wikipedia.org/wiki/Difference_quotient
passage: Aphanicin, Chlorellaxanthin β,β-Carotene-4,4'-dione - Capsanthin (3R,3'S,5'R)-3,3'-Dihydroxy-β,κ-caroten-6'-one - Capsorubin (3S,5R,3'S,5'R)-3,3'-Dihydroxy-κ,κ-carotene-6,6'-dione - Cryptocapsin (3'R,5'R)-3'-Hydroxy-β,κ-caroten-6'-one - 2,2'-Diketospirilloxanthin 1,1'-Dimethoxy-3,4,3',4'-tetradehydro-1,2,1',2'-tetrahydro-γ,γ-carotene-2,2'-dione - Echinenone β,β-Caroten-4-one - 3'-Hydroxyechinenone - Flexixanthin 3,1'-Dihydroxy-3',4'-didehydro-1',2'-dihydro-β,γ-caroten-4-one - 3-OH-Canthaxanthin a.k.a. Adonirubin a.k.a.
https://en.wikipedia.org/wiki/Carotenoid
passage: ### Inertial frames and rotation In an inertial frame, Newton's first law, the law of inertia, is satisfied: Any free motion has a constant magnitude and direction. Newton's second law for a particle takes the form: $$ \mathbf{F} = m \mathbf{a} \ , $$ with F the net force (a vector), m the mass of a particle and a the acceleration of the particle (also a vector) which would be measured by an observer at rest in the frame. The force F is the vector sum of all "real" forces on the particle, such as contact forces, electromagnetic, gravitational, and nuclear forces.
https://en.wikipedia.org/wiki/Inertial_frame_of_reference
passage: If a monoid has the cancellation property and is finite, then it is in fact a group. The right- and left-cancellative elements of a monoid each in turn form a submonoid (i.e. are closed under the operation and obviously include the identity). This means that the cancellative elements of any commutative monoid can be extended to a group. The cancellative property in a monoid is not necessary to perform the Grothendieck construction – commutativity is sufficient. However, if a commutative monoid does not have the cancellation property, the homomorphism of the monoid into its Grothendieck group is not injective. More precisely, if , then and have the same image in the Grothendieck group, even if . In particular, if the monoid has an absorbing element, then its Grothendieck group is the trivial group. ### Types of monoids An inverse monoid is a monoid where for every in , there exists a unique in such that and . If an inverse monoid is cancellative, then it is a group. In the opposite direction, a zerosumfree monoid is an additively written monoid in which implies that and : equivalently, that no element other than zero has an additive inverse. ## Acts and operator monoids Let be a monoid, with the binary operation denoted by and the identity element denoted by . Then a (left) -act (or left act over ) is a set together with an operation which is compatible with the monoid structure as follows: - for all in : ; - for all , in and in : .
https://en.wikipedia.org/wiki/Monoid
passage: Saccharopine then undergoes a dehydration reaction, catalysed by SDH in the presence of NAD+, to produce AAS and glutamate. AAS dehydrogenase (AASD) (E.C 1.2.1.31) then further dehydrates the molecule into AAA. Subsequently, PLP-AT catalyses the reverse reaction to that of the AAA biosynthesis pathway, resulting in AAA being converted to α-ketoadipate. The product, α‑ketoadipate, is decarboxylated in the presence of NAD+ and coenzyme A to yield glutaryl-CoA, however the enzyme involved in this is yet to be fully elucidated. Some evidence suggests that the 2-oxoadipate dehydrogenase complex (OADHc), which is structurally homologous to the E1 subunit of the oxoglutarate dehydrogenase complex (OGDHc) (E.C 1.2.4.2), is responsible for the decarboxylation reaction. Finally, glutaryl-CoA is oxidatively decarboxylated to crotonyl-CoA by glutaryl-CoA dehydrogenase (E.C 1.3.8.6), which goes on to be further processed through multiple enzymatic steps to yield acetyl-CoA; an essential carbon metabolite involved in the tricarboxylic acid cycle (TCA). ## Nutritional value Lysine is an essential amino acid in humans. The human daily nutritional requirement varies from ~60 mg/kg in infancy to ~30 mg/kg in adults.
https://en.wikipedia.org/wiki/Lysine
passage: If this integral were not path independent, then entropy would not be a state variable. ### Irreversible engines Consider two engines, $$ M $$ and $$ L $$ , which are irreversible and reversible respectively. We construct the machine shown in the right figure, with $$ M $$ driving $$ L $$ as a heat pump. Then if $$ M $$ is more efficient than $$ L $$ , the machine will violate the second law of thermodynamics. Since a Carnot heat engine is a reversible heat engine, and all reversible heat engines operate with the same efficiency between the same reservoirs, we have the first part of Carnot's theorem: No irreversible heat engine is more efficient than a Carnot heat engine operating between the same two thermal reservoirs. ## Definition of thermodynamic temperature The efficiency of a heat engine is the work done by the engine divided by the heat introduced to the engine per engine cycle, $$ \eta = \frac{w_\text{cy}}{q_H} = \frac{q_H - q_C}{q_H} = 1 - \frac{q_C}{q_H}, $$ where $$ w_\text{cy} $$ is the work done by the engine, $$ q_C $$ is the heat to the cold reservoir from the engine, and $$ q_H $$ is the heat to the engine from the hot reservoir, per cycle. Thus, the efficiency depends only on $$ \frac{q_C}{q_H} $$ . Because all reversible heat engines operating between temperatures $$ T_1 $$ and $$ T_2 $$ must have the same efficiency, the efficiency of a reversible heat engine is a function of only the two reservoir temperatures:
https://en.wikipedia.org/wiki/Carnot%27s_theorem_%28thermodynamics%29
passage: The algorithm then returns . The element found on the children level needs to be composed with the high bits to form a complete next element.

```
function FindNext(T, x)
    if x < T.min then
        return T.min
    if x ≥ T.max then
        return M            // no next element
    i = floor(x / √M)
    lo = x mod √M
    if lo < T.children[i].max then
        return (√M · i) + FindNext(T.children[i], lo)
    j = FindNext(T.aux, i)
    return (√M · j) + T.children[j].min
end
```

Note that, in any case, the algorithm performs $$ O(1) $$ work and then possibly recurses on a subtree over a universe of size $$ M^{1/2} $$ (an $$ m/2 $$ bit universe). This gives a recurrence for the running time of $$ T(m) = T(m/2) + O(1) $$ , which resolves to $$ O(\log m) = O(\log \log M) $$ . Insert The call that inserts a value into a vEB tree operates as follows: 1. If T is empty then we set and we are done. 2. Otherwise, if then we insert into the subtree responsible for and then set . If was previously empty, then we also insert into 3. Otherwise, if then we insert into the subtree responsible for and then set .
https://en.wikipedia.org/wiki/Van_Emde_Boas_tree
passage: $$ i\hbar \frac{d}{dt} \left| \psi \right\rangle = \hat{H} \left| \psi \right\rangle $$ The Hamiltonian (in quantum mechanics) H is a self-adjoint operator acting on the state space, $$ | \psi \rangle $$ (see Dirac notation) is the instantaneous quantum state vector at time t, position r, i is the unit imaginary number, and $$ \hbar $$ is the reduced Planck constant.

Wave–particle duality

Planck–Einstein law: the energy of photons is proportional to the frequency of the light (the constant is the Planck constant, h). $$ E = h\nu = \hbar \omega $$ De Broglie wavelength: this laid the foundations of wave–particle duality, and was the key concept in the Schrödinger equation, $$ \mathbf{p} = \frac{h}{\lambda}\mathbf{\hat{k}} = \hbar \mathbf{k} $$ Heisenberg uncertainty principle: Uncertainty in position multiplied by uncertainty in momentum is at least half of the reduced Planck constant, similarly for time and energy; $$ \Delta x \, \Delta p \ge \frac{\hbar}{2},\, \Delta E \, \Delta t \ge \frac{\hbar}{2} $$ The uncertainty principle can be generalized to any pair of observables – see main article.
https://en.wikipedia.org/wiki/Scientific_law
passage: This is called a "plane tree" because an ordering of the children is equivalent to an embedding of the tree in the plane, with the root at the top and the children of each vertex lower than that vertex. Given an embedding of a rooted tree in the plane, if one fixes a direction of children, say left to right, then an embedding gives an ordering of the children. Conversely, given an ordered tree, and conventionally drawing the root at the top, then the child vertices in an ordered tree can be drawn left-to-right, yielding an essentially unique planar embedding. ## Properties - Every tree is a bipartite graph. A graph is bipartite if and only if it contains no cycles of odd length. Since a tree contains no cycles at all, it is bipartite. - Every tree with only countably many vertices is a planar graph. - Every connected graph G admits a spanning tree, which is a tree that contains every vertex of G and whose edges are edges of G. More specific types spanning trees, existing in every connected finite graph, include depth-first search trees and breadth-first search trees. Generalizing the existence of depth-first-search trees, every connected graph with only countably many vertices has a Trémaux tree. However, some uncountable-order graphs do not have such a tree. - Every finite tree with n vertices, with , has at least two terminal vertices (leaves).
https://en.wikipedia.org/wiki/Tree_%28graph_theory%29%23Definitions
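The bipartite and spanning-tree properties above are easy to exhibit in code: a single breadth-first search produces both a spanning tree and a 2-coloring by level parity, and on a tree the coloring is always proper. This is a minimal sketch; the adjacency-list format and the path-graph example are assumptions for illustration.

```python
from collections import deque

def bfs_spanning_tree(adj, root=0):
    """BFS over an undirected graph given as an adjacency list.

    Returns (tree_edges, color): the edges of a breadth-first spanning
    tree and a 2-coloring by BFS level parity. For acyclic input (a tree)
    the coloring is always proper, illustrating that trees are bipartite.
    """
    color = {root: 0}
    tree_edges = []
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in color:           # first visit: v joins the tree
                color[v] = color[u] ^ 1  # opposite side from its parent
                tree_edges.append((u, v))
                q.append(v)
    return tree_edges, color

# A path graph 0-1-2-3 (itself a tree): the BFS tree is the whole edge set.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
edges, color = bfs_spanning_tree(adj)
print(edges)   # [(0, 1), (1, 2), (2, 3)]
print(color)   # {0: 0, 1: 1, 2: 0, 3: 1} -- alternating sides
```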
passage: Moreover, by using the first equation one arrives at the useful result: $$ \dfrac{\delta^2}{\delta J(a)\delta J(b)}\left(\dfrac{W[J]}{W[0]}\right)\Bigg|_{J=0}= K^{-1}(a; b)\;; $$ Putting these results together and going back to the original notation we have: $$ \frac{\displaystyle\int f(a)f(b)\exp\left\lbrace-\frac{1}{2} \int_{\mathbb{R}^2} f(x) K(x;y) f(y)\, dx\,dy\right\rbrace \mathcal{D}[f]}{\displaystyle\int \exp\left\lbrace-\frac{1}{2} \int_{\mathbb{R}^2} f(x) K(x;y) f(y)\, dx\,dy\right\rbrace \mathcal{D}[f]} = K^{-1}(a;b)\,. $$ Another useful integral is the functional delta function: $$ \int \exp\left\lbrace \int_{\mathbb{R}} f(x) g(x)dx\right\rbrace \mathcal{D}[f] = \delta[g] = \prod_x\delta\big(g(x)\big), $$ which is useful to specify constraints. Functional integrals can also be done over Grassmann-valued functions $$ \psi(x) $$ , where $$ \psi(x) \psi(y) = -\psi(y) \psi(x) $$ , which is useful in quantum electrodynamics for calculations involving fermions.

## Approaches to path integrals

Functional integrals where the space of integration consists of paths (ν = 1) can be defined in many different ways.
https://en.wikipedia.org/wiki/Functional_integration
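The two-point formula above is the continuum version of a finite-dimensional Gaussian-moment identity, which can be checked numerically: for a positive-definite matrix K, sampling x ~ N(0, K⁻¹) gives E[x_a x_b] = (K⁻¹)_{ab}. The matrix and sample size below are arbitrary choices for a quick Monte Carlo sanity check, not anything fixed by the passage.

```python
import numpy as np

# Finite-dimensional analogue of the Gaussian functional integral:
# with weight exp(-x^T K x / 2), the normalized second moments satisfy
#   E[x_a x_b] = (K^{-1})_{ab},
# i.e. x is distributed as N(0, K^{-1}).
rng = np.random.default_rng(0)
K = np.array([[2.0, 0.5],
              [0.5, 1.0]])                       # kernel; must be positive definite
cov = np.linalg.inv(K)
samples = rng.multivariate_normal(np.zeros(2), cov, size=200_000)
empirical = samples.T @ samples / len(samples)   # Monte Carlo estimate of E[x x^T]
print(np.round(empirical, 3))
print(np.round(cov, 3))                          # the two should agree closely
```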
passage: Having an index block which is slightly larger than the storage system's actual block represents a significant performance decrease; therefore erring on the side of caution is preferable. If nodes of the B+ tree are organized as arrays of elements, then it may take a considerable time to insert or delete an element, as half of the array will need to be shifted on average. To overcome this problem, elements inside a node can be organized in a binary tree or a B+ tree instead of an array. B+ trees can also be used for data stored in RAM. In this case a reasonable choice for block size would be the size of the processor's cache line. Space efficiency of B+ trees can be improved by using some compression techniques. One possibility is to use delta encoding to compress keys stored into each block. For internal blocks, space saving can be achieved by either compressing keys or pointers. For string keys, space can be saved by using the following technique: normally the i-th entry of an internal block contains the first key of block i+1. Instead of storing the full key, we could store the shortest prefix of the first key of block i+1 that is strictly greater (in lexicographic order) than the last key of block i. There is also a simple way to compress pointers: if we suppose that some consecutive blocks are stored contiguously, then it will suffice to store only a pointer to the first block and the count of consecutive blocks. All the above compression techniques have some drawbacks. First, a full block must be decompressed to extract a single element.
https://en.wikipedia.org/wiki/B%2B_tree
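The prefix-compression rule described above is only a few lines of code: scan ever-longer prefixes of the first key of block i+1 until one sorts strictly after the last key of block i. This is an illustrative sketch; the function name and the example keys are made up for demonstration.

```python
def shortest_separator(prev_last: str, next_first: str) -> str:
    """Shortest prefix of next_first that is strictly greater than prev_last.

    Used to shrink separator keys in B+ tree internal nodes: instead of
    storing the full first key of block i+1, store the shortest prefix
    that still sorts strictly after the last key of block i.
    Assumes prev_last < next_first lexicographically.
    """
    for n in range(1, len(next_first) + 1):
        prefix = next_first[:n]
        if prefix > prev_last:
            return prefix
    return next_first

print(shortest_separator("apple", "banana"))  # "b"   -- one byte instead of six
print(shortest_separator("bland", "blimp"))   # "bli" -- shared prefix plus one byte
```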
passage: ### Other moments

#### Moment generating function

It also follows that the moment generating function is $$ \begin{align} M_X(\alpha; \beta; t) &= \operatorname{E}\left[e^{tX}\right] \\[4pt] &= \int_0^1 e^{tx} f(x;\alpha,\beta)\,dx \\[4pt] &= {}_1F_1(\alpha; \alpha+\beta; t) \\[4pt] &= \sum_{n=0}^\infty \frac {\alpha^{(n)}} {(\alpha+\beta)^{(n)}}\frac {t^n}{n!}\\[4pt] &= 1 +\sum_{k=1}^{\infty} \left( \prod_{r=0}^{k-1} \frac{\alpha+r}{\alpha+\beta+r} \right) \frac{t^k}{k!}. \end{align} $$ In particular $$ M_X(\alpha; \beta; 0) = 1 $$ .

#### Higher moments

Using the moment generating function, the k-th raw moment is given by the factor $$ \prod_{r=0}^{k-1} \frac{\alpha+r}{\alpha+\beta+r} $$ multiplying the (exponential series) term $$ \left(\frac{t^k}{k!}\right) $$ in the series of the moment generating function $$ \operatorname{E}[X^k]= \frac{\alpha^{(k)}}{(\alpha + \beta)^{(k)}} = \prod_{r=0}^{k-1} \frac{\alpha+r}{\alpha+\beta+r} $$ where $$ x^{(k)} $$ is the Pochhammer symbol representing the rising factorial.
https://en.wikipedia.org/wiki/Beta_distribution
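The rising-factorial product for the raw moments can be verified directly: the sketch below computes E[Xᵏ] via the product and cross-checks it against midpoint-rule numerical integration of xᵏ f(x; α, β). The parameter values are arbitrary examples.

```python
from math import gamma

def beta_raw_moment(k: int, a: float, b: float) -> float:
    """k-th raw moment of Beta(a, b) via the rising-factorial product
    E[X^k] = prod_{r=0}^{k-1} (a + r) / (a + b + r)."""
    m = 1.0
    for r in range(k):
        m *= (a + r) / (a + b + r)
    return m

def beta_pdf(x: float, a: float, b: float) -> float:
    """Density f(x; a, b) = x^(a-1) (1-x)^(b-1) / B(a, b)."""
    return gamma(a + b) / (gamma(a) * gamma(b)) * x**(a - 1) * (1 - x)**(b - 1)

a, b, k = 2.0, 3.0, 2
n = 100_000
dx = 1.0 / n
# Midpoint rule avoids the endpoints, where the density can be awkward.
numeric = sum((x := (i + 0.5) * dx) ** k * beta_pdf(x, a, b) * dx for i in range(n))
print(beta_raw_moment(k, a, b))  # 0.2  (= (2/5) * (3/6))
print(numeric)                   # ~0.2
```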
passage: - An integrated photonic platform for quantum information with continuous variables is documented. - On December 17, 2018, the company IonQ introduces the first commercial trapped-ion quantum computer, with a program length of over 60 two-qubit gates, 11 fully connected qubits, 55 addressable pairs, one-qubit gate error of <0.03% and two-qubit gate error of <1.0%. - On December 21, 2018, the US National Quantum Initiative Act was signed into law by US President Donald Trump, establishing the goals and priorities for a 10-year plan to accelerate the development of quantum information science and technology applications in the United States. ### 2019 - IBM unveils its first commercial quantum computer, the IBM Q System One, designed by UK-based Map Project Office and Universal Design Studio and manufactured by Goppion. - Austrian physicists demonstrate self-verifying, hybrid, variational quantum simulation of lattice models in condensed matter and high-energy physics using a feedback loop between a classical computer and a quantum co-processor. - Griffith University, University of New South Wales (UNSW), Sydney, Australia, and UTS, in partnership with seven universities in the United States, develop noise cancelling for quantum bits via machine learning, taking quantum noise in a quantum chip down to 0%. - Quantum Darwinism is observed in diamond at room temperature. - Google reveals its Sycamore processor, consisting of 53 qubits. A paper by Google's quantum computer research team is briefly available in late September 2019, claiming the project had reached quantum supremacy. Google also develops a cryogenic chip for controlling qubits from within a dilution refrigerator.
https://en.wikipedia.org/wiki/Timeline_of_quantum_computing_and_communication
passage: As a result, DNA intercalators may be carcinogens, and in the case of thalidomide, a teratogen. Others such as benzo[a]pyrene diol epoxide and aflatoxin form DNA adducts that induce errors in replication. Nevertheless, due to their ability to inhibit DNA transcription and replication, other similar toxins are also used in chemotherapy to inhibit rapidly growing cancer cells. ## Biological functions DNA usually occurs as linear chromosomes in eukaryotes, and circular chromosomes in prokaryotes. The set of chromosomes in a cell makes up its genome; the human genome has approximately 3 billion base pairs of DNA arranged into 46 chromosomes. The information carried by DNA is held in the sequence of pieces of DNA called genes. Transmission of genetic information in genes is achieved via complementary base pairing. For example, in transcription, when a cell uses the information in a gene, the DNA sequence is copied into a complementary RNA sequence through the attraction between the DNA and the correct RNA nucleotides. Usually, this RNA copy is then used to make a matching protein sequence in a process called translation, which depends on the same interaction between RNA nucleotides. In an alternative fashion, a cell may copy its genetic information in a process called DNA replication. The details of these functions are covered in other articles; here the focus is on the interactions between DNA and other molecules that mediate the function of the genome.
https://en.wikipedia.org/wiki/DNA
passage: The grid-based technique is fast and has low computational complexity. There are two types of grid-based clustering methods: STING and CLIQUE. The steps involved in the grid-based clustering algorithm are as follows (a runnable sketch follows the source link below):

1. Divide the data space into a finite number of cells.
2. Randomly select a cell ‘c’, where c has not been traversed beforehand.
3. Calculate the density of ‘c’.
4. If the density of ‘c’ is greater than the threshold density:
   1. Mark cell ‘c’ as a new cluster.
   2. Calculate the density of all the neighbors of ‘c’.
   3. If the density of a neighboring cell is greater than the threshold density, add the cell to the cluster and repeat steps 4.2 and 4.3 until there is no neighbor with a density greater than the threshold density.
5. Repeat steps 2, 3 and 4 until all the cells are traversed.
6. Stop.

### Recent developments

In recent years, considerable effort has been put into improving the performance of existing algorithms. Among them are CLARANS and BIRCH. With the recent need to process larger and larger data sets (also known as big data), the willingness to trade semantic meaning of the generated clusters for performance has been increasing. This led to the development of pre-clustering methods such as canopy clustering, which can process huge data sets efficiently, but the resulting "clusters" are merely a rough pre-partitioning of the data set, to be analyzed afterwards with existing slower methods such as k-means clustering.
https://en.wikipedia.org/wiki/Cluster_analysis
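Here is a toy implementation of the loop above, assuming 2D points and square cells; the function name, the 4-connectivity neighbor definition, and the threshold semantics are illustrative choices not fixed by the passage.

```python
import numpy as np

def grid_cluster(points, cell_size, density_threshold):
    """Toy grid-based clustering following the steps listed above.

    Bins 2D points into square cells, then grows a cluster from each
    sufficiently dense, unvisited cell by absorbing dense neighbor cells
    (steps 4.2-4.3). Returns a mapping cell -> cluster id.
    """
    # Step 1: divide the data space into cells and count points per cell.
    density = {}
    for p in points:
        cell = (int(p[0] // cell_size), int(p[1] // cell_size))
        density[cell] = density.get(cell, 0) + 1

    labels, visited, next_id = {}, set(), 0
    for cell in density:                    # steps 2 and 5: visit each cell once
        if cell in visited or density[cell] <= density_threshold:
            visited.add(cell)
            continue
        visited.add(cell)
        labels[cell] = next_id              # step 4.1: mark a new cluster
        frontier = [cell]
        while frontier:                     # steps 4.2-4.3: absorb dense neighbors
            cx, cy = frontier.pop()
            for nb in [(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)]:
                if nb not in visited and density.get(nb, 0) > density_threshold:
                    visited.add(nb)
                    labels[nb] = next_id
                    frontier.append(nb)
        next_id += 1
    return labels

rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(8, 1, (200, 2))])
labels = grid_cluster(pts, cell_size=1.0, density_threshold=5)
print(sorted(set(labels.values())))  # expect two cluster ids: [0, 1]
```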
passage: The growth of microcracks is not the growth of the original crack or imperfection. The cracks that nucleate do so perpendicular to the original crack and are known as secondary cracks. These secondary cracks, often called wing-tip cracks, can grow to as long as 10–15 times the length of the original cracks in simple (uniaxial) compression. However, if a transverse compressive load is applied, the growth is limited to a few integer multiples of the original crack's length.

#### Shear bands

If the sample size is large enough that the worst defect's secondary cracks cannot grow large enough to break the sample, other defects within the sample will begin to grow secondary cracks as well. This occurs homogeneously over the entire sample. These micro-cracks form an echelon that can create an “intrinsic” fracture behavior, the nucleus of a shear fault instability. Eventually this leads to the material deforming non-homogeneously; that is, the strain will no longer vary linearly with the load, creating localized shear bands on which the material will fail according to deformation theory. “The onset of localized banding does not necessarily constitute final failure of a material element, but it presumably is at least the beginning of the primary failure process under compressive loading.”
https://en.wikipedia.org/wiki/Compressive_strength