passage: Sets are uniquely characterized by their elements; this means that two sets that have precisely the same elements are equal (they are the same set). In a formalized set theory, this is usually defined by an axiom called the Axiom of extensionality. For example, using set builder notation, the following states that "The set of all integers $$ (\Z) $$ greater than 0 but not more than 3 is equal to the set containing only 1, 2, and 3", despite the differences in formulation. $$ \{x\in \Z \mid 0<x\le 3\} = \{1,2,3\}, $$ The term extensionality, as used in 'Axiom of Extensionality', has its roots in logic and grammar (cf. Extension (semantics)). In grammar, an intensional definition describes the necessary and sufficient conditions for a term to apply to an object. For example: "A Platonic solid is a convex, regular polyhedron in three-dimensional Euclidean space." An extensional definition instead lists all objects where the term applies. For example: "A Platonic solid is one of the following: Tetrahedron, Cube, Octahedron, Dodecahedron, or Icosahedron." In logic, the extension of a predicate is the set of all objects for which the predicate is true. Further, the logical principle of extensionality judges two objects to be equal if they satisfy the same external properties.
https://en.wikipedia.org/wiki/Equality_%28mathematics%29
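A quick illustration of extensionality, as a hedged sketch: Python's built-in sets compare by membership alone, so a set described by a condition and a set given by listing its elements are equal. Since Python cannot enumerate all of $$ \Z $$, a finite window of integers stands in for it here (that restriction is an assumption of the example, not part of the passage).

```python
# Extensionality with Python sets: equality depends only on which elements are present,
# not on how the set was described or constructed.
window = range(-10, 11)  # finite stand-in for the integers Z

intensional = {x for x in window if 0 < x <= 3}   # "greater than 0 but not more than 3"
extensional = {1, 2, 3}                           # listing the elements directly

print(intensional == extensional)  # True: same elements, hence the same set
```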
passage: A proper example of a smooth bump function would be: $$ u(x)=\begin{cases} 1,\text{if } x=0, \\ 0, \text{if } |x|\geq 1, \\ \frac{1}{1+e^{\frac{1-2|x|}{x^2-|x|}}}, \text{otherwise}, \end{cases} $$ A proper example of a smooth transition function would be: $$ w(x)=\begin{cases}\frac{1}{1+e^{\frac{2x-1}{x^2-x}}}&\text{if }0<x<1,\\ 0&\text{if } x\leq 0,\\ 1&\text{if } x\geq 1,\end{cases} $$ where it can be noticed that it can also be represented through hyperbolic functions: $$ \frac{1}{1+e^{\frac{2x-1}{x^2-x}}} = \frac{1}{2}\left( 1-\tanh\left(\frac{2x-1}{2(x^2-x)} \right) \right) $$ ## Existence of bump functions It is possible to construct bump functions "to specifications". Stated formally, if $$ K $$ is an arbitrary compact set in $$ n $$ dimensions and $$ U $$ is an open set containing $$ K, $$ there exists a bump function $$ \phi $$ which is $$ 1 $$ on $$ K $$ and $$ 0 $$ outside of $$ U. $$ Since $$ U $$ can be taken to be a very small neighborhood of $$ K, $$
https://en.wikipedia.org/wiki/Bump_function
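A minimal numerical sketch of the two formulas above; the function names `bump` and `transition` are illustrative, and the spot-check inputs are arbitrary. Note that very close to the endpoints the exponent becomes huge, so in floating point the expressions can overflow even though the mathematical limits are 0 and 1.

```python
import math

def bump(x: float) -> float:
    """Smooth bump u(x): 1 at x = 0, 0 for |x| >= 1, smooth in between."""
    if x == 0.0:
        return 1.0
    if abs(x) >= 1.0:
        return 0.0
    return 1.0 / (1.0 + math.exp((1.0 - 2.0 * abs(x)) / (x * x - abs(x))))

def transition(x: float) -> float:
    """Smooth transition w(x): 0 for x <= 0, 1 for x >= 1, rising smoothly on (0, 1)."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    return 1.0 / (1.0 + math.exp((2.0 * x - 1.0) / (x * x - x)))

# Spot checks: the bump is 1 at the centre, 1/2 at |x| = 0.5, and decays toward 0;
# the transition rises monotonically from 0 to 1 on (0, 1).
print(bump(0.0), bump(0.5), bump(0.9))
print(transition(0.25), transition(0.5), transition(0.75))
```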
passage: Rank 3

| Group | Cartan | Group | Cartan |
|---|---|---|---|
| G22 | <5,3,2>2 | G23 | [5,3] |
| G24 | [1 1 14]4 | G25 | 3[3]3[3]3 |
| G26 | 3[3]3[4]2 | G27 | [1 1 15]4 |

Rank 4

| Group | Cartan | Group | Cartan |
|---|---|---|---|
| G28 | [3,4,3] | G29 | [1 1 2]4 |
| G30 | [5,3,3] | G32 | 3[3]3[3]3 |

Rank 5

| Group | Cartan | Group | Cartan |
|---|---|---|---|
| G31 | O4 | G33 | [1 2 2]3 |
https://en.wikipedia.org/wiki/Complex_reflection_group
passage: #### Example augmented matrix Suppose you have three points that define a non-degenerate triangle in a plane, or four points that define a non-degenerate tetrahedron in 3-dimensional space, or generally $$ n+1 $$ points $$ \mathbf{x}_1, \ldots, \mathbf{x}_{n+1} $$ that define a non-degenerate simplex in $$ n $$-dimensional space. Suppose you have corresponding destination points $$ \mathbf{y}_1, \ldots, \mathbf{y}_{n+1} $$, where these new points can lie in a space with any number of dimensions. (Furthermore, the new points need not be distinct from each other and need not form a non-degenerate simplex.) The unique augmented matrix that achieves the affine transformation $$ \begin{bmatrix}\mathbf{y}_i\\1\end{bmatrix} = M \begin{bmatrix}\mathbf{x}_i\\1\end{bmatrix} $$ for every $$ i $$ is $$ M = \begin{bmatrix}\mathbf{y}_1&\cdots&\mathbf{y}_{n+1}\\1&\cdots&1\end{bmatrix} \begin{bmatrix}\mathbf{x}_1&\cdots&\mathbf{x}_{n+1}\\1&\cdots&1\end{bmatrix}^{-1}. $$ ## Properties ### Properties preserved An affine transformation preserves: 1. collinearity between points: three or more points which lie on the same line (called collinear points) continue to be collinear after the transformation.
https://en.wikipedia.org/wiki/Affine_transformation
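A small sketch of the augmented-matrix construction above; the function name and the example triangle are illustrative choices, not taken from the article.

```python
import numpy as np

def affine_from_points(xs: np.ndarray, ys: np.ndarray) -> np.ndarray:
    """Recover the augmented matrix M from columns of source points xs and
    destination points ys, following M = [y; 1-row] @ inv([x; 1-row])."""
    ones = np.ones((1, xs.shape[1]))
    X = np.vstack([xs, ones])          # append a row of 1s (homogeneous coordinates)
    Y = np.vstack([ys, ones])
    return Y @ np.linalg.inv(X)        # unique when the source simplex is non-degenerate

# Example: map the triangle (0,0), (1,0), (0,1) to (1,1), (2,1), (1,3).
xs = np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]])
ys = np.array([[1.0, 2.0, 1.0],
               [1.0, 1.0, 3.0]])
M = affine_from_points(xs, ys)
print(np.round(M, 6))
# Check: applying M to a source point in homogeneous form gives its destination.
print(M @ np.array([1.0, 0.0, 1.0]))   # -> [2. 1. 1.]
```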
passage: This inequation is satisfied for any x, so the largest such x is 1. Furthermore, the rule of modus ponens allows us to derive the formula Q from the formulas P and P→Q. But in any Heyting algebra, if P has the value 1, and P→Q has the value 1, then it means that $$ P \land 1 \le Q $$ , and so $$ 1 \land 1 \le Q $$ ; it can only be that Q has the value 1. This means that if a formula is deducible from the laws of intuitionistic logic, being derived from its axioms by way of the rule of modus ponens, then it will always have the value 1 in all Heyting algebras under any assignment of values to the formula's variables. However, one can construct a Heyting algebra in which the value of Peirce's law is not always 1. Consider the 3-element algebra {0, 1/2, 1} as given above. If we assign 1/2 to P and 0 to Q, then the value of Peirce's law ((P→Q)→P)→P is 1/2. It follows that Peirce's law cannot be intuitionistically derived. See Curry–Howard isomorphism for the general context of what this implies in type theory. The converse can be proven as well: if a formula always has the value 1, then it is deducible from the laws of intuitionistic logic, so the intuitionistically valid formulas are exactly those that always have a value of 1.
https://en.wikipedia.org/wiki/Heyting_algebra
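A hedged sketch of the counterexample: on a totally ordered Heyting algebra such as the chain 0 < 1/2 < 1, implication is a → b = 1 when a ≤ b and b otherwise (that choice of implication, and the helper names, are assumptions of this sketch). Evaluating Peirce's law at P = 1/2, Q = 0 then gives 1/2, matching the passage.

```python
from fractions import Fraction

# The three-element Heyting algebra on the chain 0 < 1/2 < 1.
def implies(a, b):
    """a -> b on a chain: 1 if a <= b, otherwise b."""
    return Fraction(1) if a <= b else b

def peirce(p, q):
    """Value of Peirce's law ((P -> Q) -> P) -> P under the assignment p, q."""
    return implies(implies(implies(p, q), p), p)

half = Fraction(1, 2)
print(peirce(half, Fraction(0)))   # 1/2, not 1: Peirce's law is not intuitionistically valid

# Sanity check: an intuitionistically valid formula such as P -> P always evaluates to 1.
print(all(implies(p, p) == 1 for p in (Fraction(0), half, Fraction(1))))
```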
passage: Until 2005, those wishing to become a general practitioner of medicine had to do a minimum of the following postgraduate training: - One year as a pre-registration house officer (PRHO) (formerly called a house officer), in which the trainee would usually spend six months on a general surgical ward and six months on a general medical ward in a hospital; - Two years as a senior house officer (SHO) – often on a General Practice Vocational Training Scheme (GP-VTS) in which the trainee would normally complete four six-month jobs in hospital specialties such as obstetrics and gynaecology, paediatrics, geriatric medicine, accident and emergency or psychiatry; - One year as a general practice registrar on a GPST. This process changed under the programme Modernising Medical Careers. Medical practitioners graduating from 2005 onwards have to do a minimum of five years postgraduate training: - Two years of Foundation Training, in which the trainee will do a rotation around either six four-month jobs or eight three-month jobs – these include at least three months in general medicine and three months in general surgery, but will also include jobs in other areas; - A three-year "run-through" GP Specialty Training Programme (GPSTP): This comprises a minimum of twelve months as a hospital based Specialty Trainee during which time the trainee completes a mixture of jobs in specialties such as obstetrics and gynaecology, paediatrics, geriatric medicine, accident and emergency or psychiatry; eighteen to twenty-four months as a GP Specialty Trainee working in General Practice.
https://en.wikipedia.org/wiki/General_practitioner
passage: Bits are deleted according to a puncturing matrix. The following puncturing matrices are the most frequently used (free distances are for the NASA standard K=7 convolutional code):

- Code rate 1/2 (no perforation): matrix rows (1), (1); free distance 10
- Code rate 2/3: matrix rows (1 0), (1 1); free distance 6
- Code rate 3/4: matrix rows (1 0 1), (1 1 0); free distance 5
- Code rate 5/6: matrix rows (1 0 1 0 1), (1 1 0 1 0); free distance 4
- Code rate 7/8: matrix rows (1 0 0 0 1 0 1), (1 1 1 1 0 1 0); free distance 3

For example, if we want to make a code with rate 2/3 using the appropriate matrix from the list above, we should take a basic encoder output and transmit every first bit from the first branch and every bit from the second one. The specific order of transmission is defined by the respective communication standard. Punctured convolutional codes are widely used in satellite communications, for example, in Intelsat systems and Digital Video Broadcasting. Punctured convolutional codes are also called "perforated". ## Turbo codes: replacing convolutional codes Simple Viterbi-decoded convolutional codes are now giving way to turbo codes, a new class of iterated short convolutional codes that closely approach the theoretical limits imposed by Shannon's theorem with much less decoding complexity than the Viterbi algorithm on the long convolutional codes that would be required for the same performance. Concatenation with an outer algebraic code (e.g., Reed–Solomon) addresses the issue of error floors inherent to turbo code designs.
https://en.wikipedia.org/wiki/Convolutional_code
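A hedged sketch of the rate-2/3 puncturing step described above; the example bit streams and the column-by-column transmission order are illustrative, since the actual order is fixed by the relevant standard.

```python
def puncture(branch1, branch2, matrix):
    """Delete bits from two parallel rate-1/2 encoder branches according to a
    2-row puncturing matrix, reading the matrix column by column."""
    period = len(matrix[0])
    out = []
    for i, (b1, b2) in enumerate(zip(branch1, branch2)):
        col = i % period
        if matrix[0][col]:      # keep this bit of the first branch?
            out.append(b1)
        if matrix[1][col]:      # keep this bit of the second branch?
            out.append(b2)
    return out

rate_2_3 = [[1, 0],
            [1, 1]]             # keep every first bit of branch 1, every bit of branch 2
branch1 = [1, 0, 1, 1]          # hypothetical encoder outputs
branch2 = [0, 1, 1, 0]
print(puncture(branch1, branch2, rate_2_3))
# 4 information bits produced 8 coded bits; after puncturing only 6 are sent, so rate 4/6 = 2/3.
```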
passage: In mathematics, the theory of fiber bundles with a structure group $$ G $$ (a topological group) allows an operation of creating an associated bundle, in which the typical fiber of a bundle changes from $$ F_1 $$ to $$ F_2 $$ , which are both topological spaces with a group action of $$ G $$ . For a fiber bundle $$ F $$ with structure group $$ G $$ , the transition functions of the fiber (i.e., the cocycle) in an overlap of two coordinate systems $$ U_\alpha $$ and $$ U_\beta $$ are given as a $$ G $$ -valued function $$ g_{\alpha\beta} $$ on $$ U_\alpha \cap U_\beta $$ . One may then construct a fiber bundle $$ F' $$ as a new fiber bundle having the same transition functions, but possibly a different fiber. ## An example A simple case comes with the Möbius strip, for which $$ G $$ is the cyclic group of order 2, $$ \mathbb{Z}_2 $$ . We can take $$ F $$ to be any of the following: the real number line $$ \mathbb{R} $$ , the interval $$ [-1,\ 1] $$ , the real number line less the point 0, or the two-point set $$ \{-1,\ 1\} $$ .
https://en.wikipedia.org/wiki/Associated_bundle
passage: For example, if a language allows new types to be declared, a CFG cannot predict the names of such types nor the way in which they should be used. Even if a language has a predefined set of types, enforcing proper usage usually requires some context. Another example is duck typing, where the type of an element can change depending on context. Operator overloading is yet another case where correct usage and final function are context-dependent. ### Design The design of an AST is often closely linked with the design of a compiler and its expected features. Core requirements include the following: - Variable types must be preserved, as well as the location of each declaration in source code. - The order of executable statements must be explicitly represented and well defined. - Left and right components of binary operations must be stored and correctly identified. - Identifiers and their assigned values must be stored for assignment statements. These requirements can be used to design the data structure for the AST. Some operations will always require two elements, such as the two terms for addition. However, some language constructs require an arbitrarily large number of children, such as argument lists passed to programs from the command shell. As a result, an AST used to represent code written in such a language has to also be flexible enough to allow for quick addition of an unknown quantity of children. To support compiler verification it should be possible to unparse an AST into source code form. The source code produced should be sufficiently similar to the original in appearance and identical in execution, upon recompilation.
https://en.wikipedia.org/wiki/Abstract_syntax_tree
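A minimal sketch of a node type shaped by the requirements listed above; the field names and the `unparse` helper are illustrative choices, not from any particular compiler. It records the source location of declarations, preserves types, keeps the left and right operands of a binary operation identifiable by position, and allows an arbitrary number of children (for constructs like argument lists).

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    kind: str                                 # e.g. "binary_op", "assign", "identifier"
    value: Optional[str] = None               # operator symbol, identifier name, literal text
    type_: Optional[str] = None               # preserved variable/expression type
    line: Optional[int] = None                # location of the declaration in source code
    children: List["Node"] = field(default_factory=list)   # flexible arity

def unparse(node: Node) -> str:
    """Turn a tiny expression subtree back into source-like text (for verification)."""
    if node.kind == "binary_op":
        left, right = node.children          # operand order is stored explicitly
        return f"({unparse(left)} {node.value} {unparse(right)})"
    return str(node.value)

# x = a + 2, with the addition's left and right components stored in order.
tree = Node("assign", "=", line=1, children=[
    Node("identifier", "x", type_="int"),
    Node("binary_op", "+", children=[Node("identifier", "a"), Node("literal", "2")]),
])
print(unparse(tree.children[1]))   # (a + 2)
```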
passage: HealthLeaders Media, May 27, 2009. Funding for primary care varies a great deal between different countries: general taxation, national insurance systems, private insurance and direct payment by patients are all used, sometimes in combination. The payment system for primary care physicians also varies. Some are paid by fee-for-service and some by capitation for a list of registered patients. ## Primary care by region ### Canada In Canada, access to primary and other healthcare services is guaranteed for all citizens through the Canada Health Act. ### Hong Kong The Hong Kong Special Administrative Region Government's 2016 Policy Address recommended strengthening the development of primary care and establishing an electronic database of the "Primary Care Guide" to facilitate public consultation. The Department of Health developed reference profiles for preventive care for some chronic diseases. In 2017, the policy address recommended the establishment of a primary health care development steering committee to comprehensively review the planning of primary health care services and provide community medical services through regional medical and social cooperation. The 2018 policy address proposed the establishment of the first district health centre and promoted the establishment of district centres in other districts. The Hong Kong Food and Health Bureau established the Primary Healthcare Office on March 1, 2019, to monitor and supervise the development of primary health care services. In the process of developing the district health centers, regional health stations will be set up in various districts as transitional units providing the public with primary care services. ### Nigeria In Nigeria, healthcare is a concurrent responsibility of three tiers of government.
https://en.wikipedia.org/wiki/Primary_care
passage: In particular, one can ask whether the Hell–Nešetřil theorem can be extended to directed graphs. By the above theorem, this is equivalent to the Feder–Vardi conjecture (aka CSP conjecture, dichotomy conjecture) on CSP dichotomy, which states that for every constraint language Γ, CSP(Γ) is NP-complete or in P. This conjecture was proved in 2017 independently by Dmitry Zhuk and Andrei Bulatov, leading to the following corollary: Corollary (Bulatov 2017; Zhuk 2017): The H-coloring problem on directed graphs, for a fixed H, is either in P or NP-complete. ### Homomorphisms from a fixed family of graphs The homomorphism problem with a single fixed graph G on the left side of input instances can be solved by brute force in time $$ |V(H)|^{O(|V(G)|)} $$, so polynomial in the size of the input graph H. In other words, the problem is trivially in P for graphs G of bounded size. The interesting question is then what other properties of G, beside size, make polynomial algorithms possible. The crucial property turns out to be treewidth, a measure of how tree-like the graph is. For a graph G of treewidth at most k and a graph H, the homomorphism problem can be solved in time $$ |V(H)|^{O(k)} $$ with a standard dynamic programming approach. In fact, it is enough to assume that the core of G has treewidth at most k.
https://en.wikipedia.org/wiki/Graph_homomorphism
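A brute-force sketch of the $$ |V(H)|^{O(|V(G)|)} $$ bound above: try every map from V(G) to V(H) and check that edges are preserved. The graph representation and example graphs are illustrative choices.

```python
from itertools import product

def has_homomorphism(G_vertices, G_edges, H_vertices, H_edges):
    """Return True if some map V(G) -> V(H) sends every edge of G to an edge of H.
    The number of candidate maps is |V(H)|**|V(G)|, polynomial in H once G is fixed."""
    H_edges = set(H_edges) | {(v, u) for (u, v) in H_edges}   # treat edges as undirected
    for images in product(list(H_vertices), repeat=len(list(G_vertices))):
        f = dict(zip(G_vertices, images))
        if all((f[u], f[v]) in H_edges for (u, v) in G_edges):
            return True
    return False

# A 5-cycle maps homomorphically onto a triangle (it is 3-colourable) ...
C5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
K3 = [(0, 1), (1, 2), (0, 2)]
print(has_homomorphism(range(5), C5, range(3), K3))   # True
# ... but a triangle does not map onto a single edge, since that would 2-colour it.
K2 = [(0, 1)]
print(has_homomorphism(range(3), K3, range(2), K2))   # False
```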
passage: ## Modifications - At the extreme values of the coefficient $$ b $$, BM25 turns into ranking functions known as BM11 (for $$ b=1 $$ ) and BM15 (for $$ b=0 $$ ). - BM25F (or the BM25 model with Extension to Multiple Weighted Fields) is a modification of BM25 in which the document is considered to be composed from several fields (such as headlines, main text, anchor text) with possibly different degrees of importance, term relevance saturation and length normalization. BM25F defines each type of field as a stream, applying a per-stream weighting to scale each stream against the calculated score. - BM25+ is an extension of BM25. BM25+ was developed to address one deficiency of the standard BM25 in which the component of term frequency normalization by document length is not properly lower-bounded; as a result of this deficiency, long documents which do match the query term can often be scored unfairly by BM25 as having a similar relevancy to shorter documents that do not contain the query term at all.
https://en.wikipedia.org/wiki/Okapi_BM25
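A sketch to make the role of $$ b $$ concrete, using one common form of the BM25 term weight; the parameter defaults (k1 = 1.2, b = 0.75) and the corpus numbers are illustrative assumptions. Setting b = 1 gives BM11-style full length normalization and b = 0 gives BM15 (no length normalization).

```python
import math

def bm25_term_score(tf, doc_len, avg_doc_len, n_docs, doc_freq, k1=1.2, b=0.75):
    """One common BM25 per-term score: idf * tf*(k1+1) / (tf + k1*(1 - b + b*dl/avgdl))."""
    idf = math.log((n_docs - doc_freq + 0.5) / (doc_freq + 0.5) + 1.0)
    norm = k1 * (1.0 - b + b * doc_len / avg_doc_len)
    return idf * tf * (k1 + 1.0) / (tf + norm)

# The same term frequency scores lower in a document twice the average length when
# b > 0, and identically when b = 0 (no length normalization at all).
for b in (0.0, 0.75, 1.0):
    short = bm25_term_score(tf=3, doc_len=100, avg_doc_len=200, n_docs=10_000, doc_freq=50, b=b)
    long_ = bm25_term_score(tf=3, doc_len=400, avg_doc_len=200, n_docs=10_000, doc_freq=50, b=b)
    print(f"b={b}: short={short:.3f}  long={long_:.3f}")
```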
passage: Depending on the operating system, utility and remote file system, a file transfer might silently strip data streams. A safe way of copying or moving files is to use the BackupRead and BackupWrite system calls, which allow programs to enumerate streams, to verify whether each stream should be written to the destination volume and to knowingly skip unwanted streams. ### Resident vs. non-resident attributes To optimize the storage and reduce the I/O overhead for the very common case of attributes with very small associated value, NTFS prefers to place the value within the attribute itself (if the size of the attribute does not then exceed the maximum size of an MFT record), instead of using the MFT record space to list clusters containing the data; in that case, the attribute will not store the data directly but will just store an allocation map (in the form of data runs) pointing to the actual data stored elsewhere on the volume. When the value can be accessed directly from within the attribute, it is called "resident data" (by computer forensics workers). The amount of data that fits is highly dependent on the file's characteristics, but 700 to 800 bytes is common in single-stream files with non-lengthy filenames and no ACLs. - Some attributes (such as the preferred filename, the basic file attributes) cannot be made non-resident.
https://en.wikipedia.org/wiki/NTFS
passage: Using Euler's formula and taking only the real part of the solution, it is the same cosine solution as for the 1 DOF system. The exponential solution is only used because it is easier to manipulate mathematically. The equation then becomes: $$ \begin{bmatrix}-\omega^2 \begin{bmatrix} M \end{bmatrix} + \begin{bmatrix} K \end{bmatrix} \end{bmatrix} \begin{Bmatrix}X\end{Bmatrix}e^{i\omega t}=0. $$ Since $$ e^{i\omega t} $$ cannot equal zero, the equation reduces to the following. $$ \begin{bmatrix}\begin{bmatrix}K\end{bmatrix}-\omega^2 \begin{bmatrix} M \end{bmatrix} \end{bmatrix} \begin{Bmatrix} X \end{Bmatrix}=0. $$
https://en.wikipedia.org/wiki/Vibration
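A small numerical sketch of the eigenvalue problem above for a 2-DOF system; the mass and stiffness values are made up for illustration. The equation $$ [K - \omega^2 M]\{X\} = 0 $$ is the generalized eigenvalue problem $$ K X = \omega^2 M X $$, so the eigenvalues are the squared natural frequencies and the eigenvectors are the mode shapes.

```python
import numpy as np
from scipy.linalg import eigh

M = np.array([[2.0, 0.0],
              [0.0, 1.0]])                 # mass matrix (kg)
K = np.array([[ 3000.0, -1000.0],
              [-1000.0,  1000.0]])         # stiffness matrix (N/m)

w_squared, modes = eigh(K, M)              # solves K x = lambda M x
natural_freqs = np.sqrt(w_squared)         # natural frequencies in rad/s
print("natural frequencies (rad/s):", np.round(natural_freqs, 3))
print("mode shapes (columns):")
print(np.round(modes, 3))
```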
passage: Aureomycin was the best known of the second generation. Lithium was discovered in the 19th century for nervous disorders and its possible mood-stabilizing or prophylactic effect; it was cheap and easily produced. As lithium fell out of favor in France, valpromide came into play. This anticonvulsant was the origin of the drug that eventually created the mood stabilizer category. Valpromide had distinct psychotropic effects that were of benefit in both the treatment of acute manic states and in the maintenance treatment of manic-depressive illness. Psychotropics can either be sedative or stimulant; sedatives aim at damping down the extremes of behavior. Stimulants aim at restoring normality by increasing tone. Soon arose the notion of a tranquilizer which was quite different from any sedative or stimulant. The term tranquilizer took over the notions of sedatives and became the dominant term in the West through the 1980s. In Japan, during this time, the term tranquilizer produced the notion of a psyche-stabilizer and the term mood stabilizer vanished. Premarin (conjugated estrogens, introduced in 1942) and Prempro (a combination estrogen-progestin pill, introduced in 1995) dominated hormone replacement therapy (HRT) regimens during the 1990s. Though not designed to cure any disease, HRT is prescribed to improve quality of life and as a preventative measure, such as treating post-menopausal symptoms. In the 1960s and early 1970s, more physicians began to prescribe estrogen for their female patients.
https://en.wikipedia.org/wiki/Medication
passage: This is in fact the first printed version of Green's theorem in the form appearing in modern textbooks. George Green, An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism (Nottingham, England: T. Wheelhouse, 1828). Green did not actually derive the form of "Green's theorem" which appears in this article; rather, he derived a form of the "divergence theorem", which appears on pages 10–12 of his Essay. In 1846, the form of "Green's theorem" which appears in this article was first published, without proof, in an article by Augustin Cauchy: A. Cauchy (1846) "Sur les intégrales qui s'étendent à tous les points d'une courbe fermée" (On integrals that extend over all of the points of a closed curve), Comptes rendus, 23: 251–255. (The equation appears at the bottom of page 254, where (S) denotes the line integral of a function k along the curve s that encloses the area S.) A proof of the theorem was finally provided in 1851 by Bernhard Riemann in his inaugural dissertation: Bernhard Riemann (1851) Grundlagen für eine allgemeine Theorie der Functionen einer veränderlichen complexen Grösse (Basis for a general theory of functions of a variable complex quantity), (Göttingen, (Germany): Adalbert Rente, 1867); see pages 8–9.
https://en.wikipedia.org/wiki/Green%27s_theorem
passage: Another compounding factor was that fossils of apparently marine animals were found in parts of the world that were well above sea-level. Some suggested that these fossils had accumulated in horizontal layers under the sea and that subsequent tectonic activity had displaced them from their original positions. As these observations were made over time, it was eventually understood that fossils could be used to make inferences about the history of life from their presence or absence in particular areas over time. The fossil record is the main tool used by scientists to study the history of life and assess the diversification of life over time. Very little is known about the origins of life and the oldest life forms, and this is likely a result of the poor quality of fossil preservation in older rocks. Older rocks preserve less information on average than those deposited closer to the present, and this effect is compounded across the billions of years that life is believed to have existed. Most fossils are made up of the hard parts of an organism that have been recrystallized by minerals, preserving bone, wood, or shells in a material that can be harder or denser than in life. While the hard parts are the most likely to fossilize, soft tissues can also leave impressions on sediment before they fully decompose, allowing non-mineralized parts of an organism's anatomy to be preserved. Even more rarely, a complete organism can be encased in sediment before decomposition, preserving it completely. While most fossils are body fossils (made of the actual body parts of a dead organism), some fossils can also consist of traces of the behaviour or life of organisms.
https://en.wikipedia.org/wiki/Paleontology
passage: But since $$ f \in L^1(\mathbb R^n) $$ , fact 5 says that $$ \lim_{\varepsilon\to 0}(\varphi_{\varepsilon} * f) (x) = f(x). $$ Putting together the above we have shown that $$ \int_{\mathbb{R}^n} e^{2\pi i x\cdot\xi}(\mathcal{F}f)(\xi)\,d\xi = f(x). \qquad\square $$ ## Fourier integral theorem The theorem can be restated as $$ f(x)=\int_{\mathbb{R}} \int_{\mathbb{R}} e^{2\pi i(x-y)\cdot\xi} \, f(y)\,dy\,d\xi. $$ By taking the real part of each side of the above we obtain $$ f(x)=\int_{\mathbb{R}} \int_{\mathbb{R}} \cos (2\pi (x-y)\cdot\xi) \, f(y)\,dy\,d\xi. $$
https://en.wikipedia.org/wiki/Fourier_inversion_theorem
passage: Examples of such formulas encountered in practice can be very large, for example with 100,000 variables and 1,000,000 conjuncts. A formula in CNF can be converted into an equisatisfiable formula in "kCNF" (for k≥3) by replacing each conjunct with more than k variables $$ X_1 \vee \ldots \vee X_k \vee \ldots \vee X_n $$ by two conjuncts $$ X_1 \vee \ldots \vee X_{k-1} \vee Z $$ and $$ \neg Z \vee X_k \lor \ldots \vee X_n $$ with a new variable, and repeating as often as necessary. ## First-order logic In first-order logic, conjunctive normal form can be taken further to yield the clausal normal form of a logical formula, which can then be used to perform first-order resolution. In resolution-based automated theorem-proving, a CNF formula is commonly represented as a set of sets. See below for an example. ### Converting from first-order logic To convert first-order logic to CNF: 1. Convert to negation normal form. 1. Eliminate implications and equivalences: repeatedly replace $$ P \rightarrow Q $$ with $$ \lnot P \lor Q $$ ; replace $$ P \leftrightarrow Q $$ with $$ (P \lor \lnot Q) \land (\lnot P \lor Q) $$ .
https://en.wikipedia.org/wiki/Conjunctive_normal_form
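A sketch of the clause-splitting step described above, using a simple representation in which clauses are lists of literal strings, "~" marks negation, and Z1, Z2, ... name the fresh variables (all of these representational choices are mine, not from the article).

```python
from itertools import count

def to_k_cnf(clauses, k=3):
    """Split any clause longer than k into X1 v ... v X_{k-1} v Z and ~Z v Xk v ... v Xn,
    introducing a fresh variable Z each time and repeating until all clauses have <= k literals."""
    fresh = count(1)
    out = []
    work = [list(c) for c in clauses]
    while work:
        clause = work.pop()
        if len(clause) <= k:
            out.append(clause)
            continue
        z = f"Z{next(fresh)}"
        out.append(clause[:k - 1] + [z])            # X1 v ... v X_{k-1} v Z
        work.append(["~" + z] + clause[k - 1:])     # ~Z v Xk v ... v Xn (may be split again)
    return out

print(to_k_cnf([["A", "B", "C", "D", "E"]], k=3))
# [['A', 'B', 'Z1'], ['~Z1', 'C', 'Z2'], ['~Z2', 'D', 'E']]
```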
passage: With the emergence of biochemistry, classifications of organisms are now often based on DNA sequence data or a combination of DNA and morphology. Many systematists contend that only monophyletic taxa should be recognized as named groups. The degree to which classification depends on inferred evolutionary history differs depending on the school of taxonomy: phenetics ignores phylogenetic speculation altogether, trying to represent the similarity between organisms instead; cladistics (phylogenetic systematics) tries to reflect phylogeny in its classifications by only recognizing groups based on shared, derived characters (synapomorphies); evolutionary taxonomy tries to take into account both the branching pattern and "degree of difference" to find a compromise between inferred patterns of common ancestry and evolutionary distinctness. ## Inference of a phylogenetic tree Usual methods of phylogenetic inference involve computational approaches implementing an optimality criterion and methods of parsimony, maximum likelihood (ML), and MCMC-based Bayesian inference. All these depend upon an implicit or explicit mathematical model describing the relative probabilities of character state transformation within and among the characters observed. Phenetics, popular in the mid-20th century but now largely obsolete, used distance matrix-based methods to construct trees based on overall similarity in morphology or similar observable traits, which was often assumed to approximate phylogenetic relationships. Neighbor Joining is a phenetic method that is often used for building similarity trees for DNA barcodes.
https://en.wikipedia.org/wiki/Phylogenetics
passage: The Dirac $$ \delta $$ -function stands for the conservation of energy. In addition, the term $$ \langle k|H'|k' \rangle $$ , generally referred to as the matrix element, mathematically represents an inner product of the initial and final wave functions of the carrier: $$ \langle k|H'|k' \rangle = \frac{1}{Vol} \int_\mathrm{Vol} \psi_k (r) H' \psi^*_{k'} (r) \, dr $$ In a crystal lattice, the wavefunctions $$ \psi_k (r) $$ and $$ \psi_{k'} (r) $$ are simply Bloch waves. When it is possible, analytic expressions of the matrix elements are commonly found by Fourier expanding the Hamiltonian H', as in the case of impurity scattering or acoustic phonon scattering. In the important case of a transition from an energy state E to an energy state E' due to a phonon of wave vector q and frequency $$ \omega_q $$ , the energy and momentum change is: $$ E' - E = E(k') - E(k) \pm \hbar \omega_q \, $$ $$ k' - k \pm q = \begin{cases} 0 & \text{Normal process} \\ R & \text{Umklapp-process} \end{cases} $$ where R is a reciprocal lattice vector.
https://en.wikipedia.org/wiki/Monte_Carlo_methods_for_electron_transport
passage: Contours Contours are the class of curves on which we define contour integration. A contour is a directed curve which is made up of a finite sequence of directed smooth curves whose endpoints are matched to give a single direction. This requires that the sequence of curves $$ \gamma_1,\dots,\gamma_n $$ be such that the terminal point of $$ \gamma_i $$ coincides with the initial point of $$ \gamma_{i+1} $$ for all $$ i $$ such that $$ 1\leq i<n $$ . This includes all directed smooth curves. Also, a single point in the complex plane is considered a contour. The symbol $$ + $$ is often used to denote the piecing of curves together to form a new curve. Thus we could write a contour $$ \Gamma $$ that is made up of $$ n $$ curves as $$ \Gamma = \gamma_1 + \gamma_2 + \cdots + \gamma_n. $$ ## Contour integrals The contour integral of a complex function $$ f:\C\to\C $$ is a generalization of the integral for real-valued functions. ### For continuous functions in the complex plane, the contour integral can be defined in analogy to the line integral by first defining the integral along a directed smooth curve in terms of an integral over a real valued parameter.
https://en.wikipedia.org/wiki/Contour_integration
passage: In quantum mechanics, it often occurs that little or no information about the inner product of two arbitrary (state) kets is present, while it is still possible to say something about the expansion coefficients and of those vectors with respect to a specific (orthonormalized) basis. In this case, it is particularly useful to insert the unit operator into the bracket one time or more. For more information, see Resolution of the identity, $$ {\mathbb I} = \int\! dx~ | x \rangle \langle x |= \int\! dp ~| p \rangle \langle p |, $$ where $$ |p\rangle = \int dx \frac{e^{ixp / \hbar} |x\rangle}{\sqrt{2\pi\hbar}}. $$ Since , plane waves follow, $$ \langle x | p \rangle = \frac{e^{ixp / \hbar}}{\sqrt{2\pi\hbar}}. $$ In his book (1958), Ch. III.20, Dirac defines the standard ket which, up to a normalization, is the translationally invariant momentum eigenstate $$ |\varpi\rangle=\lim_{p\to 0} |p\rangle $$ in the momentum representation, i.e., $$ \hat{p}|\varpi\rangle=0 $$ .
https://en.wikipedia.org/wiki/Bra%E2%80%93ket_notation
passage: This can be corrected by dividing by the square root of the right hand side of the equation above, when $$ n=m $$ . Although it does not yield an orthonormal basis, an alternative normalization is sometimes preferred due to its simplicity: $$ P_n^{(\alpha, \beta)} (1) = {n+\alpha\choose n}. $$ ### Symmetry relation The polynomials have the symmetry relation $$ P_n^{(\alpha, \beta)} (-z) = (-1)^n P_n^{(\beta, \alpha)} (z); $$ thus the other terminal value is $$ P_n^{(\alpha, \beta)} (-1) = (-1)^n { n+\beta\choose n}. $$ ### Derivatives The $$ k $$ th derivative of the explicit expression leads to $$ \frac{d^k}{dz^k} P_n^{(\alpha,\beta)} (z) = \frac{\Gamma (\alpha+\beta+n+1+k)}{2^k \Gamma (\alpha+\beta+n+1)} P_{n-k}^{(\alpha+k, \beta+k)} (z). $$
https://en.wikipedia.org/wiki/Jacobi_polynomials
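A quick numerical check of the symmetry relation and terminal values above using SciPy's Jacobi polynomials; the particular n, alpha, beta and sample points are arbitrary choices for illustration.

```python
import numpy as np
from scipy.special import eval_jacobi, binom

n, alpha, beta = 4, 1.5, 0.5
z = np.linspace(-1, 1, 5)

# Symmetry relation: P_n^(a,b)(-z) = (-1)^n P_n^(b,a)(z)
lhs = eval_jacobi(n, alpha, beta, -z)
rhs = (-1) ** n * eval_jacobi(n, beta, alpha, z)
print(np.allclose(lhs, rhs))                                                      # True

# Terminal values: P_n^(a,b)(1) = C(n+a, n) and P_n^(a,b)(-1) = (-1)^n C(n+b, n)
print(np.isclose(eval_jacobi(n, alpha, beta, 1.0), binom(n + alpha, n)))          # True
print(np.isclose(eval_jacobi(n, alpha, beta, -1.0), (-1) ** n * binom(n + beta, n)))  # True
```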
passage: Let be a curve of class and let and denote the circular points at infinity. Draw the tangents to through each of and . There are two sets of lines which will have points of intersection, with exceptions in some cases due to singularities, etc. These points of intersection are the defined to be the foci of . In other words, a point is a focus if both and are tangent to . When is a real curve, only the intersections of conjugate pairs are real, so there are in a real foci and imaginary foci. When is a conic, the real foci defined this way are exactly the foci which can be used in the geometric construction of . ## Confocal curves Let be given as foci of a curve of class . Let be the product of the tangential equations of these points and the product of the tangential equations of the circular points at infinity. Then all the lines which are common tangents to both and are tangent to . So, by the AF+BG theorem, the tangential equation of has the form . Since has class , must be a constant and but have degree less than or equal to . The case can be eliminated as degenerate, so the tangential equation of can be written as where is an arbitrary polynomial of degree . For example, let , , and . The tangential equations are $$ \begin{align} X + 1 &= 0 \\ X - 1 &= 0 \end{align} $$ so .
https://en.wikipedia.org/wiki/Focus_%28geometry%29
passage: However, trial division is still used, with a smaller limit than the square root on the divisor size, to quickly discover composite numbers with small factors, before using more complicated methods on the numbers that pass this filter. ### Sieves Before computers, mathematical tables listing all of the primes or prime factorizations up to a given limit were commonly printed. The oldest known method for generating a list of primes is called the sieve of Eratosthenes. The animation shows an optimized variant of this method. Another more asymptotically efficient sieving method for the same problem is the sieve of Atkin. In advanced mathematics, sieve theory applies similar methods to other problems. ### Primality testing versus primality proving Some of the fastest modern tests for whether an arbitrary given number is prime are probabilistic (or Monte Carlo) algorithms, meaning that they have a small random chance of producing an incorrect answer. For instance the Solovay–Strassen primality test on a given number $$ p $$ chooses a number $$ a $$ randomly from 2 through $$ p-2 $$ and uses modular exponentiation to check whether $$ a^{(p-1)/2}\pm 1 $$ is divisible by $$ p $$. If so, it answers yes and otherwise it answers no. If $$ p $$ really is prime, it will always answer yes, but if $$ p $$ is composite then it answers yes with probability at most 1/2 and no with probability at least 1/2. If this test is repeated $$ n $$ times on the same number, the probability that a composite number could pass the test every time is at most $$ 1/2^n $$.
https://en.wikipedia.org/wiki/Prime_number
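A sketch of the probabilistic test as described above; this is the simplified Euler-style check from the passage (the full Solovay–Strassen test additionally compares the result against the Jacobi symbol), and the number of rounds and example inputs are arbitrary.

```python
import random

def probably_prime(p: int, rounds: int = 20) -> bool:
    """Repeatedly pick a random a in [2, p-2] and test whether a^((p-1)/2) is +/-1 mod p."""
    if p < 4:
        return p in (2, 3)
    if p % 2 == 0:
        return False
    for _ in range(rounds):
        a = random.randint(2, p - 2)
        r = pow(a, (p - 1) // 2, p)          # modular exponentiation
        if r != 1 and r != p - 1:            # a^((p-1)/2) +/- 1 is not divisible by p
            return False                     # definitely composite
    return True                              # prime with high probability

print(probably_prime(2_147_483_647))   # True  (2^31 - 1, a Mersenne prime)
print(probably_prime(2_147_483_645))   # False (divisible by 5)
```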
passage: $$ \text{s} = \int_{a}^{b} \sqrt{\mathrm{d}x^2+\mathrm{d}y^2} = \int_{a}^{b} \sqrt{1+y'^2}\,\mathrm{d}x, $$ the integrand function being $$ L(x,y, y') = \sqrt{1+y'^2} $$ . The partial derivatives of L are: $$ \frac{\partial L(x, y, y')}{\partial y'} = \frac{y'}{\sqrt{1 + y'^2}} \quad \text{and} \quad \frac{\partial L(x, y, y')}{\partial y} = 0. $$ By substituting these into the Euler–Lagrange equation, we obtain $$ \begin{align} \frac{\mathrm{d}}{\mathrm{d}x} \frac{y'(x)}{\sqrt{1 + (y'(x))^2}} &= 0 \\ \frac{y'(x)}{\sqrt{1 + (y'(x))^2}} &= C = \text{constant} \\ \Rightarrow y'(x)&= \frac{C}{\sqrt{1-C^2}} =: A \\ \Rightarrow y(x) &= Ax + B \end{align} $$ that is, the function must have a constant first derivative, and thus its graph is a straight line.
https://en.wikipedia.org/wiki/Euler%E2%80%93Lagrange_equation
passage: ## History The idea of an isoperimetric function for a finitely presented group goes back to the work of Max Dehn in 1910s. Dehn proved that the word problem for the standard presentation of the fundamental group of a closed oriented surface of genus at least two is solvable by what is now called Dehn's algorithm. A direct consequence of this fact is that for this presentation the Dehn function satisfies Dehn(n) ≤ n. This result was extended in 1960s by Martin Greendlinger to finitely presented groups satisfying the C'(1/6) small cancellation condition. The formal notion of an isoperimetric function and a Dehn function as it is used today appeared in late 1980s – early 1990s together with the introduction and development of the theory of word-hyperbolic groups. In his 1987 monograph "Hyperbolic groups" Gromov proved that a finitely presented group is word-hyperbolic if and only if it satisfies a linear isoperimetric inequality, that is, if and only if the Dehn function of this group is equivalent to the function f(n) = n. Gromov's proof was in large part informed by analogy with filling area functions for compact Riemannian manifolds where the area of a minimal surface bounding a null-homotopic closed curve is bounded in terms of the length of that curve. The study of isoperimetric and Dehn functions quickly developed into a separate major theme in geometric group theory, especially since the growth types of these functions are natural quasi-isometry invariants of finitely presented groups.
https://en.wikipedia.org/wiki/Dehn_function
passage: They are probabilistically complete, meaning the probability that they will produce a solution approaches 1 as more time is spent. However, they cannot determine if no solution exists. Given basic visibility conditions on Cfree, it has been proven that as the number of configurations N grows higher, the probability that the above algorithm finds a solution approaches 1 exponentially. Visibility is not explicitly dependent on the dimension of C; it is possible to have a high-dimensional space with "good" visibility or a low-dimensional space with "poor" visibility. The experimental success of sample-based methods suggests that most commonly seen spaces have good visibility. There are many variants of this basic scheme: - It is typically much faster to only test segments between nearby pairs of milestones, rather than all pairs. - Nonuniform sampling distributions attempt to place more milestones in areas that improve the connectivity of the roadmap. - Quasirandom samples typically produce a better covering of configuration space than pseudorandom ones, though some recent work argues that the effect of the source of randomness is minimal compared to the effect of the sampling distribution. - Employs local-sampling by performing a directional Markov chain Monte Carlo random walk with some local proposal distribution. - It is possible to substantially reduce the number of milestones needed to solve a given problem by allowing curved eye sights (for example by crawling on the obstacles that block the way between two milestones). - If only one or a few planning queries are needed, it is not always necessary to construct a roadmap of the entire space.
https://en.wikipedia.org/wiki/Motion_planning
passage: Note that all irreducible representations belonging to the same isotype appear with a multiplicity equal to $$ \dim (\text{Hom}_G(V_\eta,V_I))=\langle V_\eta,V_I \rangle_G. $$ Let $$ (\rho, V_\rho) $$ be a representation of $$ G, $$ then there exists a canonical isomorphism $$ T: \text{Hom}_G(V_\rho, I^G_H(\eta))\to \text{Hom}_H(V_\rho|_H, V_\eta). $$ The Frobenius reciprocity transfers, together with the modified definitions of the inner product and of the bilinear form, to compact groups. The theorem now holds for square integrable functions on $$ G $$ instead of class functions, but the subgroup $$ H $$ must be closed. ### The Peter-Weyl Theorem Another important result in the representation theory of compact groups is the Peter-Weyl Theorem. It is usually presented and proven in harmonic analysis, as it represents one of its central and fundamental statements. The Peter-Weyl Theorem. Let $$ G $$ be a compact group.
https://en.wikipedia.org/wiki/Representation_theory_of_finite_groups
passage: Viable inbred offspring are also likely to be afflicted with physical deformities and genetically inherited diseases. Studies have confirmed an increase in several genetic disorders due to inbreeding such as blindness, hearing loss, neonatal diabetes, limb malformations, disorders of sex development, schizophrenia and several others. Moreover, there is an increased risk for congenital heart disease depending on the inbreeding coefficient (see coefficient of inbreeding) of the offspring, with significant risk accompanied by an F = .125 or higher. ### Prevalence The general negative outlook and eschewal of inbreeding that is prevalent in the Western world today has roots from over 2000 years ago. Specifically, written documents such as the Bible illustrate that there have been laws and social customs that have called for the abstention from inbreeding. Along with cultural taboos, parental education and awareness of inbreeding consequences have played large roles in minimizing inbreeding frequencies in areas like Europe. That being so, there are less urbanized and less populated regions across the world that have shown continuity in the practice of inbreeding. The continuity of inbreeding is often either by choice or unavoidably due to the limitations of the geographical area. When by choice, the rate of consanguinity is highly dependent on religion and culture. In the Western world, some Anabaptist groups are highly inbred because they originate from small founder populations that have bred as a closed population.
https://en.wikipedia.org/wiki/Inbreeding
passage: Actual impedances and admittances must be normalised before using them on a Smith chart. Once the result is obtained it may be de-normalised to obtain the actual result. ### The normalised impedance Smith chart Using transmission-line theory, if a transmission line is terminated in an impedance ( $$ Z_\text{T}\, $$ ) which differs from its characteristic impedance ( $$ Z_0\, $$ ), a standing wave will be formed on the line comprising the resultant of both the incident or forward ( $$ V_\text{F}\, $$ ) and the reflected or reversed ( $$ V_\text{R}\, $$ ) waves.
https://en.wikipedia.org/wiki/Smith_chart
passage: ### Calculations of moments The moment-generating function is so called because if it exists on an open interval around $$ t=0 $$, then it is the exponential generating function of the moments of the probability distribution: $$ m_n = \operatorname{E}\left[ X^n \right] = M_X^{(n)}(0) = \left. \frac{d^n M_X}{dt^n}\right|_{t=0}. $$ That is, with $$ n $$ being a nonnegative integer, the $$ n $$-th moment about 0 is the $$ n $$-th derivative of the moment-generating function, evaluated at $$ t=0 $$. ## Other properties Jensen's inequality provides a simple lower bound on the moment-generating function: $$ M_X(t) \geq e^{\mu t}, $$ where $$ \mu $$ is the mean of $$ X $$. The moment-generating function can be used in conjunction with Markov's inequality to bound the upper tail of a real random variable $$ X $$. This statement is also called the Chernoff bound.
https://en.wikipedia.org/wiki/Moment-generating_function
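A small symbolic sketch of the "moments from derivatives" property and the Jensen lower bound, using the exponential distribution as an illustrative example (the distribution, its MGF $$ \lambda/(\lambda - t) $$ for $$ t < \lambda $$, and the numerical check values are my choices, not from the article).

```python
import sympy as sp

t, lam = sp.symbols('t lam', positive=True)
M = lam / (lam - t)                        # moment-generating function of Exp(lam), t < lam

# n-th moment = n-th derivative of the MGF evaluated at t = 0; for Exp(lam) this is n!/lam**n.
for n in range(1, 4):
    moment = sp.simplify(sp.diff(M, t, n).subs(t, 0))
    print(n, moment)                       # 1/lam, 2/lam**2, 6/lam**3

# Jensen's lower bound M_X(t) >= exp(mu*t) with mu = E[X] = 1/lam, checked at lam = 2, t = 1.
vals = {lam: 2, t: 1}
print(float(M.subs(vals)) >= float(sp.exp(t / lam).subs(vals)))   # True
```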
passage: ### France Léandre Pourcelot, a researcher and teacher at INSA (Institut National des Sciences Appliquées), Lyon, co-published a report in 1965 at the Académie des sciences, "Effet Doppler et mesure du débit sanguin" ("Doppler effect and measure of the blood flow"), the basis of his design of a Doppler flow meter in 1967. ### Scotland Parallel developments in Glasgow, Scotland by Professor Ian Donald and colleagues at the Glasgow Royal Maternity Hospital (GRMH) led to the first diagnostic applications of the technique. Donald was an obstetrician with a self-confessed "childish interest in machines, electronic and otherwise", who, having treated the wife of one of the company's directors, was invited to visit the Research Department of boilermakers Babcock & Wilcox at Renfrew. He adapted their industrial ultrasound equipment to conduct experiments on various anatomical specimens and assess their ultrasonic characteristics. Together with the medical physicist Tom Brown. and fellow obstetrician John MacVicar, Donald refined the equipment to enable differentiation of pathology in live volunteer patients. These findings were reported in The Lancet on 7 June 1958 as "Investigation of Abdominal Masses by Pulsed Ultrasound" – possibly one of the most important papers published in the field of diagnostic medical imaging. At GRMH, Professor Donald and James Willocks then refined their techniques to obstetric applications including fetal head measurement to assess the size and growth of the fetus. With the opening of the new Queen Mother's Hospital in Yorkhill in 1964, it became possible to improve these methods even further.
https://en.wikipedia.org/wiki/Medical_ultrasound
passage: When a fluid is flowing in the x-direction parallel to a solid surface, the fluid has x-directed momentum, and its concentration is υxρ. By random diffusion of molecules there is an exchange of molecules in the z-direction. Hence the x-directed momentum has been transferred in the z-direction from the faster- to the slower-moving layer. The equation for momentum transfer is Newton's law of viscosity written as follows: $$ \tau_{zx}=-\rho \nu \frac{\partial \upsilon_x }{\partial z} $$ where τzx is the flux of x-directed momentum in the z-direction, ν is μ/ρ, the momentum diffusivity, z is the distance of transport or diffusion, ρ is the density, and μ is the dynamic viscosity. Newton's law of viscosity is the simplest relationship between the flux of momentum and the velocity gradient. It may be useful to note that this is an unconventional use of the symbol τzx; the indices are reversed as compared with standard usage in solid mechanics, and the sign is reversed. ## Mass transfer When a system contains two or more components whose concentration vary from point to point, there is a natural tendency for mass to be transferred, minimizing any concentration difference within the system. Mass transfer in a system is governed by Fick's first law: 'Diffusion flux from higher concentration to lower concentration is proportional to the gradient of the concentration of the substance and the diffusivity of the substance in the medium.'
https://en.wikipedia.org/wiki/Transport_phenomena
passage: In a pie chart, the arc length of each slice (and consequently its central angle and area) is proportional to the quantity it represents.
- For example, the proportion of English native speakers worldwide.

Line chart (x position, y position, symbol/glyph, color, size)
- Represents information as a series of data points called 'markers' connected by straight line segments.
- Similar to a scatter plot except that the measurement points are ordered (typically by their x-axis value) and joined with straight line segments.
- Often used to visualize a trend in data over intervals of time – a time series – thus the line is often drawn chronologically.

Semi-log or log-log (non-linear) charts (x position, y position, symbol/glyph, color, connections)
- Represents data as lines or series of points spanning large ranges on one or both axes.
- One or both axes are represented using a non-linear logarithmic scale.

Streamgraph (type of area chart) (width, color, time (flow))
- A type of stacked area chart that is displaced around a central axis, resulting in a flowing shape.
- Unlike a traditional stacked area chart in which the layers are stacked on top of an axis, in a streamgraph the layers are positioned to minimize their "wiggle".
- Streamgraphs display data with only positive values, and are not able to represent both negative and positive values.
https://en.wikipedia.org/wiki/Data_and_information_visualization
passage: This led to Darwin adopting some Lamarckian ideas in later editions of On the Origin of Species and his later biological works. Darwin's primary approach to heredity was to outline how it appeared to work (noticing that traits that were not expressed explicitly in the parent at the time of reproduction could be inherited, that certain traits could be sex-linked, etc.) rather than suggesting mechanisms. Darwin's initial model of heredity was adopted by, and then heavily modified by, his cousin Francis Galton, who laid the framework for the biometric school of heredity. Galton found no evidence to support the aspects of Darwin's pangenesis model, which relied on acquired traits. The inheritance of acquired traits was shown to have little basis in the 1880s when August Weismann cut the tails off many generations of mice and found that their offspring continued to develop tails. ## History Scientists in Antiquity had a variety of ideas about heredity: Theophrastus proposed that male flowers caused female flowers to ripen; Hippocrates speculated that "seeds" were produced by various body parts and transmitted to offspring at the time of conception; and Aristotle thought that male and female fluids mixed at conception. Aeschylus, in 458 BC, proposed the male as the parent, with the female as a "nurse for the young life sown within her". Ancient understandings of heredity transitioned to two debated doctrines in the 18th century.
https://en.wikipedia.org/wiki/Heredity
passage: Cooperation can occur willingly between individuals when both benefit directly as well. Cooperative breeding, where one individual cares for the offspring of another, occurs in several species, including wedge-capped capuchin monkeys. Cooperative behavior may also be enforced, where their failure to cooperate results in negative consequences. One of the best examples of this is worker policing, which occurs in social insect colonies. The cooperative pulling paradigm is a popular experimental design used to assess if and under which conditions animals cooperate. It involves two or more animals pulling rewards towards themselves via an apparatus they can not successfully operate alone. #### Between species Cooperation can occur between members of different species. For interspecific cooperation to be evolutionarily stable, it must benefit individuals in both species. Examples include pistol shrimp and goby fish, nitrogen fixing microbes and legumes, ants and aphids. In ants and aphids, aphids secrete a sugary liquid called honeydew, which ants eat. The ants provide protection to the aphids against predators, and, in some instances, raise the aphid eggs and larvae inside the ant colony. This behavior is analogous to human domestication. The genus of goby fish, Elacatinus also demonstrate cooperation by removing and feeding on ectoparasites of their clients. The species of wasp Polybia rejecta and ants Azteca chartifex show a cooperative behavior protecting one another's nests from predators.
https://en.wikipedia.org/wiki/Behavioral_ecology
passage: Such a tensor $$ A \in (K^{n})^{\otimes d} $$ defines polynomial maps $$ K^n \to K^n $$ and $$ \mathbb{P}^{n-1} \to \mathbb{P}^{n-1} $$ with coordinates: $$ \psi_i(x_1, \ldots, x_n) = \sum_{j_2=1}^n \sum_{j_3=1}^n \cdots \sum_{j_d = 1}^n a_{i j_2 j_3 \cdots j_d} x_{j_2} x_{j_3}\cdots x_{j_d} \;\; \mbox{for } i = 1, \ldots, n $$ Thus each of the $$ n $$ coordinates of $$ \psi $$ is a homogeneous polynomial $$ \psi_i $$ of degree $$ d-1 $$ in $$ x_1, \ldots, x_n $$. The eigenvectors of $$ A $$ are the solutions of the constraint: $$ \mbox{rank} \begin{pmatrix} x_1 & x_2 & \cdots & x_n \\ \psi_1(x) & \psi_2(x) & \cdots & \psi_n(x) \end{pmatrix} \leq 1 $$ and the eigenconfiguration is given by the variety of the $$ 2 \times 2 $$ minors of this matrix. ## Other examples of tensor products ### Topological tensor products Hilbert spaces generalize finite-dimensional vector spaces to arbitrary dimensions.
https://en.wikipedia.org/wiki/Tensor_product
passage: #### Isolated vesicles Producing membrane vesicles is one of the methods to investigate various membranes of the cell. After the living tissue is crushed into suspension, various membranes form tiny closed bubbles. Big fragments of the crushed cells can be discarded by low-speed centrifugation and later the fraction of the known origin (plasmalemma, tonoplast, etc.) can be isolated by precise high-speed centrifugation in the density gradient. Using osmotic shock, it is possible to temporarily open vesicles (filling them with the required solution) and then centrifuge them down again and resuspend in a different solution. Applying ionophores like valinomycin can create electrochemical gradients comparable to the gradients inside living cells. Vesicles are mainly used in two types of research: - To find and later isolate membrane receptors that specifically bind hormones and various other important substances. - To investigate transport of various ions or other substances across the membrane of the given type. While transport can be more easily investigated with patch clamp techniques, vesicles can also be isolated from objects for which a patch clamp is not applicable. ### Artificial vesicles Artificial vesicles are classified into three groups based on their size: small unilamellar liposomes/vesicles (SUVs) with a size range of 20–100 nm, large unilamellar liposomes/vesicles (LUVs) with a size range of 100–1000 nm and giant unilamellar liposomes/vesicles (GUVs) with a size range of 1–200 μm.
https://en.wikipedia.org/wiki/Vesicle_%28biology_and_chemistry%29
passage: In January 2017, the spiral was tweeted by Bernie Sanders and the U.S. National Park Service, both conveying how almost all recorded warmest years have been recent years. A 2022 study emphasized the importance of user-centered design in climate data visualizations, highlighting tools like the climate spiral as effective means to enhance public understanding of climate change. ### Extensions of the climate spiral concept In May 2016 United States Geological Survey scientist Jay Alder extended Hawkins' historical spiral to the year 2100, creating a predictive spiral graphic showing a possible future trajectory of global warming given the then-current carbon emission trend. Hawkins extended his two-dimensional spiral design to a three-dimensional version in which the graphic appears as an expanding cone-shaped structure. Hawkins' original climate spiral application (global average temperature change) has been expanded to represent other time-varying quantities such as atmospheric CO2 concentration, carbon budget, and arctic sea ice volume. ## Critical response The day of the climate spiral's first publication (9 May 2016), Brad Plumer wrote in Vox that the "mesmerizing" GIF was "one of the clearest visualizations of global warming" he had ever seen. The following day (10 May), Jason Samenow wrote in The Washington Post that the spiral graph was "the most compelling global warming visualization ever made", and, likewise, former Climate Central senior science writer Andrew Freedman wrote in Mashable that it was "the most compelling climate change visualization we’ve ever seen".
https://en.wikipedia.org/wiki/Climate_spiral
passage: For any term $$ t $$ , $$ (\mathbf{\lambda} x . x x x) t \rightarrow t t t $$ But consider what happens when we apply $$ \lambda x . x x x $$ to itself: $$ \begin{align} (\mathbf{\lambda} x . x x x) (\lambda x . x x x) & \rightarrow (\mathbf{\lambda} x . x x x) (\lambda x . x x x) (\lambda x . x x x) \\ & \rightarrow (\mathbf{\lambda} x . x x x) (\lambda x . x x x) (\lambda x . x x x) (\lambda x . x x x) \\ & \rightarrow (\mathbf{\lambda} x . x x x) (\lambda x . x x x) (\lambda x . x x x) (\lambda x . x x x) (\lambda x . x x x) \\ & \rightarrow \ \cdots\, \end{align} $$ Therefore, the term $$ (\lambda x . x x x) (\lambda x . x x x) $$ is not strongly normalizing. And this is the only reduction sequence, hence it is not weakly normalizing either. ### Typed lambda calculus Various systems of typed lambda calculus including the simply typed lambda calculus, Jean-Yves Girard's System F, and Thierry Coquand's calculus of constructions are strongly normalizing.
https://en.wikipedia.org/wiki/Normal_form_%28abstract_rewriting%29
passage: gives several more precise versions of this result, called zero density estimates, which bound the number of zeros in regions with imaginary part at most T and real part at least . ### Hardy–Littlewood conjectures In 1914 Godfrey Harold Hardy proved that $$ \zeta\left(\tfrac{1}{2}+it\right) $$ has infinitely many real zeros. The next two conjectures of Hardy and John Edensor Littlewood on the distance between real zeros of $$ \zeta\left(\tfrac{1}{2}+it\right) $$ and on the density of zeros of $$ \zeta\left(\tfrac{1}{2}+it\right) $$ on the interval $$ (T,T+H] $$ for sufficiently large $$ T > 0 $$ , and $$ H = T^{a + \varepsilon} $$ and with as small as possible value of $$ a > 0 $$ , where $$ \varepsilon > 0 $$ is an arbitrarily small number, open two new directions in the investigation of the Riemann zeta function: 1.
https://en.wikipedia.org/wiki/Riemann_hypothesis
passage: #### Natural language Natural languages such as English have words for several Boolean operations, in particular conjunction (and), disjunction (or), negation (not), and implication (implies). But not is synonymous with and not. When used to combine situational assertions such as "the block is on the table" and "cats drink milk", which naïvely are either true or false, these logical connectives often have the meaning of their logical counterparts. However, with descriptions of behavior such as "Jim walked through the door", one starts to notice differences such as failure of commutativity, for example, the conjunction of "Jim opened the door" with "Jim walked through the door" in that order is not equivalent to their conjunction in the other order, since and usually means and then in such cases. Questions can be similar: the order "Is the sky blue, and why is the sky blue?" makes more sense than the reverse order. Conjunctive commands about behavior are like behavioral assertions, as in get dressed and go to school. Disjunctive commands such as love me or leave me or fish or cut bait tend to be asymmetric via the implication that one alternative is less preferable. Conjoined nouns such as tea and milk generally describe aggregation as with set union while tea or milk is a choice. However, context can reverse these senses, as in your choices are coffee and tea which usually means the same as your choices are coffee or tea (alternatives).
https://en.wikipedia.org/wiki/Boolean_algebra
passage: Another example is through the use of utility applications. Some AR applications, such as Augment, enable users to apply digital objects into real environments, allowing businesses to use augmented reality devices as a way to preview their products in the real world. Similarly, it can also be used to demo what products may look like in an environment for customers, as demonstrated by companies such as Mountain Equipment Co-op or Lowe's who use augmented reality to allow customers to preview what their products might look like at home. Augmented reality (AR) differs from virtual reality (VR) in the sense that in AR, the surrounding environment is 'real' and AR is just adding virtual objects to the real environment. On the other hand, in VR, the surrounding environment is completely virtual and computer generated. A demonstration of how AR layers objects onto the real world can be seen with augmented reality games. WallaMe is an augmented reality game application that allows users to hide messages in real environments, utilizing geolocation technology in order to enable users to hide messages wherever they may wish in the world. ## History - 1901: Author L. Frank Baum, in his science-fiction novel The Master Key, first mentions the idea of an electronic display/spectacles that overlays data onto real life (in this case 'people').
https://en.wikipedia.org/wiki/Augmented_reality
passage: Developers may request additional documentation such as a real-time video of the bug's manifestation. - Analysis. The developer responsible for the bug, such as an artist, programmer or game designer checks the malfunction. This is outside the scope of game tester duties, although inconsistencies in the report may require more information or evidence from the tester. - Verification. After the developer fixes the issue, the tester verifies that the bug no longer occurs. Not all bugs are addressed by the developer, for example, some bugs may be claimed as features (expressed as "NAB" or "not a bug"), and may also be "waived" (given permission to be ignored) by producers, game designers, or even lead testers, according to company policy. ## Methodology There is no standard method for game testing, and most methodologies are developed by individual video game developers and publishers. Methodologies are continuously refined and may differ for different types of games (for example, the methodology for testing an MMORPG will be different from testing a casual game). Many methods, such as unit testing, are borrowed directly from general software testing techniques. Outlined below are the most important methodologies, specific to video games. - Functionality testing is most commonly associated with the phrase "game testing", as it entails playing the game in some form. Functionality testing does not require extensive technical knowledge. Functionality testers look for general problems within the game itself or its user interface, such as stability issues, game mechanic issues, and game asset integrity. - Compliance testing is the reason for the existence of game testing labs.
https://en.wikipedia.org/wiki/Game_testing
passage: If all circuit components were linear or the circuit was linearized beforehand, the equation system at this point is a system of linear equations and is solved with numerical linear algebra methods. Otherwise, it is a nonlinear algebraic equation system and is solved with nonlinear numerical methods such as Root-finding algorithms. ### Comparison to other methods Simulation methods are much more applicable than Laplace transform based methods, such as transfer functions, which only work for simple dynamic networks with capacitors and inductors. Also, the input signals to the network cannot be arbitrarily defined for Laplace transform based methods. ## Non-linear networks Most electronic designs are, in reality, non-linear. There are very few that do not include some semiconductor devices. These are invariably non-linear, the transfer function of an ideal semiconductor p-n junction is given by the very non-linear relationship; $$ i = I_o \left(e^{{v}/{V_T}}-1\right) $$ where; - i and v are the instantaneous current and voltage. - Io is an arbitrary parameter called the reverse leakage current whose value depends on the construction of the device. - VT is a parameter proportional to temperature called the thermal voltage and equal to about 25mV at room temperature. There are many other ways that non-linearity can appear in a network. All methods utilising linear superposition will fail when non-linear components are present.
https://en.wikipedia.org/wiki/Network_analysis_%28electrical_circuits%29
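The passage above gives the diode's exponential i–v relation; the minimal sketch below (with an assumed series resistor R and source voltage Vs that are not part of the passage) shows the kind of Newton–Raphson root-finding step a nonlinear circuit solver repeats.

```python
import math

# Shockley diode relation from the passage: i = Io * (exp(v/VT) - 1)
IO = 1e-12   # reverse leakage current (A) -- assumed example value
VT = 0.025   # thermal voltage (V), ~25 mV at room temperature

# Hypothetical series circuit (not from the passage): Vs = i*R + v
VS, R = 5.0, 1000.0

def residual(v):
    """Mismatch between diode current and resistor current at diode voltage v."""
    return IO * (math.exp(v / VT) - 1.0) - (VS - v) / R

def d_residual(v):
    return (IO / VT) * math.exp(v / VT) + 1.0 / R

v = 0.6  # initial guess near a typical silicon junction voltage
for _ in range(50):                       # Newton-Raphson iterations
    step = residual(v) / d_residual(v)
    v -= step
    if abs(step) < 1e-12:
        break

i = IO * (math.exp(v / VT) - 1.0)
print(f"diode voltage ~ {v:.4f} V, current ~ {i * 1000:.4f} mA")
```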
passage: Much of corporate finance theory, by contrast, considers investment under "certainty" (Fisher separation theorem, "theory of investment value", and Modigliani–Miller theorem). Here, theory and methods are developed for the decisioning about funding, dividends, and capital structure discussed above. A recent development is to incorporate uncertainty and contingency—and thus various elements of asset pricing—into these decisions, employing for example real options analysis. ### Financial mathematics Financial mathematics is the field of applied mathematics concerned with financial markets; Louis Bachelier's doctoral thesis, defended in 1900, is considered to be the first scholarly work in this area. The field is largely focused on the modeling of derivatives—with much emphasis on interest rate- and credit risk modeling—while other important areas include insurance mathematics and quantitative portfolio management. Relatedly, the techniques developed are applied to pricing and hedging a wide range of asset-backed, government, and corporate-securities. As above, in terms of practice, the field is referred to as quantitative finance and / or mathematical finance, and comprises primarily the three areas discussed.
https://en.wikipedia.org/wiki/Finance
passage: ### Factorial For any positive integer n, the product of the positive integers less than or equal to n is a unary operation called factorial. In the context of complex numbers, the gamma function is a unary operation that extends the factorial. ### Trigonometry In trigonometry, the trigonometric functions, such as $$ \sin $$ , $$ \cos $$ , and $$ \tan $$ , can be seen as unary operations. This is because it is possible to provide only one term as input for these functions and retrieve a result. By contrast, binary operations, such as addition, require two different terms to compute a result. ### Examples from programming languages Below is a table summarizing common unary operators along with their symbols, description, and examples:

| Operator | Symbol | Description | Example |
|---|---|---|---|
| Increment | `++` | Increases the value of a variable by 1 | `x = 2; ++x; // x is now 3` |
| Decrement | `--` | Decreases the value of a variable by 1 | `y = 10; --y; // y is now 9` |
| Unary Plus | `+` | Indicates a positive value | `a = -5; b = +a; // b is -5` |
| Unary Minus | `-` | Indicates a negative value | `c = 4; d = -c; // d is -4` |
| Logical NOT | `!` | Negates the truth value of a Boolean expression | `flag = true; result = !flag; // result is false` |
| Bitwise NOT | `~` | Bitwise negation, flips the bits of an integer | `num = 5; result = ~num; // result is -6` |

#### JavaScript
https://en.wikipedia.org/wiki/Unary_operation
passage: The property of categoriality (categoricity) ensures the completeness of a system, however the converse is not true: Completeness does not ensure the categoriality (categoricity) of a system, since two models can differ in properties that cannot be expressed by the semantics of the system. ### Example As an example, observe the following axiomatic system, based on first-order logic with additional semantics of the following countably infinitely many axioms added (these can be easily formalized as an axiom schema): $$ \exist x_1: \exist x_2: \lnot (x_1=x_2) $$ (informally, there exist two different items). $$ \exist x_1: \exist x_2: \exist x_3: \lnot (x_1=x_2) \land \lnot (x_1=x_3) \land \lnot (x_2=x_3) $$ (informally, there exist three different items). $$ ... $$ Informally, this infinite set of axioms states that there are infinitely many different items. However, the concept of an infinite set cannot be defined within the system — let alone the cardinality of such a set. The system has at least two different models – one is the natural numbers (isomorphic to any other countably infinite set), and another is the real numbers (isomorphic to any other set with the cardinality of the continuum). In fact, it has an infinite number of models, one for each cardinality of an infinite set. However, the property distinguishing these models is their cardinality — a property which cannot be defined within the system.
https://en.wikipedia.org/wiki/Axiomatic_system
passage: Rather, an external laser injects counter-propagating beams into an optical fiber ring, where rotation causes a relative phase shift between those beams when interfered after their pass through the fiber ring. The phase shift is proportional to the rate of rotation. This is less sensitive in a single traverse of the ring than the RLG, in which the externally observed phase shift is proportional to the accumulated rotation itself, not its derivative. However, the sensitivity of the fiber optic gyro is enhanced by having a long optical fiber, coiled for compactness, in which the Sagnac effect is multiplied according to the number of turns. ## Example applications - Airbus A320 - Agni III and Agni-IV - Agni-V - ASM-135 US Anti-satellite missile - Boeing 757-200 - Boeing 777 - B-52H with the AMI upgrade - EF-111 Raven - F-15E Strike Eagle - F-16 Fighting Falcon - HAL Tejas - MC-130E Combat Talon I and MC-130H Combat Talon II - MQ-1C Warrior - MK39 Ship's Internal Navigation System used in NATO surface ships and submarines - P3 Orion (with upgrade) - Shaurya missile. - MH-60R, MH-60S, SH60F and SH60B Seahawk helicopters - Sukhoi Su-30MKI - Trident I and Trident II Missiles - PARALIGN, used for roller alignment - International Space Station - JF-17 Thunder
https://en.wikipedia.org/wiki/Ring_laser_gyroscope
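As a rough illustration of the claim that the fiber coil multiplies the Sagnac effect by the number of turns, the sketch below uses the standard fiber-optic-gyro relation Δφ = 8πNAΩ/(λc); that formula is not stated in the passage and is assumed here, and the coil radius and wavelength are made-up example values.

```python
import math

C = 2.998e8            # speed of light (m/s)
WAVELENGTH = 1.55e-6   # assumed source wavelength (m)
COIL_RADIUS = 0.04     # assumed coil radius (m)
AREA = math.pi * COIL_RADIUS ** 2

def sagnac_phase(turns, omega):
    """Assumed standard relation for a fiber coil: delta_phi = 8*pi*N*A*Omega / (lambda*c)."""
    return 8 * math.pi * turns * AREA * omega / (WAVELENGTH * C)

omega = math.radians(15.0) / 3600.0   # Earth-rate-scale rotation: 15 deg/h in rad/s
for turns in (1, 100, 1000):          # phase grows linearly with the number of turns
    print(f"{turns:5d} turns -> phase shift {sagnac_phase(turns, omega):.3e} rad")
```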
passage: In reality, a diagnostic procedure may involve components of multiple methods. ### Differential diagnosis The method of differential diagnosis is based on finding as many candidate diseases or conditions as possible that can possibly cause the signs or symptoms, followed by a process of elimination or at least of rendering the entries more or less probable by further medical tests and other processing, aiming to reach the point where only one candidate disease or condition remains as probable. The result may also remain a list of possible conditions, ranked in order of probability or severity. Such a list is often generated by computer-aided diagnosis systems. The resultant diagnostic opinion by this method can be regarded more or less as a diagnosis of exclusion. Even if it does not result in a single probable disease or condition, it can at least rule out any imminently life-threatening conditions. Unless the provider is certain of the condition present, further medical tests, such as medical imaging, are performed or scheduled in part to confirm or disprove the diagnosis but also to document the patient's status and keep the patient's medical history up to date. If unexpected findings are made during this process, the initial hypothesis may be ruled out and the provider must then consider other hypotheses. ### Pattern recognition In a pattern recognition method the provider uses experience to recognize a pattern of clinical characteristics. It is mainly based on certain symptoms or signs being associated with certain diseases or conditions, not necessarily involving the more cognitive processing involved in a differential diagnosis.
https://en.wikipedia.org/wiki/Medical_diagnosis
passage: It is convenient to imagine this gravitational force concentrated at the center of mass of the object. If an object with weight is displaced upwards or downwards a vertical distance , the work done on the object is: $$ W = F_g (y_2 - y_1) = F_g\Delta y = mg\Delta y $$ where Fg is weight (pounds in imperial units, and newtons in SI units), and Δy is the change in height y. Notice that the work done by gravity depends only on the vertical movement of the object. The presence of friction does not affect the work done on the object by its weight. #### Gravity in 3D space The force of gravity exerted by a mass on another mass is given by $$ \mathbf{F} = -\frac{GMm}{r^2} \hat\mathbf{r} = -\frac{GMm}{r^3}\mathbf{r}, $$ where is the position vector from to and is the unit vector in the direction of .
https://en.wikipedia.org/wiki/Work_%28physics%29
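A tiny numeric illustration of W = mgΔy with assumed example values; it only restates the point above that the work done by the weight depends on the vertical displacement alone.

```python
g = 9.81        # gravitational acceleration (m/s^2)
m = 2.0         # mass (kg) -- assumed example value
delta_y = 1.5   # vertical drop (m); any horizontal motion along the way is irrelevant

W = m * g * delta_y   # work done by the weight as the object descends by delta_y
print(f"W = {W:.2f} J")  # ~29.43 J; raising the object by the same height reverses the sign
```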
passage: - It may be more cost-efficient to obtain the desired level of performance by using a cluster of several low-end computers, in comparison with a single high-end computer. Examples Examples of distributed systems and applications of distributed computing include the following: - telecommunications networks: - telephone networks and cellular networks, - computer networks such as the Internet, - wireless sensor networks, - routing algorithms; - network applications: - World Wide Web and peer-to-peer networks, - massively multiplayer online games and virtual reality communities, - distributed databases and distributed database management systems, - network file systems, - distributed cache such as burst buffers, - distributed information processing systems such as banking systems and airline reservation systems; - real-time process control: - aircraft control systems, - industrial control systems; - parallel computation: - scientific computing, including cluster computing, grid computing, cloud computing, and various volunteer computing projects, - distributed rendering in computer graphics. - peer-to-peer ## Reactive distributed systems According to Reactive Manifesto, reactive distributed systems are responsive, resilient, elastic and message-driven. Subsequently, Reactive systems are more flexible, loosely-coupled and scalable. To make your systems reactive, you are advised to implement Reactive Principles. Reactive Principles are a set of principles and patterns which help to make your cloud native application as well as edge native applications more reactive. ## Theoretical foundations
https://en.wikipedia.org/wiki/Distributed_computing
passage: Finally, we substitute into our $$ AX + UY = I $$ , and we have $$ AX + U\left(C^{-1} + VA^{-1}U\right)^{-1}VA^{-1} = I $$ . Thus, $$ (A + UCV)^{-1} = X = A^{-1} - A^{-1}U\left(C^{-1} + VA^{-1}U\right)^{-1}VA^{-1}. $$ We have derived the Woodbury matrix identity. We start by the matrix $$ \begin{bmatrix} A & U \\ V & C \end{bmatrix} $$ By eliminating the entry under the A (given that A is invertible) we get $$ \begin{bmatrix} I & 0 \\ -VA^{-1} & I \end{bmatrix} \begin{bmatrix} A & U \\ V & C \end{bmatrix} = \begin{bmatrix} A & U \\ 0 & C - VA^{-1}U \end{bmatrix} $$ Likewise, eliminating the entry above C gives $$ \begin{bmatrix} A & U \\ V & C \end{bmatrix} \begin{bmatrix} I & -A^{-1}U \\ 0 & I \end{bmatrix} = \begin{bmatrix} A & 0 \\ V & C-VA^{-1}U \end{bmatrix} $$ Now combining the above two, we get $$
https://en.wikipedia.org/wiki/Woodbury_matrix_identity
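A quick numerical check of the identity derived above, using NumPy with randomly generated matrices (the sizes and scaling are chosen only to keep everything comfortably invertible).

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)   # kept well conditioned
U = 0.3 * rng.standard_normal((n, k))
C = rng.standard_normal((k, k)) + 2 * np.eye(k)
V = 0.3 * rng.standard_normal((k, n))

Ainv = np.linalg.inv(A)
lhs = np.linalg.inv(A + U @ C @ V)
rhs = Ainv - Ainv @ U @ np.linalg.inv(np.linalg.inv(C) + V @ Ainv @ U) @ V @ Ainv

print(np.allclose(lhs, rhs))   # True (up to floating-point round-off)
```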
passage: People with schizophrenia perform worse on these behavioral tasks, which relate to perception and continuous recognition memory. The neurobiological basis of gamma dysfunction in schizophrenia is thought to lie with GABAergic interneurons involved in known brain wave rhythm-generating networks. Antipsychotic treatment, which diminishes some behavioral symptoms of schizophrenia, does not restore gamma synchrony to normal levels. ### Epilepsy Gamma oscillations are observed in the majority of seizures and may contribute to their onset in epilepsy. Visual stimuli such as large, high-contrast gratings that are known to trigger seizures in photosensitive epilepsy also drive gamma oscillations in visual cortex. During a focal seizure event, maximal gamma rhythm synchrony of interneurons is always observed in the seizure onset zone, and synchrony propagates from the onset zone over the whole epileptogenic zone. ### Alzheimer's disease Enhanced gamma band power and lagged gamma responses have been observed in patients with Alzheimer's disease (AD). Interestingly, the tg APP-PS1 mouse model of AD exhibits decreased gamma oscillation power in the lateral entorhinal cortex, which transmits various sensory inputs to the hippocampus and thus participates in memory processes analogous to those affected by human AD. Decreased hippocampal slow gamma power has also been observed in the 3xTg mouse model of AD. Gamma stimulation may have therapeutic potential for AD and other neurodegenerative diseases.
https://en.wikipedia.org/wiki/Gamma_wave
passage: Number of permutations of n elements that require exactly k prefix reversals (flips) to be sorted:

| n \ k | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|---|
| 4 | 1 | 3 | 6 | 11 | 3 | | | | | |
| 5 | 1 | 4 | 12 | 35 | 48 | 20 | | | | |
| 6 | 1 | 5 | 20 | 79 | 199 | 281 | 133 | 2 | | |
| 7 | 1 | 6 | 30 | 149 | 543 | 1357 | 1903 | 1016 | 35 | |
| 8 | 1 | 7 | 42 | 251 | 1191 | 4281 | 10561 | 15011 | 8520 | 455 |
https://en.wikipedia.org/wiki/Pancake_sorting
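An illustrative brute-force sketch (not from the source article) that reproduces one row of the table above by breadth-first search over permutations; for n = 5 it should print the counts 1, 4, 12, 35, 48, 20.

```python
from collections import deque

def flip_distribution(n):
    """Count permutations of n elements by the minimum number of prefix reversals
    (pancake flips) needed to sort them, via BFS from the sorted permutation.
    Each flip is its own inverse, so distances from the identity equal sorting distances."""
    start = tuple(range(n))
    dist = {start: 0}
    queue = deque([start])
    while queue:
        p = queue.popleft()
        for i in range(2, n + 1):                 # reverse the first i elements
            q = p[:i][::-1] + p[i:]
            if q not in dist:
                dist[q] = dist[p] + 1
                queue.append(q)
    counts = [0] * (max(dist.values()) + 1)
    for d in dist.values():
        counts[d] += 1
    return counts

print(flip_distribution(5))   # expected [1, 4, 12, 35, 48, 20], the table row for n = 5
```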
passage: In constrained optimization, a field of mathematics, a barrier function is a continuous function whose value increases to infinity as its argument approaches the boundary of the feasible region of an optimization problem. Such functions are used to replace inequality constraints by a penalizing term in the objective function that is easier to handle. A barrier function is also called an interior penalty function, as it is a penalty function that forces the solution to remain within the interior of the feasible region. The two most common types of barrier functions are inverse barrier functions and logarithmic barrier functions. Resumption of interest in logarithmic barrier functions was motivated by their connection with primal-dual interior point methods. ## Motivation Consider the following constrained optimization problem: minimize subject to where is some constant. If one wishes to remove the inequality constraint, the problem can be reformulated as minimize , where if , and zero otherwise. This problem is equivalent to the first. It gets rid of the inequality, but introduces the issue that the penalty function , and therefore the objective function , is discontinuous, preventing the use of calculus to solve it. A barrier function, now, is a continuous approximation to that tends to infinity as approaches from above. Using such a function, a new optimization problem is formulated, viz. minimize where is a free parameter. This problem is not equivalent to the original, but as approaches zero, it becomes an ever-better approximation.
https://en.wikipedia.org/wiki/Barrier_function
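A minimal sketch of the logarithmic-barrier idea on a toy problem, assuming the constraint x ≤ b and the specific objective f(x) = (x − 2)², neither of which appears in the passage. As the free parameter μ shrinks, the unconstrained barrier minimizers approach the constrained solution x = 1.

```python
# Toy problem: minimize f(x) = (x - 2)**2 subject to x <= b, with b = 1.
# Log-barrier subproblem: minimize f(x) - mu * log(b - x) over x < b.
b = 1.0
f = lambda x: (x - 2.0) ** 2

def barrier_argmin(mu, lo=-10.0):
    """Minimize the barrier objective by bisecting on its derivative
    2*(x - 2) + mu/(b - x), which is strictly increasing on (-inf, b)."""
    dphi = lambda x: 2.0 * (x - 2.0) + mu / (b - x)
    hi = b - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if dphi(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

mu = 1.0
for _ in range(8):                      # shrink mu; the minimizer approaches x = 1
    x = barrier_argmin(mu)
    print(f"mu = {mu:.0e}  x* = {x:.6f}  f(x*) = {f(x):.6f}")
    mu *= 0.1
```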
passage: In particular, the above is equivalent to $$ \Delta f = \frac{\partial^2 f}{\partial r^2} + \frac{2}{r}\frac{\partial f}{\partial r} + \frac{1}{r^2}\Delta_{S^2} f , $$ where $$ \Delta_{S^2}f $$ is the Laplace-Beltrami operator on the unit sphere. In general curvilinear coordinates (): $$ \Delta = \nabla \xi^m \cdot \nabla \xi^n \frac{\partial^2}{\partial \xi^m \, \partial \xi^n} + \nabla^2 \xi^m \frac{\partial}{\partial \xi^m } = g^{mn} \left(\frac{\partial^2}{\partial\xi^m \, \partial\xi^n} - \Gamma^{l}_{mn}\frac{\partial}{\partial\xi^l} \right), $$ where summation over the repeated indices is implied, is the inverse metric tensor and are the Christoffel symbols for the selected coordinates.
https://en.wikipedia.org/wiki/Laplace_operator
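A small SymPy check of the spherical form quoted above for a radially symmetric test function, for which the angular (Laplace–Beltrami) term vanishes; the test function exp(−r²) is just an assumed example.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)

# Radially symmetric test function: the (1/r^2) * Laplace-Beltrami term drops out.
f = sp.exp(-r**2)

cartesian = sp.diff(f, x, 2) + sp.diff(f, y, 2) + sp.diff(f, z, 2)

s = sp.symbols('s', positive=True)                   # s stands for the radius r
g = sp.exp(-s**2)
radial = sp.diff(g, s, 2) + (2 / s) * sp.diff(g, s)  # d^2f/dr^2 + (2/r) df/dr

print(sp.simplify(cartesian - radial.subs(s, r)))    # 0: the two expressions agree
```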
passage: ### Data mining is the process of extracting and finding patterns in massive data sets involving methods at the intersection of machine learning, statistics, and database systems. Data mining is an interdisciplinary subfield of computer science and statistics with an overall goal of extracting information (with intelligent methods) from a data set and transforming the information into a comprehensible structure for further use. Data mining is the analysis step of the "knowledge discovery in databases" process, or KDD. Aside from the raw analysis step, it also involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating. The term "data mining" is a misnomer because the goal is the extraction of patterns and knowledge from large amounts of data, not the extraction (mining) of data itself. It also is a buzzword and is frequently applied to any form of large-scale data or information processing (collection, extraction, warehousing, analysis, and statistics) as well as any application of computer decision support systems, including artificial intelligence (e.g., machine learning) and business intelligence. Often the more general terms (large scale) data analysis and analytics—or, when referring to actual methods, artificial intelligence and machine learning—are more appropriate.
https://en.wikipedia.org/wiki/Data_mining
passage: According to proponents of climate justice, the costs of climate adaptation should be paid by those most responsible for climate change, while the beneficiaries of payments should be those suffering impacts. One way this can be addressed in practice is to have wealthy nations pay poorer countries to adapt. Oxfam found that in 2023 the wealthiest 10% of people were responsible for 50% of global emissions, while the bottom 50% were responsible for just 8%. Production of emissions is another way to look at responsibility: under that approach, the top 21 fossil fuel companies would owe cumulative climate reparations of $5.4 trillion over the period 2025–2050. To achieve a just transition, people working in the fossil fuel sector would also need other jobs, and their communities would need investments. ### International climate agreements Nearly all countries in the world are parties to the 1994 United Nations Framework Convention on Climate Change (UNFCCC). The goal of the UNFCCC is to prevent dangerous human interference with the climate system. As stated in the convention, this requires that greenhouse gas concentrations are stabilized in the atmosphere at a level where ecosystems can adapt naturally to climate change, food production is not threatened, and economic development can be sustained. The UNFCCC does not itself restrict emissions but rather provides a framework for protocols that do. Global emissions have risen since the UNFCCC was signed. Its yearly conferences are the stage of global negotiations. The 1997 Kyoto Protocol extended the UNFCCC and included legally binding commitments for most developed countries to limit their emissions.
https://en.wikipedia.org/wiki/Climate_change
passage: This is precisely the Fréchet derivative, and the same construction can be made to work for a function between any Banach spaces. Another fruitful point of view is to define the differential directly as a kind of directional derivative: $$ df(\mathbf{x},\mathbf{h}) = \lim_{t\to 0}\frac{f(\mathbf{x}+t\mathbf{h})-f(\mathbf{x})}{t} = \left.\frac{d}{dt} f(\mathbf{x}+t\mathbf{h})\right|_{t=0}, $$ which is the approach already taken for defining higher order differentials (and is most nearly the definition set forth by Cauchy). If $$ t $$ represents time and x position, then h represents a velocity instead of a displacement as we have heretofore regarded it. This yields yet another refinement of the notion of differential: that it should be a linear function of a kinematic velocity. The set of all velocities through a given point of space is known as the tangent space, and so $$ df $$ gives a linear function on the tangent space: a differential form. With this interpretation, the differential of $$ f $$ is known as the exterior derivative, and has broad application in differential geometry because the notion of velocities and the tangent space makes sense on any differentiable manifold. If, in addition, the output value of $$ f $$ also represents a position (in a Euclidean space), then a dimensional analysis confirms that the output value of df must be a velocity.
https://en.wikipedia.org/wiki/Differential_of_a_function
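A short numeric sketch of the directional-derivative definition above, using an assumed example function and a finite-difference approximation of the limit; the result is compared against the linear-in-h value grad(f)·h.

```python
import numpy as np

def f(p):
    x, y = p
    return x**2 * y + np.sin(y)

def df(p, h, eps=1e-6):
    """Directional-derivative form of the differential:
    df(x, h) ~ (f(x + t*h) - f(x)) / t for small t (central difference used here)."""
    p, h = np.asarray(p, float), np.asarray(h, float)
    return (f(p + eps * h) - f(p - eps * h)) / (2 * eps)

p = np.array([1.0, 2.0])
h = np.array([0.3, -0.5])

grad = np.array([2 * p[0] * p[1], p[0]**2 + np.cos(p[1])])  # analytic gradient at p
print(df(p, h))   # numerical differential in the direction h
print(grad @ h)   # grad(f).h -- the linear function of h; the two values agree
```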
passage: ### Marketing documentation For many applications it is necessary to have some promotional materials to encourage casual observers to spend more time learning about the product. This form of documentation has three purposes: 1. To excite the potential user about the product and instill in them a desire to become more involved with it. 1. To inform them about what exactly the product does, so that their expectations are in line with what they will be receiving. 1. To explain the position of this product with respect to other alternatives. ## Documentation and agile development controversy "The resistance to documentation among developers is well known and needs no emphasis." This situation is particularly prevalent in agile software development because these methodologies try to avoid any unnecessary activities that do not directly bring value. Specifically, the Agile Manifesto advocates valuing "working software over comprehensive documentation", which could be interpreted cynically as "We want to spend all our time coding. Remember, real programmers don't write documentation. " A survey among software engineering experts revealed, however, that documentation is by no means considered unnecessary in agile development. Yet it is acknowledged that there are motivational problems in development, and that documentation methods tailored to agile development (e.g. through Reputation systems and Gamification) may be needed. Selic, Bran. "Agile documentation, anyone?" In: IEEE Software, vol. 26, no. 6, pp. 11-12, 2009 ### Docs as Code Docs as Code is an approach to documentation that treats it with the same rigor and processes as software code. This includes: 1.
https://en.wikipedia.org/wiki/Software_documentation
passage: The singular points of a degenerate quadric are the points whose projective coordinates belong to the null space of the matrix . A quadric is reducible if and only if the rank of is one (case of a double hyperplane) or two (case of two hyperplanes). ## Normal form of projective quadrics In real projective space, by Sylvester's law of inertia, a non-singular quadratic form P(X) may be put into the normal form $$ P(X) = \pm X_0^2 \pm X_1^2 \pm\cdots\pm X_{D+1}^2 $$ by means of a suitable projective transformation (normal forms for singular quadrics can have zeros as well as ±1 as coefficients). For two-dimensional surfaces (dimension D = 2) in three-dimensional space, there are exactly three non-degenerate cases: $$ P(X) = \begin{cases} X_0^2+X_1^2+X_2^2+X_3^2\\ X_0^2+X_1^2+X_2^2-X_3^2\\ X_0^2+X_1^2-X_2^2-X_3^2 \end{cases} $$ The first case is the empty set. The second case generates the ellipsoid, the elliptic paraboloid or the hyperboloid of two sheets, depending on whether the chosen plane at infinity cuts the quadric in the empty set, in a point, or in a nondegenerate conic respectively. These all have positive Gaussian curvature.
https://en.wikipedia.org/wiki/Quadric
passage: So the electrons in the circuit flow in the opposite direction to the direction of conventional current. From the standpoint of electric power, components in an electric circuit can be divided into two categories: ### Active devices (power sources) If conventional electric current (positive charge) is forced to flow through the device in the direction from the lower electric potential to the higher, against the opposing force of the electric field between the terminals, (this is equivalent to the negatively charged electrons moving from the positive terminal to the negative terminal), work will be done on the charges. So energy is being converted to electric potential energy from some other type of energy, such as mechanical energy or chemical energy. Devices in which this occurs are called active devices or power sources; such as electric generators and batteries. Some devices can be either a source or a load, depending on the voltage and current through them. For example, a rechargeable battery acts as a source when it provides power to a circuit, but as a load when it is connected to a battery charger and is being recharged. ### Passive devices (loads) If conventional current flows through the device in a direction from higher potential to lower potential (equivalent to the negative electrons moving from the negative terminal to the positive terminal), in the same direction as the force of the electric field, work is done by the charges on the device. The potential energy of the charges due to the voltage between the terminals is converted to kinetic energy in the device. These devices are called passive components or loads; they 'consume' electric power from the circuit, converting it to other forms of energy such as mechanical work, heat, light, etc.
https://en.wikipedia.org/wiki/Electric_power
passage: $$ If the auxiliary worldsheet metric tensor $$ \sqrt{-h} $$ is calculated from the equations of motion: $$ \sqrt{-h} = \frac{2 \sqrt{-G}}{h^{cd} G_{cd}} $$ and substituted back to the action, it becomes the Nambu–Goto action: $$ S = {T \over 2}\int \mathrm{d}^2 \sigma \sqrt{-h} h^{ab} G_{ab} = {T \over 2}\int \mathrm{d}^2 \sigma \frac{2 \sqrt{-G}}{h^{cd} G_{cd}} h^{ab} G_{ab} = T \int \mathrm{d}^2 \sigma \sqrt{-G}. $$ However, the Polyakov action is more easily quantized because it is linear.
https://en.wikipedia.org/wiki/Polyakov_action
passage: In mathematics, Hurwitz's theorem is a theorem of Adolf Hurwitz (1859–1919), published posthumously in 1923, solving the Hurwitz problem for finite-dimensional unital real non-associative algebras endowed with a nondegenerate positive-definite quadratic form. The theorem states that if the quadratic form defines a homomorphism into the positive real numbers on the non-zero part of the algebra, then the algebra must be isomorphic to the real numbers, the complex numbers, the quaternions, or the octonions, and that there are no other possibilities. Such algebras, sometimes called Hurwitz algebras, are examples of composition algebras. The theory of composition algebras has subsequently been generalized to arbitrary quadratic forms and arbitrary fields. Hurwitz's theorem implies that multiplicative formulas for sums of squares can only occur in 1, 2, 4 and 8 dimensions, a result originally proved by Hurwitz in 1898. It is a special case of the Hurwitz problem, solved also in . Subsequent proofs of the restrictions on the dimension have been given by using the representation theory of finite groups and by and using Clifford algebras. Hurwitz's theorem has been applied in algebraic topology to problems on vector fields on spheres and the homotopy groups of the classical groups and in quantum mechanics to the classification of simple Jordan algebras. ## Euclidean Hurwitz algebras
https://en.wikipedia.org/wiki/Hurwitz%27s_theorem_%28composition_algebras%29
passage: Expanding both numerators on the right hand side of this formula into sums of divisors of $$ n_i $$ results in the desired Egyptian fraction representation. use a similar technique involving a different sequence of practical numbers to show that every rational number $$ x/y $$ has an Egyptian fraction representation in which the largest denominator is $$ O(y\log^2 y/\log\log y) $$ . According to a September 2015 conjecture by Zhi-Wei Sun, every positive rational number has an Egyptian fraction representation in which every denominator is a practical number. The conjecture was proved by . ## Analogies with prime numbers One reason for interest in practical numbers is that many of their properties are similar to properties of the prime numbers. Indeed, theorems analogous to Goldbach's conjecture and the twin prime conjecture are known for practical numbers: every positive even integer is the sum of two practical numbers, and there exist infinitely many triples of practical numbers $$ (x-2,x,x+2) $$ . Melfi also showed that there are infinitely many practical Fibonacci numbers ; and Sanna proved that at least $$ C n / \log n $$ of the first $$ n $$ terms of every Lucas sequence are practical numbers, where $$ C > 0 $$ is a constant and $$ n $$ is sufficiently large. The analogous questions of the existence of infinitely many Fibonacci primes, or prime in a Lucas sequence, are open.
https://en.wikipedia.org/wiki/Practical_number
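A small sketch that tests the definition and the Goldbach-like statement quoted above for numbers below 200; the subset-sum check over divisors is a naive illustration, not an efficient characterization.

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def is_practical(n):
    """n is practical if every positive integer smaller than n is a sum of distinct divisors of n."""
    sums = {0}
    for d in divisors(n):
        sums |= {s + d for s in sums}
    return all(m in sums for m in range(1, n))

practicals = [n for n in range(1, 200) if is_practical(n)]
print(practicals[:12])   # [1, 2, 4, 6, 8, 12, 16, 18, 20, 24, 28, 30]

# Goldbach-like statement above: every positive even integer is a sum of two practical numbers.
pset = set(practicals)
print(all(any(a in pset and n - a in pset for a in range(1, n))
          for n in range(2, 200, 2)))   # True for this range
```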
passage: Smooth normals are specified per vertex. .ply Polygon File Format Stanford University Various Binary and ASCII .pmd Polygon Movie Maker data Yu Higuchi MikuMikuDance Proprietary binary file format for storing humanoid model geometry with rigging, material, and physics information. .stl Stereolithography Format 3D Systems Many Binary and ASCII format originally designed to aid in CNC. .amf Additive Manufacturing File Format ASTM International N/A Like the STL format, but with added native color, material, and constellation support. .wrl Virtual Reality Modeling Language Web3D Consortium Web Browsers ISO Standard 14772-1:1997 .wrz VRML Compressed Web3D Consortium Web Browsers .x3d, .x3db, .x3dv Extensible 3D Web3D Consortium Web Browsers XML-based, open source, royalty-free, extensible, and interoperable; also supports color, texture, and scene information. ISO Standard 19775/19776/19777 .x3dz, .x3dbz, .x3dvz X3D Compressed Binary Web3D Consortium Web Browsers .c4d Cinema 4D File Maxon CINEMA 4D .lwo LightWave 3D object File NewTek LightWave 3D .smbSCOREC apfRPI SCORECPUMIOpen source parallel adaptive unstructured 3D meshes for PDE based simulation workflows. .msh
https://en.wikipedia.org/wiki/Polygon_mesh
passage: Some but not all polynomial equations with rational coefficients have a solution that is an algebraic expression that can be found using a finite number of operations that involve only those same types of coefficients (that is, can be solved algebraically). This can be done for all such equations of degree one, two, three, or four; but for degree five or more it can only be done for some equations, not all. A large amount of research has been devoted to compute efficiently accurate approximations of the real or complex solutions of a univariate algebraic equation (see Root-finding algorithm) and of the common solutions of several multivariate polynomial equations (see System of polynomial equations). ## Terminology The term "algebraic equation" dates from the time when the main problem of algebra was to solve univariate polynomial equations. This problem was completely solved during the 19th century; see Fundamental theorem of algebra, Abel–Ruffini theorem and Galois theory. Since then, the scope of algebra has been dramatically enlarged. In particular, it includes the study of equations that involve th roots and, more generally, algebraic expressions. This makes the term algebraic equation ambiguous outside the context of the old problem. So the term polynomial equation is generally preferred when this ambiguity may occur, specially when considering multivariate equations.
https://en.wikipedia.org/wiki/Algebraic_equation
passage: The first test of Newton's law of gravitation between masses in the laboratory was the Cavendish experiment conducted by the British scientist Henry Cavendish in 1798. It took place 111 years after the publication of Newton's Principia and approximately 71 years after his death. Newton's law of gravitation resembles Coulomb's law of electrical forces, which is used to calculate the magnitude of the electrical force arising between two charged bodies. Both are inverse-square laws, where force is inversely proportional to the square of the distance between the bodies. Coulomb's law has charge in place of mass and a different constant. Newton's law was later superseded by Albert Einstein's theory of general relativity, but the universality of the gravitational constant is intact and the law still continues to be used as an excellent approximation of the effects of gravity in most applications. Relativity is required only when there is a need for extreme accuracy, or when dealing with very strong gravitational fields, such as those found near extremely massive and dense objects, or at small distances (such as Mercury's orbit around the Sun). ## History Before Newton's law of gravity, there were many theories explaining gravity. Philosophers made observations about things falling down − and developed theories why they do – as early as Aristotle who thought that rocks fall to the ground because seeking the ground was an essential part of their nature. Around 1600, the scientific method began to take root. René Descartes started over with a more fundamental view, developing ideas of matter and action independent of theology.
https://en.wikipedia.org/wiki/Newton%27s_law_of_universal_gravitation
passage: &= 963^3 + 804^3 \\ &= 1134^3 - 357^3 \\ &= 1155^3 - 504^3 \\ &= 1246^3 - 805^3 \\ &= 2115^3 - 2004^3 \\ &= 4746^3 - 4725^3 \\[6pt] \mathrm{Cabtaxi}(7) =& \ 11302198488 \\ &= 1926^3 + 1608^3 \\ &= 1939^3 + 1589^3 \\ &= 2268^3 - 714^3 \\ &= 2310^3 - 1008^3 \\ &= 2492^3 - 1610^3 \\ &= 4230^3 - 4008^3 \\ &= 9492^3 - 9450^3 \\[6pt] \mathrm{Cabtaxi}(8) =& \ 137513849003496 \\ &= 22944^3 + 50058^3 \\ &= 36547^3 + 44597^3 \\ &= 36984^3 + 44298^3 \\ &= 52164^3 - 16422^3 \\ &= 53130^3 - 23184^3 \\ &= 57316^3 - 37030^3 \\ &= 97290^3 - 92184^3 \\ &= 218316^3 - 217350^3 \\[6pt] \mathrm{Cabtaxi}(9) =& \ 424910390480793000 \\ &= 645210^3 + 538680^3 \\ &= 649565^3 + 532315^3 \\ &= 752409^3 - 101409^3 \\ &= 759780^3 - 239190^3 \\
https://en.wikipedia.org/wiki/Cabtaxi_number
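A direct check of the Cabtaxi(7) decompositions listed above; the script simply recomputes each sum or difference of cubes and compares it with 11302198488.

```python
# Decompositions of Cabtaxi(7) as quoted above: (a, b, sign) means a**3 + sign * b**3.
N = 11302198488
ways = [
    (1926, 1608, +1), (1939, 1589, +1), (2268, 714, -1), (2310, 1008, -1),
    (2492, 1610, -1), (4230, 4008, -1), (9492, 9450, -1),
]
for a, b, sign in ways:
    value = a ** 3 + sign * b ** 3
    op = "+" if sign > 0 else "-"
    print(f"{a}^3 {op} {b}^3 = {value}  matches: {value == N}")
```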
passage: The field operators transform under Lorentz transformations according to the spin of the particle that they create, by definition. Additionally, the assumption (known as microcausality) that spacelike-separated fields either commute or anticommute can be made only for relativistic theories with a time direction. Otherwise, the notion of being spacelike is meaningless. However, the proof involves looking at a Euclidean version of spacetime, in which the time direction is treated as a spatial one, as will be now explained. Lorentz transformations include 3-dimensional rotations and boosts. A boost transfers to a frame of reference with a different velocity and is mathematically like a rotation into time. By analytic continuation of the correlation functions of a quantum field theory, the time coordinate may become imaginary, and then boosts become rotations. The new "spacetime" has only spatial directions and is termed Euclidean. ### Exchange symmetry or permutation symmetry Bosons are particles whose wavefunction is symmetric under such an exchange or permutation, so if we swap the particles, the wavefunction does not change. Fermions are particles whose wavefunction is antisymmetric, so under such a swap the wavefunction gets a minus sign, meaning that the amplitude for two identical fermions to occupy the same state must be zero. This is the Pauli exclusion principle: two identical fermions cannot occupy the same state. This rule does not hold for bosons.
https://en.wikipedia.org/wiki/Spin%E2%80%93statistics_theorem
passage: $$
\begin{array}{lllll}
p_{0,0}(x) = y_0 & & & & \\
& p_{0,1}(x) & & & \\
p_{1,1}(x) = y_1 & & p_{0,2}(x) & & \\
& p_{1,2}(x) & & p_{0,3}(x) & \\
p_{2,2}(x) = y_2 & & p_{1,3}(x) & & p_{0,4}(x) \\
& p_{2,3}(x) & & p_{1,4}(x) & \\
p_{3,3}(x) = y_3 & & & &
\end{array}
$$
https://en.wikipedia.org/wiki/Neville%27s_algorithm
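A sketch of Neville's algorithm that fills the tableau above column by column, assuming the standard recurrence p_{i,j}(x) = ((x_j − x) p_{i,j−1}(x) + (x − x_i) p_{i+1,j}(x)) / (x_j − x_i), which the excerpt itself does not spell out.

```python
def neville(xs, ys, x):
    """Evaluate the interpolating polynomial through (xs[i], ys[i]) at x,
    filling the tableau p[i][j]:
        p[i][i] = ys[i]
        p[i][j] = ((xs[j] - x) * p[i][j-1] + (x - xs[i]) * p[i+1][j]) / (xs[j] - xs[i])
    The final answer is the top-right entry p[0][n-1]."""
    n = len(xs)
    p = [[0.0] * n for _ in range(n)]
    for i in range(n):
        p[i][i] = ys[i]
    for span in range(1, n):              # width of the point window minus one
        for i in range(n - span):
            j = i + span
            p[i][j] = ((xs[j] - x) * p[i][j - 1] + (x - xs[i]) * p[i + 1][j]) / (xs[j] - xs[i])
    return p[0][n - 1]

# The parabola y = x**2 is recovered exactly from three of its samples.
print(neville([1.0, 2.0, 4.0], [1.0, 4.0, 16.0], 3.0))   # 9.0
```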
passage: ## Proof that the Moore plane is not normal The fact that this space $$ \Gamma $$ is not normal can be established by the following counting argument (which is very similar to the argument that the Sorgenfrey plane is not normal): 1. On the one hand, the countable set $$ S:=\{(p,q) \in \mathbb Q\times \mathbb Q: q>0\} $$ of points with rational coordinates is dense in $$ \Gamma $$ ; hence every continuous function $$ f:\Gamma \to \mathbb R $$ is determined by its restriction to $$ S $$ , so there can be at most $$ |\mathbb R|^{|S|} = 2^{\aleph_0} $$ many continuous real-valued functions on $$ \Gamma $$ . 1. On the other hand, the real line $$ L:=\{(p,0): p\in \mathbb R\} $$ is a closed discrete subspace of $$ \Gamma $$ with $$ 2^{\aleph_0} $$ many points. So there are $$ 2^{2^{\aleph_0}} > 2^{\aleph_0} $$ many continuous functions from L to $$ \mathbb R $$ . Not all these functions can be extended to continuous functions on $$ \Gamma $$ . 1.
https://en.wikipedia.org/wiki/Moore_plane
passage: Similarly, one can derive an equivalent formula for identical charged particles of charge in a uniform electric field of magnitude , where is replaced with the electrostatic force . Equating these two expressions yields the Einstein relation for the diffusivity, independent of or or other such forces: $$ \frac{\mathbb{E}{\left[x^2\right]}}{2t} = D = \mu k_\text{B} T = \frac{\mu R T}{N_\text{A}} = \frac{RT}{6\pi\eta r N_\text{A}}. $$ Here the first equality follows from the first part of Einstein's theory, the third equality follows from the definition of the Boltzmann constant as $$ k_\text{B} = R / N_\text{A} $$ , and the fourth equality follows from Stokes's formula for the mobility. By measuring the mean squared displacement over a time interval along with the universal gas constant , the temperature , the viscosity , and the particle radius , the Avogadro constant can be determined. The type of dynamical equilibrium proposed by Einstein was not new. It had been pointed out previously by J. J. Thomson in his series of lectures at Yale University in May 1903 that the dynamic equilibrium between the velocity generated by a concentration gradient given by Fick's law and the velocity due to the variation of the partial pressure caused when ions are set in motion "gives us a method of determining Avogadro's constant which is independent of any hypothesis as to the shape or size of molecules, or of the way in which they act upon each other".
https://en.wikipedia.org/wiki/Brownian_motion
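An illustrative rearrangement of the chain of equalities above: a (made-up) measured mean squared displacement gives D, and D = RT/(6πηrN_A) then gives an estimate of the Avogadro constant. All numbers are assumed example values.

```python
import math

R = 8.314        # gas constant (J / (mol K))
T = 293.0        # temperature (K)
eta = 1.0e-3     # viscosity of water (Pa s)
radius = 0.5e-6  # particle radius (m) -- assumed
msd = 5.2e-11    # assumed measured mean squared displacement (m^2) over the interval t
t = 60.0         # observation time (s)

D = msd / (2.0 * t)                              # one-dimensional E[x^2] = 2 D t
N_A = R * T / (6.0 * math.pi * eta * radius * D) # invert D = RT / (6 pi eta r N_A)
print(f"D ~ {D:.2e} m^2/s, estimated Avogadro constant ~ {N_A:.2e} 1/mol")
```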
passage: These are often called Calabi–Yau manifolds. However, the term is often used in slightly different ways by various authors — for example, some uses may refer to the complex manifold while others might refer to a complex manifold together with a particular Ricci-flat Kähler metric. This special case can equivalently be regarded as the complete existence and uniqueness theory for Kähler–Einstein metrics of zero scalar curvature on compact complex manifolds. The case of nonzero scalar curvature does not follow as a special case of Calabi's conjecture, since the 'right-hand side' of the Kähler–Einstein problem depends on the 'unknown' metric, thereby placing the Kähler–Einstein problem outside the domain of prescribing Ricci curvature. However, Yau's analysis of the complex Monge–Ampère equation in resolving the Calabi conjecture was sufficiently general so as to also resolve the existence of Kähler–Einstein metrics of negative scalar curvature. The third and final case of positive scalar curvature was resolved in the 2010s, in part by making use of the Calabi conjecture. ## Outline of the proof of the Calabi conjecture Calabi transformed the Calabi conjecture into a non-linear partial differential equation of complex Monge–Ampère type, and showed that this equation has at most one solution, thus establishing the uniqueness of the required Kähler metric. Yau proved the Calabi conjecture by constructing a solution of this equation using the continuity method.
https://en.wikipedia.org/wiki/Calabi_conjecture
passage: ReRAM involves generating defects in a thin oxide layer, known as oxygen vacancies (oxide bond locations where the oxygen has been removed), which can subsequently charge and drift under an electric field. The motion of oxygen ions and vacancies in the oxide would be analogous to the motion of electrons and holes in a semiconductor. Although ReRAM was initially seen as a replacement technology for flash memory, the cost and performance benefits of ReRAM have not been enough for companies to proceed with the replacement. Apparently, a broad range of materials can be used for ReRAM. However, the discovery that the popular high-κ gate dielectric HfO2 can be used as a low-voltage ReRAM has encouraged researchers to investigate more possibilities. ## Mechanically addressed systems Mechanically addressed systems use a recording head to read and write on a designated storage medium. Since the access time depends on the physical location of the data on the device, mechanically addressed systems may be sequential access. For example, magnetic tape stores data as a sequence of bits on a long tape; transporting the tape past the recording head is required to access any part of the storage. Tape media can be removed from the drive and stored, giving indefinite capacity at the cost of the time required to retrieve a dismounted tape. Hard disk drives use a rotating magnetic disk to store data; access time is longer than for semiconductor memory, but the cost per stored data bit is very low, and they provide random access to any location on the disk. Formerly, removable disk packs were common, allowing storage capacity to be expanded.
https://en.wikipedia.org/wiki/Non-volatile_memory
passage: The proof sketched in the previous paragraph that the consistency of ZFC implies the consistency of ZFC + "there is not an inaccessible cardinal" can be formalized in ZFC. However, assuming that ZFC is consistent, no proof that the consistency of ZFC implies the consistency of ZFC + "there is an inaccessible cardinal" can be formalized in ZFC. This follows from Gödel's second incompleteness theorem, which shows that if ZFC + "there is an inaccessible cardinal" is consistent, then it cannot prove its own consistency. Because ZFC + "there is an inaccessible cardinal" does prove the consistency of ZFC, if ZFC proved that its own consistency implies the consistency of ZFC + "there is an inaccessible cardinal" then this latter theory would be able to prove its own consistency, which is impossible if it is consistent. There are arguments for the existence of inaccessible cardinals that cannot be formalized in ZFC. One such argument, presented by , is that the class of all ordinals of a particular model M of set theory would itself be an inaccessible cardinal if there was a larger model of set theory extending M and preserving powerset of elements of M. ## Existence of a proper class of inaccessibles There are many important axioms in set theory which assert the existence of a proper class of cardinals which satisfy a predicate of interest. In the case of inaccessibility, the corresponding axiom is the assertion that for every cardinal μ, there is an inaccessible cardinal which is strictly larger, .
https://en.wikipedia.org/wiki/Inaccessible_cardinal
passage: Then the family of waves in question consists of all functions $$ F $$ that satisfy those constraints – that is, all solutions of the equation. This approach is extremely important in physics, because the constraints usually are a consequence of the physical processes that cause the wave to evolve. For example, if $$ F(x,t) $$ is the temperature inside a block of some homogeneous and isotropic solid material, its evolution is constrained by the partial differential equation $$ \frac{\partial F}{\partial t}(x,t) = \alpha \left(\frac{\partial^2 F}{\partial x_1^2}(x,t) + \frac{\partial^2 F}{\partial x_2^2}(x,t) + \frac{\partial^2 F}{\partial x_3^2}(x,t) \right) + \beta Q(x,t) $$ where $$ Q(x,t) $$ is the heat that is being generated per unit of volume and time in the neighborhood of $$ x $$ at time $$ t $$ (for example, by chemical reactions happening there); $$ x_1,x_2,x_3 $$ are the Cartesian coordinates of the point $$ x $$ ; $$ \partial F/\partial t $$ is the (first) derivative of $$ F $$ with respect to $$ t $$ ; and $$ \partial^2 F/\partial x_i^2 $$ is the second derivative of $$ F $$ relative to $$ x_i $$ .
https://en.wikipedia.org/wiki/Wave
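A minimal one-dimensional explicit finite-difference sketch of the heat-equation constraint above (the passage's equation is three-dimensional; this reduction, the source term, and the boundary conditions are all assumed for illustration).

```python
import numpy as np

# dF/dt = alpha * d^2F/dx^2 + beta * Q, discretized on [0, 1] with a localized source Q.
alpha, beta = 1.0e-3, 1.0
nx, dx, dt, steps = 101, 0.01, 0.04, 2000
assert alpha * dt / dx**2 <= 0.5            # stability condition for the explicit scheme

x = np.linspace(0.0, 1.0, nx)
F = np.zeros(nx)                             # initial temperature
Q = np.where(np.abs(x - 0.5) < 0.05, 1.0, 0.0)   # heat generated near the middle

for _ in range(steps):
    lap = np.zeros_like(F)
    lap[1:-1] = (F[2:] - 2.0 * F[1:-1] + F[:-2]) / dx**2
    F = F + dt * (alpha * lap + beta * Q)
    F[0] = F[-1] = 0.0                       # ends held at temperature 0

print(F[::20].round(3))                      # coarse temperature profile after the run
```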
passage: Given a topological space $$ M $$ , a $$ C^k $$ atlas is a collection of charts $$ \{\varphi_\alpha : U_\alpha \to \mathbb{R}^n\}_{\alpha \in A} $$ such that $$ \{U_\alpha\}_{\alpha \in A} $$ covers $$ M $$ , and such that for all $$ \alpha $$ and $$ \beta $$ in $$ A $$ , the transition map $$ \varphi_\alpha \circ \varphi_\beta^{-1} $$ is a $$ C^k $$ map; for a smooth or $$ C^\infty $$ atlas the transition maps are smooth maps, for an analytic or $$ C^\omega $$ atlas they are real-analytic maps, and for a holomorphic atlas they are holomorphic maps. Since every real-analytic map is smooth, and every smooth map is $$ C^k $$ for any $$ k $$ , one can see that any analytic atlas can also be viewed as a smooth atlas, and every smooth atlas can be viewed as a $$ C^k $$ atlas. This chain can be extended to include holomorphic atlases, with the understanding that any holomorphic map between open subsets of $$ \mathbb{C}^n $$ can be viewed as a real-analytic map between open subsets of $$ \mathbb{R}^{2n} $$ . Given a differentiable atlas on a topological space, one says that a chart is differentiably compatible with the atlas, or differentiable relative to the given atlas, if the inclusion of the chart into the collection of charts comprising the given differentiable atlas results in a differentiable atlas. A differentiable atlas determines a maximal differentiable atlas, consisting of all charts which are differentiably compatible with the given atlas. A maximal atlas is always very large. For instance, given any chart in a maximal atlas, its restriction to an arbitrary open subset of its domain will also be contained in the maximal atlas. A maximal smooth atlas is also known as a smooth structure; a maximal holomorphic atlas is also known as a complex structure. An alternative but equivalent definition, avoiding the direct use of maximal atlases, is to consider equivalence classes of differentiable atlases, in which two differentiable atlases are considered equivalent if every chart of one atlas is differentiably compatible with the other atlas.
https://en.wikipedia.org/wiki/Differentiable_manifold
passage: In a ring R, the set R itself forms a two-sided ideal of R called the unit ideal. It is often also denoted by $$ (1) $$ since it is precisely the two-sided ideal generated (see below) by the unity . Also, the set $$ \{ 0_R \} $$ consisting of only the additive identity 0R forms a two-sided ideal called the zero ideal and is denoted by . Every (left, right or two-sided) ideal contains the zero ideal and is contained in the unit ideal. - An (left, right or two-sided) ideal that is not the unit ideal is called a proper ideal (as it is a proper subset). Note: a left ideal $$ \mathfrak{a} $$ is proper if and only if it does not contain a unit element, since if $$ u \in \mathfrak{a} $$ is a unit element, then $$ r = (r u^{-1}) u \in \mathfrak{a} $$ for every . Typically there are plenty of proper ideals. In fact, if R is a skew-field, then $$ (0), (1) $$ are its only ideals and conversely: that is, a nonzero ring R is a skew-field if $$ (0), (1) $$ are the only left (or right) ideals.
https://en.wikipedia.org/wiki/Ideal_%28ring_theory%29
passage: Optimize the whole Modern software systems are not simply the sum of their parts, but also the product of their interactions. Defects in software tend to accumulate during the development process – by decomposing the big tasks into smaller tasks, and by standardizing different stages of development, the root causes of defects should be found and eliminated. The larger the system, the more organizations that are involved in its development and the more parts are developed by different teams, the greater the importance of having well defined relationships between different vendors, in order to produce a system with smoothly interacting components. During a longer period of development, a stronger subcontractor network is far more beneficial than short-term profit optimizing, which does not enable win-win relationships. Lean thinking has to be understood well by all members of a project, before implementing in a concrete, real-life situation. "Think big, act small, fail fast; learn rapidly" – these slogans summarize the importance of understanding the field and the suitability of implementing lean principles along the whole software development process. Only when all of the lean principles are implemented together, combined with strong "common sense" with respect to the working environment, is there a basis for success in software development. ## Lean software practices Lean software development practices, or what the Poppendiecks call "tools" are restated slightly from the original equivalents in agile software development.
https://en.wikipedia.org/wiki/Lean_software_development
passage: Dirichlet's theorem on primes in arithmetic progressions then tells us that $$ \pi(x;q,a) \sim \frac{\pi(x)}{\varphi(q)}\ \ (x\rightarrow\infty) $$ where $$ \varphi $$ is Euler's totient function. If we then define the error function $$ E(x;q) = \max_{\text{gcd}(a,q) = 1} \left|\pi(x;q,a) - \frac{\pi(x)}{\varphi(q)}\right| $$ where the max is taken over all $$ a $$ coprime to $$ q $$ , then the Elliott–Halberstam conjecture is the assertion that for every $$ \theta < 1 $$ and $$ A > 0 $$ there exists a constant $$ C > 0 $$ such that $$ \sum_{1 \leq q \leq x^\theta} E(x;q) \leq \frac{C x}{\log^A x} $$ for all $$ x > 2 $$ . This conjecture was proven for all $$ \theta < 1/2 $$ by Enrico Bombieri and A. I. Vinogradov (the Bombieri–Vinogradov theorem, sometimes known simply as "Bombieri's theorem"); this result is already quite useful, being an averaged form of the generalized Riemann hypothesis.
https://en.wikipedia.org/wiki/Elliott%E2%80%93Halberstam_conjecture
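A small numerical illustration of the error term defined above: it computes E(x; q) directly from its definition for x = 10,000 and sums over q up to x^{1/2} (q = 1 is trivial and skipped); the printed x/log²x is only a reference scale, not the conjectured bound.

```python
from math import gcd, log
from sympy import primerange, totient

X, THETA = 10_000, 0.5
PRIMES = list(primerange(2, X + 1))

def error_term(q):
    """E(X; q): max over residues a coprime to q of |pi(X; q, a) - pi(X)/phi(q)|."""
    expected = len(PRIMES) / int(totient(q))
    counts = {}
    for p in PRIMES:
        counts[p % q] = counts.get(p % q, 0) + 1
    return max(abs(counts.get(a, 0) - expected)
               for a in range(1, q) if gcd(a, q) == 1)

total = sum(error_term(q) for q in range(2, round(X ** THETA) + 1))
print(f"sum of E(X; q) for 2 <= q <= X^{THETA}: {total:.1f}")
print(f"reference scale X / log(X)^2: {X / log(X) ** 2:.1f}")
```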
passage: Another possible way of verifying computer-aided proofs is to generate their reasoning steps in a machine readable form, and then use a proof checker program to demonstrate their correctness. Since validating a given proof is much easier than finding a proof, the checker program is simpler than the original assistant program, and it is correspondingly easier to gain confidence into its correctness. However, this approach of using a computer program to prove the output of another program correct does not appeal to computer proof skeptics, who see it as adding another layer of complexity without addressing the perceived need for human understanding. Another argument against computer-aided proofs is that they lack mathematical elegance—that they provide no insights or new and useful concepts. In fact, this is an argument that could be advanced against any lengthy proof by exhaustion. An additional philosophical issue raised by computer-aided proofs is whether they make mathematics into a quasi-empirical science, where the scientific method becomes more important than the application of pure reason in the area of abstract mathematical concepts. This directly relates to the argument within mathematics as to whether mathematics is based on ideas, or "merely" an exercise in formal symbol manipulation. It also raises the question whether, if according to the Platonist view, all possible mathematical objects in some sense "already exist", whether computer-aided mathematics is an observational science like astronomy, rather than an experimental one like physics or chemistry.
https://en.wikipedia.org/wiki/Computer-assisted_proof
passage: For any single operation, the bottom-up technique is advantageous if the number of downward movements is at least of the height of the tree (when the number of comparisons is times the height for both techniques), and it turns out that this is more than true on average, even for worst-case inputs. A naïve implementation of this conceptual algorithm would cause some redundant data copying, as the sift-up portion undoes part of the sifting down. A practical implementation searches downward for a leaf where −∞ would be placed, then upward for where the root should be placed. Finally, the upward traversal continues to the root's starting position, performing no more comparisons but exchanging nodes to complete the necessary rearrangement. This optimized form performs the same number of exchanges as top-down . Because it goes all the way to the bottom and then comes back up, it is called heapsort with bounce by some authors.
https://en.wikipedia.org/wiki/Heapsort
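A sketch of the bottom-up ("bounce") sift described above: descend along larger children to a leaf, climb back up to where the root value belongs, then exchange values along that path without further comparisons. This is one plausible rendering of the description, not the article's reference code.

```python
def _leaf_search(a, i, end):
    """Follow the path of larger children from i down to a leaf (no comparisons with a[i])."""
    j = i
    while 2 * j + 2 < end:
        j = 2 * j + 2 if a[2 * j + 2] > a[2 * j + 1] else 2 * j + 1
    if 2 * j + 1 < end:                   # node with only a left child
        j = 2 * j + 1
    return j

def _sift_down(a, i, end):
    """Bottom-up sift: find the leaf, climb back to where a[i] belongs, then
    shift the values on that path up by one position and drop a[i] into place."""
    j = _leaf_search(a, i, end)
    while a[i] > a[j]:                    # climb up until a position that can hold a[i]
        j = (j - 1) // 2
    x, a[j] = a[j], a[i]                  # place the old root value here
    while j > i:                          # rotate the displaced values up along the path
        j = (j - 1) // 2
        x, a[j] = a[j], x

def heapsort(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # build a max-heap
        _sift_down(a, i, n)
    for end in range(n - 1, 0, -1):       # swap the max to the back, shrink, re-sift
        a[0], a[end] = a[end], a[0]
        _sift_down(a, 0, end)
    return a

print(heapsort([5, 2, 9, 1, 7, 3, 8, 6]))   # [1, 2, 3, 5, 6, 7, 8, 9]
```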
passage: The two writes could have been done by the same processor or by different processors. As in sequential consistency, reads do not need to reflect changes instantaneously; however, they need to reflect all changes to a variable sequentially.

| Sequence | P1 | P2 | P3 | P4 |
|---|---|---|---|---|
| 1 | W(x)1 | R(x)1 | R(x)1 | R(x)1 |
| 2 | | W(x)2 | | |
| 3 | W(x)3 | | R(x)3 | R(x)2 |
| 4 | | | R(x)2 | R(x)3 |

W(x)2 happens after W(x)1 due to the read made by P2 to x before W(x)2, hence this example is causally consistent under Hutto and Ahamad's definition (although not under Tanenbaum et al.'s, because W(x)2 and W(x)3 are not seen in the same order for all processes). However R(x)2 and R(x)3 happen in a different order on P3 and P4, hence this example is sequentially inconsistent. ### Processor consistency In order for consistency in data to be maintained and to attain scalable processor systems where every processor has its own memory, the processor consistency model was derived. All processors need to be consistent in the order in which they see writes done by one processor and in the way they see writes by different processors to the same location (coherence is maintained). However, they do not need to be consistent when the writes are by different processors to different locations. Every write operation can be divided into several sub-writes to all memories.
https://en.wikipedia.org/wiki/Consistency_model
passage: $$ - (See Integral of the secant function. This result was a well-known conjecture in the 17th century.) - $$ \int \csc{x} \, dx = -\ln{\left| \csc{x} + \cot{x}\right|} + C = \ln{\left| \csc{x} - \cot{x}\right|} + C = \ln{\left| \tan {\frac{x}{2}} \right|} + C $$ - $$ \int \sec^2 x \, dx = \tan x + C $$ - $$ \int \csc^2 x \, dx = -\cot x + C $$ - $$ \int \sec{x} \, \tan{x} \, dx = \sec{x} + C $$ - $$ \int \csc{x} \, \cot{x} \, dx = -\csc{x} + C $$ - $$ \int \sin^2 x \, dx = \frac{1}{2}\left(x - \frac{\sin 2x}{2} \right) + C = \frac{1}{2}(x - \sin x\cos x ) + C $$ -
https://en.wikipedia.org/wiki/Lists_of_integrals
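Each antiderivative in such a list can be checked by differentiation; for instance (this verification is ours, not part of the list), the $$ \sin^2 x $$ entry follows from

$$ \frac{d}{dx}\left[\tfrac{1}{2}\left(x - \sin x\cos x\right)\right] = \tfrac{1}{2}\left(1 - (\cos^2 x - \sin^2 x)\right) = \tfrac{1}{2}\left(1 - \cos 2x\right) = \sin^2 x . $$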
passage: Set a tag array equal in size to the original array and initialize it to false values.
1. [Main Sort] Determine whether all buckets of the original array have been sorted. If the sorting is not complete, execute the [Divide function].
2. [Divide function] Find the maximum and minimum values in the bucket. If the maximum value is equal to the minimum value, the sorting of that bucket is complete and the division stops.
3. Set up a two-dimensional array of empty buckets, and divide the items into the buckets according to the interpolation number.
4. After dividing into the buckets, mark the starting position of each bucket as a true value in the tag array, and put the items back into the original array one by one from all the buckets that are not empty.
5. Return to [Main Sort].
https://en.wikipedia.org/wiki/Interpolation_sort
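The numbered steps above can be condensed into a short recursive sketch; the Python below is a hypothetical rendering (names and the return-a-new-list style are ours) of the core idea of dividing a bucket by the interpolation number and recursing.

```python
def interpolation_sort(items):
    """Recursive bucket sort driven by linear interpolation of the keys."""
    n = len(items)
    if n <= 1:
        return items
    lo, hi = min(items), max(items)
    if lo == hi:                      # all keys equal: this bucket is already sorted
        return items
    buckets = [[] for _ in range(n)]  # one (possibly empty) bucket per item
    for x in items:
        # interpolation number: position of x between lo and hi, scaled to n - 1
        idx = int((x - lo) / (hi - lo) * (n - 1))
        buckets[idx].append(x)
    result = []
    for b in buckets:                 # gather the non-empty buckets in order
        result.extend(interpolation_sort(b))
    return result

print(interpolation_sort([0.42, 4, 1, 9, 7, 7, 2]))  # [0.42, 1, 2, 4, 7, 7, 9]
```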
passage: Designers do not work this way – extensive empirical evidence has demonstrated that designers do not act as the rational model suggests.
1. Unrealistic assumptions – goals are often unknown when a design project begins, and the requirements and constraints continue to change.
### Action-centric model
The action-centric perspective is a label given to a collection of interrelated concepts, which are antithetical to the rational model. It posits that:
1. Designers use creativity and emotion to generate design candidates.
2. The design process is improvised.
3. No universal sequence of stages is apparent – analysis, design, and implementation are contemporary and inextricably linked.
The action-centric perspective is based on an empiricist philosophy and broadly consistent with the agile approach and amethodical development. Substantial empirical evidence supports the veracity of this perspective in describing the actions of real designers. Like the rational model, the action-centric model sees design as informed by research and knowledge. At least two views of design activity are consistent with the action-centric perspective. Both involve these three basic activities:
- In the reflection-in-action paradigm, designers alternate between "framing", "making moves", and "evaluating moves". "Framing" refers to conceptualizing the problem, i.e., defining goals and objectives. A "move" is a tentative design decision. The evaluation process may lead to further moves in the design.
- In the sensemaking–coevolution–implementation framework, designers alternate between its three titular activities.
https://en.wikipedia.org/wiki/Design
passage: In particular, the estimated effects may be biased if CCTV is introduced in response to crime trends. In 2012, cities such as Manchester in the UK were using DVR-based technology to improve accessibility for crime prevention. In 2013, the City of Philadelphia Auditor found that the $15 million system was operational only 32% of the time. There is anecdotal evidence that CCTV aids in detection and conviction of offenders; for example, UK police forces routinely seek CCTV recordings after crimes. Cameras have also been installed on public transport in the hope of deterring crime. A 2017 review published in the Journal of Scandinavian Studies in Criminology and Crime Prevention compiles seven studies that use such research designs. The studies found that CCTV reduced crime by 24–28% in public streets and urban subway stations. It also found that CCTV could decrease unruly behaviour in football stadiums and theft in supermarkets/mass merchant stores. However, there was no evidence of CCTV having desirable effects in parking facilities or suburban subway stations. Furthermore, the review indicates that CCTV is more effective in preventing property crimes than in violent crimes. However, a 2019 systematic review covering 40 years of studies reported that the most consistent crime-reduction effects of CCTV were in car parks. A more open question is whether most CCTV is cost-effective. While low-quality domestic kits are cheap, the professional installation and maintenance of high-definition CCTV is expensive.
https://en.wikipedia.org/wiki/Closed-circuit_television
passage: This is a general issue with area graphs, and area is hard to judge – see "Cleveland's hierarchy". For example, the alternating data 9, 1, 9, 1, 9, 1 yields a spiking radar chart (which goes in and out), while reordering the data as 9, 9, 9, 1, 1, 1 instead yields two distinct wedges (sectors). In some cases there is a natural structure, and radar charts can be well-suited. For example, for diagrams of data that vary over a 24-hour cycle, the hourly data is naturally related to its neighbor and has a cyclic structure, so it can naturally be displayed as a radar chart. One set of guidelines on the use of radar charts (or rather the closely related "polar area graph") is:
- you don't mind reading stacked areas instead of position along a common scale (see Cleveland's Hierarchy),
- the data set is truly cyclic, not linear, and
- there are two series to compare, one much smaller than the other.
### Data set size
Radar charts are helpful for small-to-moderate-sized multivariate data sets. Their primary weakness is that their effectiveness is limited to data sets with fewer than a few hundred points; after that, they tend to be overwhelming. Further, when using radar charts with multiple dimensions or samples, the chart may become cluttered and harder to interpret as the number of samples grows.
https://en.wikipedia.org/wiki/Radar_chart
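For the cyclic 24-hour case mentioned above, a radar (polar) chart is straightforward to produce; the following is a small hypothetical matplotlib sketch with made-up hourly values.

```python
import numpy as np
import matplotlib.pyplot as plt

hours = np.arange(24)
traffic = 50 + 40 * np.sin((hours - 6) / 24 * 2 * np.pi)   # made-up hourly figures

angles = hours / 24 * 2 * np.pi
# Close the loop so hour 23 connects back to hour 0.
angles_closed = np.append(angles, angles[0])
values_closed = np.append(traffic, traffic[0])

ax = plt.subplot(projection="polar")
ax.plot(angles_closed, values_closed)
ax.fill(angles_closed, values_closed, alpha=0.25)
ax.set_xticks(angles)          # one spoke per hour
ax.set_xticklabels(hours)
plt.show()
```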
passage: Using the same logic, Alice also steps forward on turn two. Assume that there is a third child, Charlie. If only Alice is muddy ( $$ X=1 $$ ), she will see no muddy faces and will step forward on turn one. If both Alice and Bob are muddy ( $$ X=2 $$ ), neither can step forward on turn one but each will know by turn two that the other saw a muddy face—which they can see is not Charlie's—so their own face must be muddy and both will step forward on turn two. Charlie, seeing two muddy faces, does not know on turn two whether his own face is muddy or not until Alice and Bob both step forward (indicating that his own face is clean). If all three are muddy ( $$ X=3 $$ ), each is in the position of Charlie when $$ X=2 $$ : when nobody steps forward on turn two, each knows that the other two must each see two muddy faces (otherwise they would have stepped forward), so their own face must be muddy, and all three step forward on turn three. It can be proven that $$ X $$ muddy children will step forward at turn $$ X $$ .
### Game-theoretic solution
The muddy children puzzle can also be solved using backward induction from game theory. It can be represented as an extensive-form game of imperfect information. Every player has two actions — stay back and step forward. There is a move by nature at the start of the game, which determines the children with and without muddy faces. As in non-cooperative games, the children do not communicate. Every turn is a simultaneous move by the children.
https://en.wikipedia.org/wiki/Induction_puzzles
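The induction argument is easy to simulate. Below is a short, hypothetical Python sketch (not from the article) in which every child applies the rule "step forward at turn k + 1 if you see k muddy faces", which is exactly what the reasoning licenses; the loop stops as soon as anyone steps forward.

```python
def muddy_children(muddy):
    """muddy[i] is True if child i has a muddy face; at least one must be muddy."""
    assert any(muddy), "the announcement 'at least one child is muddy' must be true"
    n = len(muddy)
    turn = 0
    while True:
        turn += 1
        stepped = [i for i in range(n)
                   if turn == sum(muddy[j] for j in range(n) if j != i) + 1]
        if stepped:
            return turn, stepped

for x in range(1, 5):
    faces = [True] * x + [False] * (4 - x)   # x muddy children out of 4
    turn, who = muddy_children(faces)
    print(f"X={x}: children {who} step forward at turn {turn}")
# Output confirms that the X muddy children, and only they, step forward at turn X.
```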
passage: In the Jordan normal form, we have written $$ V = \bigoplus_{i = 1}^r V_i $$ where $$ r $$ is the number of Jordan blocks and $$ x |_{V_i} $$ is one Jordan block. Now let $$ f(t) = \operatorname{det}(t I - x) $$ be the characteristic polynomial of $$ x $$ . Because $$ f $$ splits, it can be written as $$ f(t) = \prod_{i=1}^r (t - \lambda_i)^{d_i} $$ , where $$ r $$ is the number of Jordan blocks, $$ \lambda_i $$ are the distinct eigenvalues, and $$ d_i $$ are the sizes of the Jordan blocks, so $$ d_i = \dim V_i $$ . Now, the Chinese remainder theorem applied to the polynomial ring $$ k[t] $$ gives a polynomial $$ p(t) $$ satisfying the conditions $$ p(t) \equiv 0 \bmod t,\, p(t) \equiv \lambda_i \bmod (t - \lambda_i)^{d_i} $$ (for all i). (There is a redundancy in the conditions if some $$ \lambda_i $$ is zero but that is not an issue; just remove it from the conditions.)
https://en.wikipedia.org/wiki/Jordan%E2%80%93Chevalley_decomposition
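A small worked case (ours, not taken from the passage) makes the construction concrete. Take a single 2×2 Jordan block with eigenvalue 1:

$$ x = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \qquad f(t) = (t-1)^2 . $$

The conditions ask for $$ p(t) \equiv 0 \bmod t $$ and $$ p(t) \equiv 1 \bmod (t-1)^2 $$ ; the polynomial $$ p(t) = 2t - t^2 $$ satisfies both, since $$ p(0) = 0 $$ and $$ p(t) - 1 = -(t-1)^2 $$ . Evaluating it on $$ x $$ gives

$$ p(x) = 2x - x^2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad x - p(x) = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, $$

which are the semisimple (diagonalizable) and nilpotent parts of $$ x $$ .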
passage: Here is a table showing the conditional probabilities of being hit, depending on the state of the lights. (Note that the columns in this table must add up to 1 because the probability of being hit or not hit is 1 regardless of the state of the light.)

+ Conditional distribution:
         | Red  | Yellow | Green
 Not Hit | 0.99 | 0.9    | 0.2
 Hit     | 0.01 | 0.1    | 0.8

To find the joint probability distribution, more data is required. For example, suppose P(L = red) = 0.2, P(L = yellow) = 0.1, and P(L = green) = 0.7. Multiplying each column in the conditional distribution by the probability of that column occurring results in the joint probability distribution of H and L, given in the central 2×3 block of entries. (Note that the cells in this 2×3 block add up to 1.)

+ Joint distribution:
         | Red   | Yellow | Green | Marginal probability P(H)
 Not Hit | 0.198 | 0.09   | 0.14  | 0.428
 Hit     | 0.002 | 0.01   | 0.56  | 0.572
 Total   | 0.2   | 0.1    | 0.7   | 1

The marginal probability P(H = Hit) is the sum 0.572 along the H = Hit row of this joint distribution table, as this is the probability of being hit when the lights are red OR yellow OR green. Similarly, the marginal probability P(H = Not Hit) is the sum along the H = Not Hit row.
## Multivariate distributions
For multivariate distributions, formulae similar to those above apply with the symbols X and/or Y being interpreted as vectors.
https://en.wikipedia.org/wiki/Marginal_distribution
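The same multiply-then-sum computation can be written in a few lines; the snippet below is a hypothetical sketch (the dictionary keys are ours) that reproduces the marginal P(H = Hit) = 0.572 from the tables above.

```python
p_light = {"red": 0.2, "yellow": 0.1, "green": 0.7}          # prior P(L)
p_hit_given_light = {"red": 0.01, "yellow": 0.1, "green": 0.8}  # conditional P(Hit | L)

# Joint P(H, L) = P(H | L) * P(L), one entry per (hit-state, light-state) pair.
joint = {(h, l): (p if h == "hit" else 1 - p) * p_light[l]
         for l, p in p_hit_given_light.items()
         for h in ("hit", "not hit")}

# Marginal P(H = Hit): sum the joint over every state of the light.
p_hit = sum(prob for (h, _), prob in joint.items() if h == "hit")
print(round(p_hit, 3))   # 0.572
```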
passage: The Kaplansky density theorem can be used to formulate some approximations with respect to the strong operator topology. 1) If h is a positive operator in (A−)1, then h is in the strong-operator closure of the set of self-adjoint operators in (A+)1, where A+ denotes the set of positive operators in A. 2) If A is a C*-algebra acting on the Hilbert space H and u is a unitary operator in A−, then u is in the strong-operator closure of the set of unitary operators in A. In the density theorem and 1) above, the results also hold if one considers a ball of radius r > 0, instead of the unit ball. ## Proof The standard proof uses the fact that a bounded continuous real-valued function f is strong-operator continuous. In other words, for a net {aα} of self-adjoint operators in A, the continuous functional calculus a → f(a) satisfies, $$ \lim f(a_{\alpha}) = f (\lim a_{\alpha}) $$ in the strong operator topology. This shows that self-adjoint part of the unit ball in A− can be approximated strongly by self-adjoint elements in A. A matrix computation in M2(A) considering the self-adjoint operator with entries 0 on the diagonal and a and a* at the other positions, then removes the self-adjointness restriction and proves the theorem.
https://en.wikipedia.org/wiki/Kaplansky_density_theorem
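Written out, the self-adjoint element of M2(A−) used in that last step (the notation ã is ours) is

$$ \tilde a = \begin{pmatrix} 0 & a \\ a^* & 0 \end{pmatrix}, $$

which is self-adjoint for any a, so the self-adjoint case of the theorem applies to it; the (1,2) entries of the approximating net in the unit ball of M2(A) then converge strongly to a.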
passage: If there is only one page table, different applications running at the same time use different parts of a single range of virtual addresses. If there are multiple page or segment tables, there are multiple virtual address spaces and concurrent applications with separate page tables redirect to different real addresses. Some earlier systems with smaller real memory sizes, such as the SDS 940, used page registers instead of page tables in memory for address translation. ### Paging supervisor This part of the operating system creates and manages page tables and lists of free page frames. In order to ensure that there will be enough free page frames to quickly resolve page faults, the system may periodically steal allocated page frames, using a page replacement algorithm, e.g., a least recently used (LRU) algorithm. Stolen page frames that have been modified are written back to auxiliary storage before they are added to the free queue. On some systems the paging supervisor is also responsible for managing translation registers that are not automatically loaded from page tables. Typically, a page fault that cannot be resolved results in an abnormal termination of the application. However, some systems allow the application to have exception handlers for such errors. The paging supervisor may handle a page fault exception in several different ways, depending on the details: - If the virtual address is invalid, the paging supervisor treats it as an error. - If the page is valid and the page information is not loaded into the MMU, the page information will be stored into one of the page registers.
https://en.wikipedia.org/wiki/Virtual_memory
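As a toy model of the page-stealing behaviour described above, here is a hypothetical Python sketch (class and method names are ours) of a supervisor that resolves page faults with an LRU replacement policy.

```python
from collections import OrderedDict

class PagingSupervisor:
    """Toy paging supervisor: resolves page faults with LRU replacement."""
    def __init__(self, num_frames):
        self.num_frames = num_frames
        self.frames = OrderedDict()   # resident pages, least recently used first
        self.faults = 0

    def access(self, page):
        if page in self.frames:       # page is resident: just mark it recently used
            self.frames.move_to_end(page)
            return
        self.faults += 1              # page fault
        if len(self.frames) >= self.num_frames:
            victim, _ = self.frames.popitem(last=False)  # steal the LRU frame
            # (a real supervisor would write the victim back if it was modified)
        self.frames[page] = "loaded from auxiliary storage"

sup = PagingSupervisor(num_frames=3)
for p in [1, 2, 3, 1, 4, 2]:
    sup.access(p)
print(sup.faults)  # 5 faults for this reference string
```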
passage: - Import-export: $$ P \to (Q \to R) \equiv (P \land Q) \to R $$
- Negated conditionals: $$ \neg(P \to Q) \equiv P \land \neg Q $$
- Or-and-if: $$ P \to Q \equiv \neg P \lor Q $$
- Commutativity of antecedents: $$ \big(P \to (Q \to R)\big) \equiv \big(Q \to (P \to R)\big) $$
https://en.wikipedia.org/wiki/Material_conditional
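Each listed equivalence can be verified mechanically by checking every truth-value assignment; the snippet below is a small hypothetical Python check (not part of the article).

```python
from itertools import product

implies = lambda a, b: (not a) or b   # material conditional as a Boolean function

for P, Q, R in product([False, True], repeat=3):
    # Import-export: P -> (Q -> R)  ==  (P and Q) -> R
    assert implies(P, implies(Q, R)) == implies(P and Q, R)
    # Negated conditionals: not (P -> Q)  ==  P and not Q
    assert (not implies(P, Q)) == (P and not Q)
    # Or-and-if: P -> Q  ==  (not P) or Q
    assert implies(P, Q) == ((not P) or Q)
    # Commutativity of antecedents
    assert implies(P, implies(Q, R)) == implies(Q, implies(P, R))
print("all equivalences hold on every valuation")
```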
passage: ATP is synthesized by the ATP synthase enzyme when the chemiosmotic gradient is used to drive the phosphorylation of ADP. The electrons are finally transferred to exogenous oxygen and, with the addition of two protons, water is formed.
## Efficiency of ATP production
The table below describes the reactions involved when one glucose molecule is fully oxidized into carbon dioxide. It is assumed that all the reduced coenzymes are oxidized by the electron transport chain and used for oxidative phosphorylation.

Step | Coenzyme yield | ATP yield | Source of ATP
Glycolysis preparatory phase | | −2 | Phosphorylation of glucose and fructose 6-phosphate uses two ATP from the cytoplasm.
Glycolysis pay-off phase | | 4 | Substrate-level phosphorylation
 | 2 NADH | 3 or 5 | Oxidative phosphorylation: each NADH produces net 1.5 ATP (instead of the usual 2.5) due to NADH transport over the mitochondrial membrane
Oxidative decarboxylation of pyruvate | 2 NADH | 5 | Oxidative phosphorylation
Krebs cycle | | 2 | Substrate-level phosphorylation
 | 6 NADH | 15 | Oxidative phosphorylation
 | 2 FADH2 | 3 | Oxidative phosphorylation
Total yield | | 30 or 32 ATP | From the complete oxidation of one glucose molecule to carbon dioxide and oxidation of all the reduced coenzymes.
https://en.wikipedia.org/wiki/Cellular_respiration
passage: Let A0, A1, ... An be vector fields on Rd. They are said to satisfy Hörmander's condition if, for every point x ∈ Rd, the vectors $$ \begin{align} &A_{j_0} (x)~,\\ &[A_{j_{0}} (x), A_{j_{1}} (x)]~,\\ &[[A_{j_{0}} (x), A_{j_{1}} (x)], A_{j_{2}} (x)]~,\\ &\quad\vdots\quad \end{align} \qquad 0 \leq j_{0}, j_{1}, \ldots, j_{n} \leq n $$ span Rd. They are said to satisfy the parabolic Hörmander condition if the same holds true, but with the index $$ j_0 $$ taking only values in 1,...,n.
https://en.wikipedia.org/wiki/H%C3%B6rmander%27s_condition
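A standard illustration (not taken from the passage) is the pair of vector fields $$ X_1 = \partial_x $$ and $$ X_2 = x\,\partial_y $$ on R2: at points on the line x = 0 the fields themselves span only the x-direction, but their bracket fills the gap,

$$ [X_1, X_2] = [\partial_x,\; x\,\partial_y] = \partial_y , $$

so $$ X_1 $$ together with $$ [X_1, X_2] $$ spans R2 at every point and Hörmander's condition holds, even though $$ X_1, X_2 $$ alone fail to span on that line.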