passage: There are different definitions of divisors, but in general they form an abstraction of a codimension-one subvariety of an algebraic variety, the set of solution points of a system of polynomial equations. In the case where the system of equations has one degree of freedom (its solutions form an algebraic curve or Riemann surface), a subvariety has codimension one when it consists of isolated points, and in this case a divisor is again a signed multiset of points from the variety. The meromorphic functions on a compact Riemann surface have finitely many zeros and poles, and their divisors form a subgroup of a free abelian group over the points of the surface, with multiplication or division of functions corresponding to addition or subtraction of group elements. To be the divisor of a meromorphic function, an element of the free abelian group must have multiplicities summing to zero, and meet certain additional constraints depending on the surface. ### Group rings The integral group ring $$ \Z[G] $$ , for any group $$ G $$ , is a ring whose additive group is the free abelian group over $$ G $$ . When $$ G $$ is finite and abelian, the multiplicative group of units in $$ \Z[G] $$ has the structure of a direct product of a finite group and a finitely generated free abelian group.
https://en.wikipedia.org/wiki/Free_abelian_group
passage: Anaspida (without shield) is an extinct group of primitive jawless vertebrates that lived during the Silurian and Devonian periods. They are classically regarded as the ancestors of lampreys. Anaspids were small marine agnathans that lacked a heavy bony shield and paired fins, but had a striking, highly hypocercal tail. They first appeared in the Early Silurian, and flourished until the Late Devonian extinction, when most species, save for lampreys, became extinct due to the environmental upheaval during that time. †Cephalaspidomorphi (extinct) Cephalaspidomorphi is a broad group of extinct armored agnathans found in Silurian and Devonian strata of North America, Europe, and China, and is named in reference to the osteostracan genus Cephalaspis. Most biologists regard this taxon as extinct, but the name is sometimes used in the classification of lampreys, as lampreys are sometimes thought to be related to cephalaspids. If lampreys are included, they would extend the known range of the group from the early Silurian period through the Mesozoic, and into the present day. Cephalaspidomorphi were, like most contemporary fish, very well armoured. Particularly, the head shield was well developed, protecting the head, gills and the anterior section of the innards. The body was in most forms well armoured as well.
https://en.wikipedia.org/wiki/Agnatha
passage: In 1924, Frank E. Denny discovered that it was the molecule ethylene emitted by the kerosene lamps that induced the ripening. Reporting in the Botanical Gazette, he wrote:Ethylene was very effective in bringing about the desired result, concentrations as low as one part (by volume) of ethylene in one million parts of air being sufficient to cause green lemons to turn yellow in about six to ten days... Furthermore, coloring with either ethylene or gas from the kerosene stoves caused the loss of the "buttons" (calyx, receptacle, and a portion of the peduncle)... Yellowing of the ethylene treated fruit became visible about the third or fourth day, and full yellow color was developed in six to ten days. Untreated fruit remained green during the same period of time. The same year, Denny published the experimental details separately, and also experimentally showed that use of ethylene was more advantageous than that of kerosene. In 1934, British biologist Richard Gane discovered that the chemical constituent in ripe bananas could cause ripening of green bananas, as well as faster growth of pea. He showed that the same growth effect could be induced by ethylene. Reporting in Nature that ripe fruit (in this case Worcester Pearmain apple) produced ethylene he said:The amount of ethylene produced [by the apple] is very small—perhaps of the order of 1 cubic centimetre during the whole life-history of the fruit; and the cause of its prodigious biological activity in such small concentration is a problem for further research.
https://en.wikipedia.org/wiki/Ethylene_%28plant_hormone%29
passage: These absorbers prevent corruption of the measurement due to reflections. A compact range is an anechoic chamber with a reflector to simulate far field conditions. Typical values for a centimeter wave radar are: - Insect: 0.00001 m2 - Bird: 0.01 m2 - Stealth aircraft: <0.1 m2 (e.g. F-117A: 0.001 m2) - Surface-to-air-missile: ≈0.1 m2 - Human: 1 m2 - small combat aircraft: 2–3 m2 - large combat aircraft: 5–6 m2 - Cargo aircraft: up to 100 m2 - Coastal trading vessel (55 m length): 300–4000 m2 - Corner reflector with 1.5 m edge length: ≈20,000 m2 (M. Skolnik: Introduction to radar systems. 2nd Edition, McGraw-Hill, Inc., 1980, p. 44) - Frigate (103 m length): 5000–100,000 m2 - Container ship (212 m length): 10,000–80,000 m2 ## Calculation Quantitatively, RCS is calculated in three-dimensions as $$ \sigma = \lim_{r \to \infty} 4 \pi r^{2} \frac{S_{s}}{S_{i}} $$ where $$ \sigma $$ is the RCS, $$ S_{i} $$ is the incident power density measured at the target, and $$ S_{s} $$ is the scattered power density seen at a distance $$ r $$ away from the target.
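A minimal Python sketch of the defining formula above, evaluating σ from an assumed far-field measurement (the numerical values are purely illustrative):

```python
import math

def rcs_estimate(s_incident: float, s_scattered: float, r: float) -> float:
    """Approximate sigma = 4*pi*r^2 * Ss/Si at a large but finite range r.

    The definition takes the limit r -> infinity; in practice r only needs to
    place the target in the far field of the measurement antenna.
    """
    return 4.0 * math.pi * r**2 * s_scattered / s_incident

# Hypothetical measurement: Si = 1.0 W/m^2 at the target and Ss = 1.6e-7 W/m^2
# observed 1 km away give sigma of about 2 m^2, the order of a small combat aircraft.
print(rcs_estimate(1.0, 1.6e-7, 1000.0))
```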
https://en.wikipedia.org/wiki/Radar_cross_section
passage: ## Complexity Selection sort is not difficult to analyze compared to other sorting algorithms, since none of the loops depend on the data in the array. Selecting the minimum requires scanning $$ n $$ elements (taking $$ n-1 $$ comparisons) and then swapping it into the first position. Finding the next lowest element requires scanning the remaining $$ n-1 $$ elements (taking $$ n-2 $$ comparisons) and so on. Therefore, the total number of comparisons is $$ (n-1)+(n-2)+\dots+1 = \sum_{i=1}^{n-1}i $$ By arithmetic progression, $$ \sum_{i=1}^{n-1}i= \frac{(n-1)+1}{2}(n-1)= \frac{1}{2}n(n-1)= \frac{1}{2}(n^2-n) $$ which is of complexity $$ O(n^2) $$ in terms of number of comparisons. ## Comparison to other sorting algorithms Among quadratic sorting algorithms (sorting algorithms with a simple average-case of Θ(n2)), selection sort almost always outperforms bubble sort and gnome sort. Insertion sort is very similar in that after the kth iteration, the first $$ k $$ elements in the array are in sorted order. Insertion sort's advantage is that it only scans as many elements as it needs in order to place the $$ k+1 $$ st element, while selection sort must scan all remaining elements to find the $$ k+1 $$ st element.
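A short Python rendering of the algorithm being analysed, with the comparison count tracked so it can be checked against the n(n-1)/2 total derived above (a sketch, not tied to any particular textbook listing):

```python
def selection_sort(a):
    """Sort the list a in place and return the number of comparisons made."""
    comparisons = 0
    n = len(a)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):             # scan the remaining unsorted elements
            comparisons += 1
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]   # swap the minimum into position i
    return comparisons

data = [64, 25, 12, 22, 11]
count = selection_sort(data)
print(data, count)                            # [11, 12, 22, 25, 64] 10
assert count == len(data) * (len(data) - 1) // 2
```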
https://en.wikipedia.org/wiki/Selection_sort
passage: This can include obvious information such as usernames and account information, but also extends to more personal data like physical movements, interaction habits, and responses to virtual environments. In addition, advanced VR systems can capture biometric data like voice patterns, eye movements, and physiological responses to VR experiences. Virtual reality technology has grown substantially since its inception, moving from a niche technology to a mainstream consumer product. As the user base has grown, so too has the amount of personal data collected by these systems. This data can be used to improve VR systems, to provide personalized experiences, or to collect demographic information for marketing purposes. However, it also raises significant privacy concerns, especially when this data is stored, shared, or sold without the user's explicit consent. Existing data protection and privacy laws like the General Data Protection Regulation (GDPR) in the EU, and the California Consumer Privacy Act (CCPA) in the United States, can be applied to VR. These regulations require companies to disclose how they collect and use data, and give users a degree of control over their personal information. Despite these regulations, enforcing privacy laws in VR can be challenging due to the global nature of the technology and the vast amounts of data collected. Due to its history of privacy issues, the involvement of Meta Platforms (formerly Facebook, Inc.) in the VR market has led to privacy concerns specific to its platforms.
https://en.wikipedia.org/wiki/Virtual_reality
passage: For simplicity of the discussion here and below, the formulas are generally presented in weakened forms without all possible insertions of double-negations in the antecedents. More general variants hold. Incorporating the predicate $$ \psi $$ and currying, the following generalization also entails the relation between implication and conjunction in the predicate calculus, discussed below. $$ \big(\forall x \ \phi(x)\to (\psi(x)\to\varphi)\big)\,\,\leftrightarrow\,\,\Big(\big(\exists x \ \phi(x)\land \psi(x)\big)\to\varphi\Big) $$ If the predicate $$ \psi $$ is decidedly false for all $$ x $$ , then this equivalence is trivial. If $$ \psi $$ is decidedly true for all $$ x $$ , the schema simply reduces to the previously stated equivalence. In the language of classes, $$ A=\{x\mid\phi(x)\} $$ and $$ B=\{x\mid\psi(x)\} $$ , the special case of this equivalence with false $$ \varphi $$ equates two characterizations of disjointness $$ A\cap B=\emptyset $$ : $$ \forall(x\in A).x\notin B\,\,\leftrightarrow\,\,\neg\exists(x\in A).x\in B $$
https://en.wikipedia.org/wiki/Intuitionistic_logic
passage: This shows that every point on the unit circle in the discrete-time filter z-plane, $$ z = e^{ j \omega_d T} $$ is mapped to a point on the $$ j \omega $$ axis on the continuous-time filter s-plane, $$ s = j \omega_a $$ . That is, the discrete-time to continuous-time frequency mapping of the bilinear transform is $$ \omega_a = \frac{2}{T} \tan \left( \omega_d \frac{T}{2} \right) $$ and the inverse mapping is $$ \omega_d = \frac{2}{T} \arctan \left( \omega_a \frac{T}{2} \right). $$ The discrete-time filter behaves at frequency $$ \omega_d $$ the same way that the continuous-time filter behaves at frequency $$ (2/T) \tan(\omega_d T/2) $$ . Specifically, the gain and phase shift that the discrete-time filter has at frequency $$ \omega_d $$ is the same gain and phase shift that the continuous-time filter has at frequency $$ (2/T) \tan(\omega_d T/2) $$ .
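A small Python check of the warping relations above; the sample period and test frequency are arbitrary illustrative choices:

```python
import math

def prewarp(omega_d: float, T: float) -> float:
    """Continuous-time frequency omega_a matched to the discrete-time omega_d."""
    return (2.0 / T) * math.tan(omega_d * T / 2.0)

def unwarp(omega_a: float, T: float) -> float:
    """Inverse mapping back to the discrete-time frequency omega_d."""
    return (2.0 / T) * math.atan(omega_a * T / 2.0)

T = 1e-3                                  # 1 kHz sampling
wd = 2 * math.pi * 100.0                  # 100 Hz in the discrete-time filter
wa = prewarp(wd, T)
print(wa / (2 * math.pi))                 # ~103.4 Hz: the analog frequency it corresponds to
print(unwarp(wa, T) / (2 * math.pi))      # recovers 100 Hz
```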
https://en.wikipedia.org/wiki/Bilinear_transform
passage: Because there is no load to absorb that power, it retransmits all of that power, possibly with a phase shift which is critically dependent on the element's exact length. Thus such a conductor can be arranged in order to transmit a second copy of a transmitter's signal in order to affect the radiation pattern (and feedpoint impedance) of the element electrically connected to the transmitter. Antenna elements used in this way are known as passive radiators. A Yagi–Uda array uses passive elements to greatly increase gain in one direction (at the expense of other directions). A number of parallel approximately half-wave elements (of very specific lengths) are situated parallel to each other, at specific positions, along a boom; the boom is only for support and not involved electrically. Only one of the elements is electrically connected to the transmitter or receiver, while the remaining elements are passive. The Yagi produces a fairly large gain (depending on the number of passive elements) and is widely used as a directional antenna with an antenna rotor to control the direction of its beam. It suffers from having a rather limited bandwidth, restricting its use to certain applications. Rather than using one driven antenna element along with passive radiators, one can build an array antenna in which multiple elements are all driven by the transmitter through a system of power splitters and transmission lines in relative phases so as to concentrate the RF power in a single direction.
https://en.wikipedia.org/wiki/Antenna_%28radio%29
passage: The Alpha Magnetic Spectrometer on the International Space Station has, as of 2021, recorded eight events that seem to indicate the detection of antihelium-3. ### Preservation Antimatter cannot be stored in a container made of ordinary matter because antimatter reacts with any matter it touches, annihilating itself and an equal amount of the container. Antimatter in the form of charged particles can be contained by a combination of electric and magnetic fields, in a device called a Penning trap. This device cannot, however, contain antimatter that consists of uncharged particles, for which atomic traps are used. In particular, such a trap may use the dipole moment (electric or magnetic) of the trapped particles. At high vacuum, the matter or antimatter particles can be trapped and cooled with slightly off-resonant laser radiation using a magneto-optical trap or magnetic trap. Small particles can also be suspended with optical tweezers, using a highly focused laser beam. In 2011, CERN scientists were able to preserve antihydrogen for approximately 17 minutes. The record for storing antiparticles is currently held by the TRAP experiment at CERN: antiprotons were kept in a Penning trap for 405 days. A proposal was made in 2018 to develop containment technology advanced enough to contain a billion anti-protons in a portable device to be driven to another lab for further experimentation. ### Cost Scientists claim that antimatter is the costliest material to make.
https://en.wikipedia.org/wiki/Antimatter
passage: Specifically it would be called a dimer if it contains two subunits, a trimer if it contains three subunits, a tetramer if it contains four subunits, and a pentamer if it contains five subunits, and so forth. The subunits are frequently related to one another by symmetry operations, such as a 2-fold axis in a dimer. Multimers made up of identical subunits are referred to with a prefix of "homo-" and those made up of different subunits are referred to with a prefix of "hetero-", for example, a heterotetramer, such as the two alpha and two beta chains of hemoglobin. ### Homomers An assemblage of multiple copies of a particular polypeptide chain can be described as a homomer, multimer or oligomer. Bertolini et al. in 2021 presented evidence that homomer formation may be driven by interaction between nascent polypeptide chains as they are translated from mRNA by nearby adjacent ribosomes. Hundreds of proteins have been identified as being assembled into homomers in human cells. The process of assembly is often initiated by the interaction of the N-terminal region of polypeptide chains. Evidence that numerous gene products form homomers (multimers) in a variety of organisms based on intragenic complementation evidence was reviewed in 1965. ## Domains, motifs, and folds in protein structure Proteins are frequently described as consisting of several structural units. These units include domains, motifs, and folds.
https://en.wikipedia.org/wiki/Protein_structure
passage: If, for all $$ n\geq N, N\in\N $$ , the corresponding inequalities hold between the terms of the sequences, then the middle sequence also converges to the same limit. ### Proof According to the above hypotheses we have, taking the limit inferior and superior: $$ L=\lim_{x \to a} g(x)\leq\liminf_{x\to a}f(x) \leq \limsup_{x\to a}f(x)\leq \lim_{x \to a}h(x)=L, $$ so all the inequalities are indeed equalities, and the thesis immediately follows. A direct proof, using the $$ (\varepsilon, \delta) $$ -definition of limit, would be to prove that for all real $$ \varepsilon > 0 $$ there exists a real $$ \delta > 0 $$ such that for all $$ x $$ with $$ |x - a| < \delta, $$ we have $$ |f(x) - L| < \varepsilon. $$ Symbolically, $$ \forall \varepsilon > 0, \exists \delta > 0 : \forall x, (|x - a | < \delta \ \Rightarrow |f(x) - L |< \varepsilon). $$ As $$ \lim_{x \to a} g(x) = L $$ means that for every $$ \varepsilon > 0 $$ there exists $$ \delta_1 > 0 $$ such that $$ |x - a| < \delta_1 \Rightarrow |g(x) - L| < \varepsilon, $$ and $$ \lim_{x \to a} h(x) = L $$ means that there exists $$ \delta_2 > 0 $$ such that $$ |x - a| < \delta_2 \Rightarrow |h(x) - L| < \varepsilon, $$ then we have $$ g(x) \leq f(x) \leq h(x) $$ $$ g(x) - L\leq f(x) - L\leq h(x) - L $$ We can choose $$ \delta:=\min\left\{\delta_1,\delta_2\right\} $$ .
https://en.wikipedia.org/wiki/Squeeze_theorem
passage: One component remains and we are done. The edge BD is not considered because both endpoints are in the same component. ## Other algorithms Other algorithms for this problem include Prim's algorithm and Kruskal's algorithm. Fast parallel algorithms can be obtained by combining Prim's algorithm with Borůvka's. A faster randomized minimum spanning tree algorithm based in part on Borůvka's algorithm due to Karger, Klein, and Tarjan runs in expected $$ O(m) $$ time, where $$ m $$ is the number of graph edges. The best known (deterministic) minimum spanning tree algorithm by Bernard Chazelle is also based in part on Borůvka's and runs in $$ O(m\,\alpha(m,n)) $$ time, where α is the inverse Ackermann function and $$ n $$ is the number of vertices. These randomized and deterministic algorithms combine steps of Borůvka's algorithm, reducing the number of components that remain to be connected, with steps of a different type that reduce the number of edges between pairs of components.
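For reference, a compact Python sketch of the Borůvka phase that the algorithms above build on; the union-find helper and the toy graph are illustrative choices, not code from the cited sources:

```python
def boruvka_mst(n, edges):
    """Minimum spanning tree of a connected graph by Borůvka's algorithm.

    n      -- number of vertices labelled 0..n-1
    edges  -- list of (weight, u, v) tuples with distinct weights
    """
    parent = list(range(n))

    def find(x):                              # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, components = [], n
    while components > 1:
        cheapest = [None] * n                 # cheapest outgoing edge per component
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:                      # both endpoints in the same component: skip
                continue
            if cheapest[ru] is None or w < cheapest[ru][0]:
                cheapest[ru] = (w, u, v)
            if cheapest[rv] is None or w < cheapest[rv][0]:
                cheapest[rv] = (w, u, v)
        for e in cheapest:
            if e is None:
                continue
            w, u, v = e
            ru, rv = find(u), find(v)
            if ru != rv:                      # merge the two components along e
                parent[ru] = rv
                mst.append(e)
                components -= 1
    return mst

print(boruvka_mst(4, [(1, 0, 1), (3, 1, 2), (2, 0, 2), (4, 2, 3)]))
# [(1, 0, 1), (2, 0, 2), (4, 2, 3)] -- total weight 7
```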
https://en.wikipedia.org/wiki/Bor%C5%AFvka%27s_algorithm
passage: In the above problem, the parameter is the so-called reference point $$ \bar z $$ representing objective function values preferred by the decision maker. - Sen's multi-objective programming $$ \begin{array}{ll} \max & \frac{\sum_{j=1}^r Z_j}{W_j}- \frac{\sum_{j=r+1}^s Z_j}{W_{r+1}} \\ \text{s.t. } & AX=b \\ & X \geq 0 \end{array} $$ where $$ W_j $$ is individual optima (absolute) for objectives of maximization $$ r $$ and minimization $$ r+1 $$ to $$ s $$ . - hypervolume/Chebyshev scalarization $$ \min_{x\in X} \max_i \frac{ f_i(x)}{w_i} $$ where the weights of the objectives $$ w_i>0 $$ are the parameters of the scalarization. If the parameters/weights are drawn uniformly in the positive orthant, it is shown that this scalarization provably converges to the Pareto front, even when the front is non-convex. For example, portfolio optimization is often conducted in terms of mean-variance analysis.
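A tiny Python illustration of the Chebyshev scalarization on a made-up two-objective problem (the objectives, the grid search, and the weight sampling are all illustrative assumptions):

```python
import random

# Toy bi-objective problem on one decision variable x in [0, 2]:
# f1(x) = x^2 and f2(x) = (x - 2)^2 conflict, so varying the positive weights
# of the Chebyshev scalarization traces out points along the Pareto front.
objectives = [lambda x: x ** 2, lambda x: (x - 2.0) ** 2]
grid = [2.0 * i / 1000.0 for i in range(1001)]

def chebyshev_minimizer(weights):
    """Minimize max_i f_i(x)/w_i over the grid, for weights w_i > 0."""
    return min(grid, key=lambda x: max(f(x) / w for f, w in zip(objectives, weights)))

random.seed(1)
for _ in range(5):
    w = [random.uniform(0.1, 1.0), random.uniform(0.1, 1.0)]   # weights in the positive orthant
    x_star = chebyshev_minimizer(w)
    print([round(v, 2) for v in w], round(x_star, 3),
          [round(f(x_star), 3) for f in objectives])
```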
https://en.wikipedia.org/wiki/Multi-objective_optimization
passage: Our parabola yk is written as pk,2 in this notation. The degree m must be 1 or larger. The next approximation xk is now one of the roots of the pk,m, i.e. one of the solutions of pk,m(x)=0. Taking m=1 we obtain the secant method whereas m=2 gives Muller's method. Muller calculated that the sequence {xk} generated this way converges to the root ξ with an order μm where μm is the positive solution of $$ x^{m+1} - x^m - x^{m-1} - \dots - x - 1 = 0 $$ . As m approaches infinity the positive solution for the equation approaches 2. The method is much more difficult though for m>2 than it is for m=1 or m=2 because it is much harder to determine the roots of a polynomial of degree 3 or higher. Another problem is that there seems no prescription of which of the roots of pk,m to pick as the next approximation xk for m>2. These difficulties are overcome by Sidi's generalized secant method which also employs the polynomial pk,m. Instead of trying to solve pk,m(x)=0, the next approximation xk is calculated with the aid of the derivative of pk,m at xk-1 in this method. ## Computational example Below, Muller's method is implemented in the Python programming language.
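As a stand-in for the Python implementation mentioned above, here is a minimal sketch of Muller's method (the stopping rule and the example polynomial are illustrative):

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-12, max_iter=100):
    """Approximate a root of f from three starting guesses using Muller's method.

    Complex arithmetic is used because the interpolating parabola can have
    complex roots even when f and the starting points are real.
    """
    for _ in range(max_iter):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        h1, h2 = x1 - x0, x2 - x1
        d1, d2 = (f1 - f0) / h1, (f2 - f1) / h2
        a = (d2 - d1) / (h2 + h1)                 # curvature of the parabola
        b = a * h2 + d2                           # slope at x2
        c = f2
        disc = cmath.sqrt(b * b - 4 * a * c)
        # pick the denominator of larger magnitude to limit cancellation
        denom = b + disc if abs(b + disc) > abs(b - disc) else b - disc
        dx = -2 * c / denom
        x3 = x2 + dx
        if abs(dx) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x2

# Real root of x^3 - x - 1 (about 1.3247); the imaginary part comes out ~0.
print(muller(lambda x: x ** 3 - x - 1, 0.5, 1.0, 1.5))
```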
https://en.wikipedia.org/wiki/Muller%27s_method
passage: - Chain of trust techniques can be used to attempt to ensure that all software loaded has been certified as authentic by the system's designers. - Confidentiality is the nondisclosure of information except to another authorized person. - Cryptographic techniques can be used to defend data in transit between systems, reducing the probability that the data exchange between systems can be intercepted or modified. - Cyber attribution, is an attribution of cybercrime, i.e., finding who perpetrated a cyberattack. - Cyberwarfare is an Internet-based conflict that involves politically motivated attacks on information and information systems. Such attacks can, for example, disable official websites and networks, disrupt or disable essential services, steal or alter classified data, and cripple financial systems. - Data integrity is the accuracy and consistency of stored data, indicated by an absence of any alteration in data between two updates of a data record. - Encryption is used to protect the confidentiality of a message. Cryptographically secure ciphers are designed to make any practical attempt of breaking them infeasible. Symmetric-key ciphers are suitable for bulk encryption using shared keys, and public-key encryption using digital certificates can provide a practical solution for the problem of securely communicating when no key is shared in advance. - Endpoint security software aids networks in preventing malware infection and data theft at network entry points made vulnerable by the prevalence of potentially infected devices such as laptops, mobile devices, and USB drives. - Firewalls serve as a gatekeeper system between networks, allowing only traffic that matches defined rules.
https://en.wikipedia.org/wiki/Computer_security
passage: The new Gaussian curvature is then given by $$ K^\prime(x)= e^{-2u} (K(x) - \Delta u), $$ where is the Laplacian for the original metric. Thus to show that a given surface is conformally equivalent to a metric with constant curvature it suffices to solve the following variant of Liouville's equation: $$ \Delta u = K^\prime e^{2u} + K(x). $$ When has Euler characteristic 0, so is diffeomorphic to a torus, , so this amounts to solving $$ \Delta u = K(x). $$ By standard elliptic theory, this is possible because the integral of over is zero, by the Gauss–Bonnet theorem. When has negative Euler characteristic, , so the equation to be solved is: $$ \Delta u = -e^{2u} + K(x). $$ Using the continuity of the exponential map on Sobolev space due to Neil Trudinger, this non-linear equation can always be solved. Finally in the case of the 2-sphere, and the equation becomes: $$ \Delta u = e^{2u} + K(x). $$ So far this non-linear equation has not been analysed directly, although classical results such as the Riemann–Roch theorem imply that it always has a solution. The method of Ricci flow, developed by Richard S. Hamilton, gives another proof of existence based on non-linear partial differential equations to prove existence.
https://en.wikipedia.org/wiki/Differential_geometry_of_surfaces
passage: Indeed, some of the earliest ideas for filters were acoustic resonators because the electronics technology was poorly understood at the time. In principle, the design of such filters can be achieved entirely in terms of the electronic counterparts of mechanical quantities, with kinetic energy, potential energy and heat energy corresponding to the energy in inductors, capacitors and resistors respectively. ## Historical overview There are three main stages in the history of passive analogue filter development: 1. Simple filters. The frequency dependence of electrical response was known for capacitors and inductors from very early on. The resonance phenomenon was also familiar from an early date and it was possible to produce simple, single-branch filters with these components. Although attempts were made in the 1880s to apply them to telegraphy, these designs proved inadequate for successful frequency-division multiplexing. Network analysis was not yet powerful enough to provide the theory for more complex filters and progress was further hampered by a general failure to understand the frequency domain nature of signals. 2. Image filters. Image filter theory grew out of transmission line theory and the design proceeded in a similar manner to transmission line analysis. For the first time filters could be produced that had precisely controllable passbands and other parameters. These developments took place in the 1920s and filters produced to these designs were still in widespread use in the 1980s, only declining as the use of analogue telecommunications has declined.
https://en.wikipedia.org/wiki/Analogue_filter
passage: bases $$ a $$ such that $$ a^{2^n}+1 $$ is prime (only consider even $$ a $$ ):

| n | bases a |
|---|---------|
| 0 | 2, 4, 6, 10, 12, 16, 18, 22, 28, 30, 36, 40, 42, 46, 52, 58, 60, 66, 70, 72, 78, 82, 88, 96, 100, 102, 106, 108, 112, 126, 130, 136, 138, 148, 150, ... |
| 1 | 2, 4, 6, 10, 14, 16, 20, 24, 26, 36, 40, 54, 56, 66, 74, 84, 90, 94, 110, 116, 120, 124, 126, 130, 134, 146, 150, 156, 160, 170, 176, 180, 184, ... |
| 2 | 2, 4, 6, 16, 20, 24, 28, 34, 46, 48, 54, 56, 74, 80, 82, 88, 90, 106, 118, 132, 140, 142, 154, 160, 164, 174, 180, 194, 198, 204, 210, 220, 228, ... |
| 3 | 2, 4, 118, 132, 140, 152, 208, 240, 242, 288, 290, 306, 378, 392, 426, 434, 442, 508, 510, 540, 542, 562, 596, 610, 664, 680, 682, 732, 782, ... |
| 4 | 2, 44, 74, 76, 94, 156, 158, 176, 188, 198, 248, 288, 306, 318, 330, 348, 370, 382, 396, 452, 456, 470, 474, 476, 478, 560, 568, 598, 642, ... |
| 5 | 30, 54, 96, 112, 114, 132, 156, 332, 342, 360, 376, 428, 430, ... |
https://en.wikipedia.org/wiki/Fermat_number
passage: The same holds on every Polish space, see , , , and . For example, the Wiener measure turns the Polish space $$ \textstyle C[0,\infty) $$ (of all continuous functions $$ \textstyle [0,\infty) \to \mathbb{R}, $$ endowed with the topology of local uniform convergence) into a standard probability space. Another example: for every sequence of random variables, their joint distribution turns the Polish space $$ \textstyle \mathbb{R}^\infty $$ (of sequences; endowed with the product topology) into a standard probability space. (Thus, the idea of dimension, very natural for topological spaces, is utterly inappropriate for standard probability spaces.) The product of two standard probability spaces is a standard probability space. The same holds for the product of countably many spaces, see , , and . A measurable subset of a standard probability space is a standard probability space. It is assumed that the set is not a null set, and is endowed with the conditional measure. See and . Every probability measure on a standard Borel space turns it into a standard probability space. ## Using the standardness ### Regular conditional probabilities In the discrete setup, the conditional probability is another probability measure, and the conditional expectation may be treated as the (usual) expectation with respect to the conditional measure, see conditional expectation.
https://en.wikipedia.org/wiki/Standard_probability_space
passage: - The resulting matching might not match all of the participants. In this case, the condition of stability is that no unmatched pair prefer each other to their situation in the matching (whether that situation is another partner or being unmatched). With this condition, a stable matching will still exist, and can still be found by the Gale–Shapley algorithm. For this kind of stable matching problem, the rural hospitals theorem states that: - The set of assigned doctors, and the number of filled positions in each hospital, are the same in all stable matchings. - Any hospital that has some empty positions in some stable matching, receives exactly the same set of doctors in all stable matchings. ## Related problems In stable matching with indifference, some men might be indifferent between two or more women and vice versa. The stable roommates problem is similar to the stable marriage problem, but differs in that all participants belong to a single pool (instead of being divided into equal numbers of "men" and "women"). The hospitals/residents problem – also known as the college admissions problem – differs from the stable marriage problem in that a hospital can take multiple residents, or a college can take an incoming class of more than one student. Algorithms to solve the hospitals/residents problem can be hospital-oriented (as the NRMP was before 1995) or resident-oriented. This problem was solved, with an algorithm, in the same original paper by Gale and Shapley, in which the stable marriage problem was solved.
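A compact Python sketch of the Gale–Shapley procedure mentioned above, for the basic one-to-one case with equal numbers on each side (the names and preference lists are invented for illustration):

```python
def gale_shapley(men_prefs, women_prefs):
    """Return a stable matching as a dict mapping each woman to a man.

    Both arguments map a person to a list of the other side, most preferred
    first.  Men propose in order of preference; each woman keeps the best
    proposal she has received so far.
    """
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    free = list(men_prefs)                    # men not currently engaged
    next_idx = {m: 0 for m in men_prefs}      # next woman each man will propose to
    fiance = {}                               # woman -> man

    while free:
        m = free.pop()
        w = men_prefs[m][next_idx[m]]
        next_idx[m] += 1
        if w not in fiance:
            fiance[w] = m                     # a first proposal is always held
        elif rank[w][m] < rank[w][fiance[w]]:
            free.append(fiance[w])            # w trades up; her old partner is free again
            fiance[w] = m
        else:
            free.append(m)                    # rejected; m will try his next choice
    return fiance

men = {"a": ["x", "y", "z"], "b": ["y", "x", "z"], "c": ["x", "z", "y"]}
women = {"x": ["b", "a", "c"], "y": ["a", "b", "c"], "z": ["c", "a", "b"]}
print(gale_shapley(men, women))               # {'x': 'a', 'y': 'b', 'z': 'c'}, a stable matching
```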
https://en.wikipedia.org/wiki/Stable_matching_problem
passage: For example, it can be used with Newton's method if the Hessian matrix is positive definite. ## Motivation Given a starting position $$ \mathbf{x} $$ and a search direction $$ \mathbf{p} $$ , the task of a line search is to determine a step size $$ \alpha > 0 $$ that adequately reduces the objective function $$ f:\mathbb R^n\to\mathbb R $$ (assumed i.e. continuously differentiable), i.e., to find a value of $$ \alpha $$ that reduces $$ f(\mathbf{x}+\alpha\,\mathbf{p}) $$ relative to $$ f(\mathbf{x}) $$ . However, it is usually undesirable to devote substantial resources to finding a value of $$ \alpha $$ to precisely minimize $$ f $$ . This is because the computing resources needed to find a more precise minimum along one particular direction could instead be employed to identify a better search direction. Once an improved starting point has been identified by the line search, another subsequent line search will ordinarily be performed in a new direction. The goal, then, is just to identify a value of $$ \alpha $$ that provides a reasonable amount of improvement in the objective function, rather than to find the actual minimizing value of $$ \alpha $$ . The backtracking line search starts with a large estimate of $$ \alpha $$ and iteratively shrinks it.
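A minimal Python sketch of a backtracking line search enforcing an Armijo-type sufficient-decrease condition, in the spirit of the description above (the objective, the constants and the starting point are illustrative):

```python
import numpy as np

def backtracking_line_search(f, grad_f, x, p, alpha0=1.0, rho=0.5, c=1e-4):
    """Shrink alpha until f(x + alpha p) <= f(x) + c * alpha * grad_f(x).p."""
    alpha = alpha0
    fx = f(x)
    slope = float(grad_f(x) @ p)      # directional derivative; negative for a descent direction
    while f(x + alpha * p) > fx + c * alpha * slope:
        alpha *= rho                  # backtrack with a smaller step
    return alpha

# Illustrative quadratic objective with a steepest-descent search direction.
f = lambda x: float(x @ x)
grad_f = lambda x: 2.0 * x
x = np.array([3.0, -4.0])
p = -grad_f(x)
alpha = backtracking_line_search(f, grad_f, x, p)
print(alpha, f(x + alpha * p))        # 0.5 0.0 for this particular quadratic
```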
https://en.wikipedia.org/wiki/Backtracking_line_search
passage: In 3D rendering, triangles and polygons in space might be primitives. Ray casting Ray casting is primarily used for realtime simulations, such as those used in 3D computer games and cartoon animations, where detail is not important, or where it is more efficient to manually fake the details in order to obtain better performance in the computational stage. This is usually the case when a large number of frames need to be animated. The resulting surfaces have a characteristic 'flat' appearance when no additional tricks are used, as if objects in the scene were all painted with matte finish. Radiosity Radiosity, also known as Global Illumination, is a method that attempts to simulate the way in which directly illuminated surfaces act as indirect light sources that illuminate other surfaces. This produces more realistic shading and seems to better capture the 'ambience' of an indoor scene. A classic example is the way that shadows 'hug' the corners of rooms. Ray tracing Ray tracing is an extension of the same technique developed in scanline rendering and ray casting. Like those, it handles complicated objects well, and the objects may be described mathematically. Unlike scanline and casting, ray tracing is almost always a Monte Carlo technique, that is one based on averaging a number of randomly generated samples from a model. ### Volume rendering Volume rendering is a technique used to display a 2D projection of a 3D discretely sampled data set. A typical 3D data set is a group of 2D slice images acquired by a CT or MRI scanner.
https://en.wikipedia.org/wiki/Scientific_visualization
passage: A detailed overview of racks and their applications in knot theory may be found in the paper by Colin Rourke and Roger Fenn. ## Racks A rack may be defined as a set $$ \mathrm{R} $$ with a binary operation $$ \triangleleft $$ such that for every $$ a, b, c \in \mathrm{R} $$ the self-distributive law holds: $$ a \triangleleft(b \triangleleft c) = (a \triangleleft b) \triangleleft(a \triangleleft c) $$ and for every $$ a, b \in \mathrm{R}, $$ there exists a unique $$ c \in \mathrm{R} $$ such that $$ a \triangleleft c = b. $$ This definition, while terse and commonly used, is suboptimal for certain purposes because it contains an existential quantifier which is not really necessary. To avoid this, we may write the unique $$ c \in \mathrm{R} $$ such that $$ a \triangleleft c = b $$ as $$ b \triangleright a. $$ We then have $$ a \triangleleft c = b \iff c = b \triangleright a, $$ and
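A brute-force Python check that conjugation, a ◁ b = a b a⁻¹, on the small group S₃ satisfies both rack axioms in the convention used above (the encoding of permutations is just one convenient illustration):

```python
from itertools import permutations, product

S3 = list(permutations(range(3)))                        # elements of S3 as tuples
compose = lambda p, q: tuple(p[q[i]] for i in range(3))  # (p o q)(i) = p(q(i))
inverse = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))

def tri(a, b):
    """The conjugation operation a ◁ b = a b a^{-1}."""
    return compose(compose(a, b), inverse(a))

# Self-distributivity: a ◁ (b ◁ c) = (a ◁ b) ◁ (a ◁ c) for all a, b, c.
assert all(tri(a, tri(b, c)) == tri(tri(a, b), tri(a, c))
           for a, b, c in product(S3, repeat=3))

# Unique division: for all a, b there is exactly one c with a ◁ c = b
# (namely c = a^{-1} b a, so b ▷ a is well defined).
assert all(sum(tri(a, c) == b for c in S3) == 1
           for a, b in product(S3, repeat=2))

print("conjugation on S3 is a rack (in fact a quandle, since a ◁ a = a)")
```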
https://en.wikipedia.org/wiki/Racks_and_quandles
passage: In mathematics, the sign function or signum function (from signum, Latin for "sign") is a function that has the value , or according to whether the sign of a given real number is positive or negative, or the given number is itself zero. In mathematical notation the sign function is often represented as $$ \sgn x $$ or $$ \sgn (x) $$ . ## Definition The signum function of a real number $$ x $$ is a piecewise function which is defined as follows: $$ \sgn x :=\begin{cases} -1 & \text{if } x < 0, \\ 0 & \text{if } x = 0, \\ 1 & \text{if } x > 0. \end{cases} $$ The law of trichotomy states that every real number must be positive, negative or zero. The signum function denotes which unique category a number falls into by mapping it to one of the values , or which can then be used in mathematical expressions or further calculations. For example: $$ \begin{array}{lcr} \sgn(2) &=& +1\,, \\ \sgn(\pi) &=& +1\,, \\ \sgn(-8) &=& -1\,, \\ \sgn(-\frac{1}{2}) &=& -1\,, \\ \sgn(0) &=& 0\,. \end{array} $$
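The piecewise definition translates directly into one line of Python (an illustrative helper, not a library function):

```python
def sgn(x):
    """Signum of a real number: -1, 0 or 1."""
    return (x > 0) - (x < 0)   # booleans subtract to the desired integer

print([sgn(v) for v in (2, 3.14159, -8, -0.5, 0)])   # [1, 1, -1, -1, 0]
```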
https://en.wikipedia.org/wiki/Sign_function
passage: - The sum of two numbers is unique; there is only one correct answer for a sum. When the sum of a pair of digits results in a two-digit number, the "tens" digit is referred to as the "carry digit". In elementary arithmetic, students typically learn to add whole numbers and may also learn about topics such as negative numbers and fractions. ## Subtraction Subtraction evaluates the difference between two numbers, where the minuend is the number being subtracted from, and the subtrahend is the number being subtracted. It is represented using the minus sign ( $$ - $$ ). The minus sign is also used to notate negative numbers. Subtraction is not commutative, which means that the order of the numbers can change the final value; $$ 3-5 $$ is not the same as $$ 5-3 $$ . In elementary arithmetic, the minuend is always larger than the subtrahend to produce a positive result. Subtraction is also used to separate, combine (e.g., find the size of a subset of a specific set), and find quantities in other contexts. There are several methods to accomplish subtraction. The traditional mathematics method subtracts using methods suitable for hand calculation. Reform mathematics is distinguished generally by the lack of preference for any specific technique, replaced by guiding students to invent their own methods of computation. American schools teach a method of subtraction using borrowing. A subtraction problem such as $$ 86-39 $$ is solved by borrowing a 10 from the tens place to add to the ones place in order to facilitate the subtraction.
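As a worked illustration of the borrowing method just described, the regrouping can be spelled out step by step:

$$ 86 - 39 = (80 + 6) - (30 + 9) = (70 + 16) - (30 + 9) = (70 - 30) + (16 - 9) = 40 + 7 = 47 $$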
https://en.wikipedia.org/wiki/Elementary_arithmetic
passage: Because of this multiplicative property, a chosen-ciphertext attack is possible. E.g., an attacker who wants to know the decryption of a ciphertext c ≡ m^e (mod n) may ask the holder of the private key to decrypt an unsuspicious-looking ciphertext c' ≡ c·r^e (mod n) for some value r chosen by the attacker. Because of the multiplicative property, c' is the encryption of m·r (mod n). Hence, if the attacker is successful with the attack, they will learn m·r (mod n), from which they can derive the message m by multiplying with the modular inverse of r modulo n. - Given the private exponent d, one can efficiently factor the modulus n. And given factorization of the modulus n, one can obtain any private key (d', n) generated against a public key (e', n). ### Padding schemes To avoid these problems, practical RSA implementations typically embed some form of structured, randomized padding into the value m before encrypting it. This padding ensures that m does not fall into the range of insecure plaintexts, and that a given message, once padded, will encrypt to one of a large number of different possible ciphertexts. Standards such as PKCS#1 have been carefully designed to securely pad messages prior to RSA encryption. Because these schemes pad the plaintext with some number of additional bits, the size of the un-padded message must be somewhat smaller. RSA padding schemes must be carefully designed so as to prevent sophisticated attacks that may be facilitated by a predictable message structure.
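A small Python demonstration of the blinding attack enabled by the multiplicative property, using deliberately tiny textbook-RSA numbers (all parameter values are illustrative and far too small for real use; pow(x, -1, n) needs Python 3.8+):

```python
import math

p, q = 61, 53                                   # toy primes
n = p * q                                       # modulus, 3233
e = 17                                          # public exponent
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)
d = pow(e, -1, lam)                             # private exponent, 413

m = 65                                          # plaintext the attacker wants
c = pow(m, e, n)                                # its ciphertext

r = 7                                           # attacker's blinding value, coprime to n
c_blinded = (c * pow(r, e, n)) % n              # the "unsuspicious-looking" ciphertext
m_blinded = pow(c_blinded, d, n)                # the key holder decrypts it: m*r mod n

recovered = (m_blinded * pow(r, -1, n)) % n     # strip r with its modular inverse
assert recovered == m
print(recovered)                                # 65
```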
https://en.wikipedia.org/wiki/RSA_cryptosystem%23Side-channel_analysis_attacks
passage: We then define the function $$ k(u,v)=\wp(u)-\wp(v) $$ From the previous lemma we have: $$ k(u,v)= \wp(u)-\wp(v)=c\frac{\sigma(u+v)\sigma(u-v)}{\sigma(u)^2} $$ From some calculations one can find that $$ c=-\frac1{\sigma(v)^2} \implies\wp(u)-\wp(v)=-\frac{\sigma(u+v)\sigma(u-v)}{\sigma(u)^2\sigma(v)^2} $$ By definition of the Weierstrass zeta function: $$ \frac{d}{dz}\ln \sigma(z)=\zeta(z) $$ therefore we logarithmically differentiate both sides with respect to $$ u $$ , obtaining: $$ \frac{\wp'(u)}{\wp(u)-\wp(v)}=\zeta(u+v)+\zeta(u-v)-2\zeta(u) $$ Once again by definition $$ \zeta'(z)=-\wp(z) $$
https://en.wikipedia.org/wiki/Weierstrass_elliptic_function
passage: Consider some possible values of $$ z $$ : 1. Let $$ z=3 $$ . Then $$ f_z(x) = (x-3)^2 - 5 = x^2 - 6x + 4 $$ , thus $$ \gcd(x^2 - 6x + 4 ; x^5 - 1) = 1 $$ . Both numbers $$ 3 \pm \beta $$ are quadratic non-residues, so we need to take some other $$ z $$ . 1. Let $$ z=2 $$ . Then $$ f_z(x) = (x-2)^2 - 5 = x^2 - 4x - 1 $$ , thus $$ \gcd( x^2 - 4x - 1 ; x^5 - 1)\equiv x - 9 \pmod{11} $$ . From this follows $$ x - 9 = x - 2 - \beta $$ , so $$ \beta \equiv 7 \pmod{11} $$ and $$ -\beta \equiv -7 \equiv 4 \pmod{11} $$ . A manual check shows that, indeed, $$ 7^2 \equiv 49 \equiv 5\pmod{11} $$ and $$ 4^2\equiv 16 \equiv 5\pmod{11} $$ .
https://en.wikipedia.org/wiki/Berlekamp%E2%80%93Rabin_algorithm
passage: In general, they vanish unless they contain an odd number of indices from the set {2, 5, 7}. The symmetric coefficients take the values $$ \begin{align} d_{118} = d_{228} = d_{338} = -d_{888} &= \frac{1}{\sqrt{3}}\,, \\ d_{448} = d_{558} = d_{668} = d_{778} &= -\frac{1}{2\sqrt{3}}\,, \\ d_{344} = d_{355} = -d_{366} = -d_{377} = -d_{247} = d_{146} = d_{157} = d_{256} &= \frac{1}{2} ~. \end{align} $$ They vanish if the number of indices from the set {2, 5, 7} is odd. A generic group element generated by a traceless 3×3 Hermitian matrix $$ H $$ , normalized as $$ \operatorname{tr}(H^2) = 2 $$ , can be expressed as a second order matrix polynomial in $$ H $$ : $$ \begin{align} \exp(i\theta H) ={} BLOCK6\end{align} $$ where $$ \varphi \equiv \frac{1}{3}\left[\arccos\left(\frac{3\sqrt{3}}{2}\det H\right) - \frac{\pi}{2}\right]. $$ ## Lie algebra structure As noted above, the Lie algebra $$ \mathfrak{su}(n) $$ of $$ \operatorname{SU}(n) $$ consists of skew-Hermitian matrices with trace zero. The complexification of the Lie algebra $$ \mathfrak{su}(n) $$ is $$ \mathfrak{sl}(n; \mathbb{C}) $$ , the space of all complex matrices with trace zero.
https://en.wikipedia.org/wiki/Special_unitary_group
passage: In mathematics, an odd composite integer n is called an Euler pseudoprime to base a, if a and n are coprime, and $$ a^{(n-1)/2} \equiv \pm 1\pmod{n} $$ (where mod refers to the modulo operation). The motivation for this definition is the fact that all prime numbers p satisfy the above equation which can be deduced from Fermat's little theorem. Fermat's theorem asserts that if p is prime, and coprime to a, then ap−1 ≡ 1 (mod p). Suppose that p>2 is prime, then p can be expressed as 2q + 1 where q is an integer. Thus, a(2q+1) − 1 ≡ 1 (mod p), which means that a2q − 1 ≡ 0 (mod p). This can be factored as (aq − 1)(aq + 1) ≡ 0 (mod p), which is equivalent to a(p−1)/2 ≡ ±1 (mod p). The equation can be tested rather quickly, which can be used for probabilistic primality testing. These tests are twice as strong as tests based on Fermat's little theorem. Every Euler pseudoprime is also a Fermat pseudoprime. It is not possible to produce a definite test of primality based on whether a number is an Euler pseudoprime because there exist absolute Euler pseudoprimes, numbers which are Euler pseudoprimes to every base relatively prime to themselves.
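The defining congruence is straightforward to test; a Python sketch, with 561 = 3·11·17 as a composite that nevertheless passes to base 2 (the helper name is illustrative):

```python
from math import gcd

def passes_euler_test(n: int, a: int) -> bool:
    """True if a^((n-1)/2) is +1 or -1 (mod n), for odd n coprime to a.

    An odd composite n that passes is an Euler pseudoprime to base a;
    genuine odd primes always pass, by the argument in the passage above.
    """
    if n < 3 or n % 2 == 0 or gcd(a, n) != 1:
        return False
    r = pow(a, (n - 1) // 2, n)
    return r == 1 or r == n - 1                 # n - 1 stands for -1 modulo n

print(passes_euler_test(561, 2))                # True: 561 is composite, hence an Euler pseudoprime
print(passes_euler_test(15, 2))                 # False: 2^7 = 128 = 8 (mod 15)
```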
https://en.wikipedia.org/wiki/Euler_pseudoprime
passage: - A subtree rooted at a node labeled 0 corresponds to the union of the subgraphs defined by the children of that node. - A subtree rooted at a node labeled 1 corresponds to the join of the subgraphs defined by the children of that node; that is, we form the union and add an edge between every two vertices corresponding to leaves in different subtrees. Alternatively, the join of a set of graphs can be viewed as formed by complementing each graph, forming the union of the complements, and then complementing the resulting union. An equivalent way of describing the cograph formed from a cotree is that two vertices are connected by an edge if and only if the lowest common ancestor of the corresponding leaves is labeled by 1. Conversely, every cograph can be represented in this way by a cotree. If we require the labels on any root-leaf path of this tree to alternate between 0 and 1, this representation is unique. ## Computational properties Cographs may be recognized in linear time, and a cotree representation constructed, using modular decomposition, partition refinement, LexBFS , or split decomposition. Once a cotree representation has been constructed, many familiar graph problems may be solved via simple bottom-up calculations on the cotrees. For instance, to find the maximum clique in a cograph, compute in bottom-up order the maximum clique in each subgraph represented by a subtree of the cotree.
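A small Python sketch of the cotree-to-cograph construction described above, with 0-nodes taken as unions and 1-nodes as joins (the nested-tuple encoding of cotrees is an illustrative choice):

```python
def cograph_from_cotree(cotree):
    """Return (vertices, edges) of the cograph a cotree describes.

    A cotree is either a leaf (a vertex name) or a pair (label, children)
    with label 0 (disjoint union) or 1 (join of the children's subgraphs).
    """
    if not isinstance(cotree, tuple):                 # leaf: one vertex, no edges
        return [cotree], set()
    label, children = cotree
    parts = [cograph_from_cotree(ch) for ch in children]
    vertices = [v for vs, _ in parts for v in vs]
    edges = set().union(*(es for _, es in parts))
    if label == 1:                                    # join: connect vertices of different children
        for i in range(len(parts)):
            for j in range(i + 1, len(parts)):
                for u in parts[i][0]:
                    for v in parts[j][0]:
                        edges.add(frozenset((u, v)))
    return vertices, edges

# The 4-cycle C4 is a cograph: the join of two 2-vertex independent sets.
verts, edges = cograph_from_cotree((1, [(0, ["a", "b"]), (0, ["c", "d"])]))
print(verts, sorted(tuple(sorted(e)) for e in edges))
# ['a', 'b', 'c', 'd'] [('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd')]
```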
https://en.wikipedia.org/wiki/Cograph
passage: ### Force-based cloth The mass-spring model (obtained from a polygonal mesh representation of the cloth) determines the internal spring forces acting on the nodes at each timestep (in combination with gravity and applied forces). Newton's second law gives equations of motion which can be solved via standard ODE solvers. To create high resolution cloth with a realistic stiffness is not possible however with simple explicit solvers (such as forward Euler integration), unless the timestep is made too small for interactive applications (since as is well known, explicit integrators are numerically unstable for sufficiently stiff systems). Therefore, implicit solvers must be used, requiring solution of a large sparse matrix system (via e.g. the conjugate gradient method), which itself may also be difficult to achieve at interactive frame rates. An alternative is to use an explicit method with low stiffness, with ad hoc methods to avoid instability and excessive stretching (e.g. strain limiting corrections). ### Position-based dynamics To avoid needing to do an expensive implicit solution of a system of ODEs, many real-time cloth simulators (notably PhysX, Havok Cloth, and Maya nCloth) use position based dynamics (PBD), an approach based on constraint relaxation. The mass-spring model is converted into a system of constraints, which demands that the distance between the connected nodes be equal to the initial distance. This system is solved sequentially and iteratively, by directly moving nodes to satisfy each constraint, until sufficiently stiff cloth is obtained.
https://en.wikipedia.org/wiki/Soft-body_dynamics
passage: Then the points of the celestial sphere (equivalently, lines of sight) are identified with certain Hermitian matrices. #### Projective geometry and different views of the 2-sphere This picture emerges cleanly in the language of projective geometry. The (restricted) Lorentz group acts on the projective celestial sphere. This is the space of non-zero null vectors with $$ t>0 $$ under the given quotient for projective spaces: $$ (t,x,y,z)\sim (t',x',y',z') $$ if $$ (t',x',y',z') = (\lambda t, \lambda x, \lambda y, \lambda z) $$ for $$ \lambda > 0 $$ . This is referred to as the celestial sphere as this allows us to rescale the time coordinate $$ t $$ to 1 after acting using a Lorentz transformation, ensuring the space-like part sits on the unit sphere. From the Möbius side, acts on complex projective space , which can be shown to be diffeomorphic to the 2-sphere – this is sometimes referred to as the Riemann sphere. The quotient on projective space leads to a quotient on the group . Finally, these two can be linked together by using the complex projective vector to construct a null-vector.
https://en.wikipedia.org/wiki/Lorentz_group
passage: These changes in musical direction disoriented some fans and led them to reject those bands which were perceived as having compromised key elements of their musical identity in the pursuit of success. These two styles do not exhaust all of the musical influences found in the British heavy metal music of the early 1980s, because many bands were also inspired by progressive rock (Iron Maiden, Diamond Head, Blitzkrieg, Demon, Saracen, Shiva, Witchfynde), boogie rock (Saxon, Vardis, Spider, Le Griffe) and glam rock (Girl, Wrathchild). Doom metal bands Pagan Altar and Witchfinder General were also part of the NWOBHM and their albums are considered among the best examples of that already established subgenre. British writer John Tucker writes that NWOBHM bands were in general fuelled by their first experiences with adult life and "their lyrics rolled everything into one big youthful fantasy". They usually avoided social and political themes in their lyrics, or treated them in a shallow "street-level" way, preferring topics from mythology, the occult, fantasy, science fiction and horror films. Songs about romance and lust were rare, but the frequent lyrics about male bonding and the rock lifestyle contain many sexist allusions. Christian symbolism is often present in the lyrics and cover art, as is the figure of Satan, used more as a shocking and macabre subject than as the antireligious device of 1990s' black metal subculture. ## History
https://en.wikipedia.org/wiki/New_wave_of_British_heavy_metal
passage: Brazilian production did not accelerate until the 1960s, and the first of many spinning mills was established. Today, Brazil is the major world producer of sisal. ### Propagation Propagation of sisal is generally by using bulbils produced from buds in the flower stalk or by suckers growing around the base of the plant, which are grown in nursery fields until large enough to be transplanted to their final positions. These methods offer no potential for genetic improvement. In vitro multiplication of selected genetic material using meristematic tissue culture offers considerable potential for the development of improved genetic material. ### Fiber extraction Fiber is extracted by a process known as decortication, where leaves are crushed, beaten, and brushed away by a rotating wheel set with blunt knives, so that only fibers remain. Alternatively, in East Africa, where production is typically on large estates, the leaves are transported to a central decortication plant, where water is used to wash away the waste parts of the leaves. The fiber is then dried, brushed, and baled for export. Proper drying is important, as fiber quality depends largely on moisture content. Artificial drying has been found to result in generally better grades of fiber than sun drying, but is not always feasible in the less industrialized countries where sisal is produced. In the drier climate of northeast Brazil, sisal is mainly grown by smallholders and the fiber is extracted by teams using portable raspadors, which do not use water. Fiber is subsequently cleaned by brushing.
https://en.wikipedia.org/wiki/Sisal
passage: The relative frequency of occurrence of an event, observed in a number of repetitions of the experiment, is a measure of the probability of that event. This is the core conception of probability in the frequentist interpretation. A claim of the frequentist approach is that, as the number of trials increases, the change in the relative frequency will diminish. Hence, one can view a probability as the limiting value of the corresponding relative frequencies. ## Scope The frequentist interpretation is a philosophical approach to the definition and use of probabilities; it is one of several such approaches. It does not claim to capture all connotations of the concept 'probable' in colloquial speech of natural languages. As an interpretation, it is not in conflict with the mathematical axiomatization of probability theory; rather, it provides guidance for how to apply mathematical probability theory to real-world situations. It offers distinct guidance in the construction and design of practical experiments, especially when contrasted with the Bayesian interpretation. Whether this guidance is useful, or is apt to misinterpretation, has been a source of controversy, particularly when the frequency interpretation of probability is mistakenly assumed to be the only possible basis for frequentist inference. So, for example, a list of misinterpretations of the meaning of p-values accompanies the article on p-values; controversies are detailed in the article on statistical hypothesis testing. The Jeffreys–Lindley paradox shows how different interpretations, applied to the same data set, can lead to different conclusions about the 'statistical significance' of a result.
https://en.wikipedia.org/wiki/Frequentist_probability
passage: When V is of finite dimension n, one can choose a basis for V to identify V with Fn, and hence recover a matrix representation with entries in the field F. An effective or faithful representation is a representation (V,φ), for which the homomorphism φ is injective. ### Equivariant maps and isomorphisms If V and W are vector spaces over F, equipped with representations φ and ψ of a group G, then an equivariant map from V to W is a linear map α: V → W such that $$ \alpha( g\cdot v ) = g \cdot \alpha(v) $$ for all g in G and v in V. In terms of φ: G → GL(V) and ψ: G → GL(W), this means $$ \alpha\circ \varphi(g) = \psi(g)\circ \alpha $$ for all g in G, that is, the following diagram commutes: Equivariant maps for representations of an associative or Lie algebra are defined similarly. If α is invertible, then it is said to be an isomorphism, in which case V and W (or, more precisely, φ and ψ) are isomorphic representations, also phrased as equivalent representations. An equivariant map is often called an intertwining map of representations. Also, in the case of a group , it is on occasion called a -map. Isomorphic representations are, for practical purposes, "the same"; they provide the same information about the group or algebra being represented. Representation theory therefore seeks to classify representations up to isomorphism.
https://en.wikipedia.org/wiki/Representation_theory
passage: $$ \begin{align} \mathbf{Y}_{k \mid k} &= \mathbf{Y}_{k \mid k-1} + \mathbf{I}_k \\ \hat{\mathbf{y}}_{k \mid k} &= \hat{\mathbf{y}}_{k \mid k-1} + \mathbf{i}_k \end{align} $$ The main advantage of the information filter is that N measurements can be filtered at each time step simply by summing their information matrices and vectors. $$ \begin{align} \mathbf{Y}_{k \mid k} &= \mathbf{Y}_{k \mid k-1} + \sum_{j=1}^N \mathbf{I}_{k,j} \\ \hat{\mathbf{y}}_{k \mid k} &= \hat{\mathbf{y}}_{k \mid k-1} + \sum_{j=1}^N \mathbf{i}_{k,j} \end{align} $$ To predict the information filter the information matrix and vector can be converted back to their state space equivalents, or alternatively the information space prediction can be used. $$ \begin{align} BLOCK4 \hat{\mathbf{y}}_{k \mid k-1} &= BLOCK5\end{align} $$ ## Fixed-lag smoother The optimal fixed-lag smoother provides the optimal estimate of $$ \hat{\mathbf{x}}_{k-N \mid k} $$ for a given fixed-lag $$ N $$ using the measurements from $$ \mathbf{z}_1 $$ to $$ \mathbf{z}_k $$ .
https://en.wikipedia.org/wiki/Kalman_filter
passage: More formally, let $$ p_{1} $$ and $$ p_{2} $$ be two independent channels modelled as above; $$ p_{1} $$ having an input alphabet $$ \mathcal{X}_{1} $$ and an output alphabet $$ \mathcal{Y}_{1} $$ . Idem for $$ p_{2} $$ . We define the product channel $$ p_{1}\times p_2 $$ as $$ \forall (x_{1}, x_{2}) \in (\mathcal{X}_{1}, \mathcal{X}_{2}),\;(y_{1}, y_{2}) \in (\mathcal{Y}_{1}, \mathcal{Y}_{2}),\; (p_{1}\times p_{2})((y_{1}, y_{2}) | (x_{1},x_{2}))=p_{1}(y_{1}|x_{1})p_{2}(y_{2}|x_{2}) $$ This theorem states: $$ C(p_{1}\times p_{2}) = C(p_{1}) + C(p_{2}) $$
https://en.wikipedia.org/wiki/Channel_capacity
passage: In 1767, a society for the preservation of life from accidents in water was started in Amsterdam, and in 1773, physician William Hawes began publicizing the power of artificial respiration as means of resuscitation of those who appeared drowned. This led to the formation, in 1774, of the Society for the Recovery of Persons Apparently Drowned, later the Royal Humane Society, who did much to promote resuscitation. Napoleon's surgeon, Baron Dominique-Jean Larrey, is credited with creating an ambulance corps, the ambulance volantes, which included medical assistants, tasked to administer first aid in battle. In 1859, Swiss businessman Jean-Henri Dunant witnessed the aftermath of the Battle of Solferino, and his work led to the formation of the Red Cross, with a key stated aim of "aid to sick and wounded soldiers in the field". The Red Cross and Red Crescent are still the largest provider of first aid worldwide. In 1870, Prussian military surgeon Friedrich von Esmarch introduced formalized first aid to the military, and first coined the term "erste hilfe" (translating to 'first aid'), including training for soldiers in the Franco-Prussian War on care for wounded comrades using pre-learnt bandaging and splinting skills, and making use of the Esmarch bandage which he designed. The bandage was issued as standard to the Prussian combatants, and also included aide-memoire pictures showing common uses.
https://en.wikipedia.org/wiki/First_aid
passage: Let $$ S $$ be a set of places of $$ K. $$ Define the set of the -adeles of as $$ \mathbb{A}_{K,S} := {\prod_{v \in S}}^' K_v. $$ Furthermore, if $$ \mathbb{A}_K^S := {\prod_{v \notin S}}^' K_v $$ the result is: $$ \mathbb{A}_K=\mathbb{A}_{K,S} \times \mathbb{A}_K^S. $$ ### The adele ring of rationals By Ostrowski's theorem the places of $$ \Q $$ are $$ \{p \in \N :p \text{ prime}\} \cup \{\infty\}, $$ it is possible to identify a prime $$ p $$ with the equivalence class of the $$ p $$ -adic absolute value and $$ \infty $$ with the equivalence class of the absolute value $$ |\cdot|_\infty $$ defined as: $$ \forall x \in \Q: \quad |x|_\infty:= \begin{cases} x & x \geq 0 \\ -x & x < 0 \end{cases} $$ The completion of $$ \Q $$ with respect to the place $$ p $$ is $$ \Q_p $$ with valuation ring $$ \Z_p. $$
https://en.wikipedia.org/wiki/Adele_ring
passage: Moreover, in the category of CW complexes and cellular maps, cellular homology can be interpreted as a homology theory. To compute an extraordinary (co)homology theory for a CW complex, the Atiyah–Hirzebruch spectral sequence is the analogue of cellular homology. Some examples: - For the sphere, $$ S^n, $$ take the cell decomposition with two cells: a single 0-cell and a single n-cell. The cellular homology chain complex $$ C_* $$ and homology are given by: $$ C_k = \begin{cases} \Z & k \in \{0,n\} \\ 0 & k \notin \{0,n\} \end{cases} \quad H_k = \begin{cases} \Z & k \in \{0,n\} \\ 0 & k \notin \{0,n\} \end{cases} $$ since all the differentials are zero.
https://en.wikipedia.org/wiki/CW_complex
passage: ## History Around 1735, Leonhard Euler discovered the formula $$ V - E + F = 2 $$ relating the number of vertices (V), edges (E) and faces (F) of a convex polyhedron, and hence of a planar graph. The study and generalization of this formula, specifically by Cauchy (1789–1857) and L'Huilier (1750–1840), boosted the study of topology. In 1827, Carl Friedrich Gauss published General investigations of curved surfaces, which in section 3 defines the curved surface in a similar manner to the modern topological understanding: "A curved surface is said to possess continuous curvature at one of its points A, if the direction of all the straight lines drawn from A to points of the surface at an infinitesimal distance from A are deflected infinitesimally from one and the same plane passing through A." Yet, "until Riemann's work in the early 1850s, surfaces were always dealt with from a local point of view (as parametric surfaces) and topological issues were never considered". "Möbius and Jordan seem to be the first to realize that the main problem about the topology of (compact) surfaces is to find invariants (preferably numerical) to decide the equivalence of surfaces, that is, to decide whether two surfaces are homeomorphic or not. " The subject is clearly defined by Felix Klein in his "Erlangen Program" (1872): the geometry invariants of arbitrary continuous transformation, a kind of geometry.
https://en.wikipedia.org/wiki/Topological_space
passage: .3141.7032.0522.4732.7713.0573.4213.690280.6830.8551.0561.3131.7012.0482.4672.7633.0473.4083.674290.6830.8541.0551.3111.6992.0452.4622.7563.0383.3963.659300.6830.8541.0551.3101.6972.0422.4572.7503.0303.3853.646400.6810.8511.0501.3031.6842.0212.4232.7042.9713.3073.551500.6790.8491.0471.2991.6762.0092.4032.6782.9373.2613.496600.6790.84
https://en.wikipedia.org/wiki/Student%27s_t-distribution
passage: The field of digital image processing is the study of algorithms for their transformation. ### Raster file formats Most users come into contact with raster images through digital cameras, which use any of several image file formats. Some digital cameras give access to almost all the data captured by the camera, using a raw image format. The Universal Photographic Imaging Guidelines (UPDIG) suggests these formats be used when possible since raw files produce the best quality images. These file formats allow the photographer and the processing agent the greatest level of control and accuracy for output. Their use is inhibited by the prevalence of proprietary information (trade secrets) for some camera makers, but there have been initiatives such as OpenRAW to influence manufacturers to release these records publicly. An alternative may be Digital Negative (DNG), a proprietary Adobe product described as "the public, archival format for digital camera raw data". Although this format is not yet universally accepted, support for the product is growing, and increasingly professional archivists and conservationists, working for respectable organizations, variously suggest or recommend DNG for archival purposes (Archaeology Data Service / Digital Antiquity: Guides to Good Practice - Section 3 Archiving Raster Images - File Formats; Inter-University Consortium for Political and Social Research: Obsolescence - File Formats and Software; The J. Paul Getty Museum - Department of Photographs: Rapid Capture Backlog Project - Presentation; Archives Association of British Columbia: Acquisition and Preservation Strategies (Rosaleen Hill)). ## Vector Vector images are based on mathematical geometry (vectors).
https://en.wikipedia.org/wiki/Digital_image
passage: Problem in the extended domain $$ \Omega $$ for the new solution $$ u_{\epsilon}(x) $$ : $$ L_\epsilon u_\epsilon = - \phi^\epsilon(x), x = (x_1, x_2, \dots , x_n) \in \Omega $$ $$ l_\epsilon u_\epsilon = g^\epsilon(x), x \in \partial \Omega $$ It is necessary to pose the problem in the extended domain so that the following condition is fulfilled: $$ u_\epsilon (x) \xrightarrow[\epsilon \rightarrow 0]{ } u(x), x \in D $$ ## Simple example, 1-dimensional problem $$ \frac{d^2u}{dx^2} = -2, \quad 0 < x < 1 \quad (1) $$ $$ u(0) = 0, u(1) = 0 $$ ### Prolongation by leading coefficients Let $$ u_\epsilon(x) $$ be the solution of the problem: $$ \frac{d}{dx}k^\epsilon(x)\frac{du_\epsilon}{dx} = - \phi^{\epsilon}(x), 0 < x < 2 \quad (2) $$ The discontinuous coefficient $$ k^{\epsilon}(x) $$ and the right-hand side $$ \phi^{\epsilon}(x) $$ of the previous equation are obtained from the expressions:
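As a concrete illustration of the prolongation-by-leading-coefficients idea, the sketch below solves the extended 1-D problem numerically. It assumes the common choice $$ k^\epsilon = 1 $$, $$ \phi^\epsilon = 2 $$ on the original interval (0, 1) and $$ k^\epsilon = 1/\epsilon $$, $$ \phi^\epsilon = 0 $$ on the fictitious extension (1, 2); the exact expressions are truncated in the passage, so this choice is an assumption. With it, the restriction of $$ u_\epsilon $$ to (0, 1) should approach the exact solution $$ u(x) = x(1-x) $$ as $$ \epsilon \to 0 $$.

```python
# Finite-difference sketch of the fictitious domain method for u'' = -2 on (0,1),
# u(0)=u(1)=0, extended to (0,2) via a large leading coefficient on (1,2).
# The coefficient/right-hand-side choice is an assumption (truncated in the passage).
import numpy as np

def solve_extended(eps, n=400):
    h = 2.0 / n
    x = np.linspace(0.0, 2.0, n + 1)
    k_mid = np.where(x[:-1] + h / 2 < 1.0, 1.0, 1.0 / eps)   # k^eps at cell midpoints
    phi = np.where(x < 1.0, 2.0, 0.0)                        # phi^eps at grid points

    # Assemble (k u')' = -phi at interior nodes 1..n-1, with u(0) = u(2) = 0.
    A = np.zeros((n - 1, n - 1))
    b = -phi[1:n] * h**2
    for i in range(1, n):
        kl, kr = k_mid[i - 1], k_mid[i]
        if i > 1:
            A[i - 1, i - 2] = kl
        A[i - 1, i - 1] = -(kl + kr)
        if i < n - 1:
            A[i - 1, i] = kr
    u = np.zeros(n + 1)
    u[1:n] = np.linalg.solve(A, b)
    return x, u

x, u = solve_extended(eps=1e-6)
inside = x < 1.0
# Should be small: u_eps restricted to (0,1) matches the exact solution x(1-x).
print(np.max(np.abs(u[inside] - x[inside] * (1 - x[inside]))))
```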
https://en.wikipedia.org/wiki/Fictitious_domain_method
passage: In fact, code verification makes the JVM different from a classic stack architecture, whose efficient emulation with a JIT compiler is more complicated and is typically carried out by a slower interpreter. Additionally, the interpreter used by the default JVM is a special type known as a Template Interpreter, which translates bytecode directly to native, register-based machine language rather than emulating a stack like a typical interpreter. In many aspects the HotSpot Interpreter can be considered a JIT compiler rather than a true interpreter, meaning the stack architecture that the bytecode targets is not actually used in the implementation, but is merely a specification for the intermediate representation that can well be implemented in a register-based architecture. Another instance of a stack architecture being merely a specification and implemented in a register-based virtual machine is the Common Language Runtime. The original specification for the bytecode verifier used natural language that was incomplete or incorrect in some respects. A number of attempts have been made to specify the JVM as a formal system. By doing this, the security of current JVM implementations can more thoroughly be analyzed, and potential security exploits prevented. It will also be possible to optimize the JVM by skipping unnecessary safety checks, if the application being run is proven to be safe. #### Secure execution of remote code A virtual machine architecture allows very fine-grained control over the actions that code within the machine is permitted to take.
https://en.wikipedia.org/wiki/Java_virtual_machine
passage: and update each $$ f_j^{(\ell)} $$ in turn to be the smoothed fit for the residuals of all the others: $$ \hat{f_j}^{(\ell)} \leftarrow \text{Smooth}[\lbrace y_i - \hat{\alpha} - \sum_{k \neq j} \hat{f_k}(x_{ik}) \rbrace_1^N ] $$ Looking at the abbreviated form it is easy to see the backfitting algorithm as equivalent to the Gauss–Seidel method for linear smoothing operators S. ## Explicit derivation for two dimensions Following, we can formulate the backfitting algorithm explicitly for the two dimensional case. We have: $$ f_1 = S_1(Y-f_2), f_2 = S_2(Y-f_1) $$ If we denote $$ \hat{f}_1^{(i)} $$ as the estimate of $$ f_1 $$ in the ith updating step, the backfitting steps are $$ \hat{f}_1^{(i)} = S_1[Y - \hat{f}_2^{(i-1)}], \hat{f}_2^{(i)} = S_2[Y - \hat{f}_1^{(i)}] $$ By induction we get $$
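A minimal sketch of the two-smoother backfitting loop, using a Nadaraya–Watson kernel smoother in place of the generic smoothing operators $$ S_1, S_2 $$; the smoother choice, bandwidth and synthetic data are illustrative assumptions, not part of the original description:

```python
# Backfitting for an additive model y = alpha + f1(x1) + f2(x2) + noise,
# with a Nadaraya-Watson kernel smoother standing in for S1 and S2.
import numpy as np

rng = np.random.default_rng(0)
n = 500
x1 = rng.uniform(-3, 3, n)
x2 = rng.uniform(-3, 3, n)
y = 2.0 + np.sin(x1) + 0.5 * x2**2 + rng.normal(0, 0.3, n)

def smooth(x, r, bandwidth=0.4):
    """Kernel smoother: fitted values of the residuals r at the sample points x."""
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w @ r) / w.sum(axis=1)

alpha = y.mean()
f1 = np.zeros(n)
f2 = np.zeros(n)
for _ in range(20):                      # backfitting iterations
    f1 = smooth(x1, y - alpha - f2)
    f1 -= f1.mean()                      # center to keep alpha identifiable
    f2 = smooth(x2, y - alpha - f1)
    f2 -= f2.mean()

# f1 should roughly track sin(x1) and f2 should track 0.5*x2^2 (up to centering).
print(np.corrcoef(f1, np.sin(x1))[0, 1], np.corrcoef(f2, x2**2)[0, 1])
```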
https://en.wikipedia.org/wiki/Backfitting_algorithm
passage: The divergences that are significant are the "ultraviolet" (UV) ones. An ultraviolet divergence can be described as one that comes from - the region in the integral where all particles in the loop have large energies and momenta, - very short wavelengths and high-frequency fluctuations of the fields, in the path integral for the field, - very short proper-time between particle emission and absorption, if the loop is thought of as a sum over particle paths. So these divergences are short-distance, short-time phenomena. There are exactly three one-loop divergent loop diagrams in quantum electrodynamics: The three divergences correspond to the three parameters in the theory under consideration: 1. The field normalization Z. 1. The mass of the electron. 1. The charge of the electron. The second class of divergence, called an infrared divergence, is due to massless particles, like the photon. Every process involving charged particles emits infinitely many coherent photons of infinite wavelength, and the amplitude for emitting any finite number of photons is zero. For photons, these divergences are well understood. For example, at the 1-loop order, the vertex function has both ultraviolet and infrared divergences. In contrast to the ultraviolet divergence, the infrared divergence does not require the renormalization of a parameter in the theory involved.
https://en.wikipedia.org/wiki/Renormalization
passage: ## Introduction The integral of a positive real function $$ f $$ between boundaries $$ a $$ and $$ b $$ can be interpreted as the area under the graph of $$ f $$ , between $$ a $$ and $$ b $$ . This notion of area fits some functions, mainly piecewise continuous functions, including elementary functions, for example polynomials. However, the graphs of other functions, for example the Dirichlet function, do not fit well with the notion of area. Graphs like that of the latter raise the question: for which class of functions does "area under the curve" make sense? The answer to this question has great theoretical importance. As part of a general movement toward rigor in mathematics in the nineteenth century, mathematicians attempted to put integral calculus on a firm foundation. The Riemann integral—proposed by Bernhard Riemann (1826–1866)—is a broadly successful attempt to provide such a foundation. Riemann's definition starts with the construction of a sequence of easily calculated areas that converge to the integral of a given function. This definition is successful in the sense that it gives the expected answer for many already-solved problems, and gives useful results for many other problems. However, Riemann integration does not interact well with taking limits of sequences of functions, making such limiting processes difficult to analyze. This is important, for instance, in the study of Fourier series, Fourier transforms, and other topics. The Lebesgue integral describes better how and when it is possible to take limits under the integral sign (via the monotone convergence theorem and dominated convergence theorem).
https://en.wikipedia.org/wiki/Lebesgue_integral
passage:
| Type | Spacetime dimensions | Details |
|---|---|---|
| I | 10 | Supersymmetry between forces and matter, with both closed strings and open strings, no tachyon, group symmetry is SO(32) |
| IIA | 10 | Supersymmetry between forces and matter, with closed strings and open strings bound to D-branes, no tachyon, massless fermions spin both ways (nonchiral) |
| IIB | 10 | Supersymmetry between forces and matter, with closed strings and open strings bound to D-branes, no tachyon, massless fermions only spin one way (chiral) |
| HO | 10 | Supersymmetry between forces and matter, with closed strings only, no tachyon, heterotic, meaning right moving and left moving strings differ, group symmetry is SO(32) |
| HE | 10 | Supersymmetry between forces and matter, with closed strings only, no tachyon, heterotic, meaning right moving and left moving strings differ, group symmetry is E8×E8 |

Note that in the type IIA and type IIB string theories closed strings are allowed to move everywhere throughout the ten-dimensional space-time (called the bulk), while open strings have their ends attached to D-branes, which are membranes of lower dimensionality (their dimension is odd - 1, 3, 5, 7 or 9 - in type IIA and even - 0, 2, 4, 6 or 8 - in type IIB, including the time direction). Before the 1990s, string theorists believed there were five distinct superstring theories: type I, types IIA and IIB, and the two heterotic string theories (SO(32) and E8×E8).
https://en.wikipedia.org/wiki/String_duality
passage: In gauge theory and mathematical physics, a topological quantum field theory (or topological field theory or TQFT) is a quantum field theory that computes topological invariants. While TQFTs were invented by physicists, they are also of mathematical interest, being related to, among other things, knot theory and the theory of four-manifolds in algebraic topology, and to the theory of moduli spaces in algebraic geometry. Donaldson, Jones, Witten, and Kontsevich have all won Fields Medals for mathematical work related to topological field theory. In condensed matter physics, topological quantum field theories are the low-energy effective theories of topologically ordered states, such as fractional quantum Hall states, string-net condensed states, and other strongly correlated quantum liquid states. ## Overview In a topological field theory, correlation functions do not depend on the metric of spacetime. This means that the theory is not sensitive to changes in the shape of spacetime; if spacetime warps or contracts, the correlation functions do not change. Consequently, they are topological invariants. Topological field theories are not very interesting on flat Minkowski spacetime used in particle physics. Minkowski space can be contracted to a point, so a TQFT applied to Minkowski space results in trivial topological invariants. Consequently, TQFTs are usually applied to curved spacetimes, such as, for example, Riemann surfaces.
https://en.wikipedia.org/wiki/Topological_quantum_field_theory
passage: The effect is to increase the gate voltage necessary to establish the channel. This change in channel strength by application of reverse bias is called the "body effect". Using an nMOS example, the gate-to-body bias VGB positions the conduction-band energy levels, while the source-to-body bias VSB positions the electron Fermi level near the interface, deciding occupancy of these levels near the interface, and hence the strength of the inversion layer or channel. The body effect upon the channel can be described using a modification of the threshold voltage, approximated by the following equation: $$ V_\text{TB} = V_{T0} + \gamma \left( \sqrt{V_\text{SB} + 2\varphi_B} - \sqrt{2\varphi_B} \right), $$ where VTB is the threshold voltage with substrate bias present, and VT0 is the zero-VSB value of threshold voltage, $$ \gamma $$ is the body effect parameter, and 2φB is the approximate potential drop between surface and bulk across the depletion layer when $$ V_\text{SB} = 0 $$ and gate bias is sufficient to ensure that a channel is present. As this equation shows, a reverse bias causes an increase in threshold voltage VTB and therefore demands a larger gate voltage before the channel populates. The body can be operated as a second gate, and is sometimes referred to as the "back gate"; the body effect is sometimes called the "back-gate effect".
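A small numerical illustration of the body-effect formula; the parameter values below (zero-bias threshold, body-effect coefficient, surface potential) are arbitrary illustrative numbers, not values from the passage:

```python
# Threshold-voltage shift due to the body effect:
# V_TB = V_T0 + gamma * (sqrt(V_SB + 2*phi_B) - sqrt(2*phi_B))
import math

def threshold_with_body_effect(v_sb, v_t0=0.45, gamma=0.4, two_phi_b=0.8):
    """All quantities in volts; the default parameter values are illustrative assumptions."""
    return v_t0 + gamma * (math.sqrt(v_sb + two_phi_b) - math.sqrt(two_phi_b))

for v_sb in (0.0, 0.5, 1.0, 2.0):
    print(f"V_SB = {v_sb:.1f} V  ->  V_TB = {threshold_with_body_effect(v_sb):.3f} V")
```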
https://en.wikipedia.org/wiki/MOSFET
passage: Any diagonal unitary matrix must have complex numbers of absolute value 1 on the main diagonal. We can therefore write $$ A = S\,\operatorname{diag}\left(e^{i\theta_1}, \dots, e^{i\theta_n}\right)\,S^{-1}. $$ A path in U(n) from the identity to A is then given by $$ t \mapsto S \, \operatorname{diag}\left(e^{it\theta_1}, \dots, e^{it\theta_n}\right)\,S^{-1} . $$ The unitary group is not simply connected; the fundamental group of U(n) is infinite cyclic for all n: $$ \pi_1(\operatorname{U}(n)) \cong \mathbf{Z} . $$ To see this, note that the above splitting of U(n) as a semidirect product of SU(n) and U(1) induces a topological product structure on U(n), so that $$ \pi_1(\operatorname{U}(n)) \cong \pi_1(\operatorname{SU}(n)) \times \pi_1(\operatorname{U}(1)). $$ Now the first unitary group U(1) is topologically a circle, which is well known to have a fundamental group isomorphic to Z, whereas SU(n) is simply connected. The determinant map induces an isomorphism of fundamental groups, with the splitting inducing the inverse.
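A quick numerical check of this construction; a sketch, assuming a random unitary matrix with distinct eigenvalues so that the eigenvector matrix $$ S $$ returned by the eigensolver is itself (numerically) unitary:

```python
# Verify numerically that t -> S diag(exp(i t theta)) S^{-1} is a path of
# unitary matrices from the identity to a given unitary A.
import numpy as np

rng = np.random.default_rng(1)
n = 4
# Random unitary A via QR decomposition of a complex Gaussian matrix.
Z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A, _ = np.linalg.qr(Z)

eigvals, S = np.linalg.eig(A)          # A = S diag(e^{i theta}) S^{-1}
theta = np.angle(eigvals)

def path(t):
    return S @ np.diag(np.exp(1j * t * theta)) @ np.linalg.inv(S)

I = np.eye(n)
print(np.allclose(path(0.0), I), np.allclose(path(1.0), A))          # endpoints: identity and A
print(max(np.linalg.norm(path(t) @ path(t).conj().T - I)             # unitarity along the path
          for t in np.linspace(0, 1, 11)))
```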
https://en.wikipedia.org/wiki/Unitary_group
passage: This theorem can therefore be interpreted in the following manner: “given any effective procedure to transform programs, there is always a program that, when modified by the procedure, does exactly what it did before”, or: “it’s impossible to write a program that changes the extensional behaviour of all programs”. ### Proof of the fixed-point theorem The proof uses a particular total computable function $$ h $$ , defined as follows. Given a natural number $$ x $$ , the function $$ h $$ outputs the index of the partial computable function that performs the following computation: Given an input $$ y $$ , first attempt to compute $$ \varphi_{x}(x) $$ . If that computation returns an output $$ e $$ , then compute $$ \varphi_e(y) $$ and return its value, if any. Thus, for all indices $$ x $$ of partial computable functions, if $$ \varphi_x(x) $$ is defined, then $$ \varphi_{h(x)} \simeq \varphi_{\varphi_x(x)} $$ . If $$ \varphi_x(x) $$ is not defined, then $$ \varphi_{h(x)} $$ is a function that is nowhere defined.
https://en.wikipedia.org/wiki/Kleene%27s_recursion_theorem
passage: These molecules are increasingly considered to be nanomachines. Researchers have used DNA to construct nano-dimensioned four-bar linkages. McCarthy, C, DNA Origami Mechanisms and Machines | Mechanical Design 101, 2014 ## Impact ### Mechanization and automation Mechanization (or mechanisation in BE) is providing human operators with machinery that assists them with the muscular requirements of work or displaces muscular work. In some fields, mechanization includes the use of hand tools. In modern usage, such as in engineering or economics, mechanization implies machinery more complex than hand tools and would not include simple devices such as an un-geared horse or donkey mill. Devices that cause speed changes or changes to or from reciprocating to rotary motion, using means such as gears, pulleys or sheaves and belts, shafts, cams and cranks, usually are considered machines. After electrification, when most small machinery was no longer hand powered, mechanization was synonymous with motorized machines. Automation is the use of control systems and information technologies to reduce the need for human work in the production of goods and services. In the scope of industrialization, automation is a step beyond mechanization. Whereas mechanization provides human operators with machinery to assist them with the muscular requirements of work, automation greatly decreases the need for human sensory and mental requirements as well. Automation plays an increasingly important role in the world economy and in daily experience. ### Automata An automaton (plural: automata or automatons) is a self-operating machine.
https://en.wikipedia.org/wiki/Machine
passage: The equation describing these standing waves is given by: $$ E=E_0 \sin\left(\frac{n\pi}{L}x\right) $$ where E0 is the magnitude of the electric field amplitude, and E is the magnitude of the electric field at position x. From this basis, Planck's law was derived. In 1911, Ernest Rutherford concluded, based on alpha particle scattering, that an atom has a central pointlike proton. He also thought that an electron would still be attracted to the proton by Coulomb's law, which he had verified still held at small scales. As a result, he believed that electrons revolved around the proton. Niels Bohr, in 1913, combined the Rutherford model of the atom with the quantisation ideas of Planck. Only specific and well-defined orbits of the electron could exist, which also do not radiate light. In jumping between orbits, the electron would emit or absorb light corresponding to the difference in energy of the orbits. His prediction of the energy levels was then consistent with observation. These results, based on a discrete set of specific standing waves, were inconsistent with the continuous classical oscillator model. Work by Albert Einstein in 1905 on the photoelectric effect led to the association of a light wave of frequency $$ \nu $$ with a photon of energy $$ h\nu $$ .
https://en.wikipedia.org/wiki/Atomic%2C_molecular%2C_and_optical_physics%23Optical_physics
passage: This is known as the Maurer–Cartan equation. It is often written as $$ d\omega + \frac{1}{2}[\omega,\omega]=0. $$ Here $$ [\cdot,\cdot] $$ denotes the bracket of Lie algebra-valued forms. ## Maurer–Cartan frame One can also view the Maurer–Cartan form as being constructed from a Maurer–Cartan frame. Let $$ E_1, \dots, E_n $$ be a basis of sections of $$ TG $$ consisting of left-invariant vector fields, and $$ \theta^1, \dots, \theta^n $$ be the dual basis of sections of $$ T^*G $$ such that $$ \theta^i(E_j) = \delta^i_j $$ , the Kronecker delta. Then $$ E_1, \dots, E_n $$ is a Maurer–Cartan frame, and $$ \theta^1, \dots, \theta^n $$ is a Maurer–Cartan coframe. Since each $$ E_i $$ is left-invariant, applying the Maurer–Cartan form to it simply returns the value of $$ E_i $$ at the identity. Thus $$ \omega(E_i) = E_i(e) \in \mathfrak{g} $$ . Thus, the Maurer–Cartan form can be written $$ \omega = \sum_i E_i(e) \otimes \theta^i . $$ Suppose that the Lie brackets of the vector fields $$ E_i $$ are given by $$ [E_i,E_j]=\sum_k{c_{ij}}^kE_k. $$ The quantities $$ {c_{ij}}^k $$ are the structure constants of the Lie algebra (relative to the basis $$ E_i $$ ).
https://en.wikipedia.org/wiki/Maurer%E2%80%93Cartan_form
passage: First, the set of its initial states replaces the set of final states; second, its transition rules are oriented conversely: q(f(x1,...,xn)) → f(q1(x1),...,qn(xn)), for an n-ary symbol f, with the variables xi denoting subtrees. That is, members of Δ are here rewrite rules from nodes whose roots are states to nodes whose children's roots are states. A top-down automaton starts in some of its initial states at the root and moves downward along branches of the tree, associating along a run a state with each subterm inductively. A tree is accepted if every branch can be traversed this way. A tree automaton is called deterministic (abbreviated DFTA) if no two rules from Δ have the same left-hand side; otherwise it is called nondeterministic (NFTA). Non-deterministic top-down tree automata have the same expressive power as non-deterministic bottom-up ones; the transition rules are simply reversed, and the final states become the initial states. In contrast, deterministic top-down tree automata are less powerful than their bottom-up counterparts, because in a deterministic tree automaton no two transition rules have the same left-hand side. For tree automata, transition rules are rewrite rules; and for top-down ones, the left-hand side will be parent nodes.
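A small sketch of a deterministic bottom-up tree automaton in the spirit of this description; the Boolean-expression alphabet and state names are illustrative choices, not taken from the passage. Trees are nested tuples, states are assigned to leaves first and then propagated upward, and a tree is accepted when the root's state is final.

```python
# Deterministic bottom-up tree automaton over the ranked alphabet
# {true/0, false/0, not/1, and/2, or/2}, accepting trees that evaluate to true.
# States: 'q1' (subtree is true) and 'q0' (subtree is false); final states: {'q1'}.

LEAF_RULES = {"true": "q1", "false": "q0"}       # rules for nullary symbols
RULES = {                                        # rules f(child states) -> state
    ("not", ("q0",)): "q1", ("not", ("q1",)): "q0",
    ("and", ("q1", "q1")): "q1", ("and", ("q1", "q0")): "q0",
    ("and", ("q0", "q1")): "q0", ("and", ("q0", "q0")): "q0",
    ("or",  ("q0", "q0")): "q0", ("or",  ("q1", "q0")): "q1",
    ("or",  ("q0", "q1")): "q1", ("or",  ("q1", "q1")): "q1",
}
FINAL = {"q1"}

def run(tree):
    """Assign a state to each node, leaves first (bottom-up)."""
    symbol, *children = tree
    if not children:
        return LEAF_RULES[symbol]
    child_states = tuple(run(c) for c in children)
    return RULES[(symbol, child_states)]

def accepts(tree):
    return run(tree) in FINAL

t = ("or", ("false",), ("and", ("true",), ("not", ("false",))))
print(accepts(t))   # True: false or (true and not false)
```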
https://en.wikipedia.org/wiki/Tree_automaton
passage: ## Relation to convergence of random variables A sequence $$ \{X_n\} $$ converges to $$ X $$ in the $$ L_1 $$ norm if and only if it converges in measure to $$ X $$ and it is uniformly integrable. In probability terms, a sequence of random variables converging in probability also converges in the mean if and only if it is uniformly integrable. This is a generalization of Lebesgue's dominated convergence theorem; see the Vitali convergence theorem. ## Citations ## References - Diestel, J. and Uhl, J. (1977). Vector measures, Mathematical Surveys 15, American Mathematical Society, Providence, RI. Category:Martingale theory
https://en.wikipedia.org/wiki/Uniform_integrability
passage: Routing is the process of selecting a path for traffic in a network or between or across multiple networks. Broadly, routing is performed in many types of networks, including circuit-switched networks, such as the public switched telephone network (PSTN), and computer networks, such as the Internet. In packet switching networks, routing is the higher-level decision making that directs network packets from their source toward their destination through intermediate network nodes by specific packet forwarding mechanisms. Packet forwarding is the transit of network packets from one network interface to another. Intermediate nodes are typically network hardware devices such as routers, gateways, firewalls, or switches. General-purpose computers also forward packets and perform routing, although they have no specially optimized hardware for the task. The routing process usually directs forwarding on the basis of routing tables. Routing tables maintain a record of the routes to various network destinations. Routing tables may be specified by an administrator, learned by observing network traffic or built with the assistance of routing protocols. Routing, in a narrower sense of the term, often refers to IP routing and is contrasted with bridging. IP routing assumes that network addresses are structured and that similar addresses imply proximity within the network. Structured addresses allow a single routing table entry to represent the route to a group of devices. In large networks, structured addressing (routing, in the narrow sense) outperforms unstructured addressing (bridging). Routing has become the dominant form of addressing on the Internet.
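As an illustration of how a routing table exploits structured addresses, the sketch below does a longest-prefix-match lookup with Python's standard ipaddress module; the table entries and next-hop names are made up for the example:

```python
# Longest-prefix-match lookup in a tiny routing table (illustrative entries).
import ipaddress

routing_table = [                      # (destination prefix, next hop)
    ("0.0.0.0/0",   "default-gw"),
    ("10.0.0.0/8",  "core-router"),
    ("10.1.0.0/16", "site-router"),
    ("10.1.2.0/24", "edge-switch"),
]

def lookup(destination):
    addr = ipaddress.ip_address(destination)
    matches = [(ipaddress.ip_network(prefix), hop)
               for prefix, hop in routing_table
               if addr in ipaddress.ip_network(prefix)]
    # Prefer the most specific (longest) matching prefix.
    net, hop = max(matches, key=lambda m: m[0].prefixlen)
    return net, hop

print(lookup("10.1.2.7"))    # -> (IPv4Network('10.1.2.0/24'), 'edge-switch')
print(lookup("10.9.9.9"))    # -> (IPv4Network('10.0.0.0/8'), 'core-router')
print(lookup("192.0.2.1"))   # -> (IPv4Network('0.0.0.0/0'), 'default-gw')
```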
https://en.wikipedia.org/wiki/Routing
passage: As this loop is infinitely recursive, sets that are the edges violate the axiom of foundation. In particular, there is no transitive closure of set membership for such hypergraphs. Although such structures may seem strange at first, they can be readily understood by noting that the equivalent generalization of their Levi graph is no longer bipartite, but is rather just some general directed graph. The generalized incidence matrix for such hypergraphs is, by definition, a square matrix, of a rank equal to the total number of vertices plus edges. Thus, for the above example, the incidence matrix is simply $$ \left[ \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix} \right]. $$
https://en.wikipedia.org/wiki/Hypergraph
passage: In mathematics, to approximate a derivative to an arbitrary order of accuracy, it is possible to use the finite difference. A finite difference can be central, forward or backward. ## Central finite difference This table contains the coefficients of the central differences, for several orders of accuracy and with uniform grid spacing:

| Derivative | Accuracy | −5 | −4 | −3 | −2 | −1 | 0 | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 2 | | | | | −1/2 | 0 | 1/2 | | | | |
| 1 | 4 | | | | 1/12 | −2/3 | 0 | 2/3 | −1/12 | | | |
| 1 | 6 | | | −1/60 | 3/20 | −3/4 | 0 | 3/4 | −3/20 | 1/60 | | |
| 1 | 8 | | 1/280 | −4/105 | 1/5 | −4/5 | 0 | 4/5 | −1/5 | 4/105 | −1/280 | |
| 2 | 2 | | | | | 1 | −2 | 1 | | | | |
| 2 | 4 | | | | −1/12 | 4/3 | −5/2 | 4/3 | −1/12 | | | |
| 2 | 6 | | | 1/90 | −3/20 | 3/2 | −49/18 | 3/2 | −3/20 | 1/90 | | |
| 2 | 8 | | −1/560 | 8/315 | −1/5 | 8/5 | −205/72 | 8/5 | −1/5 | 8/315 | −1/560 | |
| 3 | 2 | | | | −1/2 | 1 | 0 | −1 | 1/2 | | | |
| 3 | 4 | | | 1/8 | −1 | 13/8 | 0 | −13/8 | 1 | −1/8 | | |
| 3 | 6 | | −7/240 | 3/10 | −169/120 | 61/30 | 0 | −61/30 | 169/120 | −3/10 | 7/240 | |
| 4 | 2 | | | | 1 | −4 | 6 | −4 | 1 | | | |
| 4 | 4 | | | −1/6 | 2 | −13/2 | 28/3 | −13/2 | 2 | −1/6 | | |
| 4 | 6 | | 7/240 | −2/5 | 169/60 | −122/15 | 91/8 | −122/15 | 169/60 | −2/5 | 7/240 | |
| 5 | 2 | | | −1/2 | 2 | −5/2 | 0 | 5/2 | −2 | 1/2 | | |
| 5 | 4 | | 1/6 | −3/2 | 13/3 | −29/6 | 0 | 29/6 | −13/3 | 3/2 | −1/6 | |
| 5 | 6 | −13/288 | 19/36 | −87/32 | 13/2 | −323/48 | 0 | 323/48 | … | … | … | … |
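A short check of how these coefficients are used: approximating the first and second derivatives of sin at a point with the fourth-order central stencils from the table above; the test function and step size are arbitrary choices for illustration.

```python
# Fourth-order-accurate central differences using coefficients from the table above.
import numpy as np

offsets = np.array([-2, -1, 0, 1, 2])
c1 = np.array([1/12, -2/3, 0, 2/3, -1/12])      # 1st derivative, accuracy 4
c2 = np.array([-1/12, 4/3, -5/2, 4/3, -1/12])   # 2nd derivative, accuracy 4

f, x0, h = np.sin, 1.0, 1e-2
samples = f(x0 + offsets * h)

d1 = np.dot(c1, samples) / h        # ~ cos(1.0)
d2 = np.dot(c2, samples) / h**2     # ~ -sin(1.0)
print(d1 - np.cos(x0), d2 + np.sin(x0))   # both errors are O(h^4)
```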
https://en.wikipedia.org/wiki/Finite_difference_coefficient
passage: Alternative models to the Standard Higgs Model are models which are considered by many particle physicists to solve some of the Higgs boson's existing problems. Two of the problems receiving the most current research attention are quantum triviality and the Higgs hierarchy problem. ## Overview In particle physics, elementary particles and forces give rise to the world around us. Physicists explain the behaviors of these particles and how they interact using the Standard Model—a widely accepted framework believed to explain most of the world we see around us. Initially, when these models were being developed and tested, it seemed that the mathematics behind those models, which were satisfactory in areas already tested, would also forbid elementary particles from having any mass, which showed clearly that these initial models were incomplete. In 1964 three groups of physicists almost simultaneously released papers describing how masses could be given to these particles, using approaches known as symmetry breaking. This approach allowed the particles to obtain a mass, without breaking other parts of particle physics theory that were already believed reasonably correct. This idea became known as the Higgs mechanism, and later experiments confirmed that such a mechanism does exist—but they could not show exactly how it happens. The simplest theory for how this effect takes place in nature, and the theory that became incorporated into the Standard Model, was that if one or more of a particular kind of "field" (known as a Higgs field) happened to permeate space, and if it could interact with elementary particles in a particular way, then this would give rise to a Higgs mechanism in nature.
https://en.wikipedia.org/wiki/Alternatives_to_the_Standard_Higgs_Model
passage: Parentheses around a group of indices denote symmetrization over those indices: $$ T_{(\alpha_1\alpha_2\cdots\alpha_n)} = \dfrac{1}{n!} \sum_{\sigma \in S_n} T_{\alpha_{\sigma(1)}\alpha_{\sigma(2)}\cdots\alpha_{\sigma(n)}} $$ The symmetrization is distributive over addition; $$ A_{(\alpha} \left(B_{\beta)\gamma\cdots} + C_{\beta)\gamma\cdots} \right) = A_{(\alpha}B_{\beta)\gamma\cdots} + A_{(\alpha}C_{\beta)\gamma\cdots} $$ Indices are not part of the symmetrization when they are: - not on the same level, for example: $$ A_{(\alpha}B^{\beta}{}_{\gamma)} = \dfrac{1}{2!} \left(A_{\alpha}B^{\beta}{}_{\gamma} + A_{\gamma}B^{\beta}{}_{\alpha} \right) $$ - within the parentheses and between vertical bars (i.e. |⋅⋅⋅|), modifying the previous example: $$ A_{(\alpha}B_{|\beta|}{}_{\gamma)} = \dfrac{1}{2!} \left(A_{\alpha}B_{\beta \gamma} + A_{\gamma}B_{\beta \alpha} \right) $$ Here the $$ \alpha $$ and $$ \gamma $$ indices are symmetrized, $$ \beta $$ is not.
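A small numerical check of the distributive property, treating the abstract indices as axes of NumPy arrays; the array dimension and random components are illustrative:

```python
# Numerically verify A_(alpha (B + C)_beta)gamma = A_(alpha B_beta)gamma + A_(alpha C_beta)gamma,
# where X_(alpha Y_beta)gamma = (1/2!) * (X_alpha Y_betagamma + X_beta Y_alphagamma).
import numpy as np

rng = np.random.default_rng(0)
dim = 3
A = rng.normal(size=dim)              # components A_alpha
B = rng.normal(size=(dim, dim))       # components B_betagamma
C = rng.normal(size=(dim, dim))       # components C_betagamma

def sym_ab(X, Y):
    """Symmetrize the outer product X_alpha Y_betagamma over the first two indices."""
    T = np.einsum('a,bc->abc', X, Y)
    return 0.5 * (T + T.transpose(1, 0, 2))

lhs = sym_ab(A, B + C)
rhs = sym_ab(A, B) + sym_ab(A, C)
print(np.allclose(lhs, rhs))   # True
```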
https://en.wikipedia.org/wiki/Ricci_calculus
passage: Lipids are usually defined as hydrophobic or amphipathic biological molecules but will dissolve in organic solvents such as ethanol, benzene or chloroform. The fats are a large group of compounds that contain fatty acids and glycerol; a glycerol molecule attached to three fatty acids by ester linkages is called a triacylglyceride. Several variations of the basic structure exist, including backbones such as sphingosine in sphingomyelin, and hydrophilic groups such as phosphate in phospholipids. Steroids such as sterol are another major class of lipids. Carbohydrates Carbohydrates are aldehydes or ketones, with many hydroxyl groups attached, that can exist as straight chains or rings. Carbohydrates are the most abundant biological molecules, and fill numerous roles, such as the storage and transport of energy (starch, glycogen) and structural components (cellulose in plants, chitin in animals). The basic carbohydrate units are called monosaccharides and include galactose, fructose, and most importantly glucose. Monosaccharides can be linked together to form polysaccharides in almost limitless ways. Nucleotides The two nucleic acids, DNA and RNA, are polymers of nucleotides. Each nucleotide is composed of a phosphate attached to a ribose or deoxyribose sugar group which is attached to a nitrogenous base.
https://en.wikipedia.org/wiki/Metabolism
passage: The regula falsi (false position) method The convergence rate of the bisection method could possibly be improved by using a different solution estimate. The regula falsi method calculates the new solution estimate as the x-intercept of the line segment joining the endpoints of the function on the current bracketing interval. Essentially, the root is being approximated by replacing the actual function by a line segment on the bracketing interval and then using the classical double false position formula on that line segment. More precisely, suppose that in the k-th iteration the bracketing interval is $$ (a_k, b_k) $$ . Construct the line through the points $$ (a_k, f(a_k)) $$ and $$ (b_k, f(b_k)) $$ . This line is a secant or chord of the graph of the function $$ f $$ . In point-slope form, its equation is given by $$ y - f(b_k) = \frac{f(b_k)-f(a_k)}{b_k-a_k} (x-b_k). $$ Now choose $$ c_k $$ to be the x-intercept of this line, that is, the value of $$ x $$ for which $$ y = 0 $$ , and substitute these values to obtain $$ f(b_k) + \frac{f(b_k)-f(a_k)}{b_k-a_k} (c_k-b_k) = 0. $$ Solving this equation for $$ c_k $$ gives: $$ c_k = b_k - f(b_k)\frac{b_k-a_k}{f(b_k)-f(a_k)} = \frac{a_k f(b_k) - b_k f(a_k)}{f(b_k)-f(a_k)}. $$
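A compact implementation of the iteration just described; the stopping tolerance and the sample function are illustrative choices:

```python
# Regula falsi (false position): keep a bracketing interval [a, b] with
# f(a) and f(b) of opposite signs, and replace one endpoint by the
# x-intercept c of the secant through (a, f(a)) and (b, f(b)).
def regula_falsi(f, a, b, tol=1e-12, max_iter=100):
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)   # secant x-intercept
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:                      # root lies in [a, c]
            b, fb = c, fc
        else:                                # root lies in [c, b]
            a, fa = c, fc
    return c

# Example: root of cos(x) - x near 0.739.
import math
print(regula_falsi(lambda x: math.cos(x) - x, 0.0, 1.0))
```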
https://en.wikipedia.org/wiki/Regula_falsi
passage: ## Model constructions As in classical model theory, there are methods for constructing a new Kripke model from other models. The natural homomorphisms in Kripke semantics are called p-morphisms (which is short for pseudo-epimorphism, but the latter term is rarely used). A p-morphism of Kripke frames $$ \langle W,R\rangle $$ and $$ \langle W',R'\rangle $$ is a mapping $$ f\colon W\to W' $$ such that - f preserves the accessibility relation, i.e., u R v implies f(u) R’ f(v), - whenever f(u) R’ v’, there is a v ∈ W such that u R v and f(v)  = v’. A p-morphism of Kripke models $$ \langle W,R,\Vdash\rangle $$ and $$ \langle W',R',\Vdash'\rangle $$ is a p-morphism of their underlying frames $$ f\colon W\to W' $$ , which satisfies $$ w\Vdash p $$ if and only if $$ f(w)\Vdash'p $$ , for any propositional variable p. P-morphisms are a special kind of bisimulations.
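A finite check of the two p-morphism conditions for explicitly given frames; the example frames and the map f below are made-up illustrations:

```python
# Check whether a map f between two finite Kripke frames (W, R) and (W', R')
# is a p-morphism: (1) u R v implies f(u) R' f(v);
#                  (2) f(u) R' v' implies there is v with u R v and f(v) = v'.
def is_p_morphism(W, R, W2, R2, f):
    forth = all((f[u], f[v]) in R2 for (u, v) in R)
    back = all(any((u, v) in R and f[v] == v2 for v in W)
               for u in W for v2 in W2 if (f[u], v2) in R2)
    return forth and back

# Illustrative frames: a two-step chain with a reflexive endpoint, mapped
# onto a single reflexive world.
W = {"w0", "w1", "w2"}
R = {("w0", "w1"), ("w1", "w2"), ("w2", "w2")}
W2 = {"v"}
R2 = {("v", "v")}
f = {"w0": "v", "w1": "v", "w2": "v"}

print(is_p_morphism(W, R, W2, R2, f))   # True: both conditions hold
```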
https://en.wikipedia.org/wiki/Kripke_semantics