passage: ### Parallel programming Load balancing clusters such as web servers use cluster architectures to support a large number of users and typically each user request is routed to a specific node, achieving task parallelism without multi-node cooperation, given that the main goal of the system is providing rapid user access to shared data. However, "computer clusters" which perform complex computations for a small number of users need to take advantage of the parallel processing capabilities of the cluster and partition "the same computation" among several nodes. Automatic parallelization of programs remains a technical challenge, but parallel programming models can be used to effectuate a higher degree of parallelism via the simultaneous execution of separate portions of a program on different processors. ### Debugging and monitoring Developing and debugging parallel programs on a cluster requires parallel language primitives and suitable tools such as those discussed by the High Performance Debugging Forum (HPDF) which resulted in the HPD specifications. Tools such as TotalView were then developed to debug parallel implementations on computer clusters which use Message Passing Interface (MPI) or Parallel Virtual Machine (PVM) for message passing. The University of California, Berkeley Network of Workstations (NOW) system gathers cluster data and stores them in a database, while a system such as PARMON, developed in India, allows visually observing and managing large clusters. Application checkpointing can be used to restore a given state of the system when a node fails during a long multi-node computation.
https://en.wikipedia.org/wiki/Computer_cluster
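As a minimal illustration of partitioning "the same computation" among several nodes with message passing, the sketch below assumes Python with the mpi4py bindings and an MPI launcher such as mpirun; the problem, file name, and sizes are illustrative and not taken from the passage.
```python
# A minimal sketch of splitting one computation across cluster processes with
# MPI, assuming mpi4py and an MPI runtime (e.g. `mpirun -n 4 python sum_squares.py`).
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's id within the job
size = comm.Get_size()   # total number of cooperating processes

N = 1_000_000
# Each rank handles an interleaved slice of the same overall computation.
partial = sum(i * i for i in range(rank, N, size))

# Combine the partial results on rank 0 via message passing.
total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print(f"sum of squares below {N}: {total}")
```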
passage: Although not all recursive functions have an explicit solution, the Tower of Hanoi sequence can be reduced to an explicit formula. An explicit formula for Towers of Hanoi: h1 = 1 = 2^1 - 1 h2 = 3 = 2^2 - 1 h3 = 7 = 2^3 - 1 h4 = 15 = 2^4 - 1 h5 = 31 = 2^5 - 1 h6 = 63 = 2^6 - 1 h7 = 127 = 2^7 - 1 In general: hn = 2^n - 1, for all n >= 1 ### Binary search The binary search algorithm is a method of searching a sorted array for a single element by cutting the array in half with each recursive pass. The trick is to pick a midpoint near the center of the array, compare the data at that point with the data being searched and then respond to one of three possible conditions: the data is found at the midpoint, the data at the midpoint is greater than the data being searched for, or the data at the midpoint is less than the data being searched for. Recursion is used in this algorithm because with each pass a new array is created by cutting the old one in half. The binary search procedure is then called recursively, this time on the new (and smaller) array. Typically the array's size is adjusted by manipulating a beginning and ending index. The algorithm exhibits a logarithmic order of growth because it essentially divides the problem domain in half with each pass.
https://en.wikipedia.org/wiki/Recursion_%28computer_science%29
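A minimal recursive binary search in Python, following the passage's description of adjusting a beginning and ending index rather than copying the array; the function name and test data are illustrative.
```python
# Recursive binary search: the "new" half is described by adjusting the
# beginning and ending indices (lo, hi) instead of slicing the array.
def binary_search(data, target, lo=0, hi=None):
    if hi is None:
        hi = len(data) - 1
    if lo > hi:                      # empty range: target not present
        return -1
    mid = (lo + hi) // 2             # pick a midpoint near the center
    if data[mid] == target:          # found at the midpoint
        return mid
    elif data[mid] > target:         # search the lower half
        return binary_search(data, target, lo, mid - 1)
    else:                            # search the upper half
        return binary_search(data, target, mid + 1, hi)

print(binary_search([1, 3, 7, 15, 31, 63, 127], 31))  # -> 4
```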
passage: In mathematics, the Riemann hypothesis is the conjecture that the Riemann zeta function has its zeros only at the negative even integers and complex numbers with real part 1/2. Many consider it to be the most important unsolved problem in pure mathematics. It is of great interest in number theory because it implies results about the distribution of prime numbers. It was proposed by Bernhard Riemann in 1859, after whom it is named. The Riemann hypothesis and some of its generalizations, along with Goldbach's conjecture and the twin prime conjecture, make up Hilbert's eighth problem in David Hilbert's list of twenty-three unsolved problems; it is also one of the Millennium Prize Problems of the Clay Mathematics Institute, which offers US$1 million for a solution to any of them. The name is also used for some closely related analogues, such as the Riemann hypothesis for curves over finite fields. The Riemann zeta function ζ(s) is a function whose argument s may be any complex number other than 1, and whose values are also complex. It has zeros at the negative even integers; that is, when s is one of −2, −4, −6, .... These are called its trivial zeros. The zeta function is also zero for other values of s, which are called nontrivial zeros.
https://en.wikipedia.org/wiki/Riemann_hypothesis
passage: 1. Grushko's theorem has the consequence that if a subset B of a free group F on n elements generates F and has n elements, then B generates F freely. ## Free abelian group The free abelian group on a set S is defined via its universal property in the analogous way, with obvious modifications: Consider a pair (F, φ), where F is an abelian group and φ: S → F is a function. F is said to be the free abelian group on S with respect to φ if for any abelian group G and any function ψ: S → G, there exists a unique homomorphism f: F → G such that f(φ(s)) = ψ(s), for all s in S. The free abelian group on S can be explicitly identified as the free group F(S) modulo the subgroup generated by its commutators, [F(S), F(S)], i.e. its abelianisation. In other words, the free abelian group on S is the set of words that are distinguished only up to the order of letters. The rank of a free group can therefore also be defined as the rank of its abelianisation as a free abelian group. ## Tarski's problems Around 1945, Alfred Tarski asked whether the free groups on two or more generators have the same first-order theory, and whether this theory is decidable. Zlil Sela answered the first question by showing that any two nonabelian free groups have the same first-order theory, and Olga Kharlampovich and Alexei Myasnikov answered both questions, showing that this theory is decidable.
https://en.wikipedia.org/wiki/Free_group
passage: A standard basis $$ \{e_1, \ldots, e_n\} $$ for $$ \R^{p,q} $$ consists of $$ n = p + q $$ mutually orthogonal vectors, $$ p $$ of which square to $$ +1 $$ and $$ q $$ of which square to $$ -1 $$ . Of such a basis, the algebra $$ \operatorname{Cl}_{p,q}(\R) $$ will therefore have $$ p $$ vectors that square to $$ +1 $$ and $$ q $$ vectors that square to $$ -1 $$ . A few low-dimensional cases are: - $$ \operatorname{Cl}_{0,0}(\R) $$ is naturally isomorphic to $$ \R $$ since there are no nonzero vectors. - $$ \operatorname{Cl}_{0,1}(\R) $$ is a two-dimensional algebra generated by $$ e_1 $$ that squares to $$ -1 $$ , and is algebra-isomorphic to $$ \C $$ , the field of complex numbers. - $$ \operatorname{Cl}_{1,0}(\R) $$ is a two-dimensional algebra generated by $$ e_1 $$ that squares to $$ +1 $$ , and is algebra-isomorphic to the split-complex numbers. - $$ \operatorname{Cl}_{0,2}(\R) $$ is a four-dimensional algebra spanned by $$ \{1, e_1, e_2, e_1 e_2\} $$ . The latter three elements all square to $$ -1 $$ and anticommute, and so the algebra is isomorphic to the quaternions $$ \mathbb{H} $$ . - $$ \operatorname{Cl}_{1,1}(\R) \cong \operatorname{Cl}_{2,0}(\R) $$ is isomorphic to the algebra of split-quaternions. - $$ \operatorname{Cl}_{0,3}(\R) $$ is an 8-dimensional algebra isomorphic to the direct sum $$ \mathbb{H} \oplus \mathbb{H} $$ , the split-biquaternions. - $$ \operatorname{Cl}_{3,0}(\R) $$ , also called the Pauli algebra, is isomorphic to the algebra of biquaternions. ### Complex numbers One can also study Clifford algebras on complex vector spaces. Every nondegenerate quadratic form on a complex vector space of dimension $$ n $$ is equivalent to the standard diagonal form $$ Q(z) = z_1^2 + z_2^2 + \dots + z_n^2. $$ Thus, for each dimension $$ n $$ , up to isomorphism there is only one Clifford algebra of a complex vector space with a nondegenerate quadratic form.
https://en.wikipedia.org/wiki/Clifford_algebra
passage: ## Modes ### Visual Gestures Most animals understand communication through a visual display of distinctive body parts or bodily movements. Animals will reveal or accentuate a body part to relay certain information. The parent herring gull displays its bright yellow bill on the ground next to its chick when it has returned to the nest with food. The chicks exhibit a begging response by tapping the red spot on the lower mandible of the parent herring gull's bill. This signal stimulates the parent to regurgitate food and completes the feeding signal. The distinctive morphological feature accentuated in this communication is the parent's red-spotted bill, while the tapping towards the ground makes the red spot visible to the chick, demonstrating a distinctive movement. Frans de Waal studied bonobos and chimps to understand if language was somehow evolved by gestures. He found that both apes and humans only use intentional gestures to communicate. Facial expression Another important signal of emotion in animal communication is facial gestures. Blue and Yellow Macaws were studied to understand how they reacted to interactions with a familiar animal caretaker. Studies show that Blue and Yellow Macaws demonstrated a significant amount of blushing frequently during mutual interactions with a caretaker. In another experiment, Jeffrey Mogil studied facial expression in mice in response to increments of increasing pain. He found that mice exhibited five recognizable facial expressions: orbital tightening, nose and cheek bulge, and changes in ear and whisker carriage.
https://en.wikipedia.org/wiki/Animal_communication
passage: - Selection sort: Find the smallest (or biggest) element in the array, and put it in the proper place. Swap it with the value in the first position. Repeat until the array is sorted. - Quick sort: Partition the array into two segments. In the first segment, all elements are less than or equal to the pivot value. In the second segment, all elements are greater than or equal to the pivot value. Finally, sort the two segments recursively. - Merge sort: Divide the list of elements into two parts, sort the two parts individually, and then merge them. ### Physical Various sorting tasks are essential in industrial processes, such as mineral processing. For example, during the extraction of gold from ore, a device called a shaker table uses gravity, vibration, and flow to separate gold from lighter materials in the ore (sorting by size and weight). Sorting is also a naturally occurring process that results in the concentration of ore or sediment. Sorting results from the application of some criterion or differential stressors to a mass to separate it into its components based on some variable quality. Materials that are different, but only slightly so, such as the isotopes of uranium, are very difficult to separate. Optical sorting is an automated process of sorting solid products using cameras and/or lasers and has widespread use in the food industry. Sensor-based sorting is used in mineral processing.
https://en.wikipedia.org/wiki/Sorting
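A short merge sort sketch in Python matching the description above (split the list, sort each half recursively, then merge the sorted halves); the function name and sample list are illustrative.
```python
# Merge sort: divide, recursively sort each half, then merge in order.
def merge_sort(items):
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # -> [1, 2, 5, 5, 6, 9]
```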
passage: Typically, the emitter region is heavily doped compared to the other two layers, and the collector is doped more lightly (typically ten times lighter) than the base. By design, most of the BJT collector current is due to the flow of charge carriers injected from a heavily doped emitter into the base where they are minority carriers (electrons in NPNs, holes in PNPs) that diffuse toward the collector, so BJTs are classified as minority-carrier devices. In typical operation, the base–emitter junction is forward biased, which means that the p-doped side of the junction is at a more positive potential than the n-doped side, and the base–collector junction is reverse biased. When forward bias is applied to the base–emitter junction, the equilibrium between the thermally generated carriers and the repelling electric field of the emitter depletion region is disturbed. This allows thermally excited carriers (electrons in NPNs, holes in PNPs) to inject from the emitter into the base region. These carriers create a diffusion current through the base from the region of high concentration near the emitter toward the region of low concentration near the collector. To minimize the fraction of carriers that recombine before reaching the collector–base junction, the transistor's base region must be thin enough that carriers can diffuse across it in much less time than the semiconductor's minority-carrier lifetime. Having a lightly doped base ensures recombination rates are low.
https://en.wikipedia.org/wiki/Bipolar_junction_transistor
passage: More accurate methods that consider the Earth's ellipticity are given by Vincenty's formulae and the other formulas in the geographical distance article. ## The law of haversines Given a unit sphere, a "triangle" on the surface of the sphere is defined by the great circles connecting three points $$ u $$ , $$ v $$ , and $$ w $$ on the sphere. If the lengths of these three sides are $$ a $$ (from $$ u $$ to $$ v $$ ), $$ b $$ (from $$ u $$ to $$ w $$ ), and $$ c $$ (from $$ v $$ to $$ w $$ ), and the angle of the corner opposite $$ c $$ is $$ C $$ , then the law of haversines states: $$ \operatorname{hav}(c) = \operatorname{hav}(a - b) + \sin(a)\sin(b)\operatorname{hav}(C). $$ Since this is a unit sphere, the lengths $$ a $$ , $$ b $$ , and $$ c $$ are simply equal to the angles (in radians) subtended by those sides from the center of the sphere (for a non-unit sphere, each of these arc lengths is equal to its central angle multiplied by the radius of the sphere). In order to obtain the haversine formula of the previous section from this law, one simply considers the special case where $$ u $$ is the north pole, while $$ v $$ and $$ w $$ are the two points whose separation $$ d $$ is to be determined. In that case, $$ a $$ and $$ b $$ are $$ \tfrac{\pi}{2} - \varphi_{1,2} $$ (that is, the co-latitudes), $$ C $$ is the longitude separation $$ \Delta\lambda $$ , and $$ c $$ is the desired $$ \tfrac{d}{r} $$ . Noting that $$ \sin\left(\tfrac{\pi}{2} - \varphi\right) = \cos(\varphi) $$ , the haversine formula immediately follows.
https://en.wikipedia.org/wiki/Haversine_formula
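A haversine great-circle distance sketch in Python; the mean Earth radius of 6371 km and the sample coordinates are assumptions for illustration, not values from the passage.
```python
# Great-circle distance via the haversine formula, hav(t) = sin^2(t/2).
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2, r=6371.0):
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    h = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * r * asin(sqrt(h))     # archaversine gives the central angle

# Paris to New York, roughly 5,800 km
print(round(haversine_km(48.8566, 2.3522, 40.7128, -74.0060)))
```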
passage: ### Control systems In the domain of control systems, ANNs are used to model dynamic systems for tasks such as system identification, control design, and optimization. For instance, deep feedforward neural networks are important in system identification and control applications. ### Finance ANNs are used for stock market prediction and credit scoring: - In investing, ANNs can process vast amounts of financial data, recognize complex patterns, and forecast stock market trends, aiding investors and risk managers in making informed decisions. - In credit scoring, ANNs offer data-driven, personalized assessments of creditworthiness, improving the accuracy of default predictions and automating the lending process. ANNs require high-quality data and careful tuning, and their "black-box" nature can pose challenges in interpretation. Nevertheless, ongoing advancements suggest that ANNs continue to play a role in finance, offering valuable insights and enhancing risk management strategies. ### Medicine ANNs are able to process and analyze vast medical datasets. They enhance diagnostic accuracy, especially by interpreting complex medical imaging for early disease detection, and by predicting patient outcomes for personalized treatment planning. In drug discovery, ANNs speed up the identification of potential drug candidates and predict their efficacy and safety, significantly reducing development time and costs. Additionally, their application in personalized medicine and healthcare data analysis allows tailored therapies and efficient patient care management. Ongoing research is aimed at addressing remaining challenges such as data privacy and model interpretability, as well as expanding the scope of ANN applications in medicine.
https://en.wikipedia.org/wiki/Neural_network_%28machine_learning%29
passage: This construction is especially important when $$ f $$ is the projection of a fiber bundle onto its base space. For example, the sheaves of smooth functions are the sheaves of sections of the trivial bundle. Another example: the sheaf of sections of $$ \C \stackrel{\exp}{\longrightarrow} \C\setminus \{0\} $$ is the sheaf which assigns to any open set $$ U $$ the set of branches of the complex logarithm on $$ U $$ . Given a point $$ x $$ and an abelian group $$ S $$ , the skyscraper sheaf $$ S_x $$ is defined as follows: if $$ U $$ is an open set containing $$ x $$ , then $$ S_x(U)=S $$ . If $$ U $$ does not contain $$ x $$ , then $$ S_x(U)=0 $$ , the trivial group. The restriction maps are either the identity on $$ S $$ , if both open sets contain $$ x $$ , or the zero map otherwise. #### Sheaves on manifolds On an $$ n $$ -dimensional $$ C^k $$ -manifold $$ M $$ , there are a number of important sheaves, such as the sheaf of $$ j $$ -times continuously differentiable functions $$ \mathcal{O}^j_M $$ (with $$ j \leq k $$ ).
https://en.wikipedia.org/wiki/Sheaf_%28mathematics%29
passage: In mathematics, a Riccati equation in the narrowest sense is any first-order ordinary differential equation that is quadratic in the unknown function. In other words, it is an equation of the form $$ y'(x) = q_0(x) + q_1(x) \, y(x) + q_2(x) \, y^2(x) $$ where $$ q_0(x) \neq 0 $$ and $$ q_2(x) \neq 0 $$ . If $$ q_0(x) = 0 $$ the equation reduces to a Bernoulli equation, while if $$ q_2(x) = 0 $$ the equation becomes a first order linear ordinary differential equation. The equation is named after Jacopo Riccati (1676–1754). More generally, the term Riccati equation is used to refer to matrix equations with an analogous quadratic term, which occur in both continuous-time and discrete-time linear-quadratic-Gaussian control. The steady-state (non-dynamic) version of these is referred to as the algebraic Riccati equation. ## Conversion to a second order linear equation The non-linear Riccati equation can always be converted to a second order linear ordinary differential equation (ODE):
https://en.wikipedia.org/wiki/Riccati_equation
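A sketch of the standard substitution behind this conversion, stated here for reference with $$ u $$ an auxiliary unknown function and $$ q_2(x) \neq 0 $$ assumed as above: setting

$$ y(x) = -\frac{u'(x)}{q_2(x)\,u(x)} $$

turns the Riccati equation into the linear second-order ODE

$$ u''(x) - \left(q_1(x) + \frac{q_2'(x)}{q_2(x)}\right) u'(x) + q_0(x)\,q_2(x)\,u(x) = 0, $$

and each nonvanishing solution $$ u $$ of the linear equation yields a solution $$ y = -u'/(q_2 u) $$ of the original equation.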
passage: In 1784, Félix Vicq-d'Azyr discovered a black-colored structure in the midbrain. In 1791, Samuel Thomas von Sömmerring alluded to this structure, calling it the substantia nigra. In the same year, Luigi Galvani described the role of electricity in nerves of dissected frogs. In 1808, Franz Joseph Gall studied and published work on phrenology. Phrenology was the faulty science of looking at head shape to determine different aspects of personality and brain function. In 1811, Julien Jean César Legallois studied respiration in animal dissection and lesions and found the center of respiration in the medulla oblongata. In the same year, Charles Bell finished work on what would later become known as the Bell–Magendie law, which compared functional differences between dorsal and ventral roots of the spinal cord. In 1822, Karl Friedrich Burdach distinguished between the lateral and medial geniculate bodies, as well as named the cingulate gyrus. In 1824, F. Magendie studied and produced the first evidence of the cerebellum's role in equilibration to complete the Bell–Magendie law. In 1838, Theodor Schwann began studying white and grey matter in the brain, and discovered the myelin sheath. These cells, which cover the axons of the neurons in the brain, are named Schwann cells after him. In 1843, Carlo Matteucci and Emil du Bois-Reymond demonstrated that nerves transmit signals electrically.
https://en.wikipedia.org/wiki/Neurophysiology
passage: Common Lisp, on the other hand, also provides the dedicated `boolean` type, derived as a specialization of the symbol. ### Pascal, Ada, and Haskell The language Pascal (1970) popularized the concept of programmer-defined enumerated types, previously available with different nomenclature in COBOL, FACT and JOVIAL. A built-in `Boolean` data type was then provided as a predefined enumerated type with values `FALSE` and `TRUE`. By definition, all comparisons, logical operations, and conditional statements applied to and/or yielded `Boolean` values. Otherwise, the `Boolean` type had all the facilities which were available for enumerated types in general, such as ordering and use as indices. In contrast, converting between `Boolean`s and integers (or any other types) still required explicit tests or function calls, as in ALGOL 60. This approach (Boolean is an enumerated type) was adopted by most later languages which had enumerated types, such as Modula, Ada, and Haskell. ### Perl and Lua Perl has no Boolean data type. Instead, any value can behave as Boolean in Boolean context (condition of `if` or `while` statement, argument of `&&` or `||`, etc.). The number `0`, the strings `"0"` and `""`, the empty list `()`, and the special value `undef` evaluate to false. All else evaluates to true.
https://en.wikipedia.org/wiki/Boolean_data_type
passage: If $$ Y $$ is a Hausdorff space and $$ S $$ is a dense subset of $$ X $$ then a continuous extension of $$ f : S \to Y $$ to $$ X, $$ if one exists, will be unique. The Blumberg theorem states that if $$ f : \R \to \R $$ is an arbitrary function then there exists a dense subset $$ D $$ of $$ \R $$ such that the restriction $$ f\big\vert_D : D \to \R $$ is continuous; in other words, every function $$ \R \to \R $$ can be restricted to some dense subset on which it is continuous. Various other mathematical domains use the concept of continuity in different but related meanings. For example, in order theory, an order-preserving function $$ f : X \to Y $$ between particular types of partially ordered sets $$ X $$ and $$ Y $$ is continuous if for each directed subset $$ A $$ of $$ X, $$ we have $$ \sup f(A) = f(\sup A). $$ Here $$ \,\sup\, $$ is the supremum with respect to the orderings in $$ X $$ and $$ Y, $$ respectively. This notion of continuity is the same as topological continuity when the partially ordered sets are given the Scott topology.
https://en.wikipedia.org/wiki/Continuous_function
passage: and $$ \eta\equiv 1 $$ are fixed points for the evolution. (2) indicates that the evolution is unchanged by interchanging the roles of 0's and 1's. In property (3), $$ \eta\leq \zeta $$ means $$ \forall x,\eta(x)\leq\zeta(x) $$ , and $$ \eta \leq \zeta $$ implies $$ c(x,\eta)\leq c(x,\zeta) $$ if $$ \eta(x)=\zeta(x)=0 $$ , and implies $$ c(x,\eta)\geq c(x,\zeta) $$ if $$ \eta(x)=\zeta(x)=1 $$ . ### Clustering and coexistence The interest is in the limiting behavior of the models. Since the flip rates of a site depend on its neighbours, it is obvious that when all sites take the same value, the whole system stops changing forever. Therefore, a voter model has two trivial extremal stationary distributions, the point-masses $$ \scriptstyle \delta_0 $$ and $$ \scriptstyle \delta_1 $$ on $$ \scriptstyle \eta \equiv 0 $$ or $$ \scriptstyle \eta\equiv 1 $$ respectively, which represent consensus. The main question to be discussed is whether or not there are others, which would then represent coexistence of different opinions in equilibrium.
https://en.wikipedia.org/wiki/Voter_model
passage: Heapsort's primary advantages are its simple, non-recursive code, minimal auxiliary storage requirement, and reliably good performance: its best and worst cases are within a small constant factor of each other, and of the theoretical lower bound on comparison sorts. While it cannot do better than O(n log n) for pre-sorted inputs, it does not suffer from quicksort's worst case, either. Real-world quicksort implementations use a variety of heuristics to avoid the worst case, but that makes their implementation far more complex, and implementations such as introsort and pattern-defeating quicksort use heapsort as a last-resort fallback if they detect degenerate behaviour. Thus, their worst-case performance is slightly worse than if heapsort had been used from the beginning. Heapsort's primary disadvantages are its poor locality of reference and its inherently serial nature; the accesses to the implicit tree are widely scattered and mostly random, and there is no straightforward way to convert it to a parallel algorithm. The worst-case performance guarantees make heapsort popular in real-time computing, and systems concerned with maliciously chosen inputs such as the Linux kernel. The combination of small implementation and dependably "good enough" performance makes it popular in embedded systems, and generally any application where sorting is not a performance bottleneck. For example, heapsort is ideal for sorting a list of filenames for display, but a database management system would probably want a more aggressively optimized sorting algorithm. A well-implemented quicksort is usually 2–3 times faster than heapsort.
https://en.wikipedia.org/wiki/Heapsort
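A compact in-place heapsort sketch in Python, using the implicit binary tree and a sift-down routine; the names and the sample input are illustrative.
```python
# Heapsort: build a max-heap in the array, then repeatedly swap the root with
# the last unsorted element and sift the new root down.
def heapsort(a):
    n = len(a)

    def sift_down(root, end):
        while True:
            child = 2 * root + 1                 # left child in the implicit tree
            if child >= end:
                return
            if child + 1 < end and a[child + 1] > a[child]:
                child += 1                       # pick the larger child
            if a[root] >= a[child]:
                return
            a[root], a[child] = a[child], a[root]
            root = child

    for start in range(n // 2 - 1, -1, -1):      # heapify
        sift_down(start, n)
    for end in range(n - 1, 0, -1):              # repeatedly extract the maximum
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)
    return a

print(heapsort([5, 1, 8, 3, 9, 2]))  # -> [1, 2, 3, 5, 8, 9]
```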
passage: ### Dual basis Let $$ \{ e_1 , \ldots , e_n \} $$ be a basis of $$ V $$ , i.e. a set of $$ n $$ linearly independent vectors that span the $$ n $$ -dimensional vector space $$ V $$ . The basis that is dual to $$ \{ e_1 , \ldots , e_n \} $$ is the set of elements of the dual vector space $$ V^{*} $$ that forms a biorthogonal system with this basis, thus being the elements denoted $$ \{ e^1 , \ldots , e^n \} $$ satisfying $$ e^i \cdot e_j = \delta^i{}_j, $$ where $$ \delta $$ is the Kronecker delta. Given a nondegenerate quadratic form on $$ V $$ , $$ V^{*} $$ becomes naturally identified with $$ V $$ , and the dual basis may be regarded as elements of $$ V $$ , but are not in general the same set as the original basis. Given further a GA of $$ V $$ , let $$ I = e_1 \wedge \cdots \wedge e_n $$ be the pseudoscalar (which does not necessarily square to $$ \pm 1 $$ ) formed from the basis $$ \{ e_1 , \ldots , e_n \} $$ .
https://en.wikipedia.org/wiki/Geometric_algebra
passage: $$ subject to the rules $$ \begin{alignat}{6} a \cdot (\mathbf{v} \otimes \mathbf{w}) ~&=~ (a \cdot \mathbf{v}) \otimes \mathbf{w} ~=~ \mathbf{v} \otimes (a \cdot \mathbf{w}), && ~~\text{ where } a \text{ is a scalar} \\ (\mathbf{v}_1 + \mathbf{v}_2) \otimes \mathbf{w} ~&=~ \mathbf{v}_1 \otimes \mathbf{w} + \mathbf{v}_2 \otimes \mathbf{w} && \\ \mathbf{v} \otimes (\mathbf{w}_1 + \mathbf{w}_2) ~&=~ \mathbf{v} \otimes \mathbf{w}_1 + \mathbf{v} \otimes \mathbf{w}_2. && \\ \end{alignat} $$ These rules ensure that the map $$ f $$ from the $$ V \times W $$ to $$ V \otimes W $$ that maps a tuple $$ (\mathbf{v}, \mathbf{w}) $$ to $$ \mathbf{v} \otimes \mathbf{w} $$ is bilinear.
https://en.wikipedia.org/wiki/Vector_space
passage: It is desirable to have a weaker condition from which to deduce extendability. For example, suppose $$ a > 1 $$ is a real number. At the precalculus level, the function $$ f: x \mapsto a^x $$ can be given a precise definition only for rational values of $$ x $$ (assuming the existence of qth roots of positive real numbers, an application of the Intermediate Value Theorem). One would like to extend $$ f $$ to a function defined on all of $$ R $$ . The identity $$ f(x+\delta)-f(x) = a^x\left(a^{\delta} - 1\right) $$ shows that $$ f $$ is not uniformly continuous on the set of all rational numbers; however for any bounded interval $$ I $$ the restriction of $$ f $$ to $$ Q \cap I $$ is uniformly continuous, hence Cauchy-continuous, hence $$ f $$ extends to a continuous function on $$ I $$ . But since this holds for every $$ I $$ , there is then a unique extension of $$ f $$ to a continuous function on all of $$ R $$ . More generally, a continuous function $$ f: S \rightarrow R $$ whose restriction to every bounded subset of $$ S $$ is uniformly continuous is extendable to $$ \overline{S} $$ , and the converse holds if $$ S $$ is locally compact. A typical application of the extendability of a uniformly continuous function is the proof of the inverse Fourier transformation formula. We first prove that the formula is true for test functions; there are densely many of them.
https://en.wikipedia.org/wiki/Uniform_continuity
passage: ## Interpreting formal power series as functions In mathematical analysis, every convergent power series defines a function with values in the real or complex numbers. Formal power series over certain special rings can also be interpreted as functions, but one has to be careful with the domain and codomain. Let $$ f = \sum a_n X^n \in R[[X]], $$ and suppose $$ S $$ is a commutative associative algebra over $$ R $$ , $$ I $$ is an ideal in $$ S $$ such that the I-adic topology on $$ S $$ is complete, and $$ x $$ is an element of $$ I $$ . Define: $$ f(x) = \sum_{n\ge 0} a_n x^n. $$ This series is guaranteed to converge in $$ S $$ given the above assumptions on $$ x $$ . Furthermore, we have $$ (f+g)(x) = f(x) + g(x) $$ and $$ (fg)(x) = f(x) g(x). $$ Unlike in the case of bona fide functions, these formulas are not definitions but have to be proved.
https://en.wikipedia.org/wiki/Formal_power_series
passage: Component Pascal strong static Cool strong explicit static CORAL strong static Crystal structural static Cuneiform explicit static Curl strong nominal Curry strong static Cython strong nominal (extension types) and structural (Python) D weakIt is almost safe, unsafe features are not commonly used. explicit nominal static Dart strong gradual typing nominal Dylan strong dynamic Eiffel strong nominal static Elixir strong implicit dynamic Erlang strong implicit dynamic Euphoria strong explicit, implicit with objects nominal static, dynamic with objects F# strong implicit nominal static Forth typeless Fortran strong explicitOptionally, typing can be explicitly implied by the first letter of the identifier (known as implicit typing within the Fortran community). nominal static Gambas strong explicit nominal GLBasic strong explicit. Non-explicit declarations available through project options nominal static Gleam strong nominal static GoThe Go Programming Language Specification strong structural static Gosu strong nominal (subclassing) and structural static Groovy strong Harbour strong dynamic Haskell strong nominal static Haxe strong nominal (subclassing) and structural Io strong implicit dynamic icon strong implicit dynamic ISLISP strong dynamic J strong dynamic Java strongSheng Liang, Gilad Bracha. Dynamic class loading in the Java virtual machine. Volume 33, Issue 10 of ACM SIGPLAN Notices, October 1998.
https://en.wikipedia.org/wiki/Comparison_of_programming_languages_by_type_system
passage: Elements that contain the same number of valence electrons can be grouped together and display similar chemical properties. ### Relativistic effects For elements with high atomic number $$ Z $$ , the effects of relativity become more pronounced, and especially so for s electrons, which move at relativistic velocities as they penetrate the screening electrons near the core of high-Z atoms. This relativistic increase in momentum for high-speed electrons causes a corresponding decrease in wavelength and contraction of 6s orbitals relative to 5d orbitals (by comparison to corresponding s and d electrons in lighter elements in the same column of the periodic table); this results in 6s valence electrons becoming lowered in energy. Examples of significant physical outcomes of this effect include the lowered melting temperature of mercury (which results from 6s electrons not being available for metal bonding) and the golden color of gold and caesium. In the Bohr model, an $$ n = 1 $$ electron has a velocity given by $$ v = Z \alpha c $$ , where $$ Z $$ is the atomic number, $$ \alpha $$ is the fine-structure constant, and $$ c $$ is the speed of light. In non-relativistic quantum mechanics, therefore, any atom with an atomic number greater than 137 would require its 1s electrons to be traveling faster than the speed of light. Even in the Dirac equation, which accounts for relativistic effects, the wave function of the electron for atoms with $$ Z > 137 $$ is oscillatory and unbounded. The significance of element 137, also known as untriseptium, was first pointed out by the physicist Richard Feynman.
https://en.wikipedia.org/wiki/Atomic_orbital
passage: Normal distributions form an exponential family with natural parameters $$ \textstyle\theta_1=\frac{\mu}{\sigma^2} $$ and $$ \textstyle\theta_2=\frac{-1}{2\sigma^2} $$ , and natural statistics x and x². The dual expectation parameters for the normal distribution are $$ \mu $$ and $$ \mu^2 + \sigma^2 $$ . ### Cumulative distribution function The cumulative distribution function (CDF) of the standard normal distribution, usually denoted with the capital Greek letter $$ \Phi $$ , is the integral $$ \Phi(x) = \frac 1 {\sqrt{2\pi}} \int_{-\infty}^x e^{-t^2/2} \, dt\,. $$ ### Error function The related error function $$ \operatorname{erf}(x) $$ gives the probability of a random variable, with normal distribution of mean 0 and variance 1/2, falling in the range $$ [-x, x] $$ . That is: $$ \operatorname{erf}(x) = \frac 1 {\sqrt\pi} \int_{-x}^x e^{-t^2} \, dt = \frac 2 {\sqrt\pi} \int_0^x e^{-t^2} \, dt\,. $$ These integrals cannot be expressed in terms of elementary functions, and are often said to be special functions. However, many numerical approximations are known; see below for more.
https://en.wikipedia.org/wiki/Normal_distribution
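A small numerical check, using only Python's standard library, of the relationship Φ(x) = (1 + erf(x/√2))/2 that links the two integrals above; the sample points are arbitrary.
```python
# Standard normal CDF expressed through the error function.
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

for x in (0.0, 1.0, 1.96):
    print(f"Phi({x}) = {phi(x):.4f}")
# Phi(0.0) = 0.5000, Phi(1.0) = 0.8413, Phi(1.96) = 0.9750
```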
passage: Encode the persistent homology of a data set in the form of a parameterized version of a Betti number, which is called a barcode. Several branches of programming language semantics, such as domain theory, are formalized using topology. In this context, Steve Vickers, building on work by Samson Abramsky and Michael B. Smyth, characterizes topological spaces as Boolean or Heyting algebras over open sets, which are characterized as semidecidable (equivalently, finitely observable) properties. ### Physics Topology is relevant to physics in areas such as condensed matter physics, quantum field theory, quantum computing and physical cosmology. The topological dependence of mechanical properties in solids is of interest in the disciplines of mechanical engineering and materials science. Electrical and mechanical properties depend on the arrangement and network structures of molecules and elementary units in materials. The compressive strength of crumpled topologies is studied in attempts to understand the high strength to weight of such structures that are mostly empty space. Topology is of further significance in Contact mechanics where the dependence of stiffness and friction on the dimensionality of surface structures is the subject of interest with applications in multi-body physics. A topological quantum field theory (or topological field theory or TQFT) is a quantum field theory that computes topological invariants.
https://en.wikipedia.org/wiki/Topology
passage: The same is true for quotients such as the exterior and symmetric algebras. Categorically speaking, the functor that maps an R-module to its tensor algebra is left adjoint to the functor that sends an R-algebra to its underlying R-module (forgetting the multiplicative structure). - Given a module M over a commutative ring R, the direct sum of modules $$ R \oplus M $$ has a structure of an R-algebra by thinking of M as consisting of infinitesimal elements; i.e., the multiplication is given as $$ (a, x)(b, y) = (ab, ay + bx) $$ . The notion is sometimes called the algebra of dual numbers. - A quasi-free algebra, introduced by Cuntz and Quillen, is a sort of generalization of a free algebra and a semisimple algebra over an algebraically closed field. ### Representation theory - The universal enveloping algebra of a Lie algebra is an associative algebra that can be used to study the given Lie algebra. - If G is a group and R is a commutative ring, the set of all functions from G to R with finite support form an R-algebra with the convolution as multiplication. It is called the group algebra of G. The construction is the starting point for the application to the study of (discrete) groups.
https://en.wikipedia.org/wiki/Associative_algebra
passage: Special types provided by the language are task types and protected types. For example, a date might be represented as:
```ada
type Day_type   is range 1 .. 31;
type Month_type is range 1 .. 12;
type Year_type  is range 1800 .. 2100;
type Hours is mod 24;
type Weekday is (Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday);

type Date is
   record
      Day   : Day_type;
      Month : Month_type;
      Year  : Year_type;
   end record;
```
Important to note: Day_type, Month_type, Year_type, Hours are incompatible types, meaning that for instance the following expression is illegal:
```ada
Today: Day_type := 4;
Current_Month: Month_type := 10;
... Today + Current_Month ... -- illegal
```
The predefined plus-operator can only add values of the same type, so the expression is illegal. Types can be refined by declaring subtypes:
```ada
subtype Working_Hours is Hours range 0 .. 12;            -- at most 12 Hours to work a day
subtype Working_Day is Weekday range Monday .. Friday;   -- Days to work

Work_Load: constant array(Working_Day) of Working_Hours  -- implicit type declaration
   := (Friday => 6, Monday => 4, others => 10);          -- lookup table for working hours with initialization
```
Types can have modifiers such as limited, abstract, private etc. Private types do not show their inner structure; objects of limited types cannot be copied. Ada 95 adds further features for object-oriented extension of types. ### Control structures
https://en.wikipedia.org/wiki/Ada_%28programming_language%29
passage: ### Occlusion Determining what should be displayed on the screen and what should be omitted is a multi-step process utilising various techniques. Using a z-buffer is the final step in this process. Each time an object is rendered into the framebuffer the z-buffer is used to compare the z-values of the fragments with the z-value already in the z-buffer (i.e., check what is closer); if the new z-value is closer than the old value, the fragment is written into the framebuffer and this new closer value is written into the z-buffer. If the z-value is further away than the value in the z-buffer, the fragment is discarded. This is repeated for all objects and surfaces in the scene (often in parallel). In the end, the z-buffer will allow correct reproduction of the usual depth perception: a close object hides one further away. This is called z-culling. The granularity of a z-buffer has a great influence on the scene quality: the traditional 16-bit z-buffer can result in artifacts (called "z-fighting" or stitching) when two objects are very close to each other. A more modern 24-bit or 32-bit z-buffer behaves much better, although the problem cannot be eliminated without additional algorithms. An 8-bit z-buffer is almost never used since it has too little precision.
https://en.wikipedia.org/wiki/Z-buffering
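A toy Python sketch of the per-fragment depth test described above, assuming the common (but not universal) convention that a smaller z means closer to the camera; buffer sizes and labels are illustrative, and real renderers perform this test in GPU hardware.
```python
# Per-pixel depth comparison against a z-buffer ("z-culling").
def draw_fragment(x, y, z, color, zbuffer, framebuffer):
    if z < zbuffer[y][x]:          # new fragment is closer: keep it
        zbuffer[y][x] = z
        framebuffer[y][x] = color
    # otherwise the fragment is hidden and is discarded

W, H = 4, 3
INF = float("inf")
zbuffer = [[INF] * W for _ in range(H)]
framebuffer = [[None] * W for _ in range(H)]

draw_fragment(1, 1, 0.8, "far object", zbuffer, framebuffer)
draw_fragment(1, 1, 0.3, "near object", zbuffer, framebuffer)
draw_fragment(1, 1, 0.9, "even farther", zbuffer, framebuffer)   # discarded
print(framebuffer[1][1])   # -> "near object"
```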
passage: In statistical software R, the cumulative distribution function is implemented as pt. ### Probability density function The probability density function (pdf) for the noncentral t-distribution with ν > 0 degrees of freedom and noncentrality parameter μ can be expressed in several forms. The confluent hypergeometric function form of the density function is $$ f(x) = \underbrace{\frac{\Gamma( \frac{\nu+1}{2} ) }{\sqrt{\nu \pi} \Gamma( \frac{\nu}{2} )} \left (1+ \frac{x^{2}}{\nu} \right )^{-\tfrac{\nu+1}{2}}}_{\text{StudentT}(x \, ; \, \mu=0)} \exp \big ( - \tfrac{\mu^{2}}{2} \big ) \Big \{ A_{\nu}(x \, ; \, \mu) + B_{\nu}(x \, ; \, \mu) \Big \}, $$ where $$ \begin{align}
https://en.wikipedia.org/wiki/Noncentral_t-distribution
passage: (Note: The symbol $$ \mathbb{R} $$ indicates the set of real numbers, and the notation $$ \mathbb{R}^N $$ refers to the Cartesian product of $$ N $$ copies of $$ \mathbb{R} $$ , which is an $$ N $$ -dimensional vector space over the field of the real numbers.) There are various approaches to determining the vectors $$ x_i $$ . Usually, MDS is formulated as an optimization problem, where $$ (x_1,\ldots,x_M) $$ is found as a minimizer of some cost function, for example, $$ \underset{x_1,\ldots,x_M}{\mathrm{argmin}} \sum_{i<j} ( \|x_i - x_j\| - d_{i,j} )^2. \, $$ A solution may then be found by numerical optimization techniques. For some particularly chosen cost functions, minimizers can be stated analytically in terms of matrix eigendecompositions. ## Procedure There are several steps in conducting MDS research: 1. Formulating the problem – What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for? 2. Obtaining input data – For example, respondents are asked a series of questions. For each product pair, they are asked to rate similarity (usually on a 7-point Likert scale from very similar to very dissimilar).
https://en.wikipedia.org/wiki/Multidimensional_scaling
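A sketch of the analytic special case mentioned above, classical (Torgerson) MDS obtained from a matrix eigendecomposition, assuming NumPy; the toy distance matrix is invented for illustration, and the general stress-minimization problem would still require an iterative solver.
```python
# Classical MDS: double-centre the squared distances, then read coordinates
# off the top eigenpairs of the resulting matrix.
import numpy as np

def classical_mds(D, k=2):
    """D: (M, M) matrix of pairwise distances d_ij; returns (M, k) coordinates."""
    M = D.shape[0]
    J = np.eye(M) - np.ones((M, M)) / M        # centring matrix
    B = -0.5 * J @ (D ** 2) @ J                # double centring
    eigvals, eigvecs = np.linalg.eigh(B)
    idx = np.argsort(eigvals)[::-1][:k]        # k largest eigenvalues
    L = np.sqrt(np.clip(eigvals[idx], 0, None))
    return eigvecs[:, idx] * L                 # rows are the x_i in R^k

# Points on a line are recovered (up to centring and reflection) from distances.
X = np.array([[0.0], [1.0], [3.0], [7.0]])
D = np.abs(X - X.T)
print(np.round(classical_mds(D, k=1), 2))
```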
passage: ## History The four-vertex theorem was first proved for convex curves (i.e. curves with strictly positive curvature) in 1909 by Syamadas Mukhopadhyaya. His proof utilizes the fact that a point on the curve is an extremum of the curvature function if and only if the osculating circle at that point has fourth-order contact with the curve; in general the osculating circle has only third-order contact with the curve. The four-vertex theorem was proved for more general curves by Adolf Kneser in 1912 using a projective argument. ## Proof For many years the proof of the four-vertex theorem remained difficult, but a simple and conceptual proof was eventually given, based on the idea of the minimum enclosing circle. This is a circle that contains the given curve and has the smallest possible radius. If the curve includes an arc of the circle, it has infinitely many vertices. Otherwise, the curve and circle must be tangent at at least two points, because a circle that touched the curve at fewer points could be reduced in size while still enclosing it. At each tangency, the curvature of the curve is greater than that of the circle, for otherwise the curve would continue from the tangency outside the circle rather than inside.
https://en.wikipedia.org/wiki/Four-vertex_theorem
passage: case Bar : instructions } }` `function __get($property) { switch ($property) { case Bar : instructions ... return value; } }` `function __ set($property, $value) { switch ($property) { case Bar : instructions } }` Perl `sub Bar { my $self = shift; if (my $Bar = shift) { # setter $self->{Bar} = $Bar; return $self; } else { # getter return $self->{Bar}; }}` `sub Bar { my $self = shift; if (my $Bar = shift) { # read-only die "Bar is read-only\n"; } else { # getter return $self->{Bar}; }}` `sub Bar { my $self = shift; if (my $Bar = shift) { # setter $self->{Bar} = $Bar; return $self; } else { # write-only die "Bar is write-only\n"; }}` Raku colspan=3 Ruby `def bar instructions expression resulting in return value end def bar=(value) instructions end` `def bar instructions expression resulting in return value end` `def bar=(value) instructions end` Windows PowerShell `Add-Member «-MemberType »ScriptProperty «-Name »Bar «-Value »{ instructions ... return value } «-SecondValue »{ instructions } -InputObject variable` `Add-Member «-MemberType »ScriptProperty «-Name »Bar «-Value »{ instructions ... return value} -InputObject variable` `Add-Member «-MemberType »ScriptProperty «-Name »Bar -SecondValue { instructions } -InputObject variable` OCaml colspan=3 F# `member this.
https://en.wikipedia.org/wiki/Comparison_of_programming_languages_%28object-oriented_programming%29
passage: Receptive field size and location varies systematically across the cortex to form a complete map of visual space. The cortex in each hemisphere represents the contralateral visual field. Their 1968 paper identified two basic visual cell types in the brain: - simple cells, whose output is maximized by straight edges having particular orientations within their receptive field - complex cells, which have larger receptive fields, whose output is insensitive to the exact position of the edges in the field. Hubel and Wiesel also proposed a cascading model of these two types of cells for use in pattern recognition tasks. ### Fukushima's analog threshold elements in a vision model In 1969, Kunihiko Fukushima introduced a multilayer visual feature detection network, inspired by the above-mentioned work of Hubel and Wiesel, in which "All the elements in one layer have the same set of interconnecting coefficients; the arrangement of the elements and their interconnections are all homogeneous over a given layer." This is the essential core of a convolutional network, but the weights were not trained. In the same paper, Fukushima also introduced the ReLU (rectified linear unit) activation function. ### Neocognitron, origin of the trainable CNN architecture The "neocognitron" was introduced by Fukushima in 1980. The neocognitron introduced the two basic types of layers: - "S-layer": a shared-weights receptive-field layer, later known as a convolutional layer, which contains units whose receptive fields cover a patch of the previous layer.
https://en.wikipedia.org/wiki/Convolutional_neural_network
passage: Many flowers lack some parts—known as incomplete—or parts may be modified into other functions or look like what is typically another part. In some flowers, organs such as stamens, stigmas, and sepals are modified to resemble petals. This is most common in cultivation (such as of roses), where flowers with many additional "petals"—termed double flowers—are more attractive. Most flowers have symmetry. When the perianth is bisected through the central axis from any point and symmetrical halves are produced (as in sedges), the flower is said to be actinomorphic or regular. This is an example of radial symmetry. If there is only one plane of symmetry (as in orchids), the flower is said to be irregular or zygomorphic. If, in very rare cases, they have no symmetry at all they are called asymmetric. Floral symmetry is a key driver of diversity in flower morphology, because it is one of the main features derived through flower-plant coevolution. Zygomorphic flowers often coevolve with specific pollinators, while radially symmetric flowers tend to attract a wider range of pollinators. Floral symmetry also assists in heat retention, which is required for the growth and effective performance of the floral organs. Flowers may be directly attached to the plant at their base (sessile—the supporting stalk or stem is highly reduced or absent). There are several structures, found in some plants, that resemble flowers or floral organs.
https://en.wikipedia.org/wiki/Flower
passage:
| Class | Wear rate | Usage level |
|---|---|---|
| 0 | 10⁻¹³ – 10⁻¹² | Moderate |
| 1 | 10⁻¹² – 10⁻¹¹ | Moderate |
| 2 | 10⁻¹¹ – 10⁻¹⁰ | Moderate |
| 3 | 10⁻¹⁰ – 10⁻⁹ | Medium |
| 4 | 10⁻⁹ – 10⁻⁸ | Medium |
| 5 | 10⁻⁸ – 10⁻⁷ | Medium |
| 6 | 10⁻⁷ – 10⁻⁶ | Medium |
| 7 | 10⁻⁶ – 10⁻⁵ | Severe |
| 8 | 10⁻⁵ – 10⁻⁴ | Severe |
| 9 | 10⁻⁴ – 10⁻³ | Severe |

Instead, to express the volume of wear V it is possible to use the Holm equation - $$ V = k {W \over H} l $$ (for adhesive wear) - $$ V = k_{a} {W \over H} l $$ (for abrasive wear) where W / H represents the real contact area, l the length of the distance traveled and k and $$ k_{a} $$ are experimental dimensional factors. #### Wear measurement In experimental measurements of material wear, it is often necessary to recreate fairly small wear rates and to accelerate times. The phenomena, which in reality develop after years, in the laboratory must occur after a few days. A first evaluation of the wear processes is a visual inspection of the superficial profile of the body in the study, including a comparison before and after the occurrence of the wear phenomenon. In this first analysis the possible variations of the hardness and of the superficial geometry of the material are observed. Another method of investigation is that of the radioactive tracer, used to evaluate wear at macroscopic levels. One of the two materials in contact, involved in a wear process, is marked with a radioactive tracer.
https://en.wikipedia.org/wiki/Tribology
passage: Alternatively, one can infer several single-gene trees and combine them into a "supertree". With the advent of phylogenomics, hundreds of genes may be analyzed at once. ## Distance-matrix methods Distance-matrix methods of phylogenetic analysis explicitly rely on a measure of "genetic distance" between the sequences being classified, and therefore, they require an MSA as an input. Distance is often defined as the fraction of mismatches at aligned positions, with gaps either ignored or counted as mismatches. Distance methods attempt to construct an all-to-all matrix from the sequence query set describing the distance between each sequence pair. From this is constructed a phylogenetic tree that places closely related sequences under the same interior node and whose branch lengths closely reproduce the observed distances between sequences. Distance-matrix methods may produce either rooted or unrooted trees, depending on the algorithm used to calculate them. They are frequently used as the basis for progressive and iterative types of multiple sequence alignments. The main disadvantage of distance-matrix methods is their inability to efficiently use information about local high-variation regions that appear across multiple subtrees.
https://en.wikipedia.org/wiki/Computational_phylogenetics
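A small Python sketch of the distance measure described above, the fraction of mismatches at aligned positions with gapped columns ignored; the toy alignment and taxon names are invented for illustration, and the input is assumed to be a pre-computed MSA of equal-length strings.
```python
# Pairwise p-distance matrix from a multiple sequence alignment.
def p_distance(seq_a, seq_b, gap="-"):
    compared = mismatches = 0
    for a, b in zip(seq_a, seq_b):
        if a == gap or b == gap:        # ignore gapped positions
            continue
        compared += 1
        mismatches += (a != b)
    return mismatches / compared if compared else 0.0

msa = {"taxon1": "ACGT-ACGT", "taxon2": "ACGTTACGA", "taxon3": "ACCT-ACGT"}
names = list(msa)
matrix = [[p_distance(msa[i], msa[j]) for j in names] for i in names]
for name, row in zip(names, matrix):
    print(name, [round(d, 2) for d in row])
```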
passage: Objects close to the lens appear abnormally large relative to more distant objects, and distant objects appear abnormally small and hence farther away – distances are extended. Compression, long-lens, or telephoto distortion can be seen in images shot from a distance using a long focus lens or the more common telephoto sub-type (with an angle of view narrower than a normal lens). Distant objects look approximately the same size – closer objects are abnormally small, and more distant objects are abnormally large, and hence the viewer cannot discern relative distances between distant objects – distances are compressed. Note that linear perspective changes are caused by distance, not by the lens per se – two shots of the same scene from the same distance will exhibit identical perspective geometry, regardless of lens used. However, since wide-angle lenses have a wider field of view, they are generally used from closer, while telephoto lenses have a narrower field of view and are generally used from farther away. For example, if standing at a distance so that a normal lens captures someone's face, a shot with a wide-angle lens or telephoto lens from the same distance will have exactly the same linear perspective geometry on the face, though the wide-angle lens may fit the entire body into the shot, while the telephoto lens captures only the nose. However, crops of these three images with the same coverage will yield the same perspective distortion – the nose will look the same in all three.
https://en.wikipedia.org/wiki/Perspective_distortion
passage: When making a conic map, the map maker arbitrarily picks two standard parallels. Those standard parallels may be visualized as secant lines where the cone intersects the globe—or, if the map maker chooses the same parallel twice, as the tangent line where the cone is tangent to the globe. The resulting conic map has low distortion in scale, shape, and area near those standard parallels. Distances along the parallels to the north of both standard parallels or to the south of both standard parallels are stretched; distances along parallels between the standard parallels are compressed. When a single standard parallel is used, distances along all other parallels are stretched. Conic projections that are commonly used are: - ### Equidistant conic, which keeps parallels evenly spaced along the meridians to preserve a constant distance scale along each meridian, typically the same or similar scale as along the standard parallels. - Albers conic, which adjusts the north-south distance between non-standard parallels to compensate for the east-west stretching or compression, giving an equal-area map. - Lambert conformal conic, which adjusts the north-south distance between non-standard parallels to equal the east-west stretching, giving a conformal map. ### Pseudoconic - Bonne, an equal-area projection on which most meridians and parallels appear as curved lines.
https://en.wikipedia.org/wiki/Map_projection
passage: Improvements in technology have drastically decreased error rates, but false accusations are still frequent enough to be a problem. Perhaps the best known incident involving the abuse of an ANPR database in North America is the case of Edmonton Sun reporter Kerry Diotte in 2004. Diotte wrote an article critical of Edmonton police use of traffic cameras for revenue enhancement, and in retaliation was added to an ANPR database of "high-risk drivers" in an attempt to monitor his habits and create an opportunity to arrest him. The police chief and several officers were fired as a result, and The Office of the Privacy Commissioner of Canada expressed public concern over the "growing police use of technology to spy on motorists." Other concerns include the storage of information that could be used to identify people and store details about their driving habits and daily life, contravening the Data Protection Act along with similar legislation (see personally identifiable information). The laws in the UK are strict for any system that uses CCTV footage and can identify individuals. Also of concern is the safety of the data once it is mined, following the discovery of police surveillance records lost in a gutter. There is also a case in the UK for saying that use of ANPR cameras is unlawful under the Regulation of Investigatory Powers Act 2000. The breach exists, some say, in the fact that ANPR is used to monitor the activities of law-abiding citizens and treats everyone like the suspected criminals intended to be surveyed under the Act. The police themselves have been known to refer to the system of ANPR as a "24/7 traffic movement database" which is a diversion from its intended purpose of identifying vehicles involved in criminal activities.
https://en.wikipedia.org/wiki/Automatic_number-plate_recognition
passage: The first problem is more general because if we knew the coefficients of $$ P(G, x) $$ we could evaluate it at any point in polynomial time because the degree is n. The difficulty of the second type of problem depends strongly on the value of x and has been intensively studied in computational complexity. When x is a natural number, this problem is normally viewed as computing the number of x-colorings of a given graph. For example, this includes the problem #3-coloring of counting the number of 3-colorings, a canonical problem in the study of complexity of counting, complete for the counting class #P. ### Efficient algorithms For some basic graph classes, closed formulas for the chromatic polynomial are known. For instance this is true for trees and cliques, as listed in the table above. Polynomial time algorithms are known for computing the chromatic polynomial for wider classes of graphs, including chordal graphs and graphs of bounded clique-width. The latter class includes cographs and graphs of bounded tree-width, such as outerplanar graphs. ### Deletion–contraction The deletion-contraction recurrence gives a way of computing the chromatic polynomial, called the deletion–contraction algorithm. In the first form (with a minus), the recurrence terminates in a collection of empty graphs. In the second form (with a plus), it terminates in a collection of complete graphs. This forms the basis of many algorithms for graph coloring.
https://en.wikipedia.org/wiki/Chromatic_polynomial
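A direct Python sketch of the minus form of the deletion–contraction recurrence, which terminates in edgeless graphs as described above; it evaluates P(G, k) at an integer k, runs in exponential time, and the graph representation (vertex set plus edge set) is an illustrative choice rather than anything prescribed by the article.
```python
# Deletion-contraction: P(G, k) = P(G - e, k) - P(G / e, k),
# with base case P(edgeless graph on n vertices, k) = k**n.
def chromatic(vertices, edges, k):
    vertices, edges = set(vertices), {frozenset(e) for e in edges}
    if not edges:
        return k ** len(vertices)
    u, v = tuple(next(iter(edges)))
    # Delete the edge e = {u, v}.
    deleted = edges - {frozenset((u, v))}
    # Contract e: merge v into u, dropping loops and parallel edges.
    contracted = {frozenset((u if x == v else x) for x in e) for e in deleted}
    contracted = {e for e in contracted if len(e) == 2}
    return (chromatic(vertices, deleted, k)
            - chromatic(vertices - {v}, contracted, k))

# Triangle K3: P(K3, k) = k(k-1)(k-2), so there are 6 proper 3-colorings.
print(chromatic({1, 2, 3}, {(1, 2), (2, 3), (1, 3)}, 3))  # -> 6
```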
passage: Scientists in medicinal chemistry work are principally industrial scientists (but see following), working as part of an interdisciplinary team that uses their chemistry abilities, especially, their synthetic abilities, to use chemical principles to design effective therapeutic agents. The length of training is intense, with practitioners often required to attain a 4-year bachelor's degree followed by a 4–6 year Ph.D. in organic chemistry. Most training regimens also include a postdoctoral fellowship period of 2 or more years after receiving a Ph.D. in chemistry, making the total length of training range from 10 to 12 years of college education. However, employment opportunities at the Master's level also exist in the pharmaceutical industry, and at that and the Ph.D. level there are further opportunities for employment in academia and government. Graduate level programs in medicinal chemistry can be found in traditional medicinal chemistry or pharmaceutical sciences departments, both of which are traditionally associated with schools of pharmacy, and in some chemistry departments. However, the majority of working medicinal chemists have graduate degrees (MS, but especially Ph.D.) in organic chemistry, rather than medicinal chemistry, and the preponderance of positions are in research, where the net is necessarily cast widest, and most broad synthetic activity occurs. In research of small molecule therapeutics, an emphasis on training that provides for breadth of synthetic experience and "pace" of bench operations is clearly present (e.g., for individuals with pure synthetic organic and natural products synthesis in Ph.D. and post-doctoral positions, ibid.).
https://en.wikipedia.org/wiki/Medicinal_chemistry
passage: Although most machines are not able to address individual bits in memory, nor have instructions to manipulate single bits, each bit in a word can be singled out and manipulated using bitwise operations. In particular:
- Use `OR` to set a bit to one: 11101010 OR 00000100 = 11101110
- Use `AND` to set a bit to zero: 11101010 AND 11111101 = 11101000
- Use `AND` to determine if a bit is set, by zero-testing: 11101010 AND 00000001 = 00000000 (= 0 ∴ bit isn't set), while 11101010 AND 00000010 = 00000010 (≠ 0 ∴ bit is set)
- Use `XOR` to invert or toggle a bit: 11101010 XOR 00000100 = 11101110, and 11101110 XOR 00000100 = 11101010
- Use `NOT` to invert all bits: NOT 10110010 = 01001101
To obtain the bit mask needed for these operations, we can use a bit shift operator to shift the number 1 to the left by the appropriate number of places, as well as bitwise negation if necessary.
https://en.wikipedia.org/wiki/Bit_array
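The same single-bit operations written with Python's bitwise operators, with the mask obtained by shifting 1 left by the bit position as described above; the function names are illustrative.
```python
# Single-bit manipulation via masks built with a left shift.
def set_bit(word, i):    return word | (1 << i)
def clear_bit(word, i):  return word & ~(1 << i)
def test_bit(word, i):   return (word & (1 << i)) != 0
def toggle_bit(word, i): return word ^ (1 << i)

w = 0b11101010
print(bin(set_bit(w, 2)))               # 0b11101110
print(bin(clear_bit(w, 1)))             # 0b11101000
print(test_bit(w, 0), test_bit(w, 1))   # False True
print(bin(toggle_bit(w, 2)))            # 0b11101110
```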
passage: For a given m-ary tree T with $$ a $$ being one of its nodes and $$ d $$ its $$ t $$ -th child, a left-t rotation at $$ a $$ is done by making $$ d $$ the root node, and making $$ b $$ and all of its subtrees a child of $$ a $$ , additionally we assign the $$ m-1 $$ left most children of $$ d $$ to $$ a $$ and the right most child of $$ d $$ stays attached to it while $$ d $$ is promoted to root, as shown below:
```
Convert an m-ary tree to left-tree
    for i = 1...n:
        for t = 2...m:
            while t child of node at depth i ≠ 1:
                L-t rotation at nodes at depth i
            end while
        end for
    end for
```
A right-t rotation at d is the inverse of this operation. The left chain of T is a sequence of $$ x_{1}, x_{2}, \dots, x_{n} $$ nodes such that $$ x_1 $$ is the root and all nodes except $$ x_{n} $$ have one child connected to their left most (i.e., $$ m[1] $$ ) pointer. Any m-ary tree can be transformed to a left-chain tree using a finite sequence of left-t rotations for t from 2 to m. Specifically, this can be done by performing left-t rotations on each node $$ x_i $$ until all of its $$ m-1 $$ sub-trees become null at each depth.
https://en.wikipedia.org/wiki/M-ary_tree
passage: Geometrically, this difference quotient measures the slope of the secant line passing through the points with coordinates (a, f(a)) and (b, f(b)). Difference quotients are used as approximations in numerical differentiation, but they have also been the subject of criticism in this application. Difference quotients may also find relevance in applications involving time discretization, where the width of the time step is used for the value of h. The difference quotient is sometimes also called the Newton quotient (after Isaac Newton) or Fermat's difference quotient (after Pierre de Fermat). ## Overview The typical notion of the difference quotient discussed above is a particular case of a more general concept. The primary vehicle of calculus and other higher mathematics is the function. Its "input value" is its argument, usually a point ("P") expressible on a graph. The difference between two points, themselves, is known as their Delta (ΔP), as is the difference in their function result, the particular notation being determined by the direction of formation:
- Forward difference: ΔF(P) = F(P + ΔP) − F(P);
- Central difference: δF(P) = F(P + ΔP) − F(P − ΔP);
- Backward difference: ∇F(P) = F(P) − F(P − ΔP).

The general preference is the forward orientation, as F(P) is the base, to which differences (i.e., "ΔP"s) are added.
https://en.wikipedia.org/wiki/Difference_quotient
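As a concrete illustration of the three orientations, the sketch below (an assumed example, not taken from the article) approximates the derivative of f(x) = x^3 at P = 2, where the exact value is 12. Note that the central difference must be divided by 2ΔP to serve as a derivative estimate:

```python
# Forward, central and backward difference quotients for f(x) = x**3 at P = 2.0.
def f(x):
    return x ** 3

P, dP = 2.0, 1e-3

forward  = (f(P + dP) - f(P)) / dP             # ΔF(P) / ΔP
central  = (f(P + dP) - f(P - dP)) / (2 * dP)  # δF(P) / (2ΔP)
backward = (f(P) - f(P - dP)) / dP             # ∇F(P) / ΔP

print(forward, central, backward)  # ≈ 12.006, 12.000, 11.994 -- the central estimate is most accurate
```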
passage: Thus if the same type of thermometer is calibrated in the same way, its readings will be valid even if it is slightly inaccurate compared to the absolute scale. An example of a reference thermometer used to check others to industrial standards would be a platinum resistance thermometer with a digital display to 0.1 °C (its precision) which has been calibrated at 5 points against national standards (−18, 0, 40, 70, 100 °C) and which is certified to an accuracy of ±0.2 °C. According to British Standards, correctly calibrated, used and maintained liquid-in-glass thermometers can achieve a measurement uncertainty of ±0.01 °C in the range 0 to 100 °C, and a larger uncertainty outside this range: ±0.05 °C up to 200 or down to −40 °C, ±0.2 °C up to 450 or down to −80 °C. ## Indirect methods of temperature measurement Thermal expansion: utilizing the property of thermal expansion of various phases of matter. Pairs of solid metals with different expansion coefficients can be used for bi-metal mechanical thermometers. Another design using this principle is Breguet's thermometer. Some liquids possess relatively high expansion coefficients over useful temperature ranges, thus forming the basis for an alcohol or mercury thermometer. Alternative designs using this principle are the reversing thermometer and the Beckmann differential thermometer. As with liquids, gases can also be used to form a gas thermometer.
https://en.wikipedia.org/wiki/Thermometer
passage: These two planes intersect to partition 3D space into 4 quadrants, which he labeled: - I: above H, in front of V - II: above H, behind V - III: below H, behind V - IV: below H, in front of V These quadrant labels are the same as used in 2D planar geometry, as seen from infinitely far to the "left", taking H and V to be the X-axis and Y-axis, respectively. The 3D object of interest is then placed into either quadrant I or III (equivalently, the position of the intersection line between the two planes is shifted), obtaining first- and third-angle projections, respectively. Quadrants II and IV are also mathematically valid, but their use would result in one view "true" and the other view "flipped" by 180° through its vertical centerline, which is too confusing for technical drawings. (In cases where such a view is useful, e.g. a ceiling viewed from above, a reflected view is used, which is a mirror image of the true orthographic view.) Monge's original formulation uses two planes only and obtains the top and front views only. The addition of a third plane to show a side view (either left or right) is a modern extension. The terminology of quadrant is a mild anachronism, as a modern orthographic projection with three views corresponds more precisely to an octant of 3D space. First-angle projection In first-angle projection
https://en.wikipedia.org/wiki/Multiview_orthographic_projection
passage: ### Eigendecomposition If orthonormal eigenvectors $$ \mathbf{u}_1, \dots, \mathbf{u}_n $$ of a Hermitian matrix $$ A $$ are chosen and written as the columns of the matrix $$ U $$ , then one eigendecomposition of $$ A $$ is $$ A = U \Lambda U^\mathsf{H} $$ where $$ U U^\mathsf{H} = I = U^\mathsf{H} U $$ and therefore $$ A = \sum_j \lambda_j \mathbf{u}_j \mathbf{u}_j ^\mathsf{H}, $$ where $$ \lambda_j $$ are the eigenvalues on the diagonal of the diagonal matrix $$ \Lambda. $$
https://en.wikipedia.org/wiki/Hermitian_matrix
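The decomposition can be checked numerically. The sketch below uses an assumed 2×2 Hermitian matrix (not one from the article) and `numpy.linalg.eigh`, which returns orthonormal eigenvectors as the columns of U:

```python
# Verify A = U Λ U^H and the rank-one sum form for a small Hermitian matrix.
import numpy as np

A = np.array([[2.0, 1 + 1j],
              [1 - 1j, 3.0]])          # Hermitian: A equals its conjugate transpose

eigvals, U = np.linalg.eigh(A)         # columns of U are orthonormal eigenvectors
Lam = np.diag(eigvals)

assert np.allclose(U @ U.conj().T, np.eye(2))     # U U^H = I = U^H U
assert np.allclose(A, U @ Lam @ U.conj().T)       # A = U Λ U^H
A_sum = sum(l * np.outer(U[:, j], U[:, j].conj()) for j, l in enumerate(eigvals))
assert np.allclose(A, A_sum)                      # A = Σ_j λ_j u_j u_j^H
```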
passage: Karl Kreil constructed a seismometer in Prague between 1848 and 1850, which used a point-suspended rigid cylindrical pendulum covered in paper, drawn upon by a fixed pencil. The cylinder was rotated every 24 hours, providing an approximate time for a given quake. Luigi Palmieri, influenced by Mallet's 1848 paper, invented a seismometer in 1856 that could record the time of an earthquake. This device used metallic pendulums which closed an electric circuit with vibration, which then powered an electromagnet to stop a clock. Palmieri seismometers were widely distributed and used for a long time. By 1872, a committee in the United Kingdom led by James Bryce expressed their dissatisfaction with the current available seismometers, still using the large 1842 Forbes device located in Comrie Parish Church, and requested a seismometer which was compact, easy to install and easy to read. In 1875 they settled on a large example of the Mallet device, consisting of an array of cylindrical pins of various sizes installed at right angles to each other on a sand bed, where larger earthquakes would knock down larger pins. This device was constructed in 'Earthquake House' near Comrie, which can be considered the world's first purpose-built seismological observatory. As of 2013, no earthquake has been large enough to cause any of the cylinders to fall in either the original device or replicas. ### The first seismographs (1880-) The first seismographs were invented in the 1870s and 1880s.
https://en.wikipedia.org/wiki/Seismometer
passage: When two functions f and g are decomposed into power series around the same center c, the power series of the sum or difference of the functions can be obtained by termwise addition and subtraction. That is, if $$ f(x) = \sum_{n=0}^\infty a_n (x - c)^n $$ and $$ g(x) = \sum_{n=0}^\infty b_n (x - c)^n $$ then $$ f(x) \pm g(x) = \sum_{n=0}^\infty (a_n \pm b_n) (x - c)^n. $$ The sum of two power series will have a radius of convergence of at least the smaller of the two radii of convergence of the two series, but possibly larger than either of the two. For instance it is not true that if two power series $$ \sum_{n=0}^\infty a_n x^n $$ and $$ \sum_{n=0}^\infty b_n x^n $$ have the same radius of convergence, then $$ \sum_{n=0}^\infty \left(a_n + b_n\right) x^n $$ also has this radius of convergence: if $$ a_n = (-1)^n $$ and $$ b_n = (-1)^{n+1} \left(1 - \frac{1}{3^n}\right) $$ , for instance, then both series have the same radius of convergence of 1, but the series $$ \sum_{n=0}^\infty \left(a_n + b_n\right) x^n = \sum_{n=0}^\infty \frac{(-1)^n}{3^n} x^n $$ has a radius of convergence of 3.
https://en.wikipedia.org/wiki/Power_series
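The radius-of-convergence example above can be seen numerically. In the sketch below (an assumed illustration), each summand series has radius 1, so its partial sums blow up at x = 2, while the termwise sum has coefficients (-1)^n / 3^n and converges there:

```python
# Termwise sum of two power series, evaluated at x = 2 (inside radius 3, outside radius 1).
x, N = 2.0, 60
a = [(-1) ** n for n in range(N)]
b = [(-1) ** (n + 1) * (1 - 3.0 ** -n) for n in range(N)]

print(sum((a[n] + b[n]) * x ** n for n in range(N)))  # ≈ 0.6 = 1/(1 + x/3), the geometric limit
print(sum(a[n] * x ** n for n in range(N)))           # partial sum of Σ a_n x^n grows without bound
```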
passage: The service of this goal is what creates a drawing that one even could scale and get an accurate dimension thereby. And thus the great temptation to do so, when a dimension is wanted but was not labeled. The second principle—that even though scaling the drawing will usually work, one should nevertheless never do it—serves several goals, such as enforcing total clarity regarding who has authority to discern design intent, and preventing erroneous scaling of a drawing that was never drawn to scale to begin with (which is typically labeled "drawing not to scale" or "scale: NTS"). When a user is forbidden from scaling the drawing, they must turn instead to the engineer (for the answers that the scaling would seek), and they will never erroneously scale something that is inherently unable to be accurately scaled. But in some ways, the advent of the CAD and MBD era challenges these assumptions that were formed many decades ago. When part definition is defined mathematically via a solid model, the assertion that one cannot interrogate the model—the direct analog of "scaling the drawing"—becomes ridiculous; because when part definition is defined this way, it is not possible for a drawing or model to be "not to scale". A 2D pencil drawing can be inaccurately foreshortened and skewed (and thus not to scale), yet still be a completely valid part definition as long as the labeled dimensions are the only dimensions used, and no scaling of the drawing by the user occurs. This is because what the drawing and labels convey is in reality a symbol of what is wanted, rather than a true replica of it.
https://en.wikipedia.org/wiki/Engineering_drawing
passage: ### Groups The symmetry operations of a molecule (or other object) form a group. In mathematics, a group is a set with a binary operation that satisfies the four properties listed below. In a symmetry group, the group elements are the symmetry operations (not the symmetry elements), and the binary combination consists of applying first one symmetry operation and then the other. An example is the sequence of a C4 rotation about the z-axis and a reflection in the xy-plane, denoted σ(xy)C4. By convention the order of operations is from right to left. A symmetry group obeys the defining properties of any group. Closure property: This means that the group is closed so that combining two elements produces no new elements. Symmetry operations have this property because a sequence of two operations will produce a third state indistinguishable from the second and therefore from the first, so that the net effect on the molecule is still a symmetry operation. This may be illustrated by means of a table. For example, the point group C3 contains three symmetry operations: rotation by 120°, C3, rotation by 240°, C3², and rotation by 360°, which is equivalent to identity, E. The group C3 is therefore not the same as the operation C3, although the same notation is used.

Point group C3 multiplication table:
        E      C3     C3²
E       E      C3     C3²
C3      C3     C3²    E
C3²     C3²    E      C3
https://en.wikipedia.org/wiki/Molecular_symmetry
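Closure can be checked mechanically by composing matrix representations of the operations. The sketch below (an assumed illustration, not from the article) represents E, C3 and C3² as rotations of the plane by 0°, 120° and 240° and tabulates every product:

```python
# Closure of the point group C3: every product of two operations is again in the set.
import itertools
import numpy as np

def rot(deg):
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

ops = {"E": rot(0), "C3": rot(120), "C3^2": rot(240)}

def name_of(m):
    return next(name for name, op in ops.items() if np.allclose(op, m))

for a, b in itertools.product(ops, ops):
    print(f"{a} . {b} = {name_of(ops[a] @ ops[b])}")   # reproduces the multiplication table
```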
passage: Notice that while $$ e^{\textbf{A}t} $$ is a matrix, given that it is a matrix exponential, we can say that $$ e^{\textbf{A}t} e^{-\textbf{A}t} = I $$ . In other words, $$ \exp(\textbf{A}t) = \left(\exp(-\textbf{A}t)\right)^{-1} $$ . #### Example (homogeneous) Consider the system $$ \begin{matrix} x' &=& 2x & -y & +z \\ y' &=& & 3y & -z \\ z' &=& 2x & +y & +3z \end{matrix}~. $$ The associated defective matrix is $$ A = \begin{bmatrix} 2 & -1 & 1 \\ 0 & 3 & -1 \\ 2 & 1 & 3 \end{bmatrix}~. $$ The matrix exponential is $$ e^{tA} = \frac{1}{2}\begin{bmatrix} e^{2t}\left(1 + e^{2t} - 2t\right) & -2t e^{2t} & e^{2t}\left(-1 + e^{2t}\right) \\ -e^{2t}\left(-1 + e^{2t} - 2t\right) & 2(t + 1)e^{2t} & -e^{2t}\left(-1 + e^{2t}\right) \\ e^{2t}\left(-1 + e^{2t} + 2t\right) & 2t e^{2t} & e^{2t}\left(1 + e^{2t}\right) \end{bmatrix}~, $$ so that the general solution of the homogeneous system is $$ \begin{bmatrix}x \\y \\ z\end{bmatrix} = e^{tA}\begin{bmatrix}x(0) \\y(0) \\ z(0)\end{bmatrix}~. $$
https://en.wikipedia.org/wiki/Matrix_exponential
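The closed form above can be cross-checked against a numerical matrix exponential. The sketch below (assumed, not part of the article) compares it with `scipy.linalg.expm` at an arbitrary time t:

```python
# Check the closed-form e^{tA} for the defective matrix A against scipy.linalg.expm.
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, -1.0, 1.0],
              [0.0,  3.0, -1.0],
              [2.0,  1.0,  3.0]])
t = 0.7
e2 = np.exp(2 * t)

closed_form = 0.5 * np.array([
    [e2 * (1 + e2 - 2 * t),   -2 * t * e2,       e2 * (-1 + e2)],
    [-e2 * (-1 + e2 - 2 * t),  2 * (t + 1) * e2, -e2 * (-1 + e2)],
    [e2 * (-1 + e2 + 2 * t),   2 * t * e2,        e2 * (1 + e2)],
])
assert np.allclose(expm(t * A), closed_form)
```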
passage: Banach spaces play a central role in functional analysis. In other areas of analysis, the spaces under study are often Banach spaces. ## Definition A Banach space is a complete normed space $$ (X, \|{\cdot}\|). $$ A normed space is a pair $$ (X, \|{\cdot}\|) $$ consisting of a vector space $$ X $$ over a scalar field $$ \mathbb{K} $$ (where $$ \mathbb{K} $$ is commonly $$ \Reals $$ or $$ \Complex $$ ) together with a distinguished norm $$ \|{\cdot}\| : X \to \Reals. $$ Like all norms, this norm induces a translation invariant distance function, called the canonical or (norm) induced metric, defined for all vectors $$ x, y \in X $$ by $$ d(x, y) := \|y - x\| = \|x - y\|. $$ This makes $$ X $$ into a metric space $$ (X, d). $$ A sequence $$ x_1, x_2, \ldots $$ is called Cauchy (or Cauchy in $$ (X, d) $$ , or $$ d $$ -Cauchy) if for every real $$ r > 0, $$ there exists some index $$ N $$
https://en.wikipedia.org/wiki/Banach_space
passage: Data compression methods allow in many cases (such as a database) to represent a string of bits by a shorter bit string ("compress") and reconstruct the original string ("decompress") when needed. This utilizes substantially less storage (tens of percent) for many types of data at the cost of more computation (compress and decompress when needed). Analysis of the trade-off between storage cost saving and costs of related computations and possible delays in data availability is done before deciding whether to keep certain data compressed or not. For security reasons, certain types of data (e.g. credit card information) may be kept encrypted in storage to prevent the possibility of unauthorized information reconstruction from chunks of storage snapshots. ## Hierarchy of storage Generally, the lower a storage is in the hierarchy, the lesser its bandwidth and the greater its access latency is from the CPU. This traditional division of storage to primary, secondary, tertiary, and off-line storage is also guided by cost per bit. In contemporary usage, memory is usually fast but temporary semiconductor read-write memory, typically DRAM (dynamic RAM) or other such devices. Storage consists of storage devices and their media not directly accessible by the CPU (secondary or tertiary storage), typically hard disk drives, optical disc drives, and other devices slower than RAM but non-volatile (retaining contents when powered down). Historically, memory has, depending on technology, been called central memory, core memory, core storage, drum, main memory, real storage, or internal memory.
https://en.wikipedia.org/wiki/Computer_data_storage
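As a small, hedged illustration of the compress/decompress trade-off described above (the data and parameters are assumed, not from the article), Python's standard `zlib` module shows both the storage saving and the exact reconstruction:

```python
# Lossless round trip: repetitive data compresses well and decompresses to the exact original.
import zlib

data = b"account,balance\n" + b"12345,100.00\n" * 10_000   # highly repetitive records
compressed = zlib.compress(data, 6)                        # costs CPU time now...

print(len(data), len(compressed))            # ...but stores far fewer bytes
assert zlib.decompress(compressed) == data   # the original string of bits is reconstructed exactly
```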
passage: The chain rule is also valid for Fréchet derivatives in Banach spaces. The same formula holds as before. This case and the previous one admit a simultaneous generalization to Banach manifolds. In differential algebra, the derivative is interpreted as a morphism of modules of Kähler differentials. A ring homomorphism of commutative rings determines a morphism of Kähler differentials which sends an element to , the exterior differential of . The formula holds in this context as well. The common feature of these examples is that they are expressions of the idea that the derivative is part of a functor. A functor is an operation on spaces and functions between them. It associates to each space a new space and to each function between two spaces a new function between the corresponding new spaces. In each of the above cases, the functor sends each space to its tangent bundle and it sends each function to its derivative. For example, in the manifold case, the derivative sends a -manifold to a -manifold (its tangent bundle) and a -function to its total derivative. There is one requirement for this to be a functor, namely that the derivative of a composite must be the composite of the derivatives. This is exactly the formula . There are also chain rules in stochastic calculus. One of these, Itō's lemma, expresses the composite of an Itō process (or more generally a semimartingale) dXt with a twice-differentiable function f.
https://en.wikipedia.org/wiki/Chain_rule
passage: If the number of events in the queue is much smaller or much larger than the number of buckets, it will not function efficiently. The solution is to allow the number of buckets to grow and shrink correspondingly as the queue grows and shrinks. To simplify the resize operation, the Nb (number of buckets) in a CQ is often chosen to be a power of two, i.e., $$ Nb=2^n $$ . The number of buckets is doubled or halved each time the Ne (number of events) exceeds 2Nb or decreases below Nb/2, respectively. When Nb is resized, the new width w has to be calculated as well. The new w that is adopted will be estimated by sampling the average inter-event time gap from the first few hundred events starting at the current bucket position. Thereafter, a new calendar queue is created and all the events in the old calendar are copied over.
https://en.wikipedia.org/wiki/Calendar_queue
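The growth rule described above (double the bucket count once Ne exceeds 2Nb) can be sketched in a few lines. The class below is an assumed, heavily simplified illustration: only insertion and the doubling path are shown, the bucket width is kept fixed rather than re-estimated from inter-event gaps, and dequeueing and shrinking are omitted:

```python
# Simplified calendar-queue growth: Nb stays a power of two and doubles when Ne > 2*Nb.
class CalendarQueue:
    def __init__(self, nb=2, width=1.0):
        self.nb, self.width = nb, width          # number of buckets, bucket width
        self.buckets = [[] for _ in range(nb)]
        self.ne = 0                              # number of stored events

    def _bucket(self, time):
        return int(time / self.width) % self.nb

    def enqueue(self, time):
        self.buckets[self._bucket(time)].append(time)
        self.ne += 1
        if self.ne > 2 * self.nb:                # resize trigger from the passage
            self._resize(self.nb * 2)

    def _resize(self, new_nb):
        events = [t for bucket in self.buckets for t in bucket]
        self.nb = new_nb                         # a full implementation re-estimates the width here
        self.buckets = [[] for _ in range(new_nb)]
        for t in events:                         # recopy all events into the new calendar
            self.buckets[self._bucket(t)].append(t)

q = CalendarQueue()
for t in range(20):
    q.enqueue(float(t))
print(q.nb, q.ne)   # 16 20 -- the bucket array doubled from 2 to 16 as events arrived
```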
passage: Considering the system of bodies as the combined set of small particles the bodies consist of, and applying the previous on the particle level we get the negative gravitational binding energy. This potential energy is more strongly negative than the total potential energy of the system of bodies as such since it also includes the negative gravitational binding energy of each body. The potential energy of the system of bodies as such is the negative of the energy needed to separate the bodies from each other to infinity, while the gravitational binding energy is the energy needed to separate all particles from each other to infinity. $$ U = - m \left(G \frac{ M_1}{r_1}+ G \frac{ M_2}{r_2}\right) $$ therefore, $$ U = - m \sum G \frac{ M}{r} , $$ ### Negative gravitational energy As with all potential energies, only differences in gravitational potential energy matter for most physical purposes, and the choice of zero point is arbitrary. Given that there is no reasonable criterion for preferring one particular finite r over another, there seem to be only two reasonable choices for the distance at which becomes zero: $$ r = 0 $$ and $$ r = \infty $$ . The choice of $$ U = 0 $$ at infinity may seem peculiar, and the consequence that gravitational energy is always negative may seem counterintuitive, but this choice allows gravitational potential energy values to be finite, albeit negative.
https://en.wikipedia.org/wiki/Potential_energy
passage: ### Algebra
Objects: finitely generated R-modules with R a principal ideal domain.
A is equivalent to B if: A and B are isomorphic as R-modules.
Normal form: primary decomposition (up to reordering) or invariant factor decomposition.
### Geometry In analytic geometry:
- The equation of a line: Ax + By = C, with A² + B² = 1 and C ≥ 0
- The equation of a circle: $$ (x - h)^2 + (y - k)^2 = r^2 $$

By contrast, there are alternative forms for writing equations. For example, the equation of a line may be written as a linear equation in point-slope and slope-intercept form. Convex polyhedra can be put into canonical form such that:
- All faces are flat,
- All edges are tangent to the unit sphere, and
- The centroid of the polyhedron is at the origin.

### Integrable systems Every differentiable manifold has a cotangent bundle. That bundle can always be endowed with a certain differential form, called the canonical one-form. This form gives the cotangent bundle the structure of a symplectic manifold, and allows vector fields on the manifold to be integrated by means of the Euler-Lagrange equations, or by means of Hamiltonian mechanics. Such systems of integrable differential equations are called integrable systems. ### Dynamical systems The study of dynamical systems overlaps with that of integrable systems; there one has the idea of a normal form (dynamical systems).
https://en.wikipedia.org/wiki/Canonical_form%23Computing
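As a worked illustration of the canonical circle form (the specific equation is an assumed example), completing the square takes a general equation x² + y² + Dx + Ey + F = 0 to (x − h)² + (y − k)² = r²:

```python
# Canonical form of a circle: x^2 + y^2 - 4x + 6y - 12 = 0  ->  (x - 2)^2 + (y + 3)^2 = 5^2.
import sympy as sp

x, y = sp.symbols("x y")
expr = x**2 + y**2 - 4*x + 6*y - 12

D, E = expr.coeff(x, 1), expr.coeff(y, 1)
F = expr.subs({x: 0, y: 0})
h, k = -D / 2, -E / 2
r2 = h**2 + k**2 - F

print(h, k, r2)   # 2, -3, 25
assert sp.expand((x - h)**2 + (y - k)**2 - r2 - expr) == 0   # same equation, now in canonical form
```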
passage: However, SQUIDs are noise-sensitive, making them impractical as laboratory magnetometers in high DC magnetic fields and in pulsed magnets. Commercial SQUID magnetometers are available for sample temperatures between 300 mK and 400 K, and magnetic fields up to 7 tesla. ### Inductive pickup coils Inductive pickup coils (also referred to as inductive sensors) measure the magnetic dipole moment of a material by detecting the current induced in a coil due to the changing magnetic moment of the sample. The sample's magnetization can be changed by applying a small ac magnetic field (or a rapidly changing dc field), as occurs in capacitor-driven pulsed magnets. These measurements require differentiating between the magnetic field produced by the sample and that from the external applied field. Often a special arrangement of cancellation coils is used. For example, half of the pickup coil is wound in one direction, and the other half in the other direction, and the sample is placed in only one half. The external uniform magnetic field is detected by both halves of the coil, and since they are counter-wound, the external magnetic field produces no net signal. ### VSM (vibrating-sample magnetometer) Vibrating-sample magnetometers (VSMs) detect the dipole moment of a sample by mechanically vibrating the sample inside of an inductive pickup coil or inside of a SQUID coil. Induced current or changing flux in the coil is measured. The vibration is typically created by a motor or a piezoelectric actuator.
https://en.wikipedia.org/wiki/Magnetometer
passage: ## Notation There are several notations describing infinite compositions, including the following: Forward compositions: $$ F_{k,n}(z) = f_k \circ f_{k + 1} \circ \dots \circ f_{n - 1} \circ f_n (z). $$ Backward compositions: $$ G_{k,n}(z) = f_n \circ f_{n - 1} \circ \dots \circ f_{k + 1} \circ f_k (z). $$ In each case convergence is interpreted as the existence of the following limits: $$ \lim_{n\to \infty} F_{1,n}(z), \qquad \lim_{n\to\infty} G_{1,n}(z). $$ For convenience, set $$ F_n(z) = F_{1,n}(z) $$ and $$ G_n(z) = G_{1,n}(z) $$ . One may also write $$ F_n(z)=\underset{k=1}{\overset{n}{\mathop R}}\,f_k(z)=f_1 \circ f_2\circ \cdots \circ f_n(z) $$ and $$ G_n(z)=\underset{k=1}{\overset{n}{\mathop L}}\,g_k(z)=g_n \circ g_{n-1}\circ \cdots \circ g_1(z) $$ ## Contraction theorem Many results can be considered extensions of the following result:
https://en.wikipedia.org/wiki/Infinite_compositions_of_analytic_functions
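The convergence of forward compositions can be observed numerically. The sketch below (an assumed example using the contractions f_k(z) = z/2 + 1/k², not functions from the article) evaluates F_n(z) by composing from the innermost function outward:

```python
# Forward compositions F_n(z) = f_1 ∘ f_2 ∘ ... ∘ f_n(z) for the contractions f_k(z) = z/2 + 1/k^2.
def f(k, z):
    return z / 2 + 1 / k**2

def F(n, z):
    for k in range(n, 0, -1):   # apply f_n first, f_1 last
        z = f(k, z)
    return z

for n in (5, 10, 20, 40):
    print(n, F(n, 0.3))         # the values settle to a limit as n grows
```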
passage: In mathematics, transversality is a notion that describes how spaces can intersect; transversality can be seen as the "opposite" of tangency, and plays a role in general position. It formalizes the idea of a generic intersection in differential topology. It is defined by considering the linearizations of the intersecting spaces at the points of intersection. ## Definition Two submanifolds of a given finite-dimensional smooth manifold are said to intersect transversally if at every point of intersection, their separate tangent spaces at that point together generate the tangent space of the ambient manifold at that point. Manifolds that do not intersect are vacuously transverse. If the manifolds are of complementary dimension (i.e., their dimensions add up to the dimension of the ambient space), the condition means that the tangent space to the ambient manifold is the direct sum of the two smaller tangent spaces. If an intersection is transverse, then the intersection will be a submanifold whose codimension is equal to the sums of the codimensions of the two manifolds. In the absence of the transversality condition the intersection may fail to be a submanifold, having some sort of singular point. In particular, this means that transverse submanifolds of complementary dimension intersect in isolated points (i.e., a 0-manifold). If both submanifolds and the ambient manifold are oriented, their intersection is oriented. When the intersection is zero-dimensional, the orientation is simply a plus or minus for each point.
https://en.wikipedia.org/wiki/Transversality_%28mathematics%29
passage: However, this suggestion has been contested, with other studies finding that xenacoelomorphs are more closely related to Ambulacraria than to other bilaterians. #### Protostomes and deuterostomes Protostomes and deuterostomes differ in several ways. Early in development, deuterostome embryos undergo radial cleavage during cell division, while many protostomes (the Spiralia) undergo spiral cleavage. Animals from both groups possess a complete digestive tract, but in protostomes the first opening of the embryonic gut develops into the mouth, and the anus forms secondarily. In deuterostomes, the anus forms first while the mouth develops secondarily. Most protostomes have schizocoelous development, where cells simply fill in the interior of the gastrula to form the mesoderm. In deuterostomes, the mesoderm forms by enterocoelic pouching, through invagination of the endoderm. The main deuterostome phyla are the Ambulacraria and the Chordata. Ambulacraria are exclusively marine and include acorn worms, starfish, sea urchins, and sea cucumbers. The chordates are dominated by the vertebrates (animals with backbones), which consist of fishes, amphibians, reptiles, birds, and mammals. The protostomes include the Ecdysozoa, named after their shared trait of ecdysis, growth by moulting. Among the largest ecdysozoan phyla are the arthropods and the nematodes.
https://en.wikipedia.org/wiki/Animal
passage: This is provided by the Environment Agency in the U.K. under its Monitoring Certification Scheme (MCERTS). ### Sampling methods There are a wide range of sampling methods which depend on the type of environment, the material being sampled and the subsequent analysis of the sample. At its simplest a sample can be filling a clean bottle with river water and submitting it for conventional chemical analysis. At the more complex end, sample data may be produced by complex electronic sensing devices taking sub-samples over fixed or variable time periods. Sampling methods include judgmental sampling, simple random sampling, stratified sampling, systematic and grid sampling, adaptive cluster sampling, grab samples, semi-continuous monitoring and continuous, passive sampling, remote surveillance, remote sensing, biomonitoring and other sampling methods. #### Judgmental sampling In judgmental sampling, the selection of sampling units (i.e., the number and location and/or timing of collecting samples) is based on knowledge of the feature or condition under investigation and on professional judgment. Judgmental sampling is distinguished from probability-based sampling in that inferences are based on professional judgment, not statistical scientific theory. Therefore, conclusions about the target population are limited and depend entirely on the validity and accuracy of professional judgment; probabilistic statements about parameters are not possible. As described in subsequent chapters, expert judgment may also be used in conjunction with other sampling designs to produce effective sampling for defensible decisions. #### Simple random sampling In simple random sampling
https://en.wikipedia.org/wiki/Environmental_monitoring
passage: Rotational axis vibration can occur due to low stiffness and damping, which are inherent problems of superconducting magnets, preventing the use of completely superconducting magnetic bearings for flywheel applications. Since flux pinning is an important factor for providing the stabilizing and lifting force, the HTSC can be made much more easily for flywheel energy storage than for other uses. HTSC powders can be formed into arbitrary shapes so long as flux pinning is strong. An ongoing challenge that has to be overcome before superconductors can provide the full lifting force for an FES system is finding a way to suppress the decrease of levitation force and the gradual fall of the rotor during operation caused by the flux creep of the superconducting material. ## Physical characteristics ### General Compared with other ways to store electricity, FES systems have long lifetimes (lasting decades with little or no maintenance; full-cycle lifetimes quoted for flywheels range from in excess of 10⁵, up to 10⁷, cycles of use), high specific energy (100–130 W·h/kg, or 360–500 kJ/kg), and large maximum power output. The energy efficiency (ratio of energy out per energy in) of flywheels, also known as round-trip efficiency, can be as high as 90%. Typical capacities range from 3 kWh to 133 kWh. Rapid charging of a system occurs in less than 15 minutes. The high specific energies often cited with flywheels can be a little misleading as commercial systems built have much lower specific energy, for example 11 W·h/kg, or 40 kJ/kg.
https://en.wikipedia.org/wiki/Flywheel_energy_storage
passage: Balancing them is a matter of experimentation and domain-specific considerations. A model may be pre-trained either to predict how the segment continues, or what is missing in the segment, given a segment from its training dataset. It can be either - autoregressive (i.e. predicting how the segment continues, as GPTs do): for example given a segment "I like to eat", the model predicts "ice cream", or "sushi". - "masked" (i.e. filling in the parts missing from the segment, the way "BERT" does it): for example, given a segment "I like to `[__] [__]` cream", the model predicts that "eat" and "ice" are missing. Models may be trained on auxiliary tasks which test their understanding of the data distribution, such as Next Sentence Prediction (NSP), in which pairs of sentences are presented and the model must predict whether they appear consecutively in the training corpus. During training, regularization loss is also used to stabilize training. However regularization loss is usually not used during testing and evaluation. ### Infrastructure Substantial infrastructure is necessary for training the largest models. ## Training cost The qualifier "large" in "large language model" is inherently vague, as there is no definitive threshold for the number of parameters required to qualify as "large". As time goes on, what was previously considered "large" may evolve. GPT-1 of 2018 is usually considered the first LLM, even though it has only 0.117 billion parameters. The tendency towards larger models is visible in the list of large language models.
https://en.wikipedia.org/wiki/Large_language_model
passage: the comparison logic is not the central aspect of this algorithm, it is hidden behind a generic comparator and can also consist of several comparison criteria (e.g. multiple columns). The compare function should return if a row is less(-1), equal(0) or bigger(1) than another row: ```typescript function compare(leftRow: RelationRow, rightRow: RelationRow): number { // Return -1 if leftRow is less than rightRow // Return 0 if leftRow is equal to rightRow // Return 1 if leftRow is greater than rightRow } ``` Note that a relation in terms of this pseudocode supports some basic operations: ```typescript interface Relation { // Returns true if relation has a next row (otherwise false) hasNext(): boolean // Returns the next row of the relation (if any) next(): RelationRow // Sorts the relation with the given comparator sort(comparator: Comparator): void // Marks the current row index mark(): void // Restores the current row index to the marked row index restoreMark(): void } ``` ## Simple C# implementation Note that this implementation assumes the join attributes are unique, i.e., there is no need to output multiple tuples for a given value of the key.
https://en.wikipedia.org/wiki/Sort-merge_join
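For orientation, here is a hedged sketch of the join loop itself, written in Python rather than the C# the article refers to; it assumes both inputs are already sorted on the join key and that keys are unique, as stated above:

```python
# Merge join over two relations sorted on the join key (unique keys assumed).
def merge_join(left, right, key=lambda row: row[0]):
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        lk, rk = key(left[i]), key(right[j])
        if lk == rk:
            result.append(left[i] + right[j])  # keys match: emit one joined row
            i += 1
            j += 1
        elif lk < rk:
            i += 1                             # advance the side with the smaller key
        else:
            j += 1
    return result

users  = [(1, "ann"), (2, "bob"), (4, "eve")]
orders = [(1, "book"), (3, "pen"), (4, "ink")]
print(merge_join(users, orders))   # [(1, 'ann', 1, 'book'), (4, 'eve', 4, 'ink')]
```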
passage: See van Heijenoort's commentary and Norbert Wiener's 1914 A simplification of the logic of relations in van Heijenoort 1967:224ff. ## The unit class, impredicativity, and the vicious circle principle Suppose a librarian wants to index her collection into a single book (call it Ι for "index"). Her index will list all the books and their locations in the library. As it turns out, there are only three books, and these have titles Ά, β, and Γ. To form her index I, she goes out and buys a book of 200 blank pages and labels it "I". Now she has four books: I, Ά, β, and Γ. Her task is not difficult. When completed, the contents of her index I are 4 pages, each with a unique title and unique location (each entry abbreviated as Title. LocationT): I = { I.LI, Ά.LΆ, β.Lβ, Γ.LΓ}. This sort of definition of I was deemed by Poincaré to be "impredicative". He seems to have considered that only predicative definitions can be allowed in mathematics: "a definition is 'predicative' and logically admissible only if it excludes all objects that are dependent upon the notion defined, that is, that can in any way be determined by it". Zermelo 1908 in van Heijenoort 1967:190. See the discussion of this very quotation in Mancosu 1998:68.
https://en.wikipedia.org/wiki/Logicism
passage: " An influential 1992 study by Brock et al. appeared to find support for technical trading rules. Sullivan and Timmerman tested the 1992 study for data snooping and other problems in 1999; they determined the sample covered by Brock et al. was robust to data snooping. Subsequently, a comprehensive study of the question by Amsterdam economist Gerwin Griffioen concludes that: "for the U.S., Japanese and most Western European stock market indices the recursive out-of-sample forecasting procedure does not show to be profitable, after implementing little transaction costs. Moreover, for sufficiently high transaction costs it is found, by estimating CAPMs, that technical trading shows no statistically significant risk-corrected out-of-sample forecasting power for almost all of the stock market indices." Transaction costs are particularly applicable to "momentum strategies"; a comprehensive 1996 review of the data and studies concluded that even small transaction costs would lead to an inability to capture any excess from such strategies. In a 2000 paper published in the Journal of Finance, professor Andrew W. Lo of MIT, working with Harry Mamaysky and Jiang Wang found that: In that same paper Lo wrote that "several academic studies suggest that ... technical analysis may well be an effective means for extracting useful information from market prices." Some techniques such as Drummond Geometry attempt to overcome the past data bias by projecting support and resistance levels from differing time frames into the near-term future and combining that with reversion to the mean techniques.
https://en.wikipedia.org/wiki/Technical_analysis
passage: Heat storms occur when the temperature reaches for three or more consecutive days over a wide area (tens of thousands of square miles). The National Weather Service issues heat advisories and excessive heat warnings when it expects unusual periods of hot weather. In Canada, heat waves are defined using the daily maximum and minimum temperatures, and in most of the country, the humidex as well, exceeding a regional threshold for two or more days. The threshold in which daily maximum temperatures must exceed ranges between in Newfoundland and in interior British Columbia, though this threshold is much lower in Nunavut, ranging between and . #### Oceania In Adelaide, South Australia, a heat wave is five consecutive days at or above , or three consecutive days at or over . The Australian Bureau of Meteorology defines a heat wave as three or more days of unusual maximum and minimum temperatures. Before this new Pilot Heatwave Forecast there was no national definition for heat waves or measures of heat wave severity. In New Zealand, heat waves thresholds depend on local climatology, with the temperature threshold ranging between in Greymouth and in Gisborne. ## Marine Heatwaves Marine heatwaves have become a prominent subject of research in recent years, reflecting the fact that since the turn of this century many ocean areas have experienced peaks of temperatures, along with more frequent, more intense, more prolonged warming events than ever met on record. The genesis of marine heatwaves is mainly driven by a combination of oceanic and atmospheric factors, often triggered by high pressure systems that will reduce cloud cover and increase solar absorption by the sea surface.
https://en.wikipedia.org/wiki/Heat_wave
passage: The functional $$ \mu_A $$ extends to a positive linear functional on compactly supported continuous functions and so gives a Haar measure. (Note that even though the limit is linear in $$ K $$ , the individual terms $$ [K:U] $$ are not usually linear in $$ K $$ .) ### A construction using mean values of functions Von Neumann gave a method of constructing Haar measure using mean values of functions, though it only works for compact groups. The idea is that given a function $$ f $$ on a compact group, one can find a convex combination $$ \sum a_i f(g_i g) $$ (where $$ \sum a_i=1 $$ ) of its left translates that differs from a constant function by at most some small number $$ \epsilon $$ . Then one shows that as $$ \epsilon $$ tends to zero the values of these constant functions tend to a limit, which is called the mean value (or integral) of the function $$ f $$ . For groups that are locally compact but not compact this construction does not give Haar measure as the mean value of compactly supported functions is zero. However something like this does work for almost periodic functions on the group which do have a mean value, though this is not given with respect to Haar measure. ### A construction on Lie groups On an n-dimensional Lie group, Haar measure can be constructed easily as the measure induced by a left-invariant n-form. This was known before Haar's theorem. ##
https://en.wikipedia.org/wiki/Haar_measure
passage: The first mass-produced computers were the Bull Gamma 3 (1952, 1,200 units) and the IBM 650 (1954, 2,000 units). ## Design Vacuum-tube technology required a great deal of electricity. The ENIAC computer (1946) had over 17,000 tubes and suffered a tube failure (which would take 15 minutes to locate) on average every two days. In operation the ENIAC consumed 150 kilowatts of power, of which 80 kilowatts were used for heating tubes, 45 kilowatts for DC power supplies, 20 kilowatts for ventilation blowers, and 5 kilowatts for punched-card auxiliary equipment. Because the failure of any one of the thousands of tubes in a computer could result in errors, tube reliability was of high importance. Special quality tubes were built for computer service, with higher standards of materials, inspection and testing than standard receiving tubes. One effect of digital operation that rarely appeared in analog circuits was cathode poisoning. Vacuum tubes that operated for extended intervals with no plate current would develop a high-resistivity layer on the cathodes, reducing the gain of the tube. Specially selected materials were required for computer tubes to prevent this effect. To avoid mechanical stresses associated with warming the tubes to operating temperature, often the tube heaters had their full operating voltage applied slowly, over a minute or more, to prevent stress-related fractures of the cathode heaters. To avoid thermal cycling, heater power could be left on during standby time for the machine, with high-voltage plate supplies switched off.
https://en.wikipedia.org/wiki/Vacuum-tube_computer
passage: If agents find it optimal to truthfully report type, we say such a mechanism is truthfully implementable. The task is then to solve for a truthfully implementable and impute this transfer function to the original game. An allocation is truthfully implementable if there exists a transfer function such that which is also called the incentive compatibility (IC) constraint. In applications, the IC condition is the key to describing the shape of in any useful way. Under certain conditions it can even isolate the transfer function analytically. Additionally, a participation (individual rationality) constraint is sometimes added if agents have the option of not playing. #### Necessity Consider a setting in which all agents have a type-contingent utility function . Consider also a goods allocation that is vector-valued and size (which permits number of goods) and assume it is piecewise continuous with respect to its arguments. The function is implementable only if whenever and and x is continuous at . This is a necessary condition and is derived from the first- and second-order conditions of the agent's optimization problem assuming truth-telling. Its meaning can be understood in two pieces. The first piece says the agent's marginal rate of substitution (MRS) increases as a function of the type, In short, agents will not tell the truth if the mechanism does not offer higher agent types a better deal. Otherwise, higher types facing any mechanism that punishes high types for reporting will lie and declare they are lower types, violating the truthtelling incentive-compatibility constraint.
https://en.wikipedia.org/wiki/Mechanism_design
passage: #### Gregory–Leibniz series The Gregory–Leibniz series $$ \pi = 4\sum_{n=0}^{\infty} \cfrac {(-1)^n}{2n+1} = 4\left( \frac{1}{1} - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots\right) $$ is the power series for arctan(x) specialized to x = 1. It converges too slowly to be of practical interest. However, the power series converges much faster for smaller values of $$ x $$ , which leads to formulae where $$ \pi $$ arises as the sum of small angles with rational tangents, known as Machin-like formulae.
https://en.wikipedia.org/wiki/Approximations_of_%CF%80
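The contrast in convergence speed can be seen directly. The sketch below (assumed, not from the article) compares partial sums of the Gregory–Leibniz series with Machin's classic formula π = 16·arctan(1/5) − 4·arctan(1/239), one of the Machin-like formulae mentioned above:

```python
# Slow convergence of the Gregory–Leibniz series versus a Machin-like formula.
from math import atan, pi

def leibniz(n_terms):
    return 4 * sum((-1) ** n / (2 * n + 1) for n in range(n_terms))

print(abs(pi - leibniz(1000)))                            # still off by roughly 1e-3 after 1000 terms
print(abs(pi - (16 * atan(1 / 5) - 4 * atan(1 / 239))))   # already at machine precision
```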
passage: Angiotensin II is a hormone which acts on the adrenal cortex, causing the release into the blood of the steroid hormone, aldosterone. Angiotensin II also acts on the smooth muscle in the walls of the arterioles causing these small diameter vessels to constrict, thereby restricting the outflow of blood from the arterial tree, causing the arterial blood pressure to rise. This, therefore, reinforces the measures described above (under the heading of "Arterial blood pressure"), which defend the arterial blood pressure against changes, especially hypotension. The angiotensin II-stimulated aldosterone released from the zona glomerulosa of the adrenal glands has an effect on particularly the epithelial cells of the distal convoluted tubules and collecting ducts of the kidneys. Here it causes the reabsorption of sodium ions from the renal tubular fluid, in exchange for potassium ions which are secreted from the blood plasma into the tubular fluid to exit the body via the urine. The reabsorption of sodium ions from the renal tubular fluid halts further sodium ion losses from the body, and therefore preventing the worsening of hyponatremia. The hyponatremia can only be corrected by the consumption of salt in the diet. However, it is not certain whether a "salt hunger" can be initiated by hyponatremia, or by what mechanism this might come about.
https://en.wikipedia.org/wiki/Homeostasis
passage: Many traditional African practices, such as voodoo and hoodoo, have a strong belief in superstition. Some of these religions include a belief that third parties can influence an individual's luck. Shamans and witches are both respected and feared, based on their ability to cause good or bad fortune for those in villages near them. ### Self-fulfilling prophecy Some evidence supports the idea that belief in luck acts like a placebo, producing positive thinking and improving people's responses to events. In personality psychology, people reliably differ from each other depending on four key aspects: beliefs in luck, rejection of luck, being lucky, and being unlucky. People who believe in good luck are more optimistic, more satisfied with their lives, and have better moods. People who believe they are personally unlucky experience more anxiety and are less likely to take advantage of unexpected opportunities. One 2010 study found that golfers who were told they were using a "lucky ball" performed better than those who were not. Some people intentionally put themselves in situations that increase the chances of a serendipitous encounter, such as socializing with people who work in different fields. ## Social aspects ### Games The philosopher Nicholas Rescher has proposed that the luck of someone's result in a situation of uncertainty is measured by the difference between this party's yield and expectation: λ = Y - E. Thus skill enhances expectation and reduces luck. The extent to which different games will depend on luck, rather than skill or effort, varies considerably.
https://en.wikipedia.org/wiki/Luck
passage: ### Confidence interval of a sampled standard deviation The standard deviation we obtain by sampling a distribution is itself not absolutely accurate, both for mathematical reasons (explained here by the confidence interval) and for practical reasons of measurement (measurement error). The mathematical effect can be described by the confidence interval or CI. To show how a larger sample will make the confidence interval narrower, consider the following examples: A small population of N = 2 has only one degree of freedom for estimating the standard deviation. The result is that a 95% CI of the SD runs from 0.45 × SD to 31.9 × SD; the factors here are as follows: $$ \Pr\left(q_\frac{\alpha}{2} < k \frac{s^2}{\sigma^2} < q_{1 - \frac{\alpha}{2}}\right) = 1 - \alpha, $$ where $$ q_p $$ is the $$ p $$ -th quantile of the chi-square distribution with $$ k $$ degrees of freedom, and $$ 1 - \alpha $$ is the confidence level. This is equivalent to the following: $$ \Pr\left(k\frac{s^2}{q_{1 - \frac{\alpha}{2}}} < \sigma^2 < k\frac{s^2}{q_{\frac{\alpha}{2}}}\right) = 1 - \alpha. $$ With $$ k = 1 $$ , $$ q_{0.025} = 0.000982 $$ and $$ q_{0.975} = 5.024 $$ . The reciprocals of the square roots of these two numbers give us the factors 0.45 and 31.9 given above. A larger population of N = 10 has 9 degrees of freedom for estimating the standard deviation.
https://en.wikipedia.org/wiki/Standard_deviation
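The quoted factors can be reproduced with chi-square quantiles. The sketch below (assumed, using `scipy.stats.chi2`) computes the 95% CI multipliers for the sampled SD at 1 and 9 degrees of freedom:

```python
# 95% confidence-interval factors for a sampled standard deviation with k degrees of freedom.
from scipy.stats import chi2

def sd_ci_factors(k, alpha=0.05):
    lo_q = chi2.ppf(alpha / 2, df=k)         # q_{alpha/2}
    hi_q = chi2.ppf(1 - alpha / 2, df=k)     # q_{1-alpha/2}
    return (k / hi_q) ** 0.5, (k / lo_q) ** 0.5

print(sd_ci_factors(1))   # ≈ (0.446, 31.9)  for one degree of freedom (N = 2)
print(sd_ci_factors(9))   # ≈ (0.69, 1.83)   for nine degrees of freedom (N = 10)
```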
passage: There is a net force on the stone in the horizontal plane which acts toward the center. In an inertial frame of reference, were it not for this net force acting on the stone, the stone would travel in a straight line, according to Newton's first law of motion. In order to keep the stone moving in a circular path, a centripetal force, in this case provided by the string, must be continuously applied to the stone. As soon as it is removed (for example if the string breaks) the stone moves in a straight line, as viewed from above. In this inertial frame, the concept of centrifugal force is not required as all motion can be properly described using only real forces and Newton's laws of motion. In a frame of reference rotating with the stone around the same axis as the stone, the stone is stationary. However, the force applied by the string is still acting on the stone. If one were to apply Newton's laws in their usual (inertial frame) form, one would conclude that the stone should accelerate in the direction of the net applied force—towards the axis of rotation—which it does not do. The centrifugal force and other fictitious forces must be included along with the real forces in order to apply Newton's laws of motion in the rotating frame. Earth The Earth constitutes a rotating reference frame because it rotates once every 23 hours and 56 minutes around its axis. Because the rotation is slow, the fictitious forces it produces are often small, and in everyday situations can generally be neglected.
https://en.wikipedia.org/wiki/Centrifugal_force
passage: while $$ f'(a) = 0 $$ . An example is $$ f(x) = (x - a)^3 $$ . In fact, for such a function, the inverse cannot be differentiable at $$ b = f(a) $$ , since if $$ f^{-1} $$ were differentiable at $$ b $$ , then, by the chain rule, $$ 1 = (f^{-1} \circ f)'(a) = (f^{-1})'(b)f'(a) $$ , which implies $$ f'(a) \ne 0 $$ . (The situation is different for holomorphic functions; see ## Holomorphic inverse function theorem below.) For functions of more than one variable, the theorem states that if $$ f $$ is a continuously differentiable function from an open subset $$ A $$ of $$ \mathbb{R}^n $$ into $$ \R^n $$ , and the derivative $$ f'(a) $$ is invertible at a point (that is, the determinant of the Jacobian matrix of at is non-zero), then there exist neighborhoods $$ U $$ of $$ a $$ in $$ A $$ and $$ V $$ of $$ b = f(a) $$ such that $$ f(U) \subset V $$ and $$ f : U \to V $$ is bijective.
https://en.wikipedia.org/wiki/Inverse_function_theorem
passage: \end{align} $$ The reciprocal of the natural logarithm can be also written in this way: $$ \frac {1}{\ln(x)} = \frac {2x}{x^2-1}\sqrt{\frac {1}{2}+\frac {x^2+1}{4x}}\sqrt{\frac {1}{2}+\frac {1}{2}\sqrt{\frac {1}{2}+\frac {x^2+1}{4x}}}\ldots $$ For example: $$ \frac {1}{\ln(2)} = \frac {4}{3}\sqrt{\frac {1}{2} + \frac {5}{8}} \sqrt{\frac {1}{2} + \frac {1}{2} \sqrt{\frac {1}{2} +\frac {5}{8}}} \ldots $$
https://en.wikipedia.org/wiki/Natural_logarithm
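The nested-radical expression can be checked numerically. The sketch below (assumed, not from the article) evaluates a truncated product for x = 2 and compares it with 1/ln(2) ≈ 1.4427:

```python
# Truncated nested-radical product for 1/ln(x), evaluated at x = 2.
from math import log, sqrt

def reciprocal_ln(x, terms=30):
    c = sqrt(0.5 + (x * x + 1) / (4 * x))   # first radical
    product = c
    for _ in range(terms - 1):
        c = sqrt(0.5 + c / 2)               # each further radical nests the previous value
        product *= c
    return 2 * x / (x * x - 1) * product

print(reciprocal_ln(2.0), 1 / log(2.0))     # both ≈ 1.442695
```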
passage: In mathematics, the fundamental theorem of Galois theory is a result that describes the structure of certain types of field extensions in relation to groups. It was proved by Évariste Galois in his development of Galois theory. In its most basic form, the theorem asserts that given a field extension E/F that is finite and Galois, there is a one-to-one correspondence between its intermediate fields and subgroups of its Galois group. (Intermediate fields are fields K satisfying F ⊆ K ⊆ E; they are also called subextensions of E/F.) ## Explicit description of the correspondence For finite extensions, the correspondence can be described explicitly as follows. - For any subgroup H of Gal(E/F), the corresponding fixed field, denoted EH, is the set of those elements of E which are fixed by every automorphism in H. - For any intermediate field K of E/F, the corresponding subgroup is Aut(E/K), that is, the set of those automorphisms in Gal(E/F) which fix every element of K. The fundamental theorem says that this correspondence is a one-to-one correspondence if (and only if) E/F is a Galois extension. For example, the topmost field E corresponds to the trivial subgroup of Gal(E/F), and the base field F corresponds to the whole group Gal(E/F). The notation Gal(E/F) is only used for Galois extensions. If E/F is Galois, then Gal(E/F) = Aut(E/F).
https://en.wikipedia.org/wiki/Fundamental_theorem_of_Galois_theory
passage: Even so, it has recently been discovered that there are some forms of life, chemotrophs, that appear to gain all their metabolic energy from chemosynthesis driven by hydrothermal vents, thus showing that some life may not require solar energy to thrive. Chemosynthetic bacteria and archaea use hydrogen sulfide and methane from hydrothermal vents and cold seeps as an energy source (just as plants use sunlight) to produce carbohydrates; they form the base of the food chain in regions with little to no sunlight. Regardless of where the energy is obtained, a species that produces its own energy lies at the base of the food chain model, and is a critically important part of an ecosystem. Higher trophic levels cannot produce their own energy and so must consume producers or other life that itself consumes producers. In the higher trophic levels lie the consumers (secondary consumers, tertiary consumers, etc.). Consumers are organisms that eat other organisms. All organisms in a food chain, except the first organism, are consumers. Secondary consumers eat and obtain energy from primary consumers, tertiary consumers eat and obtain energy from secondary consumers, etc. At the highest trophic level is typically an apex predator, a consumer with no natural predators in the food chain model. When any trophic level dies, detritivores and decomposers consume their organic material for energy and expel nutrients into the environment in their waste. Decomposers and detritivores break down the organic compounds into simple nutrients that are returned to the soil.
https://en.wikipedia.org/wiki/Food_chain
passage: Reprinted, pp. 168–184 in W.D. Hart (ed., 1996). Philosophy of mathematics today proceeds along several different lines of inquiry, by philosophers of mathematics, logicians, and mathematicians, and there are many schools of thought on the subject. The schools are addressed separately in the next section, and their assumptions explained. ## Contemporary schools of thought Contemporary schools of thought in the philosophy of mathematics include: artistic, Platonism, mathematicism, logicism, formalism, conventionalism, intuitionism, constructivism, finitism, structuralism, embodied mind theories (Aristotelian realism, psychologism, empiricism), fictionalism, social constructivism, and non-traditional schools. However, many of these schools of thought are mutually compatible. For example, most living mathematicians are at once Platonists and formalists, give great importance to aesthetics, and consider that axioms should be chosen for the results they produce, not for their coherence with human intuition of reality (conventionalism). ### Artistic The view that claims that mathematics is the aesthetic combination of assumptions, and then also claims that mathematics is an art. A famous mathematician who made this claim is the British G. H. Hardy. For Hardy, in his book A Mathematician's Apology, the definition of mathematics was more like the aesthetic combination of concepts. ### Platonism
https://en.wikipedia.org/wiki/Philosophy_of_mathematics
passage: Note that $$ d\tau=-dv $$ . We also assume that $$ \mathbf{u} $$ is constant during the integral, which in turn yields $$ \begin{align} \mathbf x[k+1] &= e^{\mathbf{A}T}\mathbf x[k] - \left( \int_{v(kT)}^{v((k+1)T)} e^{\mathbf{A}v} dv \right) \mathbf{Bu}[k] \\[2pt] &= e^{\mathbf{A}T}\mathbf x[k] - \left( \int_T^0 e^{\mathbf{A}v} dv \right) \mathbf{Bu}[k] \\[2pt] &= e^{\mathbf{A}T}\mathbf x[k] + \left( \int_0^T e^{\mathbf{A}v} dv \right) \mathbf{Bu}[k] \\[4pt] &= e^{\mathbf{A}T}\mathbf x[k] + \mathbf A^{-1}\left(e^{\mathbf{A}T} - \mathbf I \right) \mathbf{Bu}[k] \end{align} $$ which is an exact solution to the discretization problem. When $$ \mathbf{A} $$ is singular, the latter expression can still be used by replacing $$ e^{\mathbf{A}T} $$ by its Taylor expansion, $$
https://en.wikipedia.org/wiki/Discretization
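The exact zero-order-hold discretization above is easy to verify numerically. The sketch below (an assumed example system, not from the article) computes A_d = e^{AT} and B_d = A^{-1}(e^{AT} - I)B and compares them with `scipy.signal.cont2discrete`:

```python
# Exact ZOH discretization of x' = Ax + Bu, checked against scipy.signal.cont2discrete.
import numpy as np
from scipy.linalg import expm
from scipy.signal import cont2discrete

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # invertible, so A^{-1}(e^{AT} - I)B applies directly
B = np.array([[0.0], [1.0]])
T = 0.1

Ad = expm(A * T)
Bd = np.linalg.inv(A) @ (Ad - np.eye(2)) @ B

Ad_ref, Bd_ref, *_ = cont2discrete((A, B, np.eye(2), np.zeros((2, 1))), T, method="zoh")
assert np.allclose(Ad, Ad_ref) and np.allclose(Bd, Bd_ref)
```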
passage: The choice of loss function here gives rise to several well-known learning algorithms such as regularized least squares and support vector machines. A purely online model in this category would learn based on just the new input $$ (x_{t+1}, y_{t+1}) $$ , the current best predictor $$ f_{t} $$ and some extra stored information (which is usually expected to have storage requirements independent of training data size). For many formulations, for example nonlinear kernel methods, true online learning is not possible, though a form of hybrid online learning with recursive algorithms can be used where $$ f_{t+1} $$ is permitted to depend on $$ f_t $$ and all previous data points $$ (x_1, y_1), \ldots, (x_t, y_t) $$ . In this case, the space requirements are no longer guaranteed to be constant since it requires storing all previous data points, but the solution may take less time to compute with the addition of a new data point, as compared to batch learning techniques. A common strategy to overcome the above issues is to learn using mini-batches, which process a small batch of $$ b \ge 1 $$ data points at a time; this can be considered as pseudo-online learning for $$ b $$ much smaller than the total number of training points. Mini-batch techniques are used with repeated passing over the training data to obtain optimized out-of-core versions of machine learning algorithms, for example, stochastic gradient descent. When combined with backpropagation, this is currently the de facto training method for training artificial neural networks.
https://en.wikipedia.org/wiki/Online_machine_learning
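As a small, hedged illustration of the mini-batch strategy (the data, batch size b = 16 and learning rate are all assumed), the sketch below runs mini-batch stochastic gradient descent for least-squares linear regression with repeated passes over the training set:

```python
# Mini-batch SGD for linear least squares: one parameter update per batch of b = 16 points.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + 0.01 * rng.normal(size=1000)

w, b_size, lr = np.zeros(3), 16, 0.05
for epoch in range(20):                          # repeated passes over the training data
    order = rng.permutation(len(X))
    for start in range(0, len(X), b_size):
        batch = order[start:start + b_size]
        grad = X[batch].T @ (X[batch] @ w - y[batch]) / len(batch)
        w -= lr * grad                           # update from this mini-batch only
print(w)   # ≈ [1.5, -2.0, 0.5]
```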
passage: As a result, the interpolation inequality still holds. ### Extension by zero Like above, we define $$ H^s_0(\Omega) $$ to be the closure in $$ H^s(\Omega) $$ of the space $$ C^\infty_c(\Omega) $$ of infinitely differentiable compactly supported functions. Given the definition of a trace, above, we may state the following If $$ u\in H^s_0(\Omega) $$ we may define its extension by zero $$ \tilde u \in L^2(\R^n) $$ in the natural way, namely $$ \tilde u(x)= \begin{cases} u(x) & x \in \Omega \\ 0 & \text{else} \end{cases} $$ For its extension by zero, $$ Ef := \begin{cases} f & \textrm{on} \ \Omega, \\ 0 & \textrm{otherwise} \end{cases} $$ is an element of $$ L^p(\R^n). $$ Furthermore, $$ \| Ef \|_{L^p(\R^n)}= \| f \|_{L^p(\Omega)}. $$ In the case of the Sobolev space W1,p(Ω) for , extending a function u by zero will not necessarily yield an element of $$ W^{1,p}(\R^n). $$
https://en.wikipedia.org/wiki/Sobolev_space
passage: Motor neurons receive signals from the brain and spinal cord to control everything from muscle contractions to glandular output. Interneurons connect neurons to other neurons within the same region of the brain or spinal cord. When multiple neurons are functionally connected together, they form what is called a neural circuit. A neuron contains all the structures of other cells such as a nucleus, mitochondria, and Golgi bodies but has additional unique structures such as an axon, and dendrites. The soma is a compact structure, and the axon and dendrites are filaments extruding from the soma. Dendrites typically branch profusely and extend a few hundred micrometers from the soma. The axon leaves the soma at a swelling called the axon hillock and travels for as far as 1 meter in humans or more in other species. It branches but usually maintains a constant diameter. At the farthest tip of the axon's branches are axon terminals, where the neuron can transmit a signal across the synapse to another cell. Neurons may lack dendrites or have no axons. The term neurite is used to describe either a dendrite or an axon, particularly when the cell is undifferentiated. Most neurons receive signals via the dendrites and soma and send out signals down the axon. At the majority of synapses, signals cross from the axon of one neuron to the dendrite of another. However, synapses can connect an axon to another axon or a dendrite to another dendrite.
https://en.wikipedia.org/wiki/Neuron
passage: If the cevian divides the side of length $$ a $$ into two segments of length $$ m $$ and $$ n $$ , with $$ m $$ adjacent to $$ c $$ and $$ n $$ adjacent to $$ b $$ , then Stewart's theorem states that $$ b^2m + c^2n = a(d^2 + mn). $$ A common mnemonic used by students to memorize this equation (after rearranging the terms) is: $$ \underset{\text{A }man\text{ and his }dad}{man\ +\ dad} = \!\!\!\!\!\! \underset{\text{put a }bomb\text{ in the }sink.}{bmb\ +\ cnc} $$ The theorem may be written more symmetrically using signed lengths of segments. That is, take the length $$ \overline{AB} $$ to be positive or negative according to whether $$ A $$ is to the left or right of $$ B $$ in some fixed orientation of the line. In this formulation, the theorem states that if $$ A, B, C $$ are collinear points and $$ P $$ is any point, then $$ \left(\overline{PA}^2\cdot \overline{BC}\right) + \left(\overline{PB}^2\cdot \overline{CA}\right) + \left(\overline{PC}^2\cdot \overline{AB}\right) + \left(\overline{AB}\cdot \overline{BC}\cdot \overline{CA}\right) =0. $$ In the special case where the cevian is a median (meaning it divides the opposite side into two segments of equal length), the result is known as Apollonius' theorem.
https://en.wikipedia.org/wiki/Stewart%27s_theorem
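As a quick numerical sanity check of the relation $$ b^2m + c^2n = a(d^2 + mn) $$ , the sketch below places the side of length $$ a $$ on the x-axis and verifies the identity; the specific coordinates and point names are illustrative choices, not taken from the article.

```python
import math

# Triangle with B=(0,0), C=(a,0) and apex A; D lies on segment BC, so AD is a cevian.
A, B, C = (1.2, 2.5), (0.0, 0.0), (5.0, 0.0)
D = (3.1, 0.0)                                   # any point strictly between B and C

dist = lambda P, Q: math.hypot(P[0] - Q[0], P[1] - Q[1])

a, b, c = dist(B, C), dist(C, A), dist(A, B)     # side lengths opposite A, B, C
d = dist(A, D)                                   # cevian length
m, n = dist(B, D), dist(D, C)                    # m adjacent to side c, n adjacent to side b

print(abs(b**2 * m + c**2 * n - a * (d**2 + m * n)) < 1e-9)   # expect True
```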
passage:
| Point group | Symmetry operations | Description | Examples |
|---|---|---|---|
| … | 3C2 σh 2S3 3σv | trigonal planar or trigonal bipyramidal | boron trifluoride, phosphorus pentachloride, cyclopropane |
| D4h | E 2C4 C2 2C2' 2C2" i 2S4 σh 2σv 2σd | square planar | xenon tetrafluoride, octachlorodimolybdate(II) anion, Trans-[CoIII(NH3)4Cl2]+ (excluding H atoms) |
| D5h | E 2C5 2C5² 5C2 σh 2S5 2S5³ 5σv | pentagonal | cyclopentadienyl anion, ruthenocene, C70 |
| D6h | E 2C6 2C3 C2 3C2' 3C2″ i 2S3 2S6 σh 3σd 3σv | hexagonal | benzene, bis(benzene)chromium, coronene (C24H12) |
| D7h | E C7 S7 7C2 σh 7σv | heptagonal | tropylium (C7H7+) cation |
| D8h | E C8 C4 C2 S8 i 8C2 σh 4σv 4σd | octagonal | cyclooctatetraenide (C8H8²⁻) anion, uranocene |
| D2d | E 2S4 C2 2C2' 2σd | 90° twist | allene, tetrasulfur tetranitride, diborane(4) (excited state) |
| D3d | E 2C3 3C2 … | | |
https://en.wikipedia.org/wiki/Molecular_symmetry
passage: ### Plant cell culture methods Plant cell cultures are typically grown as cell suspension cultures in a liquid medium or as callus cultures on a solid medium. The culturing of undifferentiated plant cells and calli requires the proper balance of the plant growth hormones auxin and cytokinin. ### Insect cell culture Cells derived from Drosophila melanogaster (most prominently, Schneider 2 cells) can be used for experiments which may be hard to do on live flies or larvae, such as biochemical studies or studies using siRNA. Cell lines derived from the army worm Spodoptera frugiperda, including Sf9 and Sf21, and from the cabbage looper Trichoplusia ni, High Five cells, are commonly used for expression of recombinant proteins using baculovirus. ### Bacterial and yeast culture methods For bacteria and yeasts, small quantities of cells are usually grown on a solid support that contains nutrients embedded in it, usually a gel such as agar, while large-scale cultures are grown with the cells suspended in a nutrient broth. ### Viral culture methods The culture of viruses requires the culture of cells of mammalian, plant, fungal or bacterial origin as hosts for the growth and replication of the virus. Whole wild type viruses, recombinant viruses or viral products may be generated in cell types other than their natural hosts under the right conditions. Depending on the species of the virus, infection and viral replication may result in host cell lysis and formation of a viral plaque.
https://en.wikipedia.org/wiki/Cell_culture
passage: This later influenced hacker culture and technopaganism. ### Technological utopianism Technological utopianism refers to the belief that technological development is a moral good, which can and should bring about a utopia, that is, a society in which laws, governments, and social conditions serve the needs of all its citizens. Examples of techno-utopian goals include post-scarcity economics, life extension, mind uploading, cryonics, and the creation of artificial superintelligence. Major techno-utopian movements include transhumanism and singularitarianism. The transhumanism movement is founded upon the "continued evolution of human life beyond its current human form" through science and technology, informed by "life-promoting principles and values." The movement gained wider popularity in the early 21st century. Singularitarians believe that machine superintelligence will "accelerate technological progress" by orders of magnitude and "create even more intelligent entities ever faster", which may lead to a pace of societal and technological change that is "incomprehensible" to us. This event horizon is known as the technological singularity. Major figures of techno-utopianism include Ray Kurzweil and Nick Bostrom. Techno-utopianism has attracted both praise and criticism from progressive, religious, and conservative thinkers. ### Anti-technology backlash Technology's central role in our lives has drawn concerns and backlash. The backlash against technology is not a uniform movement and encompasses many heterogeneous ideologies. The earliest known revolt against technology was Luddism, a pushback against early automation in textile production.
https://en.wikipedia.org/wiki/Technology
passage: In the mathematical discipline of graph theory, a feedback vertex set (FVS) of a graph is a set of vertices whose removal leaves a graph without cycles ("removal" means deleting the vertex and all edges adjacent to it). Equivalently, each FVS contains at least one vertex of any cycle in the graph. The feedback vertex set number of a graph is the size of a smallest FVS. Whether there exists a feedback vertex set of size at most k is an NP-complete problem; it was among the first problems shown to be NP-complete. It has wide applications in operating systems, database systems, and VLSI chip design. ## Definition The FVS decision problem is as follows: INSTANCE: An (undirected or directed) graph $$ G = (V, E) $$ and a positive integer $$ k $$ . QUESTION: Is there a subset $$ X \subseteq V $$ with $$ |X| \leq k $$ such that, when all vertices of $$ X $$ and their adjacent edges are deleted from $$ G $$ , the remainder is cycle-free? The graph $$ G[V \setminus X] $$ that remains after removing $$ X $$ from $$ G $$ is an induced forest (resp. an induced directed acyclic graph in the case of directed graphs). Thus, finding a minimum FVS in a graph is equivalent to finding a maximum induced forest (resp.
https://en.wikipedia.org/wiki/Feedback_vertex_set
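To make the feedback vertex set decision problem above concrete, here is a small brute-force sketch for undirected graphs; it is exponential in k and only suitable for tiny instances, and the example graph and helper names are illustrative assumptions. It tries every vertex subset of size at most k and checks whether the remaining induced graph is a forest.

```python
from itertools import combinations

def is_forest(vertices, edges):
    """An undirected graph is a forest iff union-find never joins two
    already-connected endpoints (i.e. no edge closes a cycle)."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    for u, w in edges:
        ru, rw = find(u), find(w)
        if ru == rw:
            return False                    # this edge closes a cycle
        parent[ru] = rw
    return True

def has_feedback_vertex_set(vertices, edges, k):
    """Brute force: is there X with |X| <= k whose removal leaves a forest?"""
    for size in range(k + 1):
        for X in combinations(vertices, size):
            Xs = set(X)
            rest_v = [v for v in vertices if v not in Xs]
            rest_e = [(u, w) for (u, w) in edges if u not in Xs and w not in Xs]
            if is_forest(rest_v, rest_e):
                return True
    return False

# A triangle plus a pendant edge: removing any one triangle vertex breaks the only cycle.
V = [1, 2, 3, 4]
E = [(1, 2), (2, 3), (3, 1), (3, 4)]
print(has_feedback_vertex_set(V, E, 0))   # False: the triangle is a cycle
print(has_feedback_vertex_set(V, E, 1))   # True
```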
passage: If $$ p $$ is a polynomial function of degree $$ <n $$ , then $$ p[x_0, \dots, x_n] = 0. $$ - Mean value theorem for divided differences: if $$ f $$ is n times differentiable, then $$ f[x_0,\dots,x_n] = \frac{f^{(n)}(\xi)}{n!} $$ for a number $$ \xi $$ in the open interval determined by the smallest and largest of the $$ x_k $$ 's.
https://en.wikipedia.org/wiki/Divided_differences
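The properties above are easy to check numerically. The sketch below computes divided differences with the standard recursion $$ f[x_i,\dots,x_j] = \frac{f[x_{i+1},\dots,x_j] - f[x_i,\dots,x_{j-1}]}{x_j - x_i} $$ ; the test polynomial and the nodes are arbitrary illustrative choices.

```python
def divided_difference(xs, fs):
    """Return f[x_0, ..., x_n] for distinct nodes xs and values fs = f(xs)."""
    table = list(fs)
    n = len(xs)
    for level in range(1, n):
        # After this pass, table[i] holds f[x_i, ..., x_{i+level}].
        for i in range(n - level):
            table[i] = (table[i + 1] - table[i]) / (xs[i + level] - xs[i])
    return table[0]

# Check both properties for p(x) = x^2 (degree 2):
#  - over 4 nodes, p[x_0, ..., x_3] = 0 since the degree is < 3;
#  - over 3 nodes, p[x_0, x_1, x_2] = 1 = p''(xi)/2!, as the mean value theorem predicts.
p = lambda x: x * x
xs3 = [0.0, 1.0, 3.0]
xs4 = [0.0, 1.0, 3.0, 4.5]
print(divided_difference(xs3, [p(x) for x in xs3]))   # 1.0
print(divided_difference(xs4, [p(x) for x in xs4]))   # 0.0
```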
passage: × 1401943 × 1412753 × 1428127 × 1984327 × 2556331 × 5112661 × 5714803 × 7450297 × 8334721 × 10715147 × 14091139 × 14092193 × 18739907 × 19270249 × 29866451 × 96656723 × 133338869 × 193707721 × 283763713 × 407865361 × 700116563 × 795217607 × 3035864933 × 3336809191 × 35061928679 × 143881112839 × 161969595577 × 287762225677 × 761838257287 × 840139875599 × 2031161085853 × 2454335007529 × 2765759031089 × 31280679788951 × 75364676329903 × 901563572369231 × 2169378653672701 × 4764764439424783 × 70321958644800017 × 79787519018560501 × 702022478271339803 × 1839633098314450447 × 165301473942399079669 × 604088623657497125653141 × 160014034995323841360748039 × 25922273669242462300441182317 × 15428152323948966909689390436420781 × 420391294797275951862132367930818883361 × 23735410086474640244277823338130677687887 × 628683935022908831926019116410056880219316806841500141982334538232031397827230330241 George Woltman, 2001
https://en.wikipedia.org/wiki/Multiply_perfect_number
passage: The self-intersecting disk is homeomorphic to an ordinary disk. The parametric equations of the self-intersecting disk are: $$ \begin{align} X(u, v) &= r \, v \, \cos 2u, \\ Y(u, v) &= r \, v \, \sin 2u, \\ Z(u, v) &= r \, v \, \cos u, \end{align} $$ where u ranges from 0 to 2π and v ranges from 0 to 1. Projecting the self-intersecting disk onto the plane of symmetry (z = 0 in the parametrization given earlier) which passes only through the double points, the result is an ordinary disk which repeats itself (doubles up on itself). The plane z = 0 cuts the self-intersecting disk into a pair of disks which are mirror reflections of each other. The disks have centers at the origin. Now consider the rims of the disks (with v = 1). The points on the rim of the self-intersecting disk come in pairs which are reflections of each other with respect to the plane z = 0. A cross-capped disk is formed by identifying these pairs of points, making them equivalent to each other.
https://en.wikipedia.org/wiki/Real_projective_plane
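The parametrization above is easy to evaluate numerically. The short sketch below samples the three coordinate functions and checks the pairing of rim points that are mirror images in the plane z = 0 (the particular parameter value is an arbitrary choice for illustration).

```python
import numpy as np

def self_intersecting_disk(u, v, r=1.0):
    """X, Y, Z of the self-intersecting disk for u in [0, 2*pi), v in [0, 1]."""
    return (r * v * np.cos(2 * u), r * v * np.sin(2 * u), r * v * np.cos(u))

# Rim points (v = 1) come in pairs u and u + pi: same (X, Y) but opposite Z,
# i.e. reflections of each other with respect to the plane z = 0.
u = 0.7
p1 = self_intersecting_disk(u, 1.0)
p2 = self_intersecting_disk(u + np.pi, 1.0)
print(np.allclose(p1[:2], p2[:2]), np.isclose(p1[2], -p2[2]))   # True True
```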
passage: This field remains an active area of research and standardization, aiming to future-proof critical infrastructure against quantum-enabled threats. Ongoing research in quantum and post-quantum cryptography will be critical for maintaining the integrity of digital infrastructure. Advances such as new QKD protocols, improved QRNGs, and the international standardization of quantum-resistant algorithms will play a key role in ensuring the security of communication and data in the emerging quantum era. Quantum computing also presents broader systemic and geopolitical risks. These include the potential to break current encryption protocols, disrupt financial systems, and accelerate the development of dual-use technologies such as advanced military systems or engineered pathogens. As a result, nations and corporations are actively investing in post-quantum safeguards, and the race for quantum supremacy is increasingly shaping global power dynamics. ## Communication Quantum cryptography enables new ways to transmit data securely; for example, quantum key distribution uses entangled quantum states to establish secure cryptographic keys. When a sender and receiver exchange quantum states, they can guarantee that an adversary does not intercept the message, as any unauthorized eavesdropper would disturb the delicate quantum system and introduce a detectable change. With appropriate cryptographic protocols, the sender and receiver can thus establish shared private information resistant to eavesdropping. Modern fiber-optic cables can transmit quantum information over relatively short distances.
https://en.wikipedia.org/wiki/Quantum_computing
passage: The most basic types of metazoan tissues are epithelium and connective tissue, both of which are present in nearly all invertebrates. The outer surface of the epidermis is normally formed of epithelial cells and secretes an extracellular matrix which provides support to the organism. An endoskeleton derived from the mesoderm is present in echinoderms, sponges and some cephalopods. Exoskeletons are derived from the epidermis and are composed of chitin in arthropods (insects, spiders, ticks, shrimps, crabs, lobsters). Calcium carbonate constitutes the shells of molluscs, brachiopods and some tube-building polychaete worms, while silica forms the exoskeleton of the microscopic diatoms and radiolaria. Other invertebrates may have no rigid structures, but the epidermis may secrete a variety of surface coatings such as the pinacoderm of sponges, the gelatinous cuticle of cnidarians (polyps, sea anemones, jellyfish) and the collagenous cuticle of annelids. The outer epithelial layer may include cells of several types including sensory cells, gland cells and stinging cells. There may also be protrusions such as microvilli, cilia, bristles, spines and tubercles. Marcello Malpighi, the father of microscopical anatomy, discovered that plants had tubules similar to those he saw in insects like the silk worm.
https://en.wikipedia.org/wiki/Anatomy
passage: We include the empty set in both the labelled and the unlabelled case. The unlabelled case is done using the function $$ M(f(z), y) = \sum_{n\ge 0} y^n Z(E_n)(f(z), f(z^2), \ldots, f(z^n)) $$ so that $$ \mathfrak{M}(f(z)) = M(f(z), 1). $$ Evaluating $$ M(f(z), 1) $$ we obtain $$ F(z) = \exp \left( \sum_{\ell\ge 1} \frac{f(z^\ell)}{\ell} \right). $$ For the labelled case we have $$ G(z) = 1 + \sum_{n\ge 1} \left(\frac{1}{|S_n|}\right) g(z)^n = \sum_{n\ge 0} \frac{g(z)^n}{n!} = \exp g(z). $$ In the labelled case we denote the operator by , and in the unlabelled case, by . This is because in the labeled case there are no multisets (the labels distinguish the constituents of a compound combinatorial class) whereas in the unlabeled case there are multisets and sets, with the latter being given by $$ F(z) = \exp \left( \sum_{\ell\ge 1} (-1)^{\ell-1} \frac{f(z^\ell)} \ell \right). $$
https://en.wikipedia.org/wiki/Symbolic_method_%28combinatorics%29
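As a sanity check of the unlabelled multiset formula $$ F(z) = \exp\left( \sum_{\ell\ge 1} \frac{f(z^\ell)}{\ell} \right) $$ above, the sketch below uses sympy with $$ f(z) = z/(1-z) $$ , i.e. a class with exactly one object of each positive size, so that multisets correspond to integer partitions; the choice of f and the truncation order are assumptions of this example.

```python
import sympy as sp

z = sp.symbols('z')
N = 10                                     # truncation order (arbitrary)

f = lambda w: w / (1 - w)                  # one object of each size 1, 2, 3, ...

# Unlabelled multiset construction: F(z) = exp( sum_{l >= 1} f(z^l) / l ).
# Terms with l >= N only contribute at order >= N, so truncating the sum is safe.
F = sp.exp(sum(f(z**l) / l for l in range(1, N)))
coeffs = sp.Poly(sp.series(F, z, 0, N).removeO(), z).all_coeffs()[::-1]
print(coeffs)   # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30] -- the partition numbers p(0)..p(9)
```

The output matches the expansion of the product form of the partition generating function, which is what the multiset operator should produce here.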
passage: Similarly, algorithms can solve the NP-complete knapsack problem over a wide range of sizes in less than quadratic time and SAT solvers routinely handle large instances of the NP-complete Boolean satisfiability problem. To see why exponential-time algorithms are generally unusable in practice, consider a program that makes $$ 2^n $$ operations before halting. For small $$ n $$ , say 100, and assuming for the sake of example that the computer does $$ 10^{12} $$ operations each second, the program would run for about $$ 4 \times 10^{10} $$ years, which is the same order of magnitude as the age of the universe. Even with a much faster computer, the program would only be useful for very small instances and in that sense the intractability of a problem is somewhat independent of technological progress. However, an exponential-time algorithm that takes $$ 1.0001^n $$ operations is practical until $$ n $$ gets relatively large. Similarly, a polynomial time algorithm is not always practical. If its running time is, say, $$ n^{15} $$ , it is unreasonable to consider it efficient and it is still useless except on small instances. Indeed, in practice even $$ n^3 $$ or $$ n^2 $$ algorithms are often impractical on realistic sizes of problems. ## Continuous complexity theory Continuous complexity theory can refer to complexity theory of problems that involve continuous functions that are approximated by discretizations, as studied in numerical analysis.
https://en.wikipedia.org/wiki/Computational_complexity_theory
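The back-of-the-envelope figure in the passage above is easy to reproduce; the sketch below redoes the arithmetic, using roughly 3.15 × 10^7 seconds per year as a conversion factor (an assumption of this example, not stated in the passage).

```python
operations = 2 ** 100                 # work done by the exponential-time program
ops_per_second = 10 ** 12             # machine speed assumed in the passage
seconds_per_year = 3.15e7             # rough conversion factor (assumption)

years = operations / ops_per_second / seconds_per_year
print(f"{years:.1e}")                 # about 4.0e+10 years, matching the passage
```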
passage: Say that the natural number $$ t $$ is the index of the generation ($$ t = 0 $$ for the first generation, $$ t = 1 $$ for the second generation, etc.). The letter $$ t $$ is used because the index of a generation is time. Say $$ N_t $$ denotes, at generation $$ t $$ , the number of individuals of the population that will reproduce, i.e. the population size at generation $$ t $$ . The population at the next generation, which is the population at time $$ t + 1 $$ , is: $$ N_{t+1} = N_t + B_t - D_t + I_t - E_t $$ where - $$ B_t $$ is the number of births in the population between generations $$ t $$ and $$ t + 1 $$ , - $$ D_t $$ is the number of deaths between generations $$ t $$ and $$ t + 1 $$ , - $$ I_t $$ is the number of immigrants added to the population between generations $$ t $$ and $$ t + 1 $$ , and - $$ E_t $$ is the number of emigrants moving out of the population between generations $$ t $$ and $$ t + 1 $$ . For the sake of simplicity, we suppose there is no migration to or from the population, but the following method can be applied without this assumption. Mathematically, it means that for all $$ t $$ , $$ I_t = E_t = 0 $$ . The previous equation becomes: $$ N_{t+1} = N_t + B_t - D_t. $$ In general, the number of births and the number of deaths are approximately proportional to the population size. This remark motivates the following definitions. - The birth rate at time $$ t $$ is defined by $$ B_t / N_t $$ . - The death rate at time $$ t $$ is defined by $$ D_t / N_t $$ .
https://en.wikipedia.org/wiki/Population_dynamics
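A minimal simulation of the recurrence $$ N_{t+1} = N_t + B_t - D_t $$ with births and deaths proportional to population size might look like the following; the per-capita rates and the starting population are illustrative assumptions, not values from the passage.

```python
def simulate(n0, birth_rate, death_rate, generations):
    """Iterate N_{t+1} = N_t + B_t - D_t with B_t and D_t proportional to N_t."""
    history = [n0]
    for _ in range(generations):
        n = history[-1]
        births = birth_rate * n        # B_t, proportional to population size
        deaths = death_rate * n        # D_t, proportional to population size
        history.append(n + births - deaths)
    return history

# With birth rate 0.2 and death rate 0.1 per generation the population grows
# geometrically by a factor (1 + 0.2 - 0.1) = 1.1 each generation.
print([round(x, 1) for x in simulate(1000, 0.2, 0.1, 5)])
# approximately [1000, 1100.0, 1210.0, 1331.0, 1464.1, 1610.5]
```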
passage: In computational complexity theory, randomized polynomial time (RP) is the complexity class of problems for which a probabilistic Turing machine exists with these properties:
- It always runs in polynomial time in the input size.
- If the correct answer is NO, it always returns NO.
- If the correct answer is YES, then it returns YES with probability at least 1/2 (otherwise, it returns NO).

| Algorithm | Returns YES (correct answer YES) | Returns NO (correct answer YES) | Returns YES (correct answer NO) | Returns NO (correct answer NO) |
|---|---|---|---|---|
| RP algorithm (1 run) | ≥ 1/2 | ≤ 1/2 | 0 | 1 |
| RP algorithm (n runs) | ≥ $$ 1 - 2^{-n} $$ | ≤ $$ 2^{-n} $$ | 0 | 1 |
| co-RP algorithm (1 run) | 1 | 0 | ≤ 1/2 | ≥ 1/2 |

In other words, the algorithm is allowed to flip a truly random coin while it is running. The only case in which the algorithm can return YES is if the actual answer is YES; therefore if the algorithm terminates and produces YES, then the correct answer is definitely YES; however, the algorithm can terminate with NO regardless of the actual answer. That is, if the algorithm returns NO, it might be wrong. Some authors call this class R, although this name is more commonly used for the class of recursive languages. If the correct answer is YES and the algorithm is run n times with the result of each run statistically independent of the others, then it will return YES at least once with probability at least $$ 1 - 2^{-n} $$ . So if the algorithm is run 100 times, then the chance of it giving the wrong answer every time is lower than the chance that cosmic rays corrupted the memory of the computer running the algorithm. In this sense, if a source of random numbers is available, most algorithms in RP are highly practical. The fraction 1/2 in the definition is arbitrary.
https://en.wikipedia.org/wiki/RP_%28complexity%29
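The amplification argument above (independent repetitions drive the one-sided error below $$ 2^{-n} $$ ) can be checked empirically. The "algorithm" in the sketch below is a stand-in that says YES with probability exactly 1/2 on YES-instances and never on NO-instances; that stand-in, the number of runs, and the trial count are assumptions of this example, not a real RP algorithm.

```python
import random

def rp_single_run(correct_answer_is_yes):
    """One run of the stand-in: never YES on NO-instances,
    YES with probability 1/2 on YES-instances."""
    return correct_answer_is_yes and random.random() < 0.5

def rp_amplified(correct_answer_is_yes, runs):
    """Return YES if any independent run says YES; error shrinks to <= 2^-runs."""
    return any(rp_single_run(correct_answer_is_yes) for _ in range(runs))

random.seed(0)
trials = 100_000
misses = sum(not rp_amplified(True, 5) for _ in range(trials))
print(misses / trials, "vs bound", 2 ** -5)          # empirical ~0.031, bound 0.03125
print(any(rp_amplified(False, 5) for _ in range(1000)))   # False: never YES on a NO-instance
```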