passage: The Bellman–Ford algorithm is an algorithm that computes shortest paths from a single source vertex to all of the other vertices in a weighted digraph. It is slower than Dijkstra's algorithm for the same problem, but more versatile, as it is capable of handling graphs in which some of the edge weights are negative numbers. The algorithm was first proposed by Alfonso Shimbel in 1955, but is instead named after Richard Bellman and Lester Ford Jr., who published it in 1958 and 1956, respectively. Edward F. Moore also published a variation of the algorithm in 1959, and for this reason it is also sometimes called the Bellman–Ford–Moore algorithm. Negative edge weights are found in various applications of graphs, which is why this algorithm is useful. If a graph contains a "negative cycle" (i.e. a cycle whose edges sum to a negative value) that is reachable from the source, then there is no cheapest path: any path that has a point on the negative cycle can be made cheaper by one more walk around the negative cycle. In such a case, the Bellman–Ford algorithm can detect and report the negative cycle. ## Algorithm Like Dijkstra's algorithm, Bellman–Ford proceeds by relaxation, in which approximations to the correct distance are replaced by better ones until they eventually reach the solution. In both algorithms, the approximate distance to each vertex is always an overestimate of the true distance, and is replaced by the minimum of its old value and the length of a newly found path.
https://en.wikipedia.org/wiki/Bellman%E2%80%93Ford_algorithm
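To make the relaxation step described in the Bellman–Ford passage above concrete, here is a minimal Python sketch; the edge-list representation (tuples of (u, v, weight)), the function name, and the early-exit check are illustrative choices, not taken from the passage.

```python
def bellman_ford(num_vertices, edges, source):
    """Single-source shortest paths with negative-cycle detection.

    edges: iterable of (u, v, weight) tuples with 0 <= u, v < num_vertices.
    Returns (dist, has_negative_cycle).
    """
    INF = float("inf")
    dist = [INF] * num_vertices
    dist[source] = 0

    # Relax every edge |V| - 1 times; each pass can only improve the estimates.
    for _ in range(num_vertices - 1):
        updated = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                updated = True
        if not updated:          # early exit: all estimates are already optimal
            break

    # One extra pass: any further improvement implies a reachable negative cycle.
    has_negative_cycle = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, has_negative_cycle


# Example: a 4-vertex digraph with one negative edge but no negative cycle.
edges = [(0, 1, 4), (0, 2, 5), (1, 2, -3), (2, 3, 2)]
print(bellman_ford(4, edges, 0))   # ([0, 4, 1, 3], False)
```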
passage: Roosevelt quickly understood the implications, stating, "Alex, what you are after is to see that the Nazis don't blow us up." Roosevelt ordered the formation of the Advisory Committee on Uranium. In February 1940, encouraged by Fermi and John R. Dunning, Alfred O. C. Nier was able to separate U-235 and U-238 from uranium tetrachloride in a glass mass spectrometer. Subsequently, Dunning, bombarding the U-235 sample with neutrons generated by the Columbia University cyclotron, confirmed "U-235 was responsible for the slow neutron fission of uranium." At the University of Birmingham, Frisch teamed up with Peierls, who had been working on a critical mass formula. Assuming isotope separation was possible, they considered 235U, which had a cross section not yet determined, but which was assumed to be much larger than that of natural uranium. They calculated that only a pound or two, in a volume smaller than a golf ball, would result in a chain reaction faster than vaporization, and the resultant explosion would generate temperatures greater than the interior of the sun, and pressures greater than the center of the earth. Additionally, the costs of isotope separation "would be insignificant compared to the cost of the war." By March 1940, encouraged by Mark Oliphant, they wrote the Frisch–Peierls memorandum in two parts, "On the construction of a 'super-bomb' based on a nuclear chain reaction in uranium" and "Memorandum on the properties of a radioactive 'super-bomb'".
https://en.wikipedia.org/wiki/Nuclear_fission
passage: This happens already for quadratic integers, for example in $$ \mathbf{Z}[\sqrt{-5}] $$ the uniqueness of the factorization fails: $$ 6 = 2 \cdot 3 = (1 + \sqrt{-5}) \cdot (1 - \sqrt{-5}) $$ Using the norm it can be shown that these two factorizations are actually inequivalent in the sense that the factors do not just differ by a unit. Euclidean domains are unique factorization domains: for example the ring of Gaussian integers, and the ring of Eisenstein integers, where $$ \omega $$ is a cube root of unity (unequal to 1), have this property. ### Analytic objects: ζ-functions, L-functions, and class number formula The failure of unique factorization is measured by the class number, commonly denoted h, the cardinality of the so-called ideal class group. This group is always finite. The ring of integers $$ \mathcal{O}_K $$ possesses unique factorization if and only if it is a principal ideal domain or, equivalently, if $$ K $$ has class number 1. Given a number field, the class number is often difficult to compute. The class number problem, going back to Gauss, is concerned with the existence of imaginary quadratic number fields (i.e., $$ \mathbf{Q}(\sqrt{-d}), d \ge 1 $$ ) with prescribed class number.
https://en.wikipedia.org/wiki/Algebraic_number_field
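As a worked check of the inequivalence claim in the passage above (a standard computation not spelled out in the passage), one can compare norms in $$ \mathbf{Z}[\sqrt{-5}] $$, using that $$ N(a + b\sqrt{-5}) = a^2 + 5b^2 $$ is multiplicative:

$$ N(2) = 4, \qquad N(3) = 9, \qquad N(1 \pm \sqrt{-5}) = 1 + 5 = 6. $$

No element has norm 2 or 3 (the equation $$ a^2 + 5b^2 \in \{2, 3\} $$ has no integer solutions), so all four factors are irreducible; and since units have norm 1, a factor of norm 4 can never equal a unit times a factor of norm 6. Hence the two factorizations of 6 do not differ merely by units.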
passage: A range of third-party manufacturers also exist to provide equipment and gear for consoles post-sale, such as additional controllers for console or carrying cases and gear for handheld devices. - Journalism: While journalism around video games used to be primarily print-based, and focused more on post-release reviews and gameplay strategy, the Internet has brought a more proactive press that uses web journalism, covering games in the months prior to release as well as beyond, helping to build excitement for games ahead of release. - Influencers: With the rising importance of social media, video game companies have found that the opinions of influencers using streaming media to play through their games have had a significant impact on game sales, and have turned to use influencers alongside traditional journalism as a means to build up attention to their game before release. - Esports: Esports is a major function of several multiplayer games with numerous professional leagues established since the 2000s, with large viewership numbers, particularly out of southeast Asia since the 2010s. - Trade and advocacy groups: Trade groups like the Entertainment Software Association were established to provide a common voice for the industry in response to governmental and other advocacy concerns. They frequently set up the major trade events and conventions for the industry such as E3. - Gamers: Proactive hobbyists who are players and consumers of video games. While their representation in the industry is primarily seen through game sales, many companies follow gamers' comments on social media or on user reviews and engage with them to work to improve their products in addition to other feedback from other parts of the industry.
https://en.wikipedia.org/wiki/Video_game
passage: Space discretization is the same as above. We recall that the interval $$ (a,b) $$ is partitioned into $$ N+1 $$ intervals of length $$ h $$ . We introduce jump $$ [{}\cdot{}] $$ and average $$ \{{}\cdot{}\} $$ of functions at the node $$ x_k $$ : $$ [v]\Big|_{x_k} = v(x_k^+)-v(x_k^-), \quad \{v\}\Big|_{x_k} = 0.5 (v(x_k^+)+v(x_k^-)) $$ The interior penalty discontinuous Galerkin (IPDG) method is: find $$ u_h $$ satisfying $$ A(u_h,v_h) + A_{\partial}(u_h,v_h) = \ell(v_h) + \ell_\partial(v_h) $$ where the bilinear forms $$ A $$ and $$ A_\partial $$ are $$ A(u_h,v_h) = \sum_{k=1}^{N+1} \int_{x_{k-1}}^{x_k}\partial_x u_h \partial_x v_h -\sum_{k=1}^N \{ \partial_x u_h\}_{x_k} [v_h]_{x_k}
https://en.wikipedia.org/wiki/Discontinuous_Galerkin_method
passage: This can be compared with approximately 1.618, exactly the golden ratio, for the secant method and with exactly 2 for Newton's method. So, the secant method makes less progress per iteration than Muller's method and Newton's method makes more progress. More precisely, if ξ denotes a single root of f (so f(ξ) = 0 and f'(ξ) ≠ 0), f is three times continuously differentiable, and the initial guesses $$ x_0 $$, $$ x_1 $$, and $$ x_2 $$ are taken sufficiently close to ξ, then the iterates satisfy $$ \lim_{k\to\infty} \frac{|x_k-\xi|}{|x_{k-1}-\xi|^\mu} = \left| \frac{f'''(\xi)}{6f'(\xi)} \right|^{(\mu-1)/2}, $$ where μ ≈ 1.84 is the positive solution of $$ x^3 - x^2 - x - 1 = 0 $$ , the defining equation for the tribonacci constant. ## Generalizations and related methods Muller's method fits a parabola, i.e. a second-order polynomial, to the last three obtained points $$ f(x_{k-1}) $$, $$ f(x_{k-2}) $$ and $$ f(x_{k-3}) $$ in each iteration. One can generalize this and fit a polynomial $$ p_{k,m}(x) $$ of degree m to the last m+1 points in the kth iteration. Our parabola $$ y_k $$ is written as $$ p_{k,2} $$ in this notation. The degree m must be 1 or larger.
https://en.wikipedia.org/wiki/Muller%27s_method
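A minimal Python sketch of the Muller iteration referred to in the passage above: a parabola is fitted through the last three iterates and the parabola root closer to the newest point is taken as the next iterate. The tolerance, iteration cap, and test function are illustrative assumptions.

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-12, max_iter=50):
    """Find a root of f by Muller's method, starting from three guesses."""
    for _ in range(max_iter):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        h1, h2 = x1 - x0, x2 - x1
        d1, d2 = (f1 - f0) / h1, (f2 - f1) / h2
        a = (d2 - d1) / (h2 + h1)           # curvature of the interpolating parabola
        b = a * h2 + d2
        c = f2
        disc = cmath.sqrt(b * b - 4 * a * c)
        # Choose the sign that maximizes |denominator|, i.e. the root nearest x2.
        denom = b + disc if abs(b + disc) > abs(b - disc) else b - disc
        dx = -2 * c / denom
        x0, x1, x2 = x1, x2, x2 + dx
        if abs(dx) < tol:
            return x2
    return x2

# Example: the real root of x^3 - x - 2 (about 1.5214); returned as a complex
# number with negligible imaginary part because complex arithmetic is used.
print(muller(lambda x: x**3 - x - 2, 0.5, 1.0, 2.0))
```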
passage: Some conifers also provide foods such as pine nuts and juniper berries, the latter used to flavor gin. ## External links - Conifers at the Tree of Life Web Project - 300 million-year-old conifer in Illinois – 4/2007 - World list of conifer species from Conifer Database by A. Farjon in the Catalogue of Life - Tree browser for conifer families and genera via the Catalogue of Life - Royal Horticultural Society Encyclopedia of Conifers: A Comprehensive Guide to Cultivars and Species - DendroPress: Conifers Around the World.
https://en.wikipedia.org/wiki/Conifer
passage: ## Properties In the following, let M be a G-module. ### Long exact sequence of cohomology In practice, one often computes the cohomology groups using the following fact: if $$ 0 \to L \to M \to N \to 0 $$ is a short exact sequence of G-modules, then a long exact sequence is induced: $$ 0\longrightarrow L^G \longrightarrow M^G \longrightarrow N^G \overset{\delta^0}{\longrightarrow} H^1(G,L) \longrightarrow H^1(G,M) \longrightarrow H^1(G,N) \overset{\delta^1}{\longrightarrow} H^2(G,L)\longrightarrow \cdots $$ The so-called connecting homomorphisms, $$ \delta^n : H^n (G,N) \to H^{n+1}(G, L) $$ can be described in terms of inhomogeneous cochains as follows. If $$ c \in H^n(G, N) $$ is represented by an n-cocycle $$ \phi: G^n \to N, $$ then $$ \delta^n(c) $$ is represented by $$ d^n(\psi), $$ where $$ \psi $$ is an n-cochain $$ G^n \to M $$ "lifting" $$ \phi $$ (i.e. $$ \phi $$ is the composition of $$ \psi $$ with the surjective map M → N).
https://en.wikipedia.org/wiki/Group_cohomology
passage: ### Commercial software There are several commercial topology optimization software packages on the market. Most of them use topology optimization as a hint of how the optimal design should look, and manual geometry re-construction is required. There are a few solutions which produce optimal designs ready for Additive Manufacturing. ## Examples ### Structural compliance A stiff structure is one that has the least possible displacement when given a certain set of boundary conditions. A global measure of the displacements is the strain energy (also called compliance) of the structure under the prescribed boundary conditions. The lower the strain energy, the higher the stiffness of the structure. So, the objective function of the problem is to minimize the strain energy. On a broad level, one can visualize that the more the material, the less the deflection, as there will be more material to resist the loads. So, the optimization requires an opposing constraint, the volume constraint. This is in reality a cost factor, as we would not want to spend a lot of money on the material. To obtain the total material utilized, an integration of the selection field over the volume can be done. Finally, the governing differential equations of elasticity are plugged in so as to get the final problem statement.
https://en.wikipedia.org/wiki/Topology_optimization
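The compliance-minimization problem sketched in the passage above can be written compactly in the standard density-based form; the symbols ρ (selection/density field), K, U, F, the volume fraction f, and the bounds are conventional notation introduced here for illustration, not notation from the passage:

$$ \begin{aligned} \min_{\rho}\quad & c(\rho) = \mathbf{U}^{\mathsf T}\mathbf{K}(\rho)\,\mathbf{U} && \text{(compliance / strain energy)}\\ \text{subject to}\quad & \mathbf{K}(\rho)\,\mathbf{U} = \mathbf{F} && \text{(elasticity equilibrium equations)}\\ & \int_\Omega \rho \,\mathrm{d}V \le f\,V_0 && \text{(volume constraint)}\\ & 0 < \rho_{\min} \le \rho \le 1 && \text{(bounds on the selection field)} \end{aligned} $$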
passage: The vector equations are almost identical to the scalar equations (see multiple dimensions). ### Single particle The momentum of a particle is conventionally represented by the letter p. It is the product of two quantities, the particle's mass (represented by the letter m) and its velocity (v): $$ p = m v. $$ The unit of momentum is the product of the units of mass and velocity. In SI units, if the mass is in kilograms and the velocity is in meters per second then the momentum is in kilogram meters per second (kg⋅m/s). In cgs units, if the mass is in grams and the velocity in centimeters per second, then the momentum is in gram centimeters per second (g⋅cm/s). Being a vector, momentum has magnitude and direction. For example, a 1 kg model airplane, traveling due north at 1 m/s in straight and level flight, has a momentum of 1 kg⋅m/s due north measured with reference to the ground. ### Many particles The momentum of a system of particles is the vector sum of their momenta.
https://en.wikipedia.org/wiki/Momentum
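A small Python illustration of the two statements above: p = m v for a single particle (the 1 kg model airplane) and the vector sum for a system of particles; the particle values in the second part are made up for the example.

```python
import numpy as np

# Single particle: the 1 kg model airplane flying due north at 1 m/s.
m, v = 1.0, np.array([0.0, 1.0])      # axes: [east, north], SI units
p = m * v
print(p)                               # [0. 1.]  -> 1 kg·m/s due north

# Many particles: total momentum is the vector sum of the individual momenta.
masses = np.array([1.0, 2.0, 0.5])                   # kg
velocities = np.array([[0.0, 1.0],                   # m/s
                       [3.0, 0.0],
                       [-2.0, -2.0]])
p_total = (masses[:, None] * velocities).sum(axis=0)
print(p_total)                         # [5. 0.]  kg·m/s
```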
passage: For very weak signals, a pre-amplifier is used, although harmonic and intermodulation distortion may lead to the creation of new frequency components that were not present in the original signal. ### FFT-based With an FFT-based spectrum analyzer, the frequency resolution is $$ \Delta\nu=1/T $$ , the inverse of the time T over which the waveform is measured and Fourier transformed. With Fourier transform analysis in a digital spectrum analyzer, it is necessary to sample the input signal with a sampling frequency $$ \nu_s $$ that is at least twice the bandwidth of the signal, due to the Nyquist limit. A Fourier transform will then produce a spectrum containing all frequencies from zero to $$ \nu_s/2 $$ . This can place considerable demands on the required analog-to-digital converter and processing power for the Fourier transform, making FFT-based spectrum analyzers limited in frequency range. ### Hybrid superheterodyne-FFT Since FFT-based analyzers are only capable of considering narrow bands, one technique is to combine swept and FFT analysis for consideration of wide and narrow spans. This technique allows for faster sweep time. This method is made possible by first down converting the signal, then digitizing the intermediate frequency and using superheterodyne or FFT techniques to acquire the spectrum. One benefit of digitizing the intermediate frequency is the ability to use digital filters, which have a range of advantages over analog filters such as near perfect shape factors and improved filter settling time.
https://en.wikipedia.org/wiki/Spectrum_analyzer
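A brief numpy sketch of the two facts quoted above: the frequency resolution is Δν = 1/T, and the spectrum runs from zero to half the sampling frequency. The test tone, sample rate, and measurement time are arbitrary illustrative choices.

```python
import numpy as np

fs = 1000.0            # sampling frequency in Hz (must exceed twice the signal bandwidth)
T = 2.0                # measurement time in seconds
N = int(fs * T)
t = np.arange(N) / fs

# A 123.5 Hz test tone; the FFT bins are spaced 1/T = 0.5 Hz apart.
x = np.sin(2 * np.pi * 123.5 * t)
spectrum = np.abs(np.fft.rfft(x)) / N
freqs = np.fft.rfftfreq(N, d=1.0 / fs)   # runs from 0 up to fs/2 (the Nyquist limit)

print(freqs[1] - freqs[0])               # 0.5 Hz  == 1/T, the frequency resolution
print(freqs[np.argmax(spectrum)])        # 123.5 Hz, the recovered tone
```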
passage: This is Thompson's group, the first known example of an infinite but finitely presented simple group. The same group can also be represented by an action on rooted binary trees, or by an action on the dyadic rationals within the unit interval. ### Other related constructions In reverse mathematics, one way of constructing the real numbers is to represent them as functions from unary numbers to dyadic rationals, where the value of one of these functions for the argument $$ i $$ is a dyadic rational with denominator $$ 2^i $$ that approximates the given real number. Defining real numbers in this way allows many of the basic results of mathematical analysis to be proven within a restricted theory of second-order arithmetic called "feasible analysis" (BTFA). The surreal numbers are generated by an iterated construction principle which starts by generating all finite dyadic rationals, and then goes on to create new and strange kinds of infinite, infinitesimal and other numbers. This number system is foundational to combinatorial game theory, and dyadic rationals arise naturally in this theory as the set of values of certain combinatorial games. The fusible numbers are a subset of the dyadic rationals, the closure of the set $$ \{0\} $$ under the operation $$ x,y\mapsto(x+y+1)/2 $$ , restricted to pairs $$ x,y $$ with $$ |x-y|<1 $$ . They are well-ordered, with order type equal to the epsilon number $$ \varepsilon_0 $$ .
https://en.wikipedia.org/wiki/Dyadic_rational
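A short Python sketch, using exact Fraction arithmetic, of two constructions mentioned in the passage above: a dyadic rational with denominator 2^i approximating a real number, and the fusing operation (x+y+1)/2 restricted to pairs with |x−y| < 1. The helper names are invented for the example.

```python
from fractions import Fraction
import math

def dyadic_approx(x, i):
    """A dyadic rational with denominator 2**i that approximates the real x."""
    return Fraction(math.floor(x * 2**i), 2**i)

print(dyadic_approx(math.pi, 4))    # 25/8
print(dyadic_approx(math.pi, 10))   # 201/64 (the fraction is reduced automatically)

def fuse(x, y):
    """The fusible-number operation, defined only when |x - y| < 1."""
    assert abs(x - y) < 1
    return (x + y + 1) / 2

zero = Fraction(0)
a = fuse(zero, zero)        # 1/2
b = fuse(zero, a)           # 3/4
c = fuse(a, b)              # 9/8
print(a, b, c)              # 1/2 3/4 9/8 -- all dyadic rationals
```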
passage: The ability of machine learning to infer missing data enables it to predict streamflow with both historical stream gauge data and real-time data. Streamflow Hydrology Estimate using Machine Learning (SHEM) is a model that can serve this purpose. To verify its accuracies, the prediction result was compared with the actual recorded data, and the accuracies were found to be between 0.78 and 0.99.

Example applications in streamflow discharge prediction:

| Objective | Input dataset | Location | Machine Learning Algorithms (MLAs) | Performance |
| --- | --- | --- | --- | --- |
| Streamflow estimate with data missing | Streamgage data from NWIS-Web | Four diverse watersheds in Idaho and Washington, US | Random Forests | The estimates correlated well to the historical discharge data; the accuracy ranges from 0.78 to 0.99. |

## Challenge ### Inadequate training data An adequate amount of training and validation data is required for machine learning. However, some very useful products like satellite remote sensing data only have decades of data, going back to the 1970s. If one is interested in yearly data, then fewer than 50 samples are available. Such an amount of data may not be adequate. In a study of automatic classification of geological structures, the weakness of the model was the small training dataset, even with the help of data augmentation to increase its size. Another study of predicting streamflow found that the accuracies depend on the availability of sufficient historical data, so sufficient training data determines the performance of machine learning. Inadequate training data may lead to a problem called overfitting.
https://en.wikipedia.org/wiki/Machine_learning_in_earth_sciences
passage: The formal solution of the eigenvalue equation is the vacuum state displaced to a location in phase space, i.e., it is obtained by letting the unitary displacement operator operate on the vacuum, $$ |\alpha\rangle=e^{\alpha \hat a^\dagger - \alpha^*\hat a}|0\rangle = D(\alpha)|0\rangle $$ , where and . This can be easily seen, as can virtually all results involving coherent states, using the representation of the coherent state in the basis of Fock states, $$ |\alpha\rangle =e^{-{|\alpha|^2\over2}}\sum_{n=0}^{\infty}{\alpha^n\over\sqrt{n!}}|n\rangle =e^{-{|\alpha|^2\over2}}e^{\alpha\hat a^\dagger}e^{-{\alpha^* \hat a}}|0\rangle =e^{\alpha \hat a^\dagger - \alpha^*\hat a}|0\rangle = D(\alpha)|0\rangle ~, $$ where $$ |n\rangle $$ are energy (number) eigenvectors of the Hamiltonian $$ H =\hbar \omega \left( \hat a^\dagger \hat a + \frac 12\right)~, $$ and the final equality derives from the Baker-Campbell-Hausdorff formula.
https://en.wikipedia.org/wiki/Coherent_state
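A quick numerical check, in Python with a truncated Fock basis, of the expansion quoted above: the amplitudes c_n = e^{-|α|²/2} α^n/√(n!) should be normalized and give mean photon number |α|². The value of α and the truncation level are arbitrary illustrative choices.

```python
import numpy as np
from math import factorial

alpha = 1.5 + 0.5j
N = 40                                  # Fock-space truncation (illustrative)

n = np.arange(N)
c = np.exp(-abs(alpha)**2 / 2) * alpha**n / np.sqrt([factorial(int(k)) for k in n])

print(np.sum(np.abs(c)**2))             # ~1.0: the coherent state is normalized
print(np.sum(n * np.abs(c)**2))         # ~|alpha|^2 = 2.5: mean photon number
print(abs(alpha)**2)
```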
passage: In an area of mathematics called differential topology, an exotic sphere is a differentiable manifold M that is homeomorphic but not diffeomorphic to the standard Euclidean n-sphere. That is, M is a sphere from the point of view of all its topological properties, but carrying a smooth structure that is not the familiar one (hence the name "exotic"). The first exotic spheres were constructed by John Milnor in dimension $$ n = 7 $$ as $$ S^3 $$ -bundles over $$ S^4 $$ . He showed that there are at least 7 differentiable structures on the 7-sphere. In any dimension, Michel Kervaire and John Milnor showed that the diffeomorphism classes of oriented exotic spheres form the non-trivial elements of an abelian monoid under connected sum, which is a finite abelian group if the dimension is not 4. The classification of exotic spheres by Kervaire and Milnor showed that the oriented exotic 7-spheres are the non-trivial elements of a cyclic group of order 28 under the operation of connected sum. These groups are known as Kervaire–Milnor groups. More generally, in any dimension n ≠ 4, there is a finite Abelian group whose elements are the equivalence classes of smooth structures on $$ S^n $$, where two structures are considered equivalent if there is an orientation-preserving diffeomorphism carrying one structure onto the other. The group operation is defined by [x] +
https://en.wikipedia.org/wiki/Exotic_sphere
passage: When the hash function is chosen randomly, the cuckoo graph is a random graph in the Erdős–Rényi model. With high probability, for load factor less than 1/2 (corresponding to a random graph in which the ratio of the number of edges to the number of vertices is bounded below 1/2), the graph is a pseudoforest and the cuckoo hashing algorithm succeeds in placing all keys. The same theory also proves that the expected size of a connected component of the cuckoo graph is small, ensuring that each insertion takes constant expected time. However, also with high probability, a load factor greater than 1/2 will lead to a giant component with two or more cycles, causing the data structure to fail and need to be resized. Since a theoretical random hash function requires too much space for practical usage, an important theoretical question is which practical hash functions suffice for Cuckoo hashing. One approach is to use k-independent hashing. In 2009 it was shown that $$ O(\log n) $$ -independence suffices, and at least 6-independence is needed. Another approach is to use tabulation hashing, which is not 6-independent, but was shown in 2012 to have other properties sufficient for Cuckoo hashing. A third approach from 2014 is to slightly modify the cuckoo hashtable with a so-called stash, which makes it possible to use nothing more than 2-independent hash functions. ## Practice In practice, cuckoo hashing is about 20–30% slower than linear probing, which is the fastest of the common approaches.
https://en.wikipedia.org/wiki/Cuckoo_hashing
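A compact Python sketch of the insertion procedure analyzed in the passage above: two tables, a kick-out loop, and a resize-and-rehash when too many kicks suggest a cycle. The hash functions and table size are simplistic stand-ins chosen for illustration, not the k-independent or tabulation schemes discussed in the passage.

```python
import random

class CuckooHash:
    def __init__(self, size=11):
        self.size = size
        self.tables = [[None] * size, [None] * size]
        self.seeds = [random.randrange(1 << 30) for _ in range(2)]

    def _h(self, i, key):
        return hash((self.seeds[i], key)) % self.size

    def insert(self, key, max_kicks=32):
        i = 0
        for _ in range(max_kicks):
            slot = self._h(i, key)
            if self.tables[i][slot] is None:
                self.tables[i][slot] = key
                return
            # Slot taken: place the key anyway and re-insert the evicted
            # occupant into the other table on the next round.
            key, self.tables[i][slot] = self.tables[i][slot], key
            i = 1 - i
        # Too many kicks: likely a cycle, so grow the tables and rehash.
        self._rehash()
        self.insert(key)

    def _rehash(self):
        old = [k for t in self.tables for k in t if k is not None]
        self.__init__(self.size * 2 + 1)
        for k in old:
            self.insert(k)

    def __contains__(self, key):
        return any(self.tables[i][self._h(i, key)] == key for i in (0, 1))

ch = CuckooHash()
for k in range(20):
    ch.insert(k)
print(all(k in ch for k in range(20)))   # True
```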
passage: For strongly correlated Markov processes, the DCT can approach the compaction efficiency of the Karhunen-Loève transform (which is optimal in the decorrelation sense). As explained below, this stems from the boundary conditions implicit in the cosine functions. DCTs are widely employed in solving partial differential equations by spectral methods, where the different variants of the DCT correspond to slightly different even and odd boundary conditions at the two ends of the array. DCTs are closely related to Chebyshev polynomials, and fast DCT algorithms (below) are used in Chebyshev approximation of arbitrary functions by series of Chebyshev polynomials, for example in Clenshaw–Curtis quadrature. ### General applications The DCT is widely used in many applications, which include the following. ### Visual media standards The DCT-II is an important image compression technique. It is used in image compression standards such as JPEG, and video compression standards such as , MJPEG, MPEG, DV, Theora and Daala. There, the two-dimensional DCT-II of $$ N \times N $$ blocks are computed and the results are quantized and entropy coded. In this case, $$ N $$ is typically 8 and the DCT-II formula is applied to each row and column of the block.
https://en.wikipedia.org/wiki/Discrete_cosine_transform
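A minimal Python illustration of the step described above, i.e. taking the two-dimensional DCT-II of an 8×8 block by applying the transform along rows and columns (via scipy's dct). The sample block is random, and the quantization and entropy-coding stages mentioned in the passage are omitted.

```python
import numpy as np
from scipy.fft import dct, idct

def dct2(block):
    """2-D DCT-II, applied along rows and then columns (orthonormal scaling)."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(coeffs):
    return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)

coeffs = dct2(block)
print(np.allclose(idct2(coeffs), block))        # True: the transform is invertible

# Energy compaction: keep only the top-left 4x4 low-frequency coefficients.
mask = np.zeros((8, 8))
mask[:4, :4] = 1
approx = idct2(coeffs * mask)
print(np.sqrt(np.mean((approx - block) ** 2)))  # RMS error of the truncated block
```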
passage: ## Steam carriage While in Edinburgh he experimented with steam engines, using a square boiler for which he developed a method of staying the surface of the boiler which became universal. The Scottish Steam Carriage Company was formed, producing a steam carriage with two cylinders developing 12 horsepower each. Six were constructed in 1834, well-sprung and fitted out to a high standard, which from March 1834 ran between Glasgow's George Square and the Tontine Hotel in Paisley at hourly intervals at 15 mph. The road trustees objected that it wore out the road and placed various obstructions of logs and stones in the road, which actually caused more discomfort for horse-drawn carriages. But in July 1834 one of the carriages was overturned and the boiler smashed, causing the death of several passengers. Two of the coaches were sent to London, where they ran for a short time between London and Greenwich. ## The wave of translation In 1834, while conducting experiments to determine the most efficient design for canal boats, he discovered a phenomenon that he described as the wave of translation. In fluid dynamics the wave is now called Russell's solitary wave. The discovery is described in his own words in a passage that has been repeated in many papers and books on soliton theory.
https://en.wikipedia.org/wiki/John_Scott_Russell
passage: Also, k-d trees are always binary, which is not the case for octrees. Using a depth-first search, the nodes are traversed and only the required surfaces are viewed. ## History The use of octrees for 3D computer graphics was pioneered by Donald Meagher at Rensselaer Polytechnic Institute, described in a 1980 report "Octree Encoding: A New Technique for the Representation, Manipulation and Display of Arbitrary 3-D Objects by Computer", for which he holds a 1995 patent (with a 1984 priority date), "High-speed image generation of complex solid objects using octree encoding". ## Common uses - Level of detail rendering in 3D computer graphics - Spatial indexing - Nearest neighbor search - Efficient collision detection in three dimensions - View frustum culling - Fast multipole method - Unstructured grid - Finite element analysis - Sparse voxel octree - State estimation - Set estimation ## Application to color quantization The octree color quantization algorithm, invented by Gervautz and Purgathofer in 1988, encodes image color data as an octree up to nine levels deep. Octrees are used because $$ 2^3 = 8 $$ and there are three color components in the RGB system. The node index to branch out from at the top level is determined by a formula that uses the most significant bits of the red, green, and blue color components, e.g. 4r + 2g + b. The next lower level uses the next bit significance, and so on.
https://en.wikipedia.org/wiki/Octree
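A tiny Python sketch of the child-index rule quoted above: at level i, bit i (counting from the most significant bit) of each of the red, green, and blue components is combined as 4r + 2g + b. The function name and the example color are illustrative.

```python
def octree_child_index(r, g, b, level, depth=8):
    """Index (0-7) of the child node to branch into at a given level.

    Level 0 uses the most significant bit of each 8-bit component,
    level 1 the next bit, and so on (combined as 4r + 2g + b).
    """
    shift = depth - 1 - level
    return (((r >> shift) & 1) << 2) | (((g >> shift) & 1) << 1) | ((b >> shift) & 1)

# Path of one color down the tree, e.g. a medium orange (230, 126, 34):
color = (230, 126, 34)
print([octree_child_index(*color, level=i) for i in range(8)])
# [4, 6, 7, 2, 2, 6, 7, 0]
```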
passage: For clarity, define $$ \mathbf f_n = \mathbf f(\mathbf x_n), $$ $$ \Delta \mathbf x_n = \mathbf x_n - \mathbf x_{n - 1}, $$ $$ \Delta \mathbf f_n = \mathbf f_n - \mathbf f_{n - 1}, $$ so the above may be rewritten as $$ \mathbf J_n \Delta \mathbf x_n \simeq \Delta \mathbf f_n. $$ The above equation is underdetermined when the dimension is greater than one. Broyden suggested using the most recent estimate of the Jacobian matrix, $$ \mathbf J_{n - 1} $$, and then improving upon it by requiring that the new form is a solution to the most recent secant equation, and that there is minimal modification to $$ \mathbf J_{n - 1} $$: $$ \mathbf J_n = \mathbf J_{n - 1} + \frac{\Delta \mathbf f_n - \mathbf J_{n - 1} \Delta \mathbf x_n}{\|\Delta \mathbf x_n\|^2} \Delta \mathbf x_n^{\mathrm T}. $$ This minimizes the Frobenius norm $$ \|\mathbf J_n - \mathbf J_{n - 1}\|_{\rm F} . $$ One then updates the variables using the approximate Jacobian, in what is called a quasi-Newton approach.
https://en.wikipedia.org/wiki/Broyden%27s_method
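A small numpy sketch of the update written above: the Jacobian estimate is corrected by a rank-one term built from Δx and Δf, and the resulting approximation is used in a quasi-Newton step. The test system, starting point, and initial Jacobian are made up for the example.

```python
import numpy as np

def broyden(f, x0, J0, tol=1e-10, max_iter=50):
    """Solve f(x) = 0 using Broyden's rank-one Jacobian update."""
    x, J = np.asarray(x0, float), np.asarray(J0, float)
    fx = f(x)
    for _ in range(max_iter):
        dx = np.linalg.solve(J, -fx)          # quasi-Newton step
        x_new = x + dx
        f_new = f(x_new)
        df = f_new - fx
        # Rank-one correction: J_n = J_{n-1} + (df - J dx) dx^T / ||dx||^2
        J = J + np.outer(df - J @ dx, dx) / (dx @ dx)
        x, fx = x_new, f_new
        if np.linalg.norm(fx) < tol:
            break
    return x

# Example: intersection of the circle x^2 + y^2 = 4 with the curve x*y = 1.
def f(v):
    x, y = v
    return np.array([x**2 + y**2 - 4.0, x * y - 1.0])

J0 = np.array([[4.0, 1.0],     # analytic Jacobian evaluated at the starting point
               [0.5, 2.0]])
print(broyden(f, x0=[2.0, 0.5], J0=J0))   # ~[1.9319, 0.5176]
```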
passage: To symplectically simulate the system, one simply composes these solution maps. ## Applications ### In plasma physics In recent decades symplectic integrators in plasma physics have become an active research topic, because straightforward applications of the standard symplectic methods do not suit the need of large-scale plasma simulations enabled by the peta- to exa-scale computing hardware. Special symplectic algorithms need to be custom-designed, tapping into the special structures of the physics problem under investigation. One such example is the charged particle dynamics in an electromagnetic field. With the canonical symplectic structure, the Hamiltonian of the dynamics is $$ H(\boldsymbol{p},\boldsymbol{x}) = \tfrac{1}{2} \left(\boldsymbol{p}-\boldsymbol{A}\right)^2 + \phi, $$ whose $$ \boldsymbol{p} $$ -dependence and $$ \boldsymbol{x} $$ -dependence are not separable, and standard explicit symplectic methods do not apply. For large-scale simulations on massively parallel clusters, however, explicit methods are preferred. To overcome this difficulty, we can explore the specific way that the $$ \boldsymbol{p} $$ -dependence and $$ \boldsymbol{x} $$ -dependence are entangled in this Hamiltonian, and try to design a symplectic algorithm just for this specific problem or this type of problem.
https://en.wikipedia.org/wiki/Symplectic_integrator
passage: #### Line matching at the radio Antenna tuning in the loose sense, performed by an impedance matching device (somewhat inappropriately named an "antenna tuner", or the older, more appropriate term transmatch) goes beyond merely removing reactance and includes transforming the remaining resistance to match the feedline and radio. An additional problem is matching the remaining resistive impedance to the characteristic impedance of the transmission line: A general impedance matching network (an "antenna tuner" or ATU) will have at least two adjustable elements to correct both components of impedance. Any matching network will have both power losses and power restrictions when used for transmitting. Commercial antennas are generally designed to approximately match standard 50 Ohm coaxial cables, at standard frequencies; the design expectation is that a matching network will be merely used to 'tweak' any residual mismatch. #### Extreme examples of loaded small antennas In some cases matching is done in a more extreme manner, not simply to cancel a small amount of residual reactance, but to resonate an antenna whose resonance frequency is quite different from the intended frequency of operation. ##### Short vertical "whip" For instance, for practical reasons a "whip antenna" can be made significantly shorter than a quarter-wavelength and then resonated, using a so-called loading coil. The physically large inductor at the base of the antenna has an inductive reactance which is the opposite of the capacitive reactance that the short vertical antenna has at the desired operating frequency.
https://en.wikipedia.org/wiki/Antenna_%28radio%29
passage: We define the inverse limit of the inverse system $$ ((A_i)_{i\in I}, (f_{ij})_{i\leq j\in I}) $$ as a particular subgroup of the direct product of the $$ A_i $$ 's: $$ A = \varprojlim_{i\in I}{A_i} = \left\{\left.\vec a \in \prod_{i\in I}A_i \;\right|\; a_i = f_{ij}(a_j) \text{ for all } i \leq j \text{ in } I\right\}. $$ The inverse limit $$ A $$ comes equipped with natural projections which pick out the $$ i $$ th component of the direct product for each $$ i $$ in $$ I $$ . The inverse limit and the natural projections satisfy a universal property described in the next section. This same construction may be carried out if the $$ A_i $$ 's are sets, semigroups, topological spaces, rings, modules (over a fixed ring), algebras (over a fixed ring), etc., and the homomorphisms are morphisms in the corresponding category. The inverse limit will also belong to that category. ### General definition The inverse limit can be defined abstractly in an arbitrary category by means of a universal property. Let $$ (X_i, f_{ij}) $$ be an inverse system of objects and morphisms in a category C (same definition as above).
https://en.wikipedia.org/wiki/Inverse_limit
passage: Otherwise, let $$ e \in S $$ and $$ T = S \setminus \{e\} $$; then $$ \mathcal{P}(S) = \mathcal{P}(T) \cup \{X \cup \{e\} \mid X \in \mathcal{P}(T)\} $$. In words: - The power set of the empty set is a singleton whose only element is the empty set. - For a non-empty set $$ S $$, let $$ e $$ be any element of the set and $$ T = S \setminus \{e\} $$ its relative complement; then the power set of $$ S $$ is a union of the power set of $$ T $$ and a power set of $$ T $$ whose every element is expanded with the element $$ e $$. ## Subsets of limited cardinality The set of subsets of of cardinality less than or equal to is sometimes denoted by or , and the set of subsets with cardinality strictly less than is sometimes denoted or . Similarly, the set of non-empty subsets of might be denoted by or . ## Power object A set can be regarded as an algebra having no nontrivial operations or defining equations. From this perspective, the concept of the power set of a set as the set of all of its subsets generalizes naturally to the set of all subalgebras of an algebraic structure or algebra. The power set of a set, when ordered by inclusion, is always a complete atomic Boolean algebra, and every complete atomic Boolean algebra arises as the lattice of all subsets of some set. The generalization to arbitrary algebras is that the set of subalgebras of an algebra, again ordered by inclusion, is always an algebraic lattice, and every algebraic lattice arises as the lattice of subalgebras of some algebra. So in that regard, subalgebras behave analogously to subsets.
https://en.wikipedia.org/wiki/Power_set
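The recursive description in the passage above translates directly into a short Python function; the function name and the use of frozenset are illustrative choices.

```python
def power_set(s):
    """All subsets of s, built by the recursion described above."""
    s = set(s)
    if not s:                      # P(empty set) = { empty set }
        return [frozenset()]
    e = next(iter(s))              # pick any element e
    rest = power_set(s - {e})      # power set of the relative complement T = s \ {e}
    # P(s) = P(T) together with every member of P(T) expanded with e.
    return rest + [subset | {e} for subset in rest]

for subset in power_set({1, 2, 3}):
    print(set(subset))
# 8 subsets in total: {}, {1}, {2}, {3}, {1,2}, {1,3}, {2,3}, {1,2,3}
```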
passage: An Introduction to Analysis, Arlen Brown, Carl Pearcy (1995, ) 1. Quantum Groups, Christian Kassel (1995, ) 1. Classical Descriptive Set Theory, Alexander S. Kechris (1995, ) 1. Integration and Probability, Paul Malliavin (1995, ) 1. Field Theory, Steven Roman (2006, 2nd ed., ) 1. Functions of One Complex Variable II, John B. Conway (1995, ) 1. Differential and Riemannian Manifolds, Serge Lang (1995, ) 1. Polynomials and Polynomial Inequalities, Peter Borwein, Tamas Erdelyi (1995, ) 1. Groups and Representations, J. L. Alperin, Rowen B. Bell (1995, ) 1. Permutation Groups, John D. Dixon, Brian Mortimer (1996, ) 1. Additive Number Theory The Classical Bases, Melvyn B. Nathanson (1996, ) 1. Additive Number Theory: Inverse Problems and the Geometry of Sumsets, Melvyn B. Nathanson (1996, ) 1. Differential Geometry — Cartan's Generalization of Klein's Erlangen Program, R. W. Sharpe (1997, ) 1. Field and Galois Theory, Patrick Morandi (1996, ) 1. Combinatorial Convexity and Algebraic Geometry, Guenter Ewald (1996, ) 1. Matrix Analysis, Rajendra Bhatia (1997, ) 1. Sheaf Theory, Glen E. Bredon (1997, 2nd ed., ) 1. Riemannian Geometry, Peter Petersen (2016, 3rd ed., ) 1.
https://en.wikipedia.org/wiki/Graduate_Texts_in_Mathematics
passage: Randomization is widely applied in various fields, especially in scientific research, statistical analysis, and resource allocation, to ensure fairness and validity in the outcomes. In various contexts, randomization may involve: - Generating Random Permutations: This is essential in various situations, such as shuffling cards. By randomly rearranging the sequence, it ensures fairness and unpredictability in games and experiments. - Selecting Random Samples from Populations: In statistical sampling, this method is vital for obtaining representative samples. By randomly choosing a subset of individuals, biases are minimized, ensuring that the sample accurately reflects the larger population. - Random Allocation in Experimental Design: Random assignment of experimental units to treatment or control conditions is fundamental in scientific studies. This approach ensures that each unit has an equal chance of receiving any treatment, thereby reducing systematic bias and improving the reliability of experimental results. - Generating Random Numbers: The process of random number generation is central to simulations, cryptographic applications, and statistical analysis. These numbers form the basis for simulations, model testing, and secure data encryption. - Data Stream Transformation: In telecommunications, randomization is used to transform data streams. Techniques like scramblers randomize the data to prevent predictable patterns, which is crucial for securing communication channels and enhancing transmission reliability. Randomization has many uses in gambling, political use, statistical analysis, art, cryptography, gaming and other fields. ## In gambling
https://en.wikipedia.org/wiki/Randomization
passage: Multiple eDSLs can easily be combined into a single program and the facilities of the host language can be used to extend an existing eDSL. Other possible advantages using an eDSL are improved type safety and better IDE tooling. eDSL examples: SQLAlchemy "Core" an SQL eDSL in Python, jOOQ an SQL eDSL in Java, LINQ's "method syntax" an SQL eDSL in C# and kotlinx.html an HTML eDSL in Kotlin. - Domain-specific languages which are called (at runtime) from programs written in general purpose languages like C or Perl, to perform a specific function, often returning the results of operation to the "host" programming language for further processing; generally, an interpreter or virtual machine for the domain-specific language is embedded into the host application (e.g. format strings, a regular expression engine) - Domain-specific languages which are embedded into user applications (e.g., macro languages within spreadsheets) and which are (1) used to execute code that is written by users of the application, (2) dynamically generated by the application, or (3) both. Many domain-specific languages can be used in more than one way. DSL code embedded in a host language may have special syntax support, such as regexes in sed, AWK, Perl or JavaScript, or may be passed as strings. ### Design goals Adopting a domain-specific language approach to software engineering involves both risks and opportunities. The well-designed domain-specific language manages to find the proper balance between these.
https://en.wikipedia.org/wiki/Domain-specific_language
passage: $$ This yields $$ \left\vert\sum_{i=1}^k \int_{\Gamma_i} A\,dx+B\,dy\quad-\int_R\varphi \right\vert \le M \delta(1+\pi\sqrt{2}\,\delta) \text{ for some } M > 0. $$ We may as well choose $$ \delta $$ so that the RHS of the last inequality is $$ <\varepsilon. $$ The remark in the beginning of this proof implies that the oscillations of $$ A $$ and $$ B $$ on every border region is at most $$ \varepsilon $$ .
https://en.wikipedia.org/wiki/Green%27s_theorem
passage: A standard combinatorial lemma which is utilized in producing the above explicit expansions is given by $$ e^{X} Y e^{-X} = Y + [X,Y] + \frac{1}{2!}[X,[X,Y]] + \frac{1}{3!}[X,[X,[X,Y]]] + \cdots $$ This is a particularly useful formula which is commonly used to conduct unitary transforms in quantum mechanics. By defining the iterated commutator, $$ [X,Y]_n \equiv \underbrace{[X,\dotsb[X,[X}_{n \text { times }}, Y]] \dotsb],\quad [X,Y]_0 \equiv Y, $$ we can write this formula more compactly as $$ e^X Y e^{-X} = \sum_{n=0}^{\infty} \frac{[X,Y]_n}{n!}. $$ ### An application of the identity For $$ [X,Y] $$ central, i.e., commuting with both $$ X $$ and $$ Y $$, $$ e^{sX} Y e^{-sX} = Y + s [ X, Y ] ~. $$ Consequently, for $$ g(s) = e^{sX} e^{sY} $$, it follows that $$ \frac{dg}{ds} = \Bigl( X+ e^{sX} Y e^{-sX}\Bigr) g(s) = (X + Y + s [ X, Y ]) ~g(s) ~, $$ whose solution is $$ g(s)= e^{s(X+Y) +\frac{s^2}{2} [ X, Y ] } ~. $$ Taking $$ s=1 $$ gives one of the special cases of the Baker–Campbell–Hausdorff formula described above: $$
https://en.wikipedia.org/wiki/Baker%E2%80%93Campbell%E2%80%93Hausdorff_formula
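A quick numpy verification of the iterated-commutator expansion e^X Y e^{-X} = Σ_n [X,Y]_n / n! quoted above, using small random matrices; the matrix size, scaling, and truncation order are arbitrary illustrative choices.

```python
import numpy as np
from scipy.linalg import expm
from math import factorial

rng = np.random.default_rng(1)
X = 0.3 * rng.standard_normal((4, 4))
Y = rng.standard_normal((4, 4))

lhs = expm(X) @ Y @ expm(-X)

# Right-hand side: partial sum of iterated commutators [X, Y]_n / n!
rhs = np.zeros_like(Y)
term = Y.copy()                       # [X, Y]_0 = Y
for n in range(25):
    rhs += term / factorial(n)
    term = X @ term - term @ X        # [X, Y]_{n+1} = [X, [X, Y]_n]

print(np.max(np.abs(lhs - rhs)))      # essentially zero: the series matches the conjugation
```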
passage: (unless explicitly indicated otherwise), that makes $$ (I, \leq) $$ into an (upward) directed set; this means that for all $$ i, j \in I, $$ there exists some $$ k \in I $$ such that $$ i \leq k \text{ and } j \leq k. $$ For any indices $$ i \text{ and } j, $$ the notation $$ j \geq i $$ is defined to mean $$ i \leq j $$ while $$ i < j $$ is defined to mean that $$ i \leq j $$ holds but it is not true that $$ j \leq i $$ (if $$ \,\leq\, $$ is antisymmetric then this is equivalent to $$ i \leq j \text{ and } i \neq j $$ ). A net in $$ X $$ is a map from a non-empty directed set into $$ X. $$ The notation $$ x_{\bull} = \left(x_i\right)_{i \in I} $$ will be used to denote a net with domain $$ I. $$ [Table: notation, definitions, and names for the tails of a net or sequence and the filters/prefilters they generate.]
https://en.wikipedia.org/wiki/Filter_%28set_theory%29
passage: A variant form sometimes seen substitutes $$ \log n +\log\log n = \log(n \log n). $$ An even simpler lower bound is $$ n \log n < p_n, $$ which holds for all , but the lower bound above is tighter for . In 2010 Dusart proved (Propositions 6.7 and 6.6) that $$ n \left( \log n + \log \log n - 1 + \frac{\log \log n - 2.1}{\log n} \right) \le p_n \le n \left( \log n + \log \log n - 1 + \frac{\log \log n - 2}{\log n} \right), $$ for and , respectively. In 2024, Axler further tightened this (equations 1.12 and 1.13) using bounds of the form $$ f(n,g(w)) = n \left( \log n + \log\log n - 1 + \frac{\log\log n - 2}{\log n} - \frac{g(\log\log n)}{2\log^2 n} \right) $$ proving that $$ f(n, w^2 - 6w + 11.321) \le p_n \le f(n, w^2 - 6w) $$ for and , respectively. The lower bound may also be simplified to without altering its validity. The upper bound may be tightened to if .
https://en.wikipedia.org/wiki/Prime-counting_function
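A short sympy-based spot check of the bounds quoted above, namely the simple bound n log n < p_n and Dusart's 2010 lower bound with the (log log n − 2.1)/log n correction. The exact validity thresholds are elided in the passage, so the check below only covers a modest range of n; the upper bound of the pair, which holds only from a large threshold onward, is not tested.

```python
import math
from sympy import prime

def dusart_lower(n):
    w = math.log(math.log(n))
    return n * (math.log(n) + w - 1 + (w - 2.1) / math.log(n))

# Check n*log(n) < p_n and the lower bound over a modest range of n.
ok_simple = all(n * math.log(n) < prime(n) for n in range(2, 1001))
ok_lower = all(dusart_lower(n) <= prime(n) for n in range(3, 1001))
print(ok_simple, ok_lower)          # True True

n = 1000
print(dusart_lower(n), prime(n))    # ~7816.2 vs p_1000 = 7919
```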
passage: Moreover, the iterative error may oscillate significantly, making it unreliable as a stopping condition. This poor convergence is not explained by the condition number alone (e.g., $$ \kappa_2(A) = 10^6 $$ ), but rather by the eigenvalue distribution itself. When the eigenvalues are more evenly spaced or randomly distributed, such convergence issues are typically absent, highlighting that CGM performance depends not only on $$ \kappa(A) $$ but also on how the eigenvalues are distributed. If $$ \kappa(A) $$ is large, preconditioning is commonly used to replace the original system $$ \mathbf{A x}-\mathbf{b} = 0 $$ with $$ \mathbf{M}^{-1}(\mathbf{A x}-\mathbf{b}) = 0 $$ such that $$ \kappa(\mathbf{M}^{-1}\mathbf{A}) $$ is smaller than $$ \kappa(\mathbf{A}) $$ , see below. ### Convergence theorem Define a subset of polynomials as $$ \Pi_k^* := \left\lbrace \ p \in \Pi_k \ : \ p(0)=1 \ \right\rbrace \,, $$ where $$ \Pi_k $$ is the set of polynomials of maximal degree $$ k $$ .
https://en.wikipedia.org/wiki/Conjugate_gradient_method
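A small numpy illustration of the preconditioning idea in the passage above: replacing A by a preconditioned system so that the relevant condition number is smaller. Here a simple Jacobi (diagonal) preconditioner is applied symmetrically, which keeps the matrix symmetric positive definite for CG; the matrix construction and scales are illustrative choices, not anything prescribed by the passage.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# A well-conditioned SPD "core" B, then a badly row/column-scaled A = S B S.
R = rng.standard_normal((n, n))
B = np.eye(n) + 0.1 * (R @ R.T) / n
S = np.diag(np.logspace(0, 3, n))        # scales ranging from 1 to 1000
A = S @ B @ S

# Jacobi (diagonal) preconditioner M = diag(A), applied symmetrically.
M_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(A)))
A_prec = M_inv_sqrt @ A @ M_inv_sqrt

print(f"cond(A)               = {np.linalg.cond(A):.2e}")       # roughly 1e6
print(f"cond(M^-1/2 A M^-1/2) = {np.linalg.cond(A_prec):.2e}")   # close to cond(B), i.e. small
```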
passage: Constraint propagation enforcing strong directional path consistency is similar, but also enforces arc consistency. ### Directional consistency and satisfiability Directional consistency guarantees that partial solutions satisfying a constraint can be consistently extended to another variable of higher index. However, it does not guarantee that the extensions to different variables are consistent with each other. For example, a partial solution may be consistently extended to variable $$ x_i $$ or to variable $$ x_j $$ , but yet these two extensions are not consistent with each other. There are two cases in which this does not happen, and directional consistency guarantees satisfiability if no domain is empty and no constraint is unsatisfiable. The first case is that of a binary constraint problem with an ordering of the variables that makes the ordered graph of constraint having width 1. Such an ordering exists if and only if the graph of constraints is a tree. If this is the case, the width of the graph bounds the maximal number of lower (according to the ordering) nodes a node is joined to. Directional arc consistency guarantees that every consistent assignment to a variable can be extended to higher nodes, and width 1 guarantees that a node is not joined to more than one lower node. As a result, once the lower variable is assigned, its value can be consistently extended to every higher variable it is joined with. This extension cannot later lead to inconsistency.
https://en.wikipedia.org/wiki/Local_consistency
passage: The fan of order one has one spanning tree, the fan of order two has three, the fan of order three has eight, and so on. A spanning tree is a subgraph of a graph which contains all of the original vertices and which contains enough edges to make this subgraph connected, but not so many edges that there is a cycle in the subgraph. We ask how many spanning trees of a fan of order are possible for each . As an observation, we may approach the question by counting the number of ways to join adjacent sets of vertices. For example, when , we have that , which is a sum over the -fold convolutions of the sequence for .
https://en.wikipedia.org/wiki/Generating_function
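A small numpy check of the spanning-tree counts for fans mentioned above, computed with Kirchhoff's matrix-tree theorem. Both the theorem and the concrete fan construction used here (a hub vertex joined to every vertex of an n-vertex path) are brought in for illustration; they are not spelled out in the passage.

```python
import numpy as np

def fan_spanning_trees(n):
    """Number of spanning trees of the fan of order n (hub 0 + path 1..n)."""
    edges = [(0, k) for k in range(1, n + 1)] + [(k, k + 1) for k in range(1, n)]
    L = np.zeros((n + 1, n + 1))
    for u, v in edges:                      # graph Laplacian L = D - A
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    # Matrix-tree theorem: any cofactor of L counts the spanning trees.
    return round(np.linalg.det(L[1:, 1:]))

print([fan_spanning_trees(n) for n in range(1, 8)])
# [1, 3, 8, 21, 55, 144, 377] -- every other Fibonacci number
```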
passage: The book was, however, received by medieval scholars in the Islamic world, and commented upon by Ibn Sahl (10th century), who was in turn improved upon by Alhazen (Book of Optics, 11th century). The Arabic translation of Ptolemy's Optics became available in Latin translation in the 12th century (Eugenius of Palermo 1154). Between the 11th and 13th century "reading stones" were invented. These were primitive plano-convex lenses initially made by cutting a glass sphere in half. The medieval (11th or 12th century) rock crystal Visby lenses may or may not have been intended for use as burning glasses. Spectacles were invented as an improvement of the "reading stones" of the high medieval period in Northern Italy in the second half of the 13th century. This was the start of the optical industry of grinding and polishing lenses for spectacles, first in Venice and Florence in the late 13th century, and later in the spectacle-making centres in both the Netherlands and Germany. Spectacle makers created improved types of lenses for the correction of vision based more on empirical knowledge gained from observing the effects of the lenses (probably without the knowledge of the rudimentary optical theory of the day). The practical development and experimentation with lenses led to the invention of the compound optical microscope around 1595, and the refracting telescope in 1608, both of which appeared in the spectacle-making centres in the Netherlands.
https://en.wikipedia.org/wiki/Lens
passage: For single-material, isotropic rotors this relationship can be expressed as $$ \frac{E}{m} = K\left(\frac{\sigma}{\rho}\right), $$ where $$ E $$ is kinetic energy of the rotor [J], $$ m $$ is the rotor's mass [kg], $$ K $$ is the rotor's geometric shape factor [dimensionless], $$ \sigma $$ is the tensile strength of the material [Pa], $$ \rho $$ is the material's density [kg/m3]. #### Geometry (shape factor) The highest possible value for the shape factor of a flywheel rotor, is $$ K = 1 $$ , which can be achieved only by the theoretical constant-stress disc geometry. A constant-thickness disc geometry has a shape factor of $$ K = 0.606 $$ , while for a rod of constant thickness the value is $$ K = 0.333 $$ . A thin cylinder has a shape factor of $$ K = 0.5 $$ . For most flywheels with a shaft, the shape factor is below or about $$ K = 0.333 $$ . A shaft-less design has a shape factor similar to a constant-thickness disc ( $$ K = 0.6 $$ ), which enables a doubled energy density. #### Material properties For energy storage, materials with high strength and low density are desirable.
https://en.wikipedia.org/wiki/Flywheel_energy_storage
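A short Python calculation applying the E/m = K(σ/ρ) relation above to the shape factors listed in the passage. The material numbers (a carbon-fiber-composite-like strength and density) are rough illustrative values, not figures from the passage, and real flywheels run well below this theoretical limit because of safety margins.

```python
shape_factors = {
    "constant-stress disc (theoretical)": 1.0,
    "constant-thickness disc": 0.606,
    "thin cylinder": 0.5,
    "rod / typical shafted flywheel": 0.333,
}

sigma = 2.4e9     # tensile strength in Pa (rough carbon-fiber composite value)
rho = 1600.0      # density in kg/m^3

for name, K in shape_factors.items():
    e_specific = K * sigma / rho        # J/kg, from E/m = K * (sigma / rho)
    print(f"{name}: {e_specific/1e3:.0f} kJ/kg ({e_specific/3.6e3:.0f} Wh/kg)")
```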
passage: It is frequently referred to as the standard model of Big Bang cosmology. ### Cosmic microwave background The cosmic microwave background is radiation left over from decoupling after the epoch of recombination when neutral atoms first formed. At this point, radiation produced in the Big Bang stopped Thomson scattering from charged ions. The radiation, first observed in 1965 by Arno Penzias and Robert Woodrow Wilson, has a perfect thermal black-body spectrum. It has a temperature of 2.7 kelvins today and is isotropic to one part in 10^5. Cosmological perturbation theory, which describes the evolution of slight inhomogeneities in the early universe, has allowed cosmologists to precisely calculate the angular power spectrum of the radiation, and it has been measured by the recent satellite experiments (COBE and WMAP) and many ground and balloon-based experiments (such as Degree Angular Scale Interferometer, Cosmic Background Imager, and Boomerang). One of the goals of these efforts is to measure the basic parameters of the Lambda-CDM model with increasing accuracy, as well as to test the predictions of the Big Bang model and look for new physics. The results of measurements made by WMAP, for example, have placed limits on the neutrino masses. Newer experiments, such as QUIET and the Atacama Cosmology Telescope, are trying to measure the polarization of the cosmic microwave background.
https://en.wikipedia.org/wiki/Physical_cosmology
passage: The solution to the mixed model equations is a maximum likelihood estimate when the distribution of the errors is normal. There are several other methods to fit mixed models, including using a mixed effect model (MEM) initially, and then Newton-Raphson (used by R package nlme's lme()), penalized least squares to get a profiled log likelihood only depending on the (low-dimensional) variance-covariance parameters of $$ \boldsymbol{u} $$ , i.e., its cov matrix $$ \boldsymbol{G} $$ , and then modern direct optimization for that reduced objective function (used by R's lme4 package lmer() and the Julia package MixedModels.jl) and direct optimization of the likelihood (used by e.g. R's glmmTMB). Notably, while the canonical form proposed by Henderson is useful for theory, many popular software packages use a different formulation for numerical computation in order to take advantage of sparse matrix methods (e.g. lme4 and MixedModels.jl). In the context of Bayesian methods, the brms package provides a user-friendly interface for fitting mixed models in R using Stan, allowing for the incorporation of prior distributions and the estimation of posterior distributions. In python, Bambi provides a similarly streamlined approach for fitting mixed effects models using PyMC.
https://en.wikipedia.org/wiki/Mixed_model
passage: ### General ℓp-space In complete analogy to the preceding definition one can define the space $$ \ell^p(I) $$ over a general index set $$ I $$ (and $$ 1 \leq p < \infty $$ ) as $$ \ell^p(I) = \left\{(x_i)_{i\in I} \in \mathbb{K}^I : \sum_{i \in I} |x_i|^p < +\infty\right\}, $$ where convergence on the right means that only countably many summands are nonzero (see also Unconditional convergence). With the norm $$ \|x\|_p = \left(\sum_{i\in I} |x_i|^p\right)^{1/p} $$ the space $$ \ell^p(I) $$ becomes a Banach space. In the case where $$ I $$ is finite with $$ n $$ elements, this construction yields $$ \Reals^n $$ with the $$ p $$ -norm defined above. If $$ I $$ is countably infinite, this is exactly the sequence space $$ \ell^p $$ defined above. For uncountable sets $$ I $$ this is a non-separable Banach space which can be seen as the locally convex direct limit of $$ \ell^p $$ -sequence spaces.
https://en.wikipedia.org/wiki/Lp_space
passage: ## Incidence coloring game The incidence coloring game was first introduced by S. D. Andres. It is the incidence version of the vertex coloring game, in which the incidences of a graph are colored instead of vertices. The incidence game chromatic number is the new parameter defined as a game-theoretic analogue of the incidence chromatic number. The game is one in which two players, Alice and Bob, construct a proper incidence coloring. The rules are stated below: - Alice and Bob color the incidences of a graph G with a set of k colors. - They take turns providing a proper coloring to an uncolored incidence. Generally, Alice begins. - If an incidence cannot be colored properly, then Bob wins. - If every incidence of the graph is colored properly, Alice wins. The incidence game chromatic number of a graph G, denoted by $$ i_g(G) $$ , is the smallest number of colors required for Alice to win the incidence coloring game. It unifies the ideas of the incidence chromatic number of a graph and the game chromatic number in the case of an undirected graph. Andres found that the upper bound for $$ i_g(G) $$ in the case of k-degenerate graphs is 2Δ + 4k − 2. This bound was improved to 2Δ + 3k − 1 for graphs in which Δ is at least 5k.
https://en.wikipedia.org/wiki/Incidence_coloring
passage: ## Research Cartan worked in several fields across algebra, geometry and analysis, focussing primarily on algebraic topology and homological algebra. He was a founding member of the Bourbaki group in 1934 and one of its most active participants. After 1945 he started his own seminar in Paris, which deeply influenced Jean-Pierre Serre, Armand Borel, Alexander Grothendieck and Frank Adams, amongst others of the leading lights of the younger generation. The number of his official students was small, but included Joséphine Guidy Wandja (the first African woman to gain a PhD in mathematics), Adrien Douady, Roger Godement, Max Karoubi, Jean-Louis Koszul, Jean-Pierre Serre and René Thom. Cartan's first research interests, until the 1940s, were in the theory of functions of several complex variables, which later gave rise to the theory of complex varieties and analytic geometry. Motivated by the solution to the Cousin problems, he worked on sheaf cohomology and coherent sheaves and proved two powerful results, Cartan's theorems A and B. From the 1950s he became more interested in algebraic topology. Among his major contributions, he worked on cohomology operations and homology of the Eilenberg–MacLane spaces, he introduced the notion of Steenrod algebra, and, together with Jean-Pierre Serre, developed the method of "killing homotopy groups". His 1956 book with Samuel Eilenberg on homological algebra was an important text, treating the subject with a moderate level of abstraction with the help of category theory.
https://en.wikipedia.org/wiki/Henri_Cartan
passage: Statistical mixtures represent the degree of knowledge whilst the uncertainty within quantum mechanics is fundamental. Mathematically, a statistical mixture is not a combination using complex coefficients, but rather a combination using real-valued, positive probabilities of different states $$ \Phi_n $$ . A number $$ P_n $$ represents the probability of a randomly selected system being in the state $$ \Phi_n $$ . Unlike the linear combination case each system is in a definite eigenstate. The expectation value $$ {\langle A \rangle}_\sigma $$ of an observable is a statistical mean of measured values of the observable. It is this mean, and the distribution of probabilities, that is predicted by physical theories. There is no state that is simultaneously an eigenstate for all observables. For example, we cannot prepare a state such that both the position measurement and the momentum measurement (at the same time ) are known exactly; at least one of them will have a range of possible values. This is the content of the Heisenberg uncertainty relation. Moreover, in contrast to classical mechanics, it is unavoidable that performing a measurement on the system generally changes its state. Bohr, N. (1927/1928). The quantum postulate and the recent development of atomic theory, Nature Supplement April 14 1928, 121: 580–590. More precisely: After measuring an observable A, the system will be in an eigenstate of A; thus the state has changed, unless the system was already in that eigenstate.
https://en.wikipedia.org/wiki/Quantum_state
passage: The Stanford Review is a conservative student newspaper founded in 1987. The Fountain Hopper (FoHo) is a financially independent, anonymous student-run campus rag publication, notable for having broken the Brock Turner story. Stanford hosts numerous environmental and sustainability-oriented student groups, including Students for a Sustainable Stanford, Students for Environmental and Racial Justice, and Stanford Energy Club. Stanford is a member of the Ivy Plus Sustainability Consortium, through which it has committed to best-practice sharing and the ongoing exchange of campus sustainability solutions along with other member institutions. Stanford is also home to a large number of pre-professional student organizations, organized around missions from startup incubation to paid consulting. The Business Association of Stanford Entrepreneurial Students (BASES) is one of the largest professional organizations in Silicon Valley, with over 5,000 members. Its goal is to support the next generation of entrepreneurs. StartX is a non-profit startup accelerator for student and faculty-led startups. It is staffed primarily by students. Stanford Women In Business (SWIB) is an on-campus business organization, aimed at helping Stanford women find paths to success in the generally male-dominated technology industry. Stanford Marketing is a student group that provides students hands-on training through research and strategy consulting projects with Fortune 500 clients, as well as workshops led by people from industry and professors in the Stanford Graduate School of Business. Stanford Finance provides mentoring and internships for students who want to enter a career in finance. Stanford Pre Business Association is intended to build connections among industry, alumni, and student communities.
https://en.wikipedia.org/wiki/Stanford_University
passage: As noted by Albert Einstein, Richard C. Tolman, and others, special relativity implies that faster-than-light particles, if they existed, could be used to communicate backwards in time. ### Neutrinos In 1985, Chodos proposed that neutrinos can have a tachyonic nature. The possibility of standard model particles moving at faster-than-light speeds can be modeled using Lorentz invariance violating terms, for example in the Standard-Model Extension. In this framework, neutrinos experience Lorentz-violating oscillations and can travel faster than light at high energies. This proposal was strongly criticized. ### Superluminal information If tachyons can transmit information faster than light, then, according to relativity, they violate causality, leading to logical paradoxes of the "kill your own grandfather" type. This is often illustrated with thought experiments such as the "tachyon telephone paradox" or "logically pernicious self-inhibitor. " The problem can be understood in terms of the relativity of simultaneity in special relativity, which says that different inertial reference frames will disagree on whether two events at different locations happened "at the same time" or not, and they can also disagree on the order of the two events. (Technically, these disagreements occur when the spacetime interval between the events is 'space-like', meaning that neither event lies in the future light cone of the other.)
https://en.wikipedia.org/wiki/Tachyon
passage: Strategic planning is particularly potent in enhancing an organization's capacity to achieve its goals (i.e., effectiveness). However, the study argues that just having a plan is not enough. For strategic planning to work, it needs to include some formality (i.e., including an analysis of the internal and external environment and the stipulation of strategies, goals and plans based on these analyses), comprehensiveness (i.e., producing many strategic options before selecting the course to follow) and careful stakeholder management (i.e., thinking carefully about whom to involve during the different steps of the strategic planning process, how, when and why). ### Strategic plans as tools to communicate and control Henry Mintzberg in the article "The Fall and Rise of Strategic Planning" (1994), argued that the lesson that should be accepted is that managers will never be able to take charge of strategic planning through a formalized process. Therefore, he underscored the role of plans as tools to communicate and control. It ensures that there is coordination wherein everyone in the organization is moving in the same direction. The plans are the prime media communicating the management's strategic intentions, thereby promoting a common direction instead of individual discretion. It is also the tool to secure the support of the organization's external sphere, such as financiers, suppliers or government agencies, who are helping achieve the organization's plans and goals. ## The strategic plan genre of communication Cornut et al (2012) studied the particular features of the strategic plan genre of communication by examining a corpus of strategic plans from public and non-profit organizations.
https://en.wikipedia.org/wiki/Strategic_planning
passage: With superparamagnetic beads, the sample is placed in a magnetic field so that the beads can collect on the side of the tube. This procedure is generally complete in approximately 30 seconds, and the remaining (unwanted) liquid is pipetted away. Washes are accomplished by resuspending the beads (off the magnet) with the washing solution and then concentrating the beads back on the tube wall (by placing the tube back on the magnet). The washing is generally repeated several times to ensure adequate removal of contaminants. If the superparamagnetic beads are homogeneous in size and the magnet has been designed properly, the beads will concentrate uniformly on the side of the tube and the washing solution can be easily and completely removed. After washing, the precipitated protein(s) are eluted and analyzed by gel electrophoresis, mass spectrometry, western blotting, or any number of other methods for identifying constituents in the complex. Protocol times for immunoprecipitation vary greatly due to a variety of factors, with protocol times increasing with the number of washes necessary or with the slower reaction kinetics of porous agarose beads. ### Steps 1. Lyse cells and prepare sample for immunoprecipitation. 1. Pre-clear the sample by passing the sample over beads alone or bound to an irrelevant antibody to soak up any proteins that non-specifically bind to the IP components. 1. Incubate solution with antibody against the protein of interest.
https://en.wikipedia.org/wiki/Immunoprecipitation
passage: The Cretaceous–Paleogene extinction event, which occurred approximately 66 million years ago at the end of the Cretaceous, caused the extinction of all dinosaur groups except for the neornithine birds. Some other diapsid groups, including crocodilians, dyrosaurs, sebecosuchians, turtles, lizards, snakes, sphenodontians, and choristoderans, also survived the event. The surviving lineages of neornithine birds, including the ancestors of modern ratites, ducks and chickens, and a variety of waterbirds, diversified rapidly at the beginning of the Paleogene period, entering ecological niches left vacant by the extinction of Mesozoic dinosaur groups such as the arboreal enantiornithines, aquatic hesperornithines, and even the larger terrestrial theropods (in the form of Gastornis, eogruiids, bathornithids, ratites, geranoidids, mihirungs, and "terror birds"). It is often stated that mammals out-competed the neornithines for dominance of most terrestrial niches but many of these groups co-existed with rich mammalian faunas for most of the Cenozoic Era. Terror birds and bathornithids occupied carnivorous guilds alongside predatory mammals, and ratites are still fairly successful as midsized herbivores; eogruiids similarly lasted from the Eocene to Pliocene, becoming extinct only very recently after over 20 million years of co-existence with many mammal groups.
https://en.wikipedia.org/wiki/Dinosaur
passage: For example, if all birds a person has seen so far can fly, this person is justified in reaching the inductive conclusion that all birds fly. This conclusion is defeasible because the reasoner may have to revise it upon learning that penguins are birds that do not fly. Inductive Inductive reasoning starts from a set of individual instances and uses generalization to arrive at a universal law governing all cases. Some theorists use the term in a very wide sense to include any form of non-deductive reasoning, even if no generalization is involved. In the more narrow sense, it can be defined as "the process of inferring a general law or principle from the observations of particular instances." For example, starting from the empirical observation that "all ravens I have seen so far are black", inductive reasoning can be used to infer that "all ravens are black". In a slightly weaker form, induction can also be used to infer an individual conclusion about a single case, for example, that "the next raven I will see is black". Inductive reasoning is closely related to statistical reasoning and probabilistic reasoning. Like other forms of non-deductive reasoning, induction is not certain. This means that the premises support the conclusion by making it more probable but do not ensure its truth. In this regard, the conclusion of an inductive inference contains new information not already found in the premises. Various aspects of the premises are important to ensure that they offer significant support to the conclusion. In this regard, the sample size should be large to guarantee that many individual cases were considered before drawing the conclusion.
https://en.wikipedia.org/wiki/Logical_reasoning
passage: Relevant marketing research methods may include: - Qualitative marketing research such as focus groups - Quantitative marketing research such as statistical surveys - Experimental techniques such as test markets - Observational techniques such as ethnographic (on-site) observation Marketing managers may also design and oversee various environmental scanning and competitive intelligence processes to help identify trends and inform the company's marketing analysis.

SWOT analysis of the market position of a small management consultancy with a specialism in human resource management

| Strengths | Weaknesses | Opportunities | Threats |
| --- | --- | --- | --- |
| Reputation in marketplace | Shortage of consultants at operating level rather than partner level | Well established position with a well-defined market niche | Large consultancies operating at a minor level |
| Expertise at partner level in HRM consultancy | Unable to deal with multidisciplinary assignments because of size or lack of ability | Identified market for consultancy in areas other than HRM | Other small consultancies looking to invade the marketplace |

### In community organizations Although the SWOT analysis was originally designed for business and industries, it has been used in non-governmental organisations as a tool for identifying external and internal support to combat internal and external opposition for successful implementation of social services and social change efforts. Understanding particular communities can come from public forums, listening campaigns, and informational interviews and other data collection. SWOT analysis provides direction to the next stages of the change process. It has been used by community organizers and community members to further social justice in the context of social work practice, and can be applied directly to communities served by a specific nonprofit or community organization.
https://en.wikipedia.org/wiki/SWOT_analysis
passage: The generalization to vector time was developed several times, apparently independently, by different authors in the early 1980s. At least 6 papers contain the concept. The papers canonically cited in reference to vector clocks are Colin Fidge’s and Friedemann Mattern’s 1988 works, as they (independently) established the name "vector clock" and the mathematical properties of vector clocks. ## Partial ordering property Vector clocks allow for the partial causal ordering of events. Defining the following: - $$ VC(x) $$ denotes the vector clock of event $$ x $$ , and $$ VC(x)_z $$ denotes the component of that clock for process $$ z $$ . - $$ VC(x) < VC(y) \iff \forall z [VC(x)_z \le VC(y)_z] \land \exists z' [ VC(x)_{z'} < VC(y)_{z'} ] $$ -
https://en.wikipedia.org/wiki/Vector_clock
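A short sketch of the comparison rule above, with made-up clocks for three processes; the function name is ours, not part of any particular library.

```python
# VC(x) < VC(y) iff every component of VC(x) is <= the matching component of
# VC(y) and at least one component is strictly smaller (illustrative sketch).
def vc_less_than(a, b):
    assert len(a) == len(b)
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

print(vc_less_than([1, 2, 0], [2, 2, 1]))   # True: first event causally precedes
print(vc_less_than([1, 2, 0], [0, 3, 1]))   # False
print(vc_less_than([0, 3, 1], [1, 2, 0]))   # False: neither precedes, so concurrent
```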
passage: All three can "roll" in four-dimensional space, each with its own properties. In three dimensions, curves can form knots but surfaces cannot (unless they are self-intersecting). In four dimensions, however, knots made using curves can be trivially untied by displacing them in the fourth direction—but 2D surfaces can form non-trivial, non-self-intersecting knots in 4D space. Because these surfaces are two-dimensional, they can form much more complex knots than strings in 3D space can. The Klein bottle is an example of such a knotted surface. Another such surface is the real projective plane. ### Hypersphere The set of points in Euclidean 4-space having the same distance from a fixed point forms a hypersurface known as a 3-sphere. The hyper-volume of the enclosed space is: $$ \mathbf V = \tfrac{1}{2} \pi^2 R^4 $$ This is part of the Friedmann–Lemaître–Robertson–Walker metric in General relativity, where $$ R $$ is substituted by a function $$ R(t) $$ , with $$ t $$ meaning the cosmological age of the universe. Growing or shrinking of $$ R $$ with time means an expanding or collapsing universe, depending on the mass density inside.
https://en.wikipedia.org/wiki/Four-dimensional_space
passage: The number of samples required from $$ Y $$ to obtain an accepted value thus follows a geometric distribution with probability $$ 1/M $$ , which has mean $$ M $$ . Intuitively, $$ M $$ is the expected number of the iterations that are needed, as a measure of the computational complexity of the algorithm. Rewrite the above equation, $$ M=\frac{1}{\mathbb{P}\left(U\le\frac{f(Y)}{M g(Y)}\right)} $$ Note that $$ 1 \le M<\infty $$ , due to the above formula, where $$ \mathbb{P}\left(U\le\frac{f(Y)}{M g(Y)}\right) $$ is a probability which can only take values in the interval $$ [0,1] $$ . When $$ M $$ is chosen closer to one, the unconditional acceptance probability is higher the less that ratio varies, since $$ M $$ is the upper bound for the likelihood ratio $$ f(x)/g(x) $$ . In practice, a value of $$ M $$ closer to 1 is preferred as it implies fewer rejected samples, on average, and thus fewer iterations of the algorithm.
https://en.wikipedia.org/wiki/Rejection_sampling
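The following is a minimal sketch of the accept/reject loop with a toy target density f(x) = 2x on [0, 1], a uniform proposal g, and M = 2; these choices are ours for illustration, so on average two proposals are needed per accepted sample, matching the expected iteration count M.

```python
import random

def target_f(x):
    return 2.0 * x          # toy target density on [0, 1]

def proposal_g(x):
    return 1.0              # uniform proposal density on [0, 1]

M = 2.0                     # bound on f/g, so the acceptance probability is 1/M

def rejection_sample():
    while True:
        y = random.random()                          # draw Y ~ g
        u = random.random()                          # draw U ~ Uniform(0, 1)
        if u <= target_f(y) / (M * proposal_g(y)):   # accept with probability f/(M g)
            return y

samples = [rejection_sample() for _ in range(10_000)]
print(sum(samples) / len(samples))   # close to 2/3, the mean of the target density
```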
passage: In statistics, a power law is a functional relationship between two quantities, where a relative change in one quantity results in a relative change in the other quantity proportional to the change raised to a constant exponent: one quantity varies as a power of another. The change is independent of the initial size of those quantities. For instance, the area of a square has a power law relationship with the length of its side, since if the length is doubled, the area is multiplied by a factor of four, while if the length is tripled, the area is multiplied by a factor of nine, and so on. ## Empirical examples The distributions of a wide variety of physical, biological, and human-made phenomena approximately follow a power law over a wide range of magnitudes: these include the sizes of craters on the moon and of solar flares, cloud sizes, the foraging pattern of various species, the sizes of activity patterns of neuronal populations, the frequencies of words in most languages, frequencies of family names, the species richness in clades of organisms, the sizes of power outages, volcanic eruptions, human judgments of stimulus intensity and many other quantities. Empirical distributions can only fit a power law for a limited range of values, because a pure power law would allow for arbitrarily large or small values. Acoustic attenuation follows frequency power-laws within wide frequency bands for many complex media. Allometric scaling laws for relationships between biological variables are among the best known power-law functions in nature. ## Properties
https://en.wikipedia.org/wiki/Power_law
passage: The speed in the formula is squared, so twice the speed needs four times the force, at a given radius. This force is also sometimes written in terms of the angular velocity ω of the object about the center of the circle, related to the tangential velocity by the formula $$ v = \omega r $$ so that $$ F_c = m r \omega^2 \,. $$ Expressed using the orbital period T for one revolution of the circle, $$ \omega = \frac{2\pi}{T} $$ the equation becomes $$ F_c = m r \left(\frac{2\pi}{T}\right)^2. $$ In particle accelerators, velocity can be very high (close to the speed of light in vacuum) so the same rest mass now exerts greater inertia (relativistic mass) thereby requiring greater force for the same centripetal acceleration, so the equation becomes: $$ F_c = \frac{\gamma m v^2}{r} $$ where $$ \gamma = \frac{1}{\sqrt{1-\frac{v^2}{c^2}}} $$ is the Lorentz factor. Thus the centripetal force is given by: $$ F_c = \gamma m v \omega $$ which is the rate of change of relativistic momentum $$ \gamma m v $$ .
https://en.wikipedia.org/wiki/Centripetal_force
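A small worked example with made-up numbers (a 2 kg mass at 3 m/s on a 0.5 m circle) checking that the speed form, the angular-velocity form and the period form of the non-relativistic formula agree.

```python
import math

m, v, r = 2.0, 3.0, 0.5            # illustrative values: mass, speed, radius
F_c = m * v**2 / r                 # 36 N from the speed form
omega = v / r                      # angular velocity in rad/s
T = 2.0 * math.pi / omega          # orbital period for one revolution

print(F_c)                              # 36.0
print(m * r * omega**2)                 # 36.0, angular-velocity form
print(m * r * (2.0 * math.pi / T)**2)   # 36.0, period form
```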
passage: The method can be described as the FTCS (forward in time, centered in space) scheme with a numerical dissipation term of 1/2. One can view the Lax–Friedrichs method as an alternative to Godunov's scheme, where one avoids solving a Riemann problem at each cell interface, at the expense of adding artificial viscosity. ## Illustration for a Linear Problem Consider a one-dimensional, linear hyperbolic partial differential equation for $$ u(x,t) $$ of the form: $$ u_t + a u_x = 0 $$ on the domain $$ b \leq x \leq c,\; 0 \leq t \leq d $$ with initial condition $$ u(x,0) = u_0(x)\, $$ and the boundary conditions $$ \begin{align} u(b,t) &= u_b(t) \\ u(c,t) &= u_c(t). \end{align} $$ If one discretizes the domain $$ (b, c) \times (0, d) $$ to a grid with equally spaced points with a spacing of $$ \Delta x $$ in the $$ x $$ -direction and $$ \Delta t $$ in the $$ t $$ -direction, we introduce an approximation $$ \tilde u $$ of $$ u $$ $$ u_i^n = \tilde u(x_i, t^n) ~~\text{ with }~~
https://en.wikipedia.org/wiki/Lax%E2%80%93Friedrichs_method
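The passage is cut off before the update formula itself, so the sketch below uses the standard Lax–Friedrichs stencil $$ u_i^{n+1} = \tfrac{1}{2}(u_{i+1}^n + u_{i-1}^n) - \tfrac{a \Delta t}{2 \Delta x}(u_{i+1}^n - u_{i-1}^n) $$ ; the initial condition, grid sizes and the crude boundary handling are illustrative choices only.

```python
import numpy as np

a, b, c = 1.0, 0.0, 1.0                    # advection speed and domain [b, c]
nx, dt, nt = 101, 0.004, 100               # grid size and time step (CFL number 0.4)
x = np.linspace(b, c, nx)
dx = x[1] - x[0]
u = np.exp(-200.0 * (x - 0.3)**2)          # illustrative initial condition u_0(x)

for _ in range(nt):
    un = u.copy()
    # Lax-Friedrichs update: average of the neighbours minus a centered difference
    u[1:-1] = 0.5 * (un[2:] + un[:-2]) - a * dt / (2.0 * dx) * (un[2:] - un[:-2])
    u[0], u[-1] = un[0], un[-1]            # hold the boundary values fixed
```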
passage: In differential geometry, the second fundamental form (or shape tensor) is a quadratic form on the tangent plane of a smooth surface in the three-dimensional Euclidean space, usually denoted by $$ \mathrm{I\!I} $$ (read "two"). Together with the first fundamental form, it serves to define extrinsic invariants of the surface, its principal curvatures. More generally, such a quadratic form is defined for a smooth immersed submanifold in a Riemannian manifold. ## Surface in R3 ### Motivation The second fundamental form of a parametric surface in was introduced and studied by Gauss. First suppose that the surface is the graph of a twice continuously differentiable function, , and that the plane is tangent to the surface at the origin. Then and its partial derivatives with respect to and vanish at (0,0). Therefore, the Taylor expansion of f at (0,0) starts with quadratic terms: $$ z=L\frac{x^2}{2} + Mxy + N\frac{y^2}{2} + \text{higher order terms}\,, $$ and the second fundamental form at the origin in the coordinates is the quadratic form $$ L \, dx^2 + 2M \, dx \, dy + N \, dy^2 \,. $$ For a smooth point on , one can choose the coordinate system so that the plane is tangent to at , and define the second fundamental form in the same way. ### Classical notation The second fundamental form of a general parametric surface is defined as follows.
https://en.wikipedia.org/wiki/Second_fundamental_form
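For a graph surface tangent to the xy-plane at the origin, the coefficients L, M, N are just the second partial derivatives of f at the origin, which a computer algebra check makes concrete; the particular f below is an arbitrary illustrative choice.

```python
import sympy as sp

x, y, dx, dy = sp.symbols('x y dx dy')
f = x**2 - x*y + sp.Rational(3, 2) * y**2   # f, f_x, f_y all vanish at (0, 0)

L = sp.diff(f, x, 2).subs({x: 0, y: 0})     # 2
M = sp.diff(f, x, y).subs({x: 0, y: 0})     # -1
N = sp.diff(f, y, 2).subs({x: 0, y: 0})     # 3

second_fundamental_form = L * dx**2 + 2 * M * dx * dy + N * dy**2
print(second_fundamental_form)              # 2*dx**2 - 2*dx*dy + 3*dy**2
```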
passage: Stage 3 sleep is defined as having less than 50% delta wave activity, while stage 4 sleep has more than 50% delta wave activity. These stages have recently been combined and are now collectively referred to as stage N3 slow-wave sleep. During N3 SWS, delta waves account for 20% or more of the EEG record during this stage. Delta waves occur in all mammals, and potentially all animals as well. Delta waves are often associated with another EEG phenomenon, the K-complex. K-Complexes have been shown to immediately precede delta waves in slow wave sleep. Delta waves have also been classified according to the location of the activity into frontal (FIRDA), temporal (TIRDA), and occipital (OIRDA) intermittent delta activity. ## Neurophysiology ### Sex differences Females have been shown to have more delta wave activity, and this is true across most mammal species. This discrepancy does not become apparent until early adulthood (in the 30s or 40s in humans), with males showing greater age-related reductions in delta wave activity than females. ### Brain localization and biochemistry Delta waves can arise either in the thalamus or in the cortex. When associated with the thalamus, they are thought to arise in coordination with the reticular formation. Maquet, P., Degueldre, C., Delfiore, G., Aerts, J., Peters, J. M., Luxen, A., et al. (1997). Functional neuroanatomy of human slow wave sleep.
https://en.wikipedia.org/wiki/Delta_wave
passage: The usual strict total order on N, "less than" (denoted by "<"), can be defined in terms of addition via the rule . Equivalently, we get a definitional conservative extension of Q by taking "<" as primitive and adding this rule as an eighth axiom; this system is termed "Robinson arithmetic R" in . A different extension of Q, which we temporarily call Q+, is obtained if we take "<" as primitive and add (instead of the last definitional axiom) the following three axioms to axioms (1)–(7) of Q: - ¬(x < 0) - x < Sy ↔ (x < y ∨ x = y) - x < y ∨ x = y ∨ y < x Q+ is still a conservative extension of Q, in the sense that any formula provable in Q+ not containing the symbol "<" is already provable in Q. (Adding only the first two of the above three axioms to Q gives a conservative extension of Q that is equivalent to what calls Q*. See also , but note that the second of the above three axioms cannot be deduced from "the pure definitional extension" of Q obtained by adding only the axiom .) Among the axioms (1)–(7) of Q, axiom (3) needs an inner existential quantifier. gives an axiomatization that has only (implicit) outer universal quantifiers, by dispensing with axiom (3) of Q but adding the above three axioms with < as primitive.
https://en.wikipedia.org/wiki/Robinson_arithmetic
passage: It was difficult to speed up using specialized hardware because it involves a pipeline of complex steps, requiring data addressing, decision-making, and computation capabilities typically only provided by CPUs (although dedicated circuits for speeding up particular operations were proposed ). Supercomputers or specially designed multi-CPU computers or clusters were sometimes used for ray tracing. In 1981, James H. Clark and Marc Hannah designed the Geometry Engine, a VLSI chip for performing some of the steps of the 3D rasterization pipeline, and started the company Silicon Graphics (SGI) to commercialize this technology. Home computers and game consoles in the 1980s contained graphics coprocessors that were capable of scrolling and filling areas of the display, and drawing sprites and lines, though they were not useful for rendering realistic images. Towards the end of the 1980s PC graphics cards and arcade games with 3D rendering acceleration began to appear, and in the 1990s such technology became commonplace. Today, even low-power mobile processors typically incorporate 3D graphics acceleration features. GPUs The 3D graphics accelerators of the 1990s evolved into modern GPUs. GPUs are general-purpose processors, like CPUs, but they are designed for tasks that can be broken into many small, similar, mostly independent sub-tasks (such as rendering individual pixels) and performed in parallel.
https://en.wikipedia.org/wiki/Rendering_%28computer_graphics%29
passage: The point at the intersection of some column and row means that the value of the function at this point is included in the sum for the given coefficient of the polynomial (see figure). We call this table $$ T_N $$ , where N is the number of variables of the function. There is a pattern that allows you to get a table for a function of $$ N+1 $$ variables, having a table for a function of $$ N $$ variables. The new table $$ T_{N+1} $$ is arranged as a 2 × 2 matrix of $$ T_N $$ tables, and the right upper block of the matrix is cleared. #### Lattice-theoretic interpretation Consider the columns of a table $$ T_N $$ as corresponding to elements of a Boolean lattice of size $$ 2^N $$ . For each column $$ f_M $$ express number M as a binary number $$ M_2 $$ , then $$ f_M \le f_K $$ if and only if $$ M_2 \vee K_2 = K_2 $$ , where $$ \vee $$ denotes bitwise OR. If the rows of table $$ T_N $$ are numbered, from top to bottom, with the numbers from 0 to $$ 2^N - 1 $$ , then the tabular content of row number R is the ideal generated by element $$ f_R $$ of the lattice. Note incidentally that the overall pattern of a table $$ T_N $$ is that of a logical matrix Sierpiński triangle.
https://en.wikipedia.org/wiki/Zhegalkin_polynomial
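A small sketch of the recursive construction and of the lattice reading of the table; both the block recursion and the submask test follow the description above, with our own choice of orientation and indexing.

```python
import numpy as np

def zhegalkin_table(N):
    T = np.array([[1]], dtype=int)          # 1 x 1 base case
    for _ in range(N):
        zero = np.zeros_like(T)
        T = np.block([[T, zero],            # 2 x 2 block matrix of T_N copies
                      [T, T]])              # with the upper-right block cleared
    return T

T3 = zhegalkin_table(3)
print(T3)                                   # Sierpinski-triangle pattern

# Lattice view: entry (R, M) is 1 exactly when M_2 OR R_2 == R_2,
# i.e. M is a bitwise submask of R.
R, M = np.meshgrid(range(8), range(8), indexing='ij')
assert np.array_equal(T3, ((M | R) == R).astype(int))
```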
passage: Another important model of percolation, in a different universality class altogether, is directed percolation, where connectivity along a bond depends upon the direction of the flow. Another variation of recent interest is Explosive Percolation, whose thresholds are listed on that page. Over the last several decades, a tremendous amount of work has gone into finding exact and approximate values of the percolation thresholds for a variety of these systems. Exact thresholds are only known for certain two-dimensional lattices that can be broken up into a self-dual array, such that under a triangle-triangle transformation, the system remains the same. Studies using numerical methods have led to numerous improvements in algorithms and several theoretical discoveries. Simple duality in two dimensions implies that all fully triangulated lattices (e.g., the triangular, union jack, cross dual, martini dual and asanoha or 3-12 dual, and the Delaunay triangulation) all have site thresholds of 1/2, and self-dual lattices (square, martini-B) have bond thresholds of 1/2. The notation such as (4,8²) comes from Grünbaum and Shephard, and indicates that around a given vertex, going in the clockwise direction, one encounters first a square and then two octagons. Besides the eleven Archimedean lattices composed of regular polygons with every site equivalent, many other more complicated lattices with sites of different classes have been studied. Error bars in the last digit or digits are shown by numbers in parentheses.
https://en.wikipedia.org/wiki/Percolation_threshold
passage: Magnetic storage is non-volatile. The information is accessed using one or more read/write heads which may contain one or more recording transducers. A read/write head only covers a part of the surface so that the head or medium or both must be moved relative to another in order to access data. In modern computers, magnetic storage will take these forms: - Magnetic disk; - Floppy disk, used for off-line storage; - Hard disk drive, used for secondary storage. - Magnetic tape, used for tertiary and off-line storage; - Carousel memory (magnetic rolls). In early computers, magnetic storage was also used as: - Primary storage in a form of magnetic memory, or core memory, core rope memory, thin-film memory and/or twistor memory; - Tertiary (e.g. NCR CRAM) or off line storage in the form of magnetic cards; - Magnetic tape was then often used for secondary storage. Magnetic storage does not have a definite limit of rewriting cycles like flash storage and re-writeable optical media, as altering magnetic fields causes no physical wear. Rather, their life span is limited by mechanical parts. Optical Optical storage, the typical optical disc, stores information in deformities on the surface of a circular disc and reads this information by illuminating the surface with a laser diode and observing the reflection. Optical disc storage is non-volatile. The deformities may be permanent (read only media), formed once (write once media) or reversible (recordable or read/write media).
https://en.wikipedia.org/wiki/Computer_data_storage
passage: For example, having decided to use [n]q as the q-analog of n, one may define the q-analog of the factorial, known as the q-factorial, by $$ \begin{align} \, [n]_q! & =[1]_q \cdot [2]_q \cdots [n-1]_q \cdot [n]_q \\[6pt] & =\frac{1-q}{1-q} \cdot \frac{1-q^2}{1-q} \cdots \frac{1-q^{n-1}}{1-q} \cdot \frac{1-q^n}{1-q} \\[6pt] & =1\cdot (1+q)\cdots (1+q+\cdots + q^{n-2}) \cdot (1+q+\cdots + q^{n-1}). \end{align} $$ This q-analog appears naturally in several contexts. Notably, while n! counts the number of permutations of length n, [n]q! counts permutations while keeping track of the number of inversions. That is, if inv(w) denotes the number of inversions of the permutation w and Sn denotes the set of permutations of length n, we have $$ \sum_{w \in S_n} q^{\text{inv}(w)} = [n]_q ! . $$ In particular, one recovers the usual factorial by taking the limit as $$ q\rightarrow 1 $$ .
https://en.wikipedia.org/wiki/Q-analog
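A symbolic check of the identity above for a small n; the helper names are ours and n = 4 is an arbitrary choice.

```python
from itertools import permutations
import sympy as sp

q = sp.symbols('q')

def q_int(n):
    return sum(q**i for i in range(n))            # [n]_q = 1 + q + ... + q^(n-1)

def q_factorial(n):
    result = sp.Integer(1)
    for k in range(1, n + 1):
        result *= q_int(k)
    return sp.expand(result)

def inversions(w):
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w)) if w[i] > w[j])

n = 4
lhs = sp.expand(sum(q**inversions(w) for w in permutations(range(1, n + 1))))
print(sp.expand(lhs - q_factorial(n)))            # 0: sum over S_n of q^inv(w) = [n]_q!
```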
passage: Network research on dyads may concentrate on structure of the relationship (e.g. multiplexity, strength), social equality, and tendencies toward reciprocity/mutuality. Triadic level: Add one individual to a dyad, and you have a triad. Research at this level may concentrate on factors such as balance and transitivity, as well as social equality and tendencies toward reciprocity/mutuality. In the balance theory of Fritz Heider the triad is the key to social dynamics. The discord in a rivalrous love triangle is an example of an unbalanced triad, likely to change to a balanced triad by a change in one of the relations. The dynamics of social friendships in society has been modeled by balancing triads. The study is carried forward with the theory of signed graphs. Actor level: The smallest unit of analysis in a social network is an individual in their social setting, i.e., an "actor" or "ego." Egonetwork analysis focuses on network characteristics, such as size, relationship strength, density, centrality, prestige and roles such as isolates, liaisons, and bridges. Such analyses, are most commonly used in the fields of psychology or social psychology, ethnographic kinship analysis or other genealogical studies of relationships between individuals. Subset level: Subset levels of network research problems begin at the micro-level, but may cross over into the meso-level of analysis. Subset level research may focus on distance and reachability, cliques, cohesive subgroups, or other group actions or behavior.
https://en.wikipedia.org/wiki/Social_network
passage: However, while useful for photo-like images, a simple anti-aliasing approach (such as super-sampling and then averaging) may actually worsen the appearance of some types of line art or diagrams (making the image appear fuzzy), especially where most lines are horizontal or vertical. In these cases, a prior grid-fitting step may be useful (see hinting). In general, super-sampling is a technique of collecting data points at a greater resolution (usually by a power of two) than the final data resolution. These data points are then combined (down-sampled) to the desired resolution, often just by a simple average. The combined data points have less visible aliasing artifacts (or moiré patterns). Full-scene anti-aliasing by super-sampling usually means that each full frame is rendered at double (2x) or quadruple (4x) the display resolution, and then down-sampled to match the display resolution. Thus, a 2x FSAA would render 4 super-sampled pixels for each single pixel of each frame. Rendering at larger resolutions will produce better results; however, more processor power is needed, which can degrade performance and frame rate. Sometimes FSAA is implemented in hardware in such a way that a graphical application is unaware the images are being super-sampled and then down-sampled before being displayed.
https://en.wikipedia.org/wiki/Spatial_anti-aliasing
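A tiny sketch of 2x super-sampling by simple averaging: each 2x2 block of sub-pixels is collapsed to one display pixel, so four samples back each output pixel; the random array merely stands in for a frame rendered at double resolution.

```python
import numpy as np

def downsample_2x(img):
    h, w = img.shape
    # average each 2x2 block of super-sampled pixels into one display pixel
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

hires = np.random.rand(8, 8)        # stand-in for a frame rendered at 2x resolution
lores = downsample_2x(hires)        # the 4x4 frame actually displayed
print(lores.shape)                  # (4, 4)
```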
passage: ### Purple bacteria Purple bacteria contain a single photosystem that is structurally related to PSII in cyanobacteria and chloroplasts: P870 → P870* → ubiquinone → cyt bc1 → cyt c2 → P870 This is a cyclic process in which electrons are removed from an excited chlorophyll molecule (bacteriochlorophyll; P870), passed through an electron transport chain to a proton pump (cytochrome bc1 complex; similar to the chloroplastic one), and then returned to the chlorophyll molecule. The result is a proton gradient that is used to make ATP via ATP synthase. As in cyanobacteria and chloroplasts, this is a solid-state process that depends on the precise orientation of various functional groups within a complex transmembrane macromolecular structure. To make NADPH, purple bacteria use an external electron donor (hydrogen, hydrogen sulfide, sulfur, sulfite, or organic molecules such as succinate and lactate) to feed electrons into a reverse electron transport chain. ### Green sulfur bacteria Green sulfur bacteria contain a photosystem that is analogous to PSI in chloroplasts: P840 → P840* → ferredoxin → NADH ↑ ↓ cyt c553 ← bc1 ← menaquinol There are two pathways of electron transfer. In cyclic electron transfer, electrons are removed from an excited chlorophyll molecule, passed through an electron transport chain to a proton pump, and then returned to the chlorophyll.
https://en.wikipedia.org/wiki/Light-dependent_reactions
passage: There has been great progress in higher dimensions, although the general problem remains open. In particular, Birkar, Cascini, Hacon, and McKernan (2010) proved that every variety of general type over a field of characteristic zero has a minimal model. ## Uniruled varieties A variety is called uniruled if it is covered by rational curves. A uniruled variety does not have a minimal model, but there is a good substitute: Birkar, Cascini, Hacon, and McKernan showed that every uniruled variety over a field of characteristic zero is birational to a Fano fiber space. This leads to the problem of the birational classification of Fano fiber spaces and (as the most interesting special case) Fano varieties. By definition, a projective variety X is Fano if the anticanonical bundle $$ K_X^* $$ is ample. Fano varieties can be considered the algebraic varieties which are most similar to projective space. In dimension 2, every Fano variety (known as a Del Pezzo surface) over an algebraically closed field is rational. A major discovery in the 1970s was that starting in dimension 3, there are many Fano varieties which are not rational. In particular, smooth cubic 3-folds are not rational by Clemens–Griffiths (1972), and smooth quartic 3-folds are not rational by Iskovskikh–Manin (1971). Nonetheless, the problem of determining exactly which Fano varieties are rational is far from solved.
https://en.wikipedia.org/wiki/Birational_geometry
passage: Disruption of a single gene may also result from integration of genomic material from a DNA virus or retrovirus, and such an event may also result in the expression of viral oncogenes in the affected cell and its descendants. ### DNA damage DNA damage is considered to be the primary cause of cancer. More than 60,000 new naturally-occurring instances of DNA damage arise, on average, per human cell, per day, due to endogenous cellular processes (see article DNA damage (naturally occurring)). Additional DNA damage can arise from exposure to exogenous agents. As one example of an exogenous carcinogenic agent, tobacco smoke causes increased DNA damage, and this DNA damage likely cause the increase of lung cancer due to smoking. In other examples, UV light from solar radiation causes DNA damage that is important in melanoma, Helicobacter pylori infection produces high levels of reactive oxygen species that damage DNA and contribute to gastric cancer, and the Aspergillus flavus metabolite aflatoxin is a DNA damaging agent that is causative in liver cancer. DNA damage can also be caused by substances produced in the body. Macrophages and neutrophils in an inflamed colonic epithelium are the source of reactive oxygen species causing the DNA damage that initiates colonic tumorigenesis, and bile acids, at high levels in the colons of humans eating a high-fat diet, also cause DNA damage and contribute to colon cancer. Such exogenous and endogenous sources of DNA damage are indicated in the boxes at the top of the figure in this section. The central role of DNA damage in progression to cancer is indicated at the second level of the figure.
https://en.wikipedia.org/wiki/Carcinogenesis
passage: One area on the surface of the metal acts as the anode, which is where the oxidation (corrosion) occurs. At the anode, the metal gives up electrons. Fe → Fe2+ + 2 e− Electrons are transferred from iron, reducing oxygen in the atmosphere into water on the cathode, which is placed in another region of the metal. O2 + 4 H+ + 4 e− → 2 H2O Global reaction for the process: 2 Fe + O2 + 4 H+ → 2 Fe2+ + 2 H2O Standard emf for iron rusting: E° = E° (cathode) − E° (anode) E° = 1.23 V − (−0.44 V) = 1.67 V Iron corrosion takes place in an acid medium; H+ ions come from reaction between carbon dioxide in the atmosphere and water, forming carbonic acid. Fe2+ ions oxidize further, following this equation: 4 Fe2+ + O2 + (4+2x) H2O → 2 Fe2O3·xH2O + 8 H+ Iron(III) oxide hydrate is known as rust. The concentration of water associated with iron oxide varies, thus the chemical formula is represented by Fe2O3·xH2O. An electric circuit is formed as passage of electrons and ions occurs; thus if an electrolyte is present it will facilitate oxidation, explaining why rusting is quicker in salt water. ### Corrosion of common metals Coinage metals, such as copper and silver, slowly corrode through use.
https://en.wikipedia.org/wiki/Electrochemistry
passage: Cartan's ability to handle many other types of fibers and groups allows one to credit him with the first general idea of a fiber bundle, although he never defined it explicitly. This concept has become one of the most important in all fields of modern mathematics, chiefly in global differential geometry and in algebraic and differential topology. Cartan used it to formulate his definition of a connection, which is now used universally and has superseded previous attempts by several geometers, made after 1917, to find a type of "geometry" more general than the Riemannian model and perhaps better adapted to a description of the universe along the lines of general relativity. Cartan showed how to use his concept of connection to obtain a much more elegant and simple presentation of Riemannian geometry. His chief contribution to the latter, however, was the discovery and study of the symmetric Riemann spaces, one of the few instances in which the initiator of a mathematical theory was also the one who brought it to its completion. Symmetric Riemann spaces may be defined in various ways, the simplest of which postulates the existence around each point of the space of a "symmetry" that is involutive, leaves the point fixed, and preserves distances. The unexpected fact discovered by Cartan is that it is possible to give a complete description of these spaces by means of the classification of the simple Lie groups; it should therefore not be surprising that in various areas of mathematics, such as automorphic functions and analytic number theory (apparently far removed from differential geometry), these spaces are playing a part that is becoming increasingly important.
https://en.wikipedia.org/wiki/%C3%89lie_Cartan
passage: Similarly an n-type (of $$ \mathcal{M} $$ ) over A is defined to be a set p(x1,...,xn) = p(x) of formulas in L(A), each having its free variables occurring only among the given n free variables x1,...,xn, such that for every finite subset p0(x) ⊆ p(x) there are some elements b1,...,bn ∈ M with $$ \mathcal{M}\models p_0(b_1,\ldots,b_n) $$ . A complete type of $$ \mathcal{M} $$ over A is one that is maximal with respect to inclusion. Equivalently, for every $$ \phi(\boldsymbol{x}) \in L(A,\boldsymbol{x}) $$ either $$ \phi(\boldsymbol{x}) \in p(\boldsymbol{x}) $$ or $$ \lnot\phi(\boldsymbol{x}) \in p(\boldsymbol{x}) $$ . Any non-complete type is called a partial type. So, the word type in general refers to any n-type, partial or complete, over any chosen set of parameters (possibly the empty set). An n-type p(x) is said to be realized in $$ \mathcal{M} $$ if there is an element b ∈ Mn such that $$ \mathcal{M}\models p(\boldsymbol{b}) $$ .
https://en.wikipedia.org/wiki/Type_%28model_theory%29
passage: ## Common extensions In practice the classical relational algebra described above is extended with various operations such as outer joins, aggregate functions and even transitive closure. ### Outer joins Whereas the result of a join (or inner join) consists of tuples formed by combining matching tuples in the two operands, an outer join contains those tuples and additionally some tuples formed by extending an unmatched tuple in one of the operands by "fill" values for each of the attributes of the other operand. Outer joins are not considered part of the classical relational algebra discussed so far. The operators defined in this section assume the existence of a null value, ω, which we do not define, to be used for the fill values; in practice this corresponds to the NULL in SQL. In order to make subsequent selection operations on the resulting table meaningful, a semantic meaning needs to be assigned to nulls; in Codd's approach the propositional logic used by the selection is extended to a three-valued logic, although we elide those details in this article. Three outer join operators are defined: left outer join, right outer join, and full outer join. (The word "outer" is sometimes omitted.) #### Left outer join The left outer join (⟕) is written as R ⟕ S where R and S are relations.
https://en.wikipedia.org/wiki/Relational_algebra
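A minimal sketch of a left outer join over plain Python dictionaries, with None standing in for the fill value ω; the relations and attribute names are invented for illustration.

```python
def left_outer_join(R, S, on):
    s_attrs = [a for a in (S[0].keys() if S else []) if a != on]
    result = []
    for r in R:
        matches = [s for s in S if s[on] == r[on]]
        if matches:
            result.extend({**r, **{a: s[a] for a in s_attrs}} for s in matches)
        else:
            # an unmatched left tuple is kept, padded with "null" fill values
            result.append({**r, **{a: None for a in s_attrs}})
    return result

employees = [{'name': 'Ann', 'dept': 'A'}, {'name': 'Bob', 'dept': 'C'}]
depts = [{'dept': 'A', 'manager': 'Eve'}]
print(left_outer_join(employees, depts, on='dept'))
# Bob's tuple survives with manager=None, as a left outer join requires
```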
passage: Run the original algorithm on this expanded set of hashes. Doing so yields the weighted Jaccard Index as the collision probability. $$ J_\mathcal{W}(x,y) = \frac{\sum_i \min(x_i,y_i)}{\sum_i \max(x_i,y_i)} $$ Further extensions that achieve this collision probability on real weights with better runtime have been developed, one for dense data, and another for sparse data. Another family of extensions use exponentially distributed hashes. A uniformly random hash between 0 and 1 can be converted to follow an exponential distribution by CDF inversion. This method exploits the many beautiful properties of the minimum of a set of exponential variables. $$ H(x) = \underset{i}{\operatorname{arg\,min}} \frac{-\log(h(i))}{x_i} $$ This yields as its collision probability the probability Jaccard index $$ J_\mathcal{P}(x,y) = \sum_{x_i\neq 0 \atop y_i \neq 0} \frac{1}{\sum_{j} \max\left(\frac{x_j}{x_i}, \frac{y_j}{y_i}\right)} $$ ## Min-wise independent permutations
https://en.wikipedia.org/wiki/MinHash
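A hedged sketch of the exponential-hash construction: each element gets a pseudo-random uniform value in (0, 1), and the sketch keeps the element minimizing −log(h(i))/x_i. The hashing scheme and the weighted sets below are our own illustrative choices, and the collision frequency over many seeds estimates the probability Jaccard index.

```python
import hashlib
import math

def uniform_hash(element, seed):
    # deterministic pseudo-uniform value in (0, 1) for the pair (element, seed)
    digest = hashlib.sha256(f"{seed}:{element}".encode()).hexdigest()
    return (int(digest, 16) % (2**53) + 1) / (2**53 + 1)

def weighted_minhash(x, seed):
    # H(x) = argmin_i  -log(h(i)) / x_i  over elements with positive weight
    return min(x, key=lambda i: -math.log(uniform_hash(i, seed)) / x[i])

x = {'a': 2.0, 'b': 1.0, 'c': 0.5}
y = {'a': 2.0, 'b': 3.0}
trials = 2000
hits = sum(weighted_minhash(x, s) == weighted_minhash(y, s) for s in range(trials))
print(hits / trials)    # empirical estimate of the probability Jaccard index J_P(x, y)
```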
passage: In addition, this technique removes the requirement to explicitly calculate Jacobians, which for complex functions can be a difficult task in itself (i.e., requiring complicated derivatives if done analytically or being computationally costly if done numerically), if not impossible (if those functions are not differentiable). #### Sigma points For a random vector $$ \mathbf{x}=(x_1, \dots, x_L) $$ , sigma points are any set of vectors $$ \{\mathbf{s}_0,\dots, \mathbf{s}_N \}=\bigl\{\begin{pmatrix} s_{0,1}& s_{0,2}&\ldots& s_{0,L} \end{pmatrix}, \dots, \begin{pmatrix} s_{N,1}& s_{N,2}&\ldots& s_{N,L} \end{pmatrix}\bigr\} $$ attributed with - first-order weights $$ W_0^a, \dots, W_N^a $$ that fulfill 1. $$ \sum_{j=0}^N W_j^a = 1 $$ 2. for all $$ i=1, \dots, L $$ : $$ E[x_i]=\sum_{j=0}^N W_j^a s_{j,i} $$ - second-order weights $$ W_0^c, \dots, W_N^c $$ that fulfill 1. $$ \sum_{j=0}^N W_j^c = 1 $$
https://en.wikipedia.org/wiki/Kalman_filter
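The conditions above do not single out one construction, so as a concrete (and common) example the sketch below builds the classic symmetric set of 2L+1 sigma points with a single scaling parameter kappa; here the first- and second-order weights coincide, and the mean and covariance are made-up test values.

```python
import numpy as np

def sigma_points(m, P, kappa=1.0):
    L = len(m)
    S = np.linalg.cholesky((L + kappa) * P)      # (L + kappa) P = S S^T
    pts = np.vstack([m] + [m + S[:, j] for j in range(L)]
                        + [m - S[:, j] for j in range(L)])
    W = np.full(2 * L + 1, 1.0 / (2.0 * (L + kappa)))
    W[0] = kappa / (L + kappa)                   # weights sum to 1
    return pts, W

m = np.array([1.0, 2.0])
P = np.array([[1.0, 0.2],
              [0.2, 0.5]])
pts, W = sigma_points(m, P)
print(np.allclose(W @ pts, m))                   # weighted mean reproduces E[x]
d = pts - m
print(np.allclose((W[:, None] * d).T @ d, P))    # weighted covariance reproduces P
```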
passage: ## Relation to other stochastic processes If $$ W(t) $$ is a standard Wiener process (i.e., for $$ t \geq 0 $$ , $$ W(t) $$ is normally distributed with expected value $$ 0 $$ and variance $$ t $$ , and the increments are stationary and independent), then $$ B(t) = W(t) - \frac{t}{T} W(T)\, $$ is a Brownian bridge for $$ t \in [0, T] $$ . It is independent of $$ W(T) $$ . Conversely, if $$ B(t) $$ is a Brownian bridge for $$ t \in [0, 1] $$ and $$ Z $$ is a standard normal random variable independent of $$ B $$ , then the process $$ W(t) = B(t) + tZ\, $$ is a Wiener process for $$ t \in [0, 1] $$ . More generally, a Wiener process $$ W(t) $$ for $$ t \in [0, T] $$ can be decomposed into $$ W(t) = \sqrt{T}B\left(\frac{t}{T}\right) + \frac{t}{\sqrt{T}} Z. $$ Another representation of the Brownian bridge based on the Brownian motion is, for $$ t \in [0, T] $$ $$
https://en.wikipedia.org/wiki/Brownian_bridge
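A short simulation sketch of the first identity: build a Wiener path from independent Gaussian increments and subtract (t/T)·W(T), which pins the path to zero at both ends; the grid size and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 1000
t = np.linspace(0.0, T, n + 1)

dW = rng.normal(0.0, np.sqrt(T / n), size=n)   # independent Wiener increments
W = np.concatenate([[0.0], np.cumsum(dW)])     # W(0) = 0

B = W - (t / T) * W[-1]                        # B(t) = W(t) - (t/T) W(T)
print(B[0], B[-1])                             # both exactly 0 by construction
```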
passage: Just as cosmologists have a sample size of one universe, biologists have a sample size of one fossil record. The problem is closely related to the anthropic principle. Another problem of limited sample sizes in astronomy, here practical rather than essential, is in the Titius–Bode law on spacing of satellites in an orbital system. Originally observed for the Solar System, the difficulty in observing other solar systems has limited data to test this.
https://en.wikipedia.org/wiki/Cosmic_variance
passage: In particular, the pullback of a line bundle is a line bundle. (Briefly, the fiber of $$ f^*E $$ at a point $$ x\in X $$ is the fiber of $$ E $$ at $$ f(x)\in Y $$ .) The notions described in this article are related to this construction in the case of a morphism to projective space $$ f\colon X \to \mathbb P^n, $$ with $$ E=\mathcal{O}(1) $$ the line bundle on projective space whose global sections are the homogeneous polynomials of degree 1 (that is, linear functions) in variables $$ x_0,\ldots,x_n $$ . The line bundle $$ \mathcal{O}(1) $$ can also be described as the line bundle associated to a hyperplane in $$ \mathbb P^n $$ (because the zero set of a section of $$ \mathcal{O}(1) $$ is a hyperplane). If $$ f $$ is a closed immersion, for example, it follows that the pullback $$ f^*O(1) $$ is the line bundle on $$ X $$ associated to a hyperplane section (the intersection of $$ X $$ with a hyperplane in $$ \mathbb{P}^n $$ ). ### Basepoint-free line bundles Let $$ X $$ be a scheme over a field $$ k $$ (for example, an algebraic variety) with a line bundle $$ L $$ .
https://en.wikipedia.org/wiki/Ample_line_bundle
passage: The eccentricity can also be defined in terms of the intersection of a plane and a double-napped cone associated with the conic section. If the cone is oriented with its axis vertical, the eccentricity is $$ e = \frac{\sin \beta}{\sin \alpha}, \ \ 0<\alpha<90^\circ, \ 0\le\beta\le90^\circ \ , $$ where β is the angle between the plane and the horizontal and α is the angle between the cone's slant generator and the horizontal. For $$ \beta=0 $$ the plane section is a circle, for $$ \beta=\alpha $$ a parabola. (The plane must not meet the vertex of the cone.) The linear eccentricity of an ellipse or hyperbola, denoted $$ c $$ (or sometimes $$ f $$ or $$ e $$ ), is the distance between its center and either of its two foci. The eccentricity can be defined as the ratio of the linear eccentricity to the semimajor axis $$ a $$ : that is, $$ e = \frac{c}{a} $$ (lacking a center, the linear eccentricity for parabolas is not defined). A parabola can be treated as a limiting case of an ellipse or a hyperbola with one focal point at infinity. ## Alternative names The eccentricity is sometimes called the first eccentricity to distinguish it from the second eccentricity and third eccentricity defined for ellipses (see below). The eccentricity is also sometimes called the numerical eccentricity.
https://en.wikipedia.org/wiki/Eccentricity_%28mathematics%29
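A small worked example for an ellipse with assumed semi-axes a = 5 and b = 3: the linear eccentricity is c = sqrt(a² − b²) and the eccentricity is c/a.

```python
import math

a, b = 5.0, 3.0                 # illustrative semi-major and semi-minor axes
c = math.sqrt(a**2 - b**2)      # linear eccentricity: centre-to-focus distance, 4.0
e = c / a                       # eccentricity, 0.8
print(c, e)
```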
passage: Earth's crust is soaked with water, and water plays an important role in the development of shear zones. Plate tectonics requires weak surfaces in the crust along which crustal slices can move, and it may well be that such weakening never took place on Venus because of the absence of water. However, some researchers remain convinced that plate tectonics is or was once active on this planet. Mars Mars is considerably smaller than Earth and Venus, and there is evidence for ice on its surface and in its crust. In the 1990s, it was proposed that Martian Crustal Dichotomy was created by plate tectonic processes. Scientists have since determined that it was created either by upwelling within the Martian mantle that thickened the crust of the Southern Highlands and formed Tharsis or by a giant impact that excavated the Northern Lowlands. Valles Marineris may be a tectonic boundary. Observations made of the magnetic field of Mars by the Mars Global Surveyor spacecraft in 1999 showed patterns of magnetic striping discovered on this planet. Some scientists interpreted these as requiring plate tectonic processes, such as seafloor spreading. However, their data failed a "magnetic reversal test", which is used to see if they were formed by flipping polarities of a global magnetic field. ### Icy moons ### Exoplanets On Earth-sized planets, plate tectonics is more likely if there are oceans of water.
https://en.wikipedia.org/wiki/Plate_tectonics
passage: Note that this set of small measure need not be contiguous; a probability distribution can have several concentrations of mass in intervals of small measure, and the entropy may still be low no matter how widely scattered those intervals are. This is not the case with the variance: variance measures the concentration of mass about the mean of the distribution, and a low variance means that a considerable mass of the probability distribution is concentrated in a contiguous interval of small measure. To formalize this distinction, we say that two probability density functions $$ \phi_1 $$ and $$ \phi_2 $$ are equimeasurable if $$ \forall \delta > 0,\,\mu\{x\in\mathbb R|\phi_1(x)\ge\delta\} = \mu\{x\in\mathbb R|\phi_2(x)\ge\delta\}, $$ where $$ \mu $$ is the Lebesgue measure. Any two equimeasurable probability density functions have the same Shannon entropy, and in fact the same Rényi entropy, of any order. The same is not true of variance, however. Any probability density function has a radially decreasing equimeasurable "rearrangement" whose variance is less (up to translation) than any other rearrangement of the function; and there exist rearrangements of arbitrarily high variance (all having the same entropy).
https://en.wikipedia.org/wiki/Entropic_uncertainty
passage: Conformal symmetry is a property of spacetime that ensures angles remain unchanged even when distances are altered. If you stretch, compress, or otherwise distort spacetime, the local angular relationships between lines or curves stay the same. This idea extends the familiar Poincaré group —which accounts for rotations, translations, and boosts—into the more comprehensive conformal group. Conformal symmetry encompasses special conformal transformations and dilations. In three spatial plus one time dimensions, conformal symmetry has 15 degrees of freedom: ten for the Poincaré group, four for special conformal transformations, and one for a dilation. Harry Bateman and Ebenezer Cunningham were the first to study the conformal symmetry of Maxwell's equations. They called a generic expression of conformal symmetry a spherical wave transformation. General relativity in two spacetime dimensions also enjoys conformal symmetry.
https://en.wikipedia.org/wiki/Conformal_symmetry
passage: ## Basic constructions In addition to the above concrete examples, there are a number of standard linear algebraic constructions that yield vector spaces related to given ones. ### Subspaces and quotient spaces A nonempty subset $$ W $$ of a vector space $$ V $$ that is closed under addition and scalar multiplication (and therefore contains the $$ \mathbf{0} $$ -vector of $$ V $$ ) is called a linear subspace of $$ V $$ , or simply a subspace of $$ V $$ , when the ambient space is unambiguously a vector space. Subspaces of $$ V $$ are vector spaces (over the same field) in their own right. The intersection of all subspaces containing a given set $$ S $$ of vectors is called its span, and it is the smallest subspace of $$ V $$ containing the set $$ S $$ . Expressed in terms of elements, the span is the subspace consisting of all the linear combinations of elements of $$ S $$ . Linear subspaces of dimension 1 and 2 are referred to as a line (also vector line) and a plane, respectively. If W is an n-dimensional vector space, any subspace of dimension one less, i.e., of dimension $$ n-1 $$ , is called a hyperplane. The counterpart to subspaces is the quotient vector space.
https://en.wikipedia.org/wiki/Vector_space
passage: This amounts to choosing an axis vector for the rotations; the defining Jacobi identity is a well-known property of cross products. The earliest example of an infinitesimal transformation that may have been recognised as such was in Euler's theorem on homogeneous functions. Here it is stated that a function F of n variables x1, ..., xn that is homogeneous of degree r, satisfies $$ \Theta F=rF \, $$ with $$ \Theta=\sum_i x_i{\partial\over\partial x_i}, $$ the Theta operator. That is, from the property $$ F(\lambda x_1,\dots, \lambda x_n)=\lambda^r F(x_1,\dots,x_n)\, $$ it is possible to differentiate with respect to λ and then set λ equal to 1. This then becomes a necessary condition on a smooth function F to have the homogeneity property; it is also sufficient (by using Schwartz distributions one can reduce the mathematical analysis considerations here). This setting is typical, in that there is a one-parameter group of scalings operating; and the information is coded in an infinitesimal transformation that is a first-order differential operator. ## Operator version of Taylor's theorem The operator equation $$ e^{tD}f(x)=f(x+t)\, $$ where $$ D={d\over dx} $$ is an operator version of Taylor's theorem — and is therefore only valid under caveats about f being an analytic function.
https://en.wikipedia.org/wiki/Infinitesimal_transformation
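A quick symbolic check of Euler's theorem for an illustrative homogeneous function of degree r = 3 in two variables: applying the Theta operator returns r·F.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
F = x1**3 + 4 * x1 * x2**2                            # homogeneous of degree 3
theta_F = x1 * sp.diff(F, x1) + x2 * sp.diff(F, x2)   # Theta applied to F
print(sp.simplify(theta_F - 3 * F))                   # 0, i.e. Theta F = 3 F
```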
passage: ### Grotthuss–Draper law and Stark–Einstein law Photoexcitation is the first step in a photochemical process where the reactant is elevated to a state of higher energy, an excited state. The first law of photochemistry, known as the Grotthuss–Draper law (for chemists Theodor Grotthuss and John W. Draper), states that light must be absorbed by a chemical substance in order for a photochemical reaction to take place. According to the second law of photochemistry, known as the Stark–Einstein law (for physicists Johannes Stark and Albert Einstein), for each photon of light absorbed by a chemical system, no more than one molecule is activated for a photochemical reaction, as defined by the quantum yield. Photochemistry, website of William Reusch (Michigan State University), accessed 26 June 2016 ### Fluorescence and phosphorescence When a molecule or atom in the ground state (S0) absorbs light, one electron is excited to a higher orbital level. This electron maintains its spin according to the spin selection rule; other transitions would violate the law of conservation of angular momentum. The excitation to a higher singlet state can be from HOMO to LUMO or to a higher orbital, so that singlet excitation states S1, S2, S3... at different energies are possible. Kasha's rule stipulates that higher singlet states would quickly relax by radiationless decay or internal conversion (IC) to S1. Thus, S1 is usually, but not always, the only relevant singlet excited state.
https://en.wikipedia.org/wiki/Photochemistry
passage: This becomes more problematic with short focal length telescopes which require larger secondary mirrors. ## Comparison to Gaussian beam focus A circular laser beam with uniform intensity profile, focused by a lens, will form an Airy pattern at the focal plane of the lens. The intensity at the center of the focus will be $$ I_{0,Airy} = (P_0 A)/(\lambda^2 f^2) $$ where $$ P_0 $$ is the total power of the beam, $$ A= \pi D^2 / 4 $$ is the area of the beam ( $$ D $$ is the beam diameter), $$ \lambda $$ is the wavelength, and $$ f $$ is the focal length of the lens. A Gaussian beam transmitted through a hard aperture will be clipped. Energy is lost and edge diffraction occurs, effectively increasing the divergence. Because of these effects there is a Gaussian beam diameter which maximizes the intensity in the far field. This occurs when the $$ 1 / e^2 $$ diameter of the Gaussian is 89% of the aperture diameter, and the on axis intensity in the far field will be 81% of that produced by a uniform intensity profile.
https://en.wikipedia.org/wiki/Airy_disk
passage: Children of parent-child or sibling-sibling unions are at an increased risk compared to cousin-cousin unions. Inbreeding may result in a greater than expected phenotypic expression of deleterious recessive alleles within a population. As a result, first-generation inbred individuals are more likely to show physical and health defects, including: The isolation of a small population for a period of time can lead to inbreeding within that population, resulting in increased genetic relatedness between breeding individuals. Inbreeding depression can also occur in a large population if individuals tend to mate with their relatives, instead of mating randomly. Due to higher prenatal and postnatal mortality rates, some individuals in the first generation of inbreeding will not live on to reproduce. Over time, with isolation, such as a population bottleneck caused by purposeful (assortative) breeding or natural environmental factors, the deleterious inherited traits are culled. Island species are often very inbred, as their isolation from the larger group on a mainland allows natural selection to work on their population. This type of isolation may result in the formation of race or even speciation, as the inbreeding first removes many deleterious genes, and permits the expression of genes that allow a population to adapt to an ecosystem. As the adaptation becomes more pronounced, the new species or race radiates from its entrance into the new space, or dies out if it cannot adapt and, most importantly, reproduce. The reduced genetic diversity, for example due to a bottleneck will unavoidably increase inbreeding for the entire population.
https://en.wikipedia.org/wiki/Inbreeding
passage: Further details on sums of projectors can be found in Banerjee and Roy (2014). Also see Banerjee (2004) for application of sums of projectors in basic spherical trigonometry. ### Oblique projections The term oblique projections is sometimes used to refer to non-orthogonal projections. These projections are also used to represent spatial figures in two-dimensional drawings (see oblique projection), though not as frequently as orthogonal projections. Whereas calculating the fitted value of an ordinary least squares regression requires an orthogonal projection, calculating the fitted value of an instrumental variables regression requires an oblique projection. A projection is defined by its kernel and the basis vectors used to characterize its range (which is a complement of the kernel). When these basis vectors are orthogonal to the kernel, then the projection is an orthogonal projection. When these basis vectors are not orthogonal to the kernel, the projection is an oblique projection, or just a projection. #### A matrix representation formula for a nonzero projection operator Let $$ P \colon V \to V $$ be a linear operator such that $$ P^2 = P $$ and assume that $$ P $$ is not the zero operator. Let the vectors $$ \mathbf u_1, \ldots, \mathbf u_k $$ form a basis for the range of $$ P $$ , and assemble these vectors in the $$ n \times k $$ matrix $$ A $$ .
https://en.wikipedia.org/wiki/Projection_%28linear_algebra%29
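The passage stops just after assembling A, so as a hedged sketch the snippet below builds the familiar orthogonal projector P = A (A^T A)^{-1} A^T onto the range of A and, for contrast, an oblique projector A (B^T A)^{-1} B^T with the same range but a different kernel; the matrices A and B are arbitrary small examples with independent columns.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
P = A @ np.linalg.inv(A.T @ A) @ A.T          # orthogonal projection onto range(A)
print(np.allclose(P @ P, P), np.allclose(P, P.T))    # True True: idempotent and symmetric

B = np.array([[1.0, 0.0],
              [1.0, 2.0],
              [0.0, 1.0]])
Q = A @ np.linalg.inv(B.T @ A) @ B.T          # an oblique projection onto range(A)
print(np.allclose(Q @ Q, Q), np.allclose(Q, Q.T))    # True False: idempotent, not symmetric
```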
passage: Log loss is always greater than or equal to 0, equals 0 only in case of a perfect prediction (i.e., when $$ p_k = 1 $$ and $$ y_k = 1 $$ , or $$ p_k = 0 $$ and $$ y_k = 0 $$ ), and approaches infinity as the prediction gets worse (i.e., when $$ y_k = 1 $$ and $$ p_k \to 0 $$ or $$ y_k = 0 $$ and $$ p_k \to 1 $$ ), meaning the actual outcome is "more surprising". Since the value of the logistic function is always strictly between zero and one, the log loss is always greater than zero and less than infinity. Unlike in a linear regression, where the model can have zero loss at a point by passing through a data point (and zero loss overall if all points are on a line), in a logistic regression it is not possible to have zero loss at any point, since $$ y_k $$ is either 0 or 1, but $$ 0 < p_k < 1 $$ . These can be combined into a single expression: $$ \ell_k = -y_k\ln p_k - (1 - y_k)\ln (1 - p_k). $$ This expression is more formally known as the cross-entropy of the predicted distribution $$ \big(p_k, (1-p_k)\big) $$ from the actual distribution $$ \big(y_k, (1-y_k)\big) $$ , as probability distributions on the two-element space of (pass, fail).
https://en.wikipedia.org/wiki/Logistic_regression
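To make the per-example loss concrete, here is a short Python sketch that computes the cross-entropy $$ \ell_k = -y_k\ln p_k - (1-y_k)\ln(1-p_k) $$ averaged over a sample; the clipping constant eps and the example probabilities are arbitrary illustrative choices.

```python
import math

def log_loss(y_true, p_pred, eps=1e-15):
    """Per-example loss l_k = -y_k ln p_k - (1 - y_k) ln(1 - p_k), averaged
    over the sample. eps keeps predictions away from exactly 0 or 1."""
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1 - eps)
        total += -y * math.log(p) - (1 - y) * math.log(1 - p)
    return total / len(y_true)

# A confident correct prediction costs little; a confident wrong one costs a lot.
print(log_loss([1, 0], [0.99, 0.01]))   # about 0.01
print(log_loss([1, 0], [0.01, 0.99]))   # about 4.6
```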
passage: If $$ \mathrm H $$ is a normal algebraic subgroup of $$ \mathrm G $$ , then there exists an algebraic group $$ \mathrm G/\mathrm H $$ and a surjective morphism $$ \pi : \mathrm G \to \mathrm G/\mathrm H $$ such that $$ \mathrm H $$ is the kernel of $$ \pi $$ . Note that if the field $$ k $$ is not algebraically closed, then the morphism of groups $$ \mathrm G(k) \to \mathrm G(k)/\mathrm H(k) $$ may not be surjective (the defect of surjectivity is measured by Galois cohomology). ### Lie algebra of an algebraic group Similarly to the Lie group–Lie algebra correspondence, to an algebraic group over a field $$ k $$ is associated a Lie algebra over $$ k $$ . As a vector space, the Lie algebra is isomorphic to the tangent space at the identity element. The Lie bracket can be constructed from its interpretation as a space of derivations. ### Alternative definitions A more sophisticated definition of an algebraic group over a field $$ k $$ is that it is a group scheme over $$ k $$ (group schemes can more generally be defined over commutative rings).
https://en.wikipedia.org/wiki/Algebraic_group
passage: Despite the large values occurring in this early section of the table, some even larger numbers have been defined, such as Graham's number, which cannot be written with any small number of Knuth arrows. This number is constructed with a technique similar to applying the Ackermann function to itself recursively. This is a repeat of the above table, but with the values replaced by the relevant expression from the function definition to show the pattern clearly:

Values of A(m, n)

| m \ n | 0 | 1 | 2 | 3 | 4 | n |
|---|---|---|---|---|---|---|
| 0 | 0+1 | 1+1 | 2+1 | 3+1 | 4+1 | n + 1 |
| 1 | A(0, 1) | A(0, A(1, 0)) = A(0, 2) | A(0, A(1, 1)) = A(0, 3) | A(0, A(1, 2)) = A(0, 4) | A(0, A(1, 3)) = A(0, 5) | A(0, A(1, n−1)) |
| 2 | A(1, 1) | A(1, A(2, 0)) = A(1, 3) | A(1, A(2, 1)) = A(1, 5) | A(1, A(2, 2)) = A(1, 7) | A(1, A(2, 3)) = A(1, 9) | A(1, A(2, n−1)) |
| 3 | A(2, 1) | A(2, A(3, 0)) = A(2, 5) | A(2, A(3, 1)) = A(2, 13) | A(2, A(3, 2)) = A(2, 29) | A(2, A(3, 3)) = A(2, 61) | A(2, A(3, n−1)) |
| 4 | A(3, 1) | A(3, A(4, 0)) = A(3, 13) | A(3, A(4, 1)) = A(3, 65533) | | | |
https://en.wikipedia.org/wiki/Ackermann_function
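The pattern in the table comes straight from the two-argument recursive definition, which a memoized Python sketch can reproduce; the recursion-limit setting and the particular test values are illustrative choices.

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(100_000)  # the recursion is deep even for tiny arguments

@lru_cache(maxsize=None)
def ackermann(m, n):
    """Two-argument Ackermann(-Peter) function tabulated above."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# Reproduce a few table entries:
assert ackermann(2, 3) == 9
assert ackermann(3, 3) == 61
assert ackermann(3, 4) == 125
```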
passage: - SUN workstation – Andy Bechtolsheim designed the SUN workstation, for the Stanford University Network communications project as a personal CAD workstation, which led to Sun Microsystems. - MIMO - Arogyaswami Paulraj and Thomas Kailath invented multiple-input and multiple-output (MIMO) radio communications, which involves simultaneously using multiple antennas on receivers and transmitters. Invented in 1992, MIMO is an essential element in many modern wireless technologies today. ### Businesses and entrepreneurship Stanford is one of the most successful universities worldwide in creating companies and licensing its inventions to existing companies, and it is often considered the model for technology transfer. Timothy Lenoir. Inventing the entrepreneurial university: Stanford and the co-evolution of Silicon Valley pp. 88–128 in Building Technology Transfer within Research Universities: An Entrepreneurial Approach Edited by Thomas J. Allen and Rory P. O'Shea. Cambridge University Press, 2014. Stanford's Office of Technology Licensing is responsible for commercializing university research, intellectual property, and university-developed projects. The university is described as having a strong venture culture in which students are encouraged, and often funded, to launch their own companies. Companies founded by Stanford alumni generate more than $2.7 trillion in annual revenue and have created some 5.4 million jobs since the 1930s. When combined, these companies would form the tenth-largest economy in the world.
https://en.wikipedia.org/wiki/Stanford_University
passage: Although a single substrate is involved, the existence of a modified enzyme intermediate means that the mechanism of catalase is actually a ping–pong mechanism, a type of mechanism that is discussed in the Multi-substrate reactions section below.
### Michaelis–Menten kinetics
As enzyme-catalysed reactions are saturable, their rate of catalysis does not show a linear response to increasing substrate. If the initial rate of the reaction is measured over a range of substrate concentrations (denoted as [S]), the initial reaction rate ( $$ v_0 $$ ) increases as [S] increases, as shown on the right. However, as [S] gets higher, the enzyme becomes saturated with substrate and the initial rate reaches Vmax, the enzyme's maximum rate. The Michaelis–Menten kinetic model of a single-substrate reaction is shown on the right. There is an initial bimolecular reaction between the enzyme E and substrate S to form the enzyme–substrate complex ES. The rate of enzymatic reaction increases with the increase of the substrate concentration up to a certain level called Vmax; at Vmax, an increase in substrate concentration does not cause any increase in reaction rate, as there is no more free enzyme (E) available to react with substrate (S). Here, the rate of reaction becomes dependent on the ES complex, and the reaction becomes a unimolecular reaction with an order of zero.
https://en.wikipedia.org/wiki/Enzyme_kinetics
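A minimal Python sketch of the single-substrate rate law described here. It uses the Michaelis constant Km (the substrate concentration at half-maximal rate), which belongs to the standard Michaelis–Menten model even though it is not named in the passage; the numerical parameters are invented for illustration.

```python
def michaelis_menten_rate(s, vmax, km):
    """Initial rate v0 = Vmax * [S] / (Km + [S]) from the Michaelis–Menten model.
    Km is the substrate concentration at which v0 = Vmax / 2."""
    return vmax * s / (km + s)

# Illustrative (made-up) parameters: Vmax = 100 rate units, Km = 2 mM.
for s in (0.1, 1, 2, 10, 100, 1000):
    print(f"[S] = {s:7.1f} mM -> v0 = {michaelis_menten_rate(s, 100, 2):6.2f}")
# The rate rises almost linearly at low [S] and saturates toward Vmax at high [S].
```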
passage: The normal fan of $$ P $$ is the fan whose maximal cones are the normal cones at each vertex of $$ P $$ . It is well known that projective toric varieties are the ones coming from the normal fans of rational polytopes. For example, the complex projective plane $$ \mathbb{CP}^2 $$ comes from the triangle, or $$ 2 $$ -simplex. It may be represented by three complex coordinates satisfying $$ |z_1|^2+|z_2|^2+|z_3|^2 = 1 , $$ where the sum has been chosen to account for the real rescaling part of the projective map, and the coordinates must moreover be identified by the following action: $$ (z_1,z_2,z_3)\approx e^{i\phi} (z_1,z_2,z_3) . $$ The approach of toric geometry is to write $$ (x,y,z) = (|z_1|^2,|z_2|^2,|z_3|^2) . $$ The coordinates $$ x,y,z $$ are non-negative, and they parameterize a triangle because $$ x+y+z=1 ; $$ that is, $$ z=1-x-y . $$ The triangle is the toric base of the complex projective plane.
https://en.wikipedia.org/wiki/Toric_variety
passage: The solution can be obtained using Nachbin summation as $$ f(x)= \sum_{n=0}^\infty \frac{a_n}{M(n+1)}x^n $$ with the $$ a_n $$ from $$ g(s) $$ and with $$ M(n) $$ the Mellin transform of $$ K(u) $$ . An example of this is the Gram series $$ \pi (x) \approx 1+\sum_{n=1}^{\infty} \frac{\log^{n}(x)}{n\cdot n!\zeta (n+1)}. $$ In some cases as an extra condition we require $$ \int_0^\infty K(t)t^{n}\,dt $$ to be finite and nonzero for $$ n=0,1,2,3,.... $$ ## Fréchet space Collections of functions of exponential type $$ \tau $$ can form a complete uniform space, namely a Fréchet space, by the topology induced by the countable family of norms $$ \|f\|_{n} = \sup_{z \in \mathbb{C}} \exp \left[-\left(\tau + \frac{1}{n}\right)|z|\right]|f(z)|. $$
https://en.wikipedia.org/wiki/Nachbin%27s_theorem
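As a sanity check on the Gram series shown above, the following sketch sums its first terms with mpmath (an assumed dependency); the number of terms and the precision setting are arbitrary illustrative choices.

```python
# Requires mpmath (pip install mpmath).
from mpmath import mp, zeta, log, factorial, mpf

mp.dps = 30  # working precision

def gram_series(x, terms=100):
    """Gram series 1 + sum_{n>=1} log(x)^n / (n * n! * zeta(n+1)),
    an approximation to the prime-counting function pi(x)."""
    lx = log(mpf(x))
    return 1 + sum(lx**n / (n * factorial(n) * zeta(n + 1)) for n in range(1, terms + 1))

print(gram_series(100))    # roughly 25.7, compared with pi(100) = 25
print(gram_series(10**6))  # within a few tens of pi(10^6) = 78498
```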
passage: Saturn = 15
4 9 2
3 5 7
8 1 6

Jupiter = 34
4 14 15 1
9 7 6 12
5 11 10 8
16 2 3 13

Mars = 65
11 24 7 20 3
4 12 25 8 16
17 5 13 21 9
10 18 1 14 22
23 6 19 2 15

Sol = 111
6 32 3 34 35 1
7 11 27 28 8 30
19 14 16 15 23 24
18 20 22 21 17 13
25 29 10 9 26 12
36 5 33 4 2 31

Venus = 175
22 47 16 41 10 35 4
5 23 48 17 42 11 29
30 6 24 49 18 36 12
13 31 7 25 43 19 37
38 14 32 1 26 44 20
21 39 8 33 2 27 45
46 15 40 9 34 3 28

Mercury = 260
8 58 59 5 4 62 63 1
49 15 14 52 53 11 10 56
41 23 22 44 45 19 18 48
32 34 35 29 28 38 39 25
40 26 27 37 36 30 31 33
17 47 46 20 21 43 42 24
9 55 54 12 13 51 50 16
64 2 3 61 60 6 7 57

Luna = 369
37 78 29 70 21 62 13 54 5
6 38 79 30 71 22 63 14 46
47 7 39 80 31 72 23 55 15
16 48 8 40 81 32 64 24 56
57 17 49 9 41 73 33 65 25
26 58 18 50 1 42 74 34 66
67 27 59 10 51 2 43 75 35
36 68 19 60 11 52 3 44 76
77 28 69 20 61 12 53 4 45

In 1624 France, Claude Gaspard Bachet described the "diamond method" for constructing Agrippa's odd-ordered squares in his book Problèmes Plaisants. During 1640 Bernard Frenicle de Bessy and Pierre Fermat exchanged letters on magic squares and cubes, and in one of the letters Fermat boasts of being able to construct 1,004,144,995,344 magic squares of order 8 by his method.
https://en.wikipedia.org/wiki/Magic_square
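A small Python sketch that checks the defining property behind the labels above, namely that every row, column and both diagonals add up to the stated constant; the function name is ad hoc and the Saturn square is used as the example.

```python
def is_magic(square):
    """Return the magic constant if every row, column and both diagonals
    of the grid share the same sum, otherwise None."""
    n = len(square)
    target = sum(square[0])
    sums = [sum(row) for row in square]                              # rows
    sums += [sum(square[r][c] for r in range(n)) for c in range(n)]  # columns
    sums.append(sum(square[i][i] for i in range(n)))                 # main diagonal
    sums.append(sum(square[i][n - 1 - i] for i in range(n)))         # anti-diagonal
    return target if all(s == target for s in sums) else None

saturn = [[4, 9, 2],
          [3, 5, 7],
          [8, 1, 6]]
print(is_magic(saturn))  # 15, matching the Saturn = 15 grid above
```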
passage: ## Examples
Topological defects occur in partial differential equations and are believed to drive phase transitions in condensed matter physics. The authenticity of a topological defect depends on the nature of the vacuum towards which the system will tend if infinite time elapses; false and true topological defects can be distinguished according to whether the defect sits in a false vacuum or a true vacuum, respectively.
### Solitary wave PDEs
Examples include the soliton or solitary wave, which occurs in exactly solvable models, such as
- screw dislocations in crystalline materials,
- Skyrmion in quantum field theory,
- Magnetic skyrmion in condensed matter,
- Topological solitons of the Wess–Zumino–Witten model.
### Lambda transitions
Topological defects in lambda-transition universality-class systems include:
- Screw/edge-dislocations in liquid crystals,
- Magnetic flux "tubes" known as fluxons in superconductors, and
- Vortices in superfluids.
## Cosmological defects
Topological defects of the cosmological type are extremely high-energy phenomena which are deemed impractical to produce in Earth-bound physics experiments. Topological defects created during the universe's formation could theoretically be observed without significant energy expenditure. In the Big Bang theory, the universe cools from an initial hot, dense state, triggering a series of phase transitions much like what happens in condensed-matter systems such as superconductors.
https://en.wikipedia.org/wiki/Topological_defect
passage: In physics, magnetosonic waves, also known as magnetoacoustic waves, are low-frequency compressive waves driven by mutual interaction between an electrically conducting fluid and a magnetic field. They are associated with compression and rarefaction of both the fluid and the magnetic field, as well as with an effective tension that acts to straighten bent magnetic field lines. The properties of magnetosonic waves are highly dependent on the angle between the wavevector and the equilibrium magnetic field and on the relative importance of fluid and magnetic processes in the medium. They only propagate with frequencies much smaller than the ion cyclotron or ion plasma frequencies of the medium, and they are nondispersive at small amplitudes. There are two types of magnetosonic waves, fast magnetosonic waves and slow magnetosonic waves, which, together with Alfvén waves, are the normal modes of ideal magnetohydrodynamics. The fast and slow modes are distinguished by magnetic and gas pressure oscillations that are either in-phase or anti-phase, respectively. This results in the phase velocity of any given fast mode always being greater than or equal to that of any slow mode in the same medium, among other differences. Magnetosonic waves have been observed in the Sun's corona and provide an observational foundation for coronal seismology.
## Characteristics
Magnetosonic waves are a type of low-frequency wave present in electrically conducting, magnetized fluids, such as plasmas and liquid metals.
https://en.wikipedia.org/wiki/Magnetosonic_wave
passage: Kernel threads do not own resources except for a stack, a copy of the registers including the program counter, and thread-local storage (if any), and are thus relatively cheap to create and destroy. Thread switching is also relatively cheap: it requires a context switch (saving and restoring registers and stack pointer), but does not change virtual memory and is thus cache-friendly (leaving the TLB valid). The kernel can assign one or more software threads to each core in a CPU (a core can be assigned multiple software threads at once if it supports hardware multithreading), and can swap out threads that get blocked. However, kernel threads take much longer than user threads to be swapped.
### User threads
Threads are sometimes implemented in userspace libraries, thus called user threads. The kernel is unaware of them, so they are managed and scheduled in userspace. Some implementations base their user threads on top of several kernel threads, to benefit from multi-processor machines (M:N model). User threads as implemented by virtual machines are also called green threads. As user thread implementations are typically entirely in userspace, context switching between user threads within the same process is extremely efficient because it does not require any interaction with the kernel at all: a context switch can be performed by locally saving the CPU registers used by the currently executing user thread or fiber and then loading the registers required by the user thread or fiber to be executed. Since scheduling occurs in userspace, the scheduling policy can be more easily tailored to the requirements of the program's workload.
https://en.wikipedia.org/wiki/Thread_%28computing%29
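As a loose analogy for user-space scheduling, the toy Python sketch below round-robins between cooperative tasks built from generators: "switching" is just resuming a suspended generator, with no kernel involvement, which is the spirit of the user-thread context switch described above. Real implementations save and restore CPU registers rather than relying on language-level coroutines, and all names here are invented for the example.

```python
from collections import deque

def worker(name, steps):
    """A cooperative 'user thread': it runs until it voluntarily yields."""
    for i in range(steps):
        print(f"{name}: step {i}")
        yield  # hand control back to the user-space scheduler

def run(tasks):
    """Round-robin scheduler living entirely in user space: switching between
    tasks is just resuming a suspended generator, without entering the kernel."""
    ready = deque(tasks)
    while ready:
        task = ready.popleft()
        try:
            next(task)          # resume the task until its next yield
            ready.append(task)  # still runnable: put it back in the queue
        except StopIteration:
            pass                # task finished

run([worker("A", 2), worker("B", 3)])
```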
passage: Here, using a technique such as Crank–Nicolson or the explicit method:
- the PDE is discretized per the technique chosen, such that the value at each lattice point is specified as a function of the value at later and adjacent points; see Stencil (numerical analysis);
- the value at each point is then found using the technique in question, working backwards in time from maturity and inwards from the boundary prices.
4. The value of the option today, where the underlying is at its spot price (or at any time/price combination), is then found by interpolation.
## Application
As above, these methods can solve derivative pricing problems that have, in general, the same level of complexity as those problems solved by tree approaches, but, given their relative complexity, are usually employed only when other approaches are inappropriate; an example here being changing interest rates and/or time-linked dividend policy. At the same time, like tree-based methods, this approach is limited in terms of the number of underlying variables, and for problems with multiple dimensions, Monte Carlo methods for option pricing are usually preferred. Note that, when standard assumptions are applied, the explicit technique encompasses the binomial and trinomial tree methods. Tree-based methods, then, suitably parameterized, are a special case of the explicit finite difference method.
https://en.wikipedia.org/wiki/Finite_difference_methods_for_option_pricing
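As a concrete sketch of the workflow described above, the following Python code prices a European call with an explicit finite-difference scheme for the Black–Scholes PDE and finishes with the interpolation step. The grid sizes, boundary conditions and model parameters are illustrative choices, not prescriptions from the passage.

```python
import numpy as np

def bs_explicit_fd_call(s0, k, r, sigma, t, s_max_mult=3, m=200, n_steps=8000):
    """European call via the explicit finite-difference scheme: discretize in
    (S, t), apply terminal and boundary values, then march backwards in time
    from maturity. Illustrative sketch only."""
    s_max = s_max_mult * k
    ds = s_max / m
    dt = t / n_steps
    i = np.arange(m + 1)                      # grid S_i = i * ds

    # Explicit-scheme coefficients for the interior nodes 1..m-1.
    a = 0.5 * dt * (sigma**2 * i**2 - r * i)
    b = 1.0 - dt * (sigma**2 * i**2 + r)
    c = 0.5 * dt * (sigma**2 * i**2 + r * i)

    v = np.maximum(i * ds - k, 0.0)           # payoff at maturity
    for step in range(n_steps):               # work backwards from maturity
        tau = (step + 1) * dt                 # time remaining after this step
        new = v.copy()
        new[1:m] = a[1:m] * v[0:m-1] + b[1:m] * v[1:m] + c[1:m] * v[2:m+1]
        new[0] = 0.0                          # call is worthless at S = 0
        new[m] = s_max - k * np.exp(-r * tau) # deep in-the-money boundary
        v = new

    # Step 4 of the passage: interpolate today's value at the spot price.
    return np.interp(s0, i * ds, v)

# Illustrative parameters; the Black–Scholes closed form gives roughly 10.45 here.
print(bs_explicit_fd_call(s0=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0))
```

The large number of time steps keeps the explicit scheme stable (the coefficient b must stay non-negative on the whole grid); Crank–Nicolson would relax that restriction at the cost of solving a linear system per step.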
passage: But this does not help the parser work out how to parse the remainder of the input program to look for further, independent errors. If the parser recovers badly from the first error, it is very likely to mis-parse everything else and produce a cascade of unhelpful spurious error messages. In the yacc and bison parser generators, the parser has an ad hoc mechanism to abandon the current statement, discard some parsed phrases and lookahead tokens surrounding the error, and resynchronize the parse at some reliable statement-level delimiter like semicolons or braces. This often works well for allowing the parser and compiler to look over the rest of the program. Many syntactic coding errors are simple typos or omissions of a trivial symbol. Some LR parsers attempt to detect and automatically repair these common cases. The parser enumerates every possible single-symbol insertion, deletion, or substitution at the error point. The compiler does a trial parse with each change to see whether it succeeds. (This requires backtracking to snapshots of the parse stack and input stream, normally unneeded by the parser.) The best repair is then picked. This gives a very helpful error message and resynchronizes the parse well. However, the repair is not trustworthy enough to permanently modify the input file. Repair of syntax errors is easiest to do consistently in parsers (like LR) that have parse tables and an explicit data stack.
https://en.wikipedia.org/wiki/LR_parser
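The panic-mode idea described above, discarding input up to a reliable delimiter and resuming there, can be illustrated with a toy Python sketch; the grammar, token shapes and function names are invented for the example and are not the actual yacc/bison machinery.

```python
def parse_statement(tokens, i):
    """Accepts only the toy form  NAME '=' NUMBER ';'  for this illustration."""
    if (len(tokens) - i >= 4
            and tokens[i].isidentifier() and tokens[i + 1] == "="
            and tokens[i + 2].isdigit() and tokens[i + 3] == ";"):
        return (tokens[i], int(tokens[i + 2])), i + 4
    raise SyntaxError(f"unexpected input near position {i}: {tokens[i:i + 4]}")

def parse_program(tokens):
    statements, errors, i = [], [], 0
    while i < len(tokens):
        try:
            stmt, i = parse_statement(tokens, i)
            statements.append(stmt)
        except SyntaxError as err:
            errors.append(str(err))
            # Panic mode: skip to the next ';' and resynchronize just past it.
            while i < len(tokens) and tokens[i] != ";":
                i += 1
            i += 1
    return statements, errors

tokens = ["x", "=", "1", ";", "y", "+", "=", "oops", ";", "z", "=", "3", ";"]
print(parse_program(tokens))
# The bad middle statement is reported once, and 'z = 3 ;' is still parsed.
```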