passage: It is, however, defined in quite different ways. For example: Carl Troll conceives of landscape not as a mental construct but as an objectively given 'organic entity', a harmonic individuum of space. Ernst Neef defines landscapes as sections within the uninterrupted earth-wide interconnection of geofactors which are defined as such on the basis of their uniformity in terms of a specific land use, and are thus defined in an anthropocentric and relativistic way. According to Richard Forman and Michel Godron, a landscape is a heterogeneous land area composed of a cluster of interacting ecosystems that is repeated in similar form throughout, whereby they list woods, meadows, marshes and villages as examples of a landscape's ecosystems, and state that a landscape is an area at least a few kilometres wide. John A. Wiens opposes the traditional view expounded by Carl Troll, Isaak S. Zonneveld, Zev Naveh, Richard T. T. Forman/Michel Godron and others that landscapes are arenas in which humans interact with their environments on a kilometre-wide scale; instead, he defines 'landscape'—regardless of scale—as "the template on which spatial patterns influence ecological processes". Some define 'landscape' as an area containing two or more ecosystems in close proximity. ### Scale and heterogeneity (incorporating composition, structure, and function) A main concept in landscape ecology is scale. Scale represents the real world as translated onto a map, relating distance on a map image and the corresponding distance on earth.
https://en.wikipedia.org/wiki/Landscape_ecology
passage: For example, many data structures used in computational geometry are based on red–black trees, and the Completely Fair Scheduler and epoll system call of the Linux kernel use red–black trees. The AVL tree is another structure supporting $$ O(\log n) $$ search, insertion, and removal. AVL trees can be colored red–black, and thus are a subset of red–black trees. The worst-case height of AVL is 0.720 times the worst-case height of red–black trees, so AVL trees are more rigidly balanced. The performance measurements of Ben Pfaff with realistic test cases in 79 runs find AVL to RB ratios between 0.677 and 1.077, median at 0.947, and geometric mean 0.910. The performance of WAVL trees lies between that of AVL trees and red–black trees. Red–black trees are also particularly valuable in functional programming, where they are one of the most common persistent data structures, used to construct associative arrays and sets that can retain previous versions after mutations. The persistent version of red–black trees requires $$ O(\log n) $$ space for each insertion or deletion, in addition to time. For every 2–3–4 tree, there are corresponding red–black trees with data elements in the same order. The insertion and deletion operations on 2–3–4 trees are also equivalent to color-flipping and rotations in red–black trees.
https://en.wikipedia.org/wiki/Red%E2%80%93black_tree
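To make the functional-programming remark in the passage above concrete, here is a minimal sketch, in the style of Okasaki's persistent red–black insertion, of how an insert can return a new tree that shares all untouched subtrees with the old version and allocates only O(log n) new nodes. This is an illustrative sketch, not code from the article; all names (`Node`, `balance`, `insert`) are ours.

```python
# Persistent red-black insertion (Okasaki-style sketch, assumed example).
from collections import namedtuple

Node = namedtuple("Node", "color left key right")   # color is 'R' or 'B'

def balance(color, l, k, r):
    # Rewrite the four red-red violation patterns under a black parent into a
    # red node with two black children; otherwise rebuild the node unchanged.
    if color == 'B':
        if l and l.color == 'R' and l.left and l.left.color == 'R':
            a, x, b = l.left.left, l.left.key, l.left.right
            y, c, z, d = l.key, l.right, k, r
        elif l and l.color == 'R' and l.right and l.right.color == 'R':
            a, x = l.left, l.key
            b, y, c = l.right.left, l.right.key, l.right.right
            z, d = k, r
        elif r and r.color == 'R' and r.left and r.left.color == 'R':
            a, x = l, k
            b, y, c = r.left.left, r.left.key, r.left.right
            z, d = r.key, r.right
        elif r and r.color == 'R' and r.right and r.right.color == 'R':
            a, x, b, y = l, k, r.left, r.key
            c, z, d = r.right.left, r.right.key, r.right.right
        else:
            return Node(color, l, k, r)
        return Node('R', Node('B', a, x, b), y, Node('B', c, z, d))
    return Node(color, l, k, r)

def insert(tree, key):
    def ins(n):
        if n is None:
            return Node('R', None, key, None)        # new leaves start red
        if key < n.key:
            return balance(n.color, ins(n.left), n.key, n.right)
        if key > n.key:
            return balance(n.color, n.left, n.key, ins(n.right))
        return n                                     # key already present
    root = ins(tree)
    return Node('B', root.left, root.key, root.right)  # root is always black

t = None
for k in [5, 1, 9, 3, 7]:
    t = insert(t, k)       # every intermediate version of t remains usable
```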
passage: In 3D rasterization, color is usually determined by a pixel shader or fragment shader, a small program that is run for each pixel. The shader does not (or cannot) directly access 3D data for the entire scene (this would be very slow, and would result in an algorithm similar to ray tracing) and a variety of techniques have been developed to render effects like shadows and reflections using only texture mapping and multiple passes. Older and more basic 3D rasterization implementations did not support shaders, and used simple shading techniques such as flat shading (lighting is computed once for each triangle, which is then rendered entirely in one color), Gouraud shading (lighting is computed using normal vectors defined at vertices and then colors are interpolated across each triangle), or Phong shading (normal vectors are interpolated across each triangle and lighting is computed for each pixel). Until relatively recently, Pixar used rasterization for rendering its animated films. Unlike the renderers commonly used for real-time graphics, the Reyes rendering system in Pixar's RenderMan software was optimized for rendering very small (pixel-sized) polygons, and incorporated stochastic sampling techniques more typically associated with ray tracing. ### Ray casting One of the simplest ways to render a 3D scene is to test if a ray starting at the viewpoint (the "eye" or "camera") intersects any of the geometric shapes in the scene, repeating this test using a different ray direction for each pixel. This method, called ray casting, was important in early computer graphics, and is a fundamental building block for more advanced algorithms.
https://en.wikipedia.org/wiki/Rendering_%28computer_graphics%29
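The ray-casting idea at the end of the passage above fits in a few lines: fire one ray per pixel and test it against the scene's shapes. The sketch below is an assumed toy setup (one sphere, a simple pinhole camera, ASCII output), not an implementation from the article.

```python
# Minimal ray-casting sketch: one ray per pixel, tested against a single sphere.
import math

def ray_sphere(origin, direction, center, radius):
    # Nearest positive t with |origin + t*direction - center| = radius,
    # assuming `direction` is normalized (so the quadratic's leading term is 1).
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

width, height = 32, 16
eye, center, radius = (0.0, 0.0, 0.0), (0.0, 0.0, 5.0), 2.0
for j in range(height):
    row = ""
    for i in range(width):
        # Map the pixel to a direction through an image plane at z = 1.
        x = (i + 0.5) / width * 2.0 - 1.0
        y = 1.0 - (j + 0.5) / height * 2.0
        norm = math.sqrt(x * x + y * y + 1.0)
        d = (x / norm, y / norm, 1.0 / norm)
        row += "#" if ray_sphere(eye, d, center, radius) is not None else "."
    print(row)
```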
passage: On 23 January 2007, the Human Metabolome Project, led by David S. Wishart, completed the first draft of the human metabolome, consisting of a database of approximately 2,500 metabolites, 1,200 drugs and 3,500 food components. Similar projects have been underway in several plant species, most notably Medicago truncatula and Arabidopsis thaliana, for several years. As late as mid-2010, metabolomics was still considered an "emerging field". Further, it was noted that progress in the field depended in large part on the technical evolution of mass spectrometry instrumentation to address otherwise "irresolvable technical challenges". In 2015, real-time metabolome profiling was demonstrated for the first time. ## Metabolome The metabolome refers to the complete set of small-molecule (<1.5 kDa) metabolites (such as metabolic intermediates, hormones and other signaling molecules, and secondary metabolites) to be found within a biological sample, such as a single organism. The word was coined in analogy with transcriptomics and proteomics; like the transcriptome and the proteome, the metabolome is dynamic, changing from second to second. Although the metabolome can be defined readily enough, it is not currently possible to analyse the entire range of metabolites by a single analytical method. In January 2007, scientists at the University of Alberta and the University of Calgary completed the first draft of the human metabolome.
https://en.wikipedia.org/wiki/Metabolomics
passage: Journal of the ACM, 31(2):245–281, 1984. Lemma 1: As the find function follows the path to the root, the rank of the nodes it encounters is increasing. Lemma 2: A node which is the root of a subtree with rank $$ r $$ has at least $$ 2^r $$ nodes. Lemma 3: The maximum number of nodes of rank $$ r $$ is at most $$ \frac{n}{2^r}. $$ At any particular point in the execution, we can group the vertices of the graph into "buckets", according to their rank. We define the buckets' ranges inductively, as follows: Bucket 0 contains vertices of rank 0. Bucket 1 contains vertices of rank 1. Bucket 2 contains vertices of ranks 2 and 3. In general, if the $$ B $$-th bucket contains vertices with ranks from interval $$ \left[r, 2^r - 1\right] = [r, R - 1] $$ , then the (B+1)st bucket will contain vertices with ranks from interval $$ \left[R, 2^R - 1\right]. $$ For $$ B \in \mathbb{N} $$ , let $$ \text{tower}(B) = \underbrace{2^{2^{\cdots^2}}}_{B \text{ times}} $$ . Then bucket $$ B $$ will have vertices with ranks in the interval $$ [\text{tower}(B-1), \text{tower}(B)-1] $$ .
https://en.wikipedia.org/wiki/Disjoint-set_data_structure
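The lemmas in the passage above analyze the standard disjoint-set structure with union by rank; a minimal sketch (ours) is below, using path halving as the compression step. Note how the rank stored at a root only ever increases, and only when two roots of equal rank are merged, which is what the rank and bucket argument relies on.

```python
# Disjoint-set (union-find) with union by rank and path halving.
class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n        # rank = upper bound related to subtree size

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra       # attach the smaller-rank root below the larger
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1     # rank grows only on equal-rank unions

ds = DisjointSet(10)
ds.union(1, 2); ds.union(2, 3)
print(ds.find(1) == ds.find(3), ds.find(1) == ds.find(4))   # True False
```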
passage: The Fundamentals of Engineering (FE) exam, also referred to as the Engineer in Training (EIT) exam, and formerly in some states as the Engineering Intern (EI) exam, is the first of two examinations that engineers must pass in order to be licensed as a professional engineer (PE) in the United States. The second exam is the Principles and Practice of Engineering exam. The FE exam is open to anyone with a degree in engineering or a related field, or currently enrolled in the last year of an Accreditation Board for Engineering and Technology (ABET) accredited engineering degree program. Some state licensure boards permit students to take it prior to their final year, and numerous states allow those who have never attended an approved program to take the exam if they have a state-determined number of years of work experience in engineering. Some states allow those with ABET-accredited "Engineering Technology" or "ETAC" degrees to take the examination. The exam is administered by the National Council of Examiners for Engineering and Surveying (NCEES). ## History and structure In 1965, 30 states administered the first FE exam. The FE tests knowledge of what college graduates should have mastered during school. In 1966, a national uniform PE exam was offered. As of 2014, the FE and FS exams are offered only via Computer Based Testing (CBT). The exam consists of 110 questions and is given during a 6-hour session, of which 5 hours and 20 minutes is designated as time for answering the questions. The remaining time includes a tutorial, presented at the beginning of the session, and an optional 25-minute break.
https://en.wikipedia.org/wiki/Fundamentals_of_Engineering_exam
passage: Extensions of the Lévy–Steinitz theorem to series in infinite-dimensional spaces have been considered by a number of authors. ## Definitions A series $$ \sum_{n=1}^\infty a_n $$ converges if there exists a value $$ \ell $$ such that the sequence of the partial sums $$ (S_1, S_2, S_3, \ldots), \quad S_n = \sum_{k=1}^n a_k, $$ converges to $$ \ell $$ . That is, for any ε > 0, there exists an integer N such that if n ≥ N, then $$ \left\vert S_n - \ell \right\vert \le \varepsilon. $$ A series converges conditionally if the series $$ \sum_{n=1}^\infty a_n $$ converges but the series $$ \sum_{n=1}^\infty \left\vert a_n \right\vert $$ diverges. A permutation is simply a bijection from the set of positive integers to itself. This means that if $$ \sigma $$ is a permutation, then for any positive integer $$ b, $$ there exists exactly one positive integer $$ a $$ such that $$ \sigma (a) = b. $$ In particular, if $$ x \ne y $$ , then $$ \sigma (x) \ne \sigma (y) $$ .
https://en.wikipedia.org/wiki/Riemann_series_theorem
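A standard concrete illustration of conditional convergence (our example, not from the excerpt above) is the alternating harmonic series: in its given order its partial sums approach ln 2, while the rearrangement "two positive terms, then one negative" approaches (3/2) ln 2. The finite computation below only suggests the two limits; it is not a proof.

```python
# Numerical hint of the rearrangement phenomenon for a conditionally
# convergent series (the alternating harmonic series).
import math

N = 200_000
original = sum((-1) ** (k + 1) / k for k in range(1, N + 1))

rearranged, pos, neg = 0.0, 1, 2      # pos: next odd denominator, neg: next even
for _ in range(N // 3):
    rearranged += 1 / pos + 1 / (pos + 2) - 1 / neg   # +, +, - pattern
    pos += 4
    neg += 2

print(original, math.log(2))            # both ~0.693
print(rearranged, 1.5 * math.log(2))    # both ~1.040
```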
passage: Entailment differs from implication in that whereas the latter is a binary operation that returns a value in a Boolean algebra, the former is a binary relation which either holds or does not hold. In this sense, entailment is an external form of implication, meaning external to the Boolean algebra, thinking of the reader of the sequent as also being external and interpreting and comparing antecedents and succedents in some Boolean algebra. The natural interpretation of ⊢ is as ≤ in the partial order of the Boolean algebra defined by x ≤ y just when x ∧ y = x (equivalently, x ∨ y = y). This ability to mix external implication ⊢ and internal implication → in the one logic is among the essential differences between sequent calculus and propositional calculus. ## Applications Boolean algebra as the calculus of two values is fundamental to computer circuits, computer programming, and mathematical logic, and is also used in other areas of mathematics such as set theory and statistics. ### Computers In the early 20th century, several electrical engineers intuitively recognized that Boolean algebra was analogous to the behavior of certain types of electrical circuits. Claude Shannon formally proved such behavior was logically equivalent to Boolean algebra in his 1937 master's thesis, A Symbolic Analysis of Relay and Switching Circuits. Today, all modern general-purpose computers perform their functions using two-value Boolean logic; that is, their electrical circuits are a physical manifestation of two-value Boolean logic.
https://en.wikipedia.org/wiki/Boolean_algebra
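On the two-element Boolean algebra, the order reading of entailment described above can be checked exhaustively; this tiny snippet (ours) verifies that x ≤ y, the absorption condition x ∧ y = x, and the truth of the implication x → y all coincide on {0, 1}.

```python
# Exhaustive check of the order interpretation on the two-element algebra.
for x in (0, 1):
    for y in (0, 1):
        implies = (not x) or y                      # material implication x -> y
        assert (x <= y) == ((x & y) == x) == bool(implies)
print("on {0,1}: x <= y  iff  x AND y == x  iff  x -> y is true")
```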
passage: The error rate is less than 5%. In 1855, Adolf Fick, the 26-year-old anatomy demonstrator from Zürich, proposed his law of diffusion. He used Graham's research, stating his goal as "the development of a fundamental law, for the operation of diffusion in a single element of space". He asserted a deep analogy between diffusion and conduction of heat or electricity, creating a formalism similar to Fourier's law for heat conduction (1822) and Ohm's law for electric current (1827). Robert Boyle demonstrated diffusion in solids in the 17th century by penetration of zinc into a copper coin. Nevertheless, diffusion in solids was not systematically studied until the second part of the 19th century. William Chandler Roberts-Austen, the well-known British metallurgist and former assistant of Thomas Graham, systematically studied solid-state diffusion in 1896, using the example of gold in lead: "... My long connection with Graham's researches made it almost a duty to attempt to extend his work on liquid diffusion to metals." In 1858, Rudolf Clausius introduced the concept of the mean free path. In the same year, James Clerk Maxwell developed the first atomistic theory of transport processes in gases. The modern atomistic theory of diffusion and Brownian motion was developed by Albert Einstein, Marian Smoluchowski and Jean-Baptiste Perrin.
https://en.wikipedia.org/wiki/Diffusion
passage: ### Data properties Some important properties of data for which requirements need to be met are: - definition-related properties - relevance: the usefulness of the data in the context of your business. - clarity: the availability of a clear and shared definition for the data. - consistency: the compatibility of the same type of data from different sources. - content-related properties - timeliness: the availability of data at the time required and how up-to-date that data is. - accuracy: how close to the truth the data is. - properties related to both definition and content - completeness: how much of the required data is available. - accessibility: where, how, and to whom the data is available or not available (e.g. security). - cost: the cost incurred in obtaining the data, and making it available for use. ### Data organization Another kind of data model describes how to organize data using a database management system or other data management technology. It describes, for example, relational tables and columns or object-oriented classes and attributes. Such a data model is sometimes referred to as the physical data model, but in the original ANSI three schema architecture, it is called "logical". In that architecture, the physical model describes the storage media (cylinders, tracks, and tablespaces). Ideally, this model is derived from the more conceptual data model described above. It may differ, however, to account for constraints like processing capacity and usage patterns.
https://en.wikipedia.org/wiki/Data_model
passage: Intel "My WiFi" and Windows 7 "virtual Wi-Fi" capabilities have made Wi-Fi PANs simpler and easier to set up and configure. ### Wireless LAN A wireless local area network (WLAN) links two or more devices over a short distance using a wireless distribution method, usually providing a connection through an access point for internet access. The use of spread-spectrum or OFDM technologies may allow users to move around within a local coverage area, and still remain connected to the network. Products using the IEEE 802.11 WLAN standards are marketed under the Wi-Fi brand name. Fixed wireless technology implements point-to-point links between computers or networks at two distant locations, often using dedicated microwave or modulated laser light beams over line of sight paths. It is often used in cities to connect networks in two or more buildings without installing a wired link. To connect to Wi-Fi using a mobile device, one can use a device like a wireless router or the private hotspot capability of another mobile device. ### Wireless ad hoc network A wireless ad hoc network, also known as a wireless mesh network or mobile ad hoc network (MANET), is a wireless network made up of radio nodes organized in a mesh topology. Each node forwards messages on behalf of the other nodes and each node performs routing. Ad hoc networks can "self-heal", automatically re-routing around a node that has lost power. Various network layer protocols are needed to realize ad hoc mobile networks, such as Destination-Sequenced Distance Vector routing, Associativity-Based Routing, Ad hoc on-demand distance-vector routing, and Dynamic Source Routing.
https://en.wikipedia.org/wiki/Wireless_network
passage: Compton scattering (or the Compton effect) is the quantum theory of high frequency photons scattering following an interaction with a charged particle, usually an electron. Specifically, when the photon hits electrons, it releases loosely bound electrons from the outer valence shells of atoms or molecules. The effect was discovered in 1923 by Arthur Holly Compton while researching the scattering of X-rays by light elements, and earned him the Nobel Prize in Physics in 1927. The Compton effect significantly deviated from dominating classical theories, using both special relativity and quantum mechanics to explain the interaction between high frequency photons and charged particles. Photons can interact with matter at the atomic level (e.g. photoelectric effect and Rayleigh scattering), at the nucleus, or with just an electron. Pair production and the Compton effect occur at the level of the electron. When a high frequency photon scatters due to an interaction with a charged particle, there is a decrease in the energy of the photon and thus, an increase in its wavelength. This tradeoff between wavelength and energy in response to the collision is the Compton effect. Because of conservation of energy, the lost energy from the photon is transferred to the recoiling particle (such an electron would be called a "Compton Recoil electron"). This implies that if the recoiling particle initially carried more energy than the photon, the reverse would occur.
https://en.wikipedia.org/wiki/Compton_scattering
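For reference, the standard quantitative statement of this wavelength increase (not quoted in the excerpt above, but the usual textbook form) is $$ \lambda' - \lambda = \frac{h}{m_e c}\,(1 - \cos\theta), $$ where $$ \lambda $$ and $$ \lambda' $$ are the photon wavelengths before and after scattering, $$ m_e $$ is the electron rest mass, and $$ \theta $$ is the scattering angle; the prefactor $$ h/m_e c \approx 2.43\times 10^{-12}\,\mathrm{m} $$ is the Compton wavelength of the electron.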
passage: - History of topography – history of the study of surface shape and features of the Earth and other observable astronomical objects including planets, moons, and asteroids. - History of volcanology – history of the study of volcanoes, lava, magma, and related geological, geophysical and geochemical phenomena. ## General principles - Principle – law or rule that has to be, or usually is to be followed, or can be desirably followed, or is an inevitable consequence of something, such as the laws observed in nature or the way that a system is constructed. The principles of such a system are understood by its users as the essential characteristics of the system, or reflecting system's designed purpose, and the effective operation or use of which would be impossible if any one of the principles was to be ignored. ### Basic principles of physics Physics – branch of science that studies matter and its motion through space and time, along with related concepts such as energy and force. Physics is one of the "fundamental sciences" because the other natural sciences (like biology, geology, etc.) deal with systems that seem to obey the laws of physics. According to physics, the physical laws of matter, energy, and the fundamental forces of nature govern the interactions between particles and physical entities (such as planets, molecules, atoms, or subatomic particles).
https://en.wikipedia.org/wiki/Outline_of_physical_science
passage: Autocrine signaling occurs when a cell sends a signal to itself by secreting a molecule that binds to a receptor on its own surface. Forms of communication can be through: - Ion channels: Can be of different types such as voltage-gated or ligand-gated ion channels. They allow for the outflow and inflow of molecules and ions. - G-protein coupled receptor (GPCR): Is widely recognized to contain seven transmembrane domains. The ligand binds to the extracellular domain, and once bound, it signals a guanine nucleotide exchange factor to convert GDP to GTP and activate the G-α subunit. G-α can target other proteins such as adenyl cyclase or phospholipase C, which ultimately produce secondary messengers such as cAMP, IP3, DAG, and calcium. These secondary messengers function to amplify signals and can target ion channels or other enzymes. One example of signal amplification is cAMP binding to and activating PKA by removing the regulatory subunits and releasing the catalytic subunit. The catalytic subunit has a nuclear localization sequence which prompts it to go into the nucleus and phosphorylate other proteins to either repress or activate gene activity. - Receptor tyrosine kinases: Bind growth factors, promoting cross-phosphorylation of tyrosines on the intracellular portion of the protein.
https://en.wikipedia.org/wiki/Cell_biology
passage: Impulse response coefficients taken at intervals of $$ L $$ form a subsequence, and there are $$ L $$ such subsequences (called phases) multiplexed together. Each of the $$ L $$ phases of the impulse response is filtering the same sequential values of the $$ x $$ data stream and producing one of $$ L $$ sequential output values. In some multi-processor architectures, these dot products are performed simultaneously, in which case it is called a polyphase filter. For completeness, we now mention that a possible, but unlikely, implementation of each phase is to replace the coefficients of the other phases with zeros in a copy of the $$ h $$ array, and process the $$ x_L[n] $$ sequence at $$ L $$ times the original input rate. Then $$ L-1 $$ of every $$ L $$ outputs are zero. The desired $$ y $$ sequence is the sum of the phases, where $$ L-1 $$ terms of each sum are identically zero. Computing $$ L-1 $$ zeros between the useful outputs of a phase and adding them to a sum is effectively decimation. It is the same result as not computing them at all. That equivalence is known as the second Noble identity. It is sometimes used in derivations of the polyphase method.
https://en.wikipedia.org/wiki/Upsampling
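The equivalence described above, phase p of the impulse response filtering the original stream versus zero-stuffing and filtering at the high rate, can be checked numerically. The snippet below is an illustrative sketch with arbitrary test data; array names are ours.

```python
# Check: polyphase interpolation equals "zero-stuff by L, then FIR filter".
import numpy as np

L = 3                                    # upsampling factor
h = np.arange(1, 13, dtype=float)        # arbitrary FIR impulse response (12 taps)
x = np.random.default_rng(0).standard_normal(8)

# Naive: insert L-1 zeros between input samples, then convolve with h.
xz = np.zeros(len(x) * L)
xz[::L] = x
naive = np.convolve(xz, h)[: len(x) * L]

# Polyphase: phase p uses the coefficient subsequence h[p::L], filters x
# directly at the low rate, and the L phase outputs are interleaved.
poly = np.zeros(len(x) * L)
for p in range(L):
    poly[p::L] = np.convolve(x, h[p::L])[: len(x)]

print(np.allclose(naive, poly))          # True
```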
passage: The solution to this variational problem is that $$ C $$ must satisfy a Fredholm integral equation of the second kind $$ C (\boldsymbol{x}) = f ( \boldsymbol{x} ) + \int K(\boldsymbol{x}, \boldsymbol{y}) C ( \boldsymbol{y} ) d\boldsymbol{y} $$ where the functions $$ K(\boldsymbol{x}, \boldsymbol{y}) $$ and $$ f ( \boldsymbol{x} ) $$ are defined in terms of the resolved fields $$ L_{ij},\alpha_{ij},\beta_{ij} $$ and are therefore known at each time step and the integral ranges over the whole fluid domain. The integral equation is solved numerically by an iteration procedure and convergence was found to be generally rapid if used with a pre-conditioning scheme. Even though this variational approach removes an inherent inconsistency in Lilly's approach, the $$ C(x,y,z,t) $$ obtained from the integral equation still displayed the instability associated with negative viscosities. This can be resolved by insisting that $$ E[C] $$ be minimized subject to the constraint $$ C(x,y,z,t) \geq 0 $$ .
https://en.wikipedia.org/wiki/Large_eddy_simulation
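As a toy illustration (our own one-dimensional example, not the LES closure itself) of solving a Fredholm equation of the second kind by the kind of iteration the passage mentions, one can discretize the integral with a quadrature rule and apply fixed-point iteration; with a mildly contractive kernel the iteration converges quickly.

```python
# Toy fixed-point solution of C(x) = f(x) + \int_0^1 K(x, y) C(y) dy.
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / n)                               # crude quadrature weights
K = 0.5 * np.exp(-np.abs(x[:, None] - x[None, :]))    # assumed contractive kernel
f = np.sin(np.pi * x)

C = f.copy()
for _ in range(200):
    C_next = f + (K * w) @ C                          # discretized integral operator
    if np.max(np.abs(C_next - C)) < 1e-12:
        break
    C = C_next

print(np.max(np.abs(C - f - (K * w) @ C)))            # residual ~ 0 at convergence
```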
passage: When the color is represented as RGB values, as often is the case in computer graphics, this equation is typically modeled separately for R, G and B intensities, allowing different reflection constants $$ k_\text{a}, $$ $$ k_\text{d} $$ and $$ k_\text{s} $$ for the different color channels. When implementing the Phong reflection model, there are a number of methods for approximating the model, rather than implementing the exact formulas, which can speed up the calculation; for example, the Blinn–Phong reflection model is a modification of the Phong reflection model, which is more efficient if the viewer and the light source are treated to be at infinity. Another approximation that addresses the calculation of the exponentiation in the specular term is the following: Considering that the specular term should be taken into account only if its dot product is positive, it can be approximated as $$ \max(0, \hat{R}_m \cdot \hat{V})^\alpha = \max(0, 1-\lambda)^{\beta \gamma} = \left(\max(0,1-\lambda)^\beta\right)^\gamma \approx \max(0, 1 - \beta \lambda)^\gamma $$ where $$ \lambda = 1 - \hat{R}_m \cdot \hat{V} $$ , and $$ \beta = \alpha / \gamma\, $$ is a real number which doesn't have to be an integer.
https://en.wikipedia.org/wiki/Phong_reflection_model
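The shortcut described above trades the true exponentiation for a cheaper power; a quick numeric comparison (ours, with assumed parameter values) shows it tracks the exact specular term near the highlight, and reduces to it exactly when $$ \gamma = \alpha $$ (so $$ \beta = 1 $$).

```python
# Compare max(0, 1 - lam)**alpha with max(0, 1 - beta*lam)**gamma, beta = alpha/gamma.
import numpy as np

alpha = 64.0                         # Phong shininess exponent
lam = np.linspace(0.0, 0.2, 5)       # lam = 1 - R.V, small near the highlight
exact = np.maximum(0.0, 1.0 - lam) ** alpha

for gamma in (8, 32, 64):
    beta = alpha / gamma
    approx = np.maximum(0.0, 1.0 - beta * lam) ** gamma
    print(gamma, np.max(np.abs(approx - exact)))     # 0.0 when gamma == alpha
```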
passage: ### Weighted BV functions It is possible to generalize the above notion of total variation so that different variations are weighted differently. More precisely, let $$ \varphi : [0, +\infty)\longrightarrow [0, +\infty) $$ be any increasing function such that $$ \varphi(0) = \varphi(0+) =\lim_{x\rightarrow 0_+}\varphi(x) = 0 $$ (the weight function) and let $$ f: [0, T]\longrightarrow X $$ be a function from the interval $$ [0 , T] $$ $$ \subset \mathbb{R} $$ taking values in a normed vector space $$ X $$ .
https://en.wikipedia.org/wiki/Bounded_variation
passage: \sum_{k=1}^n k^6 &= \phantom{-}\tfrac{1}{42}n+0n^2-\tfrac{1}{6}n^3+0n^4+\tfrac{1}{2}n^5+\tfrac{1}{2}n^6+\tfrac{1}{7}n^7 . \end{align} $$ Writing these polynomials as a product between matrices gives $$ \begin{pmatrix} \sum k^0 \\ \sum k^1 \\ \sum k^2 \\ \sum k^3 \\ \sum k^4 \\ \sum k^5 \\ \sum k^6 \end{pmatrix} = G_7 \begin{pmatrix} n \\ n^2 \\ n^3 \\ n^4 \\ n^5 \\ n^6 \\ n^7 \end{pmatrix} , $$ where $$ G_7 = \begin{pmatrix} 1& 0& 0& 0& 0&0& 0\\ {1\over 2}& {1\over 2}& 0& 0& 0& 0& 0\\ {1\over 6}& {1\over 2}&{1\over 3}& 0& 0& 0& 0\\ 0& {1\over 4}& {1\over 2}& {1\over 4}& 0&0& 0\\ -{1\over 30}& 0& {1\over 3}& {1\over 2}& {1\over 5}&0& 0\\
https://en.wikipedia.org/wiki/Faulhaber%27s_formula
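The $$ k^6 $$ row quoted above can be verified directly with exact rational arithmetic; the short check below (ours) compares the polynomial against the brute-force sum.

```python
# Verify: sum_{k=1}^n k^6 = n/42 - n^3/6 + n^5/2 + n^6/2 + n^7/7.
from fractions import Fraction as F

def faulhaber6(n):
    n = F(n)
    return n / 42 - n**3 / 6 + n**5 / 2 + n**6 / 2 + n**7 / 7

for n in range(1, 25):
    assert faulhaber6(n) == sum(k**6 for k in range(1, n + 1))
print("k^6 formula checked for n = 1..24")
```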
passage: $$ H_F $$ is the Hessian matrix of $$ F $$ (matrix of the second derivatives). The proof of this formula relies (as in the case of an implicit curve) on the implicit function theorem and the formula for the normal curvature of a parametric surface. ## Applications of implicit surfaces As in the case of implicit curves it is an easy task to generate implicit surfaces with desired shapes by applying algebraic operations (addition, multiplication) on simple primitives. ### Equipotential surface of point charges The electrical potential of a point charge $$ q_i $$ at point $$ \mathbf p_i=(x_i,y_i,z_i) $$ generates at point $$ \mathbf p=(x,y,z) $$ the potential (omitting physical constants) $$ F_i(x,y,z)=\frac{q_i}{\|\mathbf p -\mathbf p_i\|}. $$ The equipotential surface for the potential value $$ c $$ is the implicit surface $$ F_i(x,y,z)-c=0 $$ which is a sphere with center at point $$ \mathbf p_i $$ .
https://en.wikipedia.org/wiki/Implicit_surface
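A quick numeric check (ours, with assumed charge and level values) of the statement above: points at distance $$ q/c $$ from $$ \mathbf p_i $$ satisfy the implicit equation $$ F_i - c = 0 $$, confirming that the equipotential surface is the sphere of that radius.

```python
# Points on the sphere of radius q/c about p_i satisfy F(p) - c = 0.
import numpy as np

q, c = 2.0, 0.5
p_i = np.array([1.0, -2.0, 3.0])
r = q / c                                         # predicted radius: 4

dirs = np.random.default_rng(1).standard_normal((5, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
points = p_i + r * dirs                           # sample points on that sphere

F = q / np.linalg.norm(points - p_i, axis=1)      # potential (constants omitted)
print(np.allclose(F - c, 0.0))                    # True
```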
passage: This improves the precision due to the relative ease of producing equal-valued, matched resistors. - The successive approximation or cyclic DAC, which successively constructs the output during each cycle. Individual bits of the digital input are processed each cycle until the entire input is accounted for. - The thermometer-coded DAC, which contains an equal resistor or current-source segment for each possible value of DAC output. An 8-bit thermometer DAC would have 255 segments, and a 16-bit thermometer DAC would have 65,535 segments. This is a fast, high-precision DAC architecture, but at the expense of requiring many components, whose fabrication in practical implementations requires high-density IC processes. - Hybrid DACs, which use a combination of the above techniques in a single converter. Most DAC integrated circuits are of this type due to the difficulty of getting low cost, high speed and high precision in one device. - The segmented DAC, which combines the thermometer-coded principle for the most significant bits and the binary-weighted principle for the least significant bits. In this way, a compromise is obtained between precision (by the use of the thermometer-coded principle) and number of resistors or current sources (by the use of the binary-weighted principle). The full binary-weighted design means 0% segmentation, the full thermometer-coded design means 100% segmentation. - Most DACs shown in this list rely on a constant reference voltage or current to create their output value.
https://en.wikipedia.org/wiki/Digital-to-analog_converter
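Behaviorally, an ideal thermometer-coded DAC and an ideal binary-weighted DAC produce the same transfer curve; they differ in how many elements must be switched and matched. The sketch below (ours; the function names and the ideal unit-segment model are assumptions) makes that arithmetic explicit for 8 bits.

```python
# Ideal-model comparison: thermometer-coded vs binary-weighted DAC outputs.
def thermometer_dac(code, nbits, vref=1.0):
    segments = 2 ** nbits - 1                 # 255 equal segments for 8 bits
    lsb = vref / segments                     # each segment contributes one LSB
    return sum(lsb for _ in range(code))      # `code` segments switched on

def binary_weighted_dac(code, nbits, vref=1.0):
    lsb = vref / (2 ** nbits - 1)
    return sum(lsb * (1 << b) for b in range(nbits) if (code >> b) & 1)

assert all(abs(thermometer_dac(c, 8) - binary_weighted_dac(c, 8)) < 1e-12
           for c in range(256))
print("ideal 8-bit thermometer and binary-weighted outputs agree")
```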
passage: For the first two of these there are associated expressions in asymptotic O notation: the first is that $$ a_k - L = o(b_k - L) $$ in small o notation and the second is that $$ a_k - L = \Theta(b_k - L) $$ in Knuth notation. The third is also called asymptotic equivalence, expressed $$ a_k - L \sim b_k - L. $$ ### Examples For any two geometric progressions $$ (a r^k)_{k \in \mathbb{N}} $$ and $$ (b s^k)_{k \in \mathbb{N}}, $$ with shared limit zero, the two sequences are asymptotically equivalent if and only if both $$ a = b $$ and $$ r = s. $$ They converge with the same order if and only if $$ r = s. $$ $$ (a r^k) $$ converges with a faster order than $$ (b s^k) $$ if and only if $$ r < s. $$ The convergence of any geometric series to its limit has error terms that are equal to a geometric progression, so similar relationships hold among geometric series as well.
https://en.wikipedia.org/wiki/Rate_of_convergence
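For the geometric progressions discussed above, the order comparison is easy to see numerically: when $$ r < s $$ the ratio of the two error terms tends to zero, which is the little-o comparison. A tiny illustration (ours, with assumed constants):

```python
# With shared limit 0, a*r**k is o(b*s**k) when r < s: the ratio tends to 0.
a, r = 3.0, 0.5
b, s = 2.0, 0.8
for k in (5, 10, 20, 40):
    print(k, (a * r**k) / (b * s**k))   # decreases toward 0
```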
passage: that is invertible modulo $$ I $$ (that is, the image of $$ \alpha $$ in $$ R/I $$ is a unit in $$ R/I $$ ). Suppose that for some positive integer $$ k $$ there is a factorization $$ h\equiv \alpha fg \pmod {I^k}, $$ such that $$ f $$ and $$ g $$ are monic polynomials that are coprime modulo $$ I $$, in the sense that there exist $$ a,b \in R[X], $$ such that $$ af+bg\equiv 1\pmod I. $$ Then, there are polynomials $$ \delta_f, \delta_g\in I^k R[X], $$ such that $$ \deg \delta_f <\deg f, $$ $$ \deg \delta_g <\deg g, $$ and $$ h\equiv \alpha(f+\delta_f)(g+\delta_g) \pmod {I^{k+1}}. $$ Under these conditions, $$ \delta_f $$ and $$ \delta_g $$ are unique modulo $$ I^{k+1}R[X]. $$ Moreover, $$ f+\delta_f $$ and $$ g+\delta_g $$ satisfy the same Bézout's identity as $$ f $$ and $$ g $$, that is, $$ a(f+\delta_f)+b(g+\delta_g)\equiv 1\pmod I. $$
https://en.wikipedia.org/wiki/Hensel%27s_lemma
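The statement above is the factorization form of Hensel's lemma; the snippet below instead sketches the simpler single-root form (a deliberate substitution, since it fits in a few lines): a simple root of $$ f $$ modulo $$ p $$ is lifted to higher prime powers by a Newton step. It assumes Python 3.8+ for the modular inverse via `pow(..., -1, m)`; all names are ours.

```python
# Quadratic Hensel lifting of a simple root r of f modulo p.
def hensel_lift_root(coeffs, r, p, iterations=3):
    # coeffs = [c0, c1, ...] represents f(x) = c0 + c1*x + c2*x^2 + ...
    f = lambda x, m: sum(c * pow(x, i, m) for i, c in enumerate(coeffs)) % m
    df = lambda x, m: sum(i * c * pow(x, i - 1, m)
                          for i, c in enumerate(coeffs) if i >= 1) % m
    m = p
    for _ in range(iterations):
        m *= m                                        # square the modulus each step
        r = (r - f(r, m) * pow(df(r, m), -1, m)) % m  # Newton step modulo m
    return r, m

# x^2 - 2 has the simple root 3 modulo 7; lift it to a root modulo 7**8.
root, m = hensel_lift_root([-2, 0, 1], 3, 7)
print(root, m, (root * root - 2) % m == 0)            # ..., 5764801, True
```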
passage: | Name | Language(s) | Type |
|---|---|---|
| … | Java, SQL | Multi-model document and graph database |
| OWLIM | Java, SPARQL 1.1 | RDF triple store |
| Profium Sense | Java, SPARQL | RDF triple store |
| RedisGraph | Cypher | Graph database |
| Sqrrl Enterprise | Java | Graph database |
| TerminusDB | JavaScript, Python, datalog | Open source RDF triple-store and document store |

## Performance The performance of NoSQL databases is usually evaluated using the metric of throughput, which is measured as operations per second. Performance evaluation must pay attention to the right benchmarks such as production configurations, parameters of the databases, anticipated data volume, and concurrent user workloads. Ben Scofield rated different categories of NoSQL databases as follows:

| Data model | Performance | Scalability | Flexibility | Complexity | Data integrity | Functionality |
|---|---|---|---|---|---|---|
| Key–value store | high | high | high | none | low | variable (none) |
| Column-oriented store | high | high | moderate | low | low | minimal |
| Document-oriented store | high | variable (high) | high | low | low | variable (low) |
| Graph database | variable | variable | high | high | low-med | graph theory |
| Relational database | variable | variable | low | moderate | high | relational algebra |

Performance and scalability comparisons are most commonly done using the YCSB benchmark. ## Handling relational data Since most NoSQL databases lack the ability to perform joins in queries, the database schema generally needs to be designed differently. There are three main techniques for handling relational data in a NoSQL database. (See table join and ACID support for NoSQL databases that support joins.) ### Multiple queries Instead of retrieving all the data with one query, it is common to do several queries to get the desired data.
https://en.wikipedia.org/wiki/NoSQL
passage: ### Function The initial stage of visual processing within the visual cortex, known as V1, plays a fundamental role in shaping our perception of the visual world. V1 possesses a meticulously defined map, referred to as the retinotopic map, which intricately organizes spatial information from the visual field. In humans, the upper bank of the calcarine sulcus in the occipital lobe robustly responds to the lower half of the visual field, while the lower bank responds to the upper half. This retinotopic mapping conceptually represents a projection of the visual image from the retina to V1. The importance of this retinotopic organization lies in its ability to preserve spatial relationships present in the external environment. Neighboring neurons in V1 exhibit responses to adjacent portions of the visual field, creating a systematic representation of the visual scene. This mapping extends both vertically and horizontally, ensuring the conservation of both horizontal and vertical relationships within the visual input. Moreover, the retinotopic map demonstrates a remarkable degree of plasticity, adapting to alterations in visual experience. Studies have revealed that changes in sensory input, such as those induced by visual training or deprivation, can lead to shifts in the retinotopic map. Beyond its spatial processing role, the retinotopic map in V1 establishes connections with other visual areas, forming a network crucial for integrating diverse visual features and constructing a coherent visual percept. This dynamic mapping mechanism is indispensable for our ability to navigate and interpret the visual world effectively.
https://en.wikipedia.org/wiki/Visual_cortex
passage: if and only if $$ R \cap S \in \mathcal{F}. $$ ### Free or principal If $$ P $$ is any non-empty family of sets then the kernel of $$ P $$ is the intersection of all sets in $$ P: $$ $$ \operatorname{ker} P := \bigcap_{B \in P} B. $$ A non-empty family of sets $$ P $$ is called: - free if $$ \operatorname{ker} P = \varnothing $$ and fixed otherwise (that is, if $$ \operatorname{ker} P \neq \varnothing $$ ). - principal if $$ \operatorname{ker} P \in P. $$ - principal at a point if $$ \operatorname{ker} P \in P $$ and $$ \operatorname{ker} P $$ is a singleton set; in this case, if $$ \operatorname{ker} P = \{x\} $$ then $$ P $$ is said to be principal at $$ x. $$ If a family of sets $$ P $$ is fixed then $$ P $$ is ultra if and only if some element of $$ P $$ is a singleton set, in which case $$ P $$ will necessarily be a prefilter. Every principal prefilter is fixed, so a principal prefilter $$ P $$ is ultra if and only if $$ \operatorname{ker} P $$ is a singleton set. A singleton set is ultra if and only if its sole element is also a singleton set.
https://en.wikipedia.org/wiki/Ultrafilter_on_a_set
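A finite toy example (ours) helps fix the terminology defined above: on $$ X = \{1,2,3\} $$, the family of all subsets containing the point 1 has kernel $$ \{1\} $$, so it is fixed, principal, and principal at 1, and on a finite set it is even an ultrafilter.

```python
# Principal (fixed) family on a three-element set, with its kernel and the
# "ultra" property (every subset or its complement belongs to the family).
from itertools import chain, combinations

X = {1, 2, 3}
subsets = [frozenset(c) for c in chain.from_iterable(
    combinations(sorted(X), k) for k in range(len(X) + 1))]

P = [S for S in subsets if 1 in S]            # all subsets containing the point 1
print(frozenset.intersection(*P))             # kernel: frozenset({1})
print(all((S in P) != (frozenset(X - S) in P) for S in subsets))   # True
```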
passage: A function $$ f:E \to M $$ is called a càdlàg function if, for every $$ t \in E $$ , - the left limit $$ f(t-) := \lim_{s \to t^{-}}f(s) $$ exists; and - the right limit $$ f(t+) := \lim_{s \to t^{+}}f(s) $$ exists and equals $$ f(t) $$ . That is, $$ f $$ is right-continuous with left limits. ## Examples - All functions continuous on a subset of the real numbers are càdlàg functions on that subset. - As a consequence of their definition, all cumulative distribution functions are càdlàg functions. For instance, the cumulative distribution function at point $$ r $$ corresponds to the probability of being lower than or equal to $$ r $$ , namely $$ \mathbb{P}[X\leq r] $$ . In other words, the semi-open interval of concern for a two-tailed distribution $$ (-\infty, r] $$ is right-closed. - The right derivative $$ f^\prime_+ $$ of any convex function $$ f $$ defined on an open interval is an increasing càdlàg function.
https://en.wikipedia.org/wiki/C%C3%A0dl%C3%A0g
passage: Neurolinguistics Neurolinguistics is the study of the neural mechanisms in the human brain that control the comprehension, production, and acquisition of language. Neuro-ophthalmology Neuro-ophthalmology is an academically oriented subspecialty that merges the fields of neurology and ophthalmology, often dealing with complex systemic diseases that have manifestations in the visual system. Neurophysics Neurophysics is the branch of biophysics dealing with the development and use of physical methods to gain information about the nervous system. Neurophysiology Neurophysiology is the study of the structure and function of the nervous system, generally using physiological techniques that include measurement and stimulation with electrodes or optically with ion- or voltage-sensitive dyes or light-sensitive channels. Neuropsychology Neuropsychology is a discipline that resides under the umbrellas of both psychology and neuroscience, and is involved in activities in the arenas of both basic science and applied science. In psychology, it is most closely associated with biopsychology, clinical psychology, cognitive psychology, and developmental psychology. In neuroscience, it is most closely associated with the cognitive, behavioral, social, and affective neuroscience areas. In the applied and medical domain, it is related to neurology and psychiatry. Gluck, Mark A.; Mercado, Eduardo; Myers, Catherine E. (2016). Learning and Memory: From Brain to Behavior. New York/NY, USA: Worth Publishers. p. 57. ISBN 978-1-319-15405-9.
https://en.wikipedia.org/wiki/Neuroscience
passage: A clinical decision support system (CDSS) is a health information technology that provides clinicians, staff, patients, and other individuals with knowledge and person-specific information to improve health and health care. CDSS encompasses a variety of tools to enhance decision-making in the clinical workflow. These tools include computerized alerts and reminders to care providers and patients, clinical guidelines, condition-specific order sets, focused patient data reports and summaries, documentation templates, diagnostic support, and contextually relevant reference information, among other tools. CDSSs constitute a major topic in artificial intelligence in medicine. ## Characteristics A clinical decision support system is an active knowledge system that uses variables of patient data to produce advice regarding health care. This implies that a CDSS is simply a decision support system focused on using knowledge management. ### Purpose The main purpose of modern CDSS is to assist clinicians at the point of care. This means that clinicians interact with a CDSS to help analyze patient data and reach a diagnosis for different diseases. In the early days, CDSSs were conceived as tools that would literally make decisions for the clinician. The clinician would input the information and wait for the CDSS to output the "right" choice, and the clinician would simply act on that output. However, the modern methodology of using CDSSs to assist means that the clinician interacts with the CDSS, utilizing both their own knowledge and the CDSS's, to analyse the patient's data better than either the human or the CDSS could on their own.
https://en.wikipedia.org/wiki/Clinical_decision_support_system
passage: At point $$ \vec C_a(a) $$ the involute is not regular (because $$ | \vec C_a'(a)|=0 $$ ), and from $$ \; \vec C_a'(s)\cdot\vec c'(s)=0 \; $$ follows: - The normal of the involute at point $$ \vec C_a(s) $$ is the tangent of the given curve at point $$ \vec c(s) $$ . - The involutes are parallel curves, because of $$ \vec C_a(s)=\vec C_0(s)+a\vec c'(s) $$ and the fact that $$ \vec c'(s) $$ is the unit normal at $$ \vec C_0(s) $$ . The family of involutes and the family of tangents to the original curve make up an orthogonal coordinate system. Consequently, one may construct involutes graphically. First, draw the family of tangent lines. Then, an involute can be constructed by always staying orthogonal to the tangent line passing the point. ### Cusps There are generically two types of cusps in involutes. The first type is at the point where the involute touches the curve itself. This is a cusp of order 3/2. The second type is at the point where the curve has an inflection point. This is a
https://en.wikipedia.org/wiki/Involute
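A concrete instance (our own parametrization, not from the article) is the involute of the unit circle $$ \vec c(s)=(\cos s, \sin s) $$ taken from $$ a=0 $$, namely $$ \vec C_0(s)=\vec c(s) - s\,\vec c'(s) $$; the quick numeric check below confirms the orthogonality property stated above, that the involute's tangent is perpendicular to the circle's tangent at the corresponding parameter.

```python
# Involute of the unit circle: check C0'(s) . c'(s) = 0 numerically.
import numpy as np

s = np.linspace(0.1, 4.0, 400)
c = np.stack([np.cos(s), np.sin(s)], axis=1)
cp = np.stack([-np.sin(s), np.cos(s)], axis=1)        # unit tangent of the circle
C0 = c - s[:, None] * cp                              # involute points

C0p = np.gradient(C0, s, axis=0)                      # numerical tangent of involute
print(np.max(np.abs(np.sum(C0p * cp, axis=1))))       # ~0, up to finite differences
```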
passage: The inequality $$ (a + b)^p \leq a^p + b^p, $$ valid for $$ a, b \geq 0, $$ implies that $$ N_p(f + g) \leq N_p(f) + N_p(g) $$ and so the function $$ d_p(f ,g) = N_p(f - g) = \|f - g\|_p^p $$ is a metric on $$ L^p(\mu). $$ The resulting metric space is complete. In this setting $$ L^p $$ satisfies a reverse Minkowski inequality, that is for $$ u, v \in L^p $$ $$ \Big\||u| + |v|\Big\|_p \geq \|u\|_p + \|v\|_p $$ This result may be used to prove Clarkson's inequalities, which are in turn used to establish the uniform convexity of the spaces $$ L^p $$ for $$ 1 < p < \infty $$ . The space $$ L^p $$ for $$ 0 < p < 1 $$ is an F-space: it admits a complete translation-invariant metric with respect to which the vector space operations are continuous.
https://en.wikipedia.org/wiki/Lp_space
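A finite-dimensional special case (ours: counting measure on five points, p = 1/2) already exhibits the reverse Minkowski inequality quoted above for nonnegative vectors.

```python
# Reverse Minkowski for 0 < p < 1 on nonnegative vectors (counting measure).
import numpy as np

p = 0.5
norm_p = lambda v: np.sum(np.abs(v) ** p) ** (1.0 / p)

rng = np.random.default_rng(0)
ok = all(norm_p(u + v) >= norm_p(u) + norm_p(v) - 1e-9
         for u, v in (rng.random((2, 5)) for _ in range(1000)))
print(ok)   # True
```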
passage: A decision problem $$ A $$ can be solved in time $$ f(n) $$ if there exists a Turing machine operating in time $$ f(n) $$ that solves the problem. Since complexity theory is interested in classifying problems based on their difficulty, one defines sets of problems based on some criteria. For instance, the set of problems solvable within time $$ f(n) $$ on a deterministic Turing machine is then denoted by DTIME( $$ f(n) $$ ). Analogous definitions can be made for space requirements. Although time and space are the most well-known complexity resources, any complexity measure can be viewed as a computational resource. Complexity measures are very generally defined by the Blum complexity axioms. Other complexity measures used in complexity theory include communication complexity, circuit complexity, and decision tree complexity. The complexity of an algorithm is often expressed using big O notation. ### Best, worst and average case complexity The best, worst and average case complexity refer to three different ways of measuring the time complexity (or any other complexity measure) of different inputs of the same size. Since some inputs of size $$ n $$ may be faster to solve than others, we define the following complexities: 1. Best-case complexity: This is the complexity of solving the problem for the best input of size $$ n $$ . 1. Average-case complexity: This is the complexity of solving the problem on an average. This complexity is only defined with respect to a probability distribution over the inputs.
https://en.wikipedia.org/wiki/Computational_complexity_theory
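The three notions just listed are easy to see on a concrete algorithm; the toy example below (ours) counts comparisons made by linear search, whose best case is 1 comparison, worst case is n, and average case is about n/2 under a uniform distribution of target positions.

```python
# Best, worst, and average case of linear search, measured in comparisons.
def linear_search(xs, target):
    comparisons = 0
    for i, x in enumerate(xs):
        comparisons += 1
        if x == target:
            return i, comparisons
    return -1, comparisons

xs = list(range(100))
print(linear_search(xs, 0)[1])     # best case: 1
print(linear_search(xs, -1)[1])    # worst case: 100 (target absent)
print(sum(linear_search(xs, t)[1] for t in xs) / len(xs))   # average: 50.5
```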
passage: ## Multiplicative structure In a ring of integers, every element has a factorization into irreducible elements, but the ring need not have the property of unique factorization: for example, in the ring of integers $$ \mathbb{Z}[\sqrt{-5}] $$, the element 6 has two essentially different factorizations into irreducibles: $$ 6 = 2 \cdot 3 = (1 + \sqrt{-5})(1 - \sqrt{-5}). $$ A ring of integers is always a Dedekind domain, and so has unique factorization of ideals into prime ideals. The units of a ring of integers form a finitely generated abelian group by Dirichlet's unit theorem. The torsion subgroup consists of the roots of unity contained in the ring. A set of torsion-free generators is called a set of fundamental units. ## Generalization One defines the ring of integers of a non-archimedean local field $$ F $$ as the set of all elements of $$ F $$ with absolute value at most 1; this is a ring because of the strong triangle inequality. If $$ F $$ is the completion of an algebraic number field, its ring of integers is the completion of the latter's ring of integers. The ring of integers of an algebraic number field may be characterised as the elements which are integers in every non-archimedean completion. For example, the $$ p $$-adic integers are the ring of integers of the $$ p $$-adic numbers.
https://en.wikipedia.org/wiki/Ring_of_integers
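The two factorizations of 6 quoted above can be multiplied out exactly by representing $$ a+b\sqrt{-5} $$ as an integer pair; this tiny check (ours) confirms both products give 6.

```python
# Elements a + b*sqrt(-5) as pairs (a, b); exact multiplication in Z[sqrt(-5)].
def mul(x, y):
    return (x[0] * y[0] - 5 * x[1] * y[1], x[0] * y[1] + x[1] * y[0])

print(mul((2, 0), (3, 0)))     # (6, 0)
print(mul((1, 1), (1, -1)))    # (6, 0)
```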
passage: $$ T(1)=0 $$ , and for any $$ u,v\in V $$ , $$ [T,Y(u,z)]v = TY(u,z)v - Y(u,z)Tv = \frac{d}{dz}Y(u,z)v $$ - Locality (Jacobi identity, or Borcherds identity). For any $$ u,v\in V $$ , there exists a positive integer $$ N $$ such that: $$ (z-x)^N Y(u, z) Y(v, x) = (z-x)^N Y(v, x) Y(u, z). $$ ##### Equivalent formulations of locality axiom The locality axiom has several equivalent formulations in the literature, e.g., Frenkel–Lepowsky–Meurman introduced the Jacobi identity: $$ \forall u,v,w\in V $$ , $$ \begin{aligned}&z^{-1}\delta\left(\frac{x-y}{z}\right)Y(u,x)Y(v,y)w - z^{-1}\delta\left(\frac{-y+x}{z}\right)Y(v,y)Y(u,x)w \\&= y^{-1}\delta\left(\frac{x-z}{y}\right)Y(Y(u,z)v,y)w\end{aligned}, $$ where we define the formal delta series by: $$
https://en.wikipedia.org/wiki/Vertex_operator_algebra
passage: $$ where $$ \operatorname{ad}(\mathbf{X})\mathbf{Y} = [\mathbf{X}, \mathbf{Y}] = \mathbf{X}\mathbf{Y} - \mathbf{Y}\mathbf{X} $$ and for orientation, if $$ \operatorname{det} \mathbf{Y} \ne 0 $$ then $$ \operatorname{ad}(\mathbf{X})\mathbf{Y}\,\mathbf{Y}^{-1} = \mathbf{X} - \mathbf{Y}\mathbf{X}\mathbf{Y}^{-1} ~. $$ $$ B(\mathbf{X}, \mathbf{Y}) $$ is called the Killing form; it is used to classify Lie algebras.
https://en.wikipedia.org/wiki/Trace_%28linear_algebra%29
passage: The algorithm is non-local in the sense that a single sweep updates a collection of spin variables based on the Fortuin–Kasteleyn representation. The update is done on a "cluster" of spin variables connected by open bond variables that are generated through a percolation process, based on the interaction states of the spins. Consider a typical ferromagnetic Ising model with only nearest-neighbor interaction. - Starting from a given configuration of spins, we associate to each pair of nearest neighbours on sites $$ n,m $$ a random variable $$ b_{n,m}\in \lbrace 0,1\rbrace $$ which is interpreted in the following way: if $$ b_{n,m}=0 $$ then there is no link between the sites $$ n $$ and $$ m $$ (the bond is closed); if $$ b_{n,m}=1 $$ then there is a link connecting the spins $$ \sigma_n \text{ and } \sigma_m $$ (the bond is open).
https://en.wikipedia.org/wiki/Swendsen%E2%80%93Wang_algorithm
passage: After each step of iteration, one computes $$ \boldsymbol{y}_i=\boldsymbol{H}_i^{-1}(\lVert\boldsymbol{r}_0\rVert_2\boldsymbol{e}_1) $$ and the new iterate $$ \boldsymbol{x}_i=\boldsymbol{x}_0+\boldsymbol{V}_i\boldsymbol{y}_i $$ . ### The direct Lanczos method For the rest of discussion, we assume that $$ \boldsymbol{A} $$ is symmetric positive-definite. With symmetry of $$ \boldsymbol{A} $$ , the upper Hessenberg matrix $$ \boldsymbol{H}_i=\boldsymbol{V}_i^\mathrm{T}\boldsymbol{AV}_i $$ becomes symmetric and thus tridiagonal. It then can be more clearly denoted by $$ \boldsymbol{H}_i=\begin{bmatrix} a_1 & b_2\\ b_2 & a_2 & b_3\\ & \ddots & \ddots & \ddots\\ & & b_{i-1} & a_{i-1} & b_i\\ & & & b_i & a_i \end{bmatrix}\text{.} $$ This enables a short three-term recurrence for $$ \boldsymbol{v}_i $$ in the iteration, and the Arnoldi iteration is reduced to the Lanczos iteration.
https://en.wikipedia.org/wiki/Derivation_of_the_conjugate_gradient_method
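The reduction described above rests on the fact that for symmetric $$ \boldsymbol{A} $$ the upper Hessenberg matrix $$ \boldsymbol{V}_i^\mathrm{T}\boldsymbol{AV}_i $$ is actually tridiagonal; the small numerical experiment below (ours, with a random SPD matrix and a plain Gram–Schmidt Arnoldi loop) checks exactly that.

```python
# For SPD A, the Arnoldi/Krylov basis gives a symmetric tridiagonal V^T A V.
import numpy as np

rng = np.random.default_rng(0)
n, m = 12, 6
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                    # symmetric positive-definite

V = np.zeros((n, m))
v0 = rng.standard_normal(n)
V[:, 0] = v0 / np.linalg.norm(v0)
for j in range(1, m):
    w = A @ V[:, j - 1]
    w -= V[:, :j] @ (V[:, :j].T @ w)           # orthogonalize against previous basis
    V[:, j] = w / np.linalg.norm(w)

H = V.T @ A @ V
print(np.max(np.abs(np.triu(H, 2))) < 1e-8,    # no entries above the superdiagonal
      np.max(np.abs(H - H.T)) < 1e-8)          # and H is symmetric -> tridiagonal
```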
passage: It follows from their being composed of fixed proportions of two or more types of atoms that chemical compounds can be converted, via chemical reaction, into compounds or substances each having fewer atoms. A chemical formula is a way of expressing information about the proportions of atoms that constitute a particular chemical compound, using chemical symbols for the chemical elements, and subscripts to indicate the number of atoms involved. For example, water is composed of two hydrogen atoms bonded to one oxygen atom: the chemical formula is H2O. In the case of non-stoichiometric compounds, the proportions may be reproducible with regard to their preparation, and give fixed proportions of their component elements, but proportions that are not integral [e.g., for palladium hydride, PdHx (0.02 < x < 0.58)]. Chemical compounds have a unique and defined chemical structure held together in a defined spatial arrangement by chemical bonds. Chemical compounds can be molecular compounds held together by covalent bonds, salts held together by ionic bonds, intermetallic compounds held together by metallic bonds, or the subset of chemical complexes that are held together by coordinate covalent bonds. Pure chemical elements are generally not considered chemical compounds, failing the two or more atom requirement, though they often consist of molecules composed of multiple atoms (such as in the diatomic molecule H2, or the polyatomic molecule S8, etc.). Many chemical compounds have a unique numerical identifier assigned by the Chemical Abstracts Service (CAS): its CAS number.
https://en.wikipedia.org/wiki/Chemical_compound
passage: These identities have natural interpretations in terms of linear algebra. Recall that $$ \tbinom{m}{r}_q $$ counts r-dimensional subspaces $$ V\subset \mathbb{F}_q^m $$ , and let $$ \pi:\mathbb{F}_q^m \to \mathbb{F}_q^{m-1} $$ be a projection with one-dimensional nullspace $$ E_1 $$ . The first identity comes from the bijection which takes $$ V\subset \mathbb{F}_q^m $$ to the subspace $$ V' = \pi(V)\subset \mathbb{F}_q^{m-1} $$ ; in case $$ E_1\not\subset V $$ , the space $$ V' $$ is r-dimensional, and we must also keep track of the linear function $$ \phi:V'\to E_1 $$ whose graph is $$ V $$ ; but in case $$ E_1\subset V $$ , the space $$ V' $$ is (r−1)-dimensional, and we can reconstruct $$ V=\pi^{-1}(V') $$ without any extra information. The second identity has a similar interpretation, taking $$ V $$ to $$ V' = V\cap E_{m-1} $$ for an (m−1)-dimensional space $$ E_{m-1} $$ , again splitting into two cases.
https://en.wikipedia.org/wiki/Gaussian_binomial_coefficient
passage: Windows NT 4.0 and its predecessors supported PowerPC, DEC Alpha and MIPS R4000 (although some of the platforms implement 64-bit computing, the OS treated them as 32-bit). Windows 2000 dropped support for all platforms, except the third generation x86 (known as IA-32) or newer in 32-bit mode. The client line of the Windows NT family still ran on IA-32 up to Windows 10 (the server line of the Windows NT family still ran on IA-32 up to Windows Server 2008). With the introduction of the Intel Itanium architecture (IA-64), Microsoft released new versions of Windows to support it. Itanium versions of Windows XP and Windows Server 2003 were released at the same time as their mainstream x86 counterparts. Windows XP 64-Bit Edition (Version 2003), released in 2003, is the last Windows client operating system to support Itanium. Windows Server line continues to support this platform until Windows Server 2012; Windows Server 2008 R2 is the last Windows operating system to support Itanium architecture. On April 25, 2005, Microsoft released Windows XP Professional x64 Edition and Windows Server 2003 x64 editions to support x86-64 (or simply x64), the 64-bit version of x86 architecture. Windows Vista was the first client version of Windows NT to be released simultaneously in IA-32 and x64 editions. As of 2024, x64 is still supported. An edition of Windows 8 known as Windows RT was specifically created for computers with ARM architecture, and while ARM is still used for Windows smartphones with Windows 10, tablets with Windows RT will not be updated.
https://en.wikipedia.org/wiki/Microsoft_Windows
passage: The excess kurtosis is given by $$ \gamma_2=\frac{-6\Gamma_1^4+12\Gamma_1^2\Gamma_2-3\Gamma_2^2-4\Gamma_1 \Gamma_3 +\Gamma_4}{[\Gamma_2-\Gamma_1^2]^2} $$ where $$ \Gamma_i=\Gamma(1+i/k) $$ . The kurtosis excess may also be written as: $$ \gamma_2=\frac{\lambda^4\Gamma(1+\frac{4}{k})-4\gamma_1\sigma^3\mu-6\mu^2\sigma^2-\mu^4}{\sigma^4}-3. $$ ### Moment generating function A variety of expressions are available for the moment generating function of X itself.
https://en.wikipedia.org/wiki/Weibull_distribution
passage: This algebra has a unit element if and only if the quiver has only finitely many vertices. In this case, the modules over the path algebra are naturally identified with the representations of the quiver. If the quiver has infinitely many vertices, then the path algebra has an approximate identity given by $$ e_F:=\sum_{v\in F} 1_v $$ where $$ F $$ ranges over finite subsets of the vertex set of the quiver. If the quiver has finitely many vertices and arrows, and the end vertex and starting vertex of any path are always distinct (i.e. the quiver has no oriented cycles), then the path algebra is a finite-dimensional hereditary algebra over the base field. Conversely, if the base field is algebraically closed, then any finite-dimensional, hereditary, associative algebra over it is Morita equivalent to the path algebra of its Ext quiver (i.e., they have equivalent module categories). ## Representations of quivers A representation of a quiver is an association of a module to each vertex of the quiver, and a morphism between the corresponding modules for each arrow. A representation of a quiver is said to be trivial if $$ V(x)=0 $$ for all vertices $$ x $$ in the quiver. A morphism between representations of the quiver is a collection of linear maps such that for every arrow $$ a $$ from $$ x $$ to $$ y $$, $$ V'(a)f(x) = f(y)V(a), $$ i.e. the squares that the maps form with the arrows of the two representations all commute.
https://en.wikipedia.org/wiki/Quiver_%28mathematics%29
passage: Any family of sets that satisfies (2–4) is called a filter (an example: the complements to the finite sets, it is called the Fréchet filter and it is used in the usual limit theory). If (1) also holds, U is called an ultrafilter (because you can add no more sets to it without breaking it). The only explicitly known example of an ultrafilter is the family of sets containing a given element (in our case, say, the number 10). Such ultrafilters are called trivial, and if we use it in our construction, we come back to the ordinary real numbers. Any ultrafilter containing a finite set is trivial. It is known that any filter can be extended to an ultrafilter, but the proof uses the axiom of choice. The existence of a nontrivial ultrafilter (the ultrafilter lemma) can be added as an extra axiom, as it is weaker than the axiom of choice. Now if we take a nontrivial ultrafilter (which is an extension of the Fréchet filter) and do our construction, we get the hyperreal numbers as a result. If $$ f $$ is a real function of a real variable $$ x $$ then $$ f $$ naturally extends to a hyperreal function of a hyperreal variable by composition: $$ f(\{x_n\})=\{f(x_n)\} $$ where $$ \{ \dots\} $$ means "the equivalence class of the sequence $$ \dots $$
https://en.wikipedia.org/wiki/Hyperreal_number
passage: ### Probability density function of the sum of two terms Next we compute the density of the sum of two independent variables, each having the above density. The density of the sum is the convolution of the above density with itself. The sum of two variables has mean 0. The density shown in the figure at right has been rescaled by $$ \sqrt{2} $$ , so that its standard deviation is 1. This density is already smoother than the original. There are obvious lumps, which correspond to the intervals on which the original density was defined. ### Probability density function of the sum of three terms We then compute the density of the sum of three independent variables, each having the above density. The density of the sum is the convolution of the first density with the second. The sum of three variables has mean 0. The density shown in the figure at right has been rescaled by , so that its standard deviation is 1. This density is even smoother than the preceding one. The lumps can hardly be detected in this figure. ### Probability density function of the sum of four terms Finally, we compute the density of the sum of four independent variables, each having the above density. The density of the sum is the convolution of the first density with the third (or the second density with itself). The sum of four variables has mean 0.
https://en.wikipedia.org/wiki/Illustration_of_the_central_limit_theorem
passage: \end{align} $$ ## Properties ### Conditional entropy equals zero $$ \Eta(Y|X)=0 $$ if and only if the value of $$ Y $$ is completely determined by the value of $$ X $$ . ### Conditional entropy of independent random variables Conversely, $$ \Eta(Y|X) = \Eta(Y) $$ if and only if $$ Y $$ and $$ X $$ are independent random variables. ### Chain rule Assume that the combined system determined by two random variables $$ X $$ and $$ Y $$ has joint entropy $$ \Eta(X,Y) $$ , that is, we need $$ \Eta(X,Y) $$ bits of information on average to describe its exact state. Now if we first learn the value of $$ X $$ , we have gained $$ \Eta(X) $$ bits of information. Once $$ X $$ is known, we only need $$ \Eta(X,Y)-\Eta(X) $$ bits to describe the state of the whole system.
https://en.wikipedia.org/wiki/Conditional_entropy
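The chain rule described above can be confirmed on any small joint distribution; the toy check below (our own numbers) computes $$ \Eta(X,Y) $$ , $$ \Eta(X) $$ and $$ \Eta(Y|X) $$ directly from a joint pmf.

```python
# Verify H(X, Y) = H(X) + H(Y | X) on a toy joint distribution.
import math

p = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}      # joint pmf of (X, Y)
H = lambda probs: -sum(q * math.log2(q) for q in probs if q > 0)

H_XY = H(p.values())
p_x = {x: sum(q for (a, _), q in p.items() if a == x) for x in (0, 1)}
H_X = H(p_x.values())
H_Y_given_X = -sum(q * math.log2(q / p_x[x]) for (x, _), q in p.items())
print(abs(H_XY - (H_X + H_Y_given_X)) < 1e-12)                 # True
```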
passage: There are often periodic pulses of contraction in embryonic morphogenesis. A model called the cell state splitter involves alternating cell contraction and expansion, initiated by a bistable organelle at the apical end of each cell. The organelle consists of microtubules and microfilaments in mechanical opposition. It responds to local mechanical perturbations caused by morphogenetic movements. These then trigger traveling embryonic differentiation waves of contraction or expansion over presumptive tissues that determine cell type and is followed by cell differentiation. The cell state splitter was first proposed to explain neural plate morphogenesis during gastrulation of the axolotl and the model was later generalized to all of morphogenesis. ### Branching morphogenesis In the development of the lung a bronchus branches into bronchioles forming the respiratory tree. The branching is a result of the tip of each bronchiolar tube bifurcating, and the process of branching morphogenesis forms the bronchi, bronchioles, and ultimately the alveoli. Branching morphogenesis is also evident in the ductal formation of the mammary gland. Primitive duct formation begins in development, but the branching formation of the duct system begins later in response to estrogen during puberty and is further refined in line with mammary gland development. ## Cancer morphogenesis Cancer can result from disruption of normal morphogenesis, including both tumor formation and tumor metastasis. Mitochondrial dysfunction can result in increased cancer risk due to disturbed morphogen signaling.
https://en.wikipedia.org/wiki/Morphogenesis
passage: Including such additional explanatory variables using regression or anova reduces the otherwise unexplained variance, and commonly yields greater power to detect differences than do two-sample t-tests. ## Software implementations Many spreadsheet programs and statistics packages, such as QtiPlot, LibreOffice Calc, Microsoft Excel, SAS, SPSS, Stata, DAP, gretl, R, Python, PSPP, Wolfram Mathematica, MATLAB and Minitab, include implementations of Student's t-test.

| Language/Program | Function |
|---|---|
| Microsoft Excel pre 2010 | `TTEST(array1, array2, tails, type)` |
| Microsoft Excel 2010 and later | `T.TEST(array1, array2, tails, type)` |
| Apple Numbers | `TTEST(sample-1-values, sample-2-values, tails, test-type)` |
| LibreOffice Calc | `TTEST(Data1; Data2; Mode; Type)` |
| Google Sheets | `TTEST(range1, range2, tails, type)` |
| Python | `scipy.stats.ttest_ind(a, b, equal_var=True)` |
| MATLAB | `ttest(data1, data2)` |
| Mathematica | `TTest[{data1,data2}]` |
| R | `t.test(data1, data2, var.equal=TRUE)` |
| SAS | `PROC TTEST` |
| Java | `tTest(sample1, sample2)` |
| Julia | `EqualVarianceTTest(sample1, sample2)` |
| Stata | `ttest data1 == data2` |
https://en.wikipedia.org/wiki/Student%27s_t-test
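For instance, the Python entry in the table can be exercised as follows (the sample values are invented):

```python
import numpy as np
from scipy import stats

# Two small illustrative samples.
a = np.array([30.02, 29.99, 30.11, 29.97, 30.01, 29.99])
b = np.array([29.89, 29.93, 29.72, 29.98, 30.02, 29.98])

# Student's t-test with equal variances assumed; pass equal_var=False
# to run Welch's t-test instead.
t_stat, p_value = stats.ttest_ind(a, b, equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```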
passage: However, photometry does at least allow a qualitative characterization of a redshift. For example, if a Sun-like spectrum had a redshift of $$ z = 1 $$ , it would be brightest in the infrared (1000nm) rather than at the blue-green (500nm) color associated with the peak of its blackbody spectrum, and the light intensity will be reduced in the filter by a factor of four, $$ (1+z)^2 $$ . Both the photon count rate and the photon energy are redshifted. (See K correction for more details on the photometric consequences of redshift.) Determining the redshift of an object with spectroscopy requires the wavelength of the emitted light in the rest frame of the source. Astronomical applications rely on distinct spectral lines. Redshifts cannot be calculated by looking at unidentified features whose rest-frame frequency is unknown, or with a spectrum that is featureless or white noise (random fluctuations in a spectrum). Thus gamma-ray bursts themselves cannot be used for reliable redshift measurements, but optical afterglow associated with the burst can be analyzed for redshifts. ### Local observations In nearby objects (within our Milky Way galaxy) observed redshifts are almost always related to the line-of-sight velocities associated with the objects being observed. Observations of such redshifts and blueshifts enable astronomers to measure velocities and parametrize the masses of the orbiting stars in spectroscopic binaries.
https://en.wikipedia.org/wiki/Redshift
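A tiny illustrative helper for the spectroscopic case; the wavelengths are chosen so that z = 1, matching the wavelength-doubling example above, and the function name is not from any standard library.

```python
# Spectroscopic redshift: z = (lambda_observed - lambda_rest) / lambda_rest.
def redshift(lambda_observed_nm: float, lambda_rest_nm: float) -> float:
    return (lambda_observed_nm - lambda_rest_nm) / lambda_rest_nm

z = redshift(1312.6, 656.3)                       # H-alpha observed at twice its rest wavelength
print(z)                                          # 1.0
print((1 + z) ** 2)                               # 4: the dimming factor quoted above
```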
passage: In that case, the lemma says that each closed 1-form on U is exact. This version can be seen using algebraic topology as follows. The rational Hurewicz theorem (or rather the real analog of that) says that $$ \operatorname{H}_1(U; \mathbb{R}) = 0 $$ since U is simply connected. Since $$ \mathbb{R} $$ is a field, the k-th cohomology $$ \operatorname{H}^k(U; \mathbb{R}) $$ is the dual vector space of the k-th homology $$ \operatorname{H}_k(U; \mathbb{R}) $$ . In particular, $$ \operatorname{H}^1(U; \mathbb{R}) = 0. $$ By the de Rham theorem (which follows from the Poincaré lemma for open balls), $$ \operatorname{H}^1(U; \mathbb{R}) $$ is the same as the first de Rham cohomology group (see §Implication to de Rham cohomology). Hence, each closed 1-form on U is exact. ## Poincaré lemma with compact support There is a version of Poincaré lemma for compactly supported differential forms: The usual proof in the non-compact case does not go through since the homotopy h is not proper. Thus, somehow a different argument is needed for the compact case.
https://en.wikipedia.org/wiki/Poincar%C3%A9_lemma
passage: For instance, 0.3 is equal to $$ \tfrac{3}{10} $$ , and 25.12 is equal to $$ \tfrac{2512}{100} $$ . Every rational number corresponds to a finite or a repeating decimal. Irrational numbers are numbers that cannot be expressed through the ratio of two integers. They are often required to describe geometric magnitudes. For example, if a right triangle has legs of the length 1 then the length of its hypotenuse is given by the irrational number $$ \sqrt 2 $$ . is another irrational number and describes the ratio of a circle's circumference to its diameter. The decimal representation of an irrational number is infinite without repeating decimals. The set of rational numbers together with the set of irrational numbers makes up the set of real numbers. The symbol of the real numbers is $$ \R $$ . Even wider classes of numbers include complex numbers and quaternions. ### Numeral systems A numeral is a symbol to represent a number and numeral systems are representational frameworks. They usually have a limited amount of basic numerals, which directly refer to certain numbers. The system governs how these basic numerals may be combined to express any number. Numeral systems are either positional or non-positional. All early numeral systems were non-positional. For non-positional numeral systems, the value of a digit does not depend on its position in the numeral. The simplest non-positional system is the unary numeral system.
https://en.wikipedia.org/wiki/Arithmetic
passage: S2 may split into two distinct sounds, either as a result of inspiration or different valvular or cardiac problems. Additional heart sounds may also be present and these give rise to gallop rhythms. A third heart sound, S3 usually indicates an increase in ventricular blood volume. A fourth heart sound S4 is referred to as an atrial gallop and is produced by the sound of blood being forced into a stiff ventricle. The combined presence of S3 and S4 give a quadruple gallop. Heart murmurs are abnormal heart sounds which can be either related to disease or benign, and there are several kinds. There are normally two heart sounds, and abnormal heart sounds can either be extra sounds, or "murmurs" related to the flow of blood between the sounds. Murmurs are graded by volume, from 1 (the quietest), to 6 (the loudest), and evaluated by their relationship to the heart sounds, position in the cardiac cycle, and additional features such as their radiation to other sites, changes with a person's position, the frequency of the sound as determined by the side of the stethoscope by which they are heard, and site at which they are heard loudest. Murmurs may be caused by damaged heart valves or congenital heart disease such as ventricular septal defects, or may be heard in normal hearts. A different type of sound, a pericardial friction rub can be heard in cases of pericarditis where the inflamed membranes can rub together. #### Blood tests Blood tests play an important role in the diagnosis and treatment of many cardiovascular conditions.
https://en.wikipedia.org/wiki/Heart
passage: It can be shown mathematically that one maximizes one's average winnings in this game by submitting a number of entries equal to the total number of entries of others. Of course, if others take this into account, then this strategy translates into a runaway reaction to unbounded number of entries being submitted. According to the magazine, the superrational thing was for each contestant to roll a simulated die with the number of sides equal to the number of expected responders (about 5% of the readership), and then send "1" if you roll "1". If all contestants had followed this strategy, it is possible that the magazine would have received a single postcard, with a "1", and would have had to pay a million dollars to the sender of that postcard. Reputedly the publisher and owners were very concerned about betting the company on a game. Although the magazine had previously discussed the concept of superrationality from which the above-mentioned algorithm can be deduced, many of the contestants submitted entries consisting of an astronomically large number (including several who entered a googolplex). Some took this game further by filling their postcards with mathematical expressions designed to evaluate to the largest possible number in the limited space allowed. As a result, the contestants caused the prize to become moot (as it would have been a minuscule fraction of a cent) and the magazine was unable to tell who won the prize.
https://en.wikipedia.org/wiki/Platonia_dilemma
passage: In the laboratory, slow muons are produced; and in the atmosphere, very fast-moving muons are introduced by cosmic rays. Taking the muon lifetime at rest as the laboratory value of 2.197 μs, the lifetime of a cosmic-ray-produced muon traveling at 98% of the speed of light is about five times longer, in agreement with observations. An example is Rossi and Hall (1941), who compared the population of cosmic-ray-produced muons at the top of a mountain to that observed at sea level. - The lifetime of particles produced in particle accelerators is longer due to time dilation. In such experiments, the "clock" is the time taken by processes leading to muon decay, and these processes take place in the moving muon at its own "clock rate", which is much slower than the laboratory clock. This is routinely taken into account in particle physics, and many dedicated measurements have been performed. For instance, in the muon storage ring at CERN the lifetime of muons circulating with γ = 29.327 was found to be dilated to 64.378 μs, confirming time dilation to an accuracy of 0.9 ± 0.4 parts per thousand. Doppler effect - The stated purpose by Ives and Stilwell (1938, 1941) of these experiments was to verify the time dilation effect, predicted by Larmor–Lorentz ether theory, due to motion through the ether using Einstein's suggestion that Doppler effect in canal rays would provide a suitable experiment.
https://en.wikipedia.org/wiki/Time_dilation
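A quick numerical check of the quoted muon figures, assuming only the standard Lorentz factor formula:

```python
import math

# Time dilation: a moving clock runs slow by gamma = 1 / sqrt(1 - v^2/c^2).
def gamma(beta: float) -> float:
    return 1.0 / math.sqrt(1.0 - beta ** 2)

rest_lifetime_us = 2.197
print(gamma(0.98))                         # ~5.03: about five times the rest lifetime
print(gamma(0.98) * rest_lifetime_us)      # ~11.0 microseconds for the cosmic-ray muon
print(29.327 * rest_lifetime_us)           # ~64.4 microseconds, as in the CERN storage ring
```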
passage: The expectation of this approximation technique is polynomial, as it is the expectation of a function of a binomial RV. The proof below illustrates that this achieves a uniform approximation of f. The crux of the proof is to (1) justify replacing an arbitrary point with a binomially chosen lattice point by concentration properties of a Binomial distribution, and (2) justify the inference from $$ x \approx X $$ to $$ f(x) \approx f(X) $$ by uniform continuity. #### Bernstein's proof Suppose K is a random variable distributed as the number of successes in n independent Bernoulli trials with probability x of success on each trial; in other words, K has a binomial distribution with parameters n and x. Then we have the expected value $$ \operatorname{\mathcal E}\left[\frac{K}{n}\right] = x\ $$ and $$ p(K) = {n \choose K} x^{K} \left( 1 - x \right)^{n - K} = b_{K,n}(x) $$ By the weak law of large numbers of probability theory, $$ \lim_{n \to \infty}{ P\left( \left| \frac{K}{n} - x \right|>\delta \right) } = 0 $$ for every δ > 0.
https://en.wikipedia.org/wiki/Bernstein_polynomial
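A short sketch of the approximation technique described here, evaluating the Bernstein polynomial as the binomial expectation of f(K/n); the test function and the values of n are arbitrary choices.

```python
import numpy as np
from math import comb

# B_n(f)(x) = sum_k f(k/n) * C(n,k) * x^k * (1-x)^(n-k)
#           = E[f(K/n)] with K ~ Binomial(n, x).
def bernstein(f, n, x):
    k = np.arange(n + 1)
    basis = np.array([comb(n, i) for i in k], dtype=float) * x ** k * (1 - x) ** (n - k)
    return np.sum(f(k / n) * basis)

f = lambda t: np.abs(t - 0.5)              # continuous but not smooth
for n in (10, 100, 1000):
    xs = np.linspace(0, 1, 201)
    err = max(abs(bernstein(f, n, x) - f(x)) for x in xs)
    print(f"n = {n:5d}, sup-norm error ~ {err:.4f}")   # shrinks as n grows (uniform convergence)
```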
passage: ### Fields and correlation functions Fields Given a simple representation $$ \rho $$ of the Lie algebra of $$ G $$ , an affine primary field $$ \Phi^\rho(z) $$ is a field that takes values in the representation space of $$ \rho $$ , such that $$ J^a(y) \Phi^\rho(z) = -\frac{\rho(t^a)\Phi^\rho(z)}{y-z} + O(1)\ . $$ An affine primary field is also a primary field for the Virasoro algebra that results from the Sugawara construction. The conformal dimension of the affine primary field is given in terms of the quadratic Casimir $$ C_2(\rho) $$ of the representation $$ \rho $$ (i.e. the eigenvalue of the quadratic Casimir element $$ K_{ab}t^at^b $$ where $$ K_{ab} $$ is the inverse of the matrix $$ \mathcal{K}(t^a,t^b) $$ of the Killing form) by $$ \Delta_\rho = \frac{C_2(\rho)}{2(k+h^\vee)}\ . $$ For example, in the $$ SU(2) $$ WZW model, the conformal dimension of a primary field of spin $$ j $$ is $$ \Delta_j = \frac{j(j+1)}{k+2} \ . $$ By
https://en.wikipedia.org/wiki/Wess%E2%80%93Zumino%E2%80%93Witten_model
passage: The Manin conjecture is a more precise statement that would describe the asymptotics of the number of rational points of bounded height on a Fano variety. ## Counting points over finite fields A variety over a finite field has only finitely many -rational points. The Weil conjectures, proved by André Weil in dimension 1 and by Pierre Deligne in any dimension, give strong estimates for the number of -points in terms of the Betti numbers of . For example, if is a smooth projective curve of genus over a field of order (a prime power), then $$ \big| |X(k)|-(q+1)\big| \leq 2g\sqrt{q}. $$ For a smooth hypersurface of degree in over a field of order , Deligne's theorem gives the bound: $$ \big| |X(k)|-(q^{n-1}+\cdots+q+1)\big| \leq \bigg( \frac{(d-1)^{n+1}+(-1)^{n+1}(d-1)}{d}\bigg) q^{(n-1)/2}. $$ There are also significant results about when a projective variety over a finite field has at least one -rational point. For example, the Chevalley–Warning theorem implies that any hypersurface of degree in over a finite field has a -rational point if .
https://en.wikipedia.org/wiki/Rational_point
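As a concrete, deliberately small illustration of the genus-1 case of the bound, one can count the points of an elliptic curve over a small prime field by brute force; the curve and the prime below are arbitrary choices.

```python
import math

# Count F_p-points of y^2 = x^3 + a*x + b (plus the point at infinity) and
# check the Hasse-Weil bound | #X(F_p) - (p+1) | <= 2*g*sqrt(p), g = 1.
def count_points(a: int, b: int, p: int) -> int:
    square_counts = {}
    for y in range(p):
        square_counts[y * y % p] = square_counts.get(y * y % p, 0) + 1
    total = 1                                    # the point at infinity
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        total += square_counts.get(rhs, 0)       # number of y with y^2 = rhs
    return total

p, a, b = 101, 2, 3                              # smooth: 4a^3 + 27b^2 != 0 mod p
n = count_points(a, b, p)
print(n, abs(n - (p + 1)) <= 2 * math.sqrt(p))   # bound holds
```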
passage: Recombination has been shown to occur between the minichromosomes. ### Human population genetic studies The near-absence of genetic recombination in mitochondrial DNA makes it a useful source of information for studying population genetics and evolutionary biology. Because all the mitochondrial DNA is inherited as a single unit, or haplotype, the relationships between mitochondrial DNA from different individuals can be represented as a gene tree. Patterns in these gene trees can be used to infer the evolutionary history of populations. The classic example of this is in human evolutionary genetics, where the molecular clock can be used to provide a recent date for mitochondrial Eve. This is often interpreted as strong support for a recent modern human expansion out of Africa. Another human example is the sequencing of mitochondrial DNA from Neanderthal bones. The relatively large evolutionary distance between the mitochondrial DNA sequences of Neanderthals and living humans has been interpreted as evidence for the lack of interbreeding between Neanderthals and modern humans. However, mitochondrial DNA reflects only the history of the females in a population. This can be partially overcome by the use of paternal genetic sequences, such as the non-recombining region of the Y-chromosome. Recent measurements of the molecular clock for mitochondrial DNA reported a value of 1 mutation every 7884 years dating back to the most recent common ancestor of humans and apes, which is consistent with estimates of mutation rates of autosomal DNA (about $$ 10^{-8} $$ per base per generation).
https://en.wikipedia.org/wiki/Mitochondrion
passage: ### Ellipses In an ellipse with major axis $$ 2a $$ and minor axis $$ 2b $$ , the vertices on the major axis have the smallest radius of curvature of any points, $$ b^2/a $$ , and the vertices on the minor axis have the largest radius of curvature of any points, $$ a^2/b $$ . The radius of curvature of an ellipse as a function of the geocentric coordinate $$ t $$ with $$ \tan t = \frac{y}{x} $$ is $$ R(t)= \frac{(b^2 \cos^2t + a^2\sin^2t)^{3/2}}{ab}. $$ It has its minima at $$ t=0 $$ and $$ t=180^\circ $$ and its maxima at $$ t= \pm 90^\circ $$ . ## Applications - For the use in differential geometry, see Cesàro equation. - For the radius of curvature of the Earth (approximated by an oblate ellipsoid); see also: arc measurement - Radius of curvature is also used in a three part equation for bending of beams. - Radius of curvature (optics) - Thin films technologies - Printed electronics - Minimum railway curve radius - AFM probe ### Stress in semiconductor structures Stress in the semiconductor structure involving evaporated thin films usually results from the thermal expansion (thermal stress) during the manufacturing process. Thermal stress occurs because film depositions are usually made above room temperature.
https://en.wikipedia.org/wiki/Radius_of_curvature
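A numerical check of the stated extreme values, using arbitrary semi-axes a and b:

```python
import numpy as np

# R(t) = (b^2 cos^2 t + a^2 sin^2 t)^(3/2) / (a*b); the minima and maxima
# should equal b^2/a and a^2/b respectively.
a, b = 5.0, 3.0
t = np.linspace(0, 2 * np.pi, 100001)
R = (b**2 * np.cos(t)**2 + a**2 * np.sin(t)**2) ** 1.5 / (a * b)
print(R.min(), b**2 / a)     # both ~1.8, attained at t = 0 and 180 degrees
print(R.max(), a**2 / b)     # both ~8.33, attained at t = +/-90 degrees
```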
passage: In other systems, such as Lynn Margulis's system of five kingdoms, the plants included just the land plants (Embryophyta), and Protoctista has a broader definition. Following publication of Whittaker's system, the five-kingdom model began to be commonly used in high school biology textbooks. But despite the development from two kingdoms to five among most scientists, some authors as late as 1975 continued to employ a traditional two-kingdom system of animals and plants, dividing the plant kingdom into subkingdoms Prokaryota (bacteria and cyanobacteria), Mycota (fungi and supposed relatives), and Chlorota (algae and land plants).

- Kingdom Monera
  - Branch Myxomonera
    - Phylum Cyanophyta
    - Phylum Myxobacteriae
  - Branch Mastigomonera
    - Phylum Eubacteriae
    - Phylum Actinomycota
    - Phylum Spirochaetae
- Kingdom Protista
  - Phylum Euglenophyta
  - Phylum Chrysophyta
  - Phylum Pyrrophyta
  - Phylum Hyphochytridiomycota
  - Phylum Plasmodiophoromycota
  - Phylum Sporozoa
  - Phylum Cnidosporidia
  - Phylum Zoomastigina
  - Phylum Sarcodina
  - Phylum Ciliophora
- Kingdom Plantae
  - Subkingdom Rhodophycophyta
    - Phylum Rhodophyta
  - Subkingdom Phaeophycophyta
    - Phylum Phaeophyta
  - Subkingdom Euchlorophyta
    - Branch Chlorophycophyta
      - Phylum Chlorophyta
      - Phylum Charophyta
https://en.wikipedia.org/wiki/Kingdom_%28biology%29
passage: ## Connectome Drosophila is one of the few animals (C. elegans being another) where detailed neural circuits (connectomes) of the brain and nerve cord are available. In May 2017 a paper published in bioRxiv presented an electron microscopy image stack of the whole adult female brain at synaptic resolution. Since then, additional Drosophila connectome datasets have been published. These datasets represent several complete maps of larval and adult brains at the synapse level, and analyses of their architecture. The larval brain and nerve cord consist of 3,016 neurons and 548,000 synapses. The Drosophila adult brain contains around 139,000 neurons and over 50 million synapses, and an adult ventral nerve cord has roughly 14,600 neurons and 45 million synapses. These datasets allow scientists to generate testable hypotheses about how the brain processes information and gives rise to behavior. ## Misconceptions Drosophila is sometimes referred to as a pest due to its tendency to live in human settlements where fermenting fruit is found. Flies may collect in homes, restaurants, stores, and other locations. The name and behavior of this species of fly have led to the misconception that it is a biological security risk in Australia and elsewhere. While other "fruit fly" species do pose a risk, D. melanogaster is attracted to fruit that is already rotting, rather than causing fruit to rot.
https://en.wikipedia.org/wiki/Drosophila_melanogaster
passage: The output of the attention unit for token $$ i $$ is the weighted sum of the value vectors of all tokens, weighted by $$ a_{ij} $$ , the attention from token $$ i $$ to each token. The attention calculation for all tokens can be expressed as one large matrix calculation using the softmax function, which is useful for training because the whole calculation reduces to highly optimized matrix operations. The matrices $$ Q $$ , $$ K $$ and $$ V $$ are defined as the matrices where the $$ i $$ th rows are vectors $$ q_i $$ , $$ k_i $$ , and $$ v_i $$ respectively. Then we can represent the attention as $$ \begin{align} \text{Attention}(Q, K, V) = \text{softmax}\left(\frac{QK^\mathrm{T}}{\sqrt{d_k}}\right)V \end{align} $$ where the softmax is applied over each of the rows of the matrix. The number of dimensions in a query vector is query size $$ d_{\text{query}} $$ and similarly for the key size $$ d_{\text{key}} $$ and value size $$ d_{\text{value}} $$ . The output dimension of an attention head is its head dimension $$ d_{\text{head}} $$ .
https://en.wikipedia.org/wiki/Transformer_%28deep_learning_architecture%29
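A minimal NumPy sketch of the formula above; the shapes and random inputs are illustrative, and real implementations add masking, batching, and multiple heads.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)      # for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)              # one row of attention weights per query token
    return softmax(scores, axis=-1) @ V          # weighted sum of value vectors

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))                      # 4 tokens, d_k = d_v = 8
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(attention(Q, K, V).shape)                  # (4, 8): one output vector per token
```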
passage: An equalizer can be used to correct or modify the frequency response of a loudspeaker system rather than designing the speaker itself to have the desired response. For instance, the Bose 901 speaker system does not use separate larger and smaller drivers to cover the bass and treble frequencies. Instead it uses nine drivers all of the same four-inch diameter, more akin to what one would find in a table radio. However, this speaker system is sold with an active equalizer. That equalizer must be inserted into the amplifier system so that the amplified signal that is finally sent to the speakers has its response increased at the frequencies where the response of these drivers falls off, and vice versa, producing the response intended by the manufacturer. Tone controls (usually designated "bass" and "treble") are simple shelving filters included in most hi-fi equipment for gross adjustment of the frequency balance. The bass control may be used, for instance, to increase the drum and bass parts at a dance party, or to reduce annoying bass sounds when listening to a person speaking. The treble control might be used to give the percussion a sharper or more "brilliant" sound, or can be used to cut such high frequencies when they have been overemphasized in the program material or simply to accommodate a listener's preference. A "rumble filter" is a high-pass (low cut) filter with a cutoff typically in the 20 to 40 Hz range; this is the low frequency end of human hearing.
https://en.wikipedia.org/wiki/Equalization_%28audio%29
passage: Automatic label placement, sometimes called text placement or name placement, comprises the computer methods of placing labels automatically on a map or chart. This is related to the typographic design of such labels. The typical features depicted on a geographic map are line features (e.g. roads), area features (countries, parcels, forests, lakes, etc.), and point features (villages, cities, etc.). In addition to depicting the map's features in a geographically accurate manner, it is of critical importance to place the names that identify these features, in a way that the reader knows instantly which name describes which feature. Automatic text placement is one of the most difficult, complex, and time-consuming problems in mapmaking and GIS (Geographic Information System). Other kinds of computer-generated graphics – like charts, graphs etc. – require good placement of labels as well, not to mention engineering drawings, and professional programs which produce these drawings and charts, like spreadsheets (e.g. Microsoft Excel) or computational software programs (e.g. Mathematica). Naively placed labels overlap excessively, resulting in a map that is difficult or even impossible to read. Therefore, a GIS must allow a few possible placements of each label, and often also an option of resizing, rotating, or even removing (suppressing) the label. Then, it selects a set of placements that results in the least overlap, and has other desirable properties. For all but the most trivial setups, the problem is NP-hard.
https://en.wikipedia.org/wiki/Automatic_label_placement
passage: The $$ n $$ -dimensional system of first-order coupled differential equations is then $$ \begin{array}{rcl} y_1'&=&y_2\\ y_2'&=&y_3\\ &\vdots&\\ y_{n-1}'&=&y_n\\ y_n'&=&F(x,y_1,\ldots,y_n). \end{array} $$ more compactly in vector notation: $$ \mathbf{y}'=\mathbf{F}(x,\mathbf{y}) $$ where $$ \mathbf{y}=(y_1,\ldots,y_n),\quad \mathbf{F}(x,y_1,\ldots,y_n)=(y_2,\ldots,y_n,F(x,y_1,\ldots,y_n)). $$ ## Summary of exact solutions Some differential equations have solutions that can be written in an exact and closed form. Several important classes are given here. In the table below, $$ P(x) $$ , $$ Q(x) $$ , $$ P(y) $$ , $$ Q(y) $$ , and $$ M(x,y) $$ , $$ N(x,y) $$ are any integrable functions of $$ x $$ , $$ y $$ ; $$ b $$ and $$ c $$ are real given constants; $$ C_1,C_2,\ldots $$ are arbitrary constants (complex in general).
https://en.wikipedia.org/wiki/Ordinary_differential_equation
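A small sketch of this reduction for a second-order equation, integrated with SciPy's solve_ivp; the harmonic oscillator y'' = -y is an arbitrary example whose exact solution is cos(x).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reduce y'' = F(x, y, y') to the first-order system y1' = y2, y2' = F(x, y1, y2).
def rhs(x, y):
    y1, y2 = y
    return [y2, -y1]                  # last component is F(x, y1, y2) = -y1

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], dense_output=True)
xs = np.linspace(0, 10, 50)
print(np.max(np.abs(sol.sol(xs)[0] - np.cos(xs))))   # error vs. the exact solution cos(x)
```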
passage: Dimensional analysis can be used as a tool to construct equations that relate non-associated physico-chemical properties. The equations may reveal undiscovered or overlooked properties of matter, in the form of left-over dimensions – dimensional adjusters – that can then be assigned physical significance. It is important to point out that such 'mathematical manipulation' is neither without prior precedent, nor without considerable scientific significance. Indeed, the Planck constant, a fundamental physical constant, was 'discovered' as a purely mathematical abstraction or representation that built on the Rayleigh–Jeans law for preventing the ultraviolet catastrophe. It was assigned and ascended to its quantum physical significance either in tandem or post mathematical dimensional adjustment – not earlier. ### Limitations The factor–label method can convert only unit quantities for which the units are in a linear relationship intersecting at 0 (ratio scale in Stevens's typology). Most conversions fit this paradigm. An example for which it cannot be used is the conversion between the Celsius scale and the Kelvin scale (or the Fahrenheit scale). Between degrees Celsius and kelvins, there is a constant difference rather than a constant ratio, while between degrees Celsius and degrees Fahrenheit there is neither a constant difference nor a constant ratio. There is, however, an affine transform ( $$ x \mapsto ax + b $$ , rather than a linear transform $$ x \mapsto ax $$ ) between them. For example, the freezing point of water is 0 °C and 32 °F, and a 5 °C change is the same as a 9 °F change.
https://en.wikipedia.org/wiki/Conversion_of_units
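The affine (rather than ratio-scale) character of the Celsius-Fahrenheit conversion is easy to see in code:

```python
# Temperature levels convert by an affine map x -> a*x + b, not by a pure ratio;
# temperature *differences*, however, do convert by the ratio 9/5.
def celsius_to_fahrenheit(c: float) -> float:
    return 9.0 / 5.0 * c + 32.0

print(celsius_to_fahrenheit(0.0))                                   # 32.0: freezing point of water
print(celsius_to_fahrenheit(5.0) - celsius_to_fahrenheit(0.0))      # 9.0: a 5 C change is a 9 F change
```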
passage: ## Non-compact Riemann surfaces are Stein manifolds Let X be a connected, non-compact Riemann surface. A deep theorem of Heinrich Behnke and Stein (1948) asserts that X is a Stein manifold. Another result, attributed to Hans Grauert and Helmut Röhrl (1956), states moreover that every holomorphic vector bundle on X is trivial. In particular, every line bundle is trivial, so $$ H^1(X, \mathcal O_X^*) =0 $$ . The exponential sheaf sequence leads to the following exact sequence: $$ H^1(X, \mathcal O_X) \longrightarrow H^1(X, \mathcal O_X^*) \longrightarrow H^2(X, \Z) \longrightarrow H^2(X, \mathcal O_X) $$ Now Cartan's theorem B shows that $$ H^1(X,\mathcal{O}_X)= H^2(X,\mathcal{O}_X)=0 $$ , therefore $$ H^2(X,\Z) =0 $$ . This is related to the solution of the second Cousin problem. ## Properties and examples of Stein manifolds - The standard complex space $$ \Complex^n $$ is a Stein manifold. - Every domain of holomorphy in $$ \Complex^n $$ is a Stein manifold. - Every closed complex submanifold of a Stein manifold is a Stein manifold, too. -
https://en.wikipedia.org/wiki/Stein_manifold
passage: For any value $$ n<\mu(x) $$ , the infinite set of all rationals $$ p/q $$ satisfying the above inequality yields good approximations of $$ x $$ . Conversely, if $$ n>\mu(x) $$ , then there are at most finitely many coprime $$ (p,q) $$ with $$ q>0 $$ that satisfy the inequality. For example, whenever a rational approximation $$ \frac pq \approx x $$ with $$ p,q\in\N $$ yields $$ n+1 $$ exact decimal digits, then $$ \frac{1}{10^n} \ge \left| x- \frac{p}{q} \right| \ge \frac{1}{q^{\mu(x)+\varepsilon}} $$ for any $$ \varepsilon >0 $$ , except for at most a finite number of "lucky" pairs $$ (p,q) $$ . A number $$ x\in\mathbb R $$ with irrationality exponent $$ \mu(x)\le 2 $$ is called a diophantine number, while numbers with $$ \mu(x)=\infty $$ are called Liouville numbers. ### Corollaries Rational numbers have irrationality exponent 1, while (as a consequence of Dirichlet's approximation theorem) every irrational number has irrationality exponent at least 2.
https://en.wikipedia.org/wiki/Irrationality_measure
passage: $$ where the differential operator Q(x, ∂) is given by the formula $$ Q(x, \partial)\varphi (x) = \sum (-1)^{| \alpha |} \partial^{\alpha_1} \partial^{\alpha_2} \cdots \partial^{\alpha_n} \left[a_{\alpha_1, \alpha_2, \dots, \alpha_n}(x) \varphi(x) \right]. $$ The number $$ (-1)^{| \alpha |} = (-1)^{\alpha_1+\alpha_2+\cdots+\alpha_n} $$ shows up because one needs α1 + α2 + ⋯ + αn integrations by parts to transfer all the partial derivatives from u to $$ \varphi $$ in each term of the differential equation, and each integration by parts entails a multiplication by −1.
https://en.wikipedia.org/wiki/Weak_solution
passage: Most operations of linear algebra preserve coherent sheaves. In particular, for coherent sheaves $$ \mathcal F $$ and $$ \mathcal G $$ on a ringed space $$ X $$ , the tensor product sheaf $$ \mathcal F \otimes_{\mathcal O_X}\mathcal G $$ and the sheaf of homomorphisms $$ \mathcal Hom_{\mathcal O_X}(\mathcal F, \mathcal G) $$ are coherent. - A simple non-example of a quasi-coherent sheaf is given by the extension by zero functor. For example, consider $$ i_!\mathcal{O}_X $$ for $$ X = \operatorname{Spec}(\Complex[x,x^{-1}]) \xrightarrow{i} \operatorname{Spec}(\Complex[x])=Y $$ Since this sheaf has non-trivial stalks, but zero global sections, this cannot be a quasi-coherent sheaf. This is because quasi-coherent sheaves on an affine scheme are equivalent to the category of modules over the underlying ring, and the adjunction comes from taking global sections. ## Functoriality Let $$ f: X\to Y $$ be a morphism of ringed spaces (for example, a morphism of schemes).
https://en.wikipedia.org/wiki/Coherent_sheaf
passage: \delta_{\lambda\mu} \delta_{nn'}\delta_{mm'} \frac{|G|}{l_\lambda}. $$ Here $$ \Gamma^{(\lambda)} (R)_{nm}^* $$ is the complex conjugate of $$ \Gamma^{(\lambda)} (R)_{nm}\, $$ and the sum is over all elements of G. The Kronecker delta $$ \delta_{\lambda\mu} $$ is 1 if the matrices are in the same irreducible representation $$ \Gamma^{(\lambda)} = \Gamma^{(\mu)} $$ .
https://en.wikipedia.org/wiki/Schur_orthogonality_relations
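A numerical verification of this relation for the two-dimensional irreducible representation of the symmetry group of an equilateral triangle (isomorphic to S3, so |G| = 6 and l = 2). The matrices below are the standard geometric realization by rotations and reflections, an assumption not spelled out in the passage itself.

```python
import numpy as np

def rotation(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def reflection(phi):                    # reflection across the line at angle phi
    return np.array([[np.cos(2 * phi),  np.sin(2 * phi)],
                     [np.sin(2 * phi), -np.cos(2 * phi)]])

group = [rotation(k * 2 * np.pi / 3) for k in range(3)] + \
        [reflection(k * np.pi / 3) for k in range(3)]

# Sum over group elements of Gamma(R)_{nm} Gamma(R)_{n'm'} (matrices are real,
# so conjugation is trivial); it should equal delta_{nn'} delta_{mm'} |G|/l = 3.
for n in range(2):
    for m in range(2):
        for n2 in range(2):
            for m2 in range(2):
                s = sum(R[n, m] * R[n2, m2] for R in group)
                expected = 3.0 if (n == n2 and m == m2) else 0.0
                assert abs(s - expected) < 1e-12
print("orthogonality verified: |G| / l = 6 / 2 = 3")
```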
passage: Plaintext (original): `brownfox` Plaintext (shifted): `nfox____` Result (difference): `omaz` This result `omaz` corresponds with the 9th through 12th letters in the result of the larger examples above. The known section and its location is verified. Subtract `brow` from that range of the ciphertext. Ciphertext: `EPSDFQQXMZCJYNCKUCACDWJRCBVRWINLOWU` Plaintext: `________brow_______________________ ` Key: `LION` This produces the final result, the reveal of the key `LION`. ## Variants ### Running key The running key variant of the Vigenère cipher was also considered unbreakable at one time. For the key, this version uses a block of text as long as the plaintext. Since the key is as long as the message, the Friedman and Kasiski tests no longer work, as the key is not repeated. If multiple keys are used, the effective key length is the least common multiple of the lengths of the individual keys. For example, using the two keys `GO` and `CAT`, whose lengths are 2 and 3, one obtains an effective key length of 6 (the least common multiple of 2 and 3). This can be understood as the point where both keys line up.
https://en.wikipedia.org/wiki/Vigen%C3%A8re_cipher
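A minimal Vigenère encrypt/decrypt (A-Z only, no error handling) following the convention used above, ciphertext letter = plaintext letter + key letter (mod 26); decrypting the ciphertext with the recovered key LION yields the plaintext, with "BROW" at the 9th through 12th letters.

```python
def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    sign = -1 if decrypt else 1
    out = []
    for i, ch in enumerate(text.upper()):
        shift = ord(key[i % len(key)].upper()) - ord('A')
        out.append(chr((ord(ch) - ord('A') + sign * shift) % 26 + ord('A')))
    return ''.join(out)

ciphertext = "EPSDFQQXMZCJYNCKUCACDWJRCBVRWINLOWU"
print(vigenere(ciphertext, "LION", decrypt=True))
# THEQUICKBROWNFOXJUMPSOVERTHELAZYDOG
```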
passage: which is known to include the optimal solution. - The function $$ M(x) $$ satisfies the regularity conditions as follows: - There exists $$ \beta>0 $$ and $$ B>0 $$ such that $$ |x'-\theta|+|x-\theta|<\beta \quad \Longrightarrow \quad |M(x')-M(x)|<B|x'-x| $$ - There exists $$ \rho>0 $$ and $$ R>0 $$ such that $$ |x'-x|<\rho \quad \Longrightarrow \quad |M(x')-M(x)|<R $$ - For every $$ \delta>0 $$ , there exists some $$ \pi(\delta)>0 $$ such that $$ |z-\theta|>\delta \quad \Longrightarrow \quad \inf_{\delta/2>\varepsilon>0}\frac{|M(z+\varepsilon)-M(z-\varepsilon)|}{\varepsilon}>\pi(\delta) $$ - The selected sequences $$ \{a_n\} $$
https://en.wikipedia.org/wiki/Stochastic_approximation
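The conditions above are of the kind used for Robbins-Monro-style schemes; a toy sketch of such an iteration, with a made-up regression function M and additive noise, looks like this:

```python
import numpy as np

# Robbins-Monro: find the root theta of M(x) = 0 from noisy measurements
# N(x) = M(x) + noise, using step sizes a_n = 1/n (which satisfy the usual
# summability conditions).
rng = np.random.default_rng(1)

def noisy_measurement(x):
    return 2.0 * (x - 3.0) + rng.normal(scale=0.5)   # M(x) = 2(x - 3), root theta = 3

x = 0.0
for n in range(1, 10001):
    a_n = 1.0 / n
    x = x - a_n * noisy_measurement(x)               # step against the measured value
print(x)                                             # close to 3
```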
passage: Some reactions produce heat and are called exothermic reactions, while others may require heat to enable the reaction to occur, which are called endothermic reactions. Typically, reaction rates increase with increasing temperature because there is more thermal energy available to reach the activation energy necessary for breaking bonds between atoms. A reaction may be classified as redox in which oxidation and reduction occur or non-redox in which there is no oxidation and reduction occurring. Most simple redox reactions may be classified as a combination, decomposition, or single displacement reaction. Different chemical reactions are used during chemical synthesis in order to obtain the desired product. In biochemistry, a consecutive series of chemical reactions (where the product of one reaction is the reactant of the next reaction) form metabolic pathways. These reactions are often catalyzed by protein enzymes. Enzymes increase the rates of biochemical reactions, so that metabolic syntheses and decompositions impossible under ordinary conditions can occur at the temperature and concentrations present within a cell. The general concept of a chemical reaction has been extended to reactions between entities smaller than atoms, including nuclear reactions, radioactive decays and reactions between elementary particles, as described by quantum field theory. ## History Chemical reactions such as combustion in fire, fermentation and the reduction of ores to metals were known since antiquity. Initial theories of transformation of materials were developed by Greek philosophers, such as the Four-Element Theory of Empedocles stating that any substance is composed of the four basic elements – fire, water, air and earth.
https://en.wikipedia.org/wiki/Chemical_reaction
passage: And if we map each element s of G to the indicator function of {s}, which is the vector f defined by $$ f(g)= 1\cdot s + \sum_{g\not= s}0 \cdot g= \mathbf{1}_{\{s\}}(g)=\begin{cases} 1 & g = s \\ 0 & g \ne s \end{cases} $$ the resulting mapping is an injective group homomorphism (with respect to multiplication, not addition, in R[G]). If R and G are both commutative (i.e., R is commutative and G is an abelian group), R[G] is commutative. If H is a subgroup of G, then R[H] is a subring of R[G]. Similarly, if S is a subring of R, S[G] is a subring of R[G]. If G is a finite group of order greater than 1, then R[G] always has zero divisors. For example, consider an element g of G of order $$ m > 1 $$ . Then 1 − g is a zero divisor: $$ (1 - g)(1 + g+\cdots+g^{m-1}) = 1 - g^m = 1 - 1 =0. $$ For example, consider the group ring Z[S3] and the element of order 3 g = (123).
https://en.wikipedia.org/wiki/Group_ring
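The zero-divisor identity can be checked mechanically by modelling Z[C3], the group ring of the cyclic group of order 3, as length-3 coefficient vectors (an illustrative encoding, not a general group-ring library):

```python
# Elements of Z[C3] are written c0*1 + c1*g + c2*g^2; multiplication is induced
# by g^3 = 1, i.e. polynomial multiplication modulo x^3 - 1.
def multiply(a, b):
    result = [0, 0, 0]
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            result[(i + j) % 3] += ai * bj
    return result

one_minus_g = [1, -1, 0]                         # 1 - g
sum_of_powers = [1, 1, 1]                        # 1 + g + g^2
print(multiply(one_minus_g, sum_of_powers))      # [0, 0, 0]: a product of nonzero elements is zero
```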
passage: In order to select interesting rules from the set of all possible rules, constraints on various measures of significance and interest are used. The best-known constraints are minimum thresholds on support and confidence. Let $$ X, Y $$ be itemsets, $$ X \Rightarrow Y $$ an association rule and a set of transactions of a given database. Note: this example is extremely small. In practical applications, a rule needs a support of several hundred transactions before it can be considered statistically significant, and datasets often contain thousands or millions of transactions. Support Support is an indication of how frequently the itemset appears in the dataset. In our example, it can be easier to explain support by writing $$ \text{support} = P(A\cap B)= \frac{\text{number of transactions containing }A\text{ and }B}{\text{total number of transactions}} $$ where A and B are separate item sets that occur at the same time in a transaction. Using Table 2 as an example, the itemset $$ X=\{\mathrm{beer, diapers}\} $$ has a support of $$ 1/5 = 0.2 $$ since it occurs in 20% of all transactions (1 out of 5 transactions). The argument of support of X is a set of preconditions, and thus becomes more restrictive as it grows (instead of more inclusive). Furthermore, the itemset $$ Y=\{\mathrm{milk, bread, butter}\} $$ has a support of $$ 1/5 = 0.2 $$ as it appears in 20% of all transactions as well.
https://en.wikipedia.org/wiki/Association_rule_learning
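A direct computation of support on a small invented transaction table mirroring the 1-out-of-5 example (the five transactions below are made up, not the article's Table 2):

```python
transactions = [
    {"beer", "diapers"},
    {"milk", "bread", "butter"},
    {"bread", "butter"},
    {"beer", "bread"},
    {"milk", "diapers"},
]

def support(itemset):
    # fraction of transactions that contain every item of the itemset
    return sum(itemset <= t for t in transactions) / len(transactions)

X = {"beer", "diapers"}
Y = {"milk", "bread", "butter"}
print(support(X), support(Y))              # 0.2 and 0.2
# Confidence of the rule X => Y is support(X union Y) / support(X).
print(support(X | Y) / support(X))         # 0.0 here: no transaction contains both itemsets
```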
passage: ## Factorization of decimal repunits (Prime factors colored means "new factors", i. e. the prime factor divides Rn but does not divide Rk for all k < n) R1 =1R2 =R3 = · R4 =11 · R5 = · R6 =3 · · 11 · · 37R7 = · R8 =11 · · 101 · R9 =32 · 37 · R10 =11 · 41 · 271 · R11 = · R12 =3 · 7 · 11 · 13 · 37 · 101 · R13 = · · R14 =11 · 239 · 4649 · R15 =3 · · 37 · 41 · 271 · R16 =11 · · 73 · 101 · 137 · R17 = · R18 =32 · 7 · 11 · 13 · · 37 · · 333667R19 =R20 =11 · 41 · 101 · 271 · · 9091 · R21 =3 · 37 · · 239 · · 4649 · R22 =112 · · · · 21649 · 513239R23 =R24 =3 · 7 · 11 · 13 · 37 · 73 · 101 · 137 · 9901 · R25 =41 · 271 · · · R26 =11 · 53 · 79 · · 265371653 · R27 =33 · 37 · · 333667 · R28 =11 · · 101 · 239 · · 4649 · 909091 · R29 = · · · · R30 =3 · 7 · 11 · 13 · 31 · 37 · 41 · · · 271 · · 9091 · 2906161 Smallest prime factor of Rn for n > 1 are 11, 3, 11, 41, 3, 239, 11, 3, 11, 21649, 3, 53, 11, 3, 11, 2071723, 3, 1111111111111111111, 11, 3, 11, 11111111111111111111111, 3, 41, 11, 3, 11, 3191, 3, 2791, 11, 3, 11, 41, 3, 2028119, 11, 3, 11, 83, 3, 173, 11, 3, 11, 35121409, 3, 239, 11, ...
https://en.wikipedia.org/wiki/Repunit
passage: To summarize, observe the points below, we will define the number D as the depth of the tree. Possible advantages of increasing the number D: - Accuracy of the decision-tree classification model increases. Possible disadvantages of increasing D -  Runtime issues - Decrease in accuracy in general - Pure node splits while going deeper can cause issues. The ability to test the differences in classification results when changing D is imperative. We must be able to easily change and test the variables that could affect the accuracy and reliability of the decision tree-model. ### The choice of node-splitting functions The node splitting function used can have an impact on improving the accuracy of the decision tree. For example, using the information-gain function may yield better results than using the phi function. The phi function is known as a measure of “goodness” of a candidate split at a node in the decision tree. The information gain function is known as a measure of the “reduction in entropy”. In the following, we will build two decision trees. One decision tree will be built using the phi function to split the nodes and one decision tree will be built using the information gain function to split the nodes. The main advantages and disadvantages of information gain and phi function - One major drawback of information gain is that the feature that is chosen as the next node in the tree tends to have more unique values. - An advantage of information gain is that it tends to choose the most impactful features that are close to the root of the tree.
https://en.wikipedia.org/wiki/Decision_tree
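A sketch of the information-gain measure mentioned here, the "reduction in entropy" produced by a candidate split; the labels and the split are invented for illustration.

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(parent, left, right):
    n = len(parent)
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(parent) - weighted          # reduction in entropy from the split

parent = np.array([1, 1, 1, 0, 0, 0, 1, 0])
left   = np.array([1, 1, 1, 1])                # samples routed left by the candidate split
right  = np.array([0, 0, 0, 0])                # samples routed right
print(information_gain(parent, left, right))   # 1.0 bit: a perfectly pure split
```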
passage: The number of leap seconds is changed so that mean solar noon at the prime meridian (Greenwich) does not deviate from UTC noon by more than 0.9 seconds. National metrology institutions maintain an approximation of UTC referred to as UTC(k) for laboratory k. UTC(k) is distributed by the BIPM's Consultative Committee for Time and Frequency. The offset UTC-UTC(k) is calculated every 5 days, the results are published monthly. Atomic clocks record UTC(k) to no more than 100 nanoseconds. In some countries, UTC(k) is the legal time that is distributed by radio, television, telephone, Internet, fiber-optic cables, time signal transmitters, and speaking clocks. In addition, GNSS provides time information accurate to a few tens of nanoseconds or better. ### Fiber Optics In a next phase, these labs strive to transmit comparison signals in the visible spectrum through fibre-optic cables. This will allow their experimental optical clocks to be compared with an accuracy similar to the expected accuracies of the optical clocks themselves. Some of these labs have already established fibre-optic links, and tests have begun on sections between Paris and Teddington, and Paris and Braunschweig. Fibre-optic links between experimental optical clocks also exist between the American NIST lab and its partner lab JILA, both in Boulder, Colorado but these span much shorter distances than the European network and are between just two labs.
https://en.wikipedia.org/wiki/Atomic_clock
passage: This may reflect either fixation of a balanced Robertsonian translocation in domestic horses or, conversely, fixation of the fission of one metacentric chromosome into two acrocentric chromosomes in Przewalski's horses. A similar situation exists between the human and great ape genomes, with a reduction of two acrocentric chromosomes in the great apes to one metacentric chromosome in humans (see aneuploidy and the human chromosome 2). Many diseases that result from unbalanced translocations more frequently involve acrocentric chromosomes than non-acrocentric chromosomes. Acrocentric chromosomes are usually located in and around the nucleolus. As a result, these chromosomes tend to be less densely packed than chromosomes in the nuclear periphery. Consistently, chromosomal regions that are less densely packed are also more prone to chromosomal translocations in cancers. Telocentric Telocentric chromosomes have a centromere at one end of the chromosome and therefore exhibit only one arm at the cytological (microscopic) level. They are not present in humans but can form through cellular chromosomal errors. Telocentric chromosomes occur naturally in many species, such as the house mouse, in which all chromosomes except the Y are telocentric.
https://en.wikipedia.org/wiki/Centromere
passage: On the one hand, by double integration in the Cartesian coordinate system, its integral is a square: $$ \left(\int_{-\infty}^{\infty} e^{-x^2}\,dx\right)^2 = \iint_{\R^2} e^{-\left(x^2 + y^2\right)}dx\,dy. $$ On the other hand, by shell integration (a case of double integration in polar coordinates), its integral is computed to be $$ \pi $$ . Comparing these two computations yields the integral, though one should take care about the improper integrals involved. $$ \begin{align} \iint_{\R^2} e^{-\left(x^2 + y^2\right)}dx\,dy &= \int_0^{2\pi} \int_0^{\infty} e^{-r^2}r\,dr\,d\theta\\[6pt] &= 2\pi \int_0^\infty re^{-r^2}\,dr\\[6pt] &= 2\pi \int_{-\infty}^0 \tfrac{1}{2} e^s\,ds && s = -r^2\\[6pt] &= \pi \int_{-\infty}^0 e^s\,ds \\[6pt] &= \pi \, \left[ e^s\right]_{-\infty}^{0} \\[6pt] &= \pi \,\left(e^0 - e^{-\infty}\right) \\[6pt] &= \pi \,\left(1 - 0\right) \\[6pt] &=\pi, \end{align} $$ where the factor of $$ r $$ is the Jacobian determinant which appears because of the transform to polar coordinates ( $$ r\,dr\,d\theta $$ is the standard measure on the plane, expressed in polar coordinates
https://en.wikipedia.org/wiki/Gaussian_integral
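Both computations can be sanity-checked numerically with SciPy:

```python
import numpy as np
from scipy import integrate

# The square of the one-dimensional integral of exp(-x^2) should equal pi.
one_dim, _ = integrate.quad(lambda x: np.exp(-x**2), -np.inf, np.inf)
print(one_dim**2, np.pi)      # both ~3.14159
```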
passage: Stronger than the determinant restriction is the fact that an orthogonal matrix can always be diagonalized over the complex numbers to exhibit a full set of eigenvalues, all of which must have (complex) modulus 1. ### Group properties The inverse of every orthogonal matrix is again orthogonal, as is the matrix product of two orthogonal matrices. In fact, the set of all orthogonal matrices satisfies all the axioms of a group. It is a compact Lie group of dimension $$ n(n-1)/2 $$ , called the orthogonal group and denoted by $$ \mathrm{O}(n) $$ . The orthogonal matrices whose determinant is +1 form a path-connected normal subgroup of $$ \mathrm{O}(n) $$ of index 2, the special orthogonal group $$ \mathrm{SO}(n) $$ of rotations. The quotient group $$ \mathrm{O}(n)/\mathrm{SO}(n) $$ is isomorphic to $$ \mathrm{O}(1) $$ , with the projection map choosing [+1] or [−1] according to the determinant. Orthogonal matrices with determinant −1 do not include the identity, and so do not form a subgroup but only a coset; it is also (separately) connected. Thus each orthogonal group falls into two pieces; and because the projection map splits, $$ \mathrm{O}(n) $$ is a semidirect product of $$ \mathrm{SO}(n) $$ by $$ \mathrm{O}(1) $$ . In practical terms, a comparable statement is that any orthogonal matrix can be produced by taking a rotation matrix and possibly negating one of its columns, as we saw with matrices. If $$ n $$ is odd, then the semidirect product is in fact a direct product, and any orthogonal matrix can be produced by taking a rotation matrix and possibly negating all of its columns.
https://en.wikipedia.org/wiki/Orthogonal_matrix
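A quick numerical illustration of these group properties, using random orthogonal matrices obtained from a QR factorization:

```python
import numpy as np

rng = np.random.default_rng(0)
Q1, _ = np.linalg.qr(rng.normal(size=(4, 4)))    # random orthogonal matrices
Q2, _ = np.linalg.qr(rng.normal(size=(4, 4)))

P = Q1 @ Q2
print(np.allclose(P.T @ P, np.eye(4)))           # the product is again orthogonal
print(np.allclose(np.linalg.inv(Q1), Q1.T))      # the inverse is the transpose
print(round(np.linalg.det(P)))                   # determinant is +1 or -1
```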
passage: Given the set of strings $$ S = \{S_1, \ldots, S_K\} $$ , where $$ |S_i|=n_i $$ and $$ \sum n_i = N $$ . Find for each $$ 2 \leq k \leq K $$ , a longest string which occurs as substring of at least $$ k $$ strings. ## Algorithms One can find the lengths and starting positions of the longest common substrings of $$ S $$ and $$ T $$ in $$ \Theta(n+m) $$ time with the help of a generalized suffix tree. A faster algorithm can be achieved in the word RAM model of computation if the size $$ \sigma $$ of the input alphabet is in $$ 2^{o \left( \sqrt{\log(n+m)} \right) } $$ . In particular, this algorithm runs in $$ O\left( (n+m) \log \sigma/\sqrt{\log (n+m)} \right) $$ time using $$ O\left((n+m)\log\sigma/\log (n+m) \right) $$ space. Solving the problem by dynamic programming costs $$ \Theta(nm) $$ .
https://en.wikipedia.org/wiki/Longest_common_substring
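A standard dynamic-programming sketch of the Θ(nm) approach (not the faster suffix-tree or word-RAM algorithms mentioned above):

```python
# dp[i][j] = length of the longest common suffix of S[:i] and T[:j].
def longest_common_substring(S: str, T: str) -> str:
    n, m = len(S), len(T)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    best_len, best_end = 0, 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if S[i - 1] == T[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
                if dp[i][j] > best_len:
                    best_len, best_end = dp[i][j], i
    return S[best_end - best_len:best_end]

print(longest_common_substring("ABABC", "BABCA"))   # "BABC"
```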
passage: $$ The metric is $$ g_{jk}(\theta) = \frac{\partial^2 A(\theta)}{\partial \theta_j \,\partial \theta_k} - \frac{\partial^2 \eta(\theta)}{\partial \theta_j \,\partial \theta_k} \cdot \mathrm{E}[T(x)] $$ The metric has a particularly simple form if we are using the natural parameters.
https://en.wikipedia.org/wiki/Fisher_information_metric
passage: Infinite sequences of symbols may be considered as well (see Omega language). It is often necessary for practical purposes to restrict the symbols in an alphabet so that they are unambiguous when interpreted. For instance, if the two-member alphabet is {00,0}, a string written on paper as "000" is ambiguous because it is unclear if it is a sequence of three "0" symbols, a "00" followed by a "0", or a "0" followed by a "00". ## Notation By definition, the alphabet of a formal language $$ L $$ over $$ \Sigma $$ is the set $$ \Sigma $$ , which can be any non-empty set of symbols from which every string in $$ L $$ is built. For example, the set $$ \Sigma = \{\_,\mathrm{a}, \dots, \mathrm{z}, \mathrm{A}, \dots, \mathrm{Z}, 0, \mathrm{1}, \dots, \mathrm{9}\} $$ can be the alphabet of the formal language $$ L $$ that means "all variable identifiers in C programming language". Notice that it is not required to use every symbol in the alphabet of $$ L $$ for its strings.
https://en.wikipedia.org/wiki/Alphabet_%28formal_languages%29
passage: The Bayesian Decision theory is about designing a classifier that minimizes total expected risk; in particular, when the costs (the loss function) associated with different decisions are equal, the classifier minimizes the error over the whole distribution. Thus, the Bayes Decision Rule is stated as "decide $$ \;w_1\; $$ if $$ ~\operatorname{\mathbb P}(w_1|x) \; > \; \operatorname{\mathbb P}(w_2|x)~;~ $$ otherwise decide $$ \;w_2\; $$ " where $$ \;w_1\,, w_2\; $$ are predictions of different classes. From a perspective of minimizing error, it can also be stated as $$ w = \underset{ w }{\operatorname{arg\;min}} \; \int_{-\infty}^\infty \operatorname{\mathbb P}(\text{ error}\mid x)\operatorname{\mathbb P}(x)\,\operatorname{d}x~ $$ where $$ \operatorname{\mathbb P}(\text{ error}\mid x) = \operatorname{\mathbb P}(w_1\mid x)~ $$ if we decide $$ \;w_2\; $$ and $$ \;\operatorname{\mathbb P}(\text{ error}\mid x) = \operatorname{\mathbb P}(w_2\mid x)\; $$ if we decide $$ \;w_1\;. $$ By applying Bayes' theorem $$
https://en.wikipedia.org/wiki/Maximum_likelihood_estimation
passage: A similar abuse of language refers to the modulus as a norm. A split-complex number is invertible if and only if its modulus is nonzero; thus numbers of the form $$ x \pm jx $$ have no inverse. The multiplicative inverse of an invertible element is given by $$ z^{-1} = \frac{z^*}{ {\lVert z \rVert}^2} ~. $$ Split-complex numbers which are not invertible are called null vectors. These are all of the form $$ a(1 \pm j) $$ for some real number $$ a $$ . ### The diagonal basis There are two nontrivial idempotent elements given by $$ e=\tfrac{1}{2}(1-j) $$ and $$ e^* = \tfrac{1}{2}(1+j). $$ Idempotency means that $$ ee=e $$ and $$ e^*e^*=e^*. $$ Both of these elements are null: $$ \lVert e \rVert = \lVert e^* \rVert = e^* e = 0 ~. $$ It is often convenient to use $$ e $$ and $$ e^* $$ as an alternate basis for the split-complex plane. This basis is called the diagonal basis or null basis.
https://en.wikipedia.org/wiki/Split-complex_number
passage: The following limits illustrate that the expression $$ 0^0 $$ is an indeterminate form: $$ \begin{align} \lim_{x \to 0^+} x^0 &= 1, \\ \lim_{x \to 0^+} 0^x &= 0. \end{align} $$ Thus, in general, knowing that $$ \textstyle\lim_{x \to c} f(x) \;=\; 0 $$ and $$ \textstyle\lim_{x \to c} g(x) \;=\; 0 $$ is not sufficient to evaluate the limit $$ \lim_{x \to c} f(x)^{g(x)}. $$ If the functions $$ f $$ and $$ g $$ are analytic at $$ c $$ , and $$ f $$ is positive for $$ x $$ sufficiently close (but not equal) to $$ c $$ , then the limit of $$ f(x)^{g(x)} $$ will be $$ 1 $$ . Otherwise, use the transformation in the table below to evaluate the limit. ### Expressions that are not indeterminate forms The expression $$ 1/0 $$ is not commonly regarded as an indeterminate form, because if the limit of $$ f/g $$ exists then there is no ambiguity as to its value, as it always diverges.
https://en.wikipedia.org/wiki/Indeterminate_form
passage: The ellipse is also the simplest Lissajous figure formed when the horizontal and vertical motions are sinusoids with the same frequency: a similar effect leads to elliptical polarization of light in optics. The name, (ἔλλειψις, "omission"), was given by Apollonius of Perga in his Conics. ## Definition as locus of points An ellipse can be defined geometrically as a set or locus of points in the Euclidean plane: The midpoint $$ C $$ of the line segment joining the foci is called the center of the ellipse. The line through the foci is called the major axis, and the line perpendicular to it through the center is the minor axis. The major axis intersects the ellipse at two vertices $$ V_1,V_2 $$ , which have distance $$ a $$ to the center. The distance $$ c $$ of the foci to the center is called the focal distance or linear eccentricity. The quotient $$ e = \tfrac{c}{a} $$ is defined as the eccentricity. The case $$ F_1 = F_2 $$ yields a circle and is included as a special type of ellipse. The equation $$ \left|PF_2\right| + \left|PF_1\right| = 2a $$ can be viewed in a different way (see figure): $$ c_2 $$ is called the circular directrix (related to focus $$ F_2 $$ ) of the ellipse.
https://en.wikipedia.org/wiki/Ellipse
passage: The optimal algorithm is by Andris Ambainis. Yaoyun Shi first proved a tight lower bound when the size of the range is sufficiently large. Ambainis and Kutin independently (and via different proofs) extended his work to obtain the lower bound for all functions. ## Generalization: Finding repeated elements Elements that occur more than $$ n/k $$ times in a multiset of size $$ n $$ may be found by a comparison-based algorithm, the Misra–Gries heavy hitters algorithm, in time $$ O(n\log k) $$ . The element distinctness problem is a special case of this problem where $$ k=n $$ . This time is optimal under the decision tree model of computation.
https://en.wikipedia.org/wiki/Element_distinctness_problem
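A sketch of the Misra-Gries heavy hitters algorithm referred to above, with a second pass that confirms which candidates truly occur more than n/k times; the stream is invented.

```python
from collections import Counter

# With k-1 counters, Misra-Gries returns a superset of all elements occurring
# more than n/k times in a stream of length n.
def misra_gries(stream, k):
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:   # decrement every counter, dropping those that reach zero
            counters = {key: c - 1 for key, c in counters.items() if c > 1}
    return set(counters)

stream = list("aababcabcdaaab")
k = 3
candidates = misra_gries(stream, k)
counts = Counter(stream)                                 # second pass over the data
print({x for x in candidates if counts[x] > len(stream) / k})   # {'a'}
```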
passage: The historian Jean-Claude Martzloff theorized that the importance of duality in Chinese natural philosophy made it easier for the Chinese to accept the idea of negative numbers. The Chinese were able to solve simultaneous equations involving negative numbers. The Nine Chapters used red counting rods to denote positive coefficients and black rods for negative. This system is the exact opposite of contemporary printing of positive and negative numbers in the fields of banking, accounting, and commerce, wherein red numbers denote negative values and black numbers signify positive values. Liu Hui writes: The ancient Indian Bakhshali Manuscript carried out calculations with negative numbers, using "+" as a negative sign. Teresi, Dick. (2002). Lost Discoveries: The Ancient Roots of Modern Science–from the Babylonians to the Mayas. New York: Simon & Schuster. . Page 65. The date of the manuscript is uncertain. LV Gurjar dates it no later than the 4th century, Hoernle dates it between the third and fourth centuries, Ayyangar and Pingree dates it to the 8th or 9th centuries, and George Gheverghese Joseph dates it to about AD 400 and no later than the early 7th century. Teresi, Dick. (2002). Lost Discoveries: The Ancient Roots of Modern Science–from the Babylonians to the Mayas. New York: Simon & Schuster. . Page 65–66. During the 7th century AD, negative numbers were used in India to represent debts.
https://en.wikipedia.org/wiki/Negative_number
passage: Urban areas in natural desert settings often bring in water from far areas to maintain the human population and will likely have effects on the local desert climate. Modification of aquatic systems in urban areas also results in decreased stream diversity and increased pollution. ### Trade, shipping, and spread of invasive species Both local shipping and long-distance trade are required to meet the resource demands important in maintaining urban areas. Carbon dioxide emissions from the transport of goods also contribute to accumulating greenhouse gasses and nutrient deposits in the soil and air of urban environments. In addition, shipping facilitates the unintentional spread of living organisms, and introduces them to environments that they would not naturally inhabit. Introduced or alien species are populations of organisms living in a range in which they did not naturally evolve due to intentional or inadvertent human activity. Increased transportation between urban centers furthers the incidental movement of animal and plant species. Alien species often have no natural predators and pose a substantial threat to the dynamics of existing ecological populations in the environment into which they are introduced. Invasive species are successful when they are able to reproduce prolifically due to short life cycles, possess or adapt traits that suit the environment, and occur in high densities. Such invasive species are numerous and include house sparrows, ring-necked pheasants, European starlings, brown rats, Asian carp, American bullfrogs, emerald ash borer, kudzu vines, and zebra mussels among numerous others, most notably domesticated animals.
https://en.wikipedia.org/wiki/Urban_ecology
passage: Hybrids were proposed as a way of greatly accelerating their market introduction, producing energy even before the fusion systems reached break-even. However, detailed studies of the economics of the systems suggested they could not compete with existing fission reactors. The idea was abandoned and lay dormant until the continued delays in reaching break-even led to a brief revival of the concept around 2009. These studies generally concentrated on the nuclear waste disposal aspects of the design, as opposed to the production of energy. The concept has seen cyclical interest since then, based largely on the success or failure of more conventional solutions like the Yucca Mountain nuclear waste repository. Another major design effort for energy production was started at Lawrence Livermore National Laboratory (LLNL) under their LIFE program. Industry input led to the abandonment of the hybrid approach for LIFE, which was then re-designed as a pure-fusion system. LIFE was cancelled when the underlying technology, from the National Ignition Facility, failed to reach its design performance goals. Apollo Fusion, a company founded by Google executive Mike Cassidy in 2017, was also reported to be focused on using the subcritical nuclear fusion-fission hybrid method. Their web site is now focused on their Hall-effect thrusters, and mentions fusion only in passing. On 9 September 2022, Professor Peng Xianjue of the Chinese Academy of Engineering Physics announced that the Chinese government had approved the construction of the world's largest pulsed-power plant, the Z-FFR (Z-pinch fission-fusion reactor), in Chengdu, Sichuan province.
https://en.wikipedia.org/wiki/Nuclear_fusion%E2%80%93fission_hybrid
passage: In computer science, a ternary search tree is a type of trie (sometimes called a prefix tree) where nodes are arranged in a manner similar to a binary search tree, but with up to three children rather than the binary tree's limit of two. Like other prefix trees, a ternary search tree can be used as an associative map structure with the ability for incremental string search. However, ternary search trees are more space efficient compared to standard prefix trees, at the cost of speed. Common applications for ternary search trees include spell-checking and auto-completion. ## Description Each node of a ternary search tree stores a single character, an object (or a pointer to an object depending on implementation), and pointers to its three children conventionally named equal kid, lo kid and hi kid, which can also be referred respectively as middle (child), lower (child) and higher (child). A node may also have a pointer to its parent node as well as an indicator as to whether or not the node marks the end of a word. The lo kid pointer must point to a node whose character value is less than the current node. The hi kid pointer must point to a node whose character is greater than the current node. The equal kid points to the next character in the word.
https://en.wikipedia.org/wiki/Ternary_search_tree
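The node layout and lo/equal/hi pointer discipline described in the ternary search tree passage above can be made concrete with a short sketch. The following is a minimal Python illustration (class and method names are my own, not from the source); it stores only an end-of-word flag rather than an associated object and omits the optional parent pointer.

```python
# Minimal ternary search tree sketch: each node stores one character plus
# lo/eq/hi children; a terminal flag marks the end of a stored word.
class TSTNode:
    __slots__ = ("char", "lo", "eq", "hi", "is_end")

    def __init__(self, char):
        self.char = char
        self.lo = self.eq = self.hi = None
        self.is_end = False


class TernarySearchTree:
    def __init__(self):
        self.root = None

    def insert(self, word):
        if word:
            self.root = self._insert(self.root, word, 0)

    def _insert(self, node, word, i):
        ch = word[i]
        if node is None:
            node = TSTNode(ch)
        if ch < node.char:            # lo kid: character less than the current node's
            node.lo = self._insert(node.lo, word, i)
        elif ch > node.char:          # hi kid: character greater than the current node's
            node.hi = self._insert(node.hi, word, i)
        elif i + 1 < len(word):       # equal kid: advance to the next character
            node.eq = self._insert(node.eq, word, i + 1)
        else:
            node.is_end = True        # this node marks the end of a word
        return node

    def contains(self, word):
        if not word:
            return False
        node, i = self.root, 0
        while node is not None:
            ch = word[i]
            if ch < node.char:
                node = node.lo
            elif ch > node.char:
                node = node.hi
            elif i + 1 < len(word):
                node, i = node.eq, i + 1
            else:
                return node.is_end
        return False


tst = TernarySearchTree()
for w in ("cat", "cap", "car", "dog"):
    tst.insert(w)
print(tst.contains("cap"), tst.contains("ca"))   # True False
```

Searching for a prefix rather than a whole word follows the same loop but succeeds as soon as the prefix is exhausted, which is what makes the structure convenient for auto-completion.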
passage: The longest family tree in the world is that of the Chinese philosopher and educator Confucius (551–479 BC), who is descended from King Tang (1675–1646 BC). The tree spans more than 80 generations from him and includes more than 2 million members. An international effort involving more than 450 branches around the world was started in 1998 to retrace and revise this family tree. A new edition of the Confucius genealogy was printed in September 2009 by the Confucius Genealogy Compilation Committee, to coincide with the 2560th anniversary of the birth of the Chinese thinker. This latest edition was expected to include some 1.3 million living members who are scattered around the world today. ### Europe and West Asia Before the Dark Ages, in the Greco-Roman world, some reliable pedigrees dated back perhaps at least as far as the first half of the first millennium BC, with claimed or mythological origins reaching back further. Roman clan and family lineages played an important part in the structure of their society and were the basis of their intricate system of personal names. However, there was a break in the continuity of record-keeping at the end of Classical Antiquity. Records of the lines of succession of the Popes and the Eastern Roman Emperors through this transitional period have survived, but these are not continuous genealogical histories of single families. Refer to descent from antiquity.
https://en.wikipedia.org/wiki/Family_tree
passage: i.e., when the proposed operations are commutative operations for the state machine. In such cases, the conflicting operations can both be accepted, avoiding the delays required for resolving conflicts and re-proposing the rejected operations. This concept is further generalized into ever-growing sequences of commutative operations, some of which are known to be stable (and thus may be executed). The protocol tracks these sequences, ensuring that all proposed operations of one sequence are stabilized before allowing any operation non-commuting with them to become stable. ### Example In order to illustrate Generalized Paxos, the example below shows a message flow between two concurrently executing clients and a replicated state machine implementing read/write operations over two distinct registers A and B.

Commutativity Table

|          | Read(A) | Write(A) | Read(B) | Write(B) |
|----------|---------|----------|---------|----------|
| Read(A)  |         | ✗        |         |          |
| Write(A) | ✗       | ✗        |         |          |
| Read(B)  |         |          |         | ✗        |
| Write(B) |         |          | ✗       | ✗        |

Note that ✗ in this table indicates pairs of operations which are non-commutative.
https://en.wikipedia.org/wiki/Paxos_%28computer_science%29
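The commutativity relation in the table above reduces to a small predicate: two operations commute unless they touch the same register and at least one of them is a write. The Python sketch below (with hypothetical names; it illustrates only the conflict test, not the Generalized Paxos message flow itself) shows how a replica might check whether a newly proposed operation commutes with the already-stable sequence.

```python
# Illustrative commutativity check for the read/write register example above.
from collections import namedtuple

Op = namedtuple("Op", ["kind", "register"])   # kind is "read" or "write"

def commute(a, b):
    if a.register != b.register:
        return True                                  # different registers always commute
    return a.kind == "read" and b.kind == "read"     # two reads of the same register commute

def conflicts_with_sequence(op, stable_ops):
    """Return the stable operations that do not commute with `op`; the protocol
    must order `op` after these before it can become stable itself."""
    return [s for s in stable_ops if not commute(op, s)]

stable = [Op("read", "A"), Op("write", "B")]
print(conflicts_with_sequence(Op("write", "A"), stable))  # conflicts with Read(A)
print(conflicts_with_sequence(Op("read", "A"), stable))   # [] -> commutes with everything
```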
passage: A regular tetrahedron can be embedded inside a cube in two ways such that each vertex is a vertex of the cube, and each edge is a diagonal of one of the cube's faces. For one such embedding, the Cartesian coordinates of the vertices are $$ \begin{align} (1,1,1), &\quad (1,-1,-1), \\ (-1,1,-1), &\quad (-1,-1,1). \end{align} $$ This yields a tetrahedron with edge-length $$ 2 \sqrt{2} $$ , centered at the origin. For the other tetrahedron (which is dual to the first), reverse all the signs. The combined vertices of these two tetrahedra are the vertices of a cube, demonstrating that the regular tetrahedron is the 3-demicube, a polyhedron obtained by alternating a cube. This form has Schläfli symbol $$ \mathrm{h}\{4,3\} $$ and a corresponding Coxeter diagram. ### Symmetry The vertices of a cube can be grouped into two groups of four, each forming a regular tetrahedron, showing one of the two tetrahedra in the cube. The symmetries of a regular tetrahedron correspond to half of those of a cube: those that map the tetrahedra to themselves, and not to each other. The tetrahedron is the only Platonic solid not mapped to itself by point inversion.
https://en.wikipedia.org/wiki/Tetrahedron
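The embedding described in the tetrahedron passage above can be checked numerically: the four listed vertices are mutually $$ 2\sqrt{2} $$ apart, and together with the sign-reversed set they give all eight vertices of the cube. The following is a small verification sketch in Python, not taken from the source.

```python
# Verify the cube embedding: pairwise distances and the vertex count of the cube.
from itertools import combinations
from math import dist, isclose, sqrt

tetra = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
dual = [tuple(-x for x in v) for v in tetra]          # the sign-reversed tetrahedron

# Every pair of vertices differs in exactly two coordinates by 2, giving 2*sqrt(2).
assert all(isclose(dist(p, q), 2 * sqrt(2)) for p, q in combinations(tetra, 2))
# The two tetrahedra share no vertices and together cover the cube's 8 vertices.
assert len(set(tetra) | set(dual)) == 8
print("edge length:", dist(tetra[0], tetra[1]))       # 2.828... = 2*sqrt(2)
```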
passage: ### Integration challenges The primary challenges to integrating nonplanar multigate devices into conventional semiconductor manufacturing processes include: - Fabrication of a thin silicon "fin" tens of nanometers wide - Fabrication of matched gates on multiple sides of the fin ## Compact modeling BSIMCMG106.0.0, officially released on March 1, 2012, by the UC Berkeley BSIM Group, is the first standard model for FinFETs. BSIM-CMG is implemented in Verilog-A. Physical surface-potential-based formulations are derived for both intrinsic and extrinsic models with finite body doping. The surface potentials at the source and drain ends are solved analytically with poly-depletion and quantum mechanical effects. The effect of finite body doping is captured through a perturbation approach. The analytic surface potential solution agrees closely with the 2-D device simulation results. If the channel doping concentration is low enough to be neglected, computational efficiency can be further improved by setting a specific flag (COREMOD = 1). All of the important multi-gate (MG) transistor behavior is captured by this model. Volume inversion is included in the solution of Poisson's equation, hence the subsequent I–V formulation automatically captures the volume-inversion effect. Analysis of electrostatic potential in the body of MG MOSFETs provided a model equation for short-channel effects (SCE). The extra electrostatic control from the end gates (top/bottom gates in triple- or quadruple-gate devices) is also captured in the short-channel model.
https://en.wikipedia.org/wiki/Multigate_device
passage: It does not, however, provide information about inflection points. Specifically, a twice-differentiable function f is concave up if $$ f''(x) > 0 $$ and concave down if $$ f''(x) < 0 $$ . Note that if $$ f(x) = x^4 $$ , then $$ x = 0 $$ has zero second derivative, yet is not an inflection point, so the second derivative alone does not give enough information to determine whether a given point is an inflection point. ### Higher-order derivative test The higher-order derivative test or general derivative test is able to determine whether a function's critical points are maxima, minima, or points of inflection for a wider variety of functions than the second-order derivative test. As shown below, the second-derivative test is mathematically identical to the special case of n = 1 in the higher-order derivative test. Let f be a real-valued, sufficiently differentiable function on an interval $$ I \subset \R $$ , let $$ c \in I $$ , and let $$ n \ge 1 $$ be a natural number.
https://en.wikipedia.org/wiki/Derivative_test
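The higher-order derivative test described above translates directly into a small symbolic-computation sketch. The version below assumes SymPy is available and that c is already known to be a critical point (f'(c) = 0); the function name is my own, not from the source.

```python
# Higher-order derivative test: find the first non-vanishing derivative at a
# critical point c and classify the point by that derivative's order and sign.
import sympy as sp

x = sp.symbols("x")

def classify_critical_point(f, c, max_order=10):
    for n in range(2, max_order + 1):
        d = sp.diff(f, x, n).subs(x, c)   # nth derivative evaluated at c
        if d != 0:
            if n % 2 == 0:
                return "local minimum" if d > 0 else "local maximum"
            return "inflection point"     # first non-zero derivative has odd order
    return "inconclusive"

print(classify_critical_point(x**4, 0))    # local minimum (first non-zero derivative is the 4th)
print(classify_critical_point(x**3, 0))    # inflection point
print(classify_critical_point(-x**2, 0))   # local maximum
```

The x**4 case shows why the second derivative alone is insufficient: its second derivative vanishes at 0, yet the fourth derivative is positive, so the point is a minimum rather than an inflection point.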
passage: Data modeling in software engineering is the process of creating a data model for an information system by applying certain formal techniques. It may be applied as part of the broader model-driven engineering (MDE) concept. ## Overview Data modeling is a process used to define and analyze data requirements needed to support the business processes within the scope of corresponding information systems in organizations. Therefore, the process of data modeling involves professional data modelers working closely with business stakeholders, as well as potential users of the information system. There are three different types of data models produced while progressing from requirements to the actual database to be used for the information system. The data requirements are initially recorded as a conceptual data model, which is essentially a set of technology-independent specifications about the data and is used to discuss initial requirements with the business stakeholders. The conceptual model is then translated into a logical data model, which documents structures of the data that can be implemented in databases. Implementation of one conceptual data model may require multiple logical data models. The last step in data modeling is transforming the logical data model into a physical data model that organizes the data into tables and accounts for access, performance, and storage details. Data modeling defines not just data elements, but also their structures and the relationships between them. Data modeling techniques and methodologies are used to model data in a standard, consistent, predictable manner in order to manage it as a resource.
https://en.wikipedia.org/wiki/Data_modeling
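As a toy illustration of the three modeling levels described above, the sketch below walks one hypothetical Customer/Order example from a conceptual model through a logical model to a physical schema. All entity names, attributes, and types are invented for illustration and are not drawn from the source.

```python
# Conceptual model: technology-independent entities and a relationship.
conceptual = {
    "entities": ["Customer", "Order"],
    "relationships": [("Customer", "places", "Order")],
}

# Logical model: attributes, keys, and the relationship expressed as a foreign key.
logical = {
    "Customer": {"attributes": ["customer_id", "name", "email"], "primary_key": "customer_id"},
    "Order": {
        "attributes": ["order_id", "customer_id", "placed_on", "total"],
        "primary_key": "order_id",
        "foreign_keys": {"customer_id": "Customer.customer_id"},
    },
}

# Physical model: concrete tables with storage-level details (types, constraints, indexes).
physical_ddl = """
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    email       TEXT UNIQUE
);
CREATE TABLE customer_order (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
    placed_on   TEXT,
    total       REAL
);
CREATE INDEX idx_order_customer ON customer_order(customer_id);
"""
print(physical_ddl)
```

The point of the staging is that the conceptual dictionary could map onto several different logical models, and the logical model in turn leaves the physical details (types, indexes, table naming) to the last step.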
passage: Such quadratic polynomials can be expressed as $$ f_c(z) = z^2 + c ~, $$ where c is a complex parameter. Fix some $$ R > 0 $$ large enough that $$ R^2 - R \ge |c|. $$ (For example, if c is in the Mandelbrot set, then $$ |c| \le 2, $$ so we may simply let $$ R = 2~. $$ ) Then the filled Julia set for this system is the subset of the complex plane given by $$ K(f_c) = \left\{z \in \mathbb C : \forall n \in \mathbb N, |f_c^n(z)| \le R \right\}~, $$ where $$ f_c^n(z) $$ is the nth iterate of $$ f_c(z). $$ The Julia set $$ J(f_c) $$ of this function is the boundary of $$ K(f_c) $$ . The parameter plane of quadratic polynomials – that is, the plane of possible c values – gives rise to the famous Mandelbrot set. Indeed, the Mandelbrot set is defined as the set of all c such that $$ J(f_c) $$ is connected. For parameters outside the Mandelbrot set, the Julia set is a Cantor space: in this case it is sometimes referred to as Fatou dust. In many cases, the Julia set of c looks like the Mandelbrot set in sufficiently small neighborhoods of c.
https://en.wikipedia.org/wiki/Julia_set
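The membership test defining the filled Julia set above lends itself to a direct sketch: iterate $$ f_c(z) = z^2 + c $$ and keep the points whose orbits stay within the escape radius R. The Python sketch below uses a finite iteration cap as a practical stand-in for the "for all n" condition, and the sample value of c is arbitrary, not taken from the source.

```python
# Approximate membership test for the filled Julia set K(f_c) with f_c(z) = z^2 + c.
def in_filled_julia(z, c, r=2.0, max_iter=200):
    for _ in range(max_iter):        # finite proxy for "for all n"
        if abs(z) > r:
            return False             # the orbit escaped the radius R
        z = z * z + c
    return True

c = -0.8 + 0.156j                    # arbitrary sample parameter with |c| <= 2, so R = 2 works
# Coarse ASCII rendering of K(f_c) on a small grid of the complex plane.
for im in [y / 10 for y in range(10, -11, -1)]:
    row = ""
    for re in [x / 20 for x in range(-30, 31)]:
        row += "#" if in_filled_julia(complex(re, im), c) else "."
    print(row)
```

The boundary of the region printed with "#" approximates the Julia set J(f_c); raising max_iter and refining the grid sharpens the picture at the cost of runtime.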