passage: Each of these deletions takes $$ O(\log n) $$ time (the amount of time to search for the element and flag it as deleted). The $$ n/2 $$th deletion causes the tree to be rebuilt and takes $$ O(\log n) + O(n) $$ (or just $$ O(n) $$ ) time. Using aggregate analysis it becomes clear that the amortized cost of a deletion is $$ O(\log n) $$ : $$ {\sum_{1}^{n/2} O(\log n) + O(n) \over n/2} = {{n \over 2}O(\log n) + O(n) \over n/2} = O(\log n) $$ ## Etymology The name Scapegoat tree "[...] is based on the common wisdom that, when something goes wrong, the first thing people tend to do is find someone to blame (the scapegoat)." In the Bible, a scapegoat is an animal that is ritually burdened with the sins of others, and then driven away.
https://en.wikipedia.org/wiki/Scapegoat_tree
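A minimal Python sketch of the lazy-deletion scheme the scapegoat-tree passage above describes: a deletion only searches for the node and flags it, and once half of the stored nodes are flagged the whole tree is rebuilt into a perfectly balanced one. The node layout and helper names are assumptions for illustration, not taken from the article.

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = None
        self.deleted = False          # lazy-deletion flag

class ScapegoatLikeTree:
    def __init__(self):
        self.root = None
        self.size = 0                 # nodes currently stored (including flagged ones)
        self.flagged = 0              # nodes flagged as deleted

    def _find(self, key):
        node = self.root
        while node:                   # O(log n) on a balanced tree
            if key == node.key:
                return node
            node = node.left if key < node.key else node.right
        return None

    def delete(self, key):
        node = self._find(key)
        if node and not node.deleted:
            node.deleted = True       # O(log n): search plus flag
            self.flagged += 1
            if 2 * self.flagged >= self.size:
                self._rebuild()       # O(n), triggered once every n/2 deletions

    def _rebuild(self):
        keys = []
        def collect(node):
            if node:
                collect(node.left)
                if not node.deleted:
                    keys.append(node.key)
                collect(node.right)
        collect(self.root)
        self.root = self._from_sorted(keys)   # perfectly balanced tree
        self.size, self.flagged = len(keys), 0

    def _from_sorted(self, keys):
        if not keys:
            return None
        mid = len(keys) // 2
        node = Node(keys[mid])
        node.left = self._from_sorted(keys[:mid])
        node.right = self._from_sorted(keys[mid + 1:])
        return node
```

The amortized accounting matches the passage: n/2 cheap O(log n) flag operations pay for one O(n) rebuild.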
passage: Initialization of centroids, distance metric between points and centroids, and the calculation of new centroids are design choices and will vary with different implementations. In this example pseudocode, argmin is used to find the index of the minimum value.

```python
def k_means_cluster(k, points):
    # Initialization: choose k centroids (Forgy, Random Partition, etc.)
    centroids = [c1, c2, ..., ck]

    # Initialize clusters list
    clusters = [[] for _ in range(k)]

    # Loop until convergence
    converged = False
    while not converged:
        # Clear previous clusters
        clusters = [[] for _ in range(k)]

        # Assign each point to the "closest" centroid
        for point in points:
            distances_to_each_centroid = [distance(point, centroid) for centroid in centroids]
            cluster_assignment = argmin(distances_to_each_centroid)
            clusters[cluster_assignment].append(point)

        # Calculate new centroids
        # (the standard implementation uses the mean of all points in a
        #  cluster to determine the new centroid)
        new_centroids = [calculate_centroid(cluster) for cluster in clusters]
        converged = (new_centroids == centroids)
        centroids = new_centroids

    if converged:
        return clusters
```

#### Initialization methods Commonly used initialization methods are Forgy and Random Partition. The Forgy method randomly chooses k observations from the dataset and uses these as the initial means.
https://en.wikipedia.org/wiki/K-means_clustering
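A small sketch of the two initialization methods named at the end of the passage, assuming `points` is a list of equal-length numeric tuples; the helper names are illustrative, not part of the article.

```python
import random

def forgy_init(points, k):
    # Forgy: pick k distinct observations and use them directly as initial centroids.
    return random.sample(points, k)

def random_partition_init(points, k):
    # Random Partition: assign every point to a random cluster first,
    # then use each non-empty cluster's mean as its initial centroid.
    clusters = [[] for _ in range(k)]
    for p in points:
        clusters[random.randrange(k)].append(p)
    return [tuple(sum(coord) / len(coord) for coord in zip(*cluster))
            for cluster in clusters if cluster]
```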
passage: The most common color filter array pattern, the Bayer pattern, uses a checkerboard arrangement of two green pixels for each red and blue pixel, although many other color filter patterns have been developed, including patterns using cyan, magenta, yellow, and white pixels. Integral color sensors were initially manufactured by transferring colored dyes through photoresist windows onto a polymer receiving layer coated on top of a monochrome CCD sensor. Since each pixel provides only a single color (such as green), the "missing" color values (such as red and blue) for the pixel are interpolated using neighboring pixels. This processing is also referred to as demosaicing or de-bayering. - Foveon X3 sensor, using an array of layered pixel sensors, separating light via the inherent wavelength-dependent absorption property of silicon, such that every location senses all three color channels. This method is similar to how color film for photography works. - 3CCD, using three discrete image sensors, with the color separation done by a dichroic prism. The dichroic elements provide a sharper color separation, thus improving color quality. Because each sensor is equally sensitive within its passband, and at full resolution, 3-CCD sensors produce better color quality and better low light performance. 3-CCD sensors produce a full 4:4:4 signal, which is preferred in television broadcasting, video editing and chroma key visual effects.
https://en.wikipedia.org/wiki/Image_sensor
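A simplified sketch of the interpolation ("demosaicing") step described above, assuming an RGGB Bayer layout and a single-channel `mosaic` array; real camera pipelines use more sophisticated edge-aware algorithms than this bilinear averaging.

```python
import numpy as np
from scipy.signal import convolve2d

def bilinear_demosaic(mosaic):
    """Interpolate missing colour values in a raw mosaic (RGGB layout assumed)."""
    h, w = mosaic.shape
    rows, cols = np.mgrid[0:h, 0:w]
    masks = {
        "R": (rows % 2 == 0) & (cols % 2 == 0),
        "G": (rows % 2) != (cols % 2),
        "B": (rows % 2 == 1) & (cols % 2 == 1),
    }
    kernel = np.ones((3, 3))
    channels = []
    for name in ("R", "G", "B"):
        m = masks[name].astype(float)
        known = mosaic * m
        # Average of the same-colour samples inside each 3x3 neighbourhood.
        interp = convolve2d(known, kernel, mode="same") / np.maximum(
            convolve2d(m, kernel, mode="same"), 1e-12)
        # Keep measured samples, fill in the "missing" values by interpolation.
        channels.append(np.where(masks[name], mosaic, interp))
    return np.dstack(channels)
```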
passage: The set of all cosets, typically denoted by $$ \mathcal{L}^p(S, \mu) / \mathcal{N} ~~\stackrel{\scriptscriptstyle\text{def}}{=}~~ \{f + \mathcal{N} : f \in \mathcal{L}^p(S, \mu)\}, $$ forms a vector space with origin $$ 0 + \mathcal{N} = \mathcal{N} $$ when vector addition and scalar multiplication are defined by $$ (f + \mathcal{N}) + (g + \mathcal{N}) \;\stackrel{\scriptscriptstyle\text{def}}{=}\; (f + g) + \mathcal{N} $$ and $$ s (f + \mathcal{N}) \;\stackrel{\scriptscriptstyle\text{def}}{=}\; (s f) + \mathcal{N}. $$ This particular quotient vector space will be denoted by $$ L^p(S,\, \mu) ~\stackrel{\scriptscriptstyle\text{def}}{=}~ \mathcal{L}^p(S, \mu) / \mathcal{N}. $$ Two cosets are equal $$ f + \mathcal{N} = g + \mathcal{N} $$ if and only if $$ g \in f + \mathcal{N} $$ (or equivalently, $$ f - g \in \mathcal{N} $$ ), which happens if and only if $$ f = g $$ almost everywhere; if this is the case then $$ f $$ and $$ g $$ are identified in the quotient space.
https://en.wikipedia.org/wiki/Lp_space
passage: Participants who chose 3B over 3A provided evidence of the certainty effect, while those who chose 3A over 3B showed evidence of the zero effect. Participants who chose (1A,2B,3B) only deviated from the rational choice when presented with a zero-variance lottery. Participants who chose (1A,2B,3A) deviated from the rational lottery choice to avoid the risk of winning nothing (aversion to zero). Findings of the six-lottery experiment indicated the zero effect was statistically significant with a p-value < 0.01. The certainty effect was found to be statistically insignificant and not the intuitive explanation for individuals deviating from expected utility theory. ## Mathematical proof of inconsistency Expected utility theory predicts that a rational person's choices can be predicted using a utility function $$ U(W) $$ , where $$ W $$ is "wealth". Specifically, it predicts that a person will choose the outcome that maximizes the expected value of $$ U(W) $$ :
https://en.wikipedia.org/wiki/Allais_paradox
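A small numeric illustration of the expected-utility rule stated at the end of the passage, using hypothetical lotteries and a hypothetical concave utility function rather than the actual Allais payoffs.

```python
def expected_utility(lottery, utility):
    # lottery: list of (probability, wealth) pairs whose probabilities sum to 1.
    return sum(p * utility(w) for p, w in lottery)

u = lambda w: w ** 0.5                 # a concave (risk-averse) utility function
sure = [(1.0, 100.0)]                  # hypothetical certain payoff
risky = [(0.5, 0.0), (0.5, 250.0)]     # hypothetical risky payoff

print(expected_utility(sure, u))       # 10.0
print(expected_utility(risky, u))      # ~7.9, so this agent prefers the sure lottery
```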
passage: This increase in beta frequency at more anterior subregions corresponds to an anatomical model of LPFC function wherein cognitive control is hierarchically organized, with more abstract and sophisticated control mechanisms subserved by the most anterior regions and more direct/concrete control of goal-directed action at posterior sites. This is further in agreement with the fact that posterior LPFC beta is slower in frequency, similar to that observed over resting motor cortex. ## Relationship with GABA Beta waves are often considered indicative of inhibitory cortical transmission mediated by gamma aminobutyric acid (GABA), the principal inhibitory neurotransmitter of the mammalian nervous system. Benzodiazepines, drugs that modulate GABAA receptors, induce beta waves in EEG recordings from humans and rats. Spontaneous beta waves are also observed diffusely in scalp EEG recordings from children with duplication 15q11.2-q13.1 syndrome (Dup15q) who have duplications of GABAA receptor subunit genes GABRA5, GABRB3, and GABRG3. Similarly, children with Angelman syndrome with deletions of the same GABAA receptor subunit genes feature diminished beta amplitude. Thus, beta waves are likely biomarkers of GABAergic dysfunction, especially in neurodevelopmental disorders caused by 15q deletions/duplications.
https://en.wikipedia.org/wiki/Beta_wave
passage: That is, $$ \begin{align} \left |f_r(e^{i\theta})-f_{1}(e^{i\theta}) \right | &\to 0 && \text{for almost every } \theta\in [0,2\pi] \\ \|f_r-f_1\|_{L^p(S^1)} &\to 0 \end{align} $$ Now, notice that this pointwise limit is a radial limit. That is, the limit being taken is along a straight line from the center of the disk to the boundary of the circle, and the statement above hence says that $$ f(re^{i\theta})\to f_1(e^{i\theta}) \qquad \text{for almost every } \theta. $$ The natural question is, with this boundary function defined, will we converge pointwise to this function by taking a limit in any other way? That is, suppose instead of following a straight line to the boundary, we follow an arbitrary curve $$ \gamma:[0,1)\to \mathbb{D} $$ converging to some point $$ e^{i\theta} $$ on the boundary. Will $$ f $$ converge to $$ f_{1}(e^{i\theta}) $$ ? (Note that the above theorem is just the special case of $$ \gamma(t)=te^{i\theta} $$ ).
https://en.wikipedia.org/wiki/Fatou%27s_theorem
passage: However, one does not work directly with the "category of all sets". Instead, theorems are expressed in terms of the category SetU whose objects are the elements of a sufficiently large Grothendieck universe U, and are then shown not to depend on the particular choice of U. As a foundation for category theory, this approach is well matched to a system like Tarski–Grothendieck set theory in which one cannot reason directly about proper classes; its principal disadvantage is that a theorem can be true of all SetU but not of Set. Various other solutions, and variations on the above, have been proposed. The same issues arise with other concrete categories, such as the category of groups or the category of topological spaces.
https://en.wikipedia.org/wiki/Category_of_sets
passage: ## Example Regular splitting In equation (), let us apply the splitting () which is used in the Jacobi method: we split A in such a way that B consists of all of the diagonal elements of A, and C consists of all of the off-diagonal elements of A, negated. (Of course this is not the only useful way to split a matrix into two matrices.) We have $$ \begin{align} & \mathbf{A^{-1}} = \frac{1}{47} \begin{pmatrix} 18 & 13 & 16 \\ 11 & 21 & 15 \\ 13 & 12 & 22 \end{pmatrix}, \quad \mathbf{B^{-1}} = \begin{pmatrix} \frac{1}{6} & 0 & 0 \\[4pt] 0 & \frac{1}{4} & 0 \\[4pt] 0 & 0 & \frac{1}{5} \end{pmatrix}, \end{align} $$ $$ \begin{align} \mathbf{D} = \mathbf{B^{-1}C} = \begin{pmatrix} 0 & \frac{1}{3} & \frac{1}{2} \\[4pt] \frac{1}{4} & 0 & \frac{1}{2} \\[4pt] \frac{3}{5} & \frac{1}{5} & 0 \end{pmatrix}, \quad \mathbf{B^{-1}k} = \begin{pmatrix} \frac{5}{6} \\[4pt] -3 \\[4pt] 2 \end{pmatrix}. \end{align} $$ Since B−1 ≥ 0 and C ≥ 0, the splitting () is a regular splitting.
https://en.wikipedia.org/wiki/Matrix_splitting
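A NumPy check of the iteration induced by this regular splitting. The matrix A and the vector k are not printed in the excerpt, so the values below are inferred from the displayed B^-1, D = B^-1 C and B^-1 k and should be treated as an assumption.

```python
import numpy as np

# Inferred from the passage: B is the diagonal of A, C the negated off-diagonal part.
B = np.diag([6.0, 4.0, 5.0])
C = np.array([[0.0, 2.0, 3.0],
              [1.0, 0.0, 2.0],
              [3.0, 1.0, 0.0]])
A = B - C                                   # regular splitting A = B - C
k = B @ np.array([5 / 6, -3.0, 2.0])        # recovered from the displayed B^-1 k

D = np.linalg.solve(B, C)                   # iteration matrix B^-1 C
c = np.linalg.solve(B, k)                   # constant term B^-1 k
x = np.zeros(3)
for _ in range(100):                        # Jacobi iteration x_{j+1} = D x_j + B^-1 k
    x = D @ x + c

print(x)                                    # converges, since the row sums of D are < 1
print(np.linalg.solve(A, k))                # direct solution of A x = k for comparison
```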
passage: Specifically, if $$ G $$ is a group, $$ N \triangleleft G $$ , a normal subgroup of $$ G $$ , $$ \mathcal{G} = \{ A \mid N \subseteq A \leq G \} $$ , the set of all subgroups $$ A $$ of $$ G $$ that contain $$ N $$ , and $$ \mathcal{N} = \{ S \mid S \leq G/N \} $$ , the set of all subgroups of $$ G/N $$ , then there is a bijective map $$ \phi: \mathcal{G} \to \mathcal{N} $$ such that $$ \phi(A) = A/N $$ for all $$ A \in \mathcal{G}. $$ One further has that if $$ A $$ and $$ B $$ are in $$ \mathcal{G} $$ then - $$ A \subseteq B $$ if and only if $$ A/N \subseteq B/N $$ ; - if $$ A \subseteq B $$ then $$ |B:A| = |B/N:A/N| $$ , where $$ |B:A| $$ is the index of $$ A $$ in $$ B $$ (the number of cosets $$ bA $$ of $$ A $$ in $$ B $$ ); - $$ \langle A,B \rangle / N = \left\langle A/N, B/N \right\rangle, $$ where $$ \langle A, B \rangle $$ is the subgroup of $$ G $$ generated by $$ A\cup B; $$ - $$ (A \cap B)/N = A/N \cap B/N $$ , and - $$ A $$ is a normal subgroup of $$ G $$ if and only if $$ A/N $$ is a normal subgroup of $$ G/N $$.
https://en.wikipedia.org/wiki/Correspondence_theorem
passage: A freely reduced word w ∈ F(X) satisfies w = 1 in G if and only if the loop labeled by w in the presentation complex for G corresponding to (∗) is null-homotopic. This fact can be used to show that Area(w) is the smallest number of 2-cells in a van Kampen diagram over (∗) with boundary cycle labelled by w. ### Isoperimetric function An isoperimetric function for a finite presentation (∗) is a monotone non-decreasing function $$ f: \mathbb N\to [0,\infty) $$ such that whenever w ∈ F(X) is a freely reduced word satisfying w = 1 in G, then Area(w) ≤ f(|w|), where |w| is the length of the word w. ### Dehn function Then the Dehn function of a finite presentation (∗) is defined as $$ {\rm Dehn}(n)=\max\{{\rm Area}(w): w=1 \text{ in } G, |w|\le n, w \text{ freely reduced}\}. $$ Equivalently, Dehn(n) is the smallest isoperimetric function for (∗), that is, Dehn(n) is an isoperimetric function for (∗) and for any other isoperimetric function f(n) we have Dehn(n) ≤ f(n) for every n ≥ 0.
https://en.wikipedia.org/wiki/Dehn_function
passage: In the following discussion the diagonal components of the tensor would be termed gravitoelectric components, and the other components will be termed gravitomagnetic. Two gravitoelectrically interacting particle ensembles, e.g., two planets or stars moving at constant velocity with respect to each other, each feel a force toward the instantaneous position of the other body without a speed-of-light delay because Lorentz invariance demands that what a moving body in a static field sees and what a moving body that emits that field sees be symmetrical. A moving body's seeing no aberration in a static field emanating from a "motionless body" therefore means Lorentz invariance requires that in the previously moving body's reference frame the (now moving) emitting body's field lines must not at a distance be retarded or aberred. Moving charged bodies (including bodies that emit static gravitational fields) exhibit static field lines that do not bend with distance and show no speed of light delay effects, as seen from bodies moving relative to them. In other words, since the gravitoelectric field is, by definition, static and continuous, it does not propagate. If such a source of a static field is accelerated (for example stopped) with regard to its formerly constant velocity frame, its distant field continues to be updated as though the charged body continued with constant velocity. This effect causes the distant fields of unaccelerated moving charges to appear to be "updated" instantly for their constant velocity motion, as seen from distant positions, in the frame where the source-object is moving at constant velocity.
https://en.wikipedia.org/wiki/Speed_of_gravity
passage: Unfortunately, there are several incompatible band designation systems, and even within a system the frequency ranges corresponding to some of the letters vary somewhat between different application fields. The letter system had its origin in World War 2 in a top-secret U.S. classification of bands used in radar sets; this is the origin of the oldest letter system, the IEEE radar bands. One set of microwave frequency bands designations by the Radio Society of Great Britain (RSGB), is tabulated below:
https://en.wikipedia.org/wiki/Microwave
passage: Let F : Set → Grp be the functor assigning to each set Y the free group generated by the elements of Y, and let G : Grp → Set be the forgetful functor, which assigns to each group X its underlying set. Then F is left adjoint to G: Initial morphisms. For each set Y, the set GFY is just the underlying set of the free group FY generated by Y. Let $$ \eta_Y:Y\to GFY $$ be the set map given by "inclusion of generators". This is an initial morphism from Y to G, because any set map from Y to the underlying set GW of some group W will factor through $$ \eta_Y:Y\to GFY $$ via a unique group homomorphism from FY to W. This is precisely the universal property of the free group on Y. Terminal morphisms. For each group X, the group FGX is the free group generated freely by GX, the elements of X. Let $$ \varepsilon_X:FGX\to X $$ be the group homomorphism that sends the generators of FGX to the elements of X they correspond to, which exists by the universal property of free groups. Then each $$ (GX,\varepsilon_X) $$ is a terminal morphism from F to X, because any group homomorphism from a free group FZ to X will factor through $$ \varepsilon_X:FGX\to X $$ via a unique set map from Z to GX.
https://en.wikipedia.org/wiki/Adjoint_functors
passage: Cooperativity is a phenomenon displayed by systems involving identical or near-identical elements, which act dependently of each other, relative to a hypothetical standard non-interacting system in which the individual elements are acting independently. One manifestation of this is enzymes or receptors that have multiple binding sites where the affinity of the binding sites for a ligand is apparently increased, positive cooperativity, or decreased, negative cooperativity, upon the binding of a ligand to a binding site. For example, when an oxygen molecule binds to one of hemoglobin's four binding sites, the affinity to oxygen of the three remaining available binding sites increases; i.e. oxygen is more likely to bind to a hemoglobin bound to one oxygen than to an unbound hemoglobin. This is referred to as cooperative binding. We also see cooperativity in large chain molecules made of many identical (or nearly identical) subunits (such as DNA, proteins, and phospholipids), when such molecules undergo phase transitions such as melting, unfolding or unwinding. This is referred to as subunit cooperativity. However, the definition of cooperativity based on apparent increase or decrease in affinity to successive ligand binding steps is problematic, as the concept of "energy" must always be defined relative to a standard state. When we say that the affinity is increased upon binding of one ligand, it is empirically unclear what we mean since a non-cooperative binding curve is required to rigorously define binding energy and hence also affinity.
https://en.wikipedia.org/wiki/Cooperativity
passage: However, with the addition of each tag comes the risk that the native function of the protein may be compromised by interactions with the tag. Therefore, after purification, tags are sometimes removed by specific proteolysis (e.g. by TEV protease, Thrombin, Factor Xa or Enteropeptidase) or intein splicing. ## List of protein tags (See Proteinogenic amino acid#Chemical properties for the A-Z amino-acid codes) ### Peptide tags - ALFA-tag, a de novo designed helical peptide tag (SRLEEELRRRLTE) for biochemical and microscopy applications. The tag is recognized by a repertoire of single-domain antibodies - AviTag, a peptide allowing biotinylation by the enzyme BirA and so the protein can be isolated by streptavidin (GLNDIFEAQKIEWHE) - C-tag, a peptide that binds to a single-domain camelid antibody developed through phage display (EPEA) - Calmodulin-tag, a peptide bound by the protein calmodulin (KRRWKKNFIAVSAANRFKKISSSGAL) - iCapTag™ (intein Capture Tag), a self-removing peptide-based tag (MIKIATRKYLGKQNVYGIGVERDHNFALKNGFIAHN). The iCapTag™ is controlled by pH change (typically pH 8.5 to pH 6.2). Therefore, this technology can be adapted to a wide range of buffers adjusted to the target pH values of 8.5 and 6.2. The expected purity of target proteins or peptides is between 95-99%.
https://en.wikipedia.org/wiki/Protein_tag
passage: where $$ J=\bigcup\limits_{j=1}^q[a_j,b_j] $$ is the support of the measure and define $$ q(x)=-\left(\frac{Q'(x)}{2}\right)^2+\int \frac{Q'(x)-Q'(y)}{x-y}\mathrm{d}\nu_{Q}(y) $$ .
https://en.wikipedia.org/wiki/Random_matrix
passage: When interdisciplinary collaboration or research results in new solutions to problems, much information is given back to the various disciplines involved. Therefore, both disciplinarians and interdisciplinarians may be seen in complementary relation to one another. ## Barriers Because most participants in interdisciplinary ventures were trained in traditional disciplines, they must learn to appreciate differences of perspectives and methods. For example, a discipline that places more emphasis on quantitative rigor may produce practitioners who are more scientific in their training than others; in turn, colleagues in "softer" disciplines may associate quantitative approaches with difficulty in grasping the broader dimensions of a problem and with lower rigor in theoretical and qualitative argumentation. An interdisciplinary program may not succeed if its members remain stuck in their disciplines (and in disciplinary attitudes). Those who lack experience in interdisciplinary collaborations may also not fully appreciate the intellectual contribution of colleagues from those disciplines. From the disciplinary perspective, however, much interdisciplinary work may be seen as "soft", lacking in rigor, or ideologically motivated; these beliefs place barriers in the career paths of those who choose interdisciplinary work. For example, interdisciplinary grant applications are often refereed by peer reviewers drawn from established disciplines; interdisciplinary researchers may experience difficulty getting funding for their research. In addition, untenured researchers know that, when they seek promotion and tenure, it is likely that some of the evaluators will lack commitment to interdisciplinarity. They may fear that making a commitment to interdisciplinary research will increase the risk of being denied tenure. Interdisciplinary programs may also fail if they are not given sufficient autonomy.
https://en.wikipedia.org/wiki/Interdisciplinarity
passage: There are various properties that uniquely specify them; for instance, all unbounded, connected, and separable order topologies are necessarily homeomorphic to the reals. Every nonnegative real number has a square root in $$ \mathbb{R} $$ , although no negative number does. This shows that the order on $$ \mathbb{R} $$ is determined by its algebraic structure. Also, every polynomial of odd degree admits at least one real root: these two properties make $$ \mathbb{R} $$ the premier example of a real closed field. Proving this is the first half of one proof of the fundamental theorem of algebra. The reals carry a canonical measure, the Lebesgue measure, which is the Haar measure on their structure as a topological group normalized such that the unit interval [0;1] has measure 1. There exist sets of real numbers that are not Lebesgue measurable, e.g. Vitali sets. The supremum axiom of the reals refers to subsets of the reals and is therefore a second-order logical statement. It is not possible to characterize the reals with first-order logic alone: the Löwenheim–Skolem theorem implies that there exists a countable dense subset of the real numbers satisfying exactly the same sentences in first-order logic as the real numbers themselves. The set of hyperreal numbers satisfies the same first order sentences as $$ \mathbb{R} $$ .
https://en.wikipedia.org/wiki/Real_number
passage: This tensor should not be confused with the tensor density field mentioned above. The presentation in this section closely follows . The covariant Levi-Civita tensor (also known as the Riemannian volume form) in any coordinate system that matches the selected orientation is $$ E_{a_1\dots a_n} = \sqrt{\left|\det [g_{ab}]\right|}\, \varepsilon_{a_1\dots a_n} \,, $$ where $$ g_{ab} $$ is the representation of the metric in that coordinate system.
https://en.wikipedia.org/wiki/Levi-Civita_symbol
passage: If $$ Z_\text{TS} $$ is the equivalent impedance of series impedances and $$ Z_\text{TP} $$ is the equivalent impedance of parallel impedances, then $$ Z_\text{TS} = Z_1 + Z_2 + Z_3 + ... \, $$ $$ \frac{1}{Z_\text{TP}} = \frac{1}{Z_1} + \frac{1}{Z_2} + \frac{1}{Z_3} + ... \, $$ For admittances the reverse is true, that is $$ Y_\text{TP} = Y_1 + Y_2 + Y_3 + ... \, $$ $$ \frac{1}{Y_\text{TS}} = \frac{1}{Y_1} + \frac{1}{Y_2} + \frac{1}{Y_3} + ... \, $$ Dealing with the reciprocals, especially in complex numbers, is more time-consuming and error-prone than using linear addition. In general therefore, most RF engineers work in the plane where the circuit topography supports linear addition. The following table gives the complex expressions for impedance (real and normalised) and admittance (real and normalised) for each of the three basic passive circuit elements: resistance, inductance and capacitance. Using just the characteristic impedance (or characteristic admittance) and test frequency an equivalent circuit can be found and vice versa.
https://en.wikipedia.org/wiki/Smith_chart
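A short Python illustration of the series/parallel rules above using complex arithmetic; the element reactances are arbitrary example values, not from the article.

```python
def series(*impedances):
    # Z_TS = Z1 + Z2 + ...
    return sum(impedances)

def parallel(*impedances):
    # 1/Z_TP = 1/Z1 + 1/Z2 + ...
    return 1 / sum(1 / z for z in impedances)

# Example: a 50 ohm resistor in series with an inductive reactance of +j30 ohm,
# all in parallel with a capacitive reactance of -j75 ohm (arbitrary values).
z_branch = series(50, 30j)
z_total = parallel(z_branch, -75j)
print(z_total)

# Working with admittances instead: parallel admittances simply add linearly.
y_total = sum(1 / z for z in (z_branch, -75j))
print(1 / y_total)        # same result as z_total
```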
passage: ### Real symmetric matrices As a special case, for every real symmetric matrix, the eigenvalues are real and the eigenvectors can be chosen real and orthonormal. Thus a real symmetric matrix can be decomposed as $$ \mathbf{A}=\mathbf{Q} \mathbf{\Lambda}\mathbf{Q}^\mathsf{T} $$ , where $$ \mathbf{Q} $$ is an orthogonal matrix whose columns are the real, orthonormal eigenvectors of $$ \mathbf{A} $$ , and $$ \mathbf{\Lambda} $$ is a diagonal matrix whose entries are the eigenvalues of $$ \mathbf{A} $$ . ### Diagonalizable matrices Diagonalizable matrices can be decomposed using eigendecomposition, provided they have a full set of linearly independent eigenvectors. They can be expressed as $$ \mathbf{A}=\mathbf{P} \mathbf{D}\mathbf{P}^{-1} $$ , where $$ \mathbf{P} $$ is a matrix whose columns are eigenvectors of $$ \mathbf{A} $$ and $$ \mathbf{D} $$ is a diagonal matrix consisting of the corresponding eigenvalues of $$ \mathbf{A} $$ . ### Positive definite matrices Positive definite matrices are matrices for which all eigenvalues are positive.
https://en.wikipedia.org/wiki/Eigendecomposition_of_a_matrix
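A quick NumPy check of the real-symmetric case using an arbitrary example matrix: `numpy.linalg.eigh` returns real eigenvalues and orthonormal eigenvectors, and A = Q Λ Qᵀ reconstructs the matrix.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])            # arbitrary real symmetric matrix

eigenvalues, Q = np.linalg.eigh(A)          # eigh is specialised for symmetric/Hermitian input
Lambda = np.diag(eigenvalues)

print(np.allclose(A, Q @ Lambda @ Q.T))     # True: A = Q Lambda Q^T
print(np.allclose(Q.T @ Q, np.eye(3)))      # True: the columns of Q are orthonormal
```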
passage: The English word therapy comes via Latin therapīa from Ancient Greek θεραπεία and literally means "curing" or "healing". The term is a somewhat archaic doublet of the word therapy. ## Types of therapies Therapy as a treatment for physical or mental condition is based on knowledge usually from one of three separate fields (or a combination of them): conventional medicine (allopathic, Western biomedicine, relying on scientific approach and evidence-based practice), traditional medicine (age-old cultural practices), and alternative medicine (healthcare procedures "not readily integrated into the dominant healthcare model"). ### By chronology, priority, or intensity #### Levels of care Levels of care classify health care into categories of chronology, priority, or intensity, as follows: - Urgent care handles health issues that need to be handled today but are not necessarily emergencies; the urgent care venue can send a patient to the emergency care level if it turns out to be needed. - In the United States (and possibly various other countries), urgent care centers also serve another function as their other main purpose: U.S. primary care practices have evolved in recent decades into a configuration whereby urgent care centers provide portions of primary care that cannot wait a month, because getting an appointment with the primary care practitioner is often subject to a waitlist of 2 to 8 weeks. - Emergency care handles medical emergencies and is a first point of contact or intake for less serious problems, which can be referred to other levels of care as appropriate. This therapy is often given to patients before a definitive diagnosis is made.
https://en.wikipedia.org/wiki/Therapy
passage: 1. Split the level. However, we have to skew and split the entire level this time instead of just a node, complicating our code.

```
function delete is
    input: X, the value to delete, and T, the root of the tree from which it should be deleted.
    output: T, balanced, without the value X.

    if nil(T) then
        return T
    else if X > value(T) then
        right(T) := delete(X, right(T))
    else if X < value(T) then
        left(T) := delete(X, left(T))
    else
        If we're a leaf, easy, otherwise reduce to leaf case.
        if leaf(T) then
            return right(T)
        else if nil(left(T)) then
            L := successor(T)
            right(T) := delete(value(L), right(T))
            value(T) := value(L)
        else
            L := predecessor(T)
            left(T) := delete(value(L), left(T))
            value(T) := value(L)
        end if
    end if

    Rebalance the tree. Decrease the level of all nodes in this level if
    necessary, and then skew and split all nodes in the new level.
    T := decrease_level(T)
    T := skew(T)
    right(T) := skew(right(T))
    if not nil(right(T))
        right(right(T)) := skew(right(right(T)))
    end if
    T := split(T)
    right(T) := split(right(T))
    return T
end function

function decrease_level is
    input: T, a tree for which we want to remove links that skip levels.
    output: T with its level decreased.

    should_be = min(level(left(T)), level(right(T))) + 1
    if should_be < level(T) then
        level(T) := should_be
        if should_be < level(right(T)) then
            level(right(T)) := should_be
        end if
    end if
    return T
end function
```

A good example of deletion by this algorithm is present in the Andersson paper.
https://en.wikipedia.org/wiki/AA_tree
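The deletion pseudocode above leans on `skew` and `split`, which are not shown in the excerpt. Below is a minimal Python sketch of those two rebalancing helpers, assuming a simple node class with a `level` field; the node layout is an assumption for illustration.

```python
class AANode:
    def __init__(self, value, level=1, left=None, right=None):
        self.value, self.level = value, level
        self.left, self.right = left, right

def skew(t):
    # Remove a left horizontal link by rotating to the right.
    if t is None or t.left is None:
        return t
    if t.left.level == t.level:
        left = t.left
        t.left = left.right
        left.right = t
        return left
    return t

def split(t):
    # Remove two consecutive right horizontal links by rotating to the left
    # and increasing the level of the new subtree root.
    if t is None or t.right is None or t.right.right is None:
        return t
    if t.level == t.right.right.level:
        right = t.right
        t.right = right.left
        right.left = t
        right.level += 1
        return right
    return t
```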
passage: Factors that affect resistance and thus loss include temperature, spiraling, and the skin effect. Resistance increases with temperature. Spiraling, which refers to the way stranded conductors spiral about the center, also contributes to increases in conductor resistance. The skin effect causes the effective resistance to increase at higher AC frequencies. Corona and resistive losses can be estimated using a mathematical model. US transmission and distribution losses were estimated at 6.6% in 1997, 6.5% in 2007 and 5% from 2013 to 2019. In general, losses are estimated from the discrepancy between power produced (as reported by power plants) and power sold; the difference constitutes transmission and distribution losses, assuming no utility theft occurs. As of 1980, the longest cost-effective distance for DC transmission was . For AC it was , though US transmission lines are substantially shorter. In any AC line, conductor inductance and capacitance can be significant. Currents that flow solely in reaction to these properties, (which together with the resistance define the impedance) constitute reactive power flow, which transmits no power to the load. These reactive currents, however, cause extra heating losses. The ratio of real power transmitted to the load to apparent power (the product of a circuit's voltage and current, without reference to phase angle) is the power factor. As reactive current increases, reactive power increases and power factor decreases. For transmission systems with low power factor, losses are higher than for systems with high power factor.
https://en.wikipedia.org/wiki/Electric_power_transmission
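A small arithmetic illustration of the power-factor relation described above, with arbitrary example values.

```python
import math

voltage = 10_000.0                 # volts (RMS), arbitrary example
current = 120.0                    # amperes (RMS), arbitrary example
phase_angle = math.radians(25)     # angle between voltage and current

apparent_power = voltage * current                        # S = V * I, in VA
real_power = apparent_power * math.cos(phase_angle)       # P, delivered to the load (W)
reactive_power = apparent_power * math.sin(phase_angle)   # Q (var), transmits no power

power_factor = real_power / apparent_power
print(power_factor)   # ~0.906: a lower power factor means more current, hence more heating loss
```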
passage: In number theory, an integer q is a quadratic residue modulo n if it is congruent to a perfect square modulo n; that is, if there exists an integer x such that $$ x^2\equiv q \pmod{n}. $$ Otherwise, q is a quadratic nonresidue modulo n. Quadratic residues are used in applications ranging from acoustical engineering to cryptography and the factoring of large numbers. ## History, conventions, and elementary facts Fermat, Euler, Lagrange, Legendre, and other number theorists of the 17th and 18th centuries established theorems and formed conjectures about quadratic residues, but the first systematic treatment is § IV of Gauss's Disquisitiones Arithmeticae (1801). Article 95 introduces the terminology "quadratic residue" and "quadratic nonresidue", and says that if the context makes it clear, the adjective "quadratic" may be dropped. For a given n, a list of the quadratic residues modulo n may be obtained by simply squaring all the numbers 0, 1, ..., n − 1. Since a≡b (mod n) implies a²≡b² (mod n), any other quadratic residue is congruent (mod n) to some number in the obtained list. But the obtained list is not composed of mutually incongruent quadratic residues (mod n) only.
https://en.wikipedia.org/wiki/Quadratic_residue
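A direct Python rendering of the procedure in the last paragraph: square the numbers 0, 1, ..., n − 1, reduce modulo n, and drop the duplicates that the raw list contains.

```python
def quadratic_residues(n):
    # Squaring a full residue system 0..n-1 hits every quadratic residue mod n.
    return sorted({(x * x) % n for x in range(n)})

print(quadratic_residues(10))   # [0, 1, 4, 5, 6, 9]
print(quadratic_residues(7))    # [0, 1, 2, 4]
```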
passage: Shortwave radio is radio transmission using radio frequencies in the shortwave bands (SW). There is no official definition of the band range, but it always includes all of the high frequency band (HF), which extends from 3 to 30 MHz (approximately 100 to 10 metres in wavelength). It lies between the medium frequency band (MF) and the bottom of the VHF band. Radio waves in the shortwave band can be reflected or refracted from a layer of electrically charged atoms in the atmosphere called the ionosphere. Therefore, short waves directed at an angle into the sky can be reflected back to Earth at great distances, beyond the horizon. This is called skywave or "skip" propagation. Thus shortwave radio can be used for communication over very long distances, in contrast to radio waves of higher frequency, which travel in straight lines (line-of-sight propagation) and are generally limited by the visual horizon, about 64 km (40 miles). Shortwave broadcasts of radio programs played an important role in international broadcasting for many decades, serving both to provide news and information and as a propaganda tool for an international audience. The heyday of international shortwave broadcasting was during the Cold War between 1960 and 1990. With the wide implementation of other technologies for the long-distance distribution of radio programs, such as satellite radio, cable broadcasting and IP-based transmissions, shortwave broadcasting lost importance. Initiatives for the digitization of broadcasting did not bear fruit either, and relatively few broadcasters continue to broadcast programs on shortwave.
https://en.wikipedia.org/wiki/Shortwave_radio
passage: Power: If $$ X \sim \operatorname{Lognormal}(\mu, \sigma^2) $$ then $$ X^a \sim \operatorname{Lognormal}(a\mu, a^2 \sigma^2) $$ for $$ a \neq 0. $$ ### Multiplication and division of independent, log-normal random variables If two independent, log-normal variables $$ X_1 $$ and $$ X_2 $$ are multiplied [divided], the product [ratio] is again log-normal, with parameters $$ \mu = \mu_1 + \mu_2 $$ [$$ \mu = \mu_1 - \mu_2 $$] and $$ \sigma^2 = \sigma_1^2 + \sigma_2^2 $$. More generally, if $$ X_j \sim \operatorname{Lognormal} (\mu_j, \sigma_j^2) $$ are $$ n $$ independent, log-normally distributed variables, then $$ Y = \prod_{j=1}^n X_j \sim \operatorname{Lognormal} \Big( \sum_{j=1}^n\mu_j, \sum_{j=1}^n \sigma_j^2 \Big). $$
https://en.wikipedia.org/wiki/Log-normal_distribution
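A quick NumPy simulation of the product rule stated above: multiplying independent log-normal samples yields a log-normal whose parameters are the sums of the individual μ and σ². The parameters and sample size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
mu1, sigma1 = 0.3, 0.4
mu2, sigma2 = 1.1, 0.2

x1 = rng.lognormal(mean=mu1, sigma=sigma1, size=1_000_000)
x2 = rng.lognormal(mean=mu2, sigma=sigma2, size=1_000_000)
y = x1 * x2                                  # product of independent log-normals

log_y = np.log(y)
print(log_y.mean(), mu1 + mu2)               # both close to 1.4
print(log_y.var(), sigma1**2 + sigma2**2)    # both close to 0.20
```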
passage: Jellyfish Lake is a marine lake where millions of golden jellyfish (Mastigias spp.) migrate horizontally across the lake daily. Although most jellyfish live well off the ocean floor and form part of the plankton, a few species are closely associated with the bottom for much of their lives and can be considered benthic. The upside-down jellyfish in the genus Cassiopea typically lie on the bottom of shallow lagoons where they sometimes pulsate gently with their umbrella top facing down. Even some deep-sea species of hydromedusae and scyphomedusae are usually collected on or near the bottom. All of the stauromedusae are found attached to either seaweed or rocky or other firm material on the bottom. Some species explicitly adapt to tidal flux. In Roscoe Bay, jellyfish ride the current at ebb tide until they hit a gravel bar, and then descend below the current. They remain in still waters until the tide rises, ascending and allowing it to sweep them back into the bay. They also actively avoid fresh water from mountain snowmelt, diving until they find enough salt. ### Parasites Jellyfish are hosts to a wide variety of parasitic organisms. They act as intermediate hosts of endoparasitic helminths, with the infection being transferred to the definitive host fish after predation. Some digenean trematodes, especially species in the family Lepocreadiidae, use jellyfish as their second intermediate hosts. Fish become infected by the trematodes when they feed on infected jellyfish. ## Relation to humans
https://en.wikipedia.org/wiki/Jellyfish
passage: ### Trilinear coordinates The circumcenter has trilinear coordinates $$ \cos \alpha : \cos \beta : \cos \gamma $$ where $$ \alpha, \beta, \gamma $$ are the angles of the triangle. In terms of the side lengths $$ a, b, c $$, the trilinears are $$ a\left(b^2 + c^2 - a^2\right) : b\left(c^2 + a^2 - b^2\right) : c\left(a^2 + b^2 - c^2\right). $$ ### Barycentric coordinates The circumcenter has barycentric coordinates $$ a^2\left(b^2 + c^2 - a^2\right):\; b^2\left(c^2 + a^2 - b^2\right):\; c^2\left(a^2 + b^2 - c^2\right),\, $$ where $$ a, b, c $$ are the edge lengths ($$ BC, CA, AB $$ respectively) of the triangle. In terms of the triangle's angles $$ \alpha, \beta, \gamma $$, the barycentric coordinates of the circumcenter are $$ \sin 2\alpha :\sin 2\beta :\sin 2\gamma . $$ ### Circumcenter vector
https://en.wikipedia.org/wiki/Circumcircle
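A small Python check of the barycentric formula above: normalising the weights a²(b² + c² − a²), b²(c² + a² − b²), c²(a² + b² − c²) and taking the weighted average of the vertices gives a point equidistant from all three. The example triangle is arbitrary.

```python
import math

def circumcenter(A, B, C):
    # Squared side lengths, with a opposite vertex A, b opposite B, c opposite C.
    a2 = (B[0] - C[0])**2 + (B[1] - C[1])**2
    b2 = (C[0] - A[0])**2 + (C[1] - A[1])**2
    c2 = (A[0] - B[0])**2 + (A[1] - B[1])**2
    # Barycentric weights from the passage.
    wa = a2 * (b2 + c2 - a2)
    wb = b2 * (c2 + a2 - b2)
    wc = c2 * (a2 + b2 - c2)
    s = wa + wb + wc
    return ((wa * A[0] + wb * B[0] + wc * C[0]) / s,
            (wa * A[1] + wb * B[1] + wc * C[1]) / s)

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
O = circumcenter(A, B, C)
print(O)                                         # (2.0, 1.0)
print([math.dist(O, P) for P in (A, B, C)])      # three equal circumradii
```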
passage:

| Pattern | Purpose | Implementation | Typical use | Limitations |
|---|---|---|---|---|
| Master-slave | Run independent processes simultaneously | Several parallel while loops, one of which functions as the "master", controlling the "slave" loops | A simple GUI for data acquisition and visualization | Attention to and prevention of race conditions is required. |
| Producer-consumer | Asynchronous or multithreaded execution of loops | A master loop controls the execution of two slave loops, that communicate using notifiers, queues and semaphores; data-independent loops are automatically executed in separate threads | Data sampling and visualization | Order of execution is not obvious to control. |
| Queued state machine with event-driven producer-consumer | Highly responsive user-interface for multithreaded applications | An event-driven user interface is placed inside the producer loop and a state machine is placed inside the consumer loop, communicating using queues between themselves and other parallel VIs | Complex applications | |

## Features and Resources ### Interfacing to devices LabVIEW includes extensive support for interfacing to instruments, cameras, and other devices. Users interface to hardware by either writing direct bus commands (USB, GPIB, Serial) or using high-level, device-specific drivers that provide native "G" function nodes for controlling the device. National Instruments makes thousands of device drivers available for download on their Instrument Driver Network (IDNet). LabVIEW has built-in support for other National Instruments products, such as the CompactDAQ and CompactRIO hardware platforms and Measurement and Automation eXplorer (MAX) and Virtual Instrument Software Architecture (VISA) toolsets.
https://en.wikipedia.org/wiki/LabVIEW
passage: ## Definition Suppose that $$ X $$ is a set and $$ Y $$ is a topological space, such as the real or complex numbers or a metric space, for example. A sequence of functions $$ \left(f_n\right) $$ all having the same domain $$ X $$ and codomain $$ Y $$ is said to converge pointwise to a given function $$ f : X \to Y $$ often written as $$ \lim_{n\to\infty} f_n = f\ \mbox{pointwise} $$ if (and only if) the limit of the sequence $$ f_n(x) $$ evaluated at each point $$ x $$ in the domain of $$ f $$ is equal to $$ f(x) $$ , written as $$ \forall x \in X, \lim_{n\to\infty} f_n(x) = f(x). $$ The function $$ f $$ is said to be the pointwise limit function of the $$ \left(f_n\right). $$ The definition easily generalizes from sequences to nets $$ f_\bull = \left(f_a\right)_{a \in A} $$ . We say $$ f_\bull $$ converges pointwise to $$ f $$ , written as $$ \lim_{a\in A} f_a = f\ \mbox{pointwise} $$ if (and only if) $$ f(x) $$ is the unique accumulation point of the net $$ f_\bull(x) $$ evaluated at each point $$ x $$
https://en.wikipedia.org/wiki/Pointwise_convergence
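A standard concrete example (not taken from the excerpt above) of what the definition captures: on $$ [0,1] $$ take $$ f_n(x) = x^n $$. Then $$ \lim_{n\to\infty} f_n(x) = f(x) = \begin{cases} 0, & 0 \le x < 1,\\ 1, & x = 1, \end{cases} $$ so $$ \left(f_n\right) $$ converges pointwise to $$ f $$ even though every $$ f_n $$ is continuous and the limit is not; this is one reason pointwise convergence is weaker than uniform convergence.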
passage: ## Notable figures - Julia Tutelman Apter (deceased) – One of the first specialists in neurophysiological research and a founding member of the Biomedical Engineering Society - Earl Bakken (deceased) – Invented the first transistorised pacemaker, co-founder of Medtronic. - Forrest Bird (deceased) – aviator and pioneer in the invention of mechanical ventilators - Y.C. Fung (deceased) – professor emeritus at the University of California, San Diego, considered by many to be the founder of modern biomechanics - Leslie Geddes (deceased) – professor emeritus at Purdue University, electrical engineer, inventor, and educator of over 2000 biomedical engineers, received a National Medal of Technology in 2006 from President George Bush for his more than 50 years of contributions that have spawned innovations ranging from burn treatments to miniature defibrillators, ligament repair to tiny blood pressure monitors for premature infants, as well as a new method for performing cardiopulmonary resuscitation (CPR). - Willem Johan Kolff (deceased) – pioneer of hemodialysis as well as in the field of artificial organs - Robert Langer – Institute Professor at MIT, runs the largest BME laboratory in the world, pioneer in drug delivery and tissue engineering - John Macleod (deceased) – one of the co-discoverers of insulin at Case Western Reserve University. - Alfred E. Mann – Physicist, entrepreneur and philanthropist. A pioneer in the field of Biomedical Engineering. - J. Thomas Mortimer – Emeritus professor of biomedical engineering at Case Western Reserve University.
https://en.wikipedia.org/wiki/Biomedical_engineering
passage: Another closely related Lagrangian is found in Seiberg–Witten theory. ### Dirac Lagrangian The Lagrangian density for a Dirac field is: $$ \mathcal{L} = \bar \psi ( i \hbar c {\partial}\!\!\!/\ - mc^2) \psi $$ where $$ \psi $$ is a Dirac spinor, $$ \bar \psi = \psi^\dagger \gamma^0 $$ is its Dirac adjoint, and $$ {\partial}\!\!\!/ $$ is Feynman slash notation for $$ \gamma^\sigma \partial_\sigma $$ . There is no particular need to focus on Dirac spinors in the classical theory. The Weyl spinors provide a more general foundation; they can be constructed directly from the Clifford algebra of spacetime; the construction works in any number of dimensions, and the Dirac spinors appear as a special case. Weyl spinors have the additional advantage that they can be used in a vielbein for the metric on a Riemannian manifold; this enables the concept of a spin structure, which, roughly speaking, is a way of formulating spinors consistently in a curved spacetime. ### Quantum electrodynamic Lagrangian The Lagrangian density for QED combines the Lagrangian for the Dirac field together with the Lagrangian for electrodynamics in a gauge-invariant way.
https://en.wikipedia.org/wiki/Lagrangian_%28field_theory%29
passage: It is produced largely via the pentose phosphate pathway in the cytoplasm. The depletion of NADPH results in increased oxidative stress within the cell as it is a required cofactor in the production of GSH, and this oxidative stress can result in DNA damage. There are also changes on the genetic and epigenetic level through the function of histone lysine demethylases (KDMs) and ten-eleven translocation (TET) enzymes; ordinarily TETs hydroxylate 5-methylcytosines to prime them for demethylation. However, in the absence of alpha-ketoglutarate this cannot be done and there is hence hypermethylation of the cell's DNA, serving to promote epithelial-mesenchymal transition (EMT) and inhibit cellular differentiation. A similar phenomenon is observed for the Jumonji C family of KDMs which require a hydroxylation to perform demethylation at the epsilon-amino methyl group. Additionally, the inability of prolyl hydroxylases to catalyze reactions results in stabilization of hypoxia-inducible factor alpha, which is necessary to promote degradation of the latter (as under conditions of low oxygen there will not be adequate substrate for hydroxylation). This results in a pseudohypoxic phenotype in the cancer cell that promotes angiogenesis, metabolic reprogramming, cell growth, and migration. ## Regulation Allosteric regulation by metabolites. The regulation of the citric acid cycle is largely determined by product inhibition and substrate availability.
https://en.wikipedia.org/wiki/Citric_acid_cycle
passage: In mathematics, especially in the area of modern algebra known as combinatorial group theory, Nielsen transformations are certain automorphisms of a free group which are a non-commutative analogue of row reduction and one of the main tools used in studying free groups . Given a finite basis of a free group $$ F_n $$ , the corresponding set of elementary Nielsen transformations forms a finite generating set of $$ \mathrm{Aut}(F_n) $$ . This system of generators is analogous to elementary matrices for $$ GL_n(\Z) $$ and Dehn twists for mapping class groups of closed surfaces. Nielsen transformations were introduced in to prove that every subgroup of a free group is free (the Nielsen–Schreier theorem). They are now used in a variety of mathematics, including computational group theory, K-theory, and knot theory. ## Definitions ### Free groups Let $$ F_n $$ be a finitely generated free group of rank $$ n $$ . An elementary Nielsen transformation maps an ordered basis $$ [x_1,\ldots, x_n] $$ to a new basis $$ [y_1,\ldots,y_n] $$ by one of the following operations: 1. Permute the $$ x_i $$ s by some permutation $$ \sigma\in S_n $$ , i.e. $$ y_i = x_{\sigma(i)} $$ for all $$ i $$. 2. Invert some $$ x_i $$ , i.e. $$ y_i = x_i^{-1} $$ and $$ y_j = x_j $$ for $$ j \neq i $$.
https://en.wikipedia.org/wiki/Nielsen_transformation
passage: Refer to the timespace diagram to the right for a visualization of this problem. If the universe started with even slightly different temperatures in different places, the CMB should not be isotropic unless there is a mechanism that evens out the temperature by the time of decoupling. In reality, the CMB has very nearly the same temperature in the entire sky, about 2.7 K. ## Inflationary model The theory of cosmic inflation has attempted to address the problem by positing a very brief period of exponential expansion within the first second of the history of the universe due to a scalar field interaction. According to the inflationary model, the universe increased in size by an enormous factor, from a small and causally connected region in near equilibrium. Inflation then expanded the universe rapidly, isolating nearby regions of spacetime by growing them beyond the limits of causal contact, effectively "locking in" the uniformity at large distances. Essentially, the inflationary model suggests that the universe was entirely in causal contact in the very early universe. Inflation then expands this universe by approximately 60 e-foldings (the scale factor increases by factor $$ e^{60} $$ ). We observe the CMB after inflation has occurred at a very large scale. It maintained thermal equilibrium to this large size because of the rapid expansion from inflation. One consequence of cosmic inflation is that the anisotropies in the Big Bang due to quantum fluctuations are reduced but not eliminated. Differences in the temperature of the cosmic background are smoothed by cosmic inflation, but they still exist.
https://en.wikipedia.org/wiki/Horizon_problem
passage: These trimmed midranges are also of interest as descriptive statistics or as L-estimators of central location or skewness: differences of midsummaries, such as midhinge minus the median, give measures of skewness at different points in the tail. ## Efficiency Despite its drawbacks, in some cases it is useful: the midrange is a highly efficient estimator of μ, given a small sample of a sufficiently platykurtic distribution, but it is inefficient for mesokurtic distributions, such as the normal. For example, for a continuous uniform distribution with unknown maximum and minimum, the mid-range is the uniformly minimum-variance unbiased estimator (UMVU) estimator for the mean. The sample maximum and sample minimum, together with sample size, are a sufficient statistic for the population maximum and minimum – the distribution of other samples, conditional on a given maximum and minimum, is just the uniform distribution between the maximum and minimum and thus add no information. See German tank problem for further discussion. Thus the mid-range, which is an unbiased and sufficient estimator of the population mean, is in fact the UMVU: using the sample mean just adds noise based on the uninformative distribution of points within this range. Conversely, for the normal distribution, the sample mean is the UMVU estimator of the mean.
https://en.wikipedia.org/wiki/Mid-range
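A quick simulation of the uniform-distribution claim above: for samples from a continuous uniform distribution the mid-range has visibly smaller variance than the sample mean as an estimator of the centre. Sample size and repetition count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 20, 100_000
samples = rng.uniform(0.0, 1.0, size=(reps, n))    # true centre is 0.5

sample_means = samples.mean(axis=1)
mid_ranges = (samples.min(axis=1) + samples.max(axis=1)) / 2

print(np.var(sample_means))   # ~ 1/(12 n)            ~= 0.0042
print(np.var(mid_ranges))     # ~ 1/(2 (n+1)(n+2))    ~= 0.0011, noticeably smaller
```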
passage: The one-dimensional wave equation: $$ \frac{\partial^2 u}{\partial t^2} - c^2\frac{\partial^2 u}{\partial x^2} = 0 $$ is an example of a hyperbolic equation. The two-dimensional and three-dimensional wave equations also fall into the category of hyperbolic PDE. This type of second-order hyperbolic partial differential equation may be transformed to a hyperbolic system of first-order differential equations. ## Hyperbolic systems of first-order equations The following is a system of first-order partial differential equations for $$ s $$ unknown functions $$ \vec u = (u_1, \ldots, u_s) $$ with $$ \vec u = \vec u(\vec x, t) $$ and $$ \vec x \in \mathbb{R}^d $$: $$ \frac{\partial \vec u}{\partial t} + \sum_{j=1}^d \frac{\partial}{\partial x_j} \vec {f}^j (\vec u) = \vec 0, $$ where $$ \vec {f}^j \in C^1(\mathbb{R}^s, \mathbb{R}^s) $$ are once continuously differentiable functions, nonlinear in general.
https://en.wikipedia.org/wiki/Hyperbolic_partial_differential_equation
passage: Though the age of the knight was over, armour continued to be used in many capacities. Soldiers in the American Civil War bought iron and steel vests from peddlers (both sides had considered but rejected body armour for standard issue). The effectiveness of the vests varied widely: some successfully deflected bullets and saved lives, but others were poorly made and resulted in tragedy for the soldiers. In any case the vests were abandoned by many soldiers due to their increased weight on long marches, as well as the stigma of cowardice they acquired among their fellow troops. At the start of World War I, thousands of the French Cuirassiers rode out to engage the German Cavalry. By that period, the shiny metallic cuirass was covered in a dark paint and a canvas wrap covered their elaborate Napoleonic style helmets, to reduce the sunlight reflected off the surfaces, which would otherwise alert the enemy to their location. Their armour was only meant for protection against edged weapons such as bayonets, sabres, and lances. Cavalry had to be wary of repeating rifles, machine guns, and artillery, unlike the foot soldiers, who at least had a trench to give them some protection. ### Present Today, ballistic vests, also known as flak jackets, made of ballistic cloth (e.g. kevlar, dyneema, twaron, spectra etc.) and ceramic or metal plates are common among police officers, security guards, corrections officers and some branches of the military.
https://en.wikipedia.org/wiki/Armour
passage: Measure theory succeeded in extending the notion of volume to a vast class of sets, the so-called measurable sets. Indeed, non-measurable sets almost never occur in applications. Measurable sets, given in a measurable space by definition, lead to measurable functions and maps. In order to turn a topological space into a measurable space one endows it with a σ-algebra. The σ-algebra of Borel sets is the most popular, but not the only choice. (Baire sets, universally measurable sets, etc., are also used sometimes.) The topology is not uniquely determined by the Borel σ-algebra; for example, the norm topology and the weak topology on a separable Hilbert space lead to the same Borel σ-algebra. Not every σ-algebra is the Borel σ-algebra of some topology. Actually, a σ-algebra can be generated by a given collection of sets (or functions) irrespective of any topology. Every subset of a measurable space is itself a measurable space. Standard measurable spaces (also called standard Borel spaces) are especially useful due to some similarity to compact spaces (see EoM). Every bijective measurable mapping between standard measurable spaces is an isomorphism; that is, the inverse mapping is also measurable. And a mapping between such spaces is measurable if and only if its graph is measurable in the product space. Similarly, every bijective continuous mapping between compact metric spaces is a homeomorphism; that is, the inverse mapping is also continuous.
https://en.wikipedia.org/wiki/Space_%28mathematics%29
passage: Working in one spatial dimension for simplicity, we have: $$ (\Delta_{\psi(t)}X)^2 = \frac{t^2}{m^2}(\Delta_{\psi_0}P)^2+\frac{2t}{m}\left(\left\langle \tfrac{1}{2}({XP+PX})\right\rangle_{\psi_0} - \left\langle X\right\rangle_{\psi_0} \left\langle P\right\rangle_{\psi_0} \right)+(\Delta_{\psi_0}X)^2, $$ where $$ \psi_0 $$ is the time-zero wave function. The expression in parentheses in the second term on the right-hand side is the quantum covariance of $$ X $$ and $$ P $$ . Thus, for large positive times, the uncertainty in $$ X $$ grows linearly, with the coefficient of $$ t $$ equal to $$ (\Delta_{\psi_0}P)/m $$ . If the momentum of the initial wave function $$ \psi_0 $$ is highly localized, the wave packet will spread slowly and the group-velocity approximation will remain good for a long time.
https://en.wikipedia.org/wiki/Free_particle
passage: In the special case where the codomain is one-dimensional, so that $$ y $$ is a real-valued function, this formula simplifies even further: $$ \frac{\partial y}{\partial x_i} = \sum_{\ell = 1}^m \frac{\partial y}{\partial u_\ell} \frac{\partial u_\ell}{\partial x_i}. $$ This can be rewritten as a dot product. Recalling that $$ \mathbf{u} = (u_1, \ldots, u_m) $$, the partial derivative $$ \partial \mathbf{u} / \partial x_i $$ is also a vector, and the chain rule says that: $$ \frac{\partial y}{\partial x_i} = \nabla y \cdot \frac{\partial \mathbf{u}}{\partial x_i}. $$ Example Given $$ u(x, y) = x^2 + 2y $$ where $$ x = r\sin(t) $$ and $$ y = \sin^2(t) $$, determine the value of $$ \partial u / \partial r $$ and $$ \partial u / \partial t $$ using the chain rule. $$ \frac{\partial u}{\partial r}=\frac{\partial u}{\partial x} \frac{\partial x}{\partial r}+\frac{\partial u}{\partial y} \frac{\partial y}{\partial r} = (2x)(\sin(t)) + (2)(0) = 2r \sin^2(t), $$ and $$ \begin{align} \frac{\partial u}{\partial t} &= \frac{\partial u}{\partial x} \frac{\partial x}{\partial t}+\frac{\partial u}{\partial y} \frac{\partial y}{\partial t} \\ &= (2x)(r\cos(t)) + (2)(2\sin(t)\cos(t)) \\ &= (2r\sin(t))(r\cos(t)) + 4\sin(t)\cos(t) \\ &= 2(r^2 + 2) \sin(t)\cos(t) \\ &= (r^2 + 2) \sin(2t). \end{align} $$
https://en.wikipedia.org/wiki/Chain_rule
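A quick symbolic check of the worked example with SymPy, under the assumption (reconstructed from the displayed partial derivatives) that $$ u = x^2 + 2y $$, $$ x = r\sin(t) $$ and $$ y = \sin^2(t) $$.

```python
import sympy as sp

r, t = sp.symbols('r t')
x = r * sp.sin(t)
y = sp.sin(t) ** 2
u = x**2 + 2 * y                      # u(x, y) = x^2 + 2y with x and y substituted

print(sp.simplify(sp.diff(u, r)))     # equals 2*r*sin(t)**2
print(sp.simplify(sp.diff(u, t)))     # equals (r**2 + 2)*sin(2*t)
```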
passage: Because the equation is second order in the time derivative, one must specify initial values both of the wave function itself and of its first time-derivative in order to solve definite problems. Since both may be specified more or less arbitrarily, the wave function cannot maintain its former role of determining the probability density of finding the electron in a given state of motion. In the Schrödinger theory, the probability density is given by the positive definite expression $$ \rho = \phi^*\phi $$ and this density is convected according to the probability current vector $$ J = -\frac{i\hbar}{2m}(\phi^*\nabla\phi - \phi\nabla\phi^*) $$ with the conservation of probability current and density following from the continuity equation: $$ \nabla\cdot J + \frac{\partial\rho}{\partial t} = 0~. $$ The fact that the density is positive definite and convected according to this continuity equation implies that one may integrate the density over a certain domain and set the total to 1, and this condition will be maintained by the conservation law. A proper relativistic theory with a probability density current must also share this feature. To maintain the notion of a convected density, one must generalize the Schrödinger expression of the density and current so that space and time derivatives again enter symmetrically in relation to the scalar wave function.
https://en.wikipedia.org/wiki/Dirac_equation
passage: In mathematics and physics, a nonlinear partial differential equation is a partial differential equation with nonlinear terms. They describe many different physical systems, ranging from gravitation to fluid dynamics, and have been used in mathematics to solve problems such as the Poincaré conjecture and the Calabi conjecture. They are difficult to study: almost no general techniques exist that work for all such equations, and usually each individual equation has to be studied as a separate problem. The distinction between a linear and a nonlinear partial differential equation is usually made in terms of the properties of the operator that defines the PDE itself. ## Methods for studying nonlinear partial differential equations ### Existence and uniqueness of solutions A fundamental question for any PDE is the existence and uniqueness of a solution for given boundary conditions. For nonlinear equations these questions are in general very hard: for example, the hardest part of Yau's solution of the Calabi conjecture was the proof of existence for a Monge–Ampere equation. The open problem of existence (and smoothness) of solutions to the Navier–Stokes equations is one of the seven Millennium Prize problems in mathematics. ### Singularities The basic questions about singularities (their formation, propagation, and removal, and regularity of solutions) are the same as for linear PDE, but as usual much harder to study.
https://en.wikipedia.org/wiki/Nonlinear_partial_differential_equation
passage: ### Geometric conjectures The geometric Langlands program, suggested by Gérard Laumon following ideas of Vladimir Drinfeld, arises from a geometric reformulation of the usual Langlands program that attempts to relate more than just irreducible representations. In simple cases, it relates ℓ-adic representations of the étale fundamental group of an algebraic curve to objects of the derived category of ℓ-adic sheaves on the moduli stack of vector bundles over the curve. A 9-person collaborative project led by Dennis Gaitsgory announced a proof of the (categorical, unramified) geometric Langlands conjecture leveraging Hecke eigensheaves as part of the proof. ## Status The Langlands correspondence for GL(1, K) follows from (and is essentially equivalent to) class field theory. Langlands proved the Langlands conjectures for groups over the archimedean local fields $$ \mathbb{R} $$ (the real numbers) and $$ \mathbb{C} $$ (the complex numbers) by giving the Langlands classification of their irreducible representations. Lusztig's classification of the irreducible representations of groups of Lie type over finite fields can be considered an analogue of the Langlands conjectures for finite fields. Andrew Wiles' proof of modularity of semistable elliptic curves over rationals can be viewed as an instance of the Langlands reciprocity conjecture, since the main idea is to relate the Galois representations arising from elliptic curves to modular forms.
https://en.wikipedia.org/wiki/Langlands_program
passage: In mathematics, a closure operator on a set S is a function $$ \operatorname{cl}: \mathcal{P}(S)\rightarrow \mathcal{P}(S) $$ from the power set of S to itself that satisfies the following conditions for all sets $$ X,Y\subseteq S $$ :
- $$ X \subseteq \operatorname{cl}(X) $$ (cl is extensive),
- $$ X\subseteq Y \Rightarrow \operatorname{cl}(X) \subseteq \operatorname{cl}(Y) $$ (cl is increasing),
- $$ \operatorname{cl}(\operatorname{cl}(X))=\operatorname{cl}(X) $$ (cl is idempotent).

Closure operators are determined by their closed sets, i.e., by the sets of the form cl(X), since the closure cl(X) of a set X is the smallest closed set containing X. Such families of "closed sets" are sometimes called closure systems or "Moore families". A set together with a closure operator on it is sometimes called a closure space. Closure operators are also called "hull operators", which prevents confusion with the "closure operators" studied in topology.
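As an added illustration, a closure operator can be obtained from any closure system by mapping X to the intersection of the closed sets containing it; the sketch below checks the three axioms on a small hand-picked Moore family:

```python
from itertools import chain, combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

# A closure system (Moore family) on S = {1, 2, 3}: here the downward-closed
# subsets of the chain 1 < 2 < 3.  It contains S and is closed under intersection.
S = frozenset({1, 2, 3})
closed_sets = [frozenset(), frozenset({1}), frozenset({1, 2}), S]

def cl(X):
    """Smallest closed set containing X."""
    return frozenset.intersection(*[C for C in closed_sets if X <= C])

# Verify the three closure-operator axioms on every subset of S
for X in powerset(S):
    assert X <= cl(X)                    # extensive
    assert cl(cl(X)) == cl(X)            # idempotent
    for Y in powerset(S):
        if X <= Y:
            assert cl(X) <= cl(Y)        # increasing
print("all closure axioms hold")
```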
https://en.wikipedia.org/wiki/Closure_operator
passage: One difference between the first and second recursion theorems is that the fixed points obtained by the first recursion theorem are guaranteed to be least fixed points, while those obtained from the second recursion theorem may not be least fixed points. A second difference is that the first recursion theorem only applies to systems of equations that can be recast as recursive operators. This restriction is similar to the restriction to continuous operators in the Kleene fixed-point theorem of order theory. The second recursion theorem can be applied to any total recursive function. ## Generalized theorem In the context of his theory of numberings, Ershov showed that Kleene's recursion theorem holds for any precomplete numbering. A Gödel numbering is a precomplete numbering on the set of computable functions so the generalized theorem yields the Kleene recursion theorem as a special case. Given a precomplete numbering $$ \nu $$ , then for any partial computable function $$ f $$ with two parameters there exists a total computable function $$ t $$ with one parameter such that $$ \forall n \in \mathbb{N} : \nu \circ f(n,t(n)) = \nu \circ t(n). $$
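A loosely related, informal illustration of the self-reference that the recursion theorem makes precise is a quine, a program that outputs its own description:

```python
# A classic quine: the program prints its own source code, illustrating that a
# program can effectively be given access to its own description/index.
s = 's = %r\nprint(s %% s)'
print(s % s)
```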
https://en.wikipedia.org/wiki/Kleene%27s_recursion_theorem
passage: ## Representations of particular groups ### Symmetric groups Representation of the symmetric groups $$ S_n $$ have been intensely studied. Conjugacy classes in $$ S_n $$ (and therefore, by the above, irreducible representations) correspond to partitions of n. For example, $$ S_3 $$ has three irreducible representations, corresponding to the partitions 3; 2+1; 1+1+1 of 3. For such a partition, a Young tableau is a graphical device depicting a partition. The irreducible representation corresponding to such a partition (or Young tableau) is called a Specht module. Representations of different symmetric groups are related: any representation of $$ S_n \times S_m $$ yields a representation of $$ S_{n+m} $$ by induction, and vice versa by restriction. The direct sum of all these representation rings $$ \bigoplus_{n \ge 0} R(S_n) $$ inherits from these constructions the structure of a Hopf algebra which, it turns out, is closely related to symmetric functions. ### Finite groups of Lie type To a certain extent, the representations of the $$ GL_n(\mathbf F_q) $$ , as n varies, have a similar flavor as for the $$ S_n $$ ; the above-mentioned induction process gets replaced by so-called parabolic induction.
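Because the irreducible representations of S_n are indexed by partitions of n, listing them amounts to listing partitions (a small added illustration):

```python
def partitions(n, max_part=None):
    """Yield the integer partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

# Number of irreducible representations of S_n = number of partitions of n
for n in range(1, 7):
    print(n, list(partitions(n)))
# n = 3 gives [(3,), (2, 1), (1, 1, 1)] -- the three irreducibles of S_3
```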
https://en.wikipedia.org/wiki/Representation_theory_of_finite_groups
passage: Therefore, under any one of the three conditions in the theorem, the stopped process is dominated by an integrable random variable . Since the stopped process converges almost surely to , the dominated convergence theorem implies $$ \mathbb{E}[X_\tau]=\lim_{t\to\infty}\mathbb{E}[X_t^\tau]. $$ By the martingale property of the stopped process, $$ \mathbb{E}[X_t^\tau]=\mathbb{E}[X_0],\quad t\in{\mathbb N}_0, $$ hence $$ \mathbb{E}[X_\tau]=\mathbb{E}[X_0]. $$ Similarly, if is a submartingale or supermartingale, respectively, change the equality in the last two formulas to the appropriate inequality.
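As a quick numerical illustration (not part of the source), a symmetric random walk with a bounded stopping time satisfies the theorem's hypotheses, and simulation reproduces E[X_τ] = E[X_0] = 0:

```python
import random

def stopped_walk(max_steps=20):
    """Symmetric +/-1 random walk stopped when it hits +3 or -3,
    or after max_steps (so the stopping time is bounded)."""
    x, t = 0, 0
    while abs(x) < 3 and t < max_steps:
        x += random.choice((-1, 1))
        t += 1
    return x

n = 200_000
mean = sum(stopped_walk() for _ in range(n)) / n
print(mean)   # close to E[X_0] = 0, as the optional stopping theorem predicts
```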
https://en.wikipedia.org/wiki/Optional_stopping_theorem
passage: If $$ f $$ is a holomorphic function, then $$ \varphi(z) = \log \left| f(z) \right| $$ is a subharmonic function if we define the value of $$ \varphi(z) $$ at the zeros of $$ f $$ to be $$ -\infty $$ . It follows that $$ \psi_\alpha(z) = \left| f(z) \right|^\alpha $$ is subharmonic for every α > 0. This observation plays a role in the theory of Hardy spaces, especially for the study of H when 0 < p < 1. In the context of the complex plane, the connection to the convex functions can be realized as well by the fact that a subharmonic function $$ f $$ on a domain $$ G \subset \Complex $$ that is constant in the imaginary direction is convex in the real direction and vice versa. ### Harmonic majorants of subharmonic functions If $$ u $$ is subharmonic in a region $$ \Omega $$ of the complex plane, and $$ h $$ is harmonic on $$ \Omega $$ , then $$ h $$ is a harmonic majorant of $$ u $$ in $$ \Omega $$ if $$ u \leq h $$ in $$ \Omega $$ . Such an inequality can be viewed as a growth condition on $$ u $$ . ### Subharmonic functions in the unit disc.
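As a numerical illustration (the function, centre and radius are chosen arbitrarily), the sub-mean value inequality satisfied by the subharmonic function log|f| can be checked on a circle that encloses a zero of f:

```python
import numpy as np

# u = log|f| for the holomorphic f(z) = z**2 - 1; the circle below encloses
# the zero at z = 1, so the sub-mean value inequality is strict.
f = lambda z: z**2 - 1
z0, r = 0.8 + 0.0j, 0.5
theta = np.linspace(0.0, 2.0 * np.pi, 20001)
circle_avg = np.mean(np.log(np.abs(f(z0 + r * np.exp(1j * theta)))))
print(np.log(abs(f(z0))), "<=", circle_avg)   # roughly -1.02 <= -0.11
```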
https://en.wikipedia.org/wiki/Subharmonic_function
passage: Intermediate nodes are typically network hardware devices such as routers, bridges, gateways, firewalls, or switches. General-purpose computers can also forward packets and perform routing, though because they lack specialized hardware, may offer limited performance. The routing process directs forwarding on the basis of routing tables, which maintain a record of the routes to various network destinations. Most routing algorithms use only one network path at a time. Multipath routing techniques enable the use of multiple alternative paths. Routing can be contrasted with bridging in its assumption that network addresses are structured and that similar addresses imply proximity within the network. Structured addresses allow a single routing table entry to represent the route to a group of devices. In large networks, the structured addressing used by routers outperforms unstructured addressing used by bridging. Structured IP addresses are used on the Internet. Unstructured MAC addresses are used for bridging on Ethernet and similar local area networks. ## Geographic scale Networks may be characterized by many properties or features, such as physical capacity, organizational purpose, user authorization, access rights, and others. Another distinct classification method is that of the physical extent or geographic scale. ### Nanoscale network A nanoscale network has key components implemented at the nanoscale, including message carriers, and leverages physical principles that differ from macroscale communication mechanisms.
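As a toy illustration of how a single structured (CIDR) entry covers a whole group of destinations, here is a longest-prefix-match lookup; the prefixes and interface names are invented for the example:

```python
import ipaddress

# Toy routing table: each CIDR prefix summarises a group of hosts.
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"):  "eth0",
    ipaddress.ip_network("10.1.0.0/16"): "eth1",
    ipaddress.ip_network("0.0.0.0/0"):   "gateway",   # default route
}

def next_hop(addr):
    """Longest-prefix match, as an IP router would do."""
    dest = ipaddress.ip_address(addr)
    matches = [net for net in routing_table if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(next_hop("10.1.2.3"))    # eth1  (the more specific /16 wins over the /8)
print(next_hop("10.9.9.9"))    # eth0
print(next_hop("8.8.8.8"))     # gateway
```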
https://en.wikipedia.org/wiki/Computer_network
passage: In practice, `MakeSet` must be preceded by an operation that allocates memory to hold . As long as memory allocation is an amortized constant-time operation, as it is for a good dynamic array implementation, it does not change the asymptotic performance of the random-set forest. ### Finding set representatives The `Find` operation follows the chain of parent pointers from a specified query node until it reaches a root element. This root element represents the set to which belongs and may be itself. `Find` returns the root element it reaches. Performing a `Find` operation presents an important opportunity for improving the forest. The time in a `Find` operation is spent chasing parent pointers, so a flatter tree leads to faster `Find` operations. When a `Find` is executed, there is no faster way to reach the root than by following each parent pointer in succession. However, the parent pointers visited during this search can be updated to point closer to the root. Because every element visited on the way to a root is part of the same set, this does not change the sets stored in the forest. But it makes future `Find` operations faster, not only for the nodes between the query node and the root, but also for their descendants. This updating is an important part of the disjoint-set forest's amortized performance guarantee. There are several algorithms for `Find` that achieve the asymptotically optimal time complexity.
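A minimal sketch of `MakeSet` and the two-pass `Find` with path compression described here (one of several compaction variants):

```python
parent = {}

def make_set(x):
    parent[x] = x                 # new singleton set: x is its own root

def find(x):
    # First pass: locate the root by chasing parent pointers.
    root = x
    while parent[root] != root:
        root = parent[root]
    # Second pass (path compression): point every visited node directly at the root.
    while parent[x] != root:
        parent[x], x = root, parent[x]
    return root
```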
https://en.wikipedia.org/wiki/Disjoint-set_data_structure
passage: Now, in a weakened weak (W2) formulation, we further reduce the requirement. We form a bilinear form using only the assumed function (not even the gradient). This is done by using the so-called generalized gradient smoothing technique, with which one can approximate the gradient of displacement functions for a certain class of discontinuous functions, as long as they are in a proper G space. Since we do not have to perform even the first differentiation of the assumed displacement functions, the requirement on the consistency of the functions is further reduced, and hence the name weakened weak or W2 formulation. ## History The development of a systematic theory of the weakened weak form started from work on meshfree methods. It is relatively new, but has seen very rapid development in the past few years. ## Features of W2 formulations 1. The W2 formulation offers possibilities for formulating various (uniformly) "soft" models that work well with triangular meshes. Because triangular meshes can be generated automatically, re-meshing, and hence automation in modeling and simulation, becomes much easier. This is very important for the long-term goal of developing fully automated computational methods. 2. In addition, W2 models can be made soft enough (in a uniform fashion) to produce upper-bound solutions (for force-driven problems). Together with stiff models (such as fully compatible FEM models), one can conveniently bound the solution from both sides.
https://en.wikipedia.org/wiki/Weakened_weak_form
passage: Types of tendinopathy include: - Tendinosis: non-inflammatory injury to the tendon at the cellular level. The degradation is caused by damage to collagen, cells, and the vascular components of the tendon, and is known to lead to rupture. Observations of tendons that have undergone spontaneous rupture have shown the presence of collagen fibrils that are not in the correct parallel orientation or are not uniform in length or diameter, along with rounded tenocytes, other cell abnormalities, and the ingrowth of blood vessels. Other forms of tendinosis that have not led to rupture have also shown the degeneration, disorientation, and thinning of the collagen fibrils, along with an increase in the amount of glycosaminoglycans between the fibrils. - Tendinitis: degeneration with inflammation of the tendon as well as vascular disruption. - Paratenonitis: inflammation of the paratenon, or paratendinous sheet located between the tendon and its sheath. Tendinopathies may be caused by several intrinsic factors including age, body weight, and nutrition. The extrinsic factors are often related to sports and include excessive forces or loading, poor training techniques, and environmental conditions. ### Healing It was believed that tendons could not undergo matrix turnover and that tenocytes were not capable of repair. However, it has since been shown that, throughout the lifetime of a person, tenocytes in the tendon actively synthesize matrix components as well as enzymes such as matrix metalloproteinases (MMPs) can degrade the matrix.
https://en.wikipedia.org/wiki/Tendon
passage: Adler-32 is a checksum algorithm written by Mark Adler in 1995, modifying Fletcher's checksum. Compared to a cyclic redundancy check of the same length, it trades reliability for speed. Adler-32 is more reliable than Fletcher-16, and slightly less reliable than Fletcher-32. ## History The Adler-32 checksum is part of the widely used zlib compression library, as both were developed by Mark Adler. A "rolling checksum" version of Adler-32 is used in the rsync utility. ## Calculation An Adler-32 checksum is obtained by calculating two 16-bit checksums A and B and concatenating their bits into a 32-bit integer. A is the sum of all bytes in the stream plus one, and B is the sum of the individual values of A from each step. At the beginning of an Adler-32 run, A is initialized to 1, B to 0. The sums are done modulo 65521 (the largest prime number smaller than 2^16). The bytes are stored in network order (big endian), B occupying the two most significant bytes. The function may be expressed as

A = 1 + D1 + D2 + ... + Dn (mod 65521)
B = (1 + D1) + (1 + D1 + D2) + ... + (1 + D1 + D2 + ... + Dn) (mod 65521)
  = n×D1 + (n−1)×D2 + (n−2)×D3 + ... + Dn + n (mod 65521)
Adler-32(D) = B × 65536 + A
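The running sums translate directly into code; the sketch below cross-checks itself against the zlib implementation mentioned above:

```python
import zlib

MOD = 65521  # largest prime below 2**16

def adler32(data: bytes) -> int:
    a, b = 1, 0
    for byte in data:
        a = (a + byte) % MOD      # running sum of bytes, plus the initial 1
        b = (b + a) % MOD         # running sum of the A values
    return (b << 16) | a          # B in the two most significant bytes

msg = b"Wikipedia"
assert adler32(msg) == zlib.adler32(msg)
print(hex(adler32(msg)))
```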
https://en.wikipedia.org/wiki/Adler-32
passage: At most two teams can avoid breaks entirely, alternating between home and away games; no other team can have the same home-away schedule as these two, because then it would be unable to play the team with which it had the same schedule. Therefore, an optimal schedule has two breakless teams and a single break for every other team. Once one of the breakless teams is chosen, one can set up a 2-satisfiability problem in which each variable represents the home-away assignment for a single team in a single game, and the constraints enforce the properties that any two teams have a consistent assignment for their games, that each team have at most one break before and at most one break after the game with the breakless team, and that no team has two breaks. Therefore, testing whether a schedule admits a solution with the optimal number of breaks can be done by solving a linear number of 2-satisfiability problems, one for each choice of the breakless team. A similar technique also allows finding schedules in which every team has a single break, and maximizing rather than minimizing the number of breaks (to reduce the total mileage traveled by the teams). ### Discrete tomography Tomography is the process of recovering shapes from their cross-sections. In discrete tomography, a simplified version of the problem that has been frequently studied, the shape to be recovered is a polyomino (a subset of the squares in the two-dimensional square lattice), and the cross-sections provide aggregate information about the sets of squares in individual rows and columns of the lattice.
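The full scheduling encoding is too long for a short example, but a 2-satisfiability instance itself is easy to state and check in code; the brute-force checker below is purely illustrative (practical linear-time solvers use the implication graph instead):

```python
from itertools import product

def satisfiable_2sat(n_vars, clauses):
    """Brute-force check of a 2-CNF formula (fine for tiny instances only)."""
    for assignment in product([False, True], repeat=n_vars):
        def val(lit):                          # literal k>0 means x_k, k<0 means not x_k
            return assignment[abs(lit) - 1] ^ (lit < 0)
        if all(val(a) or val(b) for a, b in clauses):
            return assignment
    return None

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(satisfiable_2sat(3, [(1, 2), (-1, 2), (-2, 3)]))
```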
https://en.wikipedia.org/wiki/2-satisfiability
passage: The extant platypus genus Ornithorhynchus is also known from Pliocene deposits, and the oldest fossil tachyglossids are Pleistocene (1.7 Ma) in age. ### Fossil species Excepting Ornithorhynchus anatinus, all the animals listed in this section are known only from fossils. Some family designations are hesitant, given the fragmentary nature of the specimens.
- Family Kollikodontidae
- Genus Kollikodon
- Species Kollikodon ritchiei
- Genus Kryoryctes
- Species Kryoryctes cadburyi
- Genus Sundrius
- Species Sundrius ziegleri
- Family Steropodontidae
- Genus Parvopalus
- Species Parvopalus clytiei
- Genus Steropodon
- Species Steropodon galmani
- Family Teinolophidae
- Genus Stirtodon
- Species Stirtodon elizabethae
- Genus Teinolophos
- Species Teinolophos trusleri – 123 Ma, oldest monotreme specimen
- Superfamily Ornithorhynchoidea
- Family Opalionidae
- Genus Opalios
- Species Opalios splendens
- Family Ornithorhynchidae
- Genus Dharragarra
- Species Dharragarra aurora
- Genus Monotrematum
- Species Monotrematum sudamericanum – 61 Ma, southern South America
- Genus Ornithorhynchus – oldest Ornithorhynchus specimen 9 Ma
- Species Ornithorhynchus anatinus (platypus) – oldest specimen 10,000 years old
- Genus Obdurodon – includes a number of Miocene (24–5 Ma) Riversleigh platypuses
- Species Obdurodon dicksoni
https://en.wikipedia.org/wiki/Monotreme
passage: such that $$ i_n \geq i, $$ and so $$ x_{\geq i} \supseteq x_{i_{\geq n}} $$ holds, as desired. Consequently, $$ \operatorname{TailsFilter}\left(x_{\bull}\right) \subseteq \operatorname{TailsFilter}\left(x_{i_{\bull}}\right). $$ The left hand side will be a subset of the right hand side if (for instance) every point of $$ x_{\bull} $$ is unique (that is, when $$ x_{\bull} : \N \to X $$ is injective) and $$ x_{i_{\bull}} $$ is the even-indexed subsequence $$ \left(x_2, x_4, x_6, \ldots\right) $$ because under these conditions, every tail $$ x_{i_{\geq n}} = \left\{x_{2n}, x_{2n + 2}, x_{2n + 4}, \ldots\right\} $$ (for every $$ n \in \N $$ ) of the subsequence will belong to the right hand side filter but not to the left hand side filter. For another example, if $$ \mathcal{B} $$ is any family then $$ \varnothing \leq \mathcal{B} \leq \mathcal{B} \leq \{\varnothing\} $$
https://en.wikipedia.org/wiki/Filter_%28set_theory%29
passage: For example, given a stretch of known plaintext and corresponding ciphertext, an attacker can intercept and recover a stretch of LFSR output stream used in the system described, and from that stretch of the output stream can construct an LFSR of minimal size that simulates the intended receiver by using the Berlekamp-Massey algorithm. This LFSR can then be fed the intercepted stretch of output stream to recover the remaining plaintext. Three general methods are employed to reduce this problem in LFSR-based stream ciphers: - Non-linear combination of several bits from the LFSR state; - Non-linear combination of the output bits of two or more LFSRs (see also: shrinking generator); or using an evolutionary algorithm to introduce non-linearity. - Irregular clocking of the LFSR, as in the alternating step generator. Important LFSR-based stream ciphers include A5/1 and A5/2, used in GSM cell phones, E0, used in Bluetooth, and the shrinking generator. The A5/2 cipher has been broken and both A5/1 and E0 have serious weaknesses. The linear feedback shift register has a strong relationship to linear congruential generators. ### Uses in circuit testing LFSRs are used in circuit testing for test-pattern generation (for exhaustive testing, pseudo-random testing or pseudo-exhaustive testing) and for signature analysis. #### Test-pattern generation Complete LFSRs are commonly used as pattern generators for exhaustive testing, since they cover all possible inputs for an n-input circuit.
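A brief sketch of a Fibonacci-style LFSR (written for clarity rather than speed); with a primitive feedback polynomial the register visits every nonzero state before repeating, as the 4-bit example below confirms empirically:

```python
def lfsr_period(taps, seed):
    """Count how many shifts a Fibonacci LFSR needs to return to its seed.
    'taps' are 0-based positions of the state tuple XORed into the feedback bit."""
    state, steps = seed, 0
    while True:
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = (fb,) + state[:-1]    # shift, inserting the feedback bit
        steps += 1
        if state == seed:
            return steps

# This particular 4-bit tap choice gives the maximal period 2**4 - 1 = 15,
# i.e. the register cycles through all nonzero states:
print(lfsr_period(taps=(0, 3), seed=(1, 0, 0, 0)))   # 15
```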
https://en.wikipedia.org/wiki/Linear-feedback_shift_register
passage: In theoretical computer science, a transition system is a concept used in the study of computation. It is used to describe the potential behavior of discrete systems. It consists of states and transitions between states, which may be labeled with labels chosen from a set; the same label may appear on more than one transition. If the label set is a singleton, the system is essentially unlabeled, and a simpler definition that omits the labels is possible. Transition systems coincide mathematically with abstract rewriting systems (as explained further in this article) and directed graphs. They differ from finite-state automata in several ways: - The set of states is not necessarily finite, or even countable. - The set of transitions is not necessarily finite, or even countable. - No "start" state or "final" states are given. Transition systems can be represented as directed graphs. ## Formal definition Formally, a transition system is a pair $$ (S, T) $$ where $$ S $$ is a set of states and $$ T $$ , the transition relation, is a subset of $$ S \times S $$ . We say that there is a transition from state $$ p $$ to state $$ q $$ if $$ (p, q) \in T $$ , and denote it $$ p \rightarrow q $$ .
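For illustration, a small unlabeled transition system can be represented directly as a pair of a state set and a relation:

```python
# A small unlabeled transition system (S, T), following the definition above.
S = {"p", "q", "r"}
T = {("p", "q"), ("q", "q"), ("q", "r")}

def successors(state):
    return {b for (a, b) in T if a == state}

def has_transition(p, q):
    return (p, q) in T

print(successors("q"))            # {'q', 'r'}
print(has_transition("p", "r"))   # False: no direct transition p -> r
```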
https://en.wikipedia.org/wiki/Transition_system
passage: This method is the most computationally intensive, but is particularly useful if the normal equations matrix, XTX, is very ill-conditioned (i.e. if its condition number multiplied by the machine's relative round-off error is appreciably large). In that case, including the smallest singular values in the inversion merely adds numerical noise to the solution. This can be cured with the truncated SVD approach, giving a more stable and exact answer, by explicitly setting to zero all singular values below a certain threshold and so ignoring them, a process closely related to factor analysis. ## Discussion The numerical methods for linear least squares are important because linear regression models are among the most important types of model, both as formal statistical models and for exploration of data-sets. The majority of statistical computer packages contain facilities for regression analysis that make use of linear least squares computations. Hence it is appropriate that considerable effort has been devoted to the task of ensuring that these computations are undertaken efficiently and with due regard to round-off error. Individual statistical analyses are seldom undertaken in isolation, but rather are part of a sequence of investigatory steps. Some of the topics involved in considering numerical methods for linear least squares relate to this point. Thus important topics can be - Computations where a number of similar, and often nested, models are considered for the same data-set. That is, where models with the same dependent variable but different sets of independent variables are to be considered, for essentially the same set of data-points.
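A minimal NumPy sketch of the truncated-SVD solution (the threshold and test matrix below are illustrative choices):

```python
import numpy as np

def truncated_svd_solve(X, y, rtol=1e-10):
    """Least-squares solution that discards singular values below
    rtol times the largest singular value (truncated SVD)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    keep = s > rtol * s[0]                     # s is returned in descending order
    return Vt[keep].T @ ((U[:, keep].T @ y) / s[keep])

# Nearly collinear columns make X^T X very ill-conditioned:
X = np.array([[1.0, 1.0], [1.0, 1.0000001], [1.0, 0.9999999]])
y = np.array([2.0, 2.0, 2.0])
print(truncated_svd_solve(X, y, rtol=1e-4))    # stable solution with the tiny singular value dropped
```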
https://en.wikipedia.org/wiki/Numerical_methods_for_linear_least_squares
passage: This is fastest possible deterministic protocol for proportional division, and the fastest possible protocol for proportional division which can guarantee that the pieces are connected. - Edmonds–Pruhs protocol is a randomized protocol that requires only O(n) actions, but guarantees only a partially proportional division (each partner receives at least 1/an, where a is some constant), and it might give each partner a collection of "crumbs" instead of a single connected piece. - Beck land division protocol can produce a proportional division of a disputed territory among several neighbouring countries, such that each country receives a share that is both connected and adjacent to its currently held territory. - Woodall's super-proportional division protocol produces a division which gives each partner strictly more than 1/n, given that at least two partners have different opinions about the value of at least a single piece. See proportional cake-cutting for more details and complete references. The proportionality criterion can be generalized to situations in which the rights of the people are not equal. For example, in proportional cake-cutting with different entitlements, the cake belongs to shareholders such that one of them holds 20% and the other holds 80% of the cake. This leads to the criterion of weighted proportionality (WPR): $$ \forall i: V_i(X_i)\geq w_i $$ Where the wi are weights that sum up to 1. ### Envy-freeness Another common criterion is envy-freeness (EF). In an envy-free cake-cutting, each person receives a piece that he values at least as much as every other piece.
https://en.wikipedia.org/wiki/Fair_cake-cutting
passage: Then there is an $$ x $$ between $$ \alpha $$ and $$ \beta $$ such that $$ f(x) = \varphi(x) $$ . The equivalence between this formulation and the modern one can be shown by setting $$ \varphi $$ to the appropriate constant function. Augustin-Louis Cauchy provided the modern formulation and a proof in 1821. Both were inspired by the goal of formalizing the analysis of functions and the work of Joseph-Louis Lagrange. The idea that continuous functions possess the intermediate value property has an earlier origin. Simon Stevin proved the intermediate value theorem for polynomials (using a cubic as an example) by providing an algorithm for constructing the decimal expansion of the solution. The algorithm iteratively subdivides the interval into 10 parts, producing an additional decimal digit at each step of the iteration. Before the formal definition of continuity was given, the intermediate value property was given as part of the definition of a continuous function. Proponents include Louis Arbogast, who assumed the functions to have no jumps, satisfy the intermediate value property and have increments whose sizes corresponded to the sizes of the increments of the variable. Earlier authors held the result to be intuitively obvious and requiring no proof. The insight of Bolzano and Cauchy was to define a general notion of continuity (in terms of infinitesimals in Cauchy's case and using real inequalities in Bolzano's case), and to provide a proof based on such definitions.
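A sketch in the spirit of the digit-by-digit procedure attributed to Stevin (the polynomial and starting interval are just an example):

```python
def decimal_root(f, a, digits=6):
    """Find a root of f in [a, a+1] by repeatedly testing the ten decimal
    subdivisions of the current interval and keeping the one where f changes
    sign, producing one more decimal digit per step."""
    step, x = 1.0, a
    for _ in range(digits):
        step /= 10
        for k in range(10):
            if f(x + (k + 1) * step) * f(x) > 0:
                continue                 # no sign change yet; move right
            x = x + k * step             # sign change in this subinterval
            break
    return x

# Root of x**3 - 2x - 5 = 0 between 2 and 3
print(decimal_root(lambda x: x**3 - 2*x - 5, 2.0))   # approximately 2.094551
```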
https://en.wikipedia.org/wiki/Intermediate_value_theorem
passage: Melting is the process by which the interactions between the strands of the double helix are broken, separating the two nucleic acid strands. These bonds are weak, easily separated by gentle heating, enzymes, or mechanical force. Melting occurs preferentially at certain points in the nucleic acid. T and A rich regions are more easily melted than C and G rich regions. Some base steps (pairs) are also susceptible to DNA melting, such as T A and T G. These mechanical features are reflected by the use of sequences such as TATA at the start of many genes to assist RNA polymerase in melting the DNA for transcription. Strand separation by gentle heating, as used in polymerase chain reaction (PCR), is simple, providing the molecules have fewer than about 10,000 base pairs (10 kilobase pairs, or 10 kbp). The intertwining of the DNA strands makes long segments difficult to separate. The cell avoids this problem by allowing its DNA-melting enzymes (helicases) to work concurrently with topoisomerases, which can chemically cleave the phosphate backbone of one of the strands so that it can swivel around the other. Helicases unwind the strands to facilitate the advance of sequence-reading enzymes such as DNA polymerase. ## Base pair geometry The geometry of a base, or base pair step can be characterized by 6 coordinates: shift, slide, rise, tilt, roll, and twist.
https://en.wikipedia.org/wiki/Nucleic_acid_double_helix
passage: The equation requires the use of fractional derivatives. For jump lengths which have a symmetric probability distribution, the equation takes a simple form in terms of the Riesz fractional derivative. In one dimension, the equation reads as $$ \frac{\partial \varphi(x,t)}{\partial t}=-\frac{\partial}{\partial x} f(x,t)\varphi(x,t) + \gamma \frac{\partial^\alpha \varphi(x,t)}{\partial |x|^\alpha} $$ where γ is a constant akin to the diffusion constant, α is the stability parameter and f(x,t) is the potential. The Riesz derivative can be understood in terms of its Fourier Transform. $$ F_k\left[\frac{\partial^\alpha \varphi(x,t)}{\partial |x|^\alpha}\right] = -|k|^\alpha F_k[\varphi(x,t)] $$ This can be easily extended to multiple dimensions. Another important property of the Lévy flight is that of diverging variances in all cases except that of α = 2, i.e. Brownian motion. In general, the θ fractional moment of the distribution diverges if α ≤ θ.
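The divergence of the variance for α < 2 is easy to see in simulation; the sketch below uses Pareto-type step lengths as one convenient heavy-tailed example (an illustration, not the exact stable law):

```python
import numpy as np

rng = np.random.default_rng(0)

alpha = 1.5                                    # tail index < 2: infinite variance
steps = rng.pareto(alpha, size=1_000_000) + 1.0
for m in (10**3, 10**4, 10**5, 10**6):
    print(m, steps[:m].var())                  # sample variance does not settle to a finite limit
```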
https://en.wikipedia.org/wiki/L%C3%A9vy_flight
passage: Penetrance in genetics is the proportion of individuals carrying a particular variant (or allele) of a gene (genotype) that also expresses an associated trait (phenotype). In medical genetics, the penetrance of a disease-causing mutation is the proportion of individuals with the mutation that exhibit clinical symptoms among all individuals with such mutation.  For example: If a mutation in the gene responsible for a particular autosomal dominant disorder has 95% penetrance, then 95% of those with the mutation will go on to develop the disease, showing its phenotype, whereas 5% will not.   Penetrance only refers to whether an individual with a specific genotype exhibits any phenotypic signs or symptoms, and is not to be confused with variable expressivity which is to what extent or degree the symptoms for said disease are shown (the expression of the phenotypic trait). Meaning that, even if the same disease-causing mutation affects separate individuals, the expressivity will vary. ## Degrees of penetrance ### Complete penetrance If 100% of individuals carrying a particular genotype express the associated trait, the genotype is said to show complete penetrance. Neurofibromatosis type 1 (NF1), is an autosomal dominant condition which shows complete penetrance, consequently everyone who inherits the disease-causing variant of this gene will develop some degree of symptoms for NF1. ### Reduced penetrance The penetrance is said to be reduced if less than 100% of individuals carrying a particular genotype express associated traits, and is likely to be caused by a combination of genetic, environmental and lifestyle factors.  BRCA1 is an example of a genotype with reduced penetrance.
https://en.wikipedia.org/wiki/Penetrance
passage: Second, when there is overall convection or flow, there is an associated flux called advective flux: $$ \mathbf{j}_\text{adv} = \mathbf{v} c $$ The total flux (in a stationary coordinate system) is given by the sum of these two: $$ \mathbf{j} = \mathbf{j}_\text{diff} + \mathbf{j}_\text{adv} = -D \nabla c + \mathbf{v} c. $$ Plugging into the continuity equation: $$ \frac{\partial c}{\partial t} + \nabla\cdot \left(-D \nabla c + \mathbf{v} c \right) = R. $$ ### Common simplifications In a common situation, the diffusion coefficient is constant, there are no sources or sinks, and the velocity field describes an incompressible flow (i.e., it has zero divergence). Then the formula simplifies to: $$ \frac{\partial c}{\partial t} = D \nabla^2 c - \mathbf{v} \cdot \nabla c. $$ In this case the equation can be put in the simple diffusion form: $$ \frac{d c}{d t} = D \nabla^2 c, $$ where the derivative of the left hand side is the material derivative of the variable c.
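A minimal explicit finite-difference sketch of this simplified form on a periodic 1-D grid (all parameters below are illustrative choices, with the time step picked for stability):

```python
import numpy as np

# dc/dt = D d2c/dx2 - v dc/dx  on a periodic domain of length L
L, N = 1.0, 200
dx = L / N
D, v = 1e-3, 0.5
dt = 0.2 * min(dx**2 / (2 * D), dx / abs(v))     # conservative, stability-motivated step
x = np.linspace(0.0, L, N, endpoint=False)
c = np.exp(-((x - 0.3) / 0.05) ** 2)             # initial Gaussian blob

for _ in range(500):
    lap = (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2      # diffusion term
    grad = (np.roll(c, -1) - np.roll(c, 1)) / (2 * dx)          # advection term
    c = c + dt * (D * lap - v * grad)

print(float(c.max()), float(x[c.argmax()]))      # the blob has spread and drifted downstream
```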
https://en.wikipedia.org/wiki/Convection%E2%80%93diffusion_equation
passage: Then define the word metric with respect to S to be the word metric with respect to the symmetrization of S. ## Example in a free group Suppose that F is the free group on the two element set $$ \{a,b\} $$ . A word w in the symmetric generating set $$ \{a,b,a^{-1},b^{-1}\} $$ is said to be reduced if the letters $$ a,a^{-1} $$ do not occur next to each other in w, nor do the letters $$ b,b^{-1} $$ . Every element $$ g \in F $$ is represented by a unique reduced word, and this reduced word is the shortest word representing g. For example, since the word $$ w = b^{-1} a $$ is reduced and has length 2, the word norm of $$ w $$ equals 2, so the distance in the word norm between $$ b $$ and $$ a $$ equals 2. This can be visualized in terms of the Cayley graph, where the shortest path between b and a has length 2. ## Theorems ### Isometry of the left action The group G acts on itself by left multiplication: the action of each $$ k \in G $$ takes each $$ g \in G $$ to $$ kg $$ . This action is an isometry of the word metric.
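The word norm in a free group can be computed by free reduction; in the sketch below lowercase letters stand for generators and uppercase letters for their inverses (a convention chosen only for this illustration):

```python
def free_reduce(word):
    """Freely reduce a word over {a, b, A, B}, cancelling adjacent inverse pairs."""
    out = []
    for ch in word:
        if out and out[-1] == ch.swapcase():
            out.pop()                     # cancel x followed by x^{-1}
        else:
            out.append(ch)
    return "".join(out)

def word_norm(word):
    return len(free_reduce(word))

# distance between b and a in the word metric is |b^{-1} a| = 2
print(word_norm("B" + "a"))       # 2
print(free_reduce("abBA"))        # ''  (a b b^{-1} a^{-1} reduces to the identity)
```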
https://en.wikipedia.org/wiki/Word_metric
passage: J. Aitchison. "The Statistical Analysis of Compositional Data." Monographs on Statistics and Applied Probability, Chapman and Hall, 1986. The probability density function is: $$ f_X( \mathbf{x}; \boldsymbol{\mu} , \boldsymbol{\Sigma} ) = \frac{1}{ (2 \pi)^{D-1} |\boldsymbol{\Sigma} |^\frac{1}{2} } \, \frac{1}{ \prod\limits_{i=1}^D x_i } \, e^{- \frac{1}{2} \left\{ \log \left( \frac{ \mathbf{x}_{-D} }{ x_D } \right) - \boldsymbol{\mu} \right\}^\top \boldsymbol{\Sigma}^{-1} \left\{ \log \left( \frac{ \mathbf{x}_{-D} }{ x_D } \right) - \boldsymbol{\mu} \right\} } \quad , \quad \mathbf{x} \in \mathcal{S}^D \;\; , $$ where $$ \mathbf{x}_{-D} $$ denotes a vector of the first (D-1) components of $$ \mathbf{x} $$ and $$ \mathcal{S}^D $$ denotes the simplex of D-dimensional probability vectors.
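One concrete way to work with this distribution (an illustration with arbitrary parameters) is to draw multivariate normal vectors and push them through the additive logistic transform, which places the samples on the simplex:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_logistic_normal(mu, Sigma, size):
    """Draw from the logistic-normal by mapping normals through the
    additive logistic transform (last component used as reference)."""
    y = rng.multivariate_normal(mu, Sigma, size)            # shape (size, D-1)
    expy = np.exp(np.hstack([y, np.zeros((size, 1))]))      # append the reference component
    return expy / expy.sum(axis=1, keepdims=True)

x = sample_logistic_normal(mu=[0.0, 0.5], Sigma=np.eye(2) * 0.3, size=5)
print(x)                  # rows are probability vectors on the 3-dimensional simplex
print(x.sum(axis=1))      # each row sums to 1
```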
https://en.wikipedia.org/wiki/Logit-normal_distribution
passage: The Lie algebroid of a tangent groupoid $$ TG \rightrightarrows TM $$ is the tangent algebroid $$ TA \to TM $$ , for - The Lie algebroid of a jet groupoid $$ J^k G \rightrightarrows M $$ is the jet algebroid $$ J^k A \to M $$ , for ### Detailed example 1 Let us describe the Lie algebroid associated to the pair groupoid $$ G:=M\times M $$ . Since the source map is $$ s:G\to M: (p,q)\mapsto q $$ , the $$ s $$ -fibers are of the kind $$ M \times \{q\} $$ , so that the vertical space is $$ T^sG=\bigcup_{q\in M} TM \times \{q\} \subset TM\times TM $$ . Using the unit map $$ u:M\to G: q\mapsto (q,q) $$ , one obtains the vector bundle $$ A:=u^*T^sG=\bigcup_{q\in M} T_qM=TM $$ . The extension of sections to right-invariant vector fields is simply $$ \tilde X(p,q)= X(p) \oplus 0 $$ and the extension of a smooth function from to a right-invariant function on is $$ \tilde f(p,q)=f(q) $$ .
https://en.wikipedia.org/wiki/Lie_algebroid
passage: If the dataset is not too large, the significance of the principal components can be tested using parametric bootstrap, as an aid in determining how many principal components to retain. ### Singular value decomposition The principal components transformation can also be associated with another matrix factorization, the singular value decomposition (SVD) of X, $$ \mathbf{X} = \mathbf{U}\mathbf{\Sigma}\mathbf{W}^T $$ Here Σ is an n-by-p rectangular diagonal matrix of positive numbers σ(k), called the singular values of X; U is an n-by-n matrix, the columns of which are orthogonal unit vectors of length n called the left singular vectors of X; and W is a p-by-p matrix whose columns are orthogonal unit vectors of length p and called the right singular vectors of X. In terms of this factorization, the matrix XTX can be written $$ \begin{align} \mathbf{X}^T\mathbf{X} & = \mathbf{W}\mathbf{\Sigma}^\mathsf{T} \mathbf{U}^\mathsf{T} \mathbf{U}\mathbf{\Sigma}\mathbf{W}^\mathsf{T} \\ & = \mathbf{W}\mathbf{\Sigma}^\mathsf{T} \mathbf{\Sigma} \mathbf{W}^\mathsf{T} \\ & = \mathbf{W}\mathbf{\hat{\Sigma}}^2 \mathbf{W}^\mathsf{T} \end{align} $$ where is the square diagonal matrix with the singular values of X and the excess zeros chopped off that satisfies .
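A brief NumPy sketch (with synthetic data) of obtaining the principal components from the SVD and confirming the factorization of XTX given above:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) @ np.diag([3.0, 1.0, 0.1])   # synthetic data with unequal spread
Xc = X - X.mean(axis=0)                                    # centre the data first

U, s, Wt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                        # component scores, equal to Xc @ Wt.T

# Cross-check: Xc^T Xc = W diag(s^2) W^T
print(np.allclose(Xc.T @ Xc, Wt.T @ np.diag(s**2) @ Wt))   # True
print(s**2 / (len(Xc) - 1))           # variance captured along each principal axis
```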
https://en.wikipedia.org/wiki/Principal_component_analysis
passage: A useful rule of thumb is that the maximum absolute humidity doubles for every increase in temperature. Thus, the relative humidity will drop by a factor of 2 for each increase in temperature, assuming conservation of absolute moisture. For example, in the range of normal temperatures, air at and 50% relative humidity will become saturated if cooled to , its dew point, and air at 80% relative humidity warmed to will have a relative humidity of only 29% and feel dry. By comparison, thermal comfort standard ASHRAE 55 requires systems designed to control humidity to maintain a dew point of though no lower humidity limit is established. Water vapor is a lighter gas than other gaseous components of air at the same temperature, so humid air will tend to rise by natural convection. This is a mechanism behind thunderstorms and other weather phenomena. Relative humidity is often mentioned in weather forecasts and reports, as it is an indicator of the likelihood of dew, or fog. In hot summer weather, it also increases the apparent temperature to humans (and other animals) by hindering the evaporation of perspiration from the skin as the relative humidity rises. This effect is calculated as the heat index or humidex. A device used to measure humidity is called a hygrometer; one used to regulate it is called a humidistat, or sometimes hygrostat. These are analogous to a thermometer and thermostat for temperature, respectively. The field concerned with the study of physical and thermodynamic properties of gas–vapor mixtures is named psychrometrics.
https://en.wikipedia.org/wiki/Humidity
passage: 1. Effects of local actions have a finite propagation speed. This failure of the classical view was one of the conclusions of the EPR thought experiment in which two remotely located observers, now commonly referred to as Alice and Bob, perform independent measurements of spin on a pair of electrons, prepared at a source in a special state called a spin singlet state. It was a conclusion of EPR, using the formal apparatus of quantum theory, that once Alice measured spin in the x direction, Bob's measurement in the x direction was determined with certainty, whereas immediately before Alice's measurement Bob's outcome was only statistically determined. From this it follows that either value of spin in the x direction is not an element of reality or that the effect of Alice's measurement has infinite speed of propagation. ## Indeterminacy for mixed states We have described indeterminacy for a quantum system that is in a pure state. Mixed states are a more general kind of state obtained by a statistical mixture of pure states. For mixed states the "quantum recipe" for determining the probability distribution of a measurement is determined as follows: Let A be an observable of a quantum mechanical system.
https://en.wikipedia.org/wiki/Quantum_indeterminacy
passage: For example, if $$ v=gy^4 \Rightarrow \mathopen{:}v_i(y_1)\mathclose{:}=\mathopen{:}\phi_i(y_1)\phi_i(y_1)\phi_i(y_1)\phi_i(y_1)\mathclose{:} $$ This is analogous to the corresponding Isserlis' theorem in statistics for the moments of a Gaussian distribution. Note that this discussion is in terms of the usual definition of normal ordering which is appropriate for the vacuum expectation values (VEV's) of fields. (Wick's theorem provides a way of expressing VEV's of n fields in terms of VEV's of two fields.) There are many other possible definitions of normal ordering, and Wick's theorem is valid irrespective of the definition used. However, Wick's theorem only simplifies computations if the definition of normal ordering used is changed to match the type of expectation value wanted. That is, we always want the expectation value of the normal ordered product to be zero. For instance, in thermal field theory a different type of expectation value, a thermal trace over the density matrix, requires a different definition of normal ordering.
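The Isserlis/Wick pairing rule for Gaussian moments mentioned above can be spot-checked by Monte Carlo simulation (an illustration with an arbitrarily chosen covariance matrix):

```python
import numpy as np

rng = np.random.default_rng(0)

# For zero-mean jointly Gaussian variables, Isserlis' theorem gives
#   E[x1 x2 x3 x4] = C12*C34 + C13*C24 + C14*C23.
C = np.array([[1.0, 0.3, 0.2, 0.1],
              [0.3, 1.0, 0.2, 0.1],
              [0.2, 0.2, 1.0, 0.3],
              [0.1, 0.1, 0.3, 1.0]])
x = rng.multivariate_normal(np.zeros(4), C, size=1_000_000)
lhs = np.mean(x[:, 0] * x[:, 1] * x[:, 2] * x[:, 3])
rhs = C[0, 1]*C[2, 3] + C[0, 2]*C[1, 3] + C[0, 3]*C[1, 2]
print(lhs, rhs)        # both approximately 0.13, up to Monte-Carlo error
```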
https://en.wikipedia.org/wiki/Wick%27s_theorem
passage: #### Greece General Practice was established as a medical specialty in Greece in 1986. To qualify as a General Practitioner (γενικός ιατρός, genikos iatros) doctors in Greece are required to complete four years of vocational training after medical school, including three years and two months in a hospital setting. General Practitioners in Greece may either work as private specialists or for the National Healthcare Service, ESY (Εθνικό Σύστημα Υγείας, ΕΣΥ). #### Netherlands and Belgium General practice in the Netherlands and Belgium is considered advanced. The huisarts (literally: "home doctor") administers first line, primary care. In the Netherlands, patients usually cannot consult a hospital specialist without a required referral. Most GPs work in private practice although more medical centers with employed GPs are seen. Many GPs have a specialist interest, e.g. in palliative care. In Belgium, one year of lectures and two years of residency are required. In the Netherlands, training consists of three years (full-time) of specialization after completion of internships of 3 years. First and third year of training takes place at a GP practice. The second year of training consists of six months training at an emergency room, or internal medicine, paediatrics or gynaecology, or a combination of a general or academic hospital, three months of training at a psychiatric hospital or outpatient clinic and three months at a nursing home (verpleeghuis) or clinical geriatrics ward/policlinic. During all three years, residents get one day of training at university while working in practice the other days.
https://en.wikipedia.org/wiki/General_practitioner
passage: Every TD space is a T0 space. Every T1 space is a TD space, since every singleton is closed, hence $$ \{x\}'=\overline{\{x\}}\setminus\{x\}=\varnothing, $$ which is closed. Consequently, in a T1 space, the derived set of any set is closed. The relation between these properties can be summarized as $$ T_1\implies T_D\implies T_0. $$ The implications are not reversible. For example, the Sierpiński space is TD and not T1. And the right order topology on $$ \R $$ is T0 and not TD. ### More properties Two subsets $$ S $$ and $$ T $$ are separated precisely when they are disjoint and each is disjoint from the other's derived set $$ S' \cap T = \varnothing = T' \cap S. $$ A bijection between two topological spaces is a homeomorphism if and only if the derived set of the image (in the second space) of any subset of the first space is the image of the derived set of that subset. In a T1 space, the derived set of any finite set is empty and furthermore, $$ (S - \{p\})' = S' = (S \cup \{p\})', $$ for any subset $$ S $$ and any point $$ p $$ of the space.
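For a finite space the derived set can be computed straight from the definition of a limit point; the sketch below does so for the Sierpiński space mentioned above (an added illustration):

```python
# Sierpinski space: points {0, 1} with open sets {}, {1}, {0, 1}
points = {0, 1}
opens = [set(), {1}, {0, 1}]

def derived_set(S):
    """Limit points of S: x is a limit point if every open set containing x
    meets S in some point other than x."""
    return {x for x in points
            if all(S & (U - {x}) for U in opens if x in U)}

print(derived_set({1}))   # {0}: the only open set containing 0 also meets {1}
print(derived_set({0}))   # set(): the open set {1} contains 1 but misses {0}
```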
https://en.wikipedia.org/wiki/Derived_set_%28mathematics%29
passage: For wavelength $$ \lambda $$ , it is: $$ B_{\lambda} (T) = \frac{2 ck_{\mathrm{B}} T}{\lambda^4}, $$ where $$ B_{\lambda} $$ is the spectral radiance, the power emitted per unit emitting area, per steradian, per unit wavelength; $$ c $$ is the speed of light; $$ k_{\mathrm{B}} $$ is the Boltzmann constant; and $$ T $$ is the temperature in kelvins. For frequency $$ \nu $$ , the expression is instead $$ B_{\nu}(T) = \frac{2 \nu^2 k_{\mathrm{B}} T}{c^2}. $$ This formula is obtained from the equipartition theorem of classical statistical mechanics which states that all harmonic oscillator modes (degrees of freedom) of a system at equilibrium have an average energy of $$ k_{\rm B}T $$ . The "ultraviolet catastrophe" is the expression of the fact that the formula misbehaves at higher frequencies; it predicts infinite energy emission because $$ B_{\nu}(T) \to \infty $$ as $$ \nu \to \infty $$ . An example, from Mason's A History of the Sciences, illustrates multi-mode vibration via a piece of string.
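Evaluating the Rayleigh–Jeans expression at increasing frequencies makes the divergence concrete (an illustrative calculation with SI constants):

```python
k_B = 1.380649e-23      # Boltzmann constant, J/K
c = 2.99792458e8        # speed of light, m/s

def rayleigh_jeans(nu, T):
    """Rayleigh-Jeans spectral radiance B_nu(T) = 2 nu^2 k_B T / c^2."""
    return 2 * nu**2 * k_B * T / c**2

for nu in (1e12, 1e14, 1e16, 1e18):
    print(f"{nu:.0e} Hz -> {rayleigh_jeans(nu, T=300):.3e} W sr^-1 m^-2 Hz^-1")
# grows without bound as nu increases: the ultraviolet catastrophe
```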
https://en.wikipedia.org/wiki/Ultraviolet_catastrophe
passage: The numbers $$ 2^{2^n} $$ form an irrationality sequence: for every sequence $$ x_i $$ of positive integers, the series $$ \sum_{i=0}^{\infty} \frac{1}{2^{2^i} x_i} = \frac{1}{2x_0}+\frac{1}{4x_1}+\frac{1}{16x_2}+\cdots $$ converges to an irrational number. Despite the rapid growth of this sequence, it is the slowest-growing irrationality sequence known. ### Powers of two whose exponents are powers of two in computer science Since it is common for computer data types to have a size which is a power of two, these numbers count the number of representable values of that type. For example, a 32-bit word consisting of 4 bytes can represent $$ 2^{32} $$ distinct values, which can either be regarded as mere bit-patterns, or are more commonly interpreted as the unsigned numbers from 0 to $$ 2^{32}-1 $$ , or as the range of signed numbers between $$ -2^{31} $$ and $$ 2^{31}-1 $$ . For more about representing signed numbers see Two's complement. ## Selected powers of two
$$ 2^2 = 4 $$ : The number that is the square of two. Also the first power of two tetration of two.
$$ 2^8 = 256 $$ : The number of values represented by the 8 bits in a byte, more specifically termed as an octet.
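These counts and ranges follow directly from powers of two, as a tiny sketch shows:

```python
def ranges(n_bits):
    """Value count and unsigned/signed (two's complement) ranges for an n-bit word."""
    count = 2 ** n_bits
    return count, (0, count - 1), (-(count // 2), count // 2 - 1)

for n in (8, 16, 32):
    count, unsigned, signed = ranges(n)
    print(f"{n:>2} bits: {count} values, unsigned {unsigned}, signed {signed}")
```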
https://en.wikipedia.org/wiki/Power_of_two
passage: The theorem appears in a 1922 publication of Ernst Steinitz, after whom it is named. It can be proven by mathematical induction (as Steinitz did), by finding the minimum-energy state of a two-dimensional spring system and lifting the result into three dimensions, or by using the circle packing theorem. Several extensions of the theorem are known, in which the polyhedron that realizes a given graph has additional constraints; for instance, every polyhedral graph is the graph of a convex polyhedron with integer coordinates, or the graph of a convex polyhedron all of whose edges are tangent to a common midsphere. ## Definitions and statement of the theorem An undirected graph is a system of vertices and edges, each edge connecting two of the vertices. As is common in graph theory, for the purposes of Steinitz's theorem these graphs are restricted to being finite (the vertices and edges are finite sets) and simple (no two edges connect the same two vertices, and no edge connects a vertex to itself). From any polyhedron one can form a graph, by letting the vertices of the graph correspond to the vertices of the polyhedron and by connecting any two graph vertices by an edge whenever the corresponding two polyhedron vertices are the endpoints of an edge of the polyhedron. This graph is known as the skeleton of the polyhedron.
https://en.wikipedia.org/wiki/Steinitz%27s_theorem
passage: Issues such as an uneven supply voltage, low gain and a small dynamic range held off the dominance of monolithic op amps until 1965 when the μA709 (also designed by Bob Widlar) was released. 1968: Release of the μA741. The popularity of monolithic op amps was further improved upon the release of the LM101 in 1967, which solved a variety of issues, and the subsequent release of the μA741 in 1968. The μA741 was extremely similar to the LM101 except that Fairchild's facilities allowed them to include a 30 pF compensation capacitor inside the chip instead of requiring external compensation. This simple difference has made the 741 the canonical op amp and many modern amps base their pinout on the 741s. The μA741 is still in production, and has become ubiquitous in electronics—many manufacturers produce a version of this classic chip, recognizable by part numbers containing 741. The same part is manufactured by several companies. 1970: First high-speed, low-input current FET design. In the 1970s high speed, low-input current designs started to be made by using FETs. These would be largely replaced by op amps made with MOSFETs in the 1980s. 1972: Single sided supply op amps being produced. A single sided supply op amp is one where the input and output voltages can be as low as the negative power supply voltage instead of needing to be at least two volts above it.
https://en.wikipedia.org/wiki/Operational_amplifier
passage: The disease is characterized by loss of the insulin-producing beta cells of the pancreatic islets, leading to severe insulin deficiency, and can be further classified as immune-mediated or idiopathic (without known cause). The majority of cases are immune-mediated, in which a T cell-mediated autoimmune attack causes loss of beta cells and thus insulin deficiency. Patients often have irregular and unpredictable blood sugar levels due to very low insulin and an impaired counter-response to hypoglycemia. Type 1 diabetes is partly inherited, with multiple genes, including certain HLA genotypes, known to influence the risk of diabetes. In genetically susceptible people, the onset of diabetes can be triggered by one or more environmental factors, such as a viral infection or diet. Several viruses have been implicated, but to date there is no stringent evidence to support this hypothesis in humans. Type 1 diabetes can occur at any age, and a significant proportion is diagnosed during adulthood. Latent autoimmune diabetes of adults (LADA) is the diagnostic term applied when type 1 diabetes develops in adults; it has a slower onset than the same condition in children. Given this difference, some use the unofficial term "type 1.5 diabetes" for this condition. Adults with LADA are frequently initially misdiagnosed as having type 2 diabetes, based on age rather than a cause. LADA leaves adults with higher levels of insulin production than type 1 diabetes, but not enough insulin production for healthy blood sugar levels.
https://en.wikipedia.org/wiki/Diabetes
passage: $$ \begin{align} &\mathbb{E}[\text{payoff for B playing T}] = (-1)p + (+1)(1 - p) = 1 - 2p, \\ &\mathbb{E}[\text{payoff for B playing H}] = \mathbb{E}[\text{payoff for B playing T}] \implies 2p - 1 = 1 - 2p \implies p = \frac{1}{2}. \end{align} $$ Thus, a mixed-strategy Nash equilibrium in this game is for each player to randomly choose H or T with $$ p = \frac{1}{2} $$ and $$ q = \frac{1}{2} $$ .
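The indifference argument can be confirmed numerically: with p = q = 1/2, each pure strategy of either player earns the same expected payoff, so neither can gain by deviating (a small sketch using the sign convention of the derivation above, where B wins when the pennies match):

```python
import numpy as np

# Payoffs to player A (rows: A plays H, T; columns: B plays H, T); B's payoffs
# are the negatives, since the game is zero-sum and B wins on a match.
A = np.array([[-1, +1],
              [+1, -1]])
p = np.array([0.5, 0.5])      # A's mixed strategy over H, T
q = np.array([0.5, 0.5])      # B's mixed strategy over H, T

print(A @ q)        # A's expected payoff for pure H and pure T against q: [0. 0.]
print(-(p @ A))     # B's expected payoff for pure H and pure T against p: [0. 0.]
```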
https://en.wikipedia.org/wiki/Nash_equilibrium
passage: Radiance, or spectral radiance, is a measure of the quantity of radiation that passes through or is emitted. Radiant barriers are materials that reflect radiation, and therefore reduce the flow of heat from radiation sources. Good insulators are not necessarily good radiant barriers, and vice versa. Metal, for instance, is an excellent reflector and a poor insulator. The effectiveness of a radiant barrier is indicated by its reflectivity, which is the fraction of radiation reflected. A material with a high reflectivity (at a given wavelength) has a low emissivity (at that same wavelength), and vice versa. At any specific wavelength, reflectivity=1 - emissivity. An ideal radiant barrier would have a reflectivity of 1, and would therefore reflect 100 percent of incoming radiation. Vacuum flasks, or Dewars, are silvered to approach this ideal. In the vacuum of space, satellites use multi-layer insulation, which consists of many layers of aluminized (shiny) Mylar to greatly reduce radiation heat transfer and control satellite temperature. ### Devices A heat engine is a system that performs the conversion of a flow of thermal energy (heat) to mechanical energy to perform mechanical work. Mechanical efficiency of heat engines, p. 1 (2007) by James R. Senf: "Heat engines are made to provide mechanical energy from thermal energy. " A thermocouple is a temperature-measuring device and a widely used type of temperature sensor for measurement and control, and can also be used to convert heat into electric power.
https://en.wikipedia.org/wiki/Heat_transfer
passage: ### Supportive cells Although its primary function is transport of sugars, phloem may also contain cells that have a mechanical support function. These are sclerenchyma cells which generally fall into two categories: fibres and sclereids. Both cell types have a secondary cell wall and are dead at maturity. The secondary cell wall increases their rigidity and tensile strength, especially because they contain lignin. #### Fibres Bast fibres are the long, narrow supportive cells that provide tension strength without limiting flexibility. They are also found in xylem, and are the main component of many textiles such as paper, linen, and cotton. #### Sclereids Sclereids are irregularly shaped cells that add compression strength but may reduce flexibility to some extent. They also serve as anti-herbivory structures, as their irregular shape and hardness will increase wear on teeth as the herbivores chew. For example, they are responsible for the gritty texture in pears, and in winter pears. ## Function Unlike xylem (which is composed primarily of dead cells), the phloem is composed of still-living cells that transport sap. The sap is a water-based solution, but rich in sugars made by photosynthesis. These sugars are transported to non-photosynthetic parts of the plant, such as the roots, or into storage structures, such as tubers or bulbs.
https://en.wikipedia.org/wiki/Phloem
passage: So, instead of reaching , all bad paths after reflection end at . Because every monotonic path in the grid meets the higher diagonal, and because the reflection process is reversible, the reflection is therefore a bijection between bad paths in the original grid and monotonic paths in the new grid. The number of bad paths is therefore: $$ {n-1 + n+1 \choose n-1} = {2n \choose n-1} = {2n \choose n+1} $$ and the number of Catalan paths (i.e. good paths) is obtained by removing the number of bad paths from the total number of monotonic paths of the original grid, $$ C_n = {2n \choose n} - {2n \choose n+1} = \frac{1}{n+1}{2n \choose n}. $$ In terms of Dyck words, we start with a (non-Dyck) sequence of X's and Y's and interchange all X's and Y's after the first Y that violates the Dyck condition. After this Y, note that there is exactly one more Y than there are Xs. ### Third proof This bijective proof provides a natural explanation for the term appearing in the denominator of the formula for . A generalized version of this proof can be found in a paper of Rukavicka Josef (2011). Given a monotonic path, the exceedance of the path is defined to be the number of vertical edges above the diagonal.
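The resulting formula can be cross-checked against a brute-force enumeration of Dyck words for small n (an illustrative sketch):

```python
from math import comb
from itertools import product

def catalan(n):
    # good paths = all monotonic paths minus the "bad" (reflected) ones
    return comb(2 * n, n) - comb(2 * n, n + 1)

def count_dyck(n):
    """Brute-force count of X/Y words in which no prefix has more Y's than X's."""
    total = 0
    for w in product("XY", repeat=2 * n):
        height = 0
        for ch in w:
            height += 1 if ch == "X" else -1
            if height < 0:
                break
        else:
            total += height == 0
    return total

print([catalan(n) for n in range(1, 7)])      # [1, 2, 5, 14, 42, 132]
print([count_dyck(n) for n in range(1, 7)])   # matches
```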
https://en.wikipedia.org/wiki/Catalan_number
passage: Its time complexity is $$ O(nL) $$ , where $$ L $$ is the maximum length of a codeword. No algorithm is known to solve this problem in $$ O(n) $$ or $$ O(n \log n) $$ time, unlike the presorted and unsorted conventional Huffman problems, respectively. ### Huffman coding with unequal letter costs In the standard Huffman coding problem, it is assumed that each symbol in the set that the code words are constructed from has an equal cost to transmit: a code word whose length is N digits will always have a cost of N, no matter how many of those digits are 0s, how many are 1s, etc. When working under this assumption, minimizing the total cost of the message and minimizing the total number of digits are the same thing. Huffman coding with unequal letter costs is the generalization without this assumption: the letters of the encoding alphabet may have non-uniform lengths, due to characteristics of the transmission medium. An example is the encoding alphabet of Morse code, where a 'dash' takes longer to send than a 'dot', and therefore the cost of a dash in transmission time is higher. The goal is still to minimize the weighted average codeword length, but it is no longer sufficient just to minimize the number of symbols used by the message. No algorithm is known to solve this in the same manner or with the same efficiency as conventional Huffman coding, though it has been solved by Richard M. Karp whose solution has been refined for the case of integer costs by Mordecai J. Golin. ### Optimal alphabetic binary trees (Hu–Tucker coding)
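For contrast with the unequal-cost variant, conventional Huffman coding is easily sketched with a priority queue; the helper below returns only codeword lengths, which is all the weighted average codeword length depends on (an illustration, not an optimized implementation):

```python
import heapq
from collections import Counter

def huffman_lengths(text):
    """Codeword lengths from conventional (equal letter cost) Huffman coding."""
    freqs = Counter(text)
    # heap items: (weight, tie_breaker, set_of_symbols_in_this_subtree)
    heap = [(w, i, {s}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    depth = {s: 0 for s in freqs}
    i = len(heap)
    while len(heap) > 1:
        w1, _, s1 = heapq.heappop(heap)
        w2, _, s2 = heapq.heappop(heap)
        for s in s1 | s2:
            depth[s] += 1              # every merge pushes these symbols one level deeper
        heapq.heappush(heap, (w1 + w2, i, s1 | s2))
        i += 1
    return depth

print(huffman_lengths("this is an example of a huffman tree"))
```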
https://en.wikipedia.org/wiki/Huffman_coding
passage: Google Translate uses a neural network to translate between more than 100 languages. In 2017, Covariant.ai was launched, which focuses on integrating deep learning into factories. As of 2008, researchers at The University of Texas at Austin (UT) developed a machine learning framework called Training an Agent Manually via Evaluative Reinforcement, or TAMER, which proposed new methods for robots or computer programs to learn how to perform tasks by interacting with a human instructor. First developed as TAMER, a new algorithm called Deep TAMER was later introduced in 2018 during a collaboration between U.S. Army Research Laboratory (ARL) and UT researchers. Deep TAMER used deep learning to provide a robot with the ability to learn new tasks through observation. Using Deep TAMER, a robot learned a task with a human trainer, watching video streams or observing a human perform a task in-person. The robot later practiced the task with the help of some coaching from the trainer, who provided feedback such as "good job" and "bad job". ## Criticism and comment Deep learning has attracted both criticism and comment, in some cases from outside the field of computer science. ### Theory A main criticism concerns the lack of theory surrounding some methods. Learning in the most common deep architectures is implemented using well-understood gradient descent. However, the theory surrounding other algorithms, such as contrastive divergence is less clear. (e.g., Does it converge? If so, how fast? What is it approximating?)
https://en.wikipedia.org/wiki/Deep_learning%23Deep_neural_networks
passage: In many cases, environmental standards such as via maximum pollution levels, regulation of chemicals, occupational hygiene requirements or consumer protection regulations establish some protection in combination with the monitoring. Preventive measures like vaccines and medical screenings are also important. Using PPE properly and getting the recommended vaccines and screenings can help decrease the spread of respiratory diseases, protecting the healthcare workers as well as their patients. Secondary prevention Secondary prevention deals with latent diseases and attempts to prevent an asymptomatic disease from progressing to symptomatic disease. Certain diseases can be classified as primary or secondary. This depends on definitions of what constitutes a disease, though, in general, primary prevention addresses the root cause of a disease or injury whereas secondary prevention aims to detect and treat a disease early on. Secondary prevention consists of "early diagnosis and prompt treatment" to contain the disease and prevent its spread to other individuals, and "disability limitation" to prevent potential future complications and disabilities from the disease. Early diagnosis and prompt treatment for a syphilis patient would include a course of antibiotics to destroy the pathogen and screening and treatment of any infants born to syphilitic mothers. Disability limitation for syphilitic patients includes continued check-ups on the heart, cerebrospinal fluid, and central nervous system of patients to curb any damaging effects such as blindness or paralysis. Tertiary prevention Finally, tertiary prevention attempts to reduce the damage caused by symptomatic disease by focusing on mental, physical, and social rehabilitation.
https://en.wikipedia.org/wiki/Preventive_healthcare
passage: The eigenvalues of the matrix are $$ \varphi=\tfrac12\bigl(1+\sqrt5~\!\bigr) $$ and $$ \psi=-\varphi^{-1}=\tfrac12\bigl(1-\sqrt5~\!\bigr) $$ corresponding to the respective eigenvectors $$ \vec \mu=\begin{pmatrix} \varphi \\ 1 \end{pmatrix}, \quad \vec\nu=\begin{pmatrix} -\varphi^{-1} \\ 1 \end{pmatrix}. $$ As the initial value is $$ \vec F_0=\begin{pmatrix} 1 \\ 0 \end{pmatrix}=\frac{1}{\sqrt{5}}\vec{\mu}\,-\,\frac{1}{\sqrt{5}}\vec{\nu}, $$ it follows that the $$ n $$ th element is $$ \begin{align} \vec F_n\ &= \frac{1}{\sqrt{5}}A^n\vec\mu-\frac{1}{\sqrt{5}}A^n\vec\nu \\ &= \frac{1}{\sqrt{5}}\varphi^n\vec\mu - \frac{1}{\sqrt{5}}(-\varphi)^{-n}\vec\nu \\
https://en.wikipedia.org/wiki/Fibonacci_sequence
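The closed-form analysis above can be cross-checked numerically. Assuming the matrix in question is the usual Fibonacci matrix $$ A = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} $$ (the one whose eigenvectors match those listed), a short sketch computes $$ F_n $$ by exponentiation by squaring; the helper names are mine.

```python
def mat_mul(X, Y):
    """Product of two 2x2 integer matrices."""
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

def fib(n):
    """Return F(n) (F(0) = 0, F(1) = 1) using O(log n) products of the
    matrix A = [[1, 1], [1, 0]], since A^n = [[F(n+1), F(n)], [F(n), F(n-1)]]."""
    result = [[1, 0], [0, 1]]          # 2x2 identity
    base = [[1, 1], [1, 0]]
    while n > 0:
        if n & 1:
            result = mat_mul(result, base)
        base = mat_mul(base, base)
        n >>= 1
    return result[0][1]

print([fib(k) for k in range(10)])     # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```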
passage: For example, the following is a basis for the geometric algebra of a three-dimensional vector space $$ V $$ : $$ \{1, e_1, e_2, e_3, e_1e_2, e_2e_3, e_3e_1, e_1e_2e_3\} $$ A basis formed this way is called a standard basis for the geometric algebra, and any other orthogonal basis for $$ V $$ will produce another standard basis. Each standard basis consists of $$ 2^n $$ elements. Every multivector of the geometric algebra can be expressed as a linear combination of the standard basis elements. If the standard basis elements are $$ \{ B_i \mid i \in S \} $$ with $$ S $$ being an index set, then the geometric product of any two multivectors is $$ \left( \sum_i \alpha_i B_i \right) \left( \sum_j \beta_j B_j \right) = \sum_{i,j} \alpha_i\beta_j B_i B_j . $$ The terminology " $$ k $$ -vector" is often encountered to describe multivectors containing elements of only one grade. In higher dimensional space, some such multivectors are not blades (cannot be factored into the exterior product of $$ k $$ vectors).
https://en.wikipedia.org/wiki/Geometric_algebra
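To illustrate the product formula, here is a minimal sketch in Python that encodes each standard basis element $$ B_i $$ as a bitmask of the orthonormal vectors it contains; it assumes a Euclidean (positive-definite) signature, so each basis vector squares to $$ +1 $$ , and the function names are mine.

```python
def blade_product(a, b):
    """Geometric product of two standard basis blades, each given as a bitmask
    (bit i set means e_{i+1} is a factor, factors kept in ascending order).
    Returns (sign, blade). Assumes a Euclidean metric, so any shared basis
    vectors cancel with a factor of +1."""
    swaps, t = 0, a >> 1
    while t:
        swaps += bin(t & b).count("1")   # transpositions needed to sort the factors
        t >>= 1
    return (-1 if swaps & 1 else 1), a ^ b

def multivector_product(A, B):
    """Geometric product of multivectors stored as {blade_bitmask: coefficient},
    i.e. the double sum over alpha_i beta_j B_i B_j from the passage."""
    out = {}
    for bi, ci in A.items():
        for bj, cj in B.items():
            s, blade = blade_product(bi, bj)
            out[blade] = out.get(blade, 0) + s * ci * cj
    return {k: v for k, v in out.items() if v != 0}

e1, e2, e3 = 0b001, 0b010, 0b100
print(blade_product(e1, e2))                      # (1, 3): e1 e2
print(blade_product(e2, e1))                      # (-1, 3): -e1 e2
print(multivector_product({e1: 1.0}, {e1: 1.0}))  # {0: 1.0}: e1 e1 = 1
```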
passage: In the next-to-last step, D is accessed and the sequence number is updated. F is then accessed, replacing B, which had the lowest rank (B(1)). #### Time-aware, least-recently used Time-aware, least-recently-used (TLRU) is a variant of LRU designed for when the contents of a cache have a valid lifetime. The algorithm is suitable for network cache applications such as information-centric networking (ICN), content delivery networks (CDNs) and distributed networks in general. TLRU introduces a term: TTU (time to use), a timestamp of content (or a page) which stipulates the usability time for the content based on its locality and the content publisher. TTU provides more control to a local administrator in regulating network storage. When content subject to TLRU arrives, a cache node calculates the local TTU based on the TTU assigned by the content publisher. The local TTU value is calculated with a locally-defined function. When the local TTU value is calculated, content replacement is performed on a subset of the total content of the cache node. TLRU ensures that less-popular and short-lived content is replaced with incoming content. #### Most-recently-used (MRU) Unlike LRU, MRU discards the most-recently-used items first.
https://en.wikipedia.org/wiki/Cache_replacement_policies
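Because the passage leaves the locally defined TTU function and the exact replacement rule open, the following is only a rough sketch of a TLRU-style cache in Python: here the local TTU is assumed to be the publisher's TTU capped by an administrator-set maximum, already-expired entries are evicted first, and ties fall back to ordinary LRU order. The class and parameter names are mine.

```python
import time
from collections import OrderedDict

class TLRUCache:
    """Sketch of a time-aware least-recently-used cache."""

    def __init__(self, capacity, max_ttu):
        self.capacity = capacity        # maximum number of cached objects
        self.max_ttu = max_ttu          # administrator cap on usability time (seconds)
        self.store = OrderedDict()      # key -> (value, expiry); order tracks recency

    def _local_ttu(self, publisher_ttu):
        # Assumed locally-defined function: never keep content usable longer
        # than the local administrator allows.
        return min(publisher_ttu, self.max_ttu)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None or entry[1] < time.time():   # missing or past its TTU
            self.store.pop(key, None)
            return None
        self.store.move_to_end(key)                   # refresh recency
        return entry[0]

    def put(self, key, value, publisher_ttu):
        expiry = time.time() + self._local_ttu(publisher_ttu)
        if key in self.store:
            self.store.pop(key)
        elif len(self.store) >= self.capacity:
            now = time.time()
            expired = [k for k, (_, exp) in self.store.items() if exp < now]
            # Prefer replacing short-lived (expired) content; otherwise evict the LRU entry.
            self.store.pop(expired[0] if expired else next(iter(self.store)))
        self.store[key] = (value, expiry)
```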
passage: In a perfectly inelastic collision (such as a bug hitting a windshield), both bodies have the same motion afterwards. A head-on inelastic collision between two bodies can be represented by velocities in one dimension, along a line passing through the bodies. If the velocities are $$ v_{A1} $$ and $$ v_{B1} $$ before the collision, then in a perfectly inelastic collision both bodies will be travelling with velocity $$ v_2 $$ after the collision. The equation expressing conservation of momentum is: $$ \begin{align} m_A v_{A1} + m_B v_{B1} &= \left( m_A + m_B \right) v_2\,.\end{align} $$ If one body is motionless to begin with (e.g. $$ v_{B1} = 0 $$ ), the equation for conservation of momentum is $$ m_A v_{A1} = \left( m_A + m_B \right) v_2\,, $$ so $$ v_2 = \frac{m_{A}}{m_{A}+m_{B}} v_{A1}\,. $$ In a different situation, if the frame of reference is moving at the final velocity such that $$ v_2 = 0 $$ , the objects would be brought to rest by a perfectly inelastic collision and 100% of the kinetic energy is converted to other forms of energy. In this instance the initial velocities of the bodies would be non-zero, or the bodies would have to be massless.
https://en.wikipedia.org/wiki/Momentum
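As a hypothetical numerical check of the last formula, let body A with $$ m_A = 2\,\text{kg} $$ move at $$ v_{A1} = 3\,\text{m/s} $$ toward a stationary body B with $$ m_B = 1\,\text{kg} $$ . After a perfectly inelastic collision the common velocity is $$ v_2 = \frac{m_A}{m_A + m_B}\, v_{A1} = \frac{2}{3}\times 3\,\text{m/s} = 2\,\text{m/s}, $$ so the momentum $$ 6\,\text{kg·m/s} $$ is conserved while the kinetic energy falls from $$ \tfrac12 (2)(3)^2 = 9\,\text{J} $$ to $$ \tfrac12 (3)(2)^2 = 6\,\text{J} $$ , the missing $$ 3\,\text{J} $$ being converted to other forms of energy.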
passage: However the complexity-theoretic status of graph isomorphism remains an open question. In the context of the Aanderaa–Karp–Rosenberg conjecture on the query complexity of monotone graph properties, it has been shown that any subgraph isomorphism problem has query complexity $$ \Omega(n^{3/2}) $$ ; that is, solving the subgraph isomorphism problem requires an algorithm to check the presence or absence in the input of $$ \Omega(n^{3/2}) $$ different edges in the graph. ## Algorithms Ullmann (1976) describes a recursive backtracking procedure for solving the subgraph isomorphism problem. Although its running time is, in general, exponential, it takes polynomial time for any fixed choice of H (with a polynomial that depends on the choice of H). When G is a planar graph (or more generally a graph of bounded expansion) and H is fixed, the running time of subgraph isomorphism can be reduced to linear time. Ullmann (2010) is a substantial update to the 1976 subgraph isomorphism algorithm paper. Cordella et al. proposed in 2004 another algorithm based on Ullmann's, VF2, which improves the refinement process using different heuristics and uses significantly less memory. A later algorithm improves the initial order of the vertices using some heuristics. The current state-of-the-art solver for moderately sized, hard instances is the Glasgow Subgraph Solver. This solver adopts a constraint programming approach, using bit-parallel data structures and specialized propagation algorithms for performance.
https://en.wikipedia.org/wiki/Subgraph_isomorphism_problem
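The recursive backtracking idea mentioned above can be sketched in a few lines of Python; this is only the naive procedure (no Ullmann-style refinement or VF2 pruning heuristics), graphs are given as adjacency-set dictionaries, and it searches for a (not necessarily induced) subgraph isomorphism. The example graphs are hypothetical.

```python
def subgraph_isomorphism(H, G):
    """Find an injective map f from V(H) to V(G) such that every edge (u, v)
    of H maps to an edge (f(u), f(v)) of G.  H and G are dicts mapping a
    vertex to its set of neighbours.  Worst-case exponential time."""
    pattern = list(H)

    def extend(mapping):
        if len(mapping) == len(pattern):
            return dict(mapping)
        u = pattern[len(mapping)]                   # next pattern vertex to place
        for g in G:
            if g in mapping.values():
                continue                            # keep the mapping injective
            # Every already-mapped neighbour of u must land on a neighbour of g.
            if all(mapping[v] in G[g] for v in H[u] if v in mapping):
                mapping[u] = g
                found = extend(mapping)
                if found is not None:
                    return found
                del mapping[u]                      # backtrack
        return None

    return extend({})

# Hypothetical example: a triangle H inside a 4-vertex graph G with a chord.
H = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
G = {"a": {"b", "d"}, "b": {"a", "c", "d"}, "c": {"b", "d"}, "d": {"a", "b", "c"}}
print(subgraph_isomorphism(H, G))                   # e.g. {0: 'a', 1: 'b', 2: 'd'}
```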
passage: Every layer can have a different number of neurons $$ N_A $$ . These neurons are recurrently connected with the neurons in the preceding and the subsequent layers. The matrices of weights that connect neurons in layers $$ A $$ and $$ B $$ are denoted by $$ \xi^{(A,B)}_{ij} $$ (the order of the upper indices for weights is the same as the order of the lower indices, in the example above this means that the index $$ i $$ enumerates neurons in the layer $$ A $$ , and index $$ j $$ enumerates neurons in the layer $$ B $$ ). The feedforward weights and the feedback weights are equal. The dynamical equations for the neurons' states can be written down subject to appropriate boundary conditions. The main difference between these equations and those from the conventional feedforward networks is the presence of the second term, which is responsible for the feedback from higher layers. These top-down signals help neurons in lower layers to decide on their response to the presented stimuli. Following the general recipe it is convenient to introduce a Lagrangian function $$ L^A(\{x^A_i\}) $$ for the $$ A $$ -th hidden layer, which depends on the activities of all the neurons in that layer.
https://en.wikipedia.org/wiki/Hopfield_network
passage: By the way we transformed the arrays, it is guaranteed that . ### Convolution sum Instead of looking for arbitrary elements of the array such that: $$ S[k]=S[i]+S[j] $$ the convolution 3sum problem (Conv3SUM) looks for elements in specific locations: $$ S[i+j]=S[i]+S[j] $$ #### Reduction from Conv3SUM to 3SUM Given a solver for 3SUM, the Conv3SUM problem can be solved in the following way. - Define a new array T, such that for every index i: $$ T[i]=2n S[i]+i $$ (where n is the number of elements in the array, and the indices run from 0 to n-1). - Solve 3SUM on the array T. Correctness proof: - If in the original array there is a triple with $$ S[i+j]=S[i]+S[j] $$ , then $$ T[i+j]=2n S[i+j]+i+j = (2n S[i] + i) + (2n S[j] + j)=T[i]+T[j] $$ , so this solution will be found by 3SUM on T. - Conversely, if in the new array there is a triple with $$ T[k]=T[i]+T[j] $$ , then $$ 2n S[k] + k = 2n(S[i]+S[j]) + (i+j) $$ .
https://en.wikipedia.org/wiki/3SUM
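A minimal sketch of the reduction in Python, with a simple quadratic hash-based solver standing in for the 3SUM black box (the function name is mine). For integer inputs the transformed values $$ T[i] = 2nS[i] + i $$ are pairwise distinct, and any witness found satisfies $$ k = i + j $$ , exactly as in the correctness argument above.

```python
def conv3sum_via_3sum(S):
    """Solve Conv3SUM (find i, j with S[i+j] == S[i] + S[j]) by transforming
    to T[i] = 2*n*S[i] + i and answering a 3SUM query T[k] == T[i] + T[j]."""
    n = len(S)
    T = [2 * n * S[i] + i for i in range(n)]
    index_of = {t: k for k, t in enumerate(T)}    # T values are pairwise distinct
    for i in range(n):                            # quadratic 3SUM solver on T
        for j in range(n):
            k = index_of.get(T[i] + T[j])
            if k is not None:
                return i, j, k                    # k == i + j, so S[i+j] == S[i] + S[j]
    return None

# Hypothetical input with S[3] == S[1] + S[2]:
print(conv3sum_via_3sum([10, 4, 7, 11, 1]))       # (1, 2, 3)
```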
passage: As trivial examples, note that attaching a 0-handle is just taking a disjoint union with a ball, and that attaching an n-handle to $$ (W,\partial W) $$ is gluing in a ball along any sphere component of $$ \partial W $$ . Morse theory was used by Thom and Milnor to prove that every manifold (with or without boundary) is a handlebody, meaning that it has an expression as a union of handles. The expression is non-unique: the manipulation of handlebody decompositions is an essential ingredient of the proof of the Smale h-cobordism theorem, and its generalization to the s-cobordism theorem. A manifold is called a "k-handlebody" if it is the union of r-handles, for r at most k. This is not the same as the dimension of the manifold. For instance, a 4-dimensional 2-handlebody is a union of 0-handles, 1-handles and 2-handles. Any manifold is an n-handlebody, that is, any manifold is the union of handles. It isn't too hard to see that a manifold is an (n-1)-handlebody if and only if it has non-empty boundary. Any handlebody decomposition of a manifold defines a CW complex decomposition of the manifold, since attaching an r-handle is the same, up to homotopy equivalence, as attaching an r-cell. However, a handlebody decomposition gives more information than just the homotopy type of the manifold.
https://en.wikipedia.org/wiki/Handlebody
passage: ### Macromolecule blotting and probing The terms northern, western and eastern blotting are derived from what initially was a molecular biology joke that played on the term Southern blotting, after the technique described by Edwin Southern for the hybridisation of blotted DNA. Patricia Thomas, developer of the RNA blot which then became known as the northern blot, actually did not use the term. #### Southern blotting Named after its inventor, biologist Edwin Southern, the Southern blot is a method for probing for the presence of a specific DNA sequence within a DNA sample. DNA samples before or after restriction enzyme (restriction endonuclease) digestion are separated by gel electrophoresis and then transferred to a membrane by blotting via capillary action. The membrane is then exposed to a labeled DNA probe that has a complement base sequence to the sequence on the DNA of interest. Southern blotting is less commonly used in laboratory science due to the capacity of other techniques, such as PCR, to detect specific DNA sequences from DNA samples. These blots are still used for some applications, however, such as measuring transgene copy number in transgenic mice or in the engineering of gene knockout embryonic stem cell lines. #### Northern blotting The northern blot is used to study the presence of specific RNA molecules as relative comparison among a set of different samples of RNA. It is essentially a combination of denaturing RNA gel electrophoresis, and a blot.
https://en.wikipedia.org/wiki/Molecular_biology
passage: The rest of the address was used as previously to identify a host within a network. Because of the different sizes of fields in different classes, each network class had a different capacity for addressing hosts. In addition to the three classes for addressing hosts, Class D was defined for multicast addressing and Class E was reserved for future applications. Dividing existing classful networks into subnets began in 1985 with the publication of RFC 950. This division was made more flexible with the introduction of variable-length subnet masks (VLSM) in 1987. In 1993, based on this work, Classless Inter-Domain Routing (CIDR) was introduced; it expresses the number of bits in the network prefix (counted from the most significant bit) as a suffix following a slash, and the class-based scheme was dubbed classful by contrast. CIDR was designed to permit repartitioning of any address space so that smaller or larger blocks of addresses could be allocated to users. The hierarchical structure created by CIDR is managed by the Internet Assigned Numbers Authority (IANA) and the regional Internet registries (RIRs). Each RIR maintains a publicly searchable WHOIS database that provides information about IP address assignments. ### Special-use addresses The Internet Engineering Task Force (IETF) and IANA have restricted from general use various reserved IP addresses for special purposes. Notably these addresses are used for multicast traffic and to provide addressing space for unrestricted uses on private networks. (Table: special address blocks, giving for each block its address range, number of addresses, and scope.)
https://en.wikipedia.org/wiki/IPv4
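As a quick illustration of the slash notation, Python's standard ipaddress module can parse a CIDR block and report its size and subnets; the 192.0.2.0/24 documentation range used here is only an example and is unrelated to the special-use table in the passage.

```python
import ipaddress

# A /26 block carved out of the 192.0.2.0/24 documentation network
# (addresses chosen purely for illustration).
net = ipaddress.ip_network("192.0.2.64/26")
print(net.network_address)                   # 192.0.2.64
print(net.netmask)                           # 255.255.255.192
print(net.num_addresses)                     # 64 addresses in a /26
print(list(net.subnets(prefixlen_diff=1)))   # the two /27 blocks it contains
```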
passage: In the special case of an inertial observer in special relativity, the time is measured using the observer's clock and the observer's definition of simultaneity. The concept of proper time was introduced by Hermann Minkowski in 1908, and is an important feature of Minkowski diagrams. ## Mathematical formalism The formal definition of proper time involves describing the path through spacetime that represents a clock, observer, or test particle, and the metric structure of that spacetime. Proper time is the pseudo-Riemannian arc length of world lines in four-dimensional spacetime. From the mathematical point of view, coordinate time is assumed to be predefined and an expression for proper time as a function of coordinate time is required. On the other hand, proper time is measured experimentally and coordinate time is calculated from the proper time of inertial clocks. Proper time can only be defined for timelike paths through spacetime which allow for the construction of an accompanying set of physical rulers and clocks. The same formalism for spacelike paths leads to a measurement of proper distance rather than proper time. For lightlike paths, there exists no concept of proper time and it is undefined as the spacetime interval is zero. Instead, an arbitrary and physically irrelevant affine parameter unrelated to time must be introduced.
https://en.wikipedia.org/wiki/Proper_time
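For the special-relativistic case described above, the standard expression for the proper time accumulated along a timelike world line, written as a function of the inertial observer's coordinate time $$ t $$ , is $$ \Delta\tau = \int \sqrt{1 - \frac{v(t)^2}{c^2}}\, dt\,, $$ where $$ v(t) $$ is the clock's coordinate speed; a moving clock therefore accumulates less proper time than the observer's coordinate time, and as $$ v \to c $$ the integrand tends to zero, consistent with the vanishing spacetime interval along lightlike paths noted above.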