passage: ### Klein–Gordon and Dirac equations Attempts to combine quantum physics with special relativity began with building relativistic wave equations from the relativistic energy–momentum relation $$ E^2 = (pc)^2 + \left(m_0 c^2\right)^2, $$ instead of nonrelativistic energy equations. The Klein–Gordon equation and the Dirac equation are two such equations. The Klein–Gordon equation, $$ -\frac {1}{c^2} \frac{\partial^2}{\partial t^2} \psi + \nabla^2 \psi = \frac {m^2 c^2}{\hbar^2} \psi, $$ was the first such equation to be obtained, even before the nonrelativistic one-particle Schrödinger equation, and applies to massive spinless particles. Historically, Dirac obtained the Dirac equation by seeking a differential equation that would be first-order in both time and space, a desirable property for a relativistic theory. Taking the "square root" of the left-hand side of the Klein–Gordon equation in this way required factorizing it into a product of two operators, which Dirac wrote using 4 × 4 matrices $$ \alpha_1,\alpha_2,\alpha_3,\beta $$ .
https://en.wikipedia.org/wiki/Schr%C3%B6dinger_equation
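As a brief sketch of the factorization alluded to above (standard textbook form, not quoted from the passage): writing the wave equation as first order in time forces the coefficient matrices to anticommute, which is why ordinary numbers do not suffice and 4 × 4 matrices are needed.

$$
i\hbar\,\frac{\partial \psi}{\partial t} = \left(c\,\boldsymbol{\alpha}\cdot\mathbf{p} + \beta\, m c^{2}\right)\psi,
\qquad
\alpha_i\alpha_j + \alpha_j\alpha_i = 2\,\delta_{ij} I,\quad
\alpha_i\beta + \beta\alpha_i = 0,\quad
\beta^{2} = I.
$$

Squaring the operator on the right under these relations reproduces $E^2 = (pc)^2 + (m_0 c^2)^2$, which is how the first-order equation stays consistent with the relativistic energy–momentum relation.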
passage: #### Flat base change in the derived category A far-reaching extension of flat base change is possible when considering the base change map $$ Lg^* Rf_* (\mathcal{F}) \to Rf'_*(Lg'^*\mathcal{F}) $$ in the derived category of sheaves on S', similarly as mentioned above. Here $$ Lg^* $$ is the (total) derived functor of the pullback of $$ \mathcal O $$ -modules (because $$ g^* \mathcal G = \mathcal O_X \otimes_{g^{-1} \mathcal O_S} g^{-1} \mathcal G $$ involves a tensor product, $$ g^* $$ is not exact when $$ g $$ is not flat and therefore is not equal to its derived functor $$ Lg^* $$ ). This map is a quasi-isomorphism provided that the following conditions are satisfied: - $$ S $$ is quasi-compact and $$ f $$ is quasi-compact and quasi-separated, - $$ \mathcal F $$ is an object in $$ D^b(\mathcal{O}_X\text{-mod}) $$ , the bounded derived category of $$ \mathcal{O}_X $$ -modules, and its cohomology sheaves are quasi-coherent (for example, $$ \mathcal F $$ could be a bounded complex of quasi-coherent sheaves) - $$ X $$ and $$ S' $$ are Tor-independent over $$ S $$ , meaning that if
https://en.wikipedia.org/wiki/Base_change_theorems
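For orientation, the primed maps in the passage above come from the usual fiber-product square (standard notation assumed here, since the excerpt refers back to an earlier definition):

$$
\begin{array}{ccc}
X' = X \times_S S' & \xrightarrow{\;g'\;} & X \\
{\scriptstyle f'}\downarrow\quad & & \quad\downarrow{\scriptstyle f} \\
S' & \xrightarrow{\;g\;} & S
\end{array}
$$

so the base change map compares pulling back along $g$ after pushing forward along $f$ with pushing forward along $f'$ after pulling back along $g'$.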
passage: The current 500m record (for windsurfers) is held by French windsurfer Antoine Albeau. The women's 500m record is 48.03 knots, held by Jenna Gibson, from England, also in Luderitz. The men's nautical mile record is held by Bjorn Dunkerbeck and the women's mile record by Zara Davis, both set in Walvis Bay, Namibia. With the advent of cheap and small GPS units and the website www.gps-speedsurfing.com, speedsurfers have been able to organise impromptu competitions amongst themselves as well as more formal competitions such as the European Speed Meetings and Speedweeks/fortnights in Australia. With over 5000 sailors registered it is possible for windsurfers all over the world to compare speeds.

Men's Speed Sailing Records

| Date | Sailor | Location |
|---|---|---|
| 1 December 2024 | Antoine Albeau | Luderitz, Namibia |
| 5 November 2015 | Antoine Albeau | Luderitz, Namibia |
| November 2012 | Antoine Albeau | Luderitz, Namibia |

Women's Speed Sailing Records

| Date | Sailor | Location |
|---|---|---|
| 25 November 2024 | Jenna Gibson | Luderitz, Namibia |
| 25 November 2022 | Heidi Ulrich | Luderitz, Namibia |
| November 2017 | Zara Davis | Luderitz, Namibia |

### Indoor "In 1990 indoor windsurfing was born with the Palais Omnisports de Paris – Bercy making its spectacular debut. It was during this first indoor event that Britain's Nik Baker, from the south coast, flourished and went on to add a whopping x6 Indoor World Championships to his name".
https://en.wikipedia.org/wiki/Windsurfing
passage: Inside the braces, `c` might refer to the same object as `m`, so mutations to `m` could indirectly change `c` as well. Also, `c` might refer to the same object as `i`, but since the value then is immutable, there are no changes. However, `m` and `i` cannot legally refer to the same object. In the language of guarantees, mutable has no guarantees (the function might change the object), `const` is an outward-only guarantee that the function will not change anything, and `immutable` is a bidirectional guarantee (the function will not change the value and the caller must not change it). Values that are `const` or `immutable` must be initialized by direct assignment at the point of declaration or by a constructor. Because `const` parameters forget if the value was mutable or not, a similar construct, `inout`, acts, in a sense, as a variable for mutability information. A function of type `const(S) function(const(T))` returns `const(S)` typed values for mutable, const and immutable arguments. In contrast, a function of type `inout(S) function(inout(T))` returns `S` for mutable `T` arguments, `const(S)` for `const(T)` values, and `immutable(S)` for `immutable(T)` values.
https://en.wikipedia.org/wiki/Immutable_object
passage: If this action is transitive on some fiber, then it is transitive on all fibers, and we call the cover regular (or normal or Galois). Every such regular cover is a principal $$ G $$ -bundle, where $$ G = \operatorname{Aut}(p) $$ is considered as a discrete topological group. Every universal cover $$ p:D \to X $$ is regular, with deck transformation group isomorphic to the fundamental group $$ \pi_1(X) $$ . Examples: - Let $$ q : S^1 \to S^1 $$ be the covering $$ q(z)=z^{n} $$ for some $$ n \in \mathbb{N} $$ , then the map $$ d_k:S^1 \rightarrow S^1 : z \mapsto z \, e^{2\pi ik/n} $$ for $$ k \in \mathbb{Z} $$ is a deck transformation and $$ \operatorname{Deck}(q)\cong \mathbb{Z}/ n\mathbb{Z} $$ . - Let $$ r : \mathbb{R} \to S^1 $$ be the covering $$ r(t)=(\cos(2 \pi t), \sin(2 \pi t)) $$ , then the map $$ d_k:\mathbb{R} \rightarrow \mathbb{R} : t \mapsto t + k $$ for $$ k \in \mathbb{Z} $$ is a deck transformation and $$ \operatorname{Deck}(r)\cong \mathbb{Z} $$ . -
https://en.wikipedia.org/wiki/Covering_space
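A one-line check of the first example above, confirming that each $d_k$ really is a deck transformation (it commutes with the covering map):

$$
q\bigl(d_k(z)\bigr) = \left(z\,e^{2\pi i k/n}\right)^{n} = z^{n}\,e^{2\pi i k} = z^{n} = q(z).
$$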
passage: Grinding may serve the following purposes in engineering: - increase of the surface area of a solid - manufacturing of a solid with a desired grain size - pulping of resources ## Grinding laws In spite of a great number of studies in the field of fracture schemes, there is no known formula that connects the technical grinding work with grinding results. Mining engineers Peter von Rittinger, Friedrich Kick and Fred Chester Bond independently produced equations to relate the needed grinding work to the grain size produced, and a fourth engineer, R. T. Hukki, suggested that these three equations might each describe a narrow range of grain sizes and proposed uniting them along a single curve describing what has come to be known as the Hukki relationship. In stirred mills, the Hukki relationship does not apply and instead, experimentation has to be performed to determine any relationship. To evaluate the grinding results the grain size distribution of the source material (1) and of the ground material (2) is needed. The grinding degree is the ratio of characteristic sizes taken from these two grain size distributions. There are several definitions for this characteristic value: - Grinding degree referring to grain size d80 $$ Z_d=\frac{d_{80,1}}{d_{80,2}}\, $$ Instead of d80, d50 or another grain diameter can also be used.
https://en.wikipedia.org/wiki/Mill_%28grinding%29
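A minimal sketch of the grinding-degree calculation defined above, with hypothetical d80 values (the function name and numbers are illustrative, not from the source):

```python
def grinding_degree_d80(d80_feed, d80_product):
    """Grinding degree Z_d = d80 of the source material / d80 of the ground material."""
    return d80_feed / d80_product

# hypothetical d80 values in millimetres: feed 10 mm, ground product 0.5 mm
print(grinding_degree_d80(10.0, 0.5))   # Z_d = 20.0
```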
passage: Furthermore, $$ \lim_{x \to -\infty} F_X(x) = 0, \quad \lim_{x \to +\infty} F_X(x) = 1. $$ Every function with these three properties is a CDF, i.e., for every such function, a random variable can be defined such that the function is the cumulative distribution function of that random variable. If $$ X $$ is a purely discrete random variable, then it attains values $$ x_1,x_2,\ldots $$ with probability $$ p_i = p(x_i) $$ , and the CDF of $$ X $$ will be discontinuous at the points $$ x_i $$ : $$ F_X(x) = \operatorname{P}(X\leq x) = \sum_{x_i \leq x} \operatorname{P}(X = x_i) = \sum_{x_i \leq x} p(x_i). $$ If the CDF $$ F_X $$ of a real valued random variable $$ X $$ is continuous, then $$ X $$ is a continuous random variable; if furthermore $$ F_X $$ is absolutely continuous, then there exists a Lebesgue-integrable function $$ f_X(x) $$ such that $$ F_X(b)-F_X(a) = \operatorname{P}(a< X\leq b) = \int_a^b f_X(x)\,dx $$ for all real numbers $$ a $$
https://en.wikipedia.org/wiki/Cumulative_distribution_function
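A small sketch of the purely discrete case described above: the CDF is a step function obtained by summing $p(x_i)$ over all $x_i \le x$ (the helper function and the die example are illustrative assumptions, not from the source):

```python
import bisect

def make_discrete_cdf(points, probs):
    """Return F_X(x) = P(X <= x) for a discrete variable taking points[i] with probability probs[i].

    Assumes points is sorted and probs sums to 1.
    """
    cumulative = []
    total = 0.0
    for p in probs:
        total += p
        cumulative.append(total)

    def cdf(x):
        i = bisect.bisect_right(points, x)      # number of points x_i <= x
        return cumulative[i - 1] if i > 0 else 0.0

    return cdf

F = make_discrete_cdf([1, 2, 3, 4, 5, 6], [1 / 6] * 6)   # a fair die
print(F(0.5), F(3), F(3.5), F(6))                        # 0.0 0.5 0.5 1.0
```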
passage: $$ {\hat f}^s(\xi) = \int_{-\infty}^\infty \overbrace{f_\text{even}(t) \cdot \sin(2\pi \xi t)}^\text{even·odd=odd} \, dt = 0. $$ The sine transform represents the odd part of a function, while the cosine transform represents the even part of a function. ### Other conventions Just like the Fourier transform takes the form of different equations with different constant factors (see for discussion), other authors also define the cosine transform as $$ {\hat f}^c(\xi)=\sqrt{\frac{2}{\pi}} \int_0^\infty f(t)\cos(2\pi \xi t) \,dt $$ and the sine transform as $$ {\hat f}^s(\xi) =\sqrt{\frac{2}{\pi}} \int_0^\infty f(t)\sin(2\pi \xi t) \,dt. $$ Another convention defines the cosine transform as $$ F_c(\alpha) = \frac{2}{\pi} \int_0^\infty f(x) \cos(\alpha x) \, dx $$ and the sine transform as $$ F_s(\alpha) = \frac{2}{\pi} \int_0^\infty f(x) \sin(\alpha x) \, dx $$ using $$ \alpha $$ as the transformation variable. And while $$ t $$ is typically used to represent the time domain, $$ x $$ is often instead used to represent a spatial domain when transforming to spatial frequencies.
https://en.wikipedia.org/wiki/Sine_and_cosine_transforms
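A numerical sanity check of the half-line cosine transform (written here without any normalizing prefactor, since the passage shows that the prefactor varies by convention); the test function and the use of scipy quadrature are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import quad

def cosine_transform(f, xi):
    """Evaluate int_0^inf f(t) cos(2 pi xi t) dt by numerical quadrature."""
    value, _ = quad(lambda t: f(t) * np.cos(2 * np.pi * xi * t), 0, np.inf)
    return value

# known pair: f(t) = exp(-t)  ->  1 / (1 + (2 pi xi)^2)
xi = 0.3
print(cosine_transform(lambda t: np.exp(-t), xi))
print(1.0 / (1.0 + (2 * np.pi * xi) ** 2))   # the two values agree to several digits
```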
passage: - Mastoiditis - Inner ear diseases - BPPV – benign paroxysmal positional vertigo - Labyrinthitis/Vestibular neuronitis - Ménière's disease/Endolymphatic hydrops - Perilymphatic fistula - Acoustic neuroma, vestibular schwannoma - Facial nerve disease - Idiopathic facial palsy (Bell's Palsy) - Facial nerve tumors - Ramsay Hunt Syndrome - Symptoms - Hearing loss - Tinnitus (subjective noise in the ear) - Aural fullness (sense of fullness in the ear) - Otalgia (pain referring to the ear) - Otorrhea (fluid draining from the ear) - Vertigo - Imbalance ### Rhinology Rhinology includes nasal dysfunction and sinus diseases.
https://en.wikipedia.org/wiki/Otorhinolaryngology
passage: Phosphorylation of H2B at serine 10/14 (phospho-H2BS10/14) Phosphorylation of H2B at serine 10 (yeast) or serine 14 (mammals) is also linked to chromatin condensation, but for the very different purpose of mediating chromosome condensation during apoptosis. This mark is not simply a late-acting bystander in apoptosis, as yeast carrying mutations of this residue are resistant to hydrogen peroxide-induced apoptotic cell death. ### Addiction Epigenetic modifications of histone tails in specific regions of the brain are of central importance in addictions. Once particular epigenetic alterations occur, they appear to be long-lasting "molecular scars" that may account for the persistence of addictions. Cigarette smokers (about 15% of the US population) are usually addicted to nicotine. After 7 days of nicotine treatment of mice, acetylation of both histone H3 and histone H4 was increased at the FosB promoter in the nucleus accumbens of the brain, causing a 61% increase in FosB expression. This would also increase expression of the splice variant Delta FosB. In the nucleus accumbens of the brain, Delta FosB functions as a "sustained molecular switch" and "master control protein" in the development of an addiction. About 7% of the US population is addicted to alcohol. In rats exposed to alcohol for up to 5 days, there was an increase in histone 3 lysine 9 acetylation in the pronociceptin promoter in the brain amygdala complex.
https://en.wikipedia.org/wiki/Histone
passage: Generally, when the term Dirac delta function is used, it is in the sense of distributions rather than measures, the Dirac measure being among several terms for the corresponding notion in measure theory. Some sources may also use the term Dirac delta distribution. ### Generalizations The delta function can be defined in $$ n $$ -dimensional Euclidean space $$ \mathbf{R}^n $$ as the measure such that $$ \int_{\mathbf{R}^n} f(\mathbf{x})\,\delta(d\mathbf{x}) = f(\mathbf{0}) $$ for every compactly supported continuous function $$ f $$ . As a measure, the $$ n $$ -dimensional delta function is the product measure of the 1-dimensional delta functions in each variable separately. Thus, formally, with $$ \mathbf{x} = (x_1, \dots, x_n) $$ , one has $$ \delta(\mathbf{x}) = \delta(x_1)\,\delta(x_2)\cdots\delta(x_n). $$ The delta function can also be defined in the sense of distributions exactly as above in the one-dimensional case. However, despite widespread use in engineering contexts, this product formula should be manipulated with care, since the product of distributions can only be defined under quite narrow circumstances. The notion of a Dirac measure makes sense on any set. Thus if $$ X $$ is a set, $$ x_0 \in X $$ is a marked point, and a sigma algebra of subsets of $$ X $$ has been chosen, then the measure defined on sets $$ A $$ of that sigma algebra by $$ \delta_{x_0}(A)=\begin{cases} 1 &\text{if }x_0\in A\\ 0 &\text{if }x_0\notin A \end{cases} $$ is the delta measure or unit mass concentrated at $$ x_0 $$ .
https://en.wikipedia.org/wiki/Dirac_delta_function
passage: The ground state wave function is known as the $$ 1\mathrm{s} $$ wavefunction. It is written as: $$ \psi_{1 \mathrm{s}} (r) = \frac{1}{\sqrt{\pi} a_0^{3 / 2}} \mathrm{e}^{-r / a_0}. $$ Here, $$ a_0 $$ is the numerical value of the Bohr radius. The probability density of finding the electron at a distance $$ r $$ in any radial direction is the squared value of the wavefunction: $$ | \psi_{1 \mathrm{s}} (r) |^2 = \frac{1}{\pi a_0^3} \mathrm{e}^{-2 r / a_0}. $$ The $$ 1 \mathrm{s} $$ wavefunction is spherically symmetric, and the surface area of a shell at distance $$ r $$ is $$ 4 \pi r^2 $$ , so the total probability $$ P(r) \, dr $$ of the electron being in a shell at a distance $$ r $$ and thickness $$ dr $$ is $$ P (r) \, \mathrm dr = 4 \pi r^2 | \psi_{1 \mathrm{s}} (r) |^2 \, \mathrm dr. $$ It turns out that this is a maximum at $$ r = a_0 $$ .
https://en.wikipedia.org/wiki/Hydrogen_atom
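The claimed maximum at $r = a_0$ follows from a short differentiation of the radial probability density given above:

$$
P(r) = \frac{4}{a_0^{3}}\, r^{2} e^{-2r/a_0},
\qquad
\frac{\mathrm{d}P}{\mathrm{d}r} = \frac{4}{a_0^{3}}\left(2r - \frac{2r^{2}}{a_0}\right) e^{-2r/a_0} = 0
\;\Longrightarrow\; r = a_0 .
$$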
passage: The graviton propagator for (Anti) de Sitter space is $$ G = \frac{\mathcal{P}^2}{2H^2-\Box} + \frac{\mathcal{P}^0_s}{2(\Box+4H^2)}, $$ where $$ H $$ is the Hubble constant. Note that upon taking the limit $$ H \to 0 $$ and $$ \Box \to -k^2 $$ , the AdS propagator reduces to the Minkowski propagator. ## Related singular functions The scalar propagators are Green's functions for the Klein–Gordon equation. There are related singular functions which are important in quantum field theory. These functions are most simply defined in terms of the vacuum expectation value of products of field operators. ### Solutions to the Klein–Gordon equation
https://en.wikipedia.org/wiki/Propagator
passage: By relying on factors such as keyword density, which were exclusively within a webmaster's control, early search engines suffered from abuse and ranking manipulation. To provide better results to their users, search engines had to adapt to ensure their results pages showed the most relevant search results, rather than unrelated pages with numerous keywords by unscrupulous webmasters. This meant moving away from heavy reliance on term density to a more holistic process for scoring semantic signals. Search engines responded by developing more complex ranking algorithms, taking into account additional factors that were more difficult for webmasters to manipulate. Some search engines have also reached out to the SEO industry and are frequent sponsors and guests at SEO conferences, webchats, and seminars. Major search engines provide information and guidelines to help with website optimization. Google has a Sitemaps program to help webmasters learn if Google is having any problems indexing their website and also provides data on Google traffic to the website. Bing Webmaster Tools provides a way for webmasters to submit a sitemap and web feeds, allows users to determine the "crawl rate", and track the web pages index status. In 2015, it was reported that Google was developing and promoting mobile search as a key feature within future products. In response, many brands began to take a different approach to their Internet marketing strategies. ### Relationship with Google In 1998, two graduate students at Stanford University, Larry Page and Sergey Brin, developed "Backrub", a search engine that relied on a mathematical algorithm to rate the prominence of web pages. The number calculated by the algorithm, PageRank, is a function of the quantity and strength of inbound links.
https://en.wikipedia.org/wiki/Search_engine_optimization
passage: For instance, in some back stories, the temple is a monastery, and the priests are monks. The temple or monastery may be in various locales including Hanoi, and may be associated with any religion. In some versions, other elements are introduced, such as the fact that the tower was created at the beginning of the world, or that the priests or monks may make only one move per day. ## Solution The puzzle can be played with any number of disks, although many toy versions have around 7 to 9 of them. The minimal number of moves required to solve a Tower of Hanoi puzzle with n disks is $$ 2^n - 1 $$ . ### Iterative solution A simple solution for the toy puzzle is to alternate moves between the smallest piece and a non-smallest piece. When moving the smallest piece, always move it to the next position in the same direction (to the right if the starting number of pieces is even, to the left if the starting number of pieces is odd). If there is no tower position in the chosen direction, move the piece to the opposite end, but then continue to move in the correct direction. For example, if you started with three pieces, you would move the smallest piece to the opposite end, then continue in the left direction after that. When the turn is to move the non-smallest piece, there is only one legal move. Doing this will complete the puzzle in the fewest moves.
https://en.wikipedia.org/wiki/Tower_of_Hanoi
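A runnable sketch of the iterative rule described above (alternate between the smallest disk, which always steps in a fixed cyclic direction, and the single legal non-smallest move); the peg numbering 0 → 2 and the return format are illustrative choices, not from the source:

```python
def solve_hanoi(n):
    """Iteratively solve Tower of Hanoi with n disks, moving all disks from peg 0 to peg 2.

    Returns the list of moves as (disk, from_peg, to_peg); the list has 2**n - 1 entries.
    """
    pegs = [list(range(n, 0, -1)), [], []]   # peg 0 holds all disks, top of a peg = last element
    step = 1 if n % 2 == 0 else -1           # smallest disk steps "right" if n is even, "left" if odd
    small_pos, moves = 0, []
    for move_number in range(1, 2 ** n):
        if move_number % 2 == 1:             # odd-numbered move: move the smallest disk
            dst = (small_pos + step) % 3
            pegs[dst].append(pegs[small_pos].pop())
            moves.append((1, small_pos, dst))
            small_pos = dst
        else:                                # even-numbered move: the only legal non-smallest move
            a, b = [p for p in range(3) if p != small_pos]
            top_a = pegs[a][-1] if pegs[a] else float("inf")
            top_b = pegs[b][-1] if pegs[b] else float("inf")
            src, dst = (a, b) if top_a < top_b else (b, a)
            moves.append((pegs[src][-1], src, dst))
            pegs[dst].append(pegs[src].pop())
    return moves

print(len(solve_hanoi(3)))   # 7 moves, i.e. 2**3 - 1
```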
passage: See also Nordström's theory of gravitation for how this could be modified to deal with changes over time. This form is reprised in the next example of a scalar field theory. The variation of the integral with respect to $$ \Phi $$ is: $$ \delta \mathcal{L}(\mathbf{x},t) = - \rho (\mathbf{x},t) \delta\Phi (\mathbf{x},t) - {2 \over 8 \pi G} (\nabla \Phi (\mathbf{x},t)) \cdot (\nabla \delta\Phi (\mathbf{x},t)) . $$ After integrating by parts, discarding the total integral, and dividing out by $$ \delta\Phi $$ the formula becomes: $$ 0 = - \rho (\mathbf{x},t) + \frac{1}{4 \pi G} \nabla \cdot \nabla \Phi (\mathbf{x},t) $$ which is equivalent to: $$ 4 \pi G \rho (\mathbf{x},t) = \nabla^2 \Phi (\mathbf{x},t) $$ which yields Gauss's law for gravity.
https://en.wikipedia.org/wiki/Lagrangian_%28field_theory%29
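The Lagrangian density whose variation is quoted above is not shown in the excerpt; the form consistent with that variation (and with the standard Newtonian-gravity field Lagrangian) is

$$
\mathcal{L}(\mathbf{x},t) = -\,\rho(\mathbf{x},t)\,\Phi(\mathbf{x},t) - \frac{1}{8\pi G}\,\bigl(\nabla\Phi(\mathbf{x},t)\bigr)^{2},
$$

since varying $\Phi$ in the second term produces exactly the $-\tfrac{2}{8\pi G}\,\nabla\Phi\cdot\nabla\delta\Phi$ contribution shown.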
passage: International infrastructure may be expanded giving more sovereignty to the international level. This could help coordinate efforts for arms control. International institutions dedicated specifically to nanotechnology (perhaps analogously to the International Atomic Energy Agency IAEA) or general arms control may also be designed. One may also jointly progress in differential technological development on defensive technologies, a policy that players should usually favour. The Center for Responsible Nanotechnology also suggests some technical restrictions. Improved transparency regarding technological abilities may be another important facilitator for arms control. Grey goo is another catastrophic scenario; it was proposed by Eric Drexler in his 1986 book Engines of Creation, has been analyzed by Freitas in "Some Limits to Global Ecophagy by Biovorous Nanoreplicators, with Public Policy Recommendations", and has been a theme in mainstream media and fiction. This scenario involves tiny self-replicating robots that consume the entire biosphere, using it as a source of energy and building blocks. Nanotech experts including Drexler now discredit the scenario. According to Chris Phoenix, "So-called grey goo could only be the product of a deliberate and difficult engineering process, not an accident". With the advent of nano-biotech, a different scenario called green goo has been forwarded. Here, the malignant substance is not nanobots but rather self-replicating biological organisms engineered through nanotechnology.
https://en.wikipedia.org/wiki/Molecular_nanotechnology
passage: Density functional theory (DFT) is a computational quantum mechanical modelling method used in physics, chemistry and materials science to investigate the electronic structure (or nuclear structure) (principally the ground state) of many-body systems, in particular atoms, molecules, and the condensed phases. Using this theory, the properties of a many-electron system can be determined by using functionals - that is, functions that accept a function as input and output a single real number. In the case of DFT, these are functionals of the spatially dependent electron density. DFT is among the most popular and versatile methods available in condensed-matter physics, computational physics, and computational chemistry. DFT has been very popular for calculations in solid-state physics since the 1970s. However, DFT was not considered accurate enough for calculations in quantum chemistry until the 1990s, when the approximations used in the theory were greatly refined to better model the exchange and correlation interactions. Computational costs are relatively low when compared to traditional methods, such as exchange-only Hartree–Fock theory and its descendants that include electron correlation. Since then, DFT has become an important tool for methods of nuclear spectroscopy such as Mössbauer spectroscopy or perturbed angular correlation, in order to understand the origin of specific electric field gradients in crystals.
https://en.wikipedia.org/wiki/Density_functional_theory
passage: Let $$ \lambda^n : B_0(\mathbb{R}^n) \to [0, +\infty] $$ denote the usual $$ n $$ -dimensional Lebesgue measure. Then the standard Gaussian measure $$ \gamma^n : B_0(\mathbb{R}^n) \to [0, 1] $$ is defined by $$ \gamma^{n} (A) = \frac{1}{\sqrt{2 \pi}^{n}} \int_{A} \exp \left( - \frac{1}{2} \left\| x \right\|_{\mathbb{R}^{n}}^{2} \right) \, \mathrm{d} \lambda^{n} (x) $$ for any measurable set $$ A \in B_0(\mathbb{R}^n) $$ . In terms of the Radon–Nikodym derivative, $$ \frac{\mathrm{d} \gamma^{n}}{\mathrm{d} \lambda^{n}} (x) = \frac{1}{\sqrt{2 \pi}^{n}} \exp \left( - \frac{1}{2} \left\| x \right\|_{\mathbb{R}^{n}}^{2} \right). $$ More generally, the Gaussian measure with mean $$ \mu \in \mathbb{R}^n $$
https://en.wikipedia.org/wiki/Gaussian_measure
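Because the standard Gaussian measure above is a product of one-dimensional Gaussian measures, its value on an axis-aligned box factors into normal CDF differences; a small illustrative sketch (the helper function and the example box are assumptions, not from the source):

```python
import numpy as np
from scipy.stats import norm

def gaussian_measure_box(lower, upper):
    """Standard Gaussian measure gamma^n of the box prod_i [lower_i, upper_i] in R^n."""
    lower, upper = np.asarray(lower, dtype=float), np.asarray(upper, dtype=float)
    return float(np.prod(norm.cdf(upper) - norm.cdf(lower)))

# gamma^2 of the square [-1, 1] x [-1, 1]: (P(|Z| <= 1))^2 ~ 0.6827^2 ~ 0.466
print(gaussian_measure_box([-1, -1], [1, 1]))
```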
passage: ### Implementation details There are a number of simple optimizations or implementation details that can significantly affect the performance of an A* implementation. The first detail to note is that the way the priority queue handles ties can have a significant effect on performance in some situations. If ties are broken so the queue behaves in a LIFO manner, A* will behave like depth-first search among equal cost paths (avoiding exploring more than one equally optimal solution). When a path is required at the end of the search, it is common to keep with each node a reference to that node's parent. At the end of the search, these references can be used to recover the optimal path. If these references are being kept then it can be important that the same node doesn't appear in the priority queue more than once (each entry corresponding to a different path to the node, and each with a different cost). A standard approach here is to check if a node about to be added already appears in the priority queue. If it does, then the priority and parent pointers are changed to correspond to the lower-cost path. A standard binary heap based priority queue does not directly support the operation of searching for one of its elements, but it can be augmented with a hash table that maps elements to their position in the heap, allowing this decrease-priority operation to be performed in logarithmic time. Alternatively, a Fibonacci heap can perform the same decrease-priority operations in constant amortized time.
https://en.wikipedia.org/wiki/A%2A_search_algorithm
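A sketch of the augmentation described above: a binary min-heap paired with a hash table mapping each item to its heap position, so that membership tests and the decrease-priority operation run in logarithmic time. The class and method names are illustrative assumptions, not part of any particular A* library.

```python
class IndexedMinHeap:
    """Binary min-heap of (priority, item) pairs with an item -> index map for decrease-key."""

    def __init__(self):
        self._heap = []     # list of (priority, item)
        self._pos = {}      # item -> index in self._heap

    def __contains__(self, item):
        return item in self._pos

    def push_or_decrease(self, item, priority):
        """Insert item, or lower its priority if it is already queued with a higher one."""
        if item in self._pos:
            i = self._pos[item]
            if priority >= self._heap[i][0]:
                return False                      # existing entry is already at least as good
            self._heap[i] = (priority, item)
            self._sift_up(i)
        else:
            self._heap.append((priority, item))
            self._pos[item] = len(self._heap) - 1
            self._sift_up(len(self._heap) - 1)
        return True

    def pop(self):
        """Remove and return the (priority, item) pair with the smallest priority."""
        top = self._heap[0]
        last = self._heap.pop()
        del self._pos[top[1]]
        if self._heap:
            self._heap[0] = last
            self._pos[last[1]] = 0
            self._sift_down(0)
        return top

    def _swap(self, i, j):
        self._heap[i], self._heap[j] = self._heap[j], self._heap[i]
        self._pos[self._heap[i][1]] = i
        self._pos[self._heap[j][1]] = j

    def _sift_up(self, i):
        while i > 0 and self._heap[i][0] < self._heap[(i - 1) // 2][0]:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        n = len(self._heap)
        while True:
            left, right, smallest = 2 * i + 1, 2 * i + 2, i
            if left < n and self._heap[left][0] < self._heap[smallest][0]:
                smallest = left
            if right < n and self._heap[right][0] < self._heap[smallest][0]:
                smallest = right
            if smallest == i:
                return
            self._swap(i, smallest)
            i = smallest
```

In an A* loop, `push_or_decrease(node, f_score)` plays the role of the "check whether the node already appears in the queue" step; when it returns True, the node's parent pointer would be updated to the new, cheaper path as well.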
passage: Although many plants do prefer slightly basic soil (including vegetables like cabbage and fodder like buffalo grass), most plants prefer mildly acidic soil (with pHs between 6.0 and 6.8), and alkaline soils can cause problems. ## Alkali lakes In alkali lakes (also called soda lakes), evaporation concentrates the naturally occurring carbonate salts, giving rise to an alkalic and often saline lake. Examples of alkali lakes: - Alkali Lake, Lake County, Oregon - Baldwin Lake, San Bernardino County, California - Bear Lake on the Utah–Idaho border - Lake Magadi in Kenya - Lake Turkana in Kenya - Mono Lake, near Owens Valley in California - Redberry Lake, Saskatchewan - Summer Lake, Lake County, Oregon - Tramping Lake, Saskatchewan
https://en.wikipedia.org/wiki/Alkali
passage: | Property | Archaea | Bacteria | Eukaryota |
|---|---|---|---|
| Cell membrane | Ether-linked lipids | Ester-linked lipids | Ester-linked lipids |
| Cell wall | Glycoprotein, or S-layer; rarely pseudopeptidoglycan | Peptidoglycan, S-layer, or no cell wall | Various structures |
| Gene structure | Circular chromosomes, similar translation and transcription to Eukaryota | Circular chromosomes, unique translation and transcription | Multiple, linear chromosomes, but translation and transcription similar to Archaea |
| Internal cell structure | No membrane-bound organelles (?) or nucleus | No membrane-bound organelles or nucleus | Membrane-bound organelles and nucleus |
| Metabolism | Various, including diazotrophy, with methanogenesis unique to Archaea | Various, including photosynthesis, aerobic and anaerobic respiration, fermentation, diazotrophy, and autotrophy | Photosynthesis, cellular respiration, and fermentation; no diazotrophy |
| Reproduction | Asexual reproduction, horizontal gene transfer | Asexual reproduction, horizontal gene transfer | Sexual and asexual reproduction |
| | Methionine | Formylmethionine | Methionine |
| RNA polymerase | One | One | Many |
| EF-2/EF-G | Sensitive to diphtheria toxin | Resistant to diphtheria toxin | Sensitive to diphtheria toxin |

Archaea were split off as a third domain because of the large differences in their ribosomal RNA structure. The particular molecule 16S rRNA is key to the production of proteins in all organisms.
https://en.wikipedia.org/wiki/Archaea
passage: In algebraic geometry, a smooth scheme over a field is a scheme which is well approximated by affine space near any point. Smoothness is one way of making precise the notion of a scheme with no singular points. A special case is the notion of a smooth variety over a field. Smooth schemes play the role in algebraic geometry of manifolds in topology. ## Definition First, let X be an affine scheme of finite type over a field k. Equivalently, X has a closed immersion into affine space An over k for some natural number n. Then X is the closed subscheme defined by some equations g1 = 0, ..., gr = 0, where each gi is in the polynomial ring k[x1,..., xn]. The affine scheme X is smooth of dimension m over k if X has dimension at least m in a neighborhood of each point, and the matrix of derivatives (∂gi/∂xj) has rank at least n−m everywhere on X. (It follows that X has dimension equal to m in a neighborhood of each point.) Smoothness is independent of the choice of immersion of X into affine space. The condition on the matrix of derivatives is understood to mean that the closed subset of X where all (n−m) × (n − m) minors of the matrix of derivatives are zero is the empty set. Equivalently, the ideal in the polynomial ring generated by all gi and all those minors is the whole polynomial ring.
https://en.wikipedia.org/wiki/Smooth_scheme
passage: ### Notes to the sample code and diagrams of insertion and removal The proposal breaks down both insertion and removal (not mentioning some very simple cases) into six constellations of nodes, edges, and colors, which are called cases. The proposal contains, for both insertion and removal, exactly one case that advances one black level closer to the root and loops, the other five cases rebalance the tree of their own. The more complicated cases are pictured in a diagram. - symbolises a red node and a (non-NULL) black node (of black height ≥ 1), symbolises the color red or black of a non-NULL node, but the same color throughout the same diagram. NULL nodes are not represented in the diagrams. - The variable N denotes the current node, which is labeled  N  or  N  in the diagrams. - A diagram contains three columns and two to four actions. The left column shows the first iteration, the right column the higher iterations, the middle column shows the segmentation of a case into its different actions. 1. The action "entry" shows the constellation of nodes with their colors which defines a case and mostly violates some of the requirements. A blue border rings the current node N and the other nodes are labeled according to their relation to N. 1. If a rotation is considered useful, this is pictured in the next action, which is labeled "rotation". 1. If some recoloring is considered useful, this is pictured in the next action, which is labeled "color". 1.
https://en.wikipedia.org/wiki/Red%E2%80%93black_tree
passage: The V60 plug-in hybrid was released in 2011 and was available for sale. In October 2010 Lotus Engineering unveiled the Lotus CityCar, a plug-in series hybrid concept car designed for flex-fuel operation on ethanol, or methanol as well as regular gasoline. The lithium battery pack provides an all-electric range of , and the 1.2-liter flex-fuel engine kicks in to allow to extend the range to more than . GM officially launched the Chevrolet Volt in the U.S. on November 30, 2010, and retail deliveries began in December 2010. Its sibling the Opel/Vauxhall Ampera was launched in Europe between late 2011 and early 2012. The first deliveries of the Fisker Karma took place in July 2011, and deliveries to retail customers began in November 2011. The Toyota Prius Plug-in Hybrid was released in Japan in January 2012, followed by the United States in February 2012. Deliveries of the Prius PHV in Europe began in late June 2012. The Ford C-Max Energi was released in the U.S. in October 2012, the Volvo V60 Plug-in Hybrid in Sweden by late 2012. The Honda Accord Plug-in Hybrid was released in selected U.S. markets in January 2013, and the Mitsubishi Outlander PHEV in Japan in January 2013, becoming the first SUV plug-in hybrid in the market. Deliveries of the Ford Fusion Energi began in February 2013. BYD Auto stopped production of its BYD F3DM due to low sales, and its successor, the BYD Qin, began sales in Costa Rica in November 2013, with sales in other countries in Latin America scheduled to begin in 2014. Qin deliveries began in China in mid December 2013.
https://en.wikipedia.org/wiki/Plug-in_hybrid
passage: The radiation can be either from internal sources (brachytherapy) or external sources. The radiation is most commonly low energy X-rays for treating skin cancers, while higher energy X-rays are used for cancers within the body. Radiation is typically used in addition to surgery and/or chemotherapy. For certain types of cancer, such as early head and neck cancer, it may be used alone. Radiation therapy after surgery for brain metastases has been shown to not improve overall survival in patients compared to surgery alone. For painful bone metastasis, radiation therapy has been found to be effective in about 70% of patients. ### Surgery Surgery is the primary method of treatment for most isolated, solid cancers and may play a role in palliation and prolongation of survival. It is typically an important part of the definitive diagnosis and staging of tumors, as biopsies are usually required. In localized cancer, surgery typically attempts to remove the entire mass along with, in certain cases, the lymph nodes in the area. For some types of cancer this is sufficient to eliminate the cancer. ### Palliative care Palliative care is treatment that attempts to help the patient feel better and may be combined with an attempt to treat the cancer. Palliative care includes action to reduce physical, emotional, spiritual and psycho-social distress. Unlike treatment that is aimed at directly killing cancer cells, the primary goal of palliative care is to improve quality of life. People at all stages of cancer treatment typically receive some kind of palliative care.
https://en.wikipedia.org/wiki/Cancer
passage: Neutrons were discovered in the early 1930s, and their diffraction was observed in 1936. In 1944, Ernest O. Wollan, with a background in X-ray scattering from his PhD work under Arthur Compton, recognized the potential for applying thermal neutrons from the newly operational X-10 nuclear reactor to crystallography. Joined by Clifford G. Shull, they developed neutron diffraction throughout the 1940s. In the 1970s, a neutron interferometer demonstrated the action of gravity in relation to wave–particle duality. The double-slit experiment was performed using neutrons in 1988. #### Atoms Interference of atom matter waves was first observed by Immanuel Estermann and Otto Stern in 1930, when a Na beam was diffracted off a surface of NaCl. The short de Broglie wavelength of atoms prevented progress for many years until two technological breakthroughs revived interest: microlithography allowing precise small devices and laser cooling allowing atoms to be slowed, increasing their de Broglie wavelength. The double-slit experiment on atoms was performed in 1991. Advances in laser cooling allowed cooling of neutral atoms down to nanokelvin temperatures. At these temperatures, the de Broglie wavelengths come into the micrometre range. Using Bragg diffraction of atoms and a Ramsey interferometry technique, the de Broglie wavelength of cold sodium atoms was explicitly measured and found to be consistent with the temperature measured by a different method. ####
https://en.wikipedia.org/wiki/Matter_wave
passage: where the derivative of the left-hand side is the material derivative of the variable c. In non-interacting material the diffusivity vanishes (for example, when temperature is close to absolute zero, a dilute gas has almost zero mass diffusivity), hence the transport equation is simply the continuity equation: $$ \frac{\partial c}{\partial t} + \mathbf{v} \cdot \nabla c=0. $$ Using Fourier transform in both temporal and spatial domain (that is, with integral kernel $$ e^{i\omega t+i\mathbf{k}\cdot\mathbf{x}} $$ ), its characteristic equation can be obtained: $$ i\omega \tilde c+\mathbf{v}\cdot i \mathbf{k} \tilde c=0 \rightarrow \omega=-\mathbf{k}\cdot \mathbf{v}, $$ which gives the general solution: $$ c = f(\mathbf{x}-\mathbf{v}t), $$ where $$ f $$ is any differentiable scalar function.
https://en.wikipedia.org/wiki/Convection%E2%80%93diffusion_equation
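A quick chain-rule check that the stated general solution satisfies the continuity equation above (here $\nabla f$ is evaluated at $\mathbf{x}-\mathbf{v}t$ and $\mathbf{v}$ is constant):

$$
\frac{\partial c}{\partial t} = -\,\mathbf{v}\cdot\nabla f(\mathbf{x}-\mathbf{v}t),
\qquad
\mathbf{v}\cdot\nabla c = \mathbf{v}\cdot\nabla f(\mathbf{x}-\mathbf{v}t),
\qquad\text{so}\qquad
\frac{\partial c}{\partial t} + \mathbf{v}\cdot\nabla c = 0 .
$$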
passage: | Operation | Description | Vertex-vertex | Face-vertex | Winged-edge | Render dynamic |
|---|---|---|---|---|---|
| V-V | All vertices around vertex | Explicit | V → f1, f2, f3, ... → v1, v2, v3, ... | V → e1, e2, e3, ... → v1, v2, v3, ... | V → e1, e2, e3, ... → v1, v2, v3, ... |
| E-F | All edges of a face | F(a,b,c) → {a,b}, {b,c}, {a,c} | F → {a,b}, {b,c}, {a,c} | Explicit | Explicit |
| V-F | All vertices of a face | F(a,b,c) → {a,b,c} | Explicit | F → e1, e2, e3 → a, b, c | Explicit |
| F-V | All faces around a vertex | Pair search | Explicit | V → e1, e2, e3 → f1, f2, f3, ... | Explicit |
| E-V | All edges around a vertex | V → {v,v1}, {v,v2}, {v,v3}, ... | V → f1, f2, f3, ... → v1, v2, v3, ... | Explicit | Explicit |
| F-E | Both faces of an edge | List compare | List compare | Explicit | Explicit |
| V-E | Both vertices of an edge | E(a,b) → {a,b} | E(a,b) → {a,b} | Explicit | Explicit |
| Flook | Find face with given vertices | F(a,b,c) → {a,b,c} | Set intersection of | | |
https://en.wikipedia.org/wiki/Polygon_mesh
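A tiny sketch of the pattern in the face-vertex column of the table above: faces store their vertices explicitly (the V-F query), while "all faces around a vertex" (F-V) needs a derived vertex-to-face map. The quad geometry used here is a hypothetical example.

```python
from collections import defaultdict

# A unit quad split into two triangles, stored face-vertex style.
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
faces = [(0, 1, 2), (0, 2, 3)]           # each face lists its vertex indices explicitly

# F-V: build "all faces around a vertex" by inverting the explicit face lists.
faces_around_vertex = defaultdict(list)
for f_idx, face in enumerate(faces):
    for v in face:
        faces_around_vertex[v].append(f_idx)

print(faces[1])                  # V-F is explicit: (0, 2, 3)
print(faces_around_vertex[0])    # F-V is derived:  [0, 1] -- both triangles touch vertex 0
```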
passage: The vector's components are referred to as yi,t, meaning the observation at time t of the i th variable. For example, if the first variable in the model measures the price of wheat over time, then y1,1998 would indicate the price of wheat in the year 1998. VAR models are characterized by their order, which refers to the number of earlier time periods the model will use. Continuing the above example, a 5th-order VAR would model each year's wheat price as a linear combination of the last five years of wheat prices. A lag is the value of a variable in a previous time period. So in general a pth-order VAR refers to a VAR model which includes lags for the last p time periods. A pth-order VAR is denoted "VAR(p)" and sometimes called "a VAR with p lags". A pth-order VAR model is written as $$ y_t = c + A_1 y_{t-1} + A_2 y_{t-2} + \cdots + A_p y_{t-p} + e_t, \, $$ The variables of the form yt−i indicate that variable's value i time periods earlier and are called the "ith lag" of yt. The variable c is a k-vector of constants serving as the intercept of the model. Ai is a time-invariant (k × k)-matrix and et is a k-vector of error terms. The error terms must satisfy three conditions: 1. $$ \mathrm{E}(e_t) = 0\, $$ .
https://en.wikipedia.org/wiki/Vector_autoregression
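A short numpy sketch that simulates the VAR(p) recursion written above; the coefficient matrices, intercept, and error distribution are hypothetical values chosen only for illustration:

```python
import numpy as np

def simulate_var(c, A_list, T, rng=None):
    """Simulate y_t = c + A_1 y_{t-1} + ... + A_p y_{t-p} + e_t with standard-normal errors."""
    rng = np.random.default_rng(0) if rng is None else rng
    k, p = len(c), len(A_list)
    y = np.zeros((T + p, k))                 # the first p rows serve as zero pre-sample values
    for t in range(p, T + p):
        e_t = rng.standard_normal(k)
        y[t] = c + sum(A @ y[t - i - 1] for i, A in enumerate(A_list)) + e_t
    return y[p:]

# a hypothetical two-variable VAR(2)
c = np.array([0.1, 0.0])
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[0.2, 0.0], [0.1, 0.1]])
series = simulate_var(c, [A1, A2], T=200)
print(series.shape)   # (200, 2)
```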
passage: This approximation, which reflects the "molecular clock" hypothesis that a roughly constant rate of evolutionary change can be used to extrapolate the elapsed time since two genes first diverged (that is, the coalescence time), assumes that the effects of mutation and selection are constant across sequence lineages. Therefore, it does not account for possible differences among organisms or species in the rates of DNA repair or the possible functional conservation of specific regions in a sequence. (In the case of nucleotide sequences, the molecular clock hypothesis in its most basic form also discounts the difference in acceptance rates between silent mutations that do not alter the meaning of a given codon and other mutations that result in a different amino acid being incorporated into the protein.) More statistically accurate methods allow the evolutionary rate on each branch of the phylogenetic tree to vary, thus producing better estimates of coalescence times for genes. ### Sequence motifs Frequently the primary structure encodes motifs that are of functional importance. Some examples of sequence motifs are: the C/D and H/ACA boxes of snoRNAs, Sm binding site found in spliceosomal RNAs such as U1, U2, U4, U5, U6, U12 and U3, the Shine-Dalgarno sequence, the Kozak consensus sequence and the RNA polymerase III terminator.
https://en.wikipedia.org/wiki/Nucleic_acid_sequence
passage: First, the sign of the left-hand side is positive since either all three of the ratios are positive, the case where $$ O $$ is inside the triangle (upper diagram), or one is positive and the other two are negative, the case where $$ O $$ is outside the triangle (lower diagram shows one case). To check the magnitude, note that the area of a triangle of a given height is proportional to its base. So $$ \frac{|\triangle BOD|}{|\triangle COD|}=\frac{\overline{BD}}{\overline{DC}}=\frac{|\triangle BAD|}{|\triangle CAD|}. $$ Therefore, $$ \frac{\overline{BD}}{\overline{DC}}= \frac{|\triangle BAD|-|\triangle BOD|}{|\triangle CAD|-|\triangle COD|} =\frac{|\triangle ABO|}{|\triangle CAO|}. $$ (Replace the minus with a plus if $$ O $$ and $$ A $$ are on opposite sides of $$ BC $$ .)
https://en.wikipedia.org/wiki/Ceva%27s_theorem
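For context, the area ratios in the passage are the ingredients of Ceva's theorem itself; with the usual labelling of the three cevian feet (the points $E$ and $F$ are not named in the excerpt, so this is the standard statement rather than a quotation) it reads

$$
\frac{\overline{BD}}{\overline{DC}}\cdot\frac{\overline{CE}}{\overline{EA}}\cdot\frac{\overline{AF}}{\overline{FB}} = 1 .
$$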
passage: In the example, the fluent $$ \textit{isCarrying}(o,s) $$ can be used to indicate that the robot is carrying a particular object in a particular situation. If the robot initially carries nothing, $$ \textit{isCarrying}(Ball,S_{0}) $$ is false while $$ \textit{isCarrying}(Ball,do(pickup(Ball),S_{0})) $$ is true. The location of the robot can be modeled using a functional fluent $$ location(s) $$ that returns the location $$ (x,y) $$ of the robot in a particular situation. ## Formulae The description of a dynamic world is encoded in second-order logic using three kinds of formulae: formulae about actions (preconditions and effects), formulae about the state of the world, and foundational axioms. ### Action preconditions Some actions may not be executable in a given situation. For example, it is impossible to put down an object unless one is in fact carrying it. The restrictions on the performance of actions are modeled by literals of the form $$ \textit{Poss}(a,s) $$ , where $$ a $$ is an action, $$ s $$ a situation, and $$ \textit{Poss} $$ is a special binary predicate denoting executability of actions.
https://en.wikipedia.org/wiki/Situation_calculus
passage: Heat-related deaths can occur indoors, for instance among elderly people living alone. In these cases it can be challenging to assign heat as a contributing factor. ### Heat index for temperature and relative humidity The heat index in the table above is a measure of how hot it feels when relative humidity is factored with the actual air temperature. ### Psychological and sociological effects Excessive heat causes psychological stress as well as physical stress. This can affect performance. It may also lead to an increase in violent crime. High temperatures are associated with increased conflict between individuals and at the social level. In every society, crime rates go up when temperatures go up. This is particularly the case with violent crimes such as assault, murder and rape. In politically unstable countries, high temperatures can exacerbate factors that lead to civil war. High temperatures also have a significant effect on income. A study of counties in the United States found that the economic productivity of individual days declines by about 1.7 percent for each degree Celsius above . ### Surface ozone (air pollution) High temperatures also make the effects of ozone pollution in urban areas worse. This raises heat-related mortality during heat waves. During heat waves in urban areas, ground level ozone pollution can be 20 percent higher than usual. One study looked at fine particle concentrations and ozone concentrations from 1860 to 2000. It found that the global population-weighted fine particle concentrations increased by 5 percent due to climate change. Near-surface ozone concentrations rose by 2 percent.
https://en.wikipedia.org/wiki/Heat_wave
passage: This allows the masses to be separated by large distances (increasing the signal size); a further advantage is that it is sensitive to a wide range of frequencies (not just those near a resonance as is the case for Weber bars). Ground-based interferometers are now operational. Currently, the most sensitive ground-based laser interferometer is LIGO – the Laser Interferometer Gravitational Wave Observatory. LIGO is famous as the site of the first confirmed detections of gravitational waves in 2015. LIGO has two detectors: one in Livingston, Louisiana; the other at the Hanford site in Richland, Washington. Each consists of two light storage arms which are 4 km in length. These are at 90 degree angles to each other, with the light passing through diameter vacuum tubes running the entire . A passing gravitational wave will slightly stretch one arm as it shortens the other. This is precisely the motion to which a Michelson interferometer is most sensitive. Even with such long arms, the strongest gravitational waves will only change the distance between the ends of the arms by at most roughly $$ 10^{-18} $$ meters. LIGO should be able to detect gravitational waves as small as $$ h \approx 5\times 10^{-22} $$ . Upgrades to LIGO and other detectors such as Virgo, GEO600, and TAMA 300 should increase the sensitivity further, and the next generation of instruments (Advanced LIGO Plus and Advanced Virgo Plus) will be more sensitive still. Another highly sensitive interferometer (KAGRA) began operations in 2020.
https://en.wikipedia.org/wiki/Gravitational-wave_observatory
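The two numbers quoted above are consistent in order of magnitude: a strain $h$ acting on an arm of length $L$ changes that length by roughly

$$
\Delta L \sim h\,L \approx \left(5\times10^{-22}\right)\times\left(4\times10^{3}\,\mathrm{m}\right) \approx 2\times10^{-18}\,\mathrm{m},
$$

which is of the same order as the $10^{-18}$-metre figure given for the arm-length change.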
passage: The other was more penetrating (able to expose film through paper but not metal) and had a negative charge, and this type Rutherford named beta. This was the radiation that had been first detected by Becquerel from uranium salts. In 1900, the French scientist Paul Villard discovered a third neutrally charged and especially penetrating type of radiation from radium, and after he described it, Rutherford realized it must be yet a third type of radiation, which in 1903 Rutherford named gamma rays. Henri Becquerel himself proved that beta rays are fast electrons, while Rutherford and Thomas Royds proved in 1909 that alpha particles are ionized helium. Rutherford and Edward Andrade proved in 1914 that gamma rays are like X-rays, but with shorter wavelengths. Cosmic ray radiations striking the Earth from outer space were finally definitively recognized and proven to exist in 1912, as the scientist Victor Hess carried an electrometer to various altitudes in a free balloon flight. The nature of these radiations was only gradually understood in later years. The neutron and neutron radiation were discovered by James Chadwick in 1932. A number of other high energy particulate radiations such as positrons, muons, and pions were discovered by cloud chamber examination of cosmic ray reactions shortly thereafter, and other types of particle radiation were produced artificially in particle accelerators, through the last half of the twentieth century. ## Applications ### Medicine Radiation and radioactive substances are used for diagnosis, treatment, and research.
https://en.wikipedia.org/wiki/Radiation
passage: Shor's algorithm runs much (almost exponentially) faster than the most efficient known classical algorithm for factoring, the general number field sieve. Grover's algorithm runs quadratically faster than the best possible classical algorithm for the same task, a linear search. ## Overview Quantum algorithms are usually described, in the commonly used circuit model of quantum computation, by a quantum circuit that acts on some input qubits and terminates with a measurement. A quantum circuit consists of simple quantum gates, each of which acts on some finite number of qubits. Quantum algorithms may also be stated in other models of quantum computation, such as the Hamiltonian oracle model. Quantum algorithms can be categorized by the main techniques involved in the algorithm. Some commonly used techniques/ideas in quantum algorithms include phase kick-back, phase estimation, the quantum Fourier transform, quantum walks, amplitude amplification and topological quantum field theory. Quantum algorithms may also be grouped by the type of problem solved; see, e.g., the survey on quantum algorithms for algebraic problems. ## Algorithms based on the quantum Fourier transform The quantum Fourier transform is the quantum analogue of the discrete Fourier transform, and is used in several quantum algorithms. The Hadamard transform is also an example of a quantum Fourier transform over an n-dimensional vector space over the field F2. The quantum Fourier transform can be efficiently implemented on a quantum computer using only a polynomial number of quantum gates.
https://en.wikipedia.org/wiki/Quantum_algorithm
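A small numerical sketch of the quantum Fourier transform as a unitary matrix (the function name and the numpy-based check are illustrative assumptions; a real quantum circuit would build it from Hadamard and controlled-phase gates rather than as a dense matrix):

```python
import numpy as np

def qft_matrix(n_qubits):
    """Dense matrix of the quantum Fourier transform on n_qubits (dimension N = 2**n_qubits)."""
    N = 2 ** n_qubits
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

F = qft_matrix(3)
print(np.allclose(F.conj().T @ F, np.eye(8)))    # True: the transform is unitary

# On a single qubit the QFT reduces to the Hadamard gate.
print(np.allclose(qft_matrix(1), np.array([[1, 1], [1, -1]]) / np.sqrt(2)))   # True
```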
passage: $$ \begin{align} \left(\frac{-1}{p}\right) &= (-1)^{\frac{p-1}{2}} = \begin{cases} 1 & p \equiv 1 \bmod{4}\\ -1 & p \equiv 3 \bmod{4}\end{cases} \\ \left(\frac{2}{p}\right) &= (-1)^{\frac{p^2-1}{8}} = \begin{cases} 1 & p \equiv 1, 7 \bmod{8}\\ -1 & p \equiv 3, 5 \bmod{8}\end{cases} \end{align} $$ From these two supplements, we can obtain a third reciprocity law for the quadratic character -2 as follows: for -2 to be a quadratic residue $$ \bmod p $$ , either -1 and 2 must both be quadratic residues $$ \bmod p $$ , or both must be non-residues.
https://en.wikipedia.org/wiki/Quadratic_reciprocity
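The supplements can be checked numerically with Euler's criterion, $\left(\tfrac{a}{p}\right) \equiv a^{(p-1)/2} \pmod p$; a small illustrative sketch:

```python
def legendre_symbol(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

for p in (3, 5, 7, 11, 13, 17, 19, 23):
    # (2/p) should be +1 exactly when p = 1 or 7 (mod 8)
    print(p, p % 8, legendre_symbol(2, p), legendre_symbol(-1, p), legendre_symbol(-2, p))
```

The printed values illustrate the third law stated above: $\left(\tfrac{-2}{p}\right) = +1$ exactly when $\left(\tfrac{-1}{p}\right)$ and $\left(\tfrac{2}{p}\right)$ agree.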
passage: ### Estimation of the noise covariances Qk and Rk Practical implementation of a Kalman Filter is often difficult due to the difficulty of getting a good estimate of the noise covariance matrices Qk and Rk. Extensive research has been done to estimate these covariances from data. One practical method of doing this is the autocovariance least-squares (ALS) technique that uses the time-lagged autocovariances of routine operating data to estimate the covariances. The GNU Octave and Matlab code used to calculate the noise covariance matrices using the ALS technique is available online under the GNU General Public License. The Field Kalman Filter (FKF), a Bayesian algorithm that allows simultaneous estimation of the state, parameters and noise covariance, has been proposed. The FKF algorithm has a recursive formulation, good observed convergence, and relatively low complexity, thus suggesting that the FKF algorithm may possibly be a worthwhile alternative to the Autocovariance Least-Squares methods. Another approach is the Optimized Kalman Filter (OKF), which considers the covariance matrices not as representatives of the noise, but rather, as parameters aimed to achieve the most accurate state estimation. These two views coincide under the KF assumptions, but often contradict each other in real systems. Thus, OKF's state estimation is more robust to modeling inaccuracies.
https://en.wikipedia.org/wiki/Kalman_filter
passage: By Thales's theorem, $$ \angle DAB $$ and $$ \angle DCB $$ are both right angles. The right-angled triangles $$ DAB $$ and $$ DCB $$ both share the hypotenuse $$ \overline{BD} $$ of length 1. Thus, the side $$ \overline{AB} = \sin \alpha $$ , $$ \overline{AD} = \cos \alpha $$ , $$ \overline{BC} = \sin \beta $$ and $$ \overline{CD} = \cos \beta $$ . By the inscribed angle theorem, the central angle subtended by the chord $$ \overline{AC} $$ at the circle's center is twice the angle $$ \angle ADC $$ , i.e. $$ 2(\alpha + \beta) $$ . Therefore, the symmetrical pair of red triangles each has the angle $$ \alpha + \beta $$ at the center. Each of these triangles has a hypotenuse of length $$ \frac{1}{2} $$ , so the length of $$ \overline{AC} $$ is $$ 2 \times \frac{1}{2} \sin(\alpha + \beta) $$ , i.e. simply $$ \sin(\alpha + \beta) $$ .
https://en.wikipedia.org/wiki/List_of_trigonometric_identities
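For reference, the lengths computed in this construction are the ingredients of the sine angle-addition formula, which is the identity the diagram is used to prove:

$$
\sin(\alpha+\beta) = \sin\alpha\,\cos\beta + \cos\alpha\,\sin\beta .
$$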
passage: 1. Niches of similar species are segregated (as the result of natural selection) in order to prevent interspecific hybridisation, because hybrids are less fit. (Many cases of niche segregation explained by interspecific competition are better explained by this mechanism, i.e., "reinforcement of reproductive barriers") (e.g., Rohde 2005b). ### Relative significance of the mechanisms Both paradigms acknowledge a role for all mechanisms (except possibly for that of random selection of niches in the first paradigm), but emphasis on the various mechanisms varies. The first paradigm stresses the paramount importance of interspecific competition, whereas the second paradigm tries to explain many cases which are thought to be due to competition in the first paradigm, by reinforcement of reproductive barriers and/or random selection of niches. – Many authors believe in the overriding importance of interspecific competition. Intuitively, one would expect that interspecific competition is of particular importance in all those cases in which sympatric species (i.e., species occurring together in the same area) with large population densities use the same resources and largely exhaust them. However, Andrewartha and Birch (1954,1984)Andrewartha, H.G. and Birch, L.C. 1984. The ecological web. University of Chicago Press. Chicago and London. and others have pointed out that most natural populations usually don't even approach exhaustion of resources, and too much emphasis on interspecific competition is therefore wrong. Concerning the possibility that competition has led to segregation in the evolutionary past, Wiens (1974, 1984)Wiens, J.A. 1984.
https://en.wikipedia.org/wiki/Ecological_niche
passage: The work done by a conservative force is equal to the negative of change in potential energy during that process. For a proof, imagine two paths 1 and 2, both going from point A to point B. The variation of energy for the particle, taking path 1 from A to B and then path 2 backwards from B to A, is 0; thus, the work is the same in path 1 and 2, i.e., the work is independent of the path followed, as long as it goes from A to B. For example, if a child slides down a frictionless slide, the work done by the gravitational force on the child from the start of the slide to the end is independent of the shape of the slide; it only depends on the vertical displacement of the child. ## Mathematical description A force field F, defined everywhere in space (or within a simply-connected volume of space), is called a conservative force or conservative vector field if it meets any of these three equivalent conditions: 1. The curl of F is the zero vector: $$ \mathbf{\nabla} \times \mathbf{F} = \mathbf{0}, $$ where in two dimensions this reduces to $$ \frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y} = 0. $$ 2. There is zero net work (W) done by the force when moving a particle through a trajectory that starts and ends in the same place: $$ W = \oint_C \mathbf{F} \cdot \mathrm{d}\mathbf{r} = 0. $$ 3. The force can be written as the negative gradient of a potential, $$ \Phi $$ : $$ \mathbf{F} = -\nabla \Phi. $$
https://en.wikipedia.org/wiki/Conservative_force
passage: For example, Allen Hatcher and William Thurston have used it to give a proof of the fact that it is finitely presented. ## Pants in hyperbolic geometry ### Moduli space of hyperbolic pants The interesting hyperbolic structures on a pair of pants are easily classified. For all $$ \ell_1, \ell_2, \ell_3 \in (0, \infty) $$ there is a hyperbolic surface $$ M $$ which is homeomorphic to a pair of pants and whose boundary components are simple closed geodesics of lengths equal to $$ \ell_1, \ell_2, \ell_3 $$ . Such a surface is uniquely determined by the $$ \ell_i $$ up to isometry. By taking the length of a cuff to be equal to zero, one obtains a complete metric on the pair of pants minus the cuff, which is replaced by a cusp. This structure is of finite volume. ### Pants and hexagons The geometric proof of the classification in the previous paragraph is important for understanding the structure of hyperbolic pants. It proceeds as follows: Given a hyperbolic pair of pants with totally geodesic boundary, there exist three unique geodesic arcs that join the cuffs pairwise and that are perpendicular to them at their endpoints. These arcs are called the seams of the pants.
https://en.wikipedia.org/wiki/Pair_of_pants_%28mathematics%29
passage: Circumscription can be used to this aim by defining new variables $$ change\_open_t $$ to model changes and then minimizing them: $$ \text{change open}_0 \equiv (\text{open}_0 \not\equiv \text{open}_1) $$ $$ \text{change open}_1 \equiv (\text{open}_1 \not\equiv \text{open}_2) $$ ... As shown by the Yale shooting problem, this kind of solution does not work. For example, $$ \neg \text{open}_1 $$ is not yet entailed by the circumscription of the formulae above: the model in which $$ \text{change open}_0 $$ is true and $$ \text{change open}_1 $$ is false is incomparable with the model with the opposite values. Therefore, the situation in which the door becomes open at time 1 and then remains open as a consequence of the action is not excluded by circumscription. Several other formalizations of dynamical domains not suffering from such problems have been developed (see frame problem for an overview). Many use circumscription but in a different way. ## Predicate circumscription The original definition of circumscription proposed by McCarthy is about first-order logic. The role of variables in propositional logic (something that can be true or false) is played in first-order logic by predicates.
https://en.wikipedia.org/wiki/Circumscription_%28logic%29
passage: The character is easily seen to be a class function, that is, invariant under conjugation. In the SU(2) case, the fact that the character is a class function means it is determined by its value on the maximal torus $$ T $$ consisting of the diagonal matrices in SU(2), since the elements are orthogonally diagonalizable with the spectral theorem. Since the irreducible representation with highest weight $$ m $$ has weights $$ m, m - 2, \ldots, -(m - 2), -m $$ , it is easy to see that the associated character satisfies $$ \Chi\left(\begin{pmatrix} e^{i\theta} & 0\\ 0 & e^{-i\theta} \end{pmatrix}\right) = e^{im\theta} + e^{i(m-2)\theta} + \cdots + e^{-i(m-2)\theta} + e^{-im\theta}. $$ This expression is a finite geometric series that can be simplified to $$ \Chi\left(\begin{pmatrix} e^{i\theta} & 0\\ 0 & e^{-i\theta} \end{pmatrix}\right) = \frac{\sin((m + 1)\theta)}{\sin(\theta)}. $$ This last expression is just the statement of the Weyl character formula for the SU(2) case.
https://en.wikipedia.org/wiki/Representation_theory_of_SU%282%29
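The simplification quoted above is the usual telescoping of a finite geometric series; writing the weights as $m-2j$ for $j = 0,\dots,m$,

$$
\sum_{j=0}^{m} e^{i(m-2j)\theta}
= e^{im\theta}\,\frac{1 - e^{-2i(m+1)\theta}}{1 - e^{-2i\theta}}
= \frac{e^{i(m+1)\theta} - e^{-i(m+1)\theta}}{e^{i\theta} - e^{-i\theta}}
= \frac{\sin\bigl((m+1)\theta\bigr)}{\sin\theta}.
$$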
passage: The paper also reveals the incident as marginally more extreme than the Ucluelet event. The paper also assesses a report of an wave in a significant wave height of , but the authors cast doubt on that claim. A paper written by Craig B. Smith in 2007 reported on an incident in the North Atlantic, in which the submarine Grouper was hit by a 30-meter wave in calm seas. ## Causes Because the phenomenon of rogue waves is still a matter of active research, clearly stating what the most common causes are or whether they vary from place to place is premature. The areas of highest predictable risk appear to be where a strong current runs counter to the primary direction of travel of the waves; the area near Cape Agulhas off the southern tip of Africa is one such area. The warm Agulhas Current runs to the southwest, while the dominant winds are westerlies, but since this thesis does not explain the existence of all waves that have been detected, several different mechanisms are likely, with localized variation. Suggested mechanisms for freak waves include: Diffractive focusing According to this hypothesis, coast shape or seabed shape directs several small waves to meet in phase. Their crest heights combine to create a freak wave. Focusing by currents Waves from one current are driven into an opposing current. This results in shortening of wavelength, causing shoaling (i.e., increase in wave height), and oncoming wave trains to compress together into a rogue wave. This happens off the South African coast, where the Agulhas Current is countered by westerlies.
https://en.wikipedia.org/wiki/Rogue_wave
passage: The cache behaves like a stack, and unlike a FIFO queue. The cache evicts the block added most recently first, regardless of how often or how many times it was accessed before. #### SIEVE SIEVE is a simple eviction algorithm designed specifically for web caches, such as key-value caches and Content Delivery Networks. It uses the idea of lazy promotion and quick demotion. Therefore, SIEVE does not update the global data structure at cache hits and delays the update till eviction time; meanwhile, it quickly evicts newly inserted objects because cache workloads tend to show high one-hit-wonder ratios, and most of the new objects are not worthwhile to be kept in the cache. SIEVE uses a single FIFO queue and uses a moving hand to select objects to evict. Objects in the cache have one bit of metadata indicating whether the object has been requested after being admitted into the cache. The eviction hand points to the tail of the queue at the beginning and moves toward the head over time. Compared with the CLOCK eviction algorithm, retained objects in SIEVE stay in the old position. Therefore, new objects are always at the head, and the old objects are always at the tail. As the hand moves toward the head, new objects are quickly evicted (quick demotion), which is the key to the high efficiency in the SIEVE eviction algorithm. SIEVE is simpler than LRU, but achieves lower miss ratios than LRU on par with state-of-the-art eviction algorithms.
https://en.wikipedia.org/wiki/Cache_replacement_policies
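A compact Python sketch of SIEVE as described above (a single FIFO order, one visited bit per object, and a hand scanning from the tail toward the head). It is written from the prose description, not taken from the authors' reference implementation, and the class and method names are assumptions of this sketch.

```python
class SieveCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.keys = []      # keys[0] is the tail (oldest), keys[-1] the head (newest)
        self.store = {}     # key -> [value, visited_bit]
        self.hand = 0       # index the eviction hand points at (starts at the tail)

    def _evict(self):
        i = self.hand % len(self.keys)
        # Give visited objects a second chance: clear their bit and move the
        # hand toward the head, wrapping back to the tail past the head.
        while self.store[self.keys[i]][1]:
            self.store[self.keys[i]][1] = False
            i = (i + 1) % len(self.keys)
        victim = self.keys.pop(i)
        del self.store[victim]
        self.hand = i % max(len(self.keys), 1)

    def get(self, key):
        if key not in self.store:
            return None
        self.store[key][1] = True        # lazy promotion: only flip the bit
        return self.store[key][0]

    def put(self, key, value):
        if key in self.store:
            self.store[key] = [value, True]
            return
        if len(self.keys) >= self.capacity:
            self._evict()
        self.keys.append(key)            # new objects enter at the head
        self.store[key] = [value, False]

cache = SieveCache(2)
cache.put("a", 1); cache.put("b", 2)
cache.get("a")                           # mark "a" as visited
cache.put("c", 3)                        # evicts unvisited "b", retains "a"
print(sorted(cache.store))               # ['a', 'c']
```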
passage: The series converges when . The expression gives the gravitational potential associated to a point mass or the Coulomb potential associated to a point charge. The expansion using Legendre polynomials might be useful, for instance, when integrating this expression over a continuous mass or charge distribution. Legendre polynomials occur in the solution of Laplace's equation of the static potential, , in a charge-free region of space, using the method of separation of variables, where the boundary conditions have axial symmetry (no dependence on an azimuthal angle). Where is the axis of symmetry and is the angle between the position of the observer and the axis (the zenith angle), the solution for the potential will be $$ \Phi(r,\theta) = \sum_{\ell=0}^\infty \left( A_\ell r^\ell + B_\ell r^{-(\ell+1)} \right) P_\ell(\cos\theta) \,. $$ and are to be determined according to the boundary condition of each problem. They also appear when solving the Schrödinger equation in three dimensions for a central force.
https://en.wikipedia.org/wiki/Legendre_polynomials
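The expansion of the point-mass or point-charge potential mentioned above rests on the Legendre generating function 1/√(1 − 2xt + t²) = Σ_ℓ tˡ P_ℓ(x) for |t| < 1. The short check below is an illustration with arbitrarily chosen x and t, comparing a truncated series against the closed form using NumPy's Legendre routines.

```python
import numpy as np
from numpy.polynomial import legendre

x, t = np.cos(0.4), 0.3                 # x = cos(angle); t plays the role of r'/r < 1
coeffs = t ** np.arange(30)             # coefficients t^l for l = 0..29
series = legendre.legval(x, coeffs)     # sum_l t^l P_l(x), truncated at l = 29
closed = 1.0 / np.sqrt(1.0 - 2.0 * x * t + t * t)
assert np.isclose(series, closed)
print(series, closed)                   # agree to machine precision for |t| < 1
```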
passage: Furthermore, and must be nonsingular.) Matrices can also be inverted blockwise by using the analytic inversion formula: The strategy is particularly advantageous if is diagonal and is a small matrix, since they are the only matrices requiring inversion. The nullity theorem says that the nullity of equals the nullity of the sub-block in the lower right of the inverse matrix, and that the nullity of equals the nullity of the sub-block in the upper right of the inverse matrix. The inversion procedure that led to Equation () performed matrix block operations that operated on and first. Instead, if and are operated on first, and provided and are nonsingular, the result is Equating the upper-left sub-matrices of Equations () and () leads to where Equation () is the Woodbury matrix identity, which is equivalent to the binomial inverse theorem. If and are both invertible, then the above two block matrix inverses can be combined to provide the simple factorization By the Weinstein–Aronszajn identity, one of the two matrices in the block-diagonal matrix is invertible exactly when the other is. This formula simplifies significantly when the upper right block matrix is the zero matrix. This formulation is useful when the matrices and have relatively simple inverse formulas (or pseudo inverses in the case where the blocks are not all square.
https://en.wikipedia.org/wiki/Invertible_matrix
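Because the passage's equation numbers were lost in extraction, here is a self-contained numerical sanity check (illustrative random blocks, my own variable names) of the blockwise inverse built from the Schur complement S = D − CA⁻¹B, the form the analytic inversion above reduces to when A is invertible.

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.normal(size=(3, 3)), rng.normal(size=(3, 2))
C, D = rng.normal(size=(2, 3)), rng.normal(size=(2, 2))

M = np.block([[A, B], [C, D]])
Ai = np.linalg.inv(A)
S = D - C @ Ai @ B                      # Schur complement of A (must be nonsingular)
Si = np.linalg.inv(S)

M_inv_blocks = np.block([
    [Ai + Ai @ B @ Si @ C @ Ai, -Ai @ B @ Si],
    [-Si @ C @ Ai,              Si],
])
assert np.allclose(M_inv_blocks, np.linalg.inv(M))
print("blockwise (Schur-complement) formula matches the direct inverse")
```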
passage: - A Bayer matrix produces a very distinctive cross-hatch pattern. - A matrix tuned for blue noise, such as those generated by the void-and-cluster method, produces a look closer to that of an error diffusion dither method. (Original)ThresholdRandomHalftoneOrdered (Bayer)Ordered (void-and-cluster) - Error-diffusion dithering is a feedback process that diffuses the quantization error to neighboring pixels. - Floyd–Steinberg (FS) dithering only diffuses the error to neighboring pixels. This results in very fine-grained dithering. - Minimized average error dithering by Jarvis, Judice, and Ninke diffuses the error also to pixels one step further away. The dithering is coarser but has fewer visual artifacts. However, it is slower than Floyd–Steinberg dithering, because it distributes errors among 12 nearby pixels instead of 4 nearby pixels for Floyd–Steinberg. - Stucki dithering is based on the above, but is slightly faster. Its output tends to be clean and sharp. - Burkes dithering is a simplified form of Stucki dithering that is faster, but is less clean than Stucki dithering.
https://en.wikipedia.org/wiki/Dither
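A minimal Floyd–Steinberg error-diffusion sketch written from the description above (the standard 7/16, 3/16, 5/16, 1/16 weights), quantizing a grayscale array in [0, 1] to black and white; the array shape and test gradient are illustrative.

```python
import numpy as np

def floyd_steinberg(img):
    img = img.astype(float).copy()
    out = np.zeros_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0     # quantize to black or white
            out[y, x] = new
            err = old - new
            # Diffuse the quantization error to the four neighboring pixels.
            if x + 1 < w:               img[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               img[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
    return out

gradient = np.tile(np.linspace(0, 1, 16), (8, 1))
dithered = floyd_steinberg(gradient)
print(dithered.mean(), gradient.mean())   # average intensity is approximately preserved
```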
passage: The schematic illustration in this section shows an enhancer looping around to come into close physical proximity with the promoter of a target gene. The loop is stabilized by a dimer of a connector protein (e.g. dimer of CTCF or YY1), with one member of the dimer anchored to its binding motif on the enhancer and the other member anchored to its binding motif on the promoter (represented by the red zigzags in the illustration). Several cell function specific transcription factors (there are about 1,600 transcription factors in a human cell) generally bind to specific motifs on an enhancer and a small combination of these enhancer-bound transcription factors, when brought close to a promoter by a DNA loop, govern level of transcription of the target gene. Mediator (a complex usually consisting of about 26 proteins in an interacting structure) communicates regulatory signals from enhancer DNA-bound transcription factors directly to the RNA polymerase II (pol II) enzyme bound to the promoter. Enhancers, when active, are generally transcribed from both strands of DNA with RNA polymerases acting in two different directions, producing two Enhancer RNAs (eRNAs) as illustrated in the Figure. Like mRNAs, these eRNAs are usually protected by their 5′ cap. An inactive enhancer may be bound by an inactive transcription factor.
https://en.wikipedia.org/wiki/Enhancer_%28genetics%29
passage: ### Newton's method A generalization of Newton's method as used for a multiplicative inverse algorithm may be convenient if it is convenient to find a suitable starting seed: $$ X_{k+1} = 2X_k - X_k A X_k. $$ Victor Pan and John Reif have done work that includes ways of generating a starting seed. Newton's method is particularly useful when dealing with families of related matrices that behave enough like the sequence manufactured for the homotopy above: sometimes a good starting point for refining an approximation for the new inverse can be the already obtained inverse of a previous matrix that nearly matches the current matrix. For example, the pair of sequences of inverse matrices used in obtaining matrix square roots by Denman–Beavers iteration. That may need more than one pass of the iteration at each new matrix, if they are not close enough together for just one to be enough. Newton's method is also useful for "touch up" corrections to the Gauss–Jordan algorithm which has been contaminated by small errors from imperfect computer arithmetic.
https://en.wikipedia.org/wiki/Invertible_matrix
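A short NumPy demonstration of the iteration X_{k+1} = 2X_k − X_kAX_k. The passage leaves the seed open; this sketch uses one standard choice, X₀ = Aᵀ/(‖A‖₁‖A‖∞), and an arbitrary well-conditioned test matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 4.0 * np.eye(4)   # arbitrary well-conditioned matrix

# Seed choice: X0 = A^T / (||A||_1 ||A||_inf) keeps the spectral radius of
# I - X0 A below 1 for nonsingular A, so the quadratically convergent
# iteration refines X toward A^{-1}.
X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
for _ in range(30):
    X = 2.0 * X - X @ A @ X

assert np.allclose(X @ A, np.eye(4))
print("Newton iteration reproduced A^{-1}")
```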
passage: ## History Prompt injection is a type of code injection attack that leverages adversarial prompt engineering to manipulate AI models. In May 2022, Jonathan Cefalu of Preamble identified prompt injection as a security vulnerability and reported it to OpenAI, referring to it as "command injection". In late 2022, the NCC Group identified prompt injection as an emerging vulnerability affecting AI and machine learning (ML) systems. The term "prompt injection" was coined by Simon Willison in September 2022. He distinguished it from jailbreaking, which bypasses an AI model's safeguards, whereas prompt injection exploits its inability to differentiate system instructions from user inputs. While some prompt injection attacks involve jailbreaking, they remain distinct techniques. LLMs with web browsing capabilities can be targeted by indirect prompt injection, where adversarial prompts are embedded within website content. If the LLM retrieves and processes the webpage, it may interpret and execute the embedded instructions as legitimate commands, potentially leading to unintended behavior. A November 2024 OWASP report identified security challenges in multimodal AI, which processes multiple data types, such as text and images. Adversarial prompts can be embedded in non-textual elements, such as hidden instructions within images, influencing model responses when processed alongside text. This complexity expands the attack surface, making multimodal AI more susceptible to cross-modal vulnerabilities.
https://en.wikipedia.org/wiki/Prompt_injection
passage: In other words, new research will probably change the presented conclusions completely. ### Categories of recommendations In guidelines and other publications, recommendation for a clinical service is classified by the balance of risk versus benefit and the level of evidence on which this information is based. The U.S. Preventive Services Task Force uses the following system: - Level A: Good scientific evidence suggests that the benefits of the clinical service substantially outweigh the potential risks. Clinicians should discuss the service with eligible patients. - Level B: At least fair scientific evidence suggests that the benefits of the clinical service outweighs the potential risks. Clinicians should discuss the service with eligible patients. - Level C: At least fair scientific evidence suggests that the clinical service provides benefits, but the balance between benefits and risks is too close for general recommendations. Clinicians need not offer it unless individual considerations apply. - Level D: At least fair scientific evidence suggests that the risks of the clinical service outweigh potential benefits. Clinicians should not routinely offer the service to asymptomatic patients. - Level I: Scientific evidence is lacking, of poor quality, or conflicting, such that the risk versus benefit balance cannot be assessed. Clinicians should help patients understand the uncertainty surrounding the clinical service. GRADE guideline panelists may make strong or weak recommendations on the basis of further criteria. Some of the important criteria are the balance between desirable and undesirable effects (not considering cost), the quality of the evidence, values and preferences and costs (resource utilization).
https://en.wikipedia.org/wiki/Evidence-based_medicine
passage: These random numbers are fine in many situations but are not as random as numbers generated from electromagnetic atmospheric noise used as a source of entropy. The series of values generated by such algorithms is generally determined by a fixed number called a seed. One of the most common PRNG is the linear congruential generator, which uses the recurrence $$ X_{n+1} = (a X_n + b)\, \textrm{mod}\, m $$ to generate numbers, where , and are large integers, and $$ X_{n+1} $$ is the next in as a series of pseudorandom numbers. The maximum number of numbers the formula can produce is the modulus, . The recurrence relation can be extended to matrices to have much longer periods and better statistical properties . To avoid certain non-random properties of a single linear congruential generator, several such random number generators with slightly different values of the multiplier coefficient, , can be used in parallel, with a master random number generator that selects from among the several different generators. A simple pen-and-paper method for generating random numbers is the so-called middle-square method suggested by John von Neumann. While simple to implement, its output is of poor quality. It has a very short period and severe weaknesses, such as the output sequence almost always converging to zero. A recent innovation is to combine the middle square with a Weyl sequence. This method produces high-quality output through a long period. Most computer programming languages include functions or library routines that provide random number generators.
https://en.wikipedia.org/wiki/Random_number_generation
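Two toy generators written from the descriptions above: a linear congruential generator and a middle-square generator strengthened with a Weyl sequence. The specific constants (multiplier, increment, modulus, Weyl increment) are illustrative choices of this sketch, not recommendations, and neither generator is suitable for cryptographic or production use.

```python
MASK64 = 0xFFFFFFFFFFFFFFFF

def lcg(seed, a=1103515245, b=12345, m=2**31):
    """Linear congruential generator: X_{n+1} = (a * X_n + b) mod m."""
    x = seed
    while True:
        x = (a * x + b) % m
        yield x

def middle_square_weyl(seed, weyl_step=0xB5AD4ECEDA1CE2A9):
    """Middle-square method combined with a Weyl sequence (64-bit toy version)."""
    x, w = seed & MASK64, 0
    while True:
        x = (x * x) & MASK64                  # square
        w = (w + weyl_step) & MASK64          # advance the Weyl sequence
        x = (x + w) & MASK64                  # mix it in to avoid degeneration
        x = ((x >> 32) | (x << 32)) & MASK64  # keep the "middle" by rotating
        yield x & 0xFFFFFFFF                  # emit 32 bits

g, h = lcg(42), middle_square_weyl(0xDEADBEEF)
print([next(g) for _ in range(3)])
print([next(h) for _ in range(3)])
```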
passage: For example, the equations $$ \begin{align} x &= \cos t \\ y &= \sin t \end{align} $$ form a parametric representation of the unit circle, where is the parameter: A point is on the unit circle if and only if there is a value of such that these two equations generate that point. Sometimes the parametric equations for the individual scalar output variables are combined into a single parametric equation in vectors: $$ (x, y)=(\cos t, \sin t). $$ Parametric representations are generally nonunique (see the "Examples in two dimensions" section below), so the same quantities may be expressed by a number of different parameterizations. In addition to curves and surfaces, parametric equations can describe manifolds and algebraic varieties of higher dimension, with the number of parameters being equal to the dimension of the manifold or variety, and the number of equations being equal to the dimension of the space in which the manifold or variety is considered (for curves the dimension is one and one parameter is used, for surfaces dimension two and two parameters, etc.). Parametric equations are commonly used in kinematics, where the trajectory of an object is represented by equations depending on time as the parameter. Because of this application, a single parameter is often labeled ; however, parameters can represent other physical quantities (such as geometric variables) or can be selected arbitrarily for convenience. Parameterizations are non-unique; more than one set of parametric equations can specify the same curve.
https://en.wikipedia.org/wiki/Parametric_equation
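A tiny numerical illustration of the circle parametrization above: sampling (x, y) = (cos t, sin t) over a grid of parameter values and confirming that every generated point satisfies the implicit equation x² + y² = 1 (the sample size is arbitrary).

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 200)   # parameter values
x, y = np.cos(t), np.sin(t)              # parametric representation of the circle
assert np.allclose(x**2 + y**2, 1.0)     # every point lies on the unit circle
print("all 200 sampled points satisfy x^2 + y^2 = 1")
```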
passage: In computer graphics, a shader is a computer program that calculates the appropriate levels of light, darkness, and color during the rendering of a 3D scene—a process known as shading. Shaders have evolved to perform a variety of specialized functions in computer graphics special effects and video post-processing, as well as general-purpose computing on graphics processing units. Traditional shaders calculate rendering effects on graphics hardware with a high degree of flexibility. Most shaders are coded for (and run on) a graphics processing unit (GPU), though this is not a strict requirement. Shading languages are used to program the GPU's rendering pipeline, which has mostly superseded the fixed-function pipeline of the past that only allowed for common geometry transforming and pixel-shading functions; with shaders, customized effects can be used. The position and color (hue, saturation, brightness, and contrast) of all pixels, vertices, and/or textures used to construct a final rendered image can be altered using algorithms defined in a shader, and can be modified by external variables or textures introduced by the computer program calling the shader. Shaders are used widely in cinema post-processing, computer-generated imagery, and video games to produce a range of effects.
https://en.wikipedia.org/wiki/Shader
passage: The first alternating-current commutatorless induction motor was invented by Galileo Ferraris in 1885. Ferraris was able to improve his first design by producing more advanced setups in 1886. In 1888, the Royal Academy of Science of Turin published Ferraris's research detailing the foundations of motor operation, while concluding at that time that "the apparatus based on that principle could not be of any commercial importance as motor. " Possible industrial development was envisioned by Nikola Tesla, who invented independently his induction motor in 1887 and obtained a patent in May 1888. In the same year, Tesla presented his paper A New System of Alternate Current Motors and Transformers to the AIEE that described three patented two-phase four-stator-pole motor types: one with a four-pole rotor forming a non-self-starting reluctance motor, another with a wound rotor forming a self-starting induction motor, and the third a true synchronous motor with separately excited DC supply to rotor winding. One of the patents Tesla filed in 1887, however, also described a shorted-winding-rotor induction motor. George Westinghouse, who had already acquired rights from Ferraris (US$1,000), promptly bought Tesla's patents (US$60,000 plus US$2.50 per sold hp, paid until 1897), employed Tesla to develop his motors, and assigned C.F. Scott to help Tesla; however, Tesla left for other pursuits in 1889. The constant speed AC induction motor was found not to be suitable for street cars, but Westinghouse engineers successfully adapted it to power a mining operation in Telluride, Colorado in 1891.
https://en.wikipedia.org/wiki/Electric_motor
passage: More precisely, for every normed space $$ X, $$ there exists a Banach space $$ Y $$ and a mapping $$ T : X \to Y $$ such that $$ T $$ is an isometric mapping and $$ T(X) $$ is dense in $$ Y. $$ If $$ Z $$ is another Banach space such that there is an isometric isomorphism from $$ X $$ onto a dense subset of $$ Z, $$ then $$ Z $$ is isometrically isomorphic to $$ Y. $$ The Banach space $$ Y $$ is the Hausdorff completion of the normed space $$ X. $$ The underlying metric space for $$ Y $$ is the same as the metric completion of $$ X, $$ with the vector space operations extended from $$ X $$ to $$ Y. $$ The completion of $$ X $$ is sometimes denoted by $$ \widehat{X}. $$ ## General theory ### Linear operators, isomorphisms If $$ X $$ and $$ Y $$ are normed spaces over the same ground field $$ \mathbb{K}, $$ the set of all continuous -linear maps $$ T : X \to Y $$ is denoted by $$ B(X, Y). $$ In infinite-dimensional spaces, not all linear maps are continuous. A linear mapping from a normed space $$ X $$ to another normed space is continuous if and only if it is bounded on the closed unit ball of $$ X. $$
https://en.wikipedia.org/wiki/Banach_space
passage: VII promotes the development of a direct communication link between road vehicles and all other vehicles nearby, allowing for the exchange of information on vehicle speed and orientation or driver awareness and intent. This real-time exchange of information may enable more effective automated emergency maneuvers, such as steering, decelerating, or braking. In addition to nearby vehicle awareness, VII promotes a communication link between vehicles and roadway infrastructure. Such a link may allow for improved real-time traffic information, better queue management, and feedback to vehicles. Existing implementations of VII use vehicle-based sensors that can recognize and respond to roadway markings or signs, automatically adjusting vehicle parameters to follow the recognized instructions. However, this information may also be acquired via roadside beacons or stored in a centralized database accessible to all vehicles. ### Efficiency With a VII system in place, vehicles will be linked together. The headway between vehicles may therefore be reduced so that there is less empty space on the road, increasing the available capacity per lane. More capacity per lane will in turn imply fewer lanes in general, possibly satisfying the community's concerns about the impact of roadway widening. VII will enable precise traffic-signal coordination by tracking vehicle platoons and will benefit from accurate timing by drawing on real-time traffic data covering volume, density, and turning movements. Real-time traffic data can also be used in the design of new roadways or modification of existing systems as the data could be used to provide accurate origin-destination studies and turning-movement counts for uses in transportation forecasting and traffic operations.
https://en.wikipedia.org/wiki/Vehicle_infrastructure_integration
passage: Since the early 2000s, many of the widely used support libraries have also been implemented in C and more recently, in C++. On the other hand, high-level languages such as the Wolfram Language, MATLAB, Python, and R have become popular in particular areas of computational science. Consequently, a growing fraction of scientific programs are also written in such higher-level scripting languages. For this reason, facilities for inter-operation with C were added to Fortran 2003 and enhanced by the ISO/IEC technical specification 29113, which was incorporated into Fortran 2018 to allow more flexible interoperation with other programming languages. ## Portability Portability was a problem in the early days because there was no agreed upon standard—not even IBM's reference manual—and computer companies vied to differentiate their offerings from others by providing incompatible features. Standards have improved portability. The 1966 standard provided a reference syntax and semantics, but vendors continued to provide incompatible extensions. Although careful programmers were coming to realize that use of incompatible extensions caused expensive portability problems, and were therefore using programs such as The PFORT Verifier, it was not until after the 1977 standard, when the National Bureau of Standards (now NIST) published FIPS PUB 69, that processors purchased by the U.S. Government were required to diagnose extensions of the standard. Rather than offer two processors, essentially every compiler eventually had at least an option to diagnose extensions. Incompatible extensions were not the only portability problem. For numerical calculations, it is important to take account of the characteristics of the arithmetic.
https://en.wikipedia.org/wiki/Fortran
passage: Sustainability is a social goal for people to co-exist on Earth over a long period of time. ## Definitions of this term are disputed and have varied with literature, context, and time. Sustainability usually has three dimensions (or pillars): environmental, economic, and social. Many definitions emphasize the environmental dimension. This can include addressing key environmental problems, including climate change and biodiversity loss. The idea of sustainability can guide decisions at the global, national, organizational, and individual levels. A related concept is that of sustainable development, and the terms are often used to mean the same thing. UNESCO distinguishes the two like this: "Sustainability is often thought of as a long-term goal (i.e. a more sustainable world), while sustainable development refers to the many processes and pathways to achieve it. " Details around the economic dimension of sustainability are controversial. Scholars have discussed this under the concept of weak and strong sustainability. For example, there will always be tension between the ideas of "welfare and prosperity for all" and environmental conservation, so trade-offs are necessary. It would be desirable to find ways that separate economic growth from harming the environment. This means using fewer resources per unit of output even while growing the economy. This decoupling reduces the environmental impact of economic growth, such as pollution. Doing this is difficult. Some experts say there is no evidence that such a decoupling is happening at the required scale. It is challenging to measure sustainability as the concept is complex, contextual, and dynamic.
https://en.wikipedia.org/wiki/Sustainability
passage: ## History Some scholars trace the origins of natural science as far back as pre-literate human societies, where understanding the natural world was necessary for survival. People observed and built up knowledge about the behavior of animals and the usefulness of plants as food and medicine, which was passed down from generation to generation. These primitive understandings gave way to more formalized inquiry around 3500 to 3000 BC in the Mesopotamian and Ancient Egyptian cultures, which produced the first known written evidence of natural philosophy, the precursor of natural science. While the writings show an interest in astronomy, mathematics, and other aspects of the physical world, the ultimate aim of inquiry about nature's workings was, in all cases, religious or mythological, not scientific. A tradition of scientific inquiry also emerged in Ancient China, where Taoist alchemists and philosophers experimented with elixirs to extend life and cure ailments. They focused on the yin and yang, or contrasting elements in nature; the yin was associated with femininity and coldness, while yang was associated with masculinity and warmth. The five phases – fire, earth, metal, wood, and water – described a cycle of transformations in nature. The water turned into wood, which turned into the fire when it burned. The ashes left by fire were earth. Using these principles, Chinese philosophers and doctors explored human anatomy, characterizing organs as predominantly yin or yang, and understood the relationship between the pulse, the heart, and the flow of blood in the body centuries before it became accepted in the West.
https://en.wikipedia.org/wiki/Natural_science
passage: If the machine is restricted to logarithmic space instead of polynomial time, the analogous RL, co-RL, and ZPL complexity classes are obtained. By enforcing both restrictions, RLP, co-RLP, BPLP, and ZPLP are yielded. Probabilistic computation is also critical for the definition of most classes of interactive proof systems, in which the verifier machine depends on randomness to avoid being predicted and tricked by the all-powerful prover machine. For example, the class IP equals PSPACE, but if randomness is removed from the verifier, we are left with only NP, which is not known but widely believed to be a considerably smaller class. One of the central questions of complexity theory is whether randomness adds power; that is, is there a problem that can be solved in polynomial time by a probabilistic Turing machine but not a deterministic Turing machine? Or can deterministic Turing machines efficiently simulate all probabilistic Turing machines with at most a polynomial slowdown? It is known that P ⊆ BPP, since a deterministic Turing machine is just a special case of a probabilistic Turing machine. However, it is uncertain whether (but widely suspected that) BPP ⊆ P, implying that BPP = P. The same question for log space instead of polynomial time (does L = BPLP?) is even more widely believed to be true.
https://en.wikipedia.org/wiki/Probabilistic_Turing_machine
passage: In the category of rings, the inclusion $$ \mathbb{Z} \hookrightarrow \mathbb{Q} $$ is an epimorphism but is not the quotient of $$ \mathbb{Z} $$ by a two-sided ideal. To get maps which truly behave like subobject embeddings or quotients, rather than as arbitrary injective functions or maps with dense image, one must restrict to monomorphisms and epimorphisms satisfying additional hypotheses. Therefore, one might define a "subobject" to be an equivalence class of so-called "regular monomorphisms" (monomorphisms which can be expressed as an equalizer of two morphisms) and a "quotient object" to be any equivalence class of "regular epimorphisms" (morphisms which can be expressed as a coequalizer of two morphisms) ## Interpretation This definition corresponds to the ordinary understanding of a subobject outside category theory. When the category's objects are sets (possibly with additional structure, such as a group structure) and the morphisms are set functions (preserving the additional structure), one thinks of a monomorphism in terms of its image. An equivalence class of monomorphisms is determined by the image of each monomorphism in the class; that is, two monomorphisms f and g into an object T are equivalent if and only if their images are the same subset (thus, subobject) of T.
https://en.wikipedia.org/wiki/Subobject
passage: {{DISPLAYTITLE:Exotic R4}} In mathematics, an exotic $$ \R^4 $$ is a differentiable manifold that is homeomorphic (i.e. shape preserving) but not diffeomorphic (i.e. non smooth) to the Euclidean space $$ \R^4. $$ The first examples were found in 1982 by Michael Freedman and others, by using the contrast between Freedman's theorems about topological 4-manifolds, and Simon Donaldson's theorems about smooth 4-manifolds. Freedman and Quinn (1990), p. 122 There is a continuum of non-diffeomorphic differentiable structures $$ \R^4, $$ as was shown first by Clifford Taubes. Prior to this construction, non-diffeomorphic smooth structures on spheresexotic sphereswere already known to exist, although the question of the existence of such structures for the particular case of the 4-sphere remained open (and remains open as of 2024). For any positive integer n other than 4, there are no exotic smooth structures $$ \R^n; $$ in other words, if n ≠ 4 then any smooth manifold homeomorphic to $$ \R^n $$ is diffeomorphic to $$ \R^n. $$
https://en.wikipedia.org/wiki/Exotic_R4
passage: The logarithm of the th power of a number is times the logarithm of the number itself; the logarithm of a th root is the logarithm of the number divided by . The following table lists these identities with examples. Each of the identities can be derived after substitution of the logarithm definitions $$ x=b^{\log_b x}, $$ and/or $$ y=b^{\log_b y}, $$ in the left hand sides. Formula Example product quotient power root #### Change of base The logarithm logb(x) can be computed from the logarithms of x and b with respect to an arbitrary base k using the following formula: Typical scientific calculators calculate the logarithms to bases 10 and e. Logarithms with respect to any base b can be determined using either of these two logarithms by the previous formula: $$ \log_b (x) = \frac{\log_{10} (x)}{\log_{10} (b)} = \frac{\log_{e} (x)}{\log_{e} (b)}. $$ Given a number x and its logarithm logb(x) to an unknown base b, the base is given by: $$ b = x^\frac{1}{\log_b(x)}. $$ ### Hyperbolic function identities The hyperbolic functions satisfy many identities, all of them similar in form to the trigonometric identities.
https://en.wikipedia.org/wiki/Identity_%28mathematics%29
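A quick check of the change-of-base formula and the base-recovery formula with Python's math module; the values x = 50 and b = 7 are arbitrary.

```python
import math

x, b = 50.0, 7.0
log_b_x = math.log(x, b)                                     # log base b of x

assert math.isclose(log_b_x, math.log10(x) / math.log10(b))  # via base-10 logs
assert math.isclose(log_b_x, math.log(x) / math.log(b))      # via natural logs
assert math.isclose(b, x ** (1.0 / log_b_x))                 # recover the base b
print("change-of-base and base-recovery identities hold for x = 50, b = 7")
```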
passage: The behavior of EM radiation and its interaction with matter depends on its frequency, and changes qualitatively as the frequency changes. Lower frequencies have longer wavelengths, and higher frequencies have shorter wavelengths, and are associated with photons of higher energy. There is no fundamental limit known to these wavelengths or energies, at either end of the spectrum, although photons with energies near the Planck energy or exceeding it (far too high to have ever been observed) will require new physical theories to describe. ### Radio and microwave Electromagnetic radiation phenomena with wavelengths ranging from one meter to one millimeter are called microwaves; with frequencies between 300 MHz (0.3 GHz) and 300 GHz. When radio waves impinge upon a conductor, they couple to the conductor, travel along it, and induce an electric current on the conductor surface by moving the electrons of the conducting material in correlated bunches of charge. At radio and microwave frequencies, EMR interacts with matter largely as a bulk collection of charges which are spread out over large numbers of affected atoms. In electrical conductors, such induced bulk movement of charges (electric currents) results in absorption of the EMR, or else separations of charges that cause generation of new EMR (effective reflection of the EMR). An example is absorption or emission of radio waves by antennas, or absorption of microwaves by water or other molecules with an electric dipole moment, as for example inside a microwave oven.
https://en.wikipedia.org/wiki/Electromagnetic_radiation
passage: The semi-classical Monte Carlo model can then be employed to simulate the device characteristics. The quantum corrections can be incorporated into a Monte Carlo simulator by simply introducing a quantum potential term which is superimposed onto the classical electrostatic potential seen by the simulated particles. Figure beside pictorially depicts the essential features of this technique. The various quantum approaches available for implementation are described in the following subsections. ### Wigner-based correction The Wigner transport equation forms the basis for the Wigner-based quantum correction. $$ \frac{\partial f}{\partial t} + r \cdot \nabla_r f - \frac{1}{\hbar} \nabla_r V \cdot \nabla_k f + \sum_{\alpha = 1}^{\infty} \frac{(-1)^{\alpha +1}}{\hbar 4^{\alpha} (2 \alpha +1)!} \times (\nabla_r \nabla_k)^{2 \alpha +1} V f = \left(\frac{\partial f}{\partial t}\right)_c $$ where, k is the crystal momentum, V is the classical potential, the term on the RHS is the effect of collision, the fourth term on the LHS represents non-local quantum mechanical effects. The standard Boltzmann Transport Equation is obtained when the non-local terms on the LHS disappear in the limit of slow spatial variations.
https://en.wikipedia.org/wiki/Monte_Carlo_methods_for_electron_transport
passage: ## As a homogeneous space Each of the Stiefel manifolds $$ V_k(\mathbb F^n) $$ can be viewed as a homogeneous space for the action of a classical group in a natural manner. Every orthogonal transformation of a k-frame in $$ \R^n $$ results in another k-frame, and any two k-frames are related by some orthogonal transformation. In other words, the orthogonal group O(n) acts transitively on $$ V_k(\R^n). $$ The stabilizer subgroup of a given frame is the subgroup isomorphic to O(n−k) which acts nontrivially on the orthogonal complement of the space spanned by that frame. Likewise the unitary group U(n) acts transitively on $$ V_k(\Complex^n) $$ with stabilizer subgroup U(n−k) and the symplectic group Sp(n) acts transitively on $$ V_k(\mathbb{H}^n) $$ with stabilizer subgroup Sp(n−k).
https://en.wikipedia.org/wiki/Stiefel_manifold
passage: Gödel's second incompleteness theorem shows that any consistent theory powerful enough to encode addition and multiplication of integers cannot prove its own consistency. This presents a challenge to Hilbert's program: - It is not possible to formalize all mathematical true statements within a formal system, as any attempt at such a formalism will omit some true mathematical statements. There is no complete, consistent extension of even Peano arithmetic based on a computably enumerable set of axioms. - A theory such as Peano arithmetic cannot even prove its own consistency, so a restricted "finitistic" subset of it certainly cannot prove the consistency of more powerful theories such as set theory. - There is no algorithm to decide the truth (or provability) of statements in any consistent extension of Peano arithmetic. Strictly speaking, this negative solution to the Entscheidungsproblem appeared a few years after Gödel's theorem, because at the time the notion of an algorithm had not been precisely defined. ## Hilbert's program after Gödel Many current lines of research in mathematical logic, such as proof theory and reverse mathematics, can be viewed as natural continuations of Hilbert's original program. Much of it can be salvaged by changing its goals slightly (Zach 2005), and with the following modifications some of it was successfully completed: - Although it is not possible to formalize all mathematics, it is possible to formalize essentially all the mathematics that anyone uses.
https://en.wikipedia.org/wiki/Hilbert%27s_program
passage: Acting the commutator on ψ gives: $$ \left[ \hat{A}, \hat{B} \right] \psi = \hat{A} \hat{B} \psi - \hat{B} \hat{A} \psi . $$ If ψ is an eigenfunction with eigenvalues a and b for observables A and B respectively, and if the operators commute: $$ \left[ \hat{A}, \hat{B} \right] \psi = 0, $$ then the observables A and B can be measured simultaneously with infinite precision, i.e., uncertainties $$ \Delta A = 0 $$ , $$ \Delta B = 0 $$ simultaneously. ψ is then said to be the simultaneous eigenfunction of A and B. To illustrate this: $$ \begin{align} \left[ \hat{A}, \hat{B} \right] \psi &= \hat{A} \hat{B} \psi - \hat{B} \hat{A} \psi \\ &= \hat{A} (b \psi) - \hat{B} (a \psi) \\ &= ab \psi - ba \psi \\ &= 0 \end{align} $$ It shows that measurement of A and B does not cause any shift of state, i.e., initial and final states are the same (no disturbance due to measurement). Suppose we measure A to get value a. We then measure B to get the value b. We measure A again.

https://en.wikipedia.org/wiki/Operator_%28physics%29
passage: ### Long gamma-ray bursts Most observed events (70%) have a duration of greater than two seconds and are classified as long gamma-ray bursts. Because these events constitute the majority of the population and because they tend to have the brightest afterglows, they have been observed in much greater detail than their short counterparts. Almost every well-studied long gamma-ray burst has been linked to a galaxy with rapid star formation, and in many cases to a core-collapse supernova as well, unambiguously associating long GRBs with the deaths of massive stars. Long GRB afterglow observations, at high redshift, are also consistent with the GRB having originated in star-forming regions. In December 2022, astronomers reported the observation of GRB 211211A for 51 seconds, the first evidence of a long GRB likely associated with mergers of "compact binary objects" such as neutron stars or white dwarfs. Following this, GRB 191019A (2019, 64s) and GRB 230307A (2023, 35s) have been argued to signify an emerging class of long GRB which may originate from these types of progenitor events. ### Ultra-long gamma-ray bursts ulGRB are defined as GRB lasting more than 10,000 seconds, covering the upper range to the limit of the GRB duration distribution. They have been proposed to form a separate class, caused by the collapse of a blue supergiant star, a tidal disruption event or a new-born magnetar.
https://en.wikipedia.org/wiki/Gamma-ray_burst
passage: Then The theorem provides the first insights on how sumsets accumulate. It seems unfortunate that its conclusion stops short of showing $$ \sigma $$ being superadditive. Yet, Schnirelmann provided us with the following results, which sufficed for most of his purpose. Theorem. Let and be subsets of . If , then Theorem. (Schnirelmann) Let . If then there exists such that ## Additive bases A subset $$ A \subseteq \N $$ with the property that $$ A \oplus A \oplus \cdots \oplus A = \N $$ for a finite sum, is called an additive basis, and the least number of summands required is called the degree (sometimes order) of the basis. Thus, the last theorem states that any set with positive Schnirelmann density is an additive basis. In this terminology, the set of squares $$ \mathfrak{G}^2 = \{k^2\}_{k=1}^{\infty} $$ is an additive basis of degree 4. (About an open problem for additive bases, see Erdős–Turán conjecture on additive bases.) ## Mann's theorem Historically the theorems above were pointers to the following result, at one time known as the $$ \alpha + \beta $$ hypothesis. It was used by Edmund Landau and was finally proved by Henry Mann in 1942. Theorem. Let and be subsets of .
https://en.wikipedia.org/wiki/Schnirelmann_density
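A brute-force illustration (mine, not from the source) of the closing statement that the squares form an additive basis of degree 4: every natural number in a small range is a sum of at most four squares, counting 0² as an allowed summand.

```python
from itertools import product

def is_sum_of_four_squares(n):
    bound = int(n ** 0.5)
    return any(a * a + b * b + c * c + d * d == n
               for a, b, c, d in product(range(bound + 1), repeat=4))

assert all(is_sum_of_four_squares(n) for n in range(100))
print("every n < 100 is a sum of at most four squares")
```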
passage: All nilpotent elements are zero divisors. An $$ n\times n $$ matrix $$ A $$ with entries from a field is nilpotent if and only if its characteristic polynomial is $$ t^n $$ . If $$ x $$ is nilpotent, then $$ 1-x $$ is a unit, because $$ x^n=0 $$ entails $$ (1 - x) (1 + x + x^2 + \cdots + x^{n-1}) = 1 - x^n = 1. $$ More generally, the sum of a unit element and a nilpotent element is a unit when they commute. ## Commutative rings The nilpotent elements from a commutative ring $$ R $$ form an ideal $$ \mathfrak{N} $$ ; this is a consequence of the binomial theorem. This ideal is the nilradical of the ring. If $$ \mathfrak{N}=\{0\} $$ , i.e., $$ R $$ has no non-zero nilpotent elements, $$ R $$ is called a reduced ring. Every nilpotent element $$ x $$ in a commutative ring is contained in every prime ideal $$ \mathfrak{p} $$ of that ring, since $$ x^n = 0\in \mathfrak{p} $$ . So $$ \mathfrak{N} $$ is contained in the intersection of all prime ideals.
https://en.wikipedia.org/wiki/Nilpotent
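A concrete check of the "1 − x is a unit" claim above, using a strictly upper-triangular matrix N with N³ = 0 (a standard example of a nilpotent element; the entries are arbitrary).

```python
import numpy as np

N = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])                        # strictly upper triangular

assert np.allclose(np.linalg.matrix_power(N, 3), 0)    # N is nilpotent: N^3 = 0

I = np.eye(3)
finite_geometric_sum = I + N + N @ N                   # 1 + x + x^2 with x = N
assert np.allclose((I - N) @ finite_geometric_sum, I)  # so I - N is invertible
print("(I - N)^{-1} = I + N + N^2, as the identity above predicts")
```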
passage: Shadow volume is a technique used in 3D computer graphics to add shadows to a rendered scene. It was first proposed by Frank Crow in 1977 as the geometry describing the 3D shape of the region occluded from a light source. A shadow volume divides the virtual world in two: areas that are in shadow and areas that are not. The stencil buffer implementation of shadow volumes is generally considered among the most practical general purpose real-time shadowing techniques for use on modern 3D graphics hardware. It has been popularized by the video game Doom 3, and a particular variation of the technique used in this game has become known as Carmack's Reverse. Shadow volumes have become a popular tool for real-time shadowing, alongside the more venerable shadow mapping. The main advantage of shadow volumes is that they are accurate to the pixel (though many implementations have a minor self-shadowing problem along the silhouette edge, see construction below), whereas the accuracy of a shadow map depends on the texture memory allotted to it as well as the angle at which the shadows are cast (at some angles, the accuracy of a shadow map unavoidably suffers). However, the technique requires the creation of shadow geometry, which can be CPU intensive (depending on the implementation). The advantage of shadow mapping is that it is often faster, because shadow volume polygons are often very large in terms of screen space and require a lot of fill time (especially for convex objects), whereas shadow maps do not have this limitation. ## Construction
https://en.wikipedia.org/wiki/Shadow_volume
passage: ### Geometric interpretation The geometric interpretation of De Casteljau's algorithm is straightforward. - Consider a Bézier curve with control points $$ P_0, \dots, P_n $$ . Connecting the consecutive points we create the control polygon of the curve. - Subdivide now each line segment of this polygon with the ratio $$ t : (1-t) $$ and connect the points you get. This way you arrive at the new polygon having one fewer segment. - Repeat the process until you arrive at the single point – this is the point of the curve corresponding to the parameter $$ t $$ . The following picture shows this process for a cubic Bézier curve: Note that the intermediate points that were constructed are in fact the control points for two new Bézier curves, both exactly coincident with the old one. This algorithm not only evaluates the curve at $$ t $$ , but splits the curve into two pieces at $$ t $$ , and provides the equations of the two sub-curves in Bézier form. The interpretation given above is valid for a nonrational Bézier curve.
https://en.wikipedia.org/wiki/De_Casteljau%27s_algorithm
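A direct Python transcription of the geometric description above: repeatedly subdivide each segment of the control polygon in the ratio t : (1 − t) until a single point remains. The cubic control points are arbitrary sample data.

```python
def de_casteljau(points, t):
    pts = [tuple(map(float, p)) for p in points]
    while len(pts) > 1:
        # Replace the polygon by the points dividing each segment at ratio t : (1 - t).
        pts = [tuple((1.0 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

control = [(0, 0), (1, 2), (3, 3), (4, 0)]   # cubic Bezier control polygon
print(de_casteljau(control, 0.0))            # (0.0, 0.0): first control point
print(de_casteljau(control, 1.0))            # (4.0, 0.0): last control point
print(de_casteljau(control, 0.5))            # (2.0, 1.875): point on the curve
```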
passage: The parallelogram defined by the columns of the above matrix is the one with vertices at , , , and , as shown in the accompanying diagram. The absolute value of is the area of the parallelogram, and thus represents the scale factor by which areas are transformed by . The absolute value of the determinant together with the sign becomes the signed area of the parallelogram. The signed area is the same as the usual area, except that it is negative when the angle from the first to the second vector defining the parallelogram turns in a clockwise direction (which is opposite to the direction one would get for the identity matrix). To show that is the signed area, one may consider a matrix containing two vectors and representing the parallelogram's sides. The signed area can be expressed as for the angle θ between the vectors, which is simply base times height, the length of one vector times the perpendicular component of the other.
https://en.wikipedia.org/wiki/Determinant
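A numerical restatement of the signed-area interpretation: the 2 × 2 determinant of the matrix whose columns are u and v equals u_x v_y − u_y v_x, and swapping the columns (reversing the orientation) flips its sign. The vectors are arbitrary examples.

```python
import numpy as np

u, v = np.array([3.0, 1.0]), np.array([1.0, 2.0])
M = np.column_stack([u, v])                     # columns span the parallelogram

signed_area = np.linalg.det(M)
assert np.isclose(signed_area, u[0] * v[1] - u[1] * v[0])                # base times height
assert np.isclose(np.linalg.det(np.column_stack([v, u])), -signed_area)  # orientation flip
print("signed area:", signed_area)              # 5.0 for this pair of vectors
```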
passage: $$ \phi_c = 1 - e^{-\eta_c} $$ is the critical volume fraction, valid for overlapping randomly placed objects. For disks and plates, these are effective volumes and volume fractions. For void ("Swiss-Cheese" model), $$ \phi_c = e^{-\eta_c} $$ is the critical void fraction. For more results on void percolation around ellipsoids and elliptical plates, see. For more ellipsoid percolation values see. For spherocylinders, H/D is the ratio of the height to the diameter of the cylinder, which is then capped by hemispheres. Additional values are given in. For superballs, m is the deformation parameter, the percolation values are given in., In addition, the thresholds of concave-shaped superballs are also determined in For cuboid-like particles (superellipsoids), m is the deformation parameter, more percolation values are given in. ### Void percolation in 3D Void percolation refers to percolation in the space around overlapping objects. Here $$ \phi_c $$ refers to the fraction of the space occupied by the voids (not of the particles) at the critical point, and is related to $$ \eta_c $$ by $$ \phi_c = e^{-\eta_c} $$ .
https://en.wikipedia.org/wiki/Percolation_threshold
passage: ### Tensor product The tensor product $$ V \otimes_F W, $$ or simply $$ V \otimes W, $$ of two vector spaces $$ V $$ and $$ W $$ is one of the central notions of multilinear algebra which deals with extending notions such as linear maps to several variables. A map $$ g : V \times W \to X $$ from the Cartesian product $$ V \times W $$ is called bilinear if $$ g $$ is linear in both variables $$ \mathbf{v} $$ and $$ \mathbf{w}. $$ That is to say, for fixed $$ \mathbf{w} $$ the map $$ \mathbf{v} \mapsto g(\mathbf{v}, \mathbf{w}) $$ is linear in the sense above and likewise for fixed $$ \mathbf{v}. $$ The tensor product is a particular vector space that is a universal recipient of bilinear maps $$ g, $$ as follows. It is defined as the vector space consisting of finite (formal) sums of symbols called tensors $$ \mathbf{v}_1 \otimes \mathbf{w}_1 + \mathbf{v}_2 \otimes \mathbf{w}_2 + \cdots + \mathbf{v}_n \otimes \mathbf{w}_n, $$ subject to the rules $$ \begin{alignat}{6}
https://en.wikipedia.org/wiki/Vector_space
passage: \operatorname{last-index-of} &\equiv \lambda f.\ \operatorname{Y}\ (\lambda r.\lambda n.\lambda l.\ l\ (\lambda h.\lambda t.\lambda d.\ (\lambda i.\ \operatorname{IsZero}\ i\ (f\ h\ n\ \operatorname{zero})\ i)\ (r\ (\operatorname{succ}\ n)\ t))\ \operatorname{zero})\ \operatorname{one} \\ \operatorname{range} &\equiv \lambda f.\lambda z.\ \operatorname{Y}\ (\lambda r.\lambda s.\lambda n.\ \operatorname{IsZero}\ n\ \operatorname{nil}\ (\operatorname{cons}\ (s\ f\ z)\ (r\ (\operatorname{succ}\ s)\ (\operatorname{pred}\ n))))\ \operatorname{zero} \\ \operatorname{repeat} &\equiv \lambda v.\ \operatorname{Y}\ (\lambda r.\lambda n.\ \operatorname{IsZero}\ n\ \operatorname{nil}\ (\operatorname{cons}\ v\ (r\ (\operatorname{pred}\ n)))) \\
https://en.wikipedia.org/wiki/Church_encoding
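The lambda-calculus definitions above rely on the standard Church encodings of booleans, numerals, pairs, and right-fold lists. The sketch below transcribes those building blocks into Python lambdas and reimplements `repeat` with ordinary Python recursion in place of the Y combinator; names follow common textbook conventions rather than the source.

```python
TRUE  = lambda a: lambda b: a
FALSE = lambda a: lambda b: b

ZERO    = lambda f: lambda x: x
SUCC    = lambda n: lambda f: lambda x: f(n(f)(x))
IS_ZERO = lambda n: n(lambda _: FALSE)(TRUE)

PAIR = lambda a: lambda b: lambda sel: sel(a)(b)
FST  = lambda p: p(TRUE)
SND  = lambda p: p(FALSE)
# Predecessor via the usual pair-shifting trick.
PRED = lambda n: FST(n(lambda p: PAIR(SND(p))(SUCC(SND(p))))(PAIR(ZERO)(ZERO)))

NIL  = lambda c: lambda n: n                    # right-fold encoding of lists
CONS = lambda h: lambda t: lambda c: lambda n: c(h)(t(c)(n))

def repeat(v, n):
    """Python recursion standing in for the Y-combinator version of `repeat`."""
    return NIL if IS_ZERO(n)(True)(False) else CONS(v)(repeat(v, PRED(n)))

to_int  = lambda n: n(lambda k: k + 1)(0)       # decode a Church numeral
to_list = lambda l: l(lambda h: lambda t: [h] + t)([])

THREE = SUCC(SUCC(SUCC(ZERO)))
print(to_int(THREE))                            # 3
print(to_list(repeat("x", THREE)))              # ['x', 'x', 'x']
```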
passage: contained in the unit ball, must have all points of level $$ n - 1 $$ contained in the ball of radius $$ 1 - \delta_X(t) < 1. $$ By induction, it follows that all points of level $$ n - k $$ are contained in the ball of radius $$ \left(1 - \delta_X(t)\right)^j, \ j = 1, \ldots, n. $$ If the height $$ n $$ was so large that $$ \left(1 - \delta_X(t)\right)^{n-1} < t / 2, $$ then the two points $$ x_1, x_{-1} $$ of the first level could not be $$ t $$ -separated, contrary to the assumption. This gives the required bound $$ n(t), $$ function of $$ \delta_X(t) $$ only. Using the tree-characterization, Enflo proved that super-reflexive Banach spaces admit an equivalent uniformly convex norm. Trees in a Banach space are a special instance of vector-valued martingales. Adding techniques from scalar martingale theory, Pisier improved Enflo's result by showing that a super-reflexive space
https://en.wikipedia.org/wiki/Reflexive_space
passage: As there are many equivalent ways to define the notion of an affine connection, so there are many different ways to define curvature and torsion. From the Cartan connection point of view, the curvature is the failure of the affine connection to satisfy the Maurer–Cartan equation $$ \mathrm{d}\eta + \tfrac12[\eta\wedge\eta] = 0, $$ where the second term on the left hand side is the wedge product using the Lie bracket in to contract the values. By expanding into the pair and using the structure of the Lie algebra , this left hand side can be expanded into the two formulae $$ \mathrm{d}\theta + \omega\wedge\theta \quad \text{and} \quad \mathrm{d}\omega + \omega\wedge\omega\,, $$ where the wedge products are evaluated using matrix multiplication. The first expression is called the torsion of the connection, and the second is also called the curvature. These expressions are differential 2-forms on the total space of a frame bundle. However, they are horizontal and equivariant, and hence define tensorial objects. These can be defined directly from the induced covariant derivative on as follows. The torsion is given by the formula $$ T^\nabla(X,Y) = \nabla_X Y - \nabla_Y X - [X,Y]. $$ If the torsion vanishes, the connection is said to be torsion-free or symmetric.
https://en.wikipedia.org/wiki/Affine_connection
passage: The mode of inheritance of mutant tumor suppressors is that affected member inherits a defective copy from one parent, and a normal copy from another. Because mutations in tumor suppressors act in a recessive manner (note, however, there are exceptions), the loss of the normal copy creates the cancer phenotype. For instance, individuals that are heterozygous for p53 mutations are often victims of Li-Fraumeni syndrome, and that are heterozygous for Rb mutations develop retinoblastoma. In similar fashion, mutations in the adenomatous polyposis coli gene are linked to adenopolyposis colon cancer, with thousands of polyps in the colon while young, whereas mutations in BRCA1 and BRCA2 lead to early onset of breast cancer. A new idea announced in 2011 is an extreme version of multiple mutations, called chromothripsis by its proponents. This idea, affecting only 2–3% of cases of cancer, although up to 25% of bone cancers, involves the catastrophic shattering of a chromosome into tens or hundreds of pieces and then being patched back together incorrectly. This shattering probably takes place when the chromosomes are compacted during normal cell division, but the trigger for the shattering is unknown. Under this model, cancer arises as the result of a single, isolated event, rather than the slow accumulation of multiple mutations. ### Non-mutagenic carcinogens Many mutagens are also carcinogens, but some carcinogens are not mutagens. Examples of carcinogens that are not mutagens include alcohol and estrogen.
https://en.wikipedia.org/wiki/Carcinogenesis
passage: Its characteristic polynomial is equal to $$ \det (\lambda I_2- \mathfrak{H}) = \lambda^2-\operatorname{tr} \mathfrak{H}\,\lambda + \det \mathfrak{H} = \lambda^2-(a+d)\lambda+(ad-bc) $$ which has roots $$ \lambda_{i} = \frac{(a + d) \pm \sqrt {(a - d)^2 + 4 b c}}{2} = \frac{(a + d) \pm \sqrt {(a + d)^2 - 4(a d - b c)}}{2}=c\gamma_i+d \, . $$ ## Simple Möbius transformations and composition A Möbius transformation can be composed as a sequence of simple transformations. The following simple transformations are also Möbius transformations: - $$ f(z) = z+b\quad (a=1, c=0, d=1) $$ is a translation. - $$ f(z) = az \quad (b=0, c=0, d=1) $$ is a combination of a homothety (uniform scaling) and a rotation.
https://en.wikipedia.org/wiki/M%C3%B6bius_transformation
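A small NumPy check (with illustrative coefficient values chosen here) of two facts tied to the matrix form above: composing Möbius transformations corresponds to multiplying their 2 × 2 coefficient matrices, and the matrix eigenvalues are the roots of λ² − (a + d)λ + (ad − bc).

```python
import numpy as np

def moebius(M, z):
    a, b, c, d = M.ravel()
    return (a * z + b) / (c * z + d)

T = np.array([[1.0, 2.0 + 1.0j], [0.0, 1.0]])   # translation f(z) = z + (2 + i)
S = np.array([[0.5j, 0.0], [0.0, 1.0]])         # scaling + rotation g(z) = 0.5i z
z = 1.3 - 0.7j

# The matrix product S @ T represents the composition g(f(z)).
assert np.isclose(moebius(S @ T, z), moebius(S, moebius(T, z)))

a, b, c, d = (S @ T).ravel()
eigenvalues = np.linalg.eigvals(S @ T)
roots = np.roots([1.0, -(a + d), a * d - b * c])   # characteristic polynomial
assert np.allclose(sorted(eigenvalues, key=abs), sorted(roots, key=abs))
print("composition-as-matrix-product and eigenvalue checks pass")
```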
passage: Above the surface of the earth, the air temperature near the center of the cyclone is increasingly colder than the surrounding environment. These characteristics are the direct opposite of those found in their counterparts, tropical cyclones; thus, they are sometimes called "cold-core lows". Various charts can be examined to check the characteristics of a cold-core system with height, such as the chart, which is at about altitude. Cyclone phase diagrams are used to tell whether a cyclone is tropical, subtropical, or extratropical. ## Cyclone evolution There are two models of cyclone development and life cycles in common use: the Norwegian model and the Shapiro–Keyser model. ### Norwegian cyclone model Of the two theories on extratropical cyclone structure and life cycle, the older is the Norwegian Cyclone Model, developed during World War I. In this theory, cyclones develop as they move up and along a frontal boundary, eventually occluding and reaching a barotropically cold environment. It was developed completely from surface-based weather observations, including descriptions of clouds found near frontal boundaries. This theory still retains merit, as it is a good description for extratropical cyclones over continental landmasses. ### Shapiro–Keyser model A second competing theory for extratropical cyclone development over the oceans is the Shapiro–Keyser model, developed in 1990.
https://en.wikipedia.org/wiki/Extratropical_cyclone
passage: This is due to distinct facial structures associated with the condition that are not adequately represented in training datasets. More broadly, facial recognition systems tend to overlook diverse physical characteristics related to disabilities. The lack of representative data for individuals with varying disabilities further emphasizes the need for inclusive algorithmic designs to mitigate bias and improve accuracy. Additionally, facial expression recognition technologies often fail to accurately interpret the emotional states of individuals with intellectual disabilities. This shortcoming can hinder effective communication and interaction, underscoring the necessity for systems trained on diverse datasets that include individuals with intellectual disabilities. Furthermore, biases in facial recognition algorithms can lead to discriminatory outcomes for people with disabilities. For example, certain facial features or asymmetries may result in misidentification or exclusion, highlighting the importance of developing accessible and fair biometric systems. ### Advancements in fairness and mitigation strategies Efforts to address these biases include designing algorithms specifically for fairness. A notable study introduced a method to learn fair face representations by using a progressive cross-transformer model. This approach highlights the importance of balancing accuracy across demographic groups while avoiding performance drops in specific populations. Additionally, targeted dataset collection has been shown to improve racial equity in facial recognition systems. By prioritizing diverse data inputs, researchers demonstrated measurable reductions in performance disparities between racial groups.
https://en.wikipedia.org/wiki/Facial_recognition_system
passage: The variance-covariance matrix (or simply covariance matrix) of $$ \scriptstyle\hat\beta $$ is equal to $$ \operatorname{Var}[\, \hat\beta \mid X \,] = \sigma^2\left(X ^\operatorname{T} X\right)^{-1} = \sigma^2 Q. $$ In particular, the standard error of each coefficient $$ \scriptstyle\hat\beta_j $$ is equal to square root of the j-th diagonal element of this matrix. The estimate of this standard error is obtained by replacing the unknown quantity σ2 with its estimate s2. Thus, $$ \widehat{\operatorname{s.\!e.}}(\hat{\beta}_j) = \sqrt{s^2 \left(X ^\operatorname{T} X\right)^{-1}_{jj}} $$ It can also be easily shown that the estimator $$ \scriptstyle\hat\beta $$ is uncorrelated with the residuals from the model: $$ \operatorname{Cov}[\, \hat\beta,\hat\varepsilon \mid X\,] = 0. $$ The Gauss–Markov theorem states that under the spherical errors assumption (that is, the errors should be uncorrelated and homoscedastic) the estimator $$ \scriptstyle\hat\beta $$ is efficient in the class of linear unbiased estimators. This is called the best linear unbiased estimator (BLUE).
https://en.wikipedia.org/wiki/Ordinary_least_squares
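A NumPy walkthrough of the estimates above: compute the OLS coefficients, then the variance estimate s², the covariance matrix s²(XᵀX)⁻¹, and each coefficient's standard error as the square root of the corresponding diagonal entry. The simulated design matrix and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])   # intercept + 2 regressors
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.3, size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y                 # OLS estimator
residuals = y - X @ beta_hat
s2 = residuals @ residuals / (n - k)         # estimate of sigma^2
cov_beta = s2 * XtX_inv                      # estimated Var(beta_hat | X)
std_errors = np.sqrt(np.diag(cov_beta))      # s.e.(beta_j)

print("beta_hat:", beta_hat)
print("standard errors:", std_errors)
```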
passage: An affine basis define barycentric coordinates for every point. Many other coordinates systems can be defined on a Euclidean space of dimension , in the following way. Let be a homeomorphism (or, more often, a diffeomorphism) from a dense open subset of to an open subset of $$ \R^n. $$ The coordinates of a point of are the components of . The polar coordinate system (dimension 2) and the spherical and cylindrical coordinate systems (dimension 3) are defined this way. For points that are outside the domain of , coordinates may sometimes be defined as the limit of coordinates of neighbour points, but these coordinates may be not uniquely defined, and may be not continuous in the neighborhood of the point. For example, for the spherical coordinate system, the longitude is not defined at the pole, and on the antimeridian, the longitude passes discontinuously from –180° to +180°. This way of defining coordinates extends easily to other mathematical structures, and in particular to manifolds. ## Isometries An isometry between two metric spaces is a bijection preserving the distance, that is $$ d(f(x), f(y))= d(x,y). $$ In the case of a Euclidean vector space, an isometry that maps the origin to the origin preserves the norm $$ \|f(x)\| = \|x\|, $$ since the norm of a vector is its distance from the zero vector.
https://en.wikipedia.org/wiki/Euclidean_space
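As a small illustration of the preceding passage, and not part of the source article, the sketch below implements the spherical coordinate map and its inverse (where the arctangent-based longitude exhibits the antimeridian discontinuity mentioned above) and checks that a rotation about the origin, an isometry fixing the origin, preserves the norm; the function names are illustrative.

```python
import numpy as np

def spherical_to_cartesian(r, theta, phi):
    """Map (radius r, polar angle theta, longitude phi) to Cartesian (x, y, z)."""
    return np.array([r * np.sin(theta) * np.cos(phi),
                     r * np.sin(theta) * np.sin(phi),
                     r * np.cos(theta)])

def cartesian_to_spherical(p):
    """Inverse map on its domain; the longitude jumps from -pi to +pi across the antimeridian."""
    x, y, z = p
    r = np.linalg.norm(p)
    theta = np.arccos(z / r)
    phi = np.arctan2(y, x)
    return r, theta, phi

p = spherical_to_cartesian(2.0, 1.0, 3.0)
assert np.allclose(cartesian_to_spherical(p), (2.0, 1.0, 3.0))

# A rotation about the z-axis is an isometry fixing the origin, so it preserves the norm.
a = 0.7
R = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a),  np.cos(a), 0.0],
              [0.0,        0.0,       1.0]])
assert np.isclose(np.linalg.norm(R @ p), np.linalg.norm(p))
```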
passage: Robbed bits were translated to changes in contact states (opens and closures) by electronics in the channel bank hardware. This allowed direct current E and M signaling, or dial pulses, to be sent between electromechanical switches over a pure digital carrier which did not have DC continuity. ### Noise Bell System installations typically had alarm bells, gongs, or chimes to announce alarms calling attention to a failed switch element. A trouble reporting card system was connected to switch common control elements. These trouble reporting systems punctured cardboard cards with a code that logged the nature of a failure. ### Maintenance tasks Electromechanical switching systems required sources of electricity in the form of direct current (DC), as well as alternating ring current (AC), which were generated on-site with mechanical generators. In addition, telephone switches required adjustment of many mechanical parts. Unlike modern switches, a circuit connecting a dialed call through an electromechanical switch had DC continuity within the local exchange area via metallic conductors. The design and maintenance procedures of all systems involved methods to ensure that subscribers did not experience undue changes in the quality of the service or notice failures. A variety of tools referred to as make-busys were plugged into electromechanical switch elements upon failure and during repairs. A make-busy identified the part being worked on as in-use, causing the switching logic to route around it. A similar tool was called a TD tool. Delinquent subscribers had their service temporarily denied (TDed).
https://en.wikipedia.org/wiki/Telephone_exchange
passage: (Such an increase in human brain size is equivalent to each generation having 125,000 more neurons than their parents.) It is believed that H. erectus and H. ergaster were the first to use fire and complex tools, and were the first of the hominin line to leave Africa, spreading throughout Africa, Asia, and Europe between . According to the recent African origin theory, modern humans evolved in Africa possibly from H. heidelbergensis, H. rhodesiensis or H. antecessor and migrated out of the continent some 50,000 to 100,000 years ago, gradually replacing local populations of H. erectus, Denisova hominins, H. floresiensis, H. luzonensis and H. neanderthalensis, whose ancestors had left Africa in earlier migrations. Archaic Homo sapiens, the forerunner of anatomically modern humans, evolved in the Middle Paleolithic between 400,000 and 250,000 years ago. Recent DNA evidence suggests that several haplotypes of Neanderthal origin are present among all non-African populations, and Neanderthals and other hominins, such as Denisovans, may have contributed up to 6% of their genome to present-day humans, suggestive of a limited interbreeding between these species. According to some anthropologists, the transition to behavioral modernity with the development of symbolic culture, language, and specialized lithic technology happened around 50,000 years ago (beginning of the Upper Paleolithic), although others point to evidence of a gradual change over a longer time span during the Middle Paleolithic. Homo sapiens is the only extant species of its genus, Homo.
https://en.wikipedia.org/wiki/Human_evolution
passage: Marr's 2.5D sketch assumes that a depth map is constructed, and that this map is the basis of 3D shape perception. However, both stereoscopic and pictorial perception, as well as monocular viewing, make clear that the perception of 3D shape precedes, and does not rely on, the perception of the depth of points. It is not clear how a preliminary depth map could, in principle, be constructed, nor how this would address the question of figure-ground organization, or grouping. The role of perceptual organizing constraints, overlooked by Marr, in the production of 3D shape percepts from binocularly-viewed 3D objects has been demonstrated empirically for the case of 3D wire objects. For a more detailed discussion, see Pizlo (2008). A more recent, alternative framework proposes that vision is composed instead of the following three stages: encoding, selection, and decoding. Encoding is to sample and represent visual inputs (e.g., to represent visual inputs as neural activities in the retina). Selection, or attentional selection, is to select a tiny fraction of input information for further processing, e.g., by shifting gaze to an object or visual location to better process the visual signals at that location. Decoding is to infer or recognize the selected input signals, e.g., to recognize the object at the center of gaze as somebody's face.
https://en.wikipedia.org/wiki/Visual_perception
passage: Additionally, the portion of a caching protocol where individual writes are deferred to a batch of writes is a form of buffering. The portion of a caching protocol where individual reads are deferred to a batch of reads is also a form of buffering, although this form may negatively impact the performance of at least the initial reads (even though it may positively impact the performance of the sum of the individual reads). In practice, caching almost always involves some form of buffering, while strict buffering does not involve caching. A buffer is a temporary memory location that is traditionally used because CPU instructions cannot directly address data stored in peripheral devices. Thus, addressable memory is used as an intermediate stage. Additionally, such a buffer may be feasible when a large block of data is assembled or disassembled (as required by a storage device), or when data may be delivered in a different order than that in which it is produced. Also, a whole buffer of data is usually transferred sequentially (for example to hard disk), so buffering itself sometimes increases transfer performance or reduces the variation or jitter of the transfer's latency as opposed to caching where the intent is to reduce the latency. These benefits are present even if the buffered data are written to the buffer once and read from the buffer once. A cache also increases transfer performance. A part of the increase similarly comes from the possibility that multiple small transfers will combine into one large block.
https://en.wikipedia.org/wiki/Cache_%28computing%29
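As a hedged sketch of the write-deferral idea described above (the class names and the `write_batch` method are invented for illustration, not an API from the source), the snippet below batches individual writes and flushes them to a backing store in larger sequential transfers:

```python
class WriteBuffer:
    """Minimal write buffer: individual writes are deferred and flushed
    to the backing store as one batch, as described in the passage."""

    def __init__(self, backing_store, batch_size=4):
        self.backing_store = backing_store   # any object exposing write_batch(list)
        self.batch_size = batch_size
        self.pending = []

    def write(self, record):
        self.pending.append(record)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.pending:
            self.backing_store.write_batch(self.pending)  # one larger sequential transfer
            self.pending = []

class FakeDisk:
    """Stand-in for a storage device; each write_batch call models one transfer."""
    def __init__(self):
        self.transfers = []
    def write_batch(self, records):
        self.transfers.append(list(records))

disk = FakeDisk()
buf = WriteBuffer(disk, batch_size=3)
for i in range(7):
    buf.write(i)
buf.flush()
print(disk.transfers)   # [[0, 1, 2], [3, 4, 5], [6]]: three transfers instead of seven writes
```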
passage: Ultimate tensile strength (also called UTS, tensile strength, TS, ultimate strength or in notation) is the maximum stress that a material can withstand while being stretched or pulled before breaking. In brittle materials, the ultimate tensile strength is close to the yield point, whereas in ductile materials, the ultimate tensile strength can be higher. The ultimate tensile strength is usually found by performing a tensile test and recording the engineering stress versus strain. The highest point of the stress–strain curve is the ultimate tensile strength and has units of stress. The equivalent point for the case of compression, instead of tension, is called the compressive strength. Tensile strengths are rarely of any consequence in the design of ductile members, but they are important with brittle members. They are tabulated for common materials such as alloys, composite materials, ceramics, plastics, and wood. ## Definition The ultimate tensile strength of a material is an intensive property; therefore its value does not depend on the size of the test specimen. However, depending on the material, it may be dependent on other factors, such as the preparation of the specimen, the presence or otherwise of surface defects, and the temperature of the test environment and material. Some materials break very sharply, without plastic deformation, in what is called a brittle failure. Others, which are more ductile, including most metals, experience some plastic deformation and possibly necking before fracture. Tensile strength is defined as a stress, which is measured as force per unit area.
https://en.wikipedia.org/wiki/Ultimate_tensile_strength
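The passage above defines the UTS as the highest point of the engineering stress–strain curve; as a minimal illustration (the sample load values and the function name are assumptions for the example, not data from the source), it can be computed from a recorded load history and the original cross-sectional area:

```python
def ultimate_tensile_strength(forces_newton, original_area_m2):
    """Engineering stress uses the original cross-sectional area; the UTS is the
    maximum engineering stress recorded during the tensile test."""
    stresses = [f / original_area_m2 for f in forces_newton]   # stress in pascals
    return max(stresses)

# Illustrative test record: loads (N) for a specimen with a 1e-5 m^2 (10 mm^2) cross-section;
# the load rises, peaks, then falls as necking begins.
forces = [0, 2500, 4000, 5200, 5500, 5300, 4800]
print(ultimate_tensile_strength(forces, 1e-5) / 1e6, "MPa")   # peak stress: 550 MPa
```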
passage: ### Deriving the device-specific impedances What follows below is a derivation of impedance for each of the three basic circuit elements: the resistor, the capacitor, and the inductor. Although the idea can be extended to define the relationship between the voltage and current of any arbitrary signal, these derivations assume sinusoidal signals. In fact, this applies to any arbitrary periodic signals, because these can be approximated as a sum of sinusoids through Fourier analysis. Resistor For a resistor, there is the relation $$ v_\text{R} \mathord\left( t \right) = i_\text{R} \mathord\left( t \right) R $$ which is Ohm's law. Considering the voltage signal to be $$ v_\text{R}(t) = V_p \sin(\omega t) $$ the current through the resistor is $$ i_\text{R}(t) = \frac{v_\text{R}(t)}{R} = I_p \sin(\omega t), \qquad I_p = \frac{V_p}{R}, $$ so it follows that $$ \frac{v_\text{R} \mathord\left( t \right)}{i_\text{R} \mathord\left( t \right)} = \frac{V_p \sin(\omega t)}{I_p \sin \mathord\left( \omega t \right)} = R $$ This says that the ratio of AC voltage amplitude to alternating current (AC) amplitude across a resistor is $$ R $$ , and that the AC voltage leads the current across a resistor by 0 degrees.
https://en.wikipedia.org/wiki/Electrical_impedance
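As a hedged sketch following the derivation above, the snippet below evaluates the complex impedances Z_R = R, Z_L = jωL and Z_C = 1/(jωC); the resistor case is the result just derived, while the inductor and capacitor expressions are the standard results the passage goes on to derive by the same method. The component values and function name are illustrative.

```python
import cmath
import math

def impedance(element, value, omega):
    """Complex impedance of an ideal element at angular frequency omega (rad/s)."""
    if element == "R":
        return complex(value)              # Z_R = R
    if element == "L":
        return 1j * omega * value          # Z_L = j*omega*L
    if element == "C":
        return 1.0 / (1j * omega * value)  # Z_C = 1/(j*omega*C)
    raise ValueError(f"unknown element {element!r}")

# Series RLC at 50 Hz (illustrative values): impedances simply add in series.
omega = 2 * math.pi * 50
Z = (impedance("R", 100.0, omega)
     + impedance("L", 0.1, omega)
     + impedance("C", 10e-6, omega))
print(abs(Z), math.degrees(cmath.phase(Z)))   # magnitude in ohms, phase in degrees
```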
passage: In mathematics and its applications, the signed distance function or signed distance field (SDF) is the orthogonal distance of a given point x to the boundary of a set Ω in a metric space (such as the surface of a geometric shape), with the sign determined by whether or not x is in the interior of Ω. The function has positive values at points x inside Ω, it decreases in value as x approaches the boundary of Ω where the signed distance function is zero, and it takes negative values outside of Ω. However, the alternative convention is also sometimes taken instead (i.e., negative inside Ω and positive outside). The concept also sometimes goes by the name oriented distance function/field. ## Definition Let Ω be a subset of a metric space X with metric d, and $$ \partial\Omega $$ be its boundary. The distance between a point x of X and the subset $$ \partial\Omega $$ of X is defined as usual as $$ d(x, \partial \Omega) = \inf_{y \in \partial \Omega}d(x, y), $$ where $$ \inf $$ denotes the infimum.
https://en.wikipedia.org/wiki/Signed_distance_function
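For a shape whose boundary distance has a closed form, the infimum above can be evaluated directly. The sketch below gives the signed distance to the boundary of a disk, using the passage's convention of positive values inside Ω; the function name and parameters are illustrative, not from the source.

```python
import math

def signed_distance_disk(x, y, cx=0.0, cy=0.0, r=1.0):
    """Signed distance from (x, y) to the boundary of the disk Omega of radius r
    centred at (cx, cy): positive inside Omega, zero on the boundary, negative outside.
    (Graphics code often uses the opposite sign convention mentioned in the passage.)"""
    dist_to_centre = math.hypot(x - cx, y - cy)
    return r - dist_to_centre

print(signed_distance_disk(0.0, 0.0))   #  1.0  (centre lies 1 unit inside the boundary)
print(signed_distance_disk(1.0, 0.0))   #  0.0  (on the boundary)
print(signed_distance_disk(2.0, 0.0))   # -1.0  (1 unit outside)
```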
passage: In computability theory, the halting problem is the problem of determining, from a description of an arbitrary computer program and an input, whether the program will finish running, or continue to run forever. The halting problem is undecidable, meaning that no general algorithm exists that solves the halting problem for all possible program–input pairs. The problem comes up often in discussions of computability since it demonstrates that some functions are mathematically definable but not computable. A key part of the formal statement of the problem is a mathematical definition of a computer and program, usually via a Turing machine. The proof then shows, for any program f that might determine whether programs halt, that a "pathological" program g exists for which f makes an incorrect determination. Specifically, g is the program that, when called with some input, passes its own source and its input to f and does the opposite of what f predicts g will do. The behavior of f on g shows undecidability as it means no program f will solve the halting problem in every possible case. ## Background The halting problem is a decision problem about properties of computer programs on a fixed Turing-complete model of computation, i.e., all programs that can be written in some given programming language that is general enough to be equivalent to a Turing machine. The problem is to determine, given a program and an input to the program, whether the program will eventually halt when run with that input. In this abstract framework, there are no resource limitations on the amount of memory or time required for the program's execution; it can take arbitrarily long and use an arbitrary amount of storage space before halting. The question is simply whether the given program will ever halt on a particular input.
https://en.wikipedia.org/wiki/Halting_problem
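The diagonal construction described above can be sketched in Python. Here `halts` stands for the hypothetical decider f, which by the theorem cannot actually be implemented, and `g` is the pathological program; the handling of program source as a plain string is a simplification for illustration.

```python
# Sketch of the argument: assume (for contradiction) that halts(program_source, input_data)
# correctly decides whether the given program halts on the given input.

def halts(program_source: str, input_data: str) -> bool:
    """Hypothetical total decider for the halting problem; it cannot actually exist."""
    raise NotImplementedError

PATHOLOGICAL_SOURCE = "the source code of the function g below"

def g(input_data: str) -> None:
    """The 'pathological' program: it asks the decider about itself and does the opposite."""
    if halts(PATHOLOGICAL_SOURCE, input_data):
        while True:   # decider predicts "halts", so loop forever
            pass
    else:
        return        # decider predicts "runs forever", so halt immediately

# Whatever halts(PATHOLOGICAL_SOURCE, x) would answer, g behaves the other way on input x,
# so no implementation of `halts` can be correct on every program-input pair.
```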
passage: In most cases, the detection and correction are performed by the memory controller; sometimes, the required logic is transparently implemented within DRAM chips or modules, enabling the ECC memory functionality for otherwise ECC-incapable systems. The extra memory bits are used to record parity and to enable missing data to be reconstructed by error-correcting code (ECC). Parity allows the detection of all single-bit errors (actually, any odd number of wrong bits). The most common error-correcting code, a SECDED Hamming code, allows a single-bit error to be corrected and, in the usual configuration, with an extra parity bit, double-bit errors to be detected. Recent studies give widely varying error rates, differing by over seven orders of magnitude and ranging from roughly one bit error per hour per gigabyte of memory to one bit error per century per gigabyte of memory. Schroeder, Bianca et al. (2009). "DRAM errors in the wild: a large-scale field study". Proceedings of the Eleventh International Joint Conference on Measurement and Modeling of Computer Systems, pp. 193–204. The Schroeder et al. 2009 study reported a 32% chance that a given computer in their study would suffer from at least one correctable error per year, and provided evidence that most such errors are intermittent hard errors rather than soft errors and that trace amounts of radioactive material that had gotten into the chip packaging were emitting alpha particles and corrupting the data.
https://en.wikipedia.org/wiki/Dynamic_random-access_memory
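As a hedged illustration of the Hamming-code idea mentioned above (real SECDED memory adds an overall parity bit on top of this and operates on much wider words; the function names are illustrative), the sketch below encodes four data bits with a Hamming(7,4) code and corrects a single flipped bit via the syndrome:

```python
def hamming74_encode(d):
    """Encode 4 data bits d = [d1, d2, d3, d4] into a 7-bit Hamming(7,4) codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]          # codeword positions 1..7

def hamming74_correct(c):
    """Recompute the parities; the syndrome is the 1-based position of a single flipped bit."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1                     # flip the erroneous bit back
    return c

word = hamming74_encode([1, 0, 1, 1])
corrupted = list(word)
corrupted[5] ^= 1                                # single-bit error at position 6
assert hamming74_correct(corrupted) == word      # the error is located and corrected
```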
passage: #### Homogeneous and heterogeneous mixtures In chemistry, a heterogeneous mixture consists of either or both of 1) multiple states of matter or 2) hydrophilic and hydrophobic substances in one mixture; an example of the latter would be a mixture of water, octane, and silicone grease. Heterogeneous solids, liquids, and gases may be made homogeneous by melting, stirring, or by allowing time to pass for diffusion to distribute the molecules evenly. For example, adding dye to water will create a heterogeneous solution at first, but it will become homogeneous over time. Entropy allows for heterogeneous substances to become homogeneous over time. A heterogeneous mixture is a mixture of two or more compounds. Examples are: mixtures of sand and water or sand and iron filings, a conglomerate rock, water and oil, a salad, trail mix, and concrete (not cement). A mixture can be determined to be homogeneous once it has settled and its composition is uniform throughout, so that the liquid, gas, or solid has a single color and consistency. Various models have been proposed to model the concentrations in different phases. The phenomena to be considered are mass rates and reaction. #### Homogeneous and heterogeneous reactions Homogeneous reactions are chemical reactions in which the reactants and products are in the same phase, while heterogeneous reactions have reactants in two or more phases. Reactions that take place on the surface of a catalyst of a different phase are also heterogeneous. A reaction between two gases or two miscible liquids is homogeneous.
https://en.wikipedia.org/wiki/Homogeneity_and_heterogeneity
passage: $$ f|_{x_i = b} = f (x_1, \ldots, x_{i-1}, b, x_{i+1}, \ldots, x_n). $$ In general, the composition of multivariate functions may involve several other functions as arguments, as in the definition of primitive recursive function. Given $$ f $$ , an n-ary function, and n m-ary functions $$ g_1, \ldots, g_n $$ , the composition of $$ f $$ with $$ g_1, \ldots, g_n $$ is the m-ary function $$ h(x_1,\ldots,x_m) = f(g_1(x_1,\ldots,x_m),\ldots,g_n(x_1,\ldots,x_m)). $$ This is sometimes called the generalized composite or superposition of f with $$ g_1, \ldots, g_n $$ . The partial composition in only one argument mentioned previously can be instantiated from this more general scheme by setting all argument functions except one to be suitably chosen projection functions. Here $$ g_1, \ldots, g_n $$ can be seen as a single vector/tuple-valued function in this generalized scheme, in which case this is precisely the standard definition of function composition. A set of finitary operations on some base set X is called a clone if it contains all projections and is closed under generalized composition. A clone generally contains operations of various arities.
https://en.wikipedia.org/wiki/Function_composition
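A minimal sketch of the generalized composition defined above (the helper name is illustrative, not from the source): given an n-ary f and n m-ary functions g_1, ..., g_n, it builds the m-ary function h, and ordinary composition falls out as the n = m = 1 special case.

```python
def generalized_compose(f, *gs):
    """Return h(x1, ..., xm) = f(g1(x1, ..., xm), ..., gn(x1, ..., xm)),
    the generalized composite of f with g1, ..., gn."""
    def h(*xs):
        return f(*(g(*xs) for g in gs))
    return h

# Example: f is binary (n = 2) and each g_i is binary (m = 2),
# so h(x, y) = f(g1(x, y), g2(x, y)) = (x + y) * (x - y) = x^2 - y^2.
f = lambda a, b: a * b
g1 = lambda x, y: x + y
g2 = lambda x, y: x - y
h = generalized_compose(f, g1, g2)
assert h(5, 3) == 16

# Ordinary composition is the special case n = m = 1.
square_then_negate = generalized_compose(lambda a: -a, lambda x: x * x)
assert square_then_negate(4) == -16
```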