https://en.wikipedia.org/wiki/Hilbert%27s%20eleventh%20problem
Hilbert's eleventh problem is one of David Hilbert's list of open mathematical problems posed at the Second International Congress of Mathematicians in Paris in 1900. Furthering the theory of quadratic forms, he stated the problem as follows: Our present knowledge of the theory of quadratic number fields puts us in a position to attack successfully the theory of quadratic forms with any number of variables and with any algebraic numerical coefficients. This leads in particular to the interesting problem: to solve a given quadratic equation with algebraic numerical coefficients in any number of variables by integral or fractional numbers belonging to the algebraic realm of rationality determined by the coefficients. As stated by Kaplansky, "The 11th Problem is simply this: classify quadratic forms over algebraic number fields." This is exactly what Minkowski did for quadratic forms with fractional coefficients. A quadratic form (not quadratic equation) is any polynomial in which each term has total degree two in the variables. The general form in two variables is ax² + bxy + cy². (All coefficients must be whole numbers.) A given quadratic form is said to represent a natural number if substituting specific numbers for the variables gives the number. Gauss and those who followed found that if we change variables in certain ways, the new quadratic form represents the same natural numbers as the old, but in a different, more easily interpreted form. He used this theory of equivalent quadratic forms to prove results in number theory. Lagrange, for example, had shown that any natural number can be expressed as the sum of four squares. Gauss proved this using his theory of equivalence relations by showing that the quadratic form w² + x² + y² + z² represents all natural numbers. As mentioned earlier, Minkowski created and proved a similar theory for quadratic forms that had fractions as coefficients. Hilbert's eleventh problem asks for a similar theory: that is, a mode of classification for quadratic forms over algebraic number fields.
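To make "represents" concrete, here is a small brute-force Python check (our own illustration, not part of Hilbert's problem) that the form w² + x² + y² + z² represents every natural number in a small range, as Lagrange's theorem asserts:

    # Brute-force check that w^2 + x^2 + y^2 + z^2 represents each n,
    # illustrating "representation" by a quadratic form.
    import itertools

    def represents(n: int) -> bool:
        """True if n = w^2 + x^2 + y^2 + z^2 for some non-negative integers."""
        bound = int(n ** 0.5) + 1
        return any(w*w + x*x + y*y + z*z == n
                   for w, x, y, z in itertools.product(range(bound), repeat=4))

    assert all(represents(n) for n in range(200))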
https://en.wikipedia.org/wiki/Male
Male (symbol: ♂) is the sex of an organism that produces the gamete (sex cell) known as sperm, which fuses with the larger female gamete, or ovum, in the process of fertilization. A male organism cannot reproduce sexually without access to at least one ovum from a female, but some organisms can reproduce both sexually and asexually. Most male mammals, including male humans, have a Y chromosome, which codes for the production of larger amounts of testosterone to develop male reproductive organs. In humans, the word male can also be used to refer to gender, in the social sense of gender role or gender identity. The use of "male" in regard to sex and gender has been subject to discussion. Overview The existence of separate sexes has evolved independently at different times and in different lineages, an example of convergent evolution. The repeated pattern is a progression from sexual reproduction in isogamous species, with two or more mating types whose gametes are identical in form and behavior (but different at the molecular level), to anisogamous species with gametes of male and female types, to oogamous species in which the female gamete is very much larger than the male's and has no ability to move. There is a good argument that this pattern was driven by the physical constraints on the mechanisms by which two gametes get together as required for sexual reproduction. Accordingly, sex is defined across species by the type of gametes produced (i.e., spermatozoa vs. ova) and differences between males and females in one lineage are not always predictive of differences in another. Male/female dimorphism between organisms or reproductive organs of different sexes is not limited to animals; male gametes are produced by chytrids, diatoms and land plants, among others. In land plants, female and male designate not only the female and male gamete-producing organisms and structures but also the structures of the sporophytes that give rise to male and female plants. Evolution The evolution of ani
https://en.wikipedia.org/wiki/Thermodynamic%20versus%20kinetic%20reaction%20control
Thermodynamic reaction control or kinetic reaction control in a chemical reaction can decide the composition in a reaction product mixture when competing pathways lead to different products and the reaction conditions influence the selectivity or stereoselectivity. The distinction is relevant when product A forms faster than product B because the activation energy for product A is lower than that for product B, yet product B is more stable. In such a case A is the kinetic product and is favoured under kinetic control and B is the thermodynamic product and is favoured under thermodynamic control. The conditions of the reaction, such as temperature, pressure, or solvent, affect which reaction pathway may be favored: either the kinetically controlled or the thermodynamically controlled one. Note that this is only true if the activation energies of the two pathways differ, with one pathway having a lower Ea (energy of activation) than the other. Prevalence of thermodynamic or kinetic control determines the final composition of the product when these competing reaction pathways lead to different products. The reaction conditions as mentioned above influence the selectivity of the reaction - i.e., which pathway is taken. Asymmetric synthesis is a field in which the distinction between kinetic and thermodynamic control is especially important. Because pairs of enantiomers have, for all intents and purposes, the same Gibbs free energy, thermodynamic control will produce a racemic mixture by necessity. Thus, any catalytic reaction that provides product with nonzero enantiomeric excess is under at least partial kinetic control. (In many stoichiometric asymmetric transformations, the enantiomeric products are actually formed as a complex with the chirality source before the workup stage of the reaction, technically making the reaction a diastereoselective one. Although such reactions are still usually kinetically controlled, thermodynamic control is at least possible, in prin
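As a rough numerical illustration of the A/B competition described above, the following Python sketch compares the two regimes, assuming Arrhenius behaviour with equal pre-exponential factors (a simplifying assumption of ours; the energy values are invented):

    # Sketch: relative product ratios under kinetic vs. thermodynamic control,
    # assuming equal pre-exponential factors (a simplification).
    import math

    R = 8.314  # J/(mol*K)

    def kinetic_ratio(ea_a: float, ea_b: float, temp: float) -> float:
        """[A]/[B] under kinetic control: set by the activation-energy gap."""
        return math.exp(-(ea_a - ea_b) / (R * temp))

    def thermodynamic_ratio(g_a: float, g_b: float, temp: float) -> float:
        """[A]/[B] at equilibrium: set by the product-stability gap."""
        return math.exp(-(g_a - g_b) / (R * temp))

    # A forms faster (lower Ea) but B is more stable (lower G):
    print(kinetic_ratio(50e3, 60e3, 298))          # >> 1: A dominates
    print(thermodynamic_ratio(-10e3, -20e3, 298))  # << 1: B dominates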
https://en.wikipedia.org/wiki/Cointegration
Cointegration is a statistical property of a collection of time series variables. First, all of the series must be integrated of order d (see Order of integration). Next, if a linear combination of this collection is integrated of order less than d, then the collection is said to be co-integrated. Formally, if (X,Y,Z) are each integrated of order d, and there exist coefficients a,b,c such that aX + bY + cZ is integrated of order less than d, then X, Y, and Z are cointegrated. Cointegration has become an important property in contemporary time series analysis. Time series often have trends—either deterministic or stochastic. In an influential paper, Charles Nelson and Charles Plosser (1982) provided statistical evidence that many US macroeconomic time series (like GNP, wages, employment, etc.) have stochastic trends. Introduction If two or more series are individually integrated (in the time series sense) but some linear combination of them has a lower order of integration, then the series are said to be cointegrated. A common example is where the individual series are first-order integrated (I(1)) but some (cointegrating) vector of coefficients exists to form a stationary linear combination of them. For instance, a stock market index and the price of its associated futures contract move through time, each roughly following a random walk. Testing the hypothesis that there is a statistically significant connection between the futures price and the spot price could now be done by testing for the existence of a cointegrated combination of the two series. History The first to introduce and analyse the concept of spurious—or nonsense—regression was Udny Yule in 1926. Before the 1980s, many economists used linear regressions on non-stationary time series data, which Nobel laureate Clive Granger and Paul Newbold showed to be a dangerous approach that could produce spurious correlation, since standard detrending techniques can result in data that are still non-stationary. Granger's 19
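In practice, a pair of series like the spot/futures example can be tested with an Engle–Granger-style test; the sketch below uses the coint function from the statsmodels Python package on synthetic random walks sharing a stochastic trend (the data and parameters are illustrative only):

    # Sketch of an Engle-Granger cointegration test using statsmodels.
    import numpy as np
    from statsmodels.tsa.stattools import coint

    rng = np.random.default_rng(0)
    walk = np.cumsum(rng.normal(size=1000))        # shared stochastic trend, I(1)
    spot = walk + rng.normal(size=1000)            # spot price around the trend
    futures = 0.95 * walk + rng.normal(size=1000)  # futures tracking the same trend

    t_stat, p_value, _ = coint(spot, futures)
    print(p_value)  # small p-value -> reject "no cointegration"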
https://en.wikipedia.org/wiki/DNA%20fragmentation
DNA fragmentation is the separation or breaking of DNA strands into pieces. It can be done intentionally by laboratory personnel or by cells, or can occur spontaneously. Spontaneous or accidental DNA fragmentation is fragmentation that gradually accumulates in a cell. It can be measured by, e.g., the Comet assay or the TUNEL assay. Its main unit of measurement is the DNA Fragmentation Index (DFI). A DFI of 20% or more significantly reduces the success rates after ICSI. DNA fragmentation was first documented by Williamson in 1970 when he observed discrete oligomeric fragments occurring during cell death in primary neonatal liver cultures. He described the cytoplasmic DNA isolated from mouse liver cells after culture as characterized by DNA fragments with a molecular weight consisting of multiples of 135 kDa. This finding was consistent with the hypothesis that these DNA fragments were a specific degradation product of nuclear DNA. Intentional DNA fragmentation is often necessary prior to library construction or subcloning for DNA sequences. A variety of methods involving the mechanical breakage of DNA have been employed where DNA is fragmented by laboratory personnel. Such methods include sonication, needle shear, nebulisation, point-sink shearing and passage through a pressure cell. Restriction digest is the intentional laboratory breaking of DNA strands. It is an enzyme-based treatment used in biotechnology to cut DNA into smaller strands in order to study fragment length differences among individuals or for gene cloning. This method fragments DNA either by the simultaneous cleavage of both strands, or by generation of nicks on each strand of dsDNA to produce dsDNA breaks. Acoustic shearing is the transmission of high-frequency acoustic energy waves delivered to a DNA library. The transducer is bowl shaped so that the waves converge at the target of interest. Nebulization forces DNA through a small hole in a nebulizer unit, which results in the formation
https://en.wikipedia.org/wiki/Clock%20skew
Clock skew (sometimes called timing skew) is a phenomenon in synchronous digital circuit systems (such as computer systems) in which the same sourced clock signal arrives at different components at different times due to gate or, in more advanced semiconductor technology, wire signal propagation delay. The instantaneous difference between the readings of any two clocks is called their skew. The operation of most digital circuits is synchronized by a periodic signal known as a "clock" that dictates the sequence and pacing of the devices on the circuit. This clock is distributed from a single source to all the memory elements of the circuit, which for example could be registers or flip-flops. In a circuit using edge-triggered registers, when the clock edge or tick arrives at a register, the register transfers the register input to the register output, and these new output values flow through combinational logic to provide the values at register inputs for the next clock tick. Ideally, the input to each memory element reaches its final value in time for the next clock tick so that the behavior of the whole circuit can be predicted exactly. The maximum speed at which a system can run must account for the variance that occurs between the various elements of a circuit due to differences in physical composition, temperature, and path length. In a synchronous circuit, two registers, or flip-flops, are said to be "sequentially adjacent" if a logic path connects them. Given two sequentially adjacent registers Ri and Rj with clock arrival times at the source and destination register clock pins equal to TCi and TCj respectively, clock skew can be defined as Tskew = TCi − TCj. In circuit design Clock skew can be caused by many different things, such as wire-interconnect length, temperature variations, variation in intermediate devices, capacitive coupling, material imperfections, and differences in input capacitance on the clock inputs of devices using the clock. As the clock rate of a ci
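The following Python sketch shows how skew enters the setup-time budget between two sequentially adjacent registers; the parameter names and the sign convention (skew taken here as destination arrival time minus source arrival time, TCj − TCi) are our own choices, and conventions vary:

    # Sketch: skew in the setup-time constraint (times in nanoseconds).
    def min_clock_period(t_clk_to_q, t_logic_max, t_setup, skew):
        """Smallest safe clock period: T >= t_cq + t_logic + t_setup - skew.

        With skew = TCj - TCi, a destination clock that arrives later
        (positive skew) relaxes the setup constraint; negative skew
        tightens it and forces a slower clock.
        """
        return t_clk_to_q + t_logic_max + t_setup - skew

    print(min_clock_period(0.5, 3.0, 0.2, 0.0))   # 3.7 ns
    print(min_clock_period(0.5, 3.0, 0.2, -0.3))  # 4.0 ns: skew slows the clock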
https://en.wikipedia.org/wiki/Diffusion%20capacitance
Diffusion capacitance is the capacitance that occurs due to transport of charge carriers between two terminals of a device, for example, the diffusion of carriers from anode to cathode in a forward-biased diode or from emitter to base in a forward-biased junction of a transistor. In a semiconductor device with a current flowing through it (for example, an ongoing transport of charge by diffusion) at a particular moment there is necessarily some charge in the process of transit through the device. If the applied voltage changes to a different value and the current changes to a different value, a different amount of charge will be in transit in the new circumstances. The change in the amount of transiting charge divided by the change in the voltage causing it is the diffusion capacitance. The adjective "diffusion" is used because the original use of this term was for junction diodes, where the charge transport was via the diffusion mechanism. See Fick's laws of diffusion. To implement this notion quantitatively, at a particular moment in time let the voltage across the device be V. Now assume that the voltage changes with time slowly enough that at each moment the current is the same as the DC current that would flow at that voltage, say I = I(V) (the quasistatic approximation). Suppose further that the time to cross the device is the forward transit time τF. In this case the amount of charge in transit through the device at this particular moment, denoted Q, is given by Q = I(V)·τF. Consequently, the corresponding diffusion capacitance is Cdiff = dQ/dV = τF·dI(V)/dV. In the event the quasi-static approximation does not hold, that is, for very fast voltage changes occurring in times shorter than the transit time τF, the equations governing time-dependent transport in the device must be solved to find the charge in transit, for example the Boltzmann equation. That problem is a subject of continuing research under the topic of non-quasistatic effects. See Liu, and Gildenblat et al. References
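A minimal numeric sketch for a diode, assuming the ideal-diode law I(V) = Is·exp(V/VT), so that Cdiff = τF·dI/dV = τF·I/VT (the component values below are illustrative assumptions):

    # Sketch: diffusion capacitance of an ideal diode.
    import math

    def diffusion_capacitance(v: float, i_s: float = 1e-14,
                              v_t: float = 0.02585, tau_f: float = 1e-9) -> float:
        """C_diff = tau_F * dI/dV, with dI/dV = I/V_T for I = Is*exp(V/V_T)."""
        current = i_s * math.exp(v / v_t)
        return tau_f * current / v_t

    print(diffusion_capacitance(0.7))  # farads, at 0.7 V forward bias (~2e-10 F)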
https://en.wikipedia.org/wiki/Cartesian%20tensor
In geometry and linear algebra, a Cartesian tensor uses an orthonormal basis to represent a tensor in a Euclidean space in the form of components. Converting a tensor's components from one such basis to another is done through an orthogonal transformation. The most familiar coordinate systems are the two-dimensional and three-dimensional Cartesian coordinate systems. Cartesian tensors may be used with any Euclidean space, or more technically, any finite-dimensional vector space over the field of real numbers that has an inner product. Use of Cartesian tensors occurs in physics and engineering, such as with the Cauchy stress tensor and the moment of inertia tensor in rigid body dynamics. Sometimes general curvilinear coordinates are convenient, as in high-deformation continuum mechanics, or even necessary, as in general relativity. While orthonormal bases may be found for some such coordinate systems (e.g. tangent to spherical coordinates), Cartesian tensors may provide considerable simplification for applications in which rotations of rectilinear coordinate axes suffice. The transformation is a passive transformation, since the coordinates are changed and not the physical system. Cartesian basis and related terminology Vectors in three dimensions In 3D Euclidean space, R³, the standard basis is ex, ey, ez. Each basis vector points along the x-, y-, and z-axes, and the vectors are all unit vectors (or normalized), so the basis is orthonormal. Throughout, when referring to Cartesian coordinates in three dimensions, a right-handed system is assumed, as this is much more common than a left-handed system in practice; see orientation (vector space) for details. For Cartesian tensors of order 1, a Cartesian vector a can be written algebraically as a linear combination of the basis vectors ex, ey, ez: a = ax ex + ay ey + az ez, where the coordinates of the vector with respect to the Cartesian basis are denoted ax, ay, az. It is common and helpful to display the basis vectors as column vectors when we have a coor
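A short numpy sketch of this passive transformation: vector components change as a' = Ra under an orthogonal matrix R, while a second-order tensor's components change as T' = RTRᵀ (the specific rotation below is our own example):

    # Sketch: component transformation under an orthogonal change of basis
    # (a passive rotation about the z-axis).
    import numpy as np

    theta = np.pi / 6
    R = np.array([[ np.cos(theta), np.sin(theta), 0],
                  [-np.sin(theta), np.cos(theta), 0],
                  [0, 0, 1]])

    assert np.allclose(R.T @ R, np.eye(3))  # orthogonal: R^T R = I

    a = np.array([1.0, 2.0, 3.0])  # components in the old basis
    a_new = R @ a                  # components in the rotated basis

    T = np.diag([1.0, 2.0, 3.0])   # a second-order Cartesian tensor
    T_new = R @ T @ R.T            # transforms as T' = R T R^T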
https://en.wikipedia.org/wiki/Silverman%E2%80%93Toeplitz%20theorem
In mathematics, the Silverman–Toeplitz theorem, first proved by Otto Toeplitz, is a result in summability theory characterizing matrix summability methods that are regular. A regular matrix summability method is a matrix transformation of a convergent sequence which preserves the limit. An infinite matrix (ai,j) with complex-valued entries defines a regular summability method if and only if it satisfies all of the following properties: limi→∞ ai,j = 0 for each fixed j (every column converges to zero); limi→∞ Σj ai,j = 1 (the row sums converge to one); and supi Σj |ai,j| < ∞ (the absolute row sums are uniformly bounded). An example is Cesàro summation, a matrix summability method with ai,j = 1/i for j ≤ i and ai,j = 0 otherwise. References Citations Further reading Toeplitz, Otto (1911), "Über allgemeine lineare Mittelbildungen", Prace mat.-fiz., 22, 113–118 (the original paper in German). Silverman, Louis Lazarus (1913), "On the definition of the sum of a divergent series", University of Missouri Studies, Math. Series I, 1–96. Theorems in analysis Summability methods Summability theory
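The three conditions can be checked numerically for the Cesàro matrix; the following Python sketch (a finite truncation, so only suggestive) also confirms that the Cesàro means of a convergent sequence approach the same limit:

    # Sketch: the Cesaro matrix a[i][j] = 1/(i+1) for j <= i (0-indexed)
    # against the three Silverman-Toeplitz conditions.
    import numpy as np

    n = 500
    A = np.tril(np.ones((n, n))) / np.arange(1, n + 1)[:, None]

    print(A[:, 3][-1])                  # column entries -> 0 as i grows
    print(A.sum(axis=1)[-1])            # row sums -> 1
    print(np.abs(A).sum(axis=1).max())  # absolute row sums bounded (= 1 here)

    s = 1.0 / np.arange(1, n + 1) + 2.0  # a sequence converging to 2
    print((A @ s)[-1])                   # Cesaro means also approach 2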
https://en.wikipedia.org/wiki/Orifice%20plate
An orifice plate is a device used for measuring flow rate, for reducing pressure or for restricting flow (in the latter two cases it is often called a restriction plate). Description An orifice plate is a thin plate with a hole in it, which is usually placed in a pipe. When a fluid (whether liquid or gaseous) passes through the orifice, its pressure builds up slightly upstream of the orifice but as the fluid is forced to converge to pass through the hole, the velocity increases and the fluid pressure decreases. A little downstream of the orifice the flow reaches its point of maximum convergence, the vena contracta, where the velocity reaches its maximum and the pressure reaches its minimum. Beyond that, the flow expands, the velocity falls and the pressure increases. By measuring the difference in fluid pressure across tappings upstream and downstream of the plate, the flow rate can be obtained from Bernoulli's equation using coefficients established from extensive research. In general, the mass flow rate ṁ (in kg/s) across an orifice can be described as ṁ = (Cd/√(1 − β⁴))·ε·(π/4)·d²·√(2·ρ₁·Δp), where Cd is the discharge coefficient, β = d/D is the ratio of orifice diameter d to pipe diameter D, ε is the expansibility factor, ρ₁ is the upstream fluid density, and Δp is the differential pressure across the plate. The overall pressure loss in the pipe due to an orifice plate is lower than the measured differential pressure, typically by a factor that depends on the diameter ratio β. Application Orifice plates are most commonly used to measure flow rates in pipes, when the fluid is single-phase (rather than being a mixture of gases and liquids, or of liquids and solids) and well-mixed, the flow is continuous rather than pulsating, the fluid occupies the entire pipe (precluding silt or trapped gas), the flow profile is even and well-developed and the fluid and flow rate meet certain other conditions. Under these circumstances and when the orifice plate is constructed and installed according to appropriate standards, the flow rate can easily be determined using published formulae based on substantial research and published in industry, national and international standards. An orifice plate is called a calibrated orifice if it has been calibrated wit
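A minimal Python sketch of the mass-flow formula above; the numbers are illustrative, and a real installation would take Cd and ε from a standard such as ISO 5167 rather than the fixed guess used here:

    # Sketch: single-phase orifice mass flow from the equation above.
    import math

    def orifice_mass_flow(cd, d, D, rho, dp, epsilon=1.0):
        """m_dot = Cd/sqrt(1 - beta^4) * eps * (pi/4) d^2 * sqrt(2 rho dp)."""
        beta = d / D
        return (cd / math.sqrt(1 - beta**4)) * epsilon \
            * (math.pi / 4) * d**2 * math.sqrt(2 * rho * dp)

    # Water (1000 kg/m^3), 50 mm orifice in a 100 mm pipe, 25 kPa drop:
    print(orifice_mass_flow(cd=0.61, d=0.05, D=0.10, rho=1000.0, dp=25e3))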
https://en.wikipedia.org/wiki/Triangle%20of%20U
The triangle of U is a theory about the evolution and relationships among the six most commonly known members of the plant genus Brassica. The theory states that the genomes of three ancestral diploid species of Brassica combined to create three common tetraploid vegetable and oilseed crop species. It has since been confirmed by studies of DNA and proteins. The theory is summarized by a triangular diagram that shows the three ancestral genomes, denoted by AA, BB, and CC, at the corners of the triangle, and the three derived ones, denoted by AABB, AACC, and BBCC, along its sides. The theory was first published in 1935 by Woo Jang-choon, a Korean-Japanese botanist (writing under the Japanized name "U Nagaharu"). Woo made synthetic hybrids between the diploid and tetraploid species and examined how the chromosomes paired in the resulting triploids. U's theory The six species are Brassica rapa (AA, 2n = 2x = 20), Brassica nigra (BB, 2n = 2x = 16), Brassica oleracea (CC, 2n = 2x = 18), Brassica juncea (AABB, 2n = 4x = 36), Brassica napus (AACC, 2n = 4x = 38), and Brassica carinata (BBCC, 2n = 4x = 34). The code in the "Chr. count" column specifies the total number of chromosomes in each somatic cell, how it relates to the number of chromosomes in each full genome set (which is also the number found in the pollen or ovule), and the number of chromosomes in each component genome. For example, each somatic cell of the tetraploid species Brassica napus, with letter tags AACC and count "2n = 4x = 38", contains two copies of the A genome, each with 10 chromosomes, and two copies of the C genome, each with 9 chromosomes, which is 38 chromosomes in total. That is two full genome sets (one A and one C), hence "2n = 38", which means "n = 19" (the number of chromosomes in each gamete). It is also four component genomes (two A and two C), hence "4x = 38". The three diploid species exist in nature and can easily interbreed because they are closely related. This interspecific breeding allowed for the creation of three new species of tetraploid Brassica. (Critics, however, consider the geographical separation too large.) These are said to be allotetraploid (containing four genomes from two or more different
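The chromosome bookkeeping behind the triangle can be summarized in a few lines of Python (haploid counts A = 10, B = 8, C = 9, as above):

    # Sketch: chromosome arithmetic of the triangle of U.
    GENOME_CHROMOSOMES = {"A": 10, "B": 8, "C": 9}

    def somatic_count(genomes: str) -> int:
        """Total chromosomes in a somatic cell, e.g. 'AACC' -> 38."""
        return sum(GENOME_CHROMOSOMES[g] for g in genomes)

    assert somatic_count("AACC") == 38  # Brassica napus
    assert somatic_count("AABB") == 36  # Brassica juncea
    assert somatic_count("BBCC") == 34  # Brassica carinata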
https://en.wikipedia.org/wiki/Identity%20document%20forgery
Identity document forgery is the process by which identity documents issued by governing bodies are copied and/or modified by persons not authorized to create such documents or engage in such modifications, for the purpose of deceiving those who would view the documents about the identity or status of the bearer. The term also encompasses the activity of acquiring identity documents from legitimate bodies by falsifying the required supporting documentation in order to create the desired identity. Identity documents differ from other credentials in that they are intended to be usable by only the person holding the card. Unlike other credentials, they may be used to restrict the activities of the holder as well as to expand them. Documents that have been forged in this way include driver's licenses (historically forged or altered as an attempt to conceal the fact that persons desiring to purchase alcohol are under the legal drinking age); birth certificates and Social Security cards (likely used in identity theft schemes or to defraud the government); and passports (used to evade restrictions on entry into a particular country). At the beginning of 2010, there were 11 million stolen or lost passports listed in the global database of Interpol. Such falsified documents can be used for identity theft, age deception, illegal immigration, organized crime, and espionage. Use scenarios, forgery techniques and security countermeasures A distinction needs to be made between the different uses of an identity document. In some cases, the fake ID may only have to pass a cursory inspection, such as flashing a plastic ID card for a security guard. At the other extreme, a document may have to resist scrutiny by a trained document examiner, who may be equipped with technical tools for verifying biometrics and reading hidden security features within the card. To make forgery more difficult, most modern IDs contain numerous security features that require specialised and expensive
https://en.wikipedia.org/wiki/Miracle%20Planet
Miracle Planet is a six-part documentary series, co-produced by Japan's NHK and the National Film Board of Canada (NFB), narrated by Christopher Plummer (Seiko Nakajo in the original Japanese), which tells the 4.6-billion-year-old story of how life has evolved from its humble beginnings to the diversity of living creatures today. It is a remake of the 1989 series produced by NHK and KCTS. Filmed around the world and based upon the most recent scientific findings, Miracle Planet combines location footage and interviews with leading scientists, along with computer animation, to depict the cataclysmic events that have shaped our planet and all of the life-forms within it. The five standard episodes depict the evolution of life on Earth in perspective with our place in the universe - from the simplest microbes to the complexity and diversity that is found on the planet today. The entire 5-hour series Miracle Planet aired on Discovery Channel in Canada on April 22, 2005 as an Earth Day special. The music was composed and orchestrated by Daniel Toussaint. Episodes (Fifty minutes each) The Violent Past This episode chronicles the early stages of the Earth's history, from formation to the mid-Precambrian. It discusses the beginning of life on earth, mentioning panspermia as a possible candidate. It then simulates a collision with an asteroid similar to those that formed the protoplanets. Snowball Earth This episode mainly focuses on the two Snowball Earth events in the mid-Precambrian period, and the effect they had on life. It simulates a similar occurrence in the modern day. Animals included: Pteridinium, Yorgia, Kimberella, Arandaspis, Trilobites, Chain Coral (fossil) and Dicranurus (fossil) New Frontiers Full title: "The Evolution of Our World: New Frontiers". This episode follows our ancestors from the shallow saltwater seas to the freshwater rivers, then discusses the colonization of land. Mainly Devonian and Carboniferous. Animals
https://en.wikipedia.org/wiki/InnoDB
InnoDB is a storage engine for the database management system MySQL and MariaDB. Since the release of MySQL 5.5.5 in 2010, it replaced MyISAM as MySQL's default table type. It provides the standard ACID-compliant transaction features, along with foreign key support (Declarative Referential Integrity). It is included as standard in most binaries distributed by MySQL AB, the exception being some OEM versions. Description InnoDB became a product of Oracle Corporation after its acquisition of the Finland-based company Innobase in October 2005. The software is dual licensed; it is distributed under the GNU General Public License, but can also be licensed to parties wishing to combine InnoDB in proprietary software. InnoDB supports: Both SQL and XA transactions Tablespaces Foreign keys Full text search indexes, since MySQL 5.6 (February 2013) and MariaDB 10.0 Spatial operations, following the OpenGIS standard Virtual columns, in MariaDB See also Comparison of MySQL database engines References External links Mysqltutorial.org, InnoDB and other table types in MySQL The InnoDB Storage Engine, in the MySQL manual. Database engines MySQL MariaDB Oracle software
https://en.wikipedia.org/wiki/The%20Swallow%27s%20Tail
The Swallow's Tail — Series of Catastrophes was Salvador Dalí's last painting. It was completed in May 1983, as the final part of a series based on the mathematical catastrophe theory of René Thom. Thom suggested that in four-dimensional phenomena, there are seven possible equilibrium surfaces, and therefore seven possible discontinuities, or "elementary catastrophes": fold, cusp, swallowtail, butterfly, hyperbolic umbilic, elliptic umbilic, and parabolic umbilic. "The shape of Dalí's Swallow's Tail is taken directly from Thom's four-dimensional graph of the same title, combined with a second catastrophe graph, the s-curve that Thom dubbed, 'the cusp'. Thom's model is presented alongside the elegant curves of a cello and the instrument's f-holes, which, especially as they lack the small pointed side-cuts of a traditional f-hole, equally connote the mathematical symbol for an integral in calculus: ∫." In his 1979 speech, Gala, Velázquez and the Golden Fleece, presented upon his 1979 induction into the prestigious Académie des Beaux-Arts of the Institut de France, Dalí described Thom's theory of catastrophes as "the most beautiful aesthetic theory in the world". He also recollected his first and only meeting with René Thom, at which Thom purportedly told Dalí that he was studying tectonic plates; this provoked Dalí to question Thom about the railway station at Perpignan, France (near the Spanish border), which the artist had declared in the 1960s to be the center of the universe. Thom reportedly replied, "I can assure you that Spain pivoted precisely — not in the area of — but exactly there where the Railway Station in Perpignan stands today". Dalí was immediately enraptured by Thom's statement, influencing his painting Topological Abduction of Europe — Homage to René Thom, the lower left corner of which features an equation closely linked to the "swallow's tail": an illustration of the graph, and the term queue d'aronde. The seismic fracture that transver
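For reference, the swallowtail catastrophe that gives the painting its title is, in the standard notation of catastrophe theory (a general fact about Thom's classification, not taken from Dalí's sources), the singularity of the potential

    V(x) = x^5 + a x^3 + b x^2 + c x

with one state variable x and three control parameters a, b, c; the swallowtail surface is the set of (a, b, c) at which the number of critical points of V changes.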
https://en.wikipedia.org/wiki/Medion
Medion AG is a German consumer electronics company, and a subsidiary of Chinese multinational technology company Lenovo. The company operates in Europe, Turkey, Asia-Pacific, United States and Australia regions. The company's main products are computers and notebooks, but also smartphones, tablet computers, digital cameras, TVs, refrigerators, toasters, and fitness equipment. Products Medion products in Australia and the United States are available exclusively at Aldi and Super Billing Computers, with some products (such as DVD players) branded as Tevion (Aldi's own brand). Some of Medion's former laptops were sold in North America at Best Buy stores and were sold in Canada at Future Shop as Cicero Computers. In the United Kingdom, Medion products, including laptop and desktop computers, have been sold by Aldi, Sainsbury's, Somerfield, Woolworths, and Tesco, as well as being sold direct through Medion's own Web site and various other online retailers. Medion launched Aldi Talk in Germany and MEDIONMobile as ALDIMobile in Australia in an agreement with Aldi Stores. Medion Australia Pty Limited remains the owner of ALDIMobile. In China, Medion products are sold under the Lenovo brand, but not all Lenovo-branded products are Medion products. In Germany, Medion launched a cloud gaming service in partnership with Gamestream in April 2020. Sponsorship Medion sponsored Sahara Force India through driver Adrian Sutil in Formula One from 2007 to 2011, until Sutil left the team. In 2013 Sutil returned to Sahara Force India, and Medion returned as a sponsor. Sutil and Medion left the sport at the end of 2013. Medion brands Other brands used on Medion products: Cybercom Cybermaxx Life Lifetec Micromaxx Essentiel b Ordissimo ERAZER PEAQ QUIGG These Medion products can be recognized by the serial number starting with "MD" or "LT". Acquisition On 1 June 2011, the Chinese multinational Lenovo Group (LNVGY) announced plans to acquire Medion AG. Since Au
https://en.wikipedia.org/wiki/Subclass%20%28set%20theory%29
In set theory and its applications throughout mathematics, a subclass is a class contained in some other class in the same way that a subset is a set contained in some other set. That is, given classes A and B, A is a subclass of B if and only if every member of A is also a member of B. If A and B are sets, then of course A is also a subset of B. In fact, when using a definition of classes that requires them to be first-order definable, it is enough that B be a set; the axiom of specification essentially says that A must then also be a set. As with subsets, the empty set is a subclass of every class, and any class is a subclass of itself. But additionally, every class is a subclass of the class of all sets. Accordingly, the subclass relation makes the collection of all classes into a Boolean lattice, which the subset relation does not do for the collection of all sets. Instead, the collection of all sets is an ideal in the collection of all classes. (Of course, the collection of all classes is something larger than even a class!) References Set theory
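In symbols, the defining condition reads (our rendering, in LaTeX notation):

    A \subseteq B \iff \forall x\, (x \in A \rightarrow x \in B)

and the remark about specification says that when B is a set and A is first-order definable, A = \{x \in B : \varphi(x)\} is itself a set.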
https://en.wikipedia.org/wiki/Colatitude
In a spherical coordinate system, a colatitude is the complementary angle of a given latitude, i.e. the difference between a right angle and the latitude. Here Southern latitudes are defined to be negative, and as a result the colatitude is a non-negative quantity, ranging from zero at the North pole to 180° at the South pole. The colatitude corresponds to the conventional 3D polar angle in spherical coordinates, as opposed to the latitude as used in cartography. Examples Latitude and colatitude sum to 90°. Astronomical use The colatitude is most useful in astronomy because it refers to the zenith distance of the celestial poles. For example, at latitude 42°N, Polaris (approximately on the North celestial pole) has an altitude of 42°, so the distance from the zenith (overhead point) to Polaris is 90° − 42° = 48°. Adding the declination of a star to the observer's colatitude gives the maximum altitude of that star (its angle from the horizon at culmination or upper transit). For example, if Alpha Centauri is seen with a maximum altitude of 72° north (108° south) and its declination is known (60°S), then it can be determined that the observer's colatitude is 108° − 60° = 48° (i.e. their latitude is 42°S). Stars whose declinations exceed the observer's colatitude are called circumpolar because they will never set as seen from that latitude. If an object's declination is further south on the celestial sphere than the value of the colatitude, then it will never be seen from that location. For example, Alpha Centauri will always be visible at night from Perth, Western Australia, because the colatitude is 90° − 32° = 58°, and 60° is greater than 58°; on the other hand, the star will never rise in Juneau because its declination of −60° is less than −32° (the negation of Juneau's colatitude). Additionally, colatitude is used as part of the Schwarzschild metric in general relativity. References Spherical geometry Orientation (geometry)
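These rules are easy to mechanize; the following Python sketch (sign conventions: north latitude and declination positive; the function names are our own) reproduces the Perth and Juneau examples:

    # Sketch: star visibility from the colatitude rules described above.
    def colatitude(latitude: float) -> float:
        return 90.0 - latitude

    def circumpolar(latitude: float, declination: float) -> bool:
        """Never sets: declination beyond the colatitude on the same side."""
        if latitude >= 0:
            return declination > colatitude(latitude)
        return declination < -colatitude(-latitude)

    def never_visible(latitude: float, declination: float) -> bool:
        """Never rises: declination beyond the colatitude on the far side."""
        if latitude >= 0:
            return declination < -colatitude(latitude)
        return declination > colatitude(-latitude)

    print(circumpolar(-32.0, -60.0))   # True: Alpha Centauri from Perth
    print(never_visible(58.0, -60.0))  # True: Alpha Centauri from Juneau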
https://en.wikipedia.org/wiki/Glazier
A glazier is a tradesperson responsible for cutting, installing, and removing glass (and materials used as substitutes for glass, such as some plastics). They also refer to blueprints to figure out the size, shape, and location of the glass in the building. They may have to consider the type and size of scaffolding they need to stand on to fit and install the glass. Glaziers may work with glass in various surfaces and settings, such as cutting and installing windows, doors, shower doors, skylights, storefronts, display cases, mirrors, facades, interior walls, ceilings, and tabletops. Duties and tools The Occupational Outlook Handbook of the U.S. Department of Labor lists the following as typical tasks for a glazier: Follow blueprints or specifications Remove any old or broken glass before installing replacement glass Cut glass to the specified size and shape Make or install sashes or moldings for glass installation Fasten glass into sashes or frames with clips, moldings, or other types of fasteners Add weather seal or putty around pane edges to seal joints. The National Occupational Analysis recognized by the Canadian Council of Directors of Apprenticeship separates the trade into 5 blocks of skills, each with a list of skills, and a list of tasks and subtasks a journeyman is expected to be able to accomplish: Block A – Occupational Skills Block B – Commercial Window and Door Systems Block C – Residential Window and Door Systems Block D – Specialty Glass and Products Block E – Servicing Tools used by glaziers "include cutting boards, glass-cutting blades, straightedges, glazing knives, saws, drills, grinders, putty, scrapers, sandpaper, sanding blocks, 5-in-1's, respirator/dust mask and glazing compounds." Some glaziers work specifically with glass in motor vehicles; others work specifically with the safety glass used in aircraft. Others repair old antique windows and doors that need glass replaced. Education and training Glaziers are typically educate
https://en.wikipedia.org/wiki/Amyl%20acetate
Amyl acetate (pentyl acetate) is an organic compound and an ester with the chemical formula CH3COO[CH2]4CH3 and a molecular weight of 130.19 g/mol. It is colorless and has a scent similar to bananas and apples. The compound is the condensation product of acetic acid and 1-pentanol. However, esters formed from other pentanol isomers (amyl alcohols), or mixtures of pentanols, are often referred to as amyl acetate. The symptoms of exposure to amyl acetate in humans are dermatitis, central nervous system depression, narcosis and irritation to the eyes and nose. Uses Amyl acetate is a solvent for paints, lacquers, and liquid bandages, and a flavorant. It also fuels the Hefner lamp and is used in the fermentative production of penicillin. See also Isoamyl acetate, also known as banana oil. Esters, organic molecules with the same functional group References Flavors Acetate esters
https://en.wikipedia.org/wiki/Percent-encoding
URL encoding, officially known as percent-encoding, is a method to encode arbitrary data in a Uniform Resource Identifier (URI) using only the limited US-ASCII characters legal within a URI. Although it is known as URL encoding, it is also used more generally within the main Uniform Resource Identifier (URI) set, which includes both Uniform Resource Locator (URL) and Uniform Resource Name (URN). As such, it is also used in the preparation of data of the application/x-www-form-urlencoded media type, as is often used in the submission of HTML form data in HTTP requests. Percent-encoding in a URI Types of URI characters The characters allowed in a URI are either reserved or unreserved (or a percent character as part of a percent-encoding). Reserved characters are those characters that sometimes have special meaning. For example, forward slash characters are used to separate different parts of a URL (or more generally, a URI). Unreserved characters have no such meanings. Using percent-encoding, reserved characters are represented using special character sequences. The sets of reserved and unreserved characters and the circumstances under which certain reserved characters have special meaning have changed slightly with each revision of specifications that govern URIs and URI schemes. Other characters in a URI must be percent-encoded. Reserved characters When a character from the reserved set (a "reserved character") has a special meaning (a "reserved purpose") in a certain context, and a URI scheme says that it is necessary to use that character for some other purpose, then the character must be percent-encoded. Percent-encoding a reserved character involves converting the character to its corresponding byte value in ASCII and then representing that value as a pair of hexadecimal digits (if there is a single hex digit, a leading zero is added). The digits, preceded by a percent sign (%) as an escape character, are then used in the URI in place of the reserved chara
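For illustration, Python's standard library implements both directions of this conversion; the examples below (our own) show reserved characters being encoded as a percent sign followed by two hexadecimal digits:

    # Sketch: percent-encoding and decoding with Python's standard library.
    from urllib.parse import quote, unquote, urlencode

    print(quote("/path with spaces/"))  # '/path%20with%20spaces/' ('/' safe by default)
    print(quote("a/b?c=d", safe=""))    # 'a%2Fb%3Fc%3Dd' (reserved chars encoded too)
    print(unquote("caf%C3%A9"))         # 'café' (UTF-8 bytes decoded)

    # application/x-www-form-urlencoded, as used for HTML form data:
    print(urlencode({"q": "percent encoding", "lang": "en"}))
    # 'q=percent+encoding&lang=en'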
https://en.wikipedia.org/wiki/Intermittent%20rhythmic%20delta%20activity
Intermittent rhythmic delta activity (IRDA) is a type of brain wave abnormality found in electroencephalograms (EEG). Types It can be classified based on the area of brain it originates from: frontal (FIRDA) occipital (OIRDA) temporal (TIRDA) It can also be Unilateral Bilateral Cause It can have a number of different causes, some benign or unknown, but it is also commonly associated with lesions, tumors, and encephalopathies. Association with periventricular white matter disease and cortical atrophy has been documented, and IRDA is more likely to show up during acute metabolic derangements such as uremia and hyperglycemia. Diagnosis References Electroencephalography Neuroscience
https://en.wikipedia.org/wiki/RANAP
In telecommunications networks, RANAP (Radio Access Network Application Part) is a protocol specified by 3GPP in TS 25.413 and used in UMTS for signaling between the Core Network, which can be a MSC or SGSN, and the UTRAN. RANAP is carried over the Iu interface. The RANAP signalling protocol resides in the control plane of the radio network layer of the Iu interface in the UMTS (Universal Mobile Telecommunication System) protocol stack. The Iu interface is the interface between the RNC (Radio Network Controller) and the CN (Core Network). N.B.: for Iu-PS transport, RANAP is carried over SCTP when an IP-based interface is used. RANAP handles signaling on the Iu-PS interface (between the RNC and the 3G SGSN) and on the Iu-CS interface (between the RNC and the 3G MSC). It also provides the signaling channel to transparently pass messages between the User Equipment (UE) and the CN. In LTE, RANAP has been replaced by S1AP. In SA (standalone) installations of 5G, S1AP is replaced by NGAP. Functionality Over the Iu interface, RANAP is used to: - Facilitate general UTRAN procedures from the core network such as paging - Separate each User Equipment (UE) on protocol level for mobile-specific signaling management - Transfer transparently non-access signaling - Request and manage various types of UTRAN radio access bearers - Perform the Serving Radio Network Subsystem (SRNS) relocation See also GPRS Tunnelling Protocol (GTP) References 3GPP standards Mobile telecommunications standards Network protocols
https://en.wikipedia.org/wiki/DOM%20Inspector
DOM Inspector (DOMi) is a web developer tool created by Joe Hewitt and was originally included in the Mozilla Application Suite as well as versions of Mozilla Firefox prior to Firefox 3. It is now available as an extension for Firefox and SeaMonkey. Its main purpose is to inspect and edit the Document Object Model (DOM) tree of HTML and XML-based documents. A DOM node can be selected from the tree structure, or by clicking on the browser chrome. As well as the DOM tree viewer, other viewers are also available, including Box Model, XBL Bindings, CSS Rules, Style Sheets, Computed Style, JavaScript Object, as well as a number of viewers for document and application accessibility. By default, the DOM Inspector highlights a newly selected non-attribute node with a red flashing border. Similar tools exist in other browsers, e.g., Opera's Dragonfly, Safari's Web Inspector, the Internet Explorer Developer Toolbar, and Google Chrome's Developer Tools. See also Firebug, another web development extension more recently created by Joe Hewitt External links DOM Inspector extension for Firefox DOM Inspector at Mozilla Developer Center XPather - A DOMi extension that adds XPath support Software testing tools Firefox Firefox extensions merged to Firefox
https://en.wikipedia.org/wiki/Gerald%20Weinberg
Gerald Marvin Weinberg (October 27, 1933 – August 7, 2018) was an American computer scientist, author and teacher of the psychology and anthropology of computer software development. His best-known books are The Psychology of Computer Programming and An Introduction to General Systems Thinking. Biography Gerald Weinberg was born and raised in Chicago. He attended Omaha Central High School in Omaha, Nebraska. In 1963 he received a PhD in Communication Sciences from the University of Michigan. Weinberg started working in the computing business at IBM in 1956 at the Federal Systems Division in Washington, where he participated as Manager of Operating Systems Development in Project Mercury (1959–1963), which aimed to put a human in orbit around the Earth. In 1960 he published one of his first papers. From 1969 he was a consultant and principal at Weinberg & Weinberg, where he conducted workshops such as the AYE Conference, the Problem Solving Leadership workshop (from 1974), and workshops on the Fieldstone Method. Weinberg was also an author at Dorset House Publishing from 1970, a consultant at Microsoft from 1988, and moderator of the Shape Forum from 1993. Weinberg was a visiting professor at the University of Nebraska-Lincoln, Binghamton University, and Columbia University. He was a member of the Society for General Systems Research from the late 1950s. He was also a founding member of the IEEE Transactions on Software Engineering, a member of the Southwest Writers and the Oregon Writers Network, and a keynote speaker at many software development conferences. In 1993 he won the J.-D. Warnier Prize for Excellence in Information Sciences, in 2000 the Stevens Award for Contributions to Software Engineering, in 2010 the Software Test Professionals' first annual Luminary Award, and in 2013 the European Testing Excellence Award at the EuroSTAR Conference. Weinberg died on August 7, 2018. Work His most well-known books are The Psychology
https://en.wikipedia.org/wiki/Bijection%2C%20injection%20and%20surjection
In mathematics, injections, surjections, and bijections are classes of functions distinguished by the manner in which arguments (input expressions from the domain) and images (output expressions from the codomain) are related or mapped to each other. A function maps elements from its domain to elements in its codomain. Given a function f : X → Y: The function is injective, or one-to-one, if each element of the codomain is mapped to by at most one element of the domain, or equivalently, if distinct elements of the domain map to distinct elements in the codomain. An injective function is also called an injection. Notationally: ∀a, b ∈ X, f(a) = f(b) ⇒ a = b, or, equivalently (using logical transposition), ∀a, b ∈ X, a ≠ b ⇒ f(a) ≠ f(b). The function is surjective, or onto, if each element of the codomain is mapped to by at least one element of the domain. That is, the image and the codomain of the function are equal. A surjective function is a surjection. Notationally: ∀y ∈ Y, ∃x ∈ X such that y = f(x). The function is bijective (one-to-one and onto, one-to-one correspondence, or invertible) if each element of the codomain is mapped to by exactly one element of the domain. That is, the function is both injective and surjective. A bijective function is also called a bijection. That is, combining the definitions of injective and surjective, ∀y ∈ Y, ∃!x ∈ X such that y = f(x), where ∃! means "there exists exactly one x". In any case (for any function), the following holds: An injective function need not be surjective (not all elements of the codomain may be associated with arguments), and a surjective function need not be injective (some images may be associated with more than one argument). The four possible combinations of injective and surjective features are illustrated in the adjacent diagrams. Injection A function is injective (one-to-one) if each possible element of the codomain is mapped to by at most one argument. Equivalently, a function is injective if it maps distinct arguments to distinct images. An injective function is an injection. The formal definition is the following. The function is i
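For finite sets these definitions can be checked directly; the following Python sketch (our own illustration) represents a function as a dict from domain elements to codomain elements:

    # Sketch: checking the three properties for a function on finite sets.
    def is_injective(f: dict) -> bool:
        """No two arguments share an image."""
        images = list(f.values())
        return len(images) == len(set(images))

    def is_surjective(f: dict, codomain: set) -> bool:
        """Every codomain element is hit: image equals codomain."""
        return set(f.values()) == codomain

    def is_bijective(f: dict, codomain: set) -> bool:
        return is_injective(f) and is_surjective(f, codomain)

    f = {1: "a", 2: "b", 3: "c"}
    print(is_bijective(f, {"a", "b", "c"}))        # True
    print(is_surjective(f, {"a", "b", "c", "d"}))  # False: "d" is never hit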
https://en.wikipedia.org/wiki/Saturation%20arithmetic
Saturation arithmetic is a version of arithmetic in which all operations, such as addition and multiplication, are limited to a fixed range between a minimum and maximum value. If the result of an operation is greater than the maximum, it is set ("clamped") to the maximum; if it is below the minimum, it is clamped to the minimum. The name comes from how the value becomes "saturated" once it reaches the extreme values; further additions to a maximum or subtractions from a minimum will not change the result. For example, if the valid range of values is from −100 to 100, the following saturating arithmetic operations produce the following values: 60 + 30 → 90. 60 + 43 → 100. (not the expected 103.) (60 + 43) − (75 + 25) → 0. (not the expected 3.) (100 − 100 → 0.) 10 × 11 → 100. (not the expected 110.) 99 × 99 → 100. (not the expected 9801.) 30 × (5 − 1) → 100. (not the expected 120.) (30 × 4 → 100.) (30 × 5) − (30 × 1) → 70. (not the expected 120. not the previous 100.) (100 − 30 → 70.) Here is another example for saturating subtraction when the valid range is from 0 to 100 instead: 30 − 60 → 0. (not the expected −30.) As can be seen from these examples, familiar properties like associativity and distributivity may fail in saturation arithmetic. This makes it unpleasant to deal with in abstract mathematics, but it has an important role to play in digital hardware and algorithms where values have maximum and minimum representable ranges. Modern use Typically, general-purpose microprocessors do not implement integer arithmetic operations using saturation arithmetic; instead, they use the easier-to-implement modular arithmetic, in which values exceeding the maximum value "wrap around" to the minimum value, like the hours on a clock passing from 12 to 1. In hardware, modular arithmetic with a minimum of zero and a maximum of r^n − 1, where r is the radix, can be implemented by simply discarding all but the lowest n digits. For binary hardware, which the vast majority of
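A one-line Python model of the clamping rule used in the examples above (range bounds chosen to match them):

    # Sketch: saturating addition clamped to a fixed range.
    def sat_add(a: int, b: int, lo: int = -100, hi: int = 100) -> int:
        return max(lo, min(hi, a + b))

    print(sat_add(60, 30))         # 90
    print(sat_add(60, 43))         # 100, clamped at the maximum
    print(sat_add(30, -60, lo=0))  # 0, clamped at the minimum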
https://en.wikipedia.org/wiki/Iterative%20closest%20point
Iterative closest point (ICP) is an algorithm employed to minimize the difference between two clouds of points. ICP is often used to reconstruct 2D or 3D surfaces from different scans, to localize robots and achieve optimal path planning (especially when wheel odometry is unreliable due to slippery terrain), to co-register bone models, etc. Overview The Iterative Closest Point algorithm keeps one point cloud, the reference or target, fixed, while transforming the other, the source, to best match the reference. The transformation (combination of translation and rotation) is iteratively estimated in order to minimize an error metric, typically the sum of squared differences between the coordinates of the matched pairs. ICP is one of the widely used algorithms in aligning three dimensional models given an initial guess of the rigid transformation required. The ICP algorithm was first introduced by Chen and Medioni, and Besl and McKay. The Iterative Closest Point algorithm contrasts with the Kabsch algorithm and other solutions to the orthogonal Procrustes problem in that the Kabsch algorithm requires correspondence between point sets as an input, whereas Iterative Closest Point treats correspondence as a variable to be estimated. Inputs: reference and source point clouds, initial estimation of the transformation to align the source to the reference (optional), criteria for stopping the iterations. Output: refined transformation. Essentially, the algorithm steps are: For each point (from the whole set of vertices usually referred to as dense or a selection of pairs of vertices from each model) in the source point cloud, match the closest point in the reference point cloud (or a selected set). Estimate the combination of rotation and translation using a root mean square point to point distance metric minimization technique which will best align each source point to its match found in the previous step. This step may also involve weighting points and rejecting out
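A compact sketch of these steps in Python with numpy and scipy (our own simplification: no point weighting or outlier rejection, and a fixed iteration count instead of a convergence test):

    # Sketch of the basic ICP loop: closest-point matching, then a rigid
    # least-squares fit via SVD (the Kabsch/Procrustes step).
    import numpy as np
    from scipy.spatial import cKDTree

    def icp(source, reference, iterations=20):
        """Align source (Nx3) to reference (Mx3); returns R, t, aligned points."""
        src = source.copy()
        R_total, t_total = np.eye(3), np.zeros(3)
        tree = cKDTree(reference)
        for _ in range(iterations):
            # 1. Match each source point to its closest reference point.
            _, idx = tree.query(src)
            matched = reference[idx]
            # 2. Best rigid transform between the matched pairs.
            mu_s, mu_r = src.mean(axis=0), matched.mean(axis=0)
            H = (src - mu_s).T @ (matched - mu_r)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T          # D guards against a reflection
            t = mu_r - R @ mu_s
            # 3. Apply the transform and iterate.
            src = src @ R.T + t
            R_total, t_total = R @ R_total, R @ t_total + t
        return R_total, t_total, src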
https://en.wikipedia.org/wiki/Akilia
Akilia Island is an island in southwestern Greenland, about 22 kilometers south of Nuuk. Akilia is the location of a rock formation that has been proposed to contain the oldest known sedimentary rocks on Earth, and perhaps the oldest evidence of life on Earth. Geology The rocks in question are part of a metamorphosed supracrustal sequence located at the south-western tip of the island. The sequence has been dated as no younger than 3.85 billion years old - that is, in the Hadean eon - based on the age of an igneous band that cuts the rock. The supracrustal sequence contains layers rich in iron and silica, which are variously interpreted as banded iron formation, chemical sediments from submarine hot springs, or hydrothermal vein deposits. Carbon in the rock, present as graphite, shows low levels of carbon-13, which may suggest an origin as isotopically light organic matter derived from living organisms. However, this interpretation is complicated because of high-grade metamorphism that affected the Akilia rocks after their formation. The sedimentary origin, age and the carbon content of the rocks have been questioned. If the Akilia rocks do show evidence of life by 3.85 Ga, it would challenge models which suggest that Earth would not be hospitable to life at this time. See also List of islands of Greenland Origin of life References External links Scientists Disagree over How, When Life Began on Earth Information on Nuuk, Greenland Study Resolves Doubt About Origin Of Earth’s Oldest Rocks, Possibility Of Finding Traces Of Ancient Life UCLA scientists strengthen case for life more than 3.8 billion years ago Geology of Greenland Islands of Greenland Origin of life Archean
https://en.wikipedia.org/wiki/Pseudoextinction
Pseudoextinction (or phyletic extinction) of a species occurs when all members of the species are extinct, but members of a daughter species remain alive. The term pseudoextinction refers to the evolution of a species into a new form, with the resultant disappearance of the ancestral form. Pseudoextinction results in the relationship between ancestor and descendant still existing even though the ancestor species no longer exists. The classic example is that of the non-avian dinosaurs. While the non-avian dinosaurs of the Mesozoic died out, their descendants, birds, live on today. Many other families of bird-like dinosaurs also died out as the heirs of the dinosaurs continued to evolve, but because birds continue to thrive in the world today their ancestors are only pseudoextinct. Overview From a taxonomic perspective, pseudoextinction is "within an evolutionary lineage, the disappearance of one taxon caused by the appearance of the next." The pseudoextinction of a species can be arbitrary, simply resulting from a change in the naming of a species as it evolves from its ancestral form to its descendant form. Taxonomic pseudoextinction has to do with the disappearance of taxa that are categorized together by taxonomists. As they are just grouped together, their extinction is not reflected through lineage; therefore, unlike evolutionary pseudoextinction, taxonomic pseudoextinction does not alter the evolution of daughter species. From an evolutionary perspective, pseudoextinction entails the loss of a species as a result of the creation of a new one. As the primordial species evolves into its daughter species, either by anagenesis or cladogenesis, the ancestral species can be subject to extinction. Throughout the process of evolution, a taxon can disappear; in this case, pseudoextinction is considered an evolutionary event. From a genetic perspective, pseudoextinction is the "disappearance of a taxon by virtue of its being evolved by anagenesis into another taxon.
https://en.wikipedia.org/wiki/Motorola%2068881
The Motorola 68881 and Motorola 68882 are floating-point units (FPUs) used in some computer systems in conjunction with Motorola's 32-bit 68020 or 68030 microprocessors. These coprocessors are external chips, designed before floating point math became standard on CPUs. The Motorola 68881 was introduced in 1984. The 68882 is a higher performance version produced later. Overview The 68020 and 68030 CPUs were designed with the separate 68881 chip in mind. Their instruction sets reserved the "F-line" instructions – that is, all opcodes beginning with the hexadecimal digit "F" could either be forwarded to an external coprocessor or be used as "traps" which would throw an exception, handing control to the computer's operating system. If an FPU is not present in the system, the OS would then either call an FPU emulator to execute the instruction's equivalent using 68020 integer-based software code, return an error to the program, terminate the program, or crash and require a reboot. Architecture The 68881 has eight 80-bit data registers (a 64-bit mantissa plus a sign bit, and a 15-bit signed exponent). It allows seven different modes of numeric representation, including single-precision floating point, double-precision floating point, extended-precision floating point, integers as 8-, 16- and 32-bit quantities and a floating-point Binary-coded decimal format. The binary floating point formats are as defined by the IEEE 754 floating-point standard. It was designed specifically for floating-point math and is not a general-purpose CPU. For example, when an instruction requires any address calculations, the main CPU handles them before the 68881 takes control. The CPU/FPU pair are designed such that both can run at the same time. When the CPU encounters a 68881 instruction, it hands the FPU all operands needed for that instruction, and then the FPU releases the CPU to go on and execute the next instruction. 68882 The 68882 is an improved version of the 68881, with b
https://en.wikipedia.org/wiki/4B5B
In telecommunication, 4B5B is a form of data communications line code. 4B5B maps groups of 4 bits of data onto groups of 5 bits for transmission. These 5-bit words are pre-determined in a dictionary and they are chosen to ensure that there will be sufficient transitions in the line state to produce a self-clocking signal. A collateral effect of the code is that 25% more bits are needed to send the same information. An alternative to using 4B5B coding is to use a scrambler. Some systems use scramblers in conjunction with 4B5B coding to assure DC balance and improve electromagnetic compatibility. Depending on the standard or specification of interest, there may be several 5-bit output codes left unused. The presence of any of the unused codes in the data stream can be used as an indication that there is a fault somewhere in the link. Therefore, the unused codes can be used to detect errors in the data stream. Applications 4B5B was popularized by Fiber Distributed Data Interface (FDDI) in the mid-1980s. It was adopted for digital audio transmission by MADI in 1989, and by Fast Ethernet in 1995. The name 4B5B is generally taken to mean the FDDI version. Other 4-to-5-bit codes have been used for magnetic recording and are known as group coded recording (GCR), but those are (0,2) run-length limited codes, with at most two consecutive zeros. 4B5B allows up to three consecutive zeros (a (0,3) RLL code), providing a greater variety of control codes. On optical fiber, the 4B5B output is NRZI-encoded. FDDI over copper (CDDI) uses MLT-3 encoding instead, as does 100BASE-TX Fast Ethernet. The 4B5B encoding is also used for USB Power Delivery communication, where it is sent over the USB-C CC pin (further encoded using biphase mark code) or the USB-A/B power lines (further encoded using frequency-shift keying). Clocking 4B5B codes are designed to produce at least two transitions per 5 bits of output code regardless of input data. The transitions provide necessary trans
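For concreteness, here is the commonly cited FDDI 4B5B data-symbol table in Python (reproduced from the widely published mapping; control symbols such as idle and delimiters are omitted):

    # Sketch: the FDDI 4B5B data-symbol table (hex nibble -> 5-bit code).
    FOUR_B_FIVE_B = {
        0x0: "11110", 0x1: "01001", 0x2: "10100", 0x3: "10101",
        0x4: "01010", 0x5: "01011", 0x6: "01110", 0x7: "01111",
        0x8: "10010", 0x9: "10011", 0xA: "10110", 0xB: "10111",
        0xC: "11010", 0xD: "11011", 0xE: "11100", 0xF: "11101",
    }

    def encode_4b5b(data: bytes) -> str:
        """Encode bytes as a 4B5B bit string, high nibble first."""
        return "".join(FOUR_B_FIVE_B[n]
                       for b in data
                       for n in (b >> 4, b & 0xF))

    bits = encode_4b5b(b"\x00")
    print(bits)                # '1111011110': 8 data bits -> 10 line bits
    assert "0000" not in bits  # never more than three consecutive zeros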
https://en.wikipedia.org/wiki/Hyetograph
A hyetograph is a graphical representation of the distribution of rainfall intensity over time. For instance, in the 24-hour rainfall distributions as developed by the Soil Conservation Service (now the NRCS or Natural Resources Conservation Service), rainfall intensity progressively increases until it reaches a maximum and then gradually decreases. Where this maximum occurs and how fast the maximum is reached is what differentiates one distribution from another. It is important to understand that these distributions describe design storms, not necessarily actual storms: a real storm may not behave in this same fashion, and the maximum intensity may not be reached as uniformly as shown in the SCS hyetographs. See also Voronoi diagram - a method adaptable for calculating the average precipitation over an area External links Precipitation Maps for USA Hydrology
https://en.wikipedia.org/wiki/Eddy%20current%20brake
An eddy current brake, also known as an induction brake, Faraday brake, electric brake or electric retarder, is a device used to slow or stop a moving object by generating eddy currents and thus dissipating its kinetic energy as heat. Unlike friction brakes, where the drag force that stops the moving object is provided by friction between two surfaces pressed together, the drag force in an eddy current brake is an electromagnetic force between a magnet and a nearby conductive object in relative motion, due to eddy currents induced in the conductor through electromagnetic induction. A conductive surface moving past a stationary magnet develops circular electric currents called eddy currents induced in it by the magnetic field, as described by Faraday's law of induction. By Lenz's law, the circulating currents create their own magnetic field that opposes the field of the magnet. Thus the moving conductor experiences a drag force from the magnet that opposes its motion, proportional to its velocity. The kinetic energy of the moving object is dissipated as heat generated by the current flowing through the electrical resistance of the conductor. In an eddy current brake the magnetic field may be created by a permanent magnet or an electromagnet. With an electromagnet system, the braking force can be turned on and off (or varied) by varying the electric current in the electromagnet windings. Another advantage is that since the brake does not work by friction, there are no brake shoe surfaces to wear, eliminating the periodic replacement of parts required with friction brakes. A disadvantage is that since the braking force is proportional to the relative velocity of the brake, the brake has no holding force when the moving object is stationary, as is provided by static friction in a friction brake; hence in vehicles it must be supplemented by a friction brake. In some cases, energy in the form of momentum stored within a motor or other machine is used to energize any electromagnets involved. The
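Because the drag is proportional to velocity, the motion of a body slowed only by an eddy current brake decays exponentially rather than stopping outright. The sketch below uses hypothetical illustration values for the mass and the brake constant.

```python
# A minimal sketch: with drag F = -k*v, Newton's second law
# m*dv/dt = -k*v gives v(t) = v0 * exp(-k*t/m). The mass m and brake
# constant k below are invented illustration values.
import math

m, k, v0 = 1200.0, 300.0, 20.0   # kg, N/(m/s), m/s

def v(t: float) -> float:
    return v0 * math.exp(-k * t / m)

for t in (0.0, 2.0, 4.0, 8.0):
    print(f"t = {t:4.1f} s   v = {v(t):5.2f} m/s   drag = {k * v(t):7.1f} N")
# The drag force fades with speed, which is why an eddy current brake
# cannot hold a stopped vehicle and needs a friction brake as backup.
```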
https://en.wikipedia.org/wiki/IEEE%20802.21
IEEE 802.21 is an IEEE standard for Media Independent Handover (MIH), approved in 2008. The standard supports algorithms enabling seamless handover between wired and wireless networks of the same type as well as handover between different wired and wireless network types, also called vertical handover. Vertical handover was first introduced by Mark Stemm and Randy Katz at UC Berkeley. The standard provides information to allow handing over to and from wired 802.3 networks to wireless 802.11, 802.15, 802.16, 3GPP and 3GPP2 networks through different handover mechanisms. The IEEE 802.21 working group started work in March 2004. More than 30 companies have joined the working group. The group produced a first draft of the standard including the protocol definition in May 2005. The finished standard was published in January 2009. Reasons for 802.21 Cellular networks and 802.11 networks employ handover mechanisms for handover within the same network type (also known as horizontal handover). Mobile IP provides handover mechanisms for handover across subnets of different types of networks, but can be slow in the process. Current 802 standards do not support handover between different types of networks. They also do not provide triggers or other services to accelerate mobile IP-based handovers. Moreover, existing 802 standards provide mechanisms for detecting and selecting network access points, but do not allow for detection and selection of network access points in a way that is independent of the network type. Some of the expectations Allow roaming between 802.11 networks and 3G cellular networks. Allow users to engage in ad hoc teleconferencing. Apply to both wired and wireless networks, likely the same list as IEEE P1905 specifies to cooperate in software-defined networking (see also OpenFlow) Allow for use by multiple vendors and users. Compatibility and conformance with other IEEE 802 standards especially 802.11u unknown user
https://en.wikipedia.org/wiki/Internet%20studies
Internet studies is an interdisciplinary field studying the social, psychological, political, technical, cultural and other dimensions of the Internet and associated information and communication technologies. The human aspects of the Internet are a subject of focus in this field. While that may be facilitated by the underlying technology of the Internet, the focus of study is often less on the technology itself than on the social circumstances that technology creates or influences. While studies of the Internet are now widespread across academic disciplines, there is a growing collaboration among these investigations. In recent years, Internet studies have become institutionalized as courses of study at several institutions of higher learning. Cognates are found in departments of a number of other names, including departments of "Internet and Society", "virtual society", "digital culture", "new media" or "convergent media", various "iSchools", or programs like "Media in Transition" at MIT. On the research side, Internet studies intersects with studies of cyberculture, human–computer interaction, and science and technology studies. Internet and society is a research field that addresses the interrelationship of Internet and society, i.e. how society has changed the Internet and how the Internet has changed society. The topic of social issues relating to Internet has become notable since the rise of the World Wide Web, which can be observed from the fact that journals and newspapers run many stories on topics such as cyberlove, cyberhate, Web 2.0, cybercrime, cyberpolitics, Internet economy, etc. As most of the scientific monographs that have considered Internet and society in their book titles are social theoretical in nature, Internet and society can be considered as a primarily social theoretical research approach of Internet studies. Topics of study In recent years, Internet studies have become institutionalized as courses of study, and even separate departme
https://en.wikipedia.org/wiki/PEPA
Performance Evaluation Process Algebra (PEPA) is a stochastic process algebra designed for modelling computer and communication systems, introduced by Jane Hillston in the 1990s. The language extends classical process algebras such as Milner's CCS and Hoare's CSP by introducing probabilistic branching and timing of transitions. Activity durations are exponentially distributed, and PEPA models are finite-state, so each model gives rise to a stochastic process, specifically a continuous-time Markov chain (CTMC). Thus the language can be used to study quantitative properties of models of computer and communication systems such as throughput, utilisation and response time, as well as qualitative properties such as freedom from deadlock. The language is formally defined using a structured operational semantics in the style invented by Gordon Plotkin. As with most process algebras, PEPA is a parsimonious language. It has only four combinators: prefix, choice, co-operation and hiding. Prefix is the basic building block of a sequential component: the process (a, r).P performs activity a at rate r before evolving to behave as component P. Choice sets up a competition between two possible alternatives: in the process (a, r).P + (b, s).Q either a wins the race (and the process subsequently behaves as P) or b wins the race (and the process subsequently behaves as Q). The co-operation operator requires the two "co-operands" to join for those activities which are specified in the co-operation set: in the process P <a, b> Q the processes P and Q must co-operate on activities a and b, but any other activities may be performed independently. The reversed compound agent theorem gives a set of sufficient conditions for a co-operation to have a product form stationary distribution. Finally, the process P/{a} hides the activity a from view (and prevents other processes from joining with it). Syntax Given a set of action names, the set of PEPA processes is defined by the following BNF
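As an illustration of how such a model is analysed, consider a hypothetical two-state component Idle = (use, lam).Busy and Busy = (done, mu).Idle. The sketch below (Python with NumPy, my own illustration rather than PEPA tool syntax) builds the underlying CTMC generator matrix and solves for the steady-state distribution, from which utilisation and throughput follow.

```python
# A sketch of the CTMC underlying a tiny hypothetical PEPA component.
# The generator matrix Q has off-diagonal entries equal to the
# transition rates and rows summing to zero; the stationary
# distribution pi solves pi @ Q = 0 with pi summing to 1.
import numpy as np

lam, mu = 2.0, 5.0          # hypothetical activity rates
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])

# Solve pi Q = 0, sum(pi) = 1 by appending the normalization equation.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("utilisation (P[Busy]) =", pi[1])       # lam/(lam+mu) ~ 0.2857
print("throughput of 'done'  =", mu * pi[1])  # rate * probability
```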
https://en.wikipedia.org/wiki/Handel%20%28warning%20system%29
Handel was the code-name for the UK's national attack warning system in the Cold War. It consisted of a small console with two microphones, lights and gauges; the duplicated equipment was there to provide a back-up if anything failed. If an enemy air strike was detected, a key on the left-hand side of the console would be turned and two lights would come on. Then the operator would press and hold down a red button and read out the attack warning message. The message would be sent to the police over the telephone system used for the speaking clock, and the police would in turn activate the air attack sirens using the local telephone lines. The rationale was to tackle two problems at once: sharing infrastructure with a public service reduced running costs (the warning system would most likely be used only once in its working life, though it was regularly tested), and the telephone lines were continually tested for readiness, meaning a fault could be detected in time to give a warning. A Handel warning console can be seen at the Imperial War Museum in London among their Cold War exhibits, alongside the warning apparatus used by Kent Police (which was located at Maidstone police station to activate the sirens). See also BIKINI state Four-minute warning National Emergency Alarm Repeater References External links An explanation of how the system worked Cold War military equipment of the United Kingdom Civil defense Emergency management in the United Kingdom United Kingdom nuclear command and control Emergency population warning systems Cold War military history of the United Kingdom Color codes
https://en.wikipedia.org/wiki/Phase%20synchronization
Phase synchronization is the process by which two or more cyclic signals tend to oscillate with a repeating sequence of relative phase angles. Phase synchronization is usually applied to two waveforms of the same frequency whose phase angles coincide in each cycle. However, it can also occur when there is an integer relationship between the frequencies, such that the cyclic signals share a repeating sequence of phase angles over consecutive cycles. These integer relationships are called Arnold tongues, which follow from bifurcation of the circle map. One example of phase synchronization of multiple oscillators can be seen in the behavior of Southeast Asian fireflies. At dusk, the flies begin to flash periodically with random phases and a Gaussian distribution of natural frequencies. As night falls, the flies, sensitive to one another's behavior, begin to synchronize their flashing. After some time all the fireflies within a given tree (or even larger area) will begin to flash simultaneously in a burst. Thinking of the fireflies as biological oscillators, we can define the phase to be 0° during the flash and ±180° exactly halfway until the next flash. Thus, when they begin to flash in unison, they synchronize in phase. One way to keep a local oscillator "phase synchronized" with a remote transmitter uses a phase-locked loop. See also Algebraic connectivity Coherence (physics) Kuramoto model Synchronization (alternating current) References Sync by S. H. Strogatz (2002). Synchronization - A universal concept in nonlinear sciences by A. Pikovsky, M. Rosenblum, J. Kurths (2001) External links A tutorial on calculating Phase locking and Phase synchronization in Matlab. Wave mechanics Synchronization
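The firefly behavior can be captured by the simplest coupled-oscillator model (a Kuramoto-style pair, my own illustration with invented parameter values): the phase difference phi between two oscillators obeys dphi/dt = dw − K·sin(phi), which settles to a constant whenever the coupling K exceeds the frequency mismatch dw.

```python
# A minimal sketch of phase locking for two coupled oscillators.
# Locking occurs at the fixed point sin(phi) = dw/K when K >= |dw|.
import math

dw, K = 0.5, 1.0            # frequency mismatch and coupling strength
phi, dt = 2.0, 0.001        # arbitrary initial phase difference, step

for _ in range(200_000):    # simple Euler integration
    phi += (dw - K * math.sin(phi)) * dt

print("locked phase difference:", phi)
print("predicted  arcsin(dw/K):", math.asin(dw / K))
```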
https://en.wikipedia.org/wiki/Parametric%20derivative
In calculus, a parametric derivative is a derivative of a dependent variable with respect to another dependent variable that is taken when both variables depend on an independent third variable, usually thought of as "time" (that is, when the dependent variables are x and y and are given by parametric equations in t). First derivative Let x(t) and y(t) be the coordinates of the points of the curve expressed as functions of a variable t: x = x(t), y = y(t). The first derivative implied by these parametric equations is dy/dx = (dy/dt) / (dx/dt) = ẏ(t) / ẋ(t), where the notation ẋ(t) denotes the derivative of x with respect to t. This can be derived using the chain rule for derivatives: dy/dt = (dy/dx)·(dx/dt), and dividing both sides by dx/dt to give the equation above. In general all of these derivatives — dy/dt, dx/dt, and dy/dx — are themselves functions of t and so can be written more explicitly as, for example, dy/dx = (dy/dx)(t). Second derivative The second derivative implied by a parametric equation is given by d²y/dx² = (d/dt)(dy/dx) / (dx/dt) = (ẋ(t)ÿ(t) − ẏ(t)ẍ(t)) / ẋ(t)³, by making use of the quotient rule for derivatives. The latter result is useful in the computation of curvature. Example Consider a pair of functions x(t) and y(t). Differentiating both functions with respect to t and substituting into the formula above gives the parametric derivative dy/dx = ẏ(t)/ẋ(t), where ẋ and ẏ are understood to be functions of t (a worked example follows below). See also Generalizations of the derivative External links Differential calculus
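A concrete worked example (my own choice of curve): for x = t² and y = t³ the formulas above give dy/dx = 3t/2 and d²y/dx² = 3/(4t), which the short SymPy sketch below verifies.

```python
# Parametric first and second derivatives for x = t**2, y = t**3.
import sympy as sp

t = sp.symbols("t")
x, y = t**2, t**3

dydx = sp.diff(y, t) / sp.diff(x, t)        # (dy/dt) / (dx/dt)
d2ydx2 = sp.diff(dydx, t) / sp.diff(x, t)   # d/dt(dy/dx) / (dx/dt)

print(sp.simplify(dydx))    # 3*t/2
print(sp.simplify(d2ydx2))  # 3/(4*t)
```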
https://en.wikipedia.org/wiki/List%20of%20tundra%20ecoregions
A list of tundra ecoregions from the World Wide Fund for Nature (WWF) includes: See also Tundra External links Arctic tundra biome information from the WWF Alpine tundra information from the WWF The Arctic biome at Classroom of the Future References Tundra
https://en.wikipedia.org/wiki/Superhelix
A superhelix is a molecular structure in which a helix is itself coiled into a helix. This is significant to both proteins and genetic material, such as overwound circular DNA. The earliest significant reference in molecular biology is from 1971, by F. B. Fuller: "A geometric invariant of a space curve, the writhing number, is defined and studied. For the central curve of a twisted cord the writhing number measures the extent to which coiling of the central curve has relieved local twisting of the cord. This study originated in response to questions that arise in the study of supercoiled double-stranded DNA rings." About the writhing number, mathematician W. F. Pohl says: "It is well known that the writhing number is a standard measure of the global geometry of a closed space curve." Contrary to intuition, a topological property, the linking number, arises from the geometric properties twist and writhe according to the following relationship: Lk = T + W, where Lk is the linking number, W is the writhe and T is the twist of the coil. The linking number refers to the number of times that one strand wraps around the other. In DNA this property does not change and can only be modified by specialized enzymes called topoisomerases. See also DNA supercoil (superhelical DNA) Knot theory References External links DNA Structure and Topology at Molecular Biochemistry II: The Bello Lectures. Helices Molecular biology Molecular topology
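The relationship Lk = T + W can be illustrated numerically. The sketch below assumes the textbook helical repeat of about 10.5 base pairs per turn for relaxed B-DNA; the plasmid size and the amount of unwinding are hypothetical.

```python
# A numeric illustration of Lk = T + W for a closed circular DNA.
bp = 2100
twist_relaxed = bp / 10.5          # T ~ 200 turns when fully relaxed
Lk = round(twist_relaxed)          # linking number, fixed for a closed circle

# If a bound protein locally unwinds the duplex by 5 turns, Lk cannot
# change (only a topoisomerase could change it), so the deficit must
# appear as writhe, i.e. supercoiling of the helix axis:
twist_unwound = twist_relaxed - 5
writhe = Lk - twist_unwound
print(f"Lk = {Lk}, T = {twist_unwound}, W = {writhe}")   # W = 5.0
```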
https://en.wikipedia.org/wiki/VSAN
A virtual storage area network (virtual SAN, VSAN or vSAN) is a logical representation of a physical storage area network (SAN). A VSAN abstracts the storage-related operations from the physical storage layer, and provides shared storage access to the applications and virtual machines by combining the servers' local storage over a network into a single or multiple storage pools. The use of VSANs allows the isolation of traffic within specific portions of the network. If a problem occurs in one VSAN, that problem can be handled with a minimum of disruption to the rest of the network. VSANs can also be configured separately and independently. Technology Operation A VSAN operates as a dedicated piece of software responsible for storage access and, depending on the vendor, can run either as a virtual storage appliance (VSA), that is, a storage controller that runs inside an isolated virtual machine (VM) or as an ordinary user-mode application, such as StarWind Virtual SAN or DataCore SANsymphony; alternatively, it can be implemented as a kernel-mode loadable module, such as VMware vSAN or Microsoft Storage Spaces Direct (S2D). A VSAN can be tied to a specific hypervisor, known as hypervisor-dedicated, or it can allow different hypervisors, known as hypervisor-agnostic. Different vendors have different requirements for the minimum number of nodes that participate in a resilient VSAN cluster; the minimum requirement is at least two nodes for high availability. All-flash versus hybrid VSAN Data center operators can deploy VSANs in an all-flash environment or a hybrid configuration, where flash is only used at the caching layer, and traditional spinning disk storage is used everywhere else. All-flash VSANs offer higher performance, but as of 2019 were more expensive than hybrid configurations. Protocols For sharing storage over a network, VSAN utilizes protocols including Fibre Channel (FC), Internet Small Computer Systems Interface (iSCSI), Server Message Block (SMB), and Network
https://en.wikipedia.org/wiki/Derating
In electronics, derating is the operation of a device at less than its rated maximum capability to prolong its life. Typical examples include operation below the maximum power rating, current rating, or voltage rating. In electronics Power semiconductor devices have a maximum power dissipation rating usually quoted at a stated reference case temperature (commonly 25 °C). The datasheet for the device also includes a derating curve which indicates how much power a device can dissipate without getting damaged at any given case temperature, and this must be taken into account while designing a system. As can be seen from the derating curve image for a hypothetical bipolar junction transistor, the device (rated for 100 W at the reference case temperature) cannot be expected to dissipate anything more than about 40 W if the ambient temperature is such that the temperature at which the device's case will stabilize (after heat-sinking) is correspondingly high on the curve. This final case temperature is a function of the thermal resistance between the device's case and the heat-sink, and between the heat-sink and the ambient (this includes the heat-sink's °C/W rating, with lower values implying better cooling characteristics). Some capacitors' voltage capability is reduced at higher temperatures because the dielectric (e.g., a polymer) is softened by the heat, and its breakdown field strength is reduced. Derating curves are included in data sheets for such capacitors. Derating can also provide a safety margin against transient voltages or currents (spikes) that exceed normal operation, and it can prolong life. For example, the life of electrolytic capacitors is dramatically increased by operating them below their maximum temperature rating. In electrical installations All dimmers rely on heat conduction and convection to keep the electronic components cool. Similarly, power wiring (e.g., house wiring) not surrounded by an air space (e.g., inside a conduit) needs to have its current-limiting device (e.g., circuit breaker or fuse) adjusted so as not to carry as
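A derating curve of this kind is often simply linear between the reference temperature and the maximum junction temperature. The sketch below uses hypothetical numbers chosen to match the article's example device: 100 W allowed at a 25 °C case temperature, derated linearly to 0 W at an assumed 150 °C maximum.

```python
# A sketch of reading a linear derating curve (illustration values only).
P_MAX, T_REF, T_MAX = 100.0, 25.0, 150.0

def allowed_dissipation(t_case: float) -> float:
    """Maximum power (W) the device may dissipate at a case temperature."""
    if t_case <= T_REF:
        return P_MAX
    return max(0.0, P_MAX * (T_MAX - t_case) / (T_MAX - T_REF))

print(allowed_dissipation(25.0))   # 100.0 W
print(allowed_dissipation(100.0))  # 40.0 W, as in the example above
```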
https://en.wikipedia.org/wiki/Graphics%20device%20interface
A graphics device interface is a subsystem that most operating systems use for representing graphical objects and transmitting them to output devices such as monitors and printers. In most cases, the graphics device interface can only draw 2D graphics and simple 3D graphics; to make use of more advanced graphics without sacrificing performance, an API such as DirectX or OpenGL is used instead. In Microsoft Windows, the GDI functionality resides in gdi.exe on 16-bit Windows, and gdi32.dll on 32-bit Windows. Operating system technology
https://en.wikipedia.org/wiki/WeatherStar
WeatherStar (sometimes rendered Weather Star or WeatherSTAR; "STAR" being an acronym for Satellite Transponder Addressable Receiver) is the technology used by American cable and satellite television network The Weather Channel (TWC) to generate its local forecast segments—branded as Local on the 8s (LOT8s) since 2002 and previously from 1996 to 1998—on cable and IPTV systems nationwide. The hardware takes the form of a computerized unit installed at a cable system's headend. It receives, generates, and inserts local forecasts and other weather information, including weather advisories and warnings, into TWC's national programming. Overview The primary purpose of WeatherStar units is to disseminate weather information for local forecast segments on The Weather Channel. The forecast and observation data – which is compiled from local offices of the National Weather Service (NWS), the Storm Prediction Center (SPC), and The Weather Channel (which began producing in-house forecasts in 2002, replacing the NWS-sourced zone forecasts that were utilized for the STAR's descriptive, regional and extended forecast products) – is received from the vertical blanking interval of the TWC video feed and from data transmitted via satellite; the localized data is then sent to the unit that inserts the data and accompanying programmed graphics over the TWC feed. The WeatherStar systems are typically programmed to cue the local forecast segments and Lower Display Line (LDL) at given times. The units are programmed to feature customized segments known as "flavors," pre-determined segment lengths for each local forecast segment, varying by the time of broadcast, accommodating the inclusion or exclusion of certain products from a segment's product list. (Until the Local on the 8s segments adopted a uniform length, the extended forecast was the only product regularly included in each flavor.) Flavor lengths previously varied commonly between 30 seconds and two minutes, with some running as
https://en.wikipedia.org/wiki/Differential-algebraic%20system%20of%20equations
In electrical engineering, a differential-algebraic system of equations (DAE) is a system of equations that either contains differential equations and algebraic equations, or is equivalent to such a system. In mathematics these are examples of differential algebraic varieties and correspond to ideals in differential polynomial rings (see the article on differential algebra for the algebraic setup). We can write these differential equations for a dependent vector of variables x in one independent variable t as F(ẋ(t), x(t), t) = 0. When considering these symbols as functions of a real variable (as is the case in applications in electrical engineering or control theory) we look at x: [a, b] → Rⁿ as a vector of dependent variables, and the system has as many equations as variables, so we consider F: R²ⁿ⁺¹ → Rⁿ. They are distinct from ordinary differential equations (ODEs) in that a DAE is not completely solvable for the derivatives of all components of the function x because these may not all appear (i.e. some equations are algebraic); technically the distinction between an implicit ODE system [that may be rendered explicit] and a DAE system is that the Jacobian matrix ∂F/∂ẋ is a singular matrix for a DAE system. This distinction between ODEs and DAEs is made because DAEs have different characteristics and are generally more difficult to solve. In practical terms, the distinction between DAEs and ODEs is often that the solution of a DAE system depends on the derivatives of the input signal and not just the signal itself as in the case of ODEs; this issue is commonly encountered in nonlinear systems with hysteresis, such as the Schmitt trigger. This difference is more clearly visible if the system may be rewritten so that instead of x we consider a pair (x, y) of vectors of dependent variables and the DAE has the form ẋ(t) = f(x(t), y(t), t), 0 = g(x(t), y(t), t), where x(t) ∈ Rⁿ is a vector of differential variables, y(t) ∈ Rᵐ is a vector of algebraic variables, and f and g take values in Rⁿ and Rᵐ respectively. A DAE system of this form is called semi-explicit. Every solution of the second half g of the equation defines a unique direction for x via the first half f of the equations, while the dir
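The semi-explicit structure suggests a naive integration scheme: solve the algebraic constraint for y at every step, then advance x with the differential part. The sketch below is a toy illustration of my own (a deliberately simple index-1 system with f = −x + y and g = y − sin(t), so the constraint forces y(t) = sin(t)); a production solver would use dedicated DAE methods instead.

```python
# A minimal sketch of integrating a semi-explicit DAE
#     x' = f(x, y, t),   0 = g(x, y, t)
# by enforcing the constraint with a root-finder and Euler-stepping x.
import math
from scipy.optimize import brentq

def f(x, y, t):
    return -x + y

def g(y, x, t):
    return y - math.sin(t)      # toy algebraic constraint

x, t, dt = 1.0, 0.0, 1e-3
for _ in range(5000):
    y = brentq(g, -2.0, 2.0, args=(x, t))  # solve g(., x, t) = 0 for y
    x += f(x, y, t) * dt                   # advance the differential part
    t += dt

print(f"t = {t:.2f}, x = {x:.4f}")
```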
https://en.wikipedia.org/wiki/Serpent%27s%20Wall
Serpent's Wall () is an ancient system of earthworks (valla) located in the middle Dnieper Ukraine (Naddniprianshchyna) that stretch across primarily Kyiv Oblast, Ukraine. They seem to be similar in purpose and character to Trajan's Wall situated to the southwest in Bessarabia. The remaining ancient walls have a total length of 1,000 km and constitute less than 20% of the original wall system. History According to legend, the earthworks are the result of ancient events when a mythical hero (bohatyr), Kozmodemian (or Borysohlib), in order to slay a gargantuan Dragon (Serpent), harnessed it to a giant plow and furrowed the land. The Dragon (Serpent) perished and left furrows, on both sides of which were immense banks of earth that became known as the Serpent's Wall. The ancient walls were built between the 2nd century BC and 7th century AD, according to carbon dating. There are three theories as to what peoples built the walls: either the Sarmatians against the Scythians, or the Goths of Oium against the Huns, or the Early East Slavs against the nomads of the southern steppes. In Slavic culture, the warlike nomads are often associated with the winged dragon, hence the name. On the right bank of the Dnieper, between its tributaries the Teteriv and the Ros, the remnants of the wall form six lines running from west to east. One Serpent's Wall passed along the left bank of the Dnieper and its tributary the Sula. The 1974–85 explorations established that the Serpent's Wall is a remnant of wood-and-earth fortifications built mostly at the end of the 10th century and in the first half of the 11th century, with a smaller part in the 12th century, to protect the middle Dnieper Ukraine and Kyiv from the Pechenegs and Cumans. Gallery References External links Photo documentary about the Serpent's Wall, by Elena Filatova Landmarks in Ukraine Fortifications in Ukraine Walls Ruins in Ukraine Fortification lines History of Kyiv Oblast Linear earthworks
https://en.wikipedia.org/wiki/Nassi%E2%80%93Shneiderman%20diagram
A Nassi–Shneiderman diagram (NSD) in computer programming is a graphical design representation for structured programming. This type of diagram was developed in 1972 by Isaac Nassi and Ben Shneiderman who were both graduate students at Stony Brook University. These diagrams are also called structograms, as they show a program's structures. Overview Following a top-down design, the problem at hand is reduced into smaller and smaller subproblems, until only simple statements and control flow constructs remain. Nassi–Shneiderman diagrams reflect this top-down decomposition in a straightforward way, using nested boxes to represent subproblems. Consistent with the philosophy of structured programming, Nassi–Shneiderman diagrams have no representation for a GOTO statement. Nassi–Shneiderman diagrams are only rarely used for formal programming. Their abstraction level is close to structured program code and modifications require the whole diagram to be redrawn, but graphic editors removed that limitation. They clarify algorithms and high-level designs, which make them useful in teaching. They were included in Microsoft Visio and dozens of other software tools, such as the German EasyCODE. In Germany, Nassi–Shneiderman diagrams were standardised in 1985 as DIN 66261. They are still used in German introductions to programming, for example Böttcher and Kneißl's introduction to C, Baeumle-Courth and Schmidt's introduction to C and Kirch's introduction to C#. Nassi–Shneiderman diagrams can also be used in technical writing. Diagrams Process blocks: the process block represents the simplest of steps and requires no analysis. When a process block is encountered, the action inside the block is performed and we move onto the next block. Branching blocks: there are two types of branching blocks. First is the simple True/False or Yes/No branching block which offers the program two paths to take depending on whether or not a condition has been fulfilled. These blocks can be u
https://en.wikipedia.org/wiki/Nakayama%27s%20lemma
In mathematics, more specifically abstract algebra and commutative algebra, Nakayama's lemma — also known as the Krull–Azumaya theorem — governs the interaction between the Jacobson radical of a ring (typically a commutative ring) and its finitely generated modules. Informally, the lemma immediately gives a precise sense in which finitely generated modules over a commutative ring behave like vector spaces over a field. It is an important tool in algebraic geometry, because it allows local data on algebraic varieties, in the form of modules over local rings, to be studied pointwise as vector spaces over the residue field of the ring. The lemma is named after the Japanese mathematician Tadashi Nakayama and introduced in its present form in Nagata (1962), although it was first discovered in the special case of ideals in a commutative ring by Wolfgang Krull and then in general by Goro Azumaya (1951). In the commutative case, the lemma is a simple consequence of a generalized form of the Cayley–Hamilton theorem, an observation made by Michael Atiyah (1969). The special case of the noncommutative version of the lemma for right ideals appears in Nathan Jacobson (1945), and so the noncommutative Nakayama lemma is sometimes known as the Jacobson–Azumaya theorem. The latter has various applications in the theory of Jacobson radicals. Statement Let R be a commutative ring with identity 1. The following is Nakayama's lemma: Statement 1: Let I be an ideal in R, and M a finitely generated module over R. If IM = M, then there exists an r ∈ R with r ≡ 1 (mod I) such that rM = 0. This is proven below. A useful alternative formulation is the following: Statement 2: Let I be an ideal in R, and M a finitely generated module over R. If IM = M, then there exists an x ∈ I such that xm = m for all m ∈ M. Proof: Take x = 1 − r, with r as in Statement 1. The following corollary is also known as Nakayama's lemma, and it is in this form that it most often appears. Statement 3: If M is a finitely generated mod
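For the commutative case, the proof of Statement 1 is the classical "determinant trick", in line with the Cayley–Hamilton observation cited above; a sketch:

```latex
% Sketch of the determinant trick behind Statement 1 (commutative case).
Let $m_1,\dots,m_n$ generate $M$. Since $IM = M$, there are $a_{ij}\in I$
with $m_i = \sum_j a_{ij} m_j$, i.e. $(\mathrm{Id} - A)\,\mathbf{m} = 0$
for the matrix $A = (a_{ij})$. Multiplying by the adjugate of
$\mathrm{Id}-A$ gives $\det(\mathrm{Id}-A)\, m_k = 0$ for every $k$, so
$r := \det(\mathrm{Id}-A)$ annihilates $M$; expanding the determinant
shows $r \equiv 1 \pmod{I}$, as required.
```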
https://en.wikipedia.org/wiki/Ion%20beam-assisted%20deposition
Ion beam assisted deposition or IBAD or IAD (not to be confused with ion beam induced deposition, IBID) is a materials engineering technique which combines ion implantation with simultaneous sputtering or another physical vapor deposition technique. Besides providing independent control of parameters such as ion energy, temperature and arrival rate of atomic species during deposition, this technique is especially useful to create a gradual transition between the substrate material and the deposited film, and for depositing films with less built-in strain than is possible by other techniques. These two properties can result in films with a much more durable bond to the substrate. Experience has shown that some meta-stable compounds like cubic boron nitride (c-BN), can only be formed in thin films when bombarded with energetic ions during the deposition process. See also Ion beam deposition Physical vapor deposition Ion plating References 4wave Inc.'s Ion beam deposition page with diagram of a typical IBD system Thin film deposition
https://en.wikipedia.org/wiki/Titanium%20nitride
Titanium nitride (TiN; sometimes known as tinite) is an extremely hard ceramic material, often used as a physical vapor deposition (PVD) coating on titanium alloys, steel, carbide, and aluminium components to improve the substrate's surface properties. Applied as a thin coating, TiN is used to harden and protect cutting and sliding surfaces, for decorative purposes (for its golden appearance), and as a non-toxic exterior for medical implants. In most applications a coating only a few micrometres thick is applied. Characteristics TiN has a Vickers hardness of 1800–2100, a hardness of 31 ± 4 GPa, a modulus of elasticity of 550 ± 50 GPa, a thermal expansion coefficient of 9.35 × 10⁻⁶ K⁻¹, and a superconducting transition temperature of 5.6 K. TiN will oxidize at 800 °C in a normal atmosphere. TiN has a brown color, and appears gold when applied as a coating. It is chemically stable at 20 °C, according to laboratory tests, but can be slowly attacked by concentrated acid solutions with rising temperatures. Depending on the substrate material and surface finish, TiN will have a coefficient of friction ranging from 0.4 to 0.9 against another TiN surface (non-lubricated). The typical TiN formation has a crystal structure of NaCl type with a roughly 1:1 stoichiometry; TiNx compounds with x ranging from 0.6 to 1.2 are, however, thermodynamically stable. TiN becomes superconducting at cryogenic temperatures, with a critical temperature of up to 6.0 K for single crystals. Superconductivity in thin-film TiN has been studied extensively, with the superconducting properties strongly varying depending on sample preparation, up to complete suppression of superconductivity at a superconductor–insulator transition. A thin film of TiN was chilled to near absolute zero, converting it into the first known superinsulator, with resistance suddenly increasing by a factor of 100,000. Natural occurrence Osbornite is a very rare natural form of titanium nitride, found almost exclusively in meteorites. Uses A well
https://en.wikipedia.org/wiki/Teichm%C3%BCller%20space
In mathematics, the Teichmüller space of a (real) topological (or differential) surface S is a space that parametrizes complex structures on S up to the action of homeomorphisms that are isotopic to the identity homeomorphism. Teichmüller spaces are named after Oswald Teichmüller. Each point in a Teichmüller space may be regarded as an isomorphism class of "marked" Riemann surfaces, where a "marking" is an isotopy class of homeomorphisms from S to itself. It can be viewed as a moduli space for marked hyperbolic structures on the surface, and this endows it with a natural topology for which it is homeomorphic to a ball of dimension 6g − 6 for a surface of genus g ≥ 2. In this way Teichmüller space can be viewed as the universal covering orbifold of the Riemann moduli space. The Teichmüller space has a canonical complex manifold structure and a wealth of natural metrics. The study of geometric features of these various structures is an active body of research. The sub-field of mathematics that studies the Teichmüller space is called Teichmüller theory. History Moduli spaces for Riemann surfaces and related Fuchsian groups have been studied since the work of Bernhard Riemann (1826–1866), who knew that 3g − 3 complex parameters were needed to describe the variations of complex structures on a surface of genus g. The early study of Teichmüller space, in the late nineteenth–early twentieth century, was geometric and founded on the interpretation of Riemann surfaces as hyperbolic surfaces. Among the main contributors were Felix Klein, Henri Poincaré, Paul Koebe, Jakob Nielsen, Robert Fricke and Werner Fenchel. The main contribution of Teichmüller to the study of moduli was the introduction of quasiconformal mappings to the subject. They allow us to give much more depth to the study of moduli spaces by endowing them with additional features that were not present in the previous, more elementary works. After World War II the subject was developed further in this analytic vein, in particular by L
https://en.wikipedia.org/wiki/Remote%20digital%20terminal
In telecommunications, a remote digital terminal (RDT) typically accepts E1, T1 or OC-3 digital lines to communicate with a telephone Access network (AN) or telephone exchange (Local Digital Switch, LDS) on one side, and forms a local exchange (LE) on the other, which is connected to "plain old telephone service" (POTS) lines. See also Distributed switching Remote concentrator Digital access carrier system References Local loop
https://en.wikipedia.org/wiki/Dobi%C5%84ski%27s%20formula
In combinatorial mathematics, Dobiński's formula states that the n-th Bell number B_n (i.e., the number of partitions of a set of size n) equals B_n = (1/e) ∑_{k=0}^∞ k^n/k!, where e denotes Euler's number. The formula is named after G. Dobiński, who published it in 1877. Probabilistic content In the setting of probability theory, Dobiński's formula represents the n-th moment of the Poisson distribution with mean 1. Sometimes Dobiński's formula is stated as saying that the number of partitions of a set of size n equals the n-th moment of that distribution. Reduced formula The computation of the sum of Dobiński's series can be reduced to a finite sum, taking into account the information that B_n is an integer: B_n = ⌈(1/e) ∑_{k=0}^{K−1} k^n/k!⌉ for any integer K large enough that the remaining tail (1/e) ∑_{k≥K} k^n/k! is less than 1. Indeed, once K is sufficiently large the terms k^n/k! decay at least geometrically in k, so the tail is dominated by a convergent geometric series summing to less than 1, whence the reduced formula. Generalization Dobiński's formula can be seen as the particular case x = 1 of the formula for the Touchard polynomials, T_n(x) = e^{−x} ∑_{k=0}^∞ x^k k^n/k!, since B_n = T_n(1). Proof One proof relies on a formula for the generating function for Bell numbers, e^{e^x − 1} = ∑_{n=0}^∞ B_n x^n/n!. The power series for the exponential gives e^{e^x} = ∑_{k=0}^∞ e^{kx}/k! = ∑_{k=0}^∞ (1/k!) ∑_{n=0}^∞ (kx)^n/n!, so e^{e^x − 1} = (1/e) ∑_{k=0}^∞ (1/k!) ∑_{n=0}^∞ (kx)^n/n!. The coefficient of x^n in this power series must be B_n, so B_n = (1/e) ∑_{k=0}^∞ k^n/k!. Another style of proof was given by Rota. Recall that if x and n are nonnegative integers then the number of one-to-one functions that map a size-n set into a size-x set is the falling factorial (x)_n = x(x − 1)⋯(x − n + 1). Let ƒ be any function from a size-n set A into a size-x set B. For any b ∈ B, let ƒ⁻¹(b) = {a ∈ A : ƒ(a) = b}. Then {ƒ⁻¹(b) : b ∈ B, ƒ⁻¹(b) ≠ ∅} is a partition of A. Rota calls this partition the "kernel" of the function ƒ. Any function from A into B factors into one function that maps a member of A to the element of the kernel to which it belongs, and another function, which is necessarily one-to-one, that maps the kernel into B. The first of these two factors is completely determined by the partition t
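The formula is easy to check numerically: truncating the series at a modest number of terms already reproduces the first Bell numbers 1, 1, 2, 5, 15, 52.

```python
# A quick numerical check of Dobinski's formula via the reduced
# (truncated) form; K = 50 terms is ample for small n.
from math import e, factorial

def bell_dobinski(n: int, K: int = 50) -> int:
    s = sum(k**n / factorial(k) for k in range(K))
    return round(s / e)

print([bell_dobinski(n) for n in range(6)])  # [1, 1, 2, 5, 15, 52]
```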
https://en.wikipedia.org/wiki/G.%20Harry%20Stine
George Harry Stine (March 26, 1928 – November 2, 1997) was one of the founding figures of model rocketry, a science and technology writer, and (under the name Lee Correy) a science fiction author. Education and early career Stine grew up in Colorado Springs and attended New Mexico Military Institute and Colorado College in Colorado Springs, majoring in physics. Upon his graduation he went to work at White Sands Proving Grounds, first as a civilian scientist and then, from 1955 to 1957, at the U.S. Naval Ordnance Missile Test Facility as head of the Range Operations Division. Stine and his wife Barbara were friends of author Robert A. Heinlein, who sponsored their wedding, as Harry's parents were dead and Barbara's mother too ill to travel. Several of Heinlein's books are dedicated to one or both of them, most particularly Have Space Suit - Will Travel. Stine wrote science fiction under the pen name "Lee Correy" in the mid-1950s and under his own name in the 1980s and 1990s, as well as writing science articles for Popular Mechanics. Model rocketry After White Sands, Stine was employed at several other aerospace companies, finally ending up at Martin working on the Titan project. This job was short-lived: he was abruptly fired in 1957 when United Press called him for a reaction to the launch of Sputnik 1, and he repeated to them a passage from his just-published book Earth Satellites and the Race for Space Superiority, in which he wrote, "For the first time since the dawn of history, the Earth is going to have more than one moon. This is due to happen within the next few months—or it may have already happened even at the time you are reading this." The next day he was told to clear out his desk. To be more precise, in his "The Formative Years of Model Rocketry, 1957–1962; A Personal Memoir" (International Astronautical Federation, IAF XXVIIth Congress, Anaheim, CA, October 10–16, 1976 (76-241)), he wrote "I was fired by the Martin Company on October 5, 1957, for tellin
https://en.wikipedia.org/wiki/Reaching%20definition
In compiler theory, a reaching definition for a given instruction is an earlier instruction whose target variable can reach (be assigned to) the given one without an intervening assignment. For example, in the following code: d1 : y := 3 d2 : x := y d1 is a reaching definition for d2. In the following example, however: d1 : y := 3 d2 : y := 4 d3 : x := y d1 is no longer a reaching definition for d3, because d2 kills its reach: the value defined in d1 is no longer available and cannot reach d3. As analysis The similarly named reaching definitions is a data-flow analysis which statically determines which definitions may reach a given point in the code. Because of its simplicity, it is often used as the canonical example of a data-flow analysis in textbooks. The data-flow confluence operator used is set union, and the analysis is forward flow. Reaching definitions are used to compute use-def chains. The data-flow equations used for a given basic block S in reaching definitions are: REACH_in[S] = ⋃ {REACH_out[p] : p ∈ pred[S]} and REACH_out[S] = GEN[S] ∪ (REACH_in[S] − KILL[S]). In other words, the set of reaching definitions going into S are all of the reaching definitions from S's predecessors, pred[S]. pred[S] consists of all of the basic blocks that come before S in the control-flow graph. The reaching definitions coming out of S are all reaching definitions of its predecessors minus those reaching definitions whose variable is killed by S plus any new definitions generated within S. For a generic instruction d: v ← f(x₁, …, xₙ), we define the GEN and KILL sets as follows: GEN[d: v ← f(x₁, …, xₙ)] = {d}, a set of locally available definitions in a basic block; KILL[d: v ← f(x₁, …, xₙ)] = DEFS[v] − {d}, a set of definitions (not locally available, but in the rest of the program) killed by definitions in the basic block; where DEFS[v] is the set of all definitions that assign to the variable v. Here d is a unique label attached to the assigning instruction; thus, the domain of values in reaching definitions are these instruction labels. Worklist algorithm Reaching definition is usually calculated using an iterative worklist algorithm. Input: control-flow gra
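A compact sketch of the iterative worklist algorithm follows; the four-block control-flow graph and its GEN/KILL sets are a hypothetical example of my own (here d3 redefines the variable of d1, and d4 redefines that of d2).

```python
# Iterative worklist computation of reaching definitions.
preds = {"B1": [], "B2": ["B1", "B4"], "B3": ["B2"], "B4": ["B2", "B3"]}
succs = {"B1": ["B2"], "B2": ["B3", "B4"], "B3": ["B4"], "B4": ["B2"]}
GEN  = {"B1": {"d1", "d2"}, "B2": {"d3"}, "B3": {"d4"}, "B4": set()}
KILL = {"B1": set(), "B2": {"d1"}, "B3": {"d2"}, "B4": set()}

IN  = {b: set() for b in preds}
OUT = {b: set() for b in preds}

worklist = list(preds)                    # start with every block
while worklist:
    b = worklist.pop(0)
    IN[b] = set().union(*[OUT[p] for p in preds[b]])
    new_out = GEN[b] | (IN[b] - KILL[b])
    if new_out != OUT[b]:                 # changed: successors must be redone
        OUT[b] = new_out
        worklist.extend(s for s in succs[b] if s not in worklist)

for b in sorted(preds):
    print(b, "in:", sorted(IN[b]), "out:", sorted(OUT[b]))
```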
https://en.wikipedia.org/wiki/Radio%20jamming
Radio jamming is the deliberate blocking of or interference with wireless communications. In some cases, jammers work by the transmission of radio signals that disrupt telecommunications by decreasing the signal-to-noise ratio. The concept can be used in wireless data networks to disrupt information flow. It is a common form of censorship in totalitarian countries, in order to prevent foreign radio stations in border areas from reaching the country. Jamming is usually distinguished from interference that can occur due to device malfunctions or other accidental circumstances. Devices that simply cause interference are regulated differently. Unintentional "jamming" occurs when an operator transmits on a busy frequency without first checking whether it is in use, or without being able to hear stations using the frequency. Another form of unintentional jamming occurs when equipment accidentally radiates a signal, such as a cable television plant that accidentally emits on an aircraft emergency frequency. Distinction between "jamming" and "interference" Originally the terms were used interchangeably but nowadays most radio users use the term "jamming" to describe the deliberate use of radio noise or signals in an attempt to disrupt communications (or prevent listening to broadcasts) whereas the term "interference" is used to describe unintentional forms of disruption (which are far more common). However, the distinction is still not universally applied. For inadvertent disruptions, see electromagnetic compatibility. Method Intentional communications jamming is usually aimed at radio signals to disrupt control of a battle. A transmitter, tuned to the same frequency as the opponents' receiving equipment and with the same type of modulation, can, with enough power, override any signal at the receiver. Digital wireless jamming for signals such as Bluetooth and WiFi is possible with very low power. The most common types of this form of signal jamming are random noise, ra
https://en.wikipedia.org/wiki/Slutsky%20equation
The Slutsky equation (or Slutsky identity) in economics, named after Eugen Slutsky, relates changes in Marshallian (uncompensated) demand to changes in Hicksian (compensated) demand, which is known as such since it compensates to maintain a fixed level of utility. There are two parts of the Slutsky equation, namely the substitution effect and the income effect. The own-price substitution effect is always non-positive: a compensated increase in a good's price never increases the quantity demanded of that good. Slutsky derived this formula to explore a consumer's response as the price changes. When the price increases, the budget set moves inward, which also causes the quantity demanded to decrease. In contrast, when the price decreases, the budget set moves outward, which leads to an increase in the quantity demanded. The substitution effect is due to the effect of the relative price change while the income effect is due to the effect of income being freed up. The equation demonstrates that the change in the demand for a good, caused by a price change, is the result of two effects: a substitution effect: when the price of a good changes, as it becomes relatively cheaper, if hypothetically the consumer's consumption remained the same, income would be freed up which could be spent on a combination of each or more of the goods; an income effect: the purchasing power of a consumer increases as a result of a price decrease, so the consumer can now afford better products or more of the same products, depending on whether the product itself is a normal good or an inferior good. The Slutsky equation decomposes the change in demand for good i in response to a change in the price of good j: ∂x_i(p, w)/∂p_j = ∂h_i(p, u)/∂p_j − x_j(p, w)·∂x_i(p, w)/∂w, where h(p, u) is the Hicksian demand and x(p, w) is the Marshallian demand, at the vector of price levels p, wealth level (or, alternatively, income level) w, and fixed utility level u given by maximizing utility at the original price and income, formally given by the indirect utility function v(p, w). The right-hand side of the equation is equal to the change in demand for goo
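The decomposition can be verified symbolically for a concrete utility function. The sketch below uses a Cobb-Douglas example of my own, u = sqrt(x1·x2), for which the Marshallian demand is x1 = w/(2p1), the expenditure function is e(p, u) = 2u·sqrt(p1·p2), and hence the Hicksian demand is h1 = u·sqrt(p2/p1).

```python
# Symbolic check of the Slutsky equation for Cobb-Douglas utility.
import sympy as sp

p1, p2, w = sp.symbols("p1 p2 w", positive=True)
u = sp.symbols("u", positive=True)

x1 = w / (2 * p1)                                  # Marshallian demand
v = sp.sqrt(w / (2 * p1)) * sp.sqrt(w / (2 * p2))  # indirect utility
h1 = u * sp.sqrt(p2 / p1)                          # Hicksian demand

lhs = sp.diff(x1, p1)                              # total own-price effect
substitution = sp.diff(h1, p1).subs(u, v)          # evaluated at u = v(p, w)
income = x1 * sp.diff(x1, w)                       # income effect term
print(sp.simplify(lhs - (substitution - income)))  # 0: equation holds
```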
https://en.wikipedia.org/wiki/KMYS
KMYS (channel 35) is a television station licensed to Kerrville, Texas, United States, serving the San Antonio area as an affiliate of the digital multicast network Dabl. It is owned by Deerfield Media, which maintains joint sales and shared services agreements (JSA/SSA) with Sinclair Broadcast Group, owner of dual NBC/CW affiliate WOAI-TV (channel 4) and Fox affiliate KABB (channel 29), for the provision of certain services. The stations share studios between Babcock Road and Sovereign Drive (off Loop 410) in northwest San Antonio, while KMYS's transmitter is located in rural southeastern Bandera County (near Lakehills). Channel 35 began broadcasting in November 1985 as KRRT, the first independent station serving San Antonio and the first new commercial TV station in the San Antonio market in 28 years. It was owned in part, and eventually entirely, by TVX Broadcast Group, a Virginia-based group of independent stations. KRRT served as San Antonio's first affiliate of Fox when the network launched in 1986. TVX was acquired by Paramount Pictures in two stages between 1989 and 1991. The Paramount Stations Group sold KRRT in 1994 to Jet Broadcasting of Erie, Pennsylvania. Jet then contracted with River City Broadcasting, owner of KABB, to run the station. The Fox affiliation moved to KABB, which was starting a news department; KRRT then became a UPN affiliate, and it also inherited San Antonio Spurs telecasts from KABB. After River City merged into Sinclair in 1996, KABB and other Sinclair-owned UPN stations switched to The WB in a major group deal that took effect in January 1998. KRRT became KMYS, an affiliate of MyNetworkTV, in 2006; it then became the CW affiliate in 2010, replacing KCWX. In September 2021, the programming that had been airing on KMYS became the "CW 35" subchannel of WOAI-TV; KMYS itself began exclusively airing diginets ahead of conversion to ATSC 3.0. History Early years In December 1980, Hubbard Broadcasting petitioned the Federal Communicati
https://en.wikipedia.org/wiki/Radeon%20R200%20series
The R200 is the second generation of GPUs used in Radeon graphics cards and developed by ATI Technologies. This GPU features 3D acceleration based upon Microsoft Direct3D 8.1 and OpenGL 1.3, a major improvement in features and performance compared to the preceding Radeon R100 design. The GPU also includes 2D GUI acceleration, video acceleration, and multiple display outputs. "R200" refers to the development codename of the initially released GPU of the generation. It is the basis for a variety of other succeeding products. Architecture R200's 3D hardware consists of 4 pixel pipelines, each with 2 texture sampling units. It has 2 vertex shader units and a legacy Direct3D 7 TCL unit, marketed as Charisma Engine II. It is ATI's first GPU with programmable pixel and vertex processors, called Pixel Tapestry II and compliant with Direct3D 8.1. R200 has advanced memory bandwidth saving and overdraw reduction hardware called HyperZ II that consists of occlusion culling (hierarchical Z), fast z-buffer clear, and z-buffer compression. The GPU is capable of dual display output (HydraVision) and is equipped with a video decoding engine (Video Immersion II) with adaptive hardware deinterlacing, temporal filtering, motion compensation, and iDCT. R200 introduced pixel shader version 1.4 (PS1.4), a significant enhancement to prior PS1.x specifications. Notable instructions include "phase", "texcrd", and "texld". The phase instruction allows a shader program to operate on two separate "phases" (2 passes through the hardware), effectively doubling the maximum number of texture addressing and arithmetic instructions, and potentially allowing the number of passes required for an effect to be reduced. This allows not only more complicated effects, but can also provide a speed boost by utilizing the hardware more efficiently. The "texcrd" instruction moves the texture coordinate values of a texture into the destination register, while the "texld" instruction will load the texture at th
https://en.wikipedia.org/wiki/Video%20game%20bot
In video games, a bot is a type of artificial intelligence (AI)–based expert system software that plays a video game in the place of a human. Bots are used in a variety of video game genres for a variety of tasks: a bot written for a first-person shooter (FPS) works very differently from one written for a massively multiplayer online role-playing game (MMORPG). The former may include analysis of the map and even basic strategy; the latter may be used to automate a repetitive and tedious task like farming. Bots written for first-person shooters usually try to mimic how a human would play a game. Computer-controlled bots may play alongside or against other bots, human players, or both, either over the Internet, on a LAN or in a local session. Features and intelligence of bots may vary greatly, especially with community-created content. Advanced bots feature machine learning for dynamic learning of patterns of the opponent as well as dynamic learning of previously unknown maps – whereas more trivial bots may rely completely on lists of waypoints created for each map by the developer, limiting the bot to play only maps with said waypoints. Using bots is generally against the rules of current massively multiplayer online role-playing games (MMORPGs), but a significant number of players still use MMORPG bots for games like RuneScape. MUD players may run bots to automate laborious tasks, which can sometimes make up the bulk of the gameplay. While a prohibited practice in most MUDs, there is an incentive for the player to save time while the bot accumulates resources, such as experience, for the player character. Types Bots may be static, dynamic, or both. Static bots are designed to follow pre-made waypoints for each level or map (see the sketch below). These bots need a unique waypoint file for each map. For example, Quake III Arena bots use an area awareness system file to move around the map, while Counter-Strike bots use a waypoint file. Dynamic bots learn the levels and maps as they play
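A toy illustration of the static approach: the bot below does nothing but walk a pre-made list of waypoints for one map. The coordinates and movement rules are invented and far simpler than any real game's, but the core data structure, a per-map waypoint list, is the same idea.

```python
# A minimal "static bot" that patrols a fixed waypoint route.
import math

waypoints = [(0.0, 0.0), (10.0, 0.0), (10.0, 5.0), (0.0, 5.0)]

def step(pos, target, speed=1.0):
    """Move one tick toward target; report whether it was reached."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= speed:                 # close enough: snap to the waypoint
        return target, True
    return (pos[0] + speed * dx / dist, pos[1] + speed * dy / dist), False

pos, i = (0.0, 0.0), 0
for _ in range(60):
    pos, reached = step(pos, waypoints[i])
    if reached:
        i = (i + 1) % len(waypoints)  # loop the patrol route forever
print("bot position after 60 ticks:", pos)
```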
https://en.wikipedia.org/wiki/Irresistible%20force%20paradox
The irresistible force paradox (also unstoppable force paradox or shield and spear paradox) is a classic paradox formulated as "What happens when an unstoppable force meets an immovable object?" The immovable object and the unstoppable force are both implicitly assumed to be indestructible, or else the question would have a trivial resolution. Furthermore, it is assumed that they are two separate entities. The paradox arises because it rests on two incompatible premises—that there can exist simultaneously such things as unstoppable forces and immovable objects. Origins An example of this paradox in eastern thought can be found in the origin of the Chinese word for contradiction (矛盾, máodùn). This term originates from a story in the 3rd century BC philosophical book Han Feizi. In the story, a man trying to sell a spear and a shield claimed that his spear could pierce any shield, and then claimed that his shield was unpierceable. Then, asked about what would happen if he were to take his spear to strike his shield, the seller could not answer. This led to the idiom of "zìxiāng máodùn" (自相矛盾, "from each-other spear shield"), or "self-contradictory". Another ancient and mythological example illustrating this theme can be found in the story of the Teumessian fox, which can never be caught, and the hound Laelaps, which never misses what it hunts. Realizing the paradox, Zeus, Lord of the Sky, turns both creatures into static constellations. Applications The problems associated with this paradox can be applied to any other conflict between two abstractly defined extremes that are opposite. One of the answers generated by seeming paradoxes like these is that there is no contradiction – that there is not a false dilemma. Christopher Kaczor suggested that the need to change indicates a lack of power rather than the possession thereof, and as such a person who was omniscient would never need to change their mind – not changing the future would be consistent with omniscience rather
https://en.wikipedia.org/wiki/Electrostatic%20precipitator
An electrostatic precipitator (ESP) is a filterless device that removes fine particles, such as dust and smoke, from a flowing gas using the force of an induced electrostatic charge, while only minimally impeding the flow of gases through the unit. In contrast to wet scrubbers, which apply energy directly to the flowing fluid medium, an ESP applies energy only to the particulate matter being collected and therefore is very efficient in its consumption of energy (in the form of electricity). Invention of the electrostatic precipitator The first use of corona discharge to remove particles from an aerosol was by Hohlfeld in 1824. However, it was not commercialized until almost a century later. In 1907 Frederick Gardner Cottrell, a professor of chemistry at the University of California, Berkeley, applied for a patent on a device for charging particles and then collecting them through electrostatic attraction—the first electrostatic precipitator. Cottrell first applied the device to the collection of sulphuric acid mist and lead oxide fumes emitted from various acid-making and smelting activities. Wine-producing vineyards in northern California were being adversely affected by the lead emissions. At the time of Cottrell's invention, the theoretical basis for operation was not understood. The operational theory was developed later in Germany, with the work of Walter Deutsch and the formation of the Lurgi company. Cottrell used proceeds from his invention to fund scientific research through the creation of a foundation called Research Corporation in 1912, to which he assigned the patents. The intent of the organization was to bring inventions made by educators (such as Cottrell) into the commercial world for the benefit of society at large. The operation of Research Corporation is funded by royalties paid by commercial firms after commercialization occurs. Research Corporation has provided vital funding to many scientific projects: Goddard's rocketry experiments, Lawrence's cycl
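The theoretical work associated with Walter Deutsch, mentioned above, is classically summarized by the Deutsch-Anderson equation for collection efficiency, eta = 1 − exp(−wA/Q). The numbers in the sketch below are hypothetical illustration values, not data for any particular installation.

```python
# Deutsch-Anderson estimate of ESP collection efficiency.
import math

w = 0.1          # effective particle migration velocity, m/s
A = 3000.0       # total collecting plate area, m^2
Q = 50.0         # gas volumetric flow rate, m^3/s

eta = 1.0 - math.exp(-w * A / Q)
print(f"collection efficiency: {eta:.2%}")   # ~99.75%
```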
https://en.wikipedia.org/wiki/Complete%20Boolean%20algebra
In mathematics, a complete Boolean algebra is a Boolean algebra in which every subset has a supremum (least upper bound). Complete Boolean algebras are used to construct Boolean-valued models of set theory in the theory of forcing. Every Boolean algebra A has an essentially unique completion, which is a complete Boolean algebra containing A such that every element is the supremum of some subset of A. As a partially ordered set, this completion of A is the Dedekind–MacNeille completion. More generally, if κ is a cardinal then a Boolean algebra is called κ-complete if every subset of cardinality less than κ has a supremum. Examples Complete Boolean algebras Every finite Boolean algebra is complete. The algebra of subsets of a given set is a complete Boolean algebra. The regular open sets of any topological space form a complete Boolean algebra. This example is of particular importance because every forcing poset can be considered as a topological space (a base for the topology consisting of sets that are the set of all elements less than or equal to a given element). The corresponding regular open algebra can be used to form Boolean-valued models which are then equivalent to generic extensions by the given forcing poset. The algebra of all measurable subsets of a σ-finite measure space, modulo null sets, is a complete Boolean algebra. When the measure space is the unit interval with the σ-algebra of Lebesgue measurable sets, the Boolean algebra is called the random algebra. The Boolean algebra of all Baire sets modulo meager sets in a topological space with a countable base is complete; when the topological space is the real numbers the algebra is sometimes called the Cantor algebra. Non-complete Boolean algebras The algebra of all subsets of an infinite set that are finite or have finite complement is a Boolean algebra but is not complete. The algebra of all measurable subsets of a measure space is an ℵ1-complete Boolean algebra, but is not usually complet
https://en.wikipedia.org/wiki/Solomon%20Marcus
Solomon Marcus (; 1 March 1925 – 17 March 2016) was a Romanian mathematician, member of the Mathematical Section of the Romanian Academy (full member from 2001) and emeritus professor of the University of Bucharest's Faculty of Mathematics. His main research was in the fields of mathematical analysis, mathematical and computational linguistics and computer science. He also published numerous papers on various cultural topics: poetics, linguistics, semiotics, philosophy, and history of science and education. Early life and education He was born in Bacău, Romania, to Sima and Alter Marcus, a Jewish family of tailors. From an early age he had to live through dictatorships, war, infringements on free speech and free thinking as well as anti-Semitism. At the age of 16 or 17 he started tutoring younger pupils in order to help his family financially. He graduated from Ferdinand I High School in 1944, and completed his studies at the University of Bucharest's Faculty of Science, Department of Mathematics, in 1949. He continued tutoring throughout college and later recounted in an interview that he had to endure hunger during those years and that till the age of 20 he only wore hand-me-downs from his older brothers. Academic career Marcus obtained his PhD in Mathematics in 1956, with a thesis on the Monotonic functions of two variables, written under the direction of Miron Nicolescu. He was appointed Lecturer in 1955, Associate Professor in 1964, and became a Professor in 1966 (Emeritus in 1991). Marcus has contributed to the following areas: Mathematical Analysis, Set Theory, Measure and Integration Theory, and Topology Theoretical Computer Science Linguistics Poetics and Theory of Literature Semiotics Cultural Anthropology History and Philosophy of Science Education. Publications by and on Marcus Marcus published about 50 books, which have been translated into English, French, German, Italian, Spanish, Russian, Greek, Hungarian, Czech, Serbo-Croatian, and about 400 r
https://en.wikipedia.org/wiki/Biodistribution
Biodistribution is a method of tracking where compounds of interest travel in an experimental animal or human subject. For example, in the development of new compounds for PET (positron emission tomography) scanning, a radioactive isotope is chemically joined with a peptide (subunit of a protein). This particular class of isotopes emits positrons (which are antimatter particles, equal in mass to the electron, but with a positive charge). When ejected from the nucleus, positrons encounter an electron, and undergo annihilation which produces two gamma rays travelling in opposite directions. These gamma rays can be measured, and when compared to a standard, quantified. Biodistribution analysis Purpose and results A useful novel radiolabelled compound is one that is suitable either for medical imaging of certain body parts such as brain or tumors (injecting low doses of radioactivity) or for treating tumors (requiring injection of high doses of radioactivity). In both cases, the compound needs to accumulate in the target organ and any surplus compound present needs to clear the body rapidly. In medical diagnostic imaging, this then produces a clear diagnostic image (high image contrast), and in radiotherapy leads to an attack of the target (e.g. tumor) while minimizing side effects to non-target organs. Additional factors need to be evaluated in the development of a new diagnostic or therapeutic compound, including safety for humans. From an efficacy point of view, the biodistribution is an important aspect which can be measured by dissection or by imaging. By dissection For example, a new radiolabelled compound is injected intravenously into a group of 16-20 rodents (typically mice or rats). At intervals of 1, 2, 4, and 24 hours, smaller groups (4-5) of the animals are euthanized, then dissected. The organs of interest (usually: blood, liver, spleen, kidney, muscle, fat, adrenals, pancreas, brain, bone, stomach, small intestine, and upper and lower large intes
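The percent-injected-dose-per-gram bookkeeping behind such dissection studies is straightforward to sketch in code. The following Python fragment is illustrative only: the function name, organ list, and all numbers are invented for the example, not taken from any study.

```python
# Illustrative sketch: computing percent injected dose per gram (%ID/g),
# the standard readout of a dissection biodistribution study.
def percent_id_per_gram(tissue_counts: float,
                        tissue_mass_g: float,
                        injected_dose_counts: float) -> float:
    """%ID/g = (tissue activity / injected activity) / tissue mass * 100."""
    return (tissue_counts / injected_dose_counts) / tissue_mass_g * 100.0

# Hypothetical measurements for one animal at the 4 h time point:
# organ -> (counts measured in the tissue, tissue mass in grams)
organs = {"blood": (12_500.0, 0.50), "liver": (48_000.0, 1.20),
          "kidney": (30_000.0, 0.35), "tumor": (22_000.0, 0.40)}
injected = 1_000_000.0  # counts measured from the injected-dose standard

for organ, (counts, mass) in organs.items():
    print(f"{organ:>6}: {percent_id_per_gram(counts, mass, injected):.2f} %ID/g")
```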
https://en.wikipedia.org/wiki/Conservative%20extension
In mathematical logic, a conservative extension is a supertheory of a theory which is often convenient for proving theorems, but proves no new theorems about the language of the original theory. Similarly, a non-conservative extension is a supertheory which is not conservative, and can prove more theorems than the original. More formally stated, a theory T₂ is a (proof theoretic) conservative extension of a theory T₁ if every theorem of T₁ is a theorem of T₂, and any theorem of T₂ in the language of T₁ is already a theorem of T₁. More generally, if Γ is a set of formulas in the common language of T₁ and T₂, then T₂ is Γ-conservative over T₁ if every formula from Γ provable in T₂ is also provable in T₁. Note that a conservative extension of a consistent theory is consistent. If it were not, then by the principle of explosion, every formula in the language of T₂ would be a theorem of T₂, so every formula in the language of T₁ would be a theorem of T₁, so T₁ would not be consistent. Hence, conservative extensions do not bear the risk of introducing new inconsistencies. This can also be seen as a methodology for writing and structuring large theories: start with a theory, T₀, that is known (or assumed) to be consistent, and successively build conservative extensions T₁, T₂, ... of it. Recently, conservative extensions have been used for defining a notion of module for ontologies: if an ontology is formalized as a logical theory, a subtheory is a module if the whole ontology is a conservative extension of the subtheory. An extension which is not conservative may be called a proper extension. Examples ACA₀, a subsystem of second-order arithmetic studied in reverse mathematics, is a conservative extension of first-order Peano arithmetic. The subsystems of second-order arithmetic RCA₀* and WKL₀* are Π⁰₂-conservative over EFA (elementary function arithmetic). The subsystem WKL₀ is a Π¹₁-conservative extension of RCA₀, and Π⁰₂-conservative over PRA (primitive recursive arithmetic). Von Neumann–Bernays–Gödel set theory (NBG) is a conservative extension of Zermelo–Fraenkel set theory
https://en.wikipedia.org/wiki/United%20Devices
United Devices, Inc. was a privately held, commercial volunteer computing company that focused on the use of grid computing to manage high-performance computing systems and enterprise cluster management. Its products and services allowed users to "allocate workloads to computers and devices throughout enterprises, aggregating computing power that would normally go unused." It operated under the name Univa UD for a time, after merging with Univa on September 17, 2007. History Founded in 1999 in Austin, Texas, United Devices began with volunteer computing expertise from distributed.net and SETI@home, although only a few of the original technical staff from those organizations remained through the years. In April 2001, grid.org was formally announced as a philanthropic non-profit website to demonstrate the benefits of Internet-based large scale grid computing. Later in 2002 with help from UD, NTT Data launched a similar Internet-based Cell Computing project targeting Japanese users. In 2004, IBM and United Devices worked together to start the World Community Grid project as another demonstration of Internet-based grid computing. In August 2005, United Devices acquired the Paris-based GridXpert company and added Synergy to its product lineup. In 2006, the company acknowledged seeing an industry shift from only using grid computing for compute-intensive applications towards data center automation and business application optimization. Partly in response to the market shifts and reorganization, grid.org was shut down on April 27, 2007, after completing its mission to "demonstrate the viability and benefits of large-scale Internet-based grid computing". On September 17, 2007, the company announced that it would merge with the Lisle, Illinois-based Univa and operate under the new name Univa UD. The combined company would offer open source solutions based around Globus Toolkit, while continuing to sell its existing grid products and support its existing customers.
https://en.wikipedia.org/wiki/Social%20grooming
Social grooming is a behavior in which social animals, including humans, clean or maintain one another's body or appearance. A related term, allogrooming, indicates social grooming between members of the same species. Grooming is a major social activity, and a means by which animals that live in close proximity may bond, reinforce social structures and family links, and build companionship. Social grooming is also used as a means of conflict resolution, maternal behavior and reconciliation in some species. Mutual grooming typically describes the act of grooming between two individuals, often as a part of social grooming, pair bonding, or a precoital activity. Evolutionary advantages There are a variety of proposed mechanisms by which social grooming behavior has been hypothesized to increase fitness. These evolutionary advantages may come in the form of health benefits including reduced disease transmission and reduced stress levels, maintaining social structure, and direct improvement of fitness as a measure of survival. Health benefits It is often debated whether the overarching function of social grooming is to boost an organism's health and hygiene or whether the social side of social grooming plays an equally or more important role. Traditionally, it is thought that the primary function of social grooming is the upkeep of an animal's hygiene. Evidence to support this statement includes the fact that all grooming concentrates on body parts that are inaccessible by autogrooming and that the amount of time spent allogrooming regions does not vary significantly even when the body part has a more important social or communicatory function. Social grooming behaviour has been shown to elicit an array of health benefits in a variety of species. For example, group member connection has the potential to mitigate the potentially harmful effects of stressors. In macaques, social grooming has been shown to reduce heart rate. Social affiliation during a mild stre
https://en.wikipedia.org/wiki/Power%20closed
In mathematics a p-group G is called power closed if for every section H of G the product of pᵏ-th powers in H is again a pᵏ-th power. Regular p-groups are an example of power closed groups. On the other hand, powerful p-groups, for which the product of pᵏ-th powers is again a pᵏ-th power, are not power closed, as this property does not hold for all sections of powerful p-groups. The power closed 2-groups of exponent at least eight have been described in the literature. References Group theory P-groups
https://en.wikipedia.org/wiki/Right%20circular%20cylinder
A right circular cylinder is a cylinder whose generatrices are perpendicular to the bases. Thus, in a right circular cylinder, the generatrix and the height have the same measure. It is also less often called a cylinder of revolution, because it can be obtained by rotating a rectangle of sides g and r around one of its sides. Fixing g as the side on which the revolution takes place, we obtain that the side r, perpendicular to g, will be the measure of the radius of the cylinder. In addition to the right circular cylinder, within the study of spatial geometry there is also the oblique circular cylinder, characterized by not having the generatrices perpendicular to the bases. Examples of objects that are shaped like a right circular cylinder are: some cans and candles. Elements of the right circular cylinder Bases: the two parallel and congruent circles of the bases; Axis: the line determined by the two points of the centers of the cylinder's bases; Height: the distance between the two planes of the cylinder's bases; Generatrices: the line segments parallel to the axis and that have ends at the points of the bases' circles. Lateral and total areas The lateral surface of a right cylinder is the meeting of the generatrices. It can be obtained by the product between the length of the circumference of the base and the height of the cylinder. Therefore, the lateral surface area is given by: A_L = 2πrh. Where: A_L represents the lateral surface area of the cylinder; π is approximately 3.14; r is the distance between the lateral surface of the cylinder and the axis, i.e. it is the value of the radius of the base; h is the height of the cylinder; 2πr is the length of the circumference of the base, since the circumference of a circle of radius r is C = 2πr. Note that in the case of the right circular cylinder, the height and the generatrix have the same measure, so the lateral area can also be given by: A_L = 2πrg. The area of the base of a cylinder is the area of a circle (in this case we define that the circle has a radius with mea
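As a quick numerical check of these formulas, here is a small Python sketch (the function and variable names are our own, not from the article):

```python
import math

# Lateral area 2*pi*r*h, base area pi*r**2, total area 2*pi*r*(r + h),
# and volume pi*r**2*h for a right circular cylinder of radius r, height h.
def cylinder_measures(r: float, h: float) -> dict:
    lateral = 2 * math.pi * r * h
    base = math.pi * r ** 2
    return {"lateral": lateral, "base": base,
            "total": lateral + 2 * base, "volume": base * h}

print(cylinder_measures(r=3.0, h=5.0))
# lateral ~ 94.25, base ~ 28.27, total ~ 150.80, volume ~ 141.37
```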
https://en.wikipedia.org/wiki/Prosthaphaeresis
Prosthaphaeresis (from the Greek προσθαφαίρεσις) was an algorithm used in the late 16th century and early 17th century for approximate multiplication and division using formulas from trigonometry. For the 25 years preceding the invention of the logarithm in 1614, it was the only known generally applicable way of approximating products quickly. Its name comes from the Greek prosthesis (πρόσθεσις) and aphaeresis (ἀφαίρεσις), meaning addition and subtraction, two steps in the process. History and motivation In 16th-century Europe, celestial navigation of ships on long voyages relied heavily on ephemerides to determine their position and course. These voluminous charts prepared by astronomers detailed the position of stars and planets at various points in time. The models used to compute these were based on spherical trigonometry, which relates the angles and arc lengths of spherical triangles using formulas such as the spherical law of cosines, cos a = cos b cos c + sin b sin c cos A, where a, b and c are the angles subtended at the centre of the sphere by the corresponding arcs. When one quantity in such a formula is unknown but the others are known, the unknown quantity can be computed using a series of multiplications, divisions, and trigonometric table lookups. Astronomers had to make thousands of such calculations, and because the best method of multiplication available was long multiplication, most of this time was spent taxingly multiplying out products. Mathematicians, particularly those who were also astronomers, were looking for an easier way, and trigonometry was one of the most advanced and familiar fields to these people. Prosthaphaeresis appeared in the 1580s, but its originator is not known for certain; its contributors included the mathematicians Ibn Yunis, Johannes Werner, Paul Wittich, Joost Bürgi, Christopher Clavius, and François Viète. Wittich, Yunis, and Clavius were all astronomers and have all been credited by various sources with discovering the method. Its most well-known propo
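The heart of the method is the product identity cos A · cos B = [cos(A − B) + cos(A + B)]/2, which converts one multiplication into an addition, a subtraction, two table lookups, and a halving. A small Python sketch of the idea, with the arccosine standing in for the inverse table lookup:

```python
import math

# Prosthaphaeretic multiplication of x and y in [0, 1]:
#   look up A = arccos(x) and B = arccos(y) in a (cosine) table,
#   form A - B and A + B, look up their cosines, and average.
def prosthaphaeresis_product(x: float, y: float) -> float:
    A, B = math.acos(x), math.acos(y)
    return (math.cos(A - B) + math.cos(A + B)) / 2.0

print(prosthaphaeresis_product(0.25, 0.75))   # 0.1875 == 0.25 * 0.75
```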
https://en.wikipedia.org/wiki/Gaussian%20blur
In image processing, a Gaussian blur (also known as Gaussian smoothing) is the result of blurring an image by a Gaussian function (named after mathematician and scientist Carl Friedrich Gauss). It is a widely used effect in graphics software, typically to reduce image noise and reduce detail. The visual effect of this blurring technique is a smooth blur resembling that of viewing the image through a translucent screen, distinctly different from the bokeh effect produced by an out-of-focus lens or the shadow of an object under usual illumination. Gaussian smoothing is also used as a pre-processing stage in computer vision algorithms in order to enhance image structures at different scales—see scale space representation and scale space implementation. Mathematics Mathematically, applying a Gaussian blur to an image is the same as convolving the image with a Gaussian function. This is also known as a two-dimensional Weierstrass transform. By contrast, convolving by a circle (i.e., a circular box blur) would more accurately reproduce the bokeh effect. Since the Fourier transform of a Gaussian is another Gaussian, applying a Gaussian blur has the effect of reducing the image's high-frequency components; a Gaussian blur is thus a low-pass filter. The Gaussian blur is a type of image-blurring filter that uses a Gaussian function (which also expresses the normal distribution in statistics) for calculating the transformation to apply to each pixel in the image. The formula of a Gaussian function in one dimension is G(x) = (1/√(2πσ²)) exp(−x²/(2σ²)). In two dimensions, it is the product of two such Gaussian functions, one in each dimension: G(x, y) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²)), where x is the distance from the origin in the horizontal axis, y is the distance from the origin in the vertical axis, and σ is the standard deviation of the Gaussian distribution. It is important to note that the origin of these axes is at the center (0, 0). When applied in two dimensions, this formula produces a surface whose contours are concentric circles w
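A minimal Python sketch of the kernel construction and convolution described above (pure Python for clarity; the names are our own, and production code would instead exploit the separability of the Gaussian into two 1-D passes):

```python
import math

# Build a normalized (2*radius+1)^2 Gaussian kernel from the 2-D formula.
def gaussian_kernel(sigma: float, radius: int):
    k = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
          for x in range(-radius, radius + 1)]
         for y in range(-radius, radius + 1)]
    s = sum(map(sum, k))
    return [[v / s for v in row] for row in k]

# Convolve a grayscale image (list of lists of floats) with the kernel.
def gaussian_blur(image, sigma=1.0, radius=2):
    k = gaussian_kernel(sigma, radius)
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    # replicate edge pixels at the borders
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    out[y][x] += k[dy + radius][dx + radius] * image[yy][xx]
    return out
```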
https://en.wikipedia.org/wiki/Software%20quality%20assurance
Software quality assurance (SQA) is a means and practice of monitoring all software engineering processes, methods, and work products to ensure compliance with defined standards. It may include ensuring conformance to standards or models, such as ISO/IEC 9126 (now superseded by ISO 25010), SPICE or CMMI. It includes standards and procedures that managers, administrators or developers may use to review and audit software products and activities to verify that the software meets quality criteria which link to standards. SQA encompasses the entire software development process, including requirements engineering, software design, coding, code reviews, source code control, software configuration management, testing, release management and software integration. It is organized into goals, commitments, abilities, activities, measurements, verification and validation. Purpose SQA involves a three-pronged approach: Organization-wide policies, procedures and standards Project-specific policies, procedures and standards Compliance with appropriate procedures Guidelines for the application of ISO 9001:2015 to computer software are described in ISO/IEC/IEEE 90003:2018. External entities can be contracted as part of process assessments to verify that projects are standard-compliant. More specifically in the case of software, ISO/IEC 9126 (now superseded by ISO 25010) should be considered and applied for software quality. Activities Quality assurance activities take place at each phase of development. Analysts use application technology and techniques to achieve high-quality specifications and designs, such as model-driven design. Engineers and technicians find bugs and related software quality problems through testing activities. Standards and process deviations are identified and addressed throughout development by project managers or quality managers, who also ensure that changes to functionality, performance, features, architecture and component (in general: cha
https://en.wikipedia.org/wiki/Crank%E2%80%93Nicolson%20method
In numerical analysis, the Crank–Nicolson method is a finite difference method used for numerically solving the heat equation and similar partial differential equations. It is a second-order method in time. It is implicit in time, can be written as an implicit Runge–Kutta method, and it is numerically stable. The method was developed by John Crank and Phyllis Nicolson in the mid 20th century. For diffusion equations (and many other equations), it can be shown that the Crank–Nicolson method is unconditionally stable. However, the approximate solutions can still contain (decaying) spurious oscillations if the ratio of the time step times the thermal diffusivity to the square of the space step, Δt·α/Δx², is large (typically, larger than 1/2 per Von Neumann stability analysis). For this reason, whenever large time steps or high spatial resolution is necessary, the less accurate backward Euler method is often used, which is both stable and immune to oscillations. Principle The Crank–Nicolson method is based on the trapezoidal rule, giving second-order convergence in time. For linear equations, the trapezoidal rule is equivalent to the implicit midpoint method, the simplest example of a Gauss–Legendre implicit Runge–Kutta method, which also has the property of being a geometric integrator. For example, in one dimension, suppose the partial differential equation is ∂u/∂t = F(u, x, t, ∂u/∂x, ∂²u/∂x²). Letting uᵢⁿ = u(iΔx, nΔt) and Fᵢⁿ = F evaluated for i, n and uᵢⁿ, the equation for the Crank–Nicolson method is a combination of the forward Euler method at n and the backward Euler method at n + 1 (note, however, that the method itself is not simply the average of those two methods, as the backward Euler equation has an implicit dependence on the solution): (uᵢⁿ⁺¹ − uᵢⁿ)/Δt = ½(Fᵢⁿ⁺¹ + Fᵢⁿ). Note that this is an implicit method: to get the "next" value of u in time, a system of algebraic equations must be solved. If the partial differential equation is nonlinear, the discretization will also be nonlinear, so that advancing in time will involve the solution of a system of nonlinear algebraic equati
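To make the implicit step concrete, here is a hedged Python sketch of one Crank–Nicolson step for the 1-D heat equation u_t = α·u_xx with fixed (Dirichlet) boundary values, solving the tridiagonal system with the Thomas algorithm; the names and the boundary treatment are our choices, not taken from the text above.

```python
# One Crank-Nicolson step for u_t = alpha * u_xx on a uniform grid.
def crank_nicolson_step(u, alpha, dt, dx):
    r = alpha * dt / (2 * dx * dx)
    n = len(u)
    # Right-hand side: the explicit (forward Euler) half of the average;
    # boundary rows just carry the fixed boundary values through.
    d = [u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1]) if 0 < i < n - 1 else u[i]
         for i in range(n)]
    # Tridiagonal coefficients for the implicit (backward Euler) half.
    a = [0.0] + [-r] * (n - 2) + [0.0]          # sub-diagonal
    b = [1.0] + [1 + 2 * r] * (n - 2) + [1.0]   # diagonal
    c = [0.0] + [-r] * (n - 2) + [0.0]          # super-diagonal
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x
```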
https://en.wikipedia.org/wiki/Photuris%20%28protocol%29
In computer networking, Photuris is a session key management protocol defined in RFC 2522. Photuris is the Latin name of a genus of fireflies native to North America that mimic the signals of other firefly species. The name was chosen as a reference to the (classified) FIREFLY key exchange protocol developed by the National Security Agency and used in the STU-III secure telephone, which is believed to operate by similar principles. See also FIREFLY External links RFC 2522 Test implementation of Photuris Network protocols
https://en.wikipedia.org/wiki/Luminous%20efficacy
Luminous efficacy is a measure of how well a light source produces visible light. It is the ratio of luminous flux to power, measured in lumens per watt in the International System of Units (SI). Depending on context, the power can be either the radiant flux of the source's output, or it can be the total power (electric power, chemical energy, or others) consumed by the source. Which sense of the term is intended must usually be inferred from the context, and is sometimes unclear. The former sense is sometimes called luminous efficacy of radiation, and the latter luminous efficacy of a light source or overall luminous efficacy. Not all wavelengths of light are equally visible, or equally effective at stimulating human vision, due to the spectral sensitivity of the human eye; radiation in the infrared and ultraviolet parts of the spectrum is useless for illumination. The luminous efficacy of a source is the product of how well it converts energy to electromagnetic radiation, and how well the emitted radiation is detected by the human eye. Efficacy and efficiency Luminous efficacy can be normalized by the maximum possible luminous efficacy to a dimensionless quantity called luminous efficiency. The distinction between efficacy and efficiency is not always carefully maintained in published sources, so it is not uncommon to see "efficiencies" expressed in lumens per watt, or "efficacies" expressed as a percentage. Luminous efficacy of radiation Explanation Wavelengths of light outside of the visible spectrum are not useful for illumination because they cannot be seen by the human eye. Furthermore, the eye responds more to some wavelengths of light than others, even within the visible spectrum. This response of the eye is represented by the luminosity function. This is a standardized function which represents the response of a "typical" eye under bright conditions (photopic vision). One can also define a similar curve for dim conditions (scotopic vision). When neith
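A small Python sketch of the "luminous efficacy of radiation" calculation, K = 683 lm/W × ∫V(λ)Φ(λ)dλ / ∫Φ(λ)dλ; the spectrum below is hypothetical and the luminosity-function samples are rounded values used only for illustration:

```python
# Coarse samples of the photopic luminosity function V(lambda) (rounded)
# and a hypothetical spectral power distribution Phi (watts per bin).
V = {450: 0.038, 500: 0.323, 550: 0.995, 600: 0.631, 650: 0.107}
phi = {450: 0.2, 500: 0.5, 550: 1.0, 600: 0.8, 650: 0.3}

radiant = sum(phi.values())                          # total radiant flux, W
luminous = 683.0 * sum(V[w] * phi[w] for w in phi)   # luminous flux, lm
print(f"luminous efficacy of radiation ~ {luminous / radiant:.0f} lm/W")
```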
https://en.wikipedia.org/wiki/Linnean%20Medal
The Linnean Medal of the Linnean Society of London was established in 1888, and is awarded annually to alternately a botanist or a zoologist or (as has been common since 1958) to one of each in the same year. The medal was of gold until 1976, and is for the preceding years often referred to as "the Gold Medal of the Linnean Society", not to be confused with the official Linnean Gold Medal which is seldom awarded. The engraver of the medal was Charles Anderson Ferrier of Dundee, a Fellow of the Linnean Society from 1882. On the obverse of the medal is the head of Linnaeus in profile and the words "Carolus Linnaeus", on the reverse are the arms of the society and the legend "Societas Linnaeana optime merenti"; an oval space is reserved for the recipient's name. Linnean medallists 19th century 1888: Sir Joseph D. Hooker and Sir Richard Owen 1889: Alphonse Louis Pierre Pyrame de Candolle 1890: Thomas Henry Huxley 1891: Jean-Baptiste Édouard Bornet 1892: Alfred Russel Wallace 1893: Daniel Oliver 1894: Ernst Haeckel 1895: Ferdinand Julius Cohn 1896: George James Allman 1897: Jacob Georg Agardh 1898: George Charles Wallich 1899: John Gilbert Baker 1900: Alfred Newton 20th century 1901: Sir George King 1902: Albert von Kölliker 1903: Mordecai Cubitt Cooke 1904: Albert C. L. G. Günther 1905: Eduard Strasburger 1906: Alfred Merle Norman 1907: Melchior Treub 1908: Thomas Roscoe Rede Stebbing 1909: Frederick Orpen Bower 1910: Georg Ossian Sars 1911: Hermann Graf zu Solms-Laubach 1912: Robert Cyril Layton Perkins 1913: Heinrich Gustav Adolf Engler 1914: Otto Butschli 1915: Joseph Henry Maiden 1916: Frank Evers Beddard 1917: Henry Brougham Guppy 1918: Frederick DuCane Godman 1919: Sir Isaac Bayley Balfour 1920: Sir Edwin Ray Lankester 1921: Dukinfield Henry Scott 1922: Sir Edward Bagnall Poulton 1923: Thomas Frederic Cheeseman 1924: William Carmichael McIntosh 1925: Francis Wall Oliver 1926: Edgar Johnson Allen 1927: Otto Stapf 1928: Edmund Beecher Wilson 1929: Hugo de Vries
https://en.wikipedia.org/wiki/Video%20astronomy
Video astronomy (also known as camera-assisted astronomy or electronically-assisted astronomy, "EAA") is a branch of astronomy for near real-time observing of relatively faint astronomical objects using very sensitive CCD or CMOS cameras. Unlike lucky imaging, video astronomy does not discard unwanted frames, and image corrections such as dark subtraction are often not applied; however, the gathered data may be retained and processed in more traditional ways. Although the field has a long history reaching back to 1928 with the inception of live television broadcasting of the planet Mars, it has largely been developed more recently by amateur enthusiasts and is characterized by the use of relatively inexpensive equipment, such as easily available sensitive security cameras, in contrast to the equipment used for advanced astrophotography. By using either rapid internal stacking of images or very short exposure times, and using a TV monitor (for analog cameras) or a computer with readily available software (for USB cameras), video astronomy allows observers to see colour and detail that would not register to the eye. Because the image can be displayed on a monitor or television screen it allows multiple people to share 'live' images; using the internet it is possible for a worldwide audience to share such images. Live broadcasting websites exist for sharing live video astronomy feeds. Video astronomy, combined with remote control of a telescope, allows anyone, including disabled people, to operate a telescope remotely, or observers in a light-polluted area to operate a telescope in another area, even another country. Other benefits of the highly sensitive cameras used in video astronomy are the ability to see through thin cloud, and the ability to see many faint objects in areas suffering from light pollution. The equipment used varies from webcams and basic security cameras to specialized video astronomy cameras. Recent growing interest in the video 'near-live'
https://en.wikipedia.org/wiki/Freeze%20%28software%20engineering%29
In software engineering, a freeze is a point in time in the development process after which the rules for making changes to the source code or related resources become more strict, or the period during which those rules are applied. A freeze helps move the project forward towards a release or the end of an iteration by reducing the scale or frequency of changes, and may be used to help meet a roadmap. The exact rules depend on the type of freeze and the particular development process in use; for example, they may include only allowing changes which fix bugs, or allowing changes only after thorough review by other members of the development team. They may also specify what happens if a change contrary to the rules is required, such as restarting the freeze period. Common types of freezes are: A (complete) specification freeze, in which the parties involved decide not to add any new requirement, specification, or feature to the feature list of a software project, so as to begin coding work. A (complete) feature freeze, in which all work on adding new features is suspended, shifting the effort towards fixing bugs and improving the user experience. The addition of new features may have a disruptive effect on other parts of the program, due both to the introduction of new, untested source code or resources and to interactions with other features; thus, a feature freeze helps improve the program's stability. For example: "user interface feature freeze" means no more features will be permitted to the user interface portion of the code; bugs can still be fixed. A (complete) code freeze, in which no changes whatsoever are permitted to a portion or the entirety of the program's source code. Particularly in large software systems, any change to the source code may have unintended consequences, potentially introducing new bugs; thus, a code freeze helps ensure that a portion of the program that is known to work correctly will continue to do so. Code freezes are often emplo
https://en.wikipedia.org/wiki/Exponential%20dichotomy
In the mathematical theory of dynamical systems, an exponential dichotomy is a property of an equilibrium point that extends the idea of hyperbolicity to non-autonomous systems. Definition If x′ = A(t)x is a linear non-autonomous dynamical system in Rⁿ with fundamental solution matrix Φ(t), Φ(0) = I, then the equilibrium point 0 is said to have an exponential dichotomy if there exists a (constant) matrix P such that P² = P and positive constants K, L, α, and β such that ‖Φ(t)PΦ(s)⁻¹‖ ≤ K·e^(−α(t − s)) for s ≤ t, and ‖Φ(t)(I − P)Φ(s)⁻¹‖ ≤ L·e^(−β(s − t)) for t ≤ s. If furthermore, L = 1/K and β = α, then 0 is said to have a uniform exponential dichotomy. The constants α and β allow us to define the spectral window of the equilibrium point, (−α, β). Explanation The matrix P is a projection onto the stable subspace and I − P is a projection onto the unstable subspace. What the exponential dichotomy says is that the norm of the projection onto the stable subspace of any orbit in the system decays exponentially as t → ∞ and the norm of the projection onto the unstable subspace of any orbit decays exponentially as t → −∞, and furthermore that the stable and unstable subspaces are conjugate (because P and I − P are complementary projections). An equilibrium point with an exponential dichotomy has many of the properties of a hyperbolic equilibrium point in autonomous systems. In fact, it can be shown that a hyperbolic point has an exponential dichotomy. References Coppel, W. A., Dichotomies in Stability Theory, Springer-Verlag (1978). Dynamical systems Dichotomies
https://en.wikipedia.org/wiki/Locally%20compact%20quantum%20group
In mathematics and theoretical physics, a locally compact quantum group is a relatively new C*-algebraic approach toward quantum groups that generalizes the Kac algebra, compact-quantum-group and Hopf-algebra approaches. Earlier attempts at a unifying definition of quantum groups using, for example, multiplicative unitaries have enjoyed some success but have also encountered several technical problems. One of the main features distinguishing this new approach from its predecessors is the axiomatic existence of left and right invariant weights. This gives a noncommutative analogue of left and right Haar measures on a locally compact Hausdorff group. Definitions Before we can even begin to properly define a locally compact quantum group, we first need to define a number of preliminary concepts and also state a few theorems. Definition (weight). Let A be a C*-algebra, and let A⁺ denote the set of positive elements of A. A weight on A is a function φ: A⁺ → [0, ∞] such that φ(a + b) = φ(a) + φ(b) for all a, b ∈ A⁺, and φ(r·a) = r·φ(a) for all r ∈ [0, ∞) and a ∈ A⁺. Some notation for weights. Let φ be a weight on a C*-algebra A. We use the following notation: M⁺_φ = {a ∈ A⁺ : φ(a) < ∞}, which is called the set of all positive φ-integrable elements of A. N_φ = {a ∈ A : φ(a*a) < ∞}, which is called the set of all φ-square-integrable elements of A. M_φ = span(N*_φ N_φ), which is called the set of all φ-integrable elements of A. Types of weights. Let φ be a weight on a C*-algebra A. We say that φ is faithful if and only if φ(a) ≠ 0 for each non-zero a ∈ A⁺. We say that φ is lower semi-continuous if and only if the set {a ∈ A⁺ : φ(a) ≤ λ} is a closed subset of A⁺ for every λ ∈ [0, ∞). We say that φ is densely defined if and only if M⁺_φ is a dense subset of A⁺, or equivalently, if and only if either N_φ or M_φ is a dense subset of A. We say that φ is proper if and only if it is non-zero, lower semi-continuous and densely defined. Definition (one-parameter group). Let A be a C*-algebra. A one-parameter group on A is a family α = (α_t)_{t ∈ ℝ} of *-automorphisms of A that satisfies α_s ∘ α_t = α_{s+t} for all s, t ∈ ℝ. We say that α is norm-continuous if and only if for every a ∈ A, the mapping ℝ → A defined by t ↦ α_t(a) is continuous (surely this s
https://en.wikipedia.org/wiki/Disk%20encryption%20software
Disk encryption software is computer security software that protects the confidentiality of data stored on computer media (e.g., a hard disk, floppy disk, or USB device) by using disk encryption. Compared to access controls commonly enforced by an operating system (OS), encryption passively protects data confidentiality even when the OS is not active, for example, if data is read directly from the hardware or by a different OS. In addition, crypto-shredding removes the need to erase the data at the end of the disk's lifecycle. Disk encryption generally refers to wholesale encryption that operates on an entire volume mostly transparently to the user, the system, and applications. This is generally distinguished from file-level encryption that operates by user invocation on a single file or group of files, and which requires the user to decide which specific files should be encrypted. Disk encryption usually includes all aspects of the disk, including directories, so that an adversary cannot determine content, name or size of any file. It is well suited to portable devices such as laptop computers and thumb drives which are particularly susceptible to being lost or stolen. If used properly, someone finding a lost device cannot penetrate actual data, or even know what files might be present. Methods The disk's data is protected using symmetric cryptography with the key randomly generated when a disk's encryption is first established. This key is itself encrypted in some way using a password or pass-phrase known (ideally) only to the user. Thereafter, in order to access the disk's data, the user must supply the password to make the key available to the software. This must be done sometime after each operating system start-up before the encrypted data can be used. Done in software, encryption typically operates at a level between all applications and most system programs and the low-level device drivers by "transparently" (from a user's point of view) encrypting da
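A hedged Python sketch of the key-wrapping arrangement just described, using the third-party cryptography package; the parameter choices and names are ours and do not describe any particular disk-encryption product:

```python
import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.hashes import SHA256
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def wrap_data_key(passphrase: bytes, data_key: bytes, salt: bytes) -> bytes:
    """Derive a wrapping key from the passphrase and encrypt the data key."""
    kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt,
                     iterations=600_000)
    wrapping_key = base64.urlsafe_b64encode(kdf.derive(passphrase))
    return Fernet(wrapping_key).encrypt(data_key)

data_key = Fernet.generate_key()   # random key actually protecting the data
salt = os.urandom(16)
wrapped = wrap_data_key(b"correct horse battery staple", data_key, salt)
# Only 'wrapped' and 'salt' need be stored; changing the passphrase re-wraps
# the same data key, so the disk itself never has to be re-encrypted.
```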
https://en.wikipedia.org/wiki/616%20%28number%29
616 (six hundred [and] sixteen) is the natural number following 615 and preceding 617. While 666 is called the "number of the beast" in most manuscripts of Revelation, a fragment of the earliest papyrus 115 gives the number as 616. In mathematics 616 is a member of the Padovan sequence, coming after 265, 351, 465 (it is the sum of the first two of these). 616 is a polygonal number in four different ways: it is a heptagonal number, as well as 13-, 31- and 104-gonal. It is also the sum of the squares of the factorials of 2, 3, and 4: (2!)² + (3!)² + (4!)² = 4 + 36 + 576 = 616. The 616th harmonic number is the first to exceed seven. Number of the beast 666 is generally believed to have been the original Number of the Beast in the Book of Revelation in the Christian Bible. In 2005, however, a fragment of papyrus 115 was revealed, containing the earliest known version of that part of the Book of Revelation discussing the Number of the Beast. It gave the number as 616, suggesting that this may have been the original. One possible explanation for the two different numbers is that they reflect two different spellings of Emperor Nero/Neron's name, for which (according to this theory) this number is believed to be a code. In other fields Earth-616 is the name used to identify the primary continuity in which most Marvel Comics' titles take place. 616 film, a medium film format. Area code 616, an area code in Michigan. References Integers
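The arithmetic claims above are easy to verify mechanically; a small Python check (written for this note, not part of the article):

```python
from fractions import Fraction
from math import factorial

# Sum of squares of factorials of 2, 3, 4.
assert factorial(2)**2 + factorial(3)**2 + factorial(4)**2 == 616

# Padovan sequence P(n) = P(n-2) + P(n-3); 616 follows 265, 351, 465.
p = [1, 1, 1]
while p[-1] < 616:
    p.append(p[-2] + p[-3])
assert p[-1] == 616 and p[-4:-1] == [265, 351, 465]

# H_615 is still below seven; H_616 is the first harmonic number above it.
h = sum(Fraction(1, n) for n in range(1, 616))
assert h < 7 and h + Fraction(1, 616) > 7
```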
https://en.wikipedia.org/wiki/Variable-frequency%20drive
A variable-frequency drive (VFD, also known as an adjustable-frequency drive, adjustable-speed drive, variable-speed drive, AC drive, micro drive, inverter drive, or simply drive) is a type of AC motor drive (system incorporating a motor) that controls speed and torque by varying the frequency of the input electricity. Depending on its topology, it controls the associated voltage or current variation. VFDs are used in applications ranging from small appliances to large compressors. Systems using VFDs can be more efficient than hydraulic systems, such as in systems with pumps and damper control for fans. Since the 1980s, power electronics technology has reduced VFD cost and size and has improved performance through advances in semiconductor switching devices, drive topologies, simulation and control techniques, and control hardware and software. VFDs include low- and medium-voltage AC-AC and DC-AC topologies. History The pulse-width modulation (PWM) variable-frequency drive project started in the 1960s at Strömberg in Finland. Martti Harmoinen is regarded as the inventor of this technology. Strömberg managed to sell the idea of the PWM drive to the Helsinki metro in 1973, and in 1982 the first PWM drives, SAMI10, were operational. System description and operation A variable-frequency drive is a device used in a drive system consisting of the following three main sub-systems: AC motor, main drive controller assembly, and drive/operator interface. AC motor The AC electric motor used in a VFD system is usually a three-phase induction motor. Some types of single-phase motors or synchronous motors can be advantageous in some situations, but generally three-phase induction motors are preferred as the most economical. Motors that are designed for fixed-speed operation are often used. Elevated-voltage stresses imposed on induction motors that are supplied by VFDs require that such motors be designed for definite-purpose inverter-fed duty in accordance with such requirements as Part 31 of NEMA Standard
https://en.wikipedia.org/wiki/Personal%20data
Personal data, also known as personal information or personally identifiable information (PII), is any information related to an identifiable person. The abbreviation PII is widely accepted in the United States, but the phrase it abbreviates has four common variants based on personal or personally, and identifiable or identifying. Not all are equivalent, and for legal purposes the effective definitions vary depending on the jurisdiction and the purposes for which the term is being used. Under European Union and United Kingdom data protection regimes, which centre primarily on the General Data Protection Regulation (GDPR), the term "personal data" is significantly broader, and determines the scope of the regulatory regime. National Institute of Standards and Technology Special Publication 800-122 defines personally identifiable information as "any information about an individual maintained by an agency, including (1) any information that can be used to distinguish or trace an individual's identity, such as name, social security number, date and place of birth, mother's maiden name, or biometric records; and (2) any other information that is linked or linkable to an individual, such as medical, educational, financial, and employment information." For instance, a user's IP address is not classed as PII on its own, but is classified as a linked PII. Personal data is defined under the GDPR as "any information which [is] related to an identified or identifiable natural person". The IP address of an Internet subscriber may be classed as personal data. The concept of PII has become prevalent as information technology and the Internet have made it easier to collect PII leading to a profitable market in collecting and reselling PII. PII can also be exploited by criminals to stalk or steal the identity of a person, or to aid in the planning of criminal acts. As a response to these threats, many website privacy policies specifically address the gathering of PII, and lawmake
https://en.wikipedia.org/wiki/Supramolecular%20assembly
In chemistry, a supramolecular assembly is a complex of molecules held together by noncovalent bonds. While a supramolecular assembly can be simply composed of two molecules (e.g., a DNA double helix or an inclusion compound), or a defined number of stoichiometrically interacting molecules within a quaternary complex, it is more often used to denote larger complexes composed of indefinite numbers of molecules that form sphere-, rod-, or sheet-like species. Colloids, liquid crystals, biomolecular condensates, micelles, liposomes and biological membranes are examples of supramolecular assemblies, and their realm of study is known as supramolecular chemistry. The dimensions of supramolecular assemblies can range from nanometers to micrometers. Thus they allow access to nanoscale objects using a bottom-up approach in far fewer steps than a single molecule of similar dimensions. The process by which a supramolecular assembly forms is called molecular self-assembly. Some try to distinguish self-assembly as the process by which individual molecules form the defined aggregate. Self-organization, then, is the process by which those aggregates create higher-order structures. This can become useful when talking about liquid crystals and block copolymers. Templating reactions As studied in coordination chemistry, metal ions (usually transition metal ions) exist in solution bound to ligands. In many cases, the coordination sphere defines geometries conducive to reactions either between ligands or involving ligands and other external reagents. A well-known example of metal ion templating was described by Charles Pedersen in his synthesis of various crown ethers using metal cations as templates. For example, 18-crown-6 strongly coordinates potassium ion and thus can be prepared through the Williamson ether synthesis using potassium ion as the template metal. Metal ions are frequently used for assembly of large supramolecular structures. Metal organic frameworks (MOFs) are one example. MOFs
https://en.wikipedia.org/wiki/Air%E2%80%93fuel%20ratio
Air–fuel ratio (AFR) is the mass ratio of air to a solid, liquid, or gaseous fuel present in a combustion process. The combustion may take place in a controlled manner such as in an internal combustion engine or industrial furnace, or may result in an explosion (e.g., a dust explosion, gas or vapor explosion or in a thermobaric weapon). The air–fuel ratio determines whether a mixture is combustible at all, how much energy is being released, and how much unwanted pollutants are produced in the reaction. Typically a range of fuel to air ratios exists, outside of which ignition will not occur. These are known as the lower and upper explosive limits. In an internal combustion engine or industrial furnace, the air–fuel ratio is an important measure for anti-pollution and performance-tuning reasons. If exactly enough air is provided to completely burn all of the fuel, the ratio is known as the stoichiometric mixture, often abbreviated to stoich. Ratios lower than stoichiometric (where the fuel is in excess) are considered "rich". Rich mixtures are less efficient, but may produce more power and burn cooler. Ratios higher than stoichiometric (where the air is in excess) are considered "lean". Lean mixtures are more efficient but may cause higher temperatures, which can lead to the formation of nitrogen oxides. Some engines are designed with features to allow lean-burn. For precise air–fuel ratio calculations, the oxygen content of combustion air should be specified because of different air density due to different altitude or intake air temperature, possible dilution by ambient water vapor, or enrichment by oxygen additions. Internal combustion engines In theory, a stoichiometric mixture has just enough air to completely burn the available fuel. In practice, this is never quite achieved, due primarily to the very short time available in an internal combustion engine for each combustion cycle. Most of the combustion process is completed in approximately 2 millisecon
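The stoichiometric ratio itself is a short mass-balance computation. A Python sketch for a generic hydrocarbon fuel (constants rounded; the 23.14% oxygen-by-mass figure for air is a common approximation, and the function name is our own):

```python
# Stoichiometric air-fuel ratio (by mass) for a hydrocarbon CxHy, from the
# complete-combustion balance  CxHy + (x + y/4) O2 -> x CO2 + (y/2) H2O.
def stoich_afr(x: int, y: int) -> float:
    fuel_mass = 12.011 * x + 1.008 * y       # g per mol of fuel
    o2_mass = 31.998 * (x + y / 4)           # g of O2 per mol of fuel
    air_mass = o2_mass / 0.2314              # air is ~23.14% O2 by mass
    return air_mass / fuel_mass

print(f"iso-octane C8H18: {stoich_afr(8, 18):.1f} : 1")   # ~15.1 : 1
```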
https://en.wikipedia.org/wiki/Password%20Safe
Password Safe is a free and open-source password manager program originally written for Microsoft Windows but supporting wide area of operating systems with compatible clients available for Linux, FreeBSD, Android, IOS, BlackBerry and other operating systems as well. The Linux version is available for Ubuntu (including the Kubuntu and Xubuntu derivatives) and Debian. A Java-based version is also available on SourceForge. On its page, users can find links to unofficial releases running under Android, BlackBerry, and other mobile operating systems. History The program was initiated by Bruce Schneier at Counterpane Systems, and is now hosted on SourceForge (Windows) and GitHub (Linux) and developed by a group of volunteers. Design After filling in the master password the user has access to all account data entered and saved previously. The data can be organized by categories, searched, and sorted based on references which are easy for the user to remember. There are various key combinations and mouse clicks to copy parts of the stored data (password, email, username etc.), or use the autofill feature (for filling forms). The program can be set to minimize automatically after a period of idle time and clears the clipboard. It is possible to compare and synchronize (merge) two different password databases. The program can be set up to generate automatic backups. Password Safe does not support database sharing, but the single-file database can be shared by any external sharing method (for example Syncthing, Dropbox etc.). Database is not stored online. Features Note: All uncited information in this section is sourced from the official Help file included with the application Password management Stored passwords can be sectioned into groups and subgroups in a tree structure. Changes to entries can be tracked, including a history of previous passwords, the creation time, modification time, last access time, and expiration time of each password stored. Text notes
https://en.wikipedia.org/wiki/International%20Celestial%20Reference%20System%20and%20its%20realizations
The International Celestial Reference System (ICRS) is the current standard celestial reference system adopted by the International Astronomical Union (IAU). Its origin is at the barycenter of the Solar System, with axes that are intended to "show no global rotation with respect to a set of distant extragalactic objects". This fixed reference system differs from previous reference systems, which had been based on Catalogues of Fundamental Stars that had published the positions of stars based on direct "observations of [their] equatorial coordinates, right ascension and declination" and had adopted as "privileged axes ... the mean equator and the dynamical equinox" at a particular date and time. The International Celestial Reference Frame (ICRF) is a realization of the International Celestial Reference System using reference celestial sources observed at radio wavelengths. In the context of the ICRS, a reference frame (RF) is the physical realization of a reference system, i.e., the reference frame is the set of numerical coordinates of the reference sources, derived using the procedures spelled out by the ICRS. More specifically, the ICRF is an inertial barycentric reference frame whose axes are defined by the measured positions of extragalactic sources (mainly quasars) observed using very long baseline interferometry while the Gaia-CRF is an inertial barycentric reference frame defined by optically measured positions of extragalactic sources by the Gaia satellite and whose axes are rotated to conform to the ICRF. Although general relativity implies that there are no true inertial frames around gravitating bodies, these reference frames are important because they do not exhibit any measurable angular rotation since the extragalactic sources used to define the ICRF and the Gaia-CRF are so far away. The ICRF and the Gaia-CRF are now the standard reference frames used to define the positions of astronomical objects. Reference systems and frames It is useful to di
https://en.wikipedia.org/wiki/Catastrophe%20modeling
Catastrophe modeling (also known as cat modeling) is the process of using computer-assisted calculations to estimate the losses that could be sustained due to a catastrophic event such as a hurricane or earthquake. Cat modeling is especially applicable to analyzing risks in the insurance industry and is at the confluence of actuarial science, engineering, meteorology, and seismology. Catastrophes/Perils Natural catastrophes (sometimes referred to as "nat cat") that are modeled include: Hurricane (main peril is wind damage; some models can also include storm surge and rainfall) Earthquake (main peril is ground shaking; some models can also include tsunami, fire following earthquakes, liquefaction, landslide, and sprinkler leakage damage) Severe thunderstorm or severe convective storm (main sub-perils are tornado, straight-line winds and hail) Flood Extratropical cyclone (commonly referred to as European windstorm) Wildfire Winter storm Human catastrophes include: Terrorism events Warfare Casualty/liability events Forced displacement crises Cyber data breaches Lines of business modeled Cat modeling involves many lines of business, including: Personal property Commercial property Workers' compensation Automobile physical damage Limited liabilities Product liability Business interruption Inputs, Outputs, and Use Cases The input into a typical cat modeling software package is information on the exposures being analyzed that are vulnerable to catastrophe risk. The exposure data can be categorized into three basic groups: Information on the site locations, referred to as geocoding data (street address, postal code, county/CRESTA zone, etc.) Information on the physical characteristics of the exposures (construction, occupation/occupancy, year built, number of stories, number of employees, etc.) Information on the financial terms of the insurance coverage (coverage value, limit, deductible, etc.) The output of a cat model is an estimate of
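A toy Python sketch of how typical cat-model outputs are derived from an event loss table: an occurrence exceedance-probability (EP) curve and the average annual loss (all rates and losses below are invented for illustration):

```python
import math

events = [          # (annual rate of occurrence, loss in $M) -- hypothetical
    (0.020, 500.0),
    (0.050, 200.0),
    (0.100, 80.0),
    (0.250, 20.0),
]

def exceedance_probability(threshold: float) -> float:
    """P(at least one event in a year with loss >= threshold), Poisson model."""
    rate = sum(r for r, loss in events if loss >= threshold)
    return 1.0 - math.exp(-rate)

for t in (20, 80, 200, 500):
    print(f"P(annual max loss >= {t:>3} $M) = {exceedance_probability(t):.3f}")

aal = sum(r * loss for r, loss in events)   # rate-weighted expected loss
print(f"average annual loss = {aal:.1f} $M")
```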
https://en.wikipedia.org/wiki/Hoarding%20%28animal%20behavior%29
Hoarding or caching in animal behavior is the storage of food in locations hidden from the sight of both conspecifics (animals of the same or closely related species) and members of other species. Most commonly, the function of hoarding or caching is to store food in times of surplus for times when food is less plentiful. However, there is evidence that some amount of caching or hoarding is done in order to ripen the food, called ripening caching. The term hoarding is most typically used for rodents, whereas caching is more commonly used in reference to birds, but the behaviors in both animal groups are quite similar. Hoarding is done either on a long-term basis (cached on a seasonal cycle, with food to be consumed months down the line) or on a short-term basis, in which case the food will be consumed over a period of one or several days. Some common animals that cache their food are rodents such as hamsters and squirrels, and many different bird species, such as rooks and woodpeckers. The western scrub jay is noted for its particular skill at caching. There are two types of caching behavior: larder hoarding, where a species creates a few large caches which it often defends, and scatter hoarding, where a species will create multiple caches, often with each individual food item stored in a unique place. Both types of caching have their advantages. Function Caching behavior is typically a way to save excess edible food for later consumption—either soon-to-be-eaten food, such as when a jaguar hangs partially eaten prey from a tree to be eaten within a few days, or long term, where the food is hidden and retrieved many months later. Caching is a common adaptation to seasonal changes in food availability. In regions where winters are harsh, food availability typically becomes low, and caching food during the times of high food availability in the warmer months provides a significant survival advantage. For species that hoard perishable food, weather can significantly aff
https://en.wikipedia.org/wiki/Language%20module
The language module or language faculty is a hypothetical structure in the human brain which is thought to contain innate capacities for language, originally posited by Noam Chomsky. There is ongoing research into brain modularity in the fields of cognitive science and neuroscience, although the current idea is much weaker than what was proposed by Chomsky and Jerry Fodor in the 1980s. In today's terminology, 'modularity' refers to specialisation: language processing is specialised in the brain to the extent that it occurs partially in different areas than other types of information processing such as visual input. The current view is, then, that language is neither compartmentalised nor based on general principles of processing (as proposed by George Lakoff). It is modular to the extent that it constitutes a specific cognitive skill or area in cognition. Meaning of a module The notion of a dedicated language module in the human brain originated with Noam Chomsky's theory of Universal Grammar (UG). The debate on the issue of modularity in language is underpinned, in part, by different understandings of this concept. There is, however, some consensus in the literature that a module is considered committed to processing specialized representations (domain-specificity) in an informationally encapsulated way. A distinction should be drawn between anatomical modularity, which proposes there is one 'area' in the brain that deals with this processing, and functional modularity that obviates anatomical modularity whilst maintaining information encapsulation in distributed parts of the brain. No singular anatomical module The available evidence points toward the conclusion that no single area of the brain is solely devoted to processing language. The Wada test, where sodium amobarbital is used to anaesthetise one hemisphere, shows that the left-hemisphere appears to be crucial in language processing. Yet, neuroimaging does not implicate any single area but rather ident
https://en.wikipedia.org/wiki/Esky
Esky is a brand of portable coolers, originally Australian, derived from the word "Eskimo". The term "esky" is also commonly used in Australia to generically refer to portable coolers or ice boxes and is part of the Australian vernacular, in place of words like "cooler" or "cooler box" and the New Zealand "chilly bin". The brand name was purchased by the United States firm Coleman Company, (a subsidiary of Newell Brands) in 2009. History Some historians have credited Malley's with the invention of the portable ice cooler. According to the company, the Esky was "recognised as the first official portable cooler in the world." The company's own figures claim that, by 1960, 500,000 Australian households owned one (in a country of approximately 3 million households at the time). The brand "Esky" was used from around 1945, for an Australian-made ice chest, a free-standing insulated cabinet with two compartments: the upper to carry a standard () block of ice, and the lower for food and drinks. It was made in Sydney by Malleys but did not carry their name until around 1949. The first (metal-cased) portable Esky appeared in 1952, sized to accommodate six bottles of beer or soft drink, as advertised nationally. By 1965 "esky" (no capital E) was being used in Australian literature for such coolers, and in 1973 Malleys, owners of the tradename, acknowledged that the term had entered the vernacular and was being used for lightweight plastic imitations. One such brand was Willow, an Australian manufacturer, previously known for domestic "tinware" — buckets, bins, cake tins and oven trays. Nylex started making the plastic-cased Esky in 1984. In 1993 Nylex Corporation was still defending their ownership of the "Esky" trademark, but by 2002 they had allowed it to lapse. Outdoor recreation company Coleman Australia bought the Esky brands from Nylex Ltd after the company went into administration in February 2009, and later that year Coleman was producing most of the Esky lin
https://en.wikipedia.org/wiki/Medical%20oddity
A medical oddity is an unusual predicament or event which takes place in a medical context. Some examples of medical oddities might include: "lost and found" surgical instruments (in the body), grotesquely oversized tumors, (human) male pregnancy, rare or "orphan" illnesses, rare allergies (such as to water), strange births (extra or missing organs), and bizarre syndromes (such as Capgras delusion). Medical oddities can also include unusual discoveries in purchased food, such as finding a severed finger or thumb in a hamburger. Medical oddities are also known as medical curiosities. While not strictly paranormal, they are classically Fortean. See also Cabinet of curiosities Further reading Books Gould, George Milbry, Anomalies and Curiosities of Medicine, W. B. Saunders, ©1896, Philadelphia, LC Control Number: 07028696 Jones, Kenneth Lyons, Smith's Recognizable Patterns of Human Malformation, Saunders, ©1997, Philadelphia, LC Control Number: 96016722 Periodicals Fortean Times Forteana
https://en.wikipedia.org/wiki/Airline%20teletype%20system
The airline teletype system uses teleprinters, which are electro-mechanical typewriters that can communicate typed messages from point to point through simple electric communications channels, often just pairs of wires. The most modern forms of these devices are fully electronic and use a screen instead of a printer. Historical development The airline industry began using teletypewriter technology in the early 1920s, making use of radio stations located at 10 airfields in the United States. The US Post Office and other US government agencies used these radio stations for transmitting telegraph messages. It was during this period that the first federal teletypewriter system was introduced in the United States to allow weather and flight information to be exchanged between air traffic facilities. While the use of physical teletypes is almost extinct, the message formats and switching concepts remain similar, as illustrated below. In 1929, Aeronautical Radio Incorporated (ARINC) was formed to manage radio frequencies and license allocation in the United States, as well as to support the radio stations that were used by the emerging airlines, a role ARINC still fulfils today. ARINC is a private company originally owned by many of the world's airlines, including American Airlines, Continental Airlines, British Airways, Air France, and SAS; it was acquired by Rockwell Collins (now Collins Aerospace) in December 2013. In 1949, the Société Internationale de Télécommunications Aéronautiques (SITA) was formed as a cooperative by 11 airlines: Air France, KLM, Sabena, Swissair, TWA, British European Airways, British Overseas Airways Corporation, British South American Airways, Swedish AB Aerotransport, Danish Det Danske Luftfartselskab A/S, and Norwegian Det Norske Luftfartselskap. Their aim was to enable airlines to use the existing communications facilities in the most efficient manner. Morse code was the general means of relaying information between air communications stations prior to World War II. Generally,
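For illustration (this example does not come from the article): a conventional airline teletype message in the IATA "Type B" style has roughly the shape sketched below. The general layout is standard: a priority code, one or more seven-character destination addresses (three-letter city, two-letter department or function, two-letter airline code), then an origin line beginning with a period and carrying a day/time group, followed by the free text. The specific addresses, flight number, and times shown here are invented.

    QU SYDKKQF
    .MELKKQF 151205
    FLT QF409/15 DELAYED DEP MEL
    NEW ETD 1335 LOCAL

The priority code "QU" marks an urgent message ("QD" a deferred one), and store-and-forward switching centres route on the address line, which is part of why the format has survived largely intact into modern, screen-based systems.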
https://en.wikipedia.org/wiki/Cryptographically%20Generated%20Address
A Cryptographically Generated Address (CGA) is an Internet Protocol Version 6 (IPv6) address that has a host identifier computed from a cryptographic hash function. CGAs provide a method for binding a public signature key to an IPv6 address in the Secure Neighbor Discovery Protocol (SEND). Characteristics A Cryptographically Generated Address is an IPv6 address whose interface identifier has been generated according to the CGA generation method. The interface identifier is formed by the least-significant 64 bits of an IPv6 address and is used to identify the host's network interface on its subnet. The subnet is determined by the most-significant 64 bits, the subnet prefix. Apart from the public key that is to be bound to the CGA, the CGA generation method takes several other input parameters, including the predefined subnet prefix. These parameters, along with other parameters that are generated during the execution of the CGA generation method, form a set of parameters called the CGA Parameters data structure. The complete set of CGA Parameters has to be known in order to verify the corresponding CGA. The CGA Parameters data structure consists of: modifier: a random 128-bit unsigned integer; subnetPrefix: the 64-bit prefix that defines to which subnet the CGA belongs; collCount: an 8-bit unsigned integer that must be 0, 1, or 2; publicKey: the public key as a DER-encoded ASN.1 structure of the type SubjectPublicKeyInfo; extFields: an optional variable-length field (default length 0). Additionally, a security parameter Sec determines the CGA's strength against brute-force attacks. This is a 3-bit unsigned integer that can have any value from 0 up to (and including) 7 and is encoded in the three leftmost bits of the CGA's interface identifier. The higher the value of Sec, the higher the level of security, but also the longer it generally takes to generate a CGA. For convenience, the intermediate Sec values in the pseudocode below are assumed t
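The article's own pseudocode is cut off above; as a stand-in, the following is a minimal, illustrative Python sketch of the generation method as specified in RFC 3972. It is not a conformant implementation: extension fields and duplicate address detection are omitted, and a fresh random modifier is drawn on each iteration where RFC 3972 increments it by one.

    import hashlib
    import os

    def generate_cga(public_key_der: bytes, subnet_prefix: bytes, sec: int = 1):
        """Sketch of RFC 3972 CGA generation; subnet_prefix is the 8-byte prefix."""
        modifier = os.urandom(16)  # random 128-bit modifier
        # Hash2 loop: find a modifier whose Hash2 (SHA-1 over the parameters
        # with subnet prefix and collision count zeroed) begins with 16*Sec
        # zero bits. For Sec = 0 the condition holds trivially.
        while sec > 0:
            hash2 = hashlib.sha1(modifier + bytes(9) + public_key_der).digest()
            if hash2[:2 * sec] == bytes(2 * sec):
                break
            modifier = os.urandom(16)
        coll_count = 0  # raised to 1 or 2 if duplicate address detection fails
        params = modifier + subnet_prefix + bytes([coll_count]) + public_key_der
        # Hash1: leftmost 64 bits of SHA-1 over the full CGA Parameters.
        iid = bytearray(hashlib.sha1(params).digest()[:8])
        # Encode Sec in the three leftmost bits and clear the "u" and "g" bits.
        iid[0] = ((iid[0] & 0x1f) | (sec << 5)) & 0xfc
        return subnet_prefix + bytes(iid), params  # 16-byte address + parameters

Each increment of Sec multiplies the expected cost of the Hash2 search by a factor of 2^16, which is the trade-off between security and generation time noted above.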
https://en.wikipedia.org/wiki/Secure%20Neighbor%20Discovery
The Secure Neighbor Discovery (SEND) protocol is a security extension of the Neighbor Discovery Protocol (NDP) in IPv6, defined in RFC 3971 and updated by RFC 6494. In IPv6, the Neighbor Discovery Protocol (NDP) is responsible for discovering other network nodes on the local link, determining their link-layer addresses, finding available routers, and maintaining reachability information about the paths to other active neighbor nodes (RFC 4861). NDP is insecure and susceptible to malicious interference. SEND is intended to provide an alternate mechanism for securing NDP, using a cryptographic method that is independent of IPsec, the original and inherent method of securing IPv6 communications. SEND uses Cryptographically Generated Addresses (CGA) and other new NDP options for the ICMPv6 packet types used in NDP. RFC 6494 and RFC 6495 updated SEND to use the Resource Public Key Infrastructure (RPKI), defining a SEND Certificate Profile based on a modified RFC 6487 RPKI Certificate Profile that must include a single RFC 3779 IP Address Delegation extension. RFC 6273 raises concerns about algorithm agility with respect to attacks on the hash functions used by SEND, as CGA currently uses the SHA-1 hash algorithm and PKIX certificates and does not provide support for alternative hash algorithms. Implementations Cisco IOS 12.4(24)T and newer Docomo USL SEND fork Easy-SEND ipv6-send-cga, Huawei and Beijing University of Posts and Telecommunications NDprotector, Telecom SudParis Native SeND kernel API TrustRouter USL SEND (discontinued), NTT DoCoMo WinSEND See also Neighbor Discovery Protocol References Internet protocols Cryptographic protocols Link protocols IPv6
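To make CGA's role in SEND concrete: before trusting the public key carried in a SEND message (and hence the message's signature), a receiving node recomputes both hashes from the advertised CGA Parameters. The following hedged Python sketch is the verification counterpart to the generation sketch in the CGA article above; it follows RFC 3972 but assumes no extension fields and takes the address as raw 16 bytes.

    import hashlib

    def verify_cga(address: bytes, params: bytes) -> bool:
        """Sketch of RFC 3972 CGA verification (no extension fields assumed)."""
        modifier, prefix, coll = params[:16], params[16:24], params[24]
        public_key_der = params[25:]
        if coll > 2 or prefix != address[:8]:
            return False  # collision count must be 0..2; subnet prefix must match
        iid = address[8:]   # 64-bit interface identifier
        sec = iid[0] >> 5   # Sec is encoded in the three leftmost bits
        # Hash1 must match the interface ID, ignoring Sec and the u/g bits.
        hash1 = hashlib.sha1(params).digest()[:8]
        if (hash1[0] & 0x1c) != (iid[0] & 0x1c) or hash1[1:] != iid[1:]:
            return False
        # Hash2 (prefix and collision count zeroed) needs 16*Sec leading zero bits.
        hash2 = hashlib.sha1(modifier + bytes(9) + public_key_der).digest()
        return hash2[:2 * sec] == bytes(2 * sec)

Only after this check does the verifier use the enclosed public key to validate the signature option on the NDP message, so an impersonator would need a key pair that satisfies both hash conditions for the victim's exact address.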
https://en.wikipedia.org/wiki/Paper%20craft
Paper craft is a collection of crafts using paper or card as the primary artistic medium for the creation of two- or three-dimensional objects. Paper and card stock lend themselves to a wide range of techniques and can be folded, curved, bent, cut, glued, molded, stitched, or layered. Papermaking by hand is also a paper craft. Paper crafts are known in most societies that use paper, with certain kinds of crafts being particularly associated with specific countries or cultures. In Caribbean countries, paper craft has taken on forms unique to Caribbean culture, reflecting the importance of native animals in people's lives. In addition to the aesthetic value of paper crafts, various forms of paper crafts are used in the education of children. Paper is a relatively inexpensive medium, readily available, and easier to work with than the more complicated media typically used in the creation of three-dimensional artwork, such as ceramics, wood, and metals. It is also neater to work with than paints, dyes, and other coloring materials. Paper crafts may also be used in therapeutic settings, providing children with a safe and uncomplicated creative outlet to express feelings. Folded paper The word "paper" derives from papyrus, the name of the ancient material manufactured from beaten reeds in Egypt as far back as the third millennium B.C. Indeed, the earliest known example of "paper folding" is an ancient Egyptian map, drawn on papyrus and folded into rectangular forms like a modern road map. However, it does not appear that intricate paper folding as an art form became possible until the introduction of wood-pulp based papers. The first Japanese origami dates from the 6th century A.D. In much of the West, the term origami is used synonymously with paper folding, though the term properly refers only to the art of paper folding in Japan. Other forms of paper folding include Chinese zhezhi (摺紙), Korean jong'i jeopgi (종이접기), and Western paper folding, such as the traditional paper