https://en.wikipedia.org/wiki/Bachelor%20of%20Mathematics
A Bachelor of Mathematics (abbreviated B.Math or BMath) is an undergraduate academic degree awarded for successfully completing a program of study in mathematics or related disciplines, such as applied mathematics, actuarial science, computational science, data analytics, financial mathematics, mathematical physics, pure mathematics, operations research or statistics. The Bachelor of Mathematics caters to high-achieving students seeking to develop comprehensive specialised knowledge in a field of mathematics or a high level of sophistication in the applications of mathematics. In practice, this is essentially equivalent to a Bachelor of Science or Bachelor of Arts degree with a speciality in mathematics. Relatively few institutions award Bachelor of Mathematics degrees, and the distinction between those that do and those that award B.Sc or B.A. degrees for mathematics is usually bureaucratic, rather than curriculum-related.

List of institutions awarding Bachelor of Mathematics degrees

Australia
Flinders University, Adelaide, South Australia
Queensland University of Technology, Brisbane, Queensland
The Australian National University, Canberra, Australian Capital Territory (a Bachelor of Mathematical Sciences, BMASC)
University of Adelaide, Adelaide, South Australia (a Bachelor of Mathematical Sciences, BMathSc, or Bachelor of Mathematical and Computer Sciences, BMath&CompSc)
University of Newcastle, Newcastle, New South Wales
University of Western Sydney (Penrith, Parramatta and Campbelltown campuses), New South Wales
Macquarie University, North Ryde, New South Wales
University of Queensland, Brisbane, Queensland
University of South Australia, Adelaide, South Australia (a Bachelor of Mathematical Sciences, BMathSc)
University of Wollongong, Wollongong, New South Wales

Bangladesh
University of Dhaka, Dhaka, Bangladesh
Jagannath University, Dhaka, Bangladesh
University of Chittagong, Chittagong, Bangladesh
Noakhali University of Science and Technology, Noakhali, Bangladesh

Can
https://en.wikipedia.org/wiki/Need%20for%20Speed%3A%20Underground
Need for Speed: Underground is a 2003 racing video game and the seventh installment in the Need for Speed series. It was developed by EA Black Box and published by Electronic Arts. Three versions of the game were produced: one for consoles and Microsoft Windows, one for the Game Boy Advance, and an arcade version developed by Global VR and published by Konami with assistance from Electronic Arts. Underground rebooted the franchise, ignoring previous Need for Speed games that featured sports cars and exotics. It was the first game in the series to offer a career mode that featured a comprehensive storyline, and a garage mode that allowed players to fully customize their cars with a large variety of brand-name performance and visual upgrades. All races take place in the fictional Olympic City. Rather than exotic cars, Underground featured vehicles associated with the import scene. Underground was critically and commercially successful, and was followed by Need for Speed: Underground 2 in 2004.

Plot
The player begins the game in a circuit race set in Olympic City, driving a white Honda/Acura Integra Type R that sports a unique set of vinyls and a wide body kit. They win the race by a wide margin, only to be awakened from a daydream by a woman named Samantha. Samantha is the player's contact in Olympic City, touring them through the import culture scene and the illegal street racing within it. She helps the player buy their first car, although she later pokes fun at the player's choices regardless of their preference, saying "ouch, that's seriously weak, dude". She kickstarts the player's career by introducing them to local street racers such as Jose (who offers Circuit events), Klutch (who offers Drag events) and Dirt (who offers Drift events). In event #7, she introduces the player to T.J., a mechanic who rewards them with numerous performance upgrades and parts, provided they win his time trial challenges. Samantha also issues time trials to the player, her rewa
https://en.wikipedia.org/wiki/512%20%28number%29
512 (five hundred [and] twelve) is the natural number following 511 and preceding 513.

In mathematics
512 is a power of two: 2^9 (2 to the 9th power) and the cube of 8: 8^3. It is the eleventh Leyland number. It is also the third Dudeney number. It is a self number in base 12. It is a harshad number in decimal. It is the cube of the sum of its digits in base 10. It is the number of directed graphs on 3 labeled nodes.

In computing
512 bytes is a common disk sector size, and exactly half a kibibyte. Internet Relay Chat restricts the size of a message to 510 bytes, which fits into 512-byte buffers when coupled with the message-separating CRLF sequence. 512 = 2·256 is the highest number of glyphs that the VGA character generator can use simultaneously.

In music
Selena Quintanilla released a song titled El Chico del Apartamento 512 (the title referring to area code 512, which serves Austin, Texas) in 1995. Lamb of God recorded a song titled "512" for their 2015 album VII: Sturm und Drang. Mora and Jhay Cortez recorded a song titled "512" in February 2021 (the number 512 in this song refers to the Percocet 512 pill, a white, round pill whose active substances are acetaminophen and oxycodone hydrochloride).
References Integers
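The arithmetic claims above are easy to check mechanically. The following short Python sketch (an editorial illustration, not part of the article) verifies several of them; the Leyland property is checked by brute force over small bases, and directed graphs are counted as one bit per ordered pair of nodes.

```python
# Quick sanity checks (illustration only) of the number-theoretic claims above,
# using just the Python standard library.

assert 512 == 2**9 == 8**3                        # a power of two and the cube of 8

digit_sum = sum(int(d) for d in str(512))         # 5 + 1 + 2 = 8
assert 512 % digit_sum == 0                       # harshad number in decimal
assert 512 == digit_sum**3                        # cube of its digit sum

# Leyland number: 512 = x**y + y**x for some integers x, y > 1 (here 4**4 + 4**4).
assert any(x**y + y**x == 512 for x in range(2, 10) for y in range(2, 10))

# Directed graphs on 3 labeled nodes (self-loops allowed): one bit per ordered pair.
assert 2**(3 * 3) == 512

print("all checks passed")
```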
https://en.wikipedia.org/wiki/Abelian%20integral
In mathematics, an abelian integral, named after the Norwegian mathematician Niels Henrik Abel, is an integral in the complex plane of the form where is an arbitrary rational function of the two variables and , which are related by the equation where is an irreducible polynomial in , whose coefficients , are rational functions of . The value of an abelian integral depends not only on the integration limits, but also on the path along which the integral is taken; it is thus a multivalued function of . Abelian integrals are natural generalizations of elliptic integrals, which arise when where is a polynomial of degree 3 or 4. Another special case of an abelian integral is a hyperelliptic integral, where , in the formula above, is a polynomial of degree greater than 4. History The theory of abelian integrals originated with a paper by Abel published in 1841. This paper was written during his stay in Paris in 1826 and presented to Augustin-Louis Cauchy in October of the same year. This theory, later fully developed by others, was one of the crowning achievements of nineteenth century mathematics and has had a major impact on the development of modern mathematics. In more abstract and geometric language, it is contained in the concept of abelian variety, or more precisely in the way an algebraic curve can be mapped into abelian varieties. Abelian integrals were later connected to the prominent mathematician David Hilbert's 16th Problem, and they continue to be considered one of the foremost challenges in contemporary mathematics. Modern view In the theory of Riemann surfaces, an abelian integral is a function related to the indefinite integral of a differential of the first kind. Suppose we are given a Riemann surface and on it a differential 1-form that is everywhere holomorphic on , and fix a point on , from which to integrate. We can regard as a multi-valued function , or (better) an honest function of the chosen path drawn on from to . Since
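The inline formulas in this excerpt did not survive extraction. As a hedged reconstruction from standard treatments (the symbols R, F, x, w and the coefficients below are conventional choices, not recovered from this text), the definition can be displayed as:

```latex
% Hedged reconstruction in conventional notation (not recovered from this text).
% An abelian integral is an integral of the form
\int R(x, w)\,dx,
% where R(x, w) is a rational function of x and w, and the two variables are related by
F(x, w) = \varphi_n(x)\,w^n + \cdots + \varphi_1(x)\,w + \varphi_0(x) = 0,
% an irreducible polynomial in w whose coefficients \varphi_j(x) are rational functions of x.
% Elliptic integrals arise when w^2 = P(x) with deg P = 3 or 4; a hyperelliptic integral
% arises when P has degree greater than 4.
```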
https://en.wikipedia.org/wiki/Differential%20of%20the%20first%20kind
In mathematics, differential of the first kind is a traditional term used in the theories of Riemann surfaces (more generally, complex manifolds) and algebraic curves (more generally, algebraic varieties), for everywhere-regular differential 1-forms. Given a complex manifold M, a differential of the first kind ω is therefore the same thing as a 1-form that is everywhere holomorphic; on an algebraic variety V that is non-singular it would be a global section of the coherent sheaf Ω1 of Kähler differentials. In either case the definition has its origins in the theory of abelian integrals. The dimension of the space of differentials of the first kind, by means of this identification, is the Hodge number h1,0. The differentials of the first kind, when integrated along paths, give rise to integrals that generalise the elliptic integrals to all curves over the complex numbers. They include for example the hyperelliptic integrals of type where Q is a square-free polynomial of any given degree > 4. The allowable power k has to be determined by analysis of the possible pole at the point at infinity on the corresponding hyperelliptic curve. When this is done, one finds that the condition is k ≤ g − 1, or in other words, k at most 1 for degree of Q 5 or 6, at most 2 for degree 7 or 8, and so on (as g = [(1+ deg Q)/2]). Quite generally, as this example illustrates, for a compact Riemann surface or algebraic curve, the Hodge number is the genus g. For the case of algebraic surfaces, this is the quantity known classically as the irregularity q. It is also, in general, the dimension of the Albanese variety, which takes the place of the Jacobian variety. Differentials of the second and third kind The traditional terminology also included differentials of the second kind and of the third kind. The idea behind this has been supported by modern theories of algebraic differential forms, both from the side of more Hodge theory, and through the use of morphisms to commutative
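The elided integral type above is, in standard treatments, the hyperelliptic integral of the first kind; a hedged reconstruction in conventional notation (Q, k and g below follow the usual convention rather than this text) is given next. The worked values quoted above (k at most 1 for degree 5 or 6, at most 2 for degree 7 or 8) correspond to the genus relation shown.

```latex
% Hedged reconstruction in conventional notation (not recovered from this text).
% Hyperelliptic integrals of the first kind have the shape
\int \frac{x^{k}\,dx}{\sqrt{Q(x)}},
% with Q a square-free polynomial of degree greater than 4, and the allowable powers satisfy
k \le g - 1, \qquad g = \left\lfloor \frac{\deg Q - 1}{2} \right\rfloor .
```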
https://en.wikipedia.org/wiki/Point%20group
In geometry, a point group is a mathematical group of symmetry operations (isometries in a Euclidean space) that have a fixed point in common. The coordinate origin of the Euclidean space is conventionally taken to be a fixed point, and every point group in dimension d is then a subgroup of the orthogonal group O(d). Point groups are used to describe the symmetries of geometric figures and physical objects such as molecules. Each point group can be represented as sets of orthogonal matrices M that transform point x into point y according to y = Mx. Each element of a point group is either a rotation (determinant of M = 1), or it is a reflection or improper rotation (determinant of M = −1). The geometric symmetries of crystals are described by space groups, which allow translations and contain point groups as subgroups. Discrete point groups in more than one dimension come in infinite families, but from the crystallographic restriction theorem and one of Bieberbach's theorems, each number of dimensions has only a finite number of point groups that are symmetric over some lattice or grid with that number of dimensions. These are the crystallographic point groups. Chiral and achiral point groups, reflection groups Point groups can be classified into chiral (or purely rotational) groups and achiral groups. The chiral groups are subgroups of the special orthogonal group SO(d): they contain only orientation-preserving orthogonal transformations, i.e., those of determinant +1. The achiral groups contain also transformations of determinant −1. In an achiral group, the orientation-preserving transformations form a (chiral) subgroup of index 2. Finite Coxeter groups or reflection groups are those point groups that are generated purely by a set of reflectional mirrors passing through the same point. A rank n Coxeter group has n mirrors and is represented by a Coxeter-Dynkin diagram. Coxeter notation offers a bracketed notation equivalent to the Coxeter diagram, with marku
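The determinant test described above is easy to demonstrate numerically. The following Python sketch (an illustration with example matrices chosen here, not taken from the article) classifies two common point-group operations.

```python
# Illustrative sketch (not from the article): classifying point-group operations in O(3)
# by the determinant test described above, using NumPy. The matrices are examples chosen here.
import numpy as np

def classify(M: np.ndarray) -> str:
    """Return 'rotation' if det M = +1, 'reflection/improper rotation' if det M = -1."""
    d = round(float(np.linalg.det(M)))
    if d == 1:
        return "rotation"
    if d == -1:
        return "reflection/improper rotation"
    return "not an orthogonal symmetry operation"

# 180-degree rotation about the z-axis, and a mirror in the xy-plane.
C2z = np.diag([-1.0, -1.0, 1.0])
sigma_xy = np.diag([1.0, 1.0, -1.0])

print(classify(C2z))       # rotation
print(classify(sigma_xy))  # reflection/improper rotation
```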
https://en.wikipedia.org/wiki/Computable%20function
Computable functions are the basic objects of study in computability theory. Computable functions are the formalized analogue of the intuitive notion of algorithms, in the sense that a function is computable if there exists an algorithm that can do the job of the function, i.e. given an input of the function domain it can return the corresponding output. Computable functions are used to discuss computability without referring to any concrete model of computation such as Turing machines or register machines. Any definition, however, must make reference to some specific model of computation but all valid definitions yield the same class of functions. Particular models of computability that give rise to the set of computable functions are the Turing-computable functions and the general recursive functions. Before the precise definition of computable function, mathematicians often used the informal term effectively calculable. This term has since come to be identified with the computable functions. The effective computability of these functions does not imply that they can be efficiently computed (i.e. computed within a reasonable amount of time). In fact, for some effectively calculable functions it can be shown that any algorithm that computes them will be very inefficient in the sense that the running time of the algorithm increases exponentially (or even superexponentially) with the length of the input. The fields of feasible computability and computational complexity study functions that can be computed efficiently. According to the Church–Turing thesis, computable functions are exactly the functions that can be calculated using a mechanical calculation device given unlimited amounts of time and storage space. Equivalently, this thesis states that a function is computable if and only if it has an algorithm. An algorithm in this sense is understood to be a sequence of steps a person with unlimited time and an unlimited supply of pen and paper could follow. The
https://en.wikipedia.org/wiki/BBC%20News
BBC News is an operational business division of the British Broadcasting Corporation (BBC) responsible for the gathering and broadcasting of news and current affairs in the UK and around the world. The department is the world's largest broadcast news organisation and generates about 120 hours of radio and television output each day, as well as online news coverage. The service maintains 50 foreign news bureaus with more than 250 correspondents around the world. Deborah Turness has been the CEO of news and current affairs since September 2022. In 2019, it was reported in an Ofcom report that the BBC spent £136m on news during the period April 2018 to March 2019. BBC News' domestic, global and online news divisions are housed within the largest live newsroom in Europe, in Broadcasting House in central London. Parliamentary coverage is produced and broadcast from studios in London. Through BBC English Regions, the BBC also has regional centres across England and national news centres in Northern Ireland, Scotland and Wales. All nations and English regions produce their own local news programmes and other current affairs and sport programmes. The BBC is a quasi-autonomous corporation authorised by royal charter, making it operationally independent of the government. History Early years The British Broadcasting Company broadcast its first radio bulletin from radio station 2LO on 14 November 1922. Wishing to avoid competition, newspaper publishers persuaded the government to ban the BBC from broadcasting news before 7 p.m., and to force it to use wire service copy instead of reporting on its own. The BBC gradually gained the right to edit the copy and, in 1934, created its own news operation. However, it could not broadcast news before 6 p.m. until World War II. In addition to news, Gaumont British and Movietone cinema newsreels had been broadcast on the TV service since 1936, with the BBC producing its own equivalent Television Newsreel programme from January 1948.
https://en.wikipedia.org/wiki/Hyperbolic%20sector
A hyperbolic sector is a region of the Cartesian plane bounded by a hyperbola and two rays from the origin to it. For example, the region bounded by the rays to the two points (a, 1/a) and (b, 1/b) on the rectangular hyperbola xy = 1, or the corresponding region when this hyperbola is re-scaled and its orientation is altered by a rotation leaving the center at the origin, as with the unit hyperbola. A hyperbolic sector in standard position has a = 1 and b > 1. Hyperbolic sectors are the basis for the hyperbolic functions.

Area
The area of a hyperbolic sector in standard position is the natural logarithm of b. Proof: Integrate under 1/x from 1 to b, add triangle {(0, 0), (1, 0), (1, 1)}, and subtract triangle {(0, 0), (b, 0), (b, 1/b)} (both triangles have the same area). When in standard position, a hyperbolic sector corresponds to a positive hyperbolic angle at the origin, with the measure of the latter being defined as the area of the former.

Hyperbolic triangle
When in standard position, a hyperbolic sector determines a hyperbolic triangle, the right triangle with one vertex at the origin, base on the diagonal ray y = x, and third vertex on the hyperbola, with the hypotenuse being the segment from the origin to the point (x, y) on the hyperbola. The length of the base of this triangle is √2 cosh u, and the altitude is √2 sinh u, where u is the appropriate hyperbolic angle. The analogy between circular and hyperbolic functions was described by Augustus De Morgan in his Trigonometry and Double Algebra (1849). William Burnside used such triangles, projecting from a point on the hyperbola xy = 1 onto the main diagonal, in his article "Note on the addition theorem for hyperbolic functions".

Hyperbolic logarithm
It is known that f(x) = x^p has an algebraic antiderivative except in the case p = –1 corresponding to the quadrature of the hyperbola. The other cases are given by Cavalieri's quadrature formula. Whereas quadrature of the parabola had been accomplished by Archimedes in the third century BC (in The Quadrature of the Parabola
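The area construction in the proof above can be checked numerically. The sketch below (an illustration, not part of the article) integrates 1/x with a crude midpoint rule, applies the two triangle corrections, and compares the result with the natural logarithm.

```python
# Illustrative numeric check (not from the article) of the area construction above:
# sector area = integral of 1/x from 1 to b, plus triangle {(0,0),(1,0),(1,1)},
# minus triangle {(0,0),(b,0),(b,1/b)}. Both triangles have area 1/2, so they cancel.
import math

def sector_area(b: float, steps: int = 100_000) -> float:
    h = (b - 1.0) / steps
    integral = sum(1.0 / (1.0 + (i + 0.5) * h) for i in range(steps)) * h  # midpoint rule
    tri_in = 0.5 * 1.0 * 1.0          # triangle (0,0), (1,0), (1,1)
    tri_out = 0.5 * b * (1.0 / b)     # triangle (0,0), (b,0), (b,1/b)
    return integral + tri_in - tri_out

b = 5.0
print(sector_area(b), math.log(b))   # both approximately 1.6094
```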
https://en.wikipedia.org/wiki/Hyperbolic%20angle
In geometry, hyperbolic angle is a real number determined by the area of the corresponding hyperbolic sector of xy = 1 in Quadrant I of the Cartesian plane. The hyperbolic angle parametrises the unit hyperbola, which has hyperbolic functions as coordinates. In mathematics, hyperbolic angle is an invariant measure as it is preserved under hyperbolic rotation. The hyperbola xy = 1 is rectangular with a semi-major axis of , analogous to the magnitude of a circular angle corresponding to the area of a circular sector in a circle with radius . Hyperbolic angle is used as the independent variable for the hyperbolic functions sinh, cosh, and tanh, because these functions may be premised on hyperbolic analogies to the corresponding circular trigonometric functions by regarding a hyperbolic angle as defining a hyperbolic triangle. The parameter thus becomes one of the most useful in the calculus of real variables. Definition Consider the rectangular hyperbola , and (by convention) pay particular attention to the branch . First define: The hyperbolic angle in standard position is the angle at between the ray to and the ray to , where . The magnitude of this angle is the area of the corresponding hyperbolic sector, which turns out to be . Note that, because of the role played by the natural logarithm: Unlike the circular angle, the hyperbolic angle is unbounded (because is unbounded); this is related to the fact that the harmonic series is unbounded. The formula for the magnitude of the angle suggests that, for , the hyperbolic angle should be negative. This reflects the fact that, as defined, the angle is directed. Finally, extend the definition of hyperbolic angle to that subtended by any interval on the hyperbola. Suppose are positive real numbers such that and , so that and are points on the hyperbola and determine an interval on it. Then the squeeze mapping maps the angle to the standard position angle . By the result of Gregoire de Saint-Vincent, the
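As a small illustration of the link between the hyperbolic angle on xy = 1 and the hyperbolic functions (an editorial sketch using the conventional change of coordinates X = (x + y)/2, Y = (x − y)/2, not taken from the article):

```python
# Illustrative sketch (not from the article): the point (x, 1/x) on xy = 1 at hyperbolic
# angle u = ln(x) corresponds, under the linear change of coordinates X = (x + y)/2,
# Y = (x - y)/2, to the point (cosh u, sinh u) on the unit hyperbola X^2 - Y^2 = 1.
import math

u = 0.75
x = math.exp(u)                 # point (x, 1/x) on xy = 1; sector area from (1, 1) is u
X = (x + 1.0 / x) / 2.0
Y = (x - 1.0 / x) / 2.0

print(math.isclose(X, math.cosh(u)))       # True
print(math.isclose(Y, math.sinh(u)))       # True
print(math.isclose(X * X - Y * Y, 1.0))    # True: lies on the unit hyperbola
```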
https://en.wikipedia.org/wiki/Squeeze%20mapping
In linear algebra, a squeeze mapping, also called a squeeze transformation, is a type of linear map that preserves Euclidean area of regions in the Cartesian plane, but is not a rotation or shear mapping. For a fixed positive real number , the mapping is the squeeze mapping with parameter . Since is a hyperbola, if and , then and the points of the image of the squeeze mapping are on the same hyperbola as is. For this reason it is natural to think of the squeeze mapping as a hyperbolic rotation, as did Émile Borel in 1914, by analogy with circular rotations, which preserve circles. Logarithm and hyperbolic angle The squeeze mapping sets the stage for development of the concept of logarithms. The problem of finding the area bounded by a hyperbola (such as is one of quadrature. The solution, found by Grégoire de Saint-Vincent and Alphonse Antonio de Sarasa in 1647, required the natural logarithm function, a new concept. Some insight into logarithms comes through hyperbolic sectors that are permuted by squeeze mappings while preserving their area. The area of a hyperbolic sector is taken as a measure of a hyperbolic angle associated with the sector. The hyperbolic angle concept is quite independent of the ordinary circular angle, but shares a property of invariance with it: whereas circular angle is invariant under rotation, hyperbolic angle is invariant under squeeze mapping. Both circular and hyperbolic angle generate invariant measures but with respect to different transformation groups. The hyperbolic functions, which take hyperbolic angle as argument, perform the role that circular functions play with the circular angle argument. Group theory In 1688, long before abstract group theory, the squeeze mapping was described by Euclid Speidell in the terms of the day: "From a Square and an infinite company of Oblongs on a Superficies, each Equal to that square, how a curve is begotten which shall have the same properties or affections of any Hyperbola inscribe
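A short numerical illustration of the two invariances mentioned above, hyperbola preservation and area preservation, follows; the specific numbers are chosen here for the example and are not from the article.

```python
# Illustrative sketch (not from the article): the squeeze map (x, y) -> (r*x, y/r)
# keeps points of xy = 1 on that hyperbola and preserves areas (shown here for a rectangle).
import math

def squeeze(point, r):
    x, y = point
    return (r * x, y / r)

r = 3.0
p = (2.0, 0.5)                            # lies on xy = 1
q = squeeze(p, r)
print(math.isclose(q[0] * q[1], 1.0))     # True: the image is still on xy = 1

# The area of an axis-aligned rectangle is preserved under the squeeze.
width, height = 4.0, 0.25
print(math.isclose((r * width) * (height / r), width * height))  # True
```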
https://en.wikipedia.org/wiki/WLBT%20Tower
The WLBT Tower was a guy-wired aerial mast for the transmission of FM radio and TV-programs in Raymond, Mississippi, United States, for the Jackson metro area. The mast was constructed in 1966. On October 23, 1997, the mast, which was property of Cosmos Broadcasting (WLBT a local NBC affiliate), collapsed during the renovation of its guy wires. Three workers were killed. The tower was rebuilt in 1999. See also List of masts References External links http://www.skyscraperpage.com/diagrams/?b7120 http://msrmaps.com/GetImageArea.ashx?t=1&s=10&lon=-90.382500&lat=32.213889&w=600&h=400&f=&fs=8&fc=ffffff99&logo=1&lp= http://www.current.org/mo/mo722f.html Towers completed in 1966 Towers in Mississippi Radio masts and towers Former radio masts and towers 1966 establishments in Mississippi
https://en.wikipedia.org/wiki/Rouch%C3%A9%27s%20theorem
Rouché's theorem, named after Eugène Rouché, states that for any two complex-valued functions f and g holomorphic inside some region K with closed contour ∂K, if |g(z)| < |f(z)| on ∂K, then f and f + g have the same number of zeros inside K, where each zero is counted as many times as its multiplicity. This theorem assumes that the contour ∂K is simple, that is, without self-intersections. Rouché's theorem is an easy consequence of a stronger symmetric Rouché's theorem described below.

Usage
The theorem is usually used to simplify the problem of locating zeros, as follows. Given an analytic function, we write it as the sum of two parts, one of which is simpler and grows faster than (thus dominates) the other part. We can then locate the zeros by looking at only the dominating part. For example, the polynomial has exactly 5 zeros in the disk since for every , and , the dominating part, has five zeros in the disk.

Geometric explanation
It is possible to provide an informal explanation of Rouché's theorem. Let C be a closed, simple curve (i.e., not self-intersecting). Let h(z) = f(z) + g(z). If f and g are both holomorphic on the interior of C, then h must also be holomorphic on the interior of C. Then, with the conditions imposed above, Rouché's theorem in its original (and not symmetric) form says that if |f(z)| > |h(z) − f(z)| for every z on C, then f and h have the same number of zeros in the interior of C. Notice that the condition |f(z)| > |h(z) − f(z)| means that for any z, the distance from f(z) to the origin is larger than the length of h(z) − f(z), which in the following picture means that for each point on the blue curve, the segment joining it to the origin is larger than the green segment associated with it. Informally we can say that the blue curve f(z) is always closer to the red curve h(z) than it is to the origin. The previous paragraph shows that h(z) must wind around the origin exactly as many times as f(z). The index of both curves around zero is therefore the same, so by the argument principle, f(z) and h(z) must have the same number of zeros inside C. One popular, informal
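The article's own example polynomial did not survive extraction, so the sketch below uses a hypothetical polynomial in the same spirit: on |z| = 2 the dominating part z^5 has modulus 32, while |3z^2 + 1| is at most 13, so Rouché's theorem gives five zeros in the disk |z| < 2, which NumPy confirms numerically.

```python
# Hypothetical example (the article's own polynomial was lost in extraction): by the
# dominant-part argument above, p(z) = z**5 + 3*z**2 + 1 has 5 zeros in |z| < 2, because
# |3*z**2 + 1| <= 13 < 32 = |z**5| on |z| = 2 and z**5 has five zeros in the disk.
import numpy as np

coeffs = [1, 0, 0, 3, 0, 1]              # z^5 + 3 z^2 + 1, highest degree first
roots = np.roots(coeffs)
print(sum(abs(roots) < 2))               # 5
```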
https://en.wikipedia.org/wiki/Independence%20%28mathematical%20logic%29
In mathematical logic, independence is the unprovability of a sentence from other sentences. A sentence σ is independent of a given first-order theory T if T neither proves nor refutes σ; that is, it is impossible to prove σ from T, and it is also impossible to prove from T that σ is false. Sometimes, σ is said (synonymously) to be undecidable from T; this is not the same meaning of "decidability" as in a decision problem. A theory T is independent if each axiom in T is not provable from the remaining axioms in T. A theory for which there is an independent set of axioms is independently axiomatizable. Usage note Some authors say that σ is independent of T when T simply cannot prove σ, and do not necessarily assert by this that T cannot refute σ. These authors will sometimes say "σ is independent of and consistent with T" to indicate that T can neither prove nor refute σ. Independence results in set theory Many interesting statements in set theory are independent of Zermelo–Fraenkel set theory (ZF). The following statements in set theory are known to be independent of ZF, under the assumption that ZF is consistent: The axiom of choice The continuum hypothesis and the generalized continuum hypothesis The Suslin conjecture The following statements (none of which have been proved false) cannot be proved in ZFC (the Zermelo-Fraenkel set theory plus the axiom of choice) to be independent of ZFC, under the added hypothesis that ZFC is consistent. The existence of strongly inaccessible cardinals The existence of large cardinals The non-existence of Kurepa trees The following statements are inconsistent with the axiom of choice, and therefore with ZFC. However they are probably independent of ZF, in a corresponding sense to the above: They cannot be proved in ZF, and few working set theorists expect to find a refutation in ZF. However ZF cannot prove that they are independent of ZF, even with the added hypothesis that ZF is consistent. The axiom of determinacy The
https://en.wikipedia.org/wiki/Free%20Haven%20Project
The Free Haven Project was formed in 1999 by a group of Massachusetts Institute of Technology students with the aim of developing a secure, decentralized system of data storage. The group's work led to a collaboration with the United States Naval Research Laboratory to develop Tor, funded by DARPA.

Distributed anonymous storage system
The Project's early work focused on an anonymous storage system, Free Haven, which was designed to ensure the privacy and security of both readers and publishers. The project contrasts Free Haven with anonymous publishing services, emphasizing persistence rather than accessibility. Free Haven is a distributed peer-to-peer system designed to create a "servnet" consisting of "servnet nodes", each of which holds fragments ("shares") of documents, divided using Rabin's information dispersal algorithm such that the publisher or file contents cannot be determined by any one piece. The shares are stored on the servnet along with a unique public key. To recover and recreate the file, a client broadcasts the public key to find fragments, which are sent to the client along anonymous routes. For greater security, Free Haven periodically moves the location of shares between nodes. Its function is similar to Freenet but with greater focus on persistence to ensure unpopular files do not disappear. The mechanisms that enable this persistence, however, are also the cause of some problems with inefficiency. A referral- or recommendation-based "metatrust" reputation system built into the servnet attempts to ensure reciprocity and information value by holding node operators accountable. Although nodes remain pseudonymous, communication between operators is facilitated through anonymous email.

Work with Tor
Tor was developed by the US Naval Research Laboratory and the Free Haven Project to secure government communications, with initial funding from the US Office of Naval Research and DARPA. Tor was deployed in 2003, as their third generation of deployed onion routing
https://en.wikipedia.org/wiki/Sport%20Aid
Sport Aid (also known as Sports Aid) was a sport-themed campaign for African famine relief held in May 1986, involving several days of all-star exhibition events in various sports, and culminating in the Race Against Time, a 10 km fun run held simultaneously in 89 countries. Timed to coincide with a UNICEF development conference in New York City, Sport Aid raised $37 million for Live Aid and UNICEF. A second, lower-key Sport Aid was held in 1988. Organisation The event was organised by chairman and founder Chris Long, Bob Geldof (Band Aid and Live Aid) and John Anderson (Head of Global Special Events, UNICEF). A central event was the lighting of a symbolic torch at the United Nations by Omar Khalifa, a champion Sudanese 1500m runner, to signal the start of the 10K races around the world. Khalifa began his journey to the UN on May 16, when he lit a torch from the embers of a fire in El Moweilih relief camp in the Sudan. He was then flown to Athens, where the torch of Africa and the Olympic torch were symbolically joined. This marked the first time the Olympic torch had been lit outside of an Olympic Games. Khalifa then ran through 12 European capitals, and was greeted by leaders such as Margaret Thatcher, Prince Charles and Princess Diana, François Mitterrand, Helmut Kohl and Pope John Paul II. The events in the United States were not widely publicized, due in part to the Hands Across America fundraiser, to fight hunger and homelessness, occurring around the same time. Sport Aid was scheduled to take place on the eve of a UN special session on Africa; therefore, the conflict could not be avoided. 1986 Race Against Time At 15:00 UTC on Sunday, May 25, 1986, runners around the world ran, jogged or walked 10 kilometers, having collected sponsorships or donations to support African famine relief charities. 274 cities held official events, allowing over 19.8 million participants to follow designated courses, with television coverage worldwide. London saw 200,000 r
https://en.wikipedia.org/wiki/Biosolids
Biosolids are solid organic matter recovered from a sewage treatment process and used as fertilizer. In the past, it was common for farmers to use animal manure to improve their soil fertility. In the 1920s, the farming community began also to use sewage sludge from local wastewater treatment plants. Scientific research over many years has confirmed that these biosolids contain similar nutrients to those in animal manures. Biosolids that are used as fertilizer in farming are usually treated to help to prevent disease-causing pathogens from spreading to the public. Some sewage sludge can not qualify as biosolids due to persistent, bioaccumulative and toxic chemicals, radionuclides, and heavy metals at levels sufficient to contaminate soil and water when applied to land. Terminology Biosolids may be defined as organic wastewater solids that can be reused after suitable sewage sludge treatment processes leading to sludge stabilization such as anaerobic digestion and composting. Alternatively, the biosolids definition may be restricted by local regulations to wastewater solids only after those solids have completed a specified treatment sequence and/or have concentrations of pathogens and toxic chemicals below specified levels. The United States Environmental Protection Agency (EPA) defines the two terms – sewage sludge and biosolids – in the Code of Federal Regulations (CFR), Title 40, Part 503 as follows: Sewage sludge refers to the solids separated during the treatment of municipal wastewater (including domestic septage), while biosolids refers to treated sewage sludge that meets the EPA pollutant and pathogen requirements for land application and surface disposal. A similar definition has been used internationally, for example in Australia. Use of the term "biosolids" may officially be subject to government regulations. However, informal use describes a broad range of semi-solid organic products produced from sewage or sewage sludge. This could include any so
https://en.wikipedia.org/wiki/Victor%20Horsley
Sir Victor Alexander Haden Horsley (14 April 1857 – 16 July 1916) was a British scientist and professor. He was born in Kensington, London. Educated at Cranbrook School, Kent, he studied medicine at University College London and in Berlin, Germany (1881) and, in the same year, started his career as a house surgeon and registrar at the University College Hospital. From 1884 to 1890, Horsley was Professor-Superintendent of the Brown Institute. In 1886, he was appointed as Assistant Professor of Surgery at the National Hospital for Paralysis and Epilepsy, and as a Professor of Pathology (1887–1896) and Professor of Clinical Surgery (1899–1902) at University College London. He was a supporter of women's suffrage and was an opponent of tobacco and alcohol. Personal life Victor Alexander Haden Horsley was born in Kensington, London, the son of Rosamund (Haden) and John Callcott Horsley, R.A. His given name, Victor Alexander, was given to him by Queen Victoria. In 1883, he became engaged to Eldred Bramwell, daughter of Sir Frederick Bramwell. On 4 October 1887, Victor and Eldred married at St. Margaret's, Westminster. They had two sons, Siward and Oswald, and a daughter, Pamela. He was knighted in the 1902 Coronation Honours, receiving the accolade from King Edward VII at Buckingham Palace on 24 October that year. Horsley was a champion of many causes. One of his primary life crusades was the temperance movement. Having observed that many injuries admitted to the hospital were due to alcohol, Horsley threw himself into becoming a temperance reformer. He soon rose up to the position of vice president of the National Temperance League and the president of the British Medical Temperance Association. In 1907, along with Dr. Mary Sturge, he published a book on alcoholism titled Alcohol and the Human Body. According to his biographers, Tan & Black (2002), "Horsley's kindness, humility, and generous spirit endeared him to patients, colleagues, and students. Born to pri
https://en.wikipedia.org/wiki/Hebesphenomegacorona
In geometry, the hebesphenomegacorona is one of the Johnson solids (). It is one of the elementary Johnson solids that do not arise from "cut and paste" manipulations of the Platonic and Archimedean solids. It has 21 faces (18 triangles and 3 squares), 33 edges, and 14 vertices. Johnson uses the prefix hebespheno- to refer to a blunt wedge-like complex formed by three adjacent lunes, a lune being a square with equilateral triangles attached on opposite sides. Likewise, the suffix -megacorona refers to a crownlike complex of 12 triangles. Joining both complexes together results in the hebesphenomegacorona. The icosahedron can be obtained from the hebesphenomegacorona by merging the middle of the three squares into an edge, turning the two neighboring squares into triangles.

Cartesian coordinates
Let a ≈ 0.21684 be the second smallest positive root of the polynomial Then, Cartesian coordinates of a hebesphenomegacorona with edge length 2 are given by the union of the orbits of the points under the action of the group generated by reflections about the xz-plane and the yz-plane.
References External links Johnson solids
https://en.wikipedia.org/wiki/Sphenomegacorona
In geometry, the sphenomegacorona is one of the Johnson solids (). It is one of the elementary Johnson solids that do not arise from "cut and paste" manipulations of the Platonic and Archimedean solids. Johnson uses the prefix spheno- to refer to a wedge-like complex formed by two adjacent lunes, a lune being a square with equilateral triangles attached on opposite sides. Likewise, the suffix -megacorona refers to a crownlike complex of 12 triangles, contrasted with the smaller triangular complex that makes the sphenocorona. Joining both complexes together results in the sphenomegacorona. Cartesian coordinates Let k ≈ 0.59463 be the smallest positive root of the polynomial Then, Cartesian coordinates of a sphenomegacorona with edge length 2 are given by the union of the orbits of the points under the action of the group generated by reflections about the xz-plane and the yz-plane. We may then calculate the surface area of a sphenomegacorona of edge length a as and its volume as where the decimal expansion of ξ is given by . References External links Johnson solids
https://en.wikipedia.org/wiki/Sphenocorona
In geometry, the sphenocorona is one of the Johnson solids (). It is one of the elementary Johnson solids that do not arise from "cut and paste" manipulations of the Platonic and Archimedean solids. Johnson uses the prefix spheno- to refer to a wedge-like complex formed by two adjacent lunes, a lune being a square with equilateral triangles attached on opposite sides. Likewise, the suffix -corona refers to a crownlike complex of 8 equilateral triangles. Joining both complexes together results in the sphenocorona. Cartesian coordinates Let k ≈ 0.85273 be the smallest positive root of the quartic polynomial Then, Cartesian coordinates of a sphenocorona with edge length 2 are given by the union of the orbits of the points under the action of the group generated by reflections about the xz-plane and the yz-plane. One may then calculate the surface area of a sphenocorona of edge length a as and its volume as Variations The sphenocorona is also the vertex figure of the isogonal n-gonal double antiprismoid where n is an odd number greater than one, including the grand antiprism with pairs of trapezoid rather than square faces. See also Augmented sphenocorona References External links Johnson solids
https://en.wikipedia.org/wiki/Personal%20Information%20Protection%20and%20Electronic%20Documents%20Act
Personal Information Protection and Electronic Documents Act
Long title: An Act to support and promote electronic commerce by protecting the personal information that is collected, used or disclosed in certain circumstances, by providing for the use of electronic means to communicate or record information or transactions, and by amending the Canada Evidence Act, the Statutory Instruments Act and the Statute Revision Act
Citation: S.C. 2000, c. 5
Enacted by: Parliament of Canada
Assented to: 13 April 2000
Commencement: Section 1 in force 13 April 2000; Parts 2, 3 and 4 in force 1 May 2000; Part 1 in force 1 January 2001; Part 5 in force 1 June 2009
Bill: 36th Parliament, Bill C-6
Introduced by: John Manley, Minister of Industry

The Personal Information Protection and Electronic Documents Act (PIPEDA; ) is a Canadian law relating to data privacy. It governs how private sector organizations collect, use and disclose personal information in the course of commercial business. In addition, the Act contains various provisions to facilitate the use of electronic documents. PIPEDA became law on 13 April 2000 to promote consumer trust in electronic commerce. The act was also intended to reassure the European Union that the Canadian privacy law was adequate to protect the personal information of European citizens. In accordance with section 29 of PIPEDA, Part I of the Act ("Protection of Personal Information in the Private Sector") must be reviewed by Parliament every
https://en.wikipedia.org/wiki/Serial%20Attached%20SCSI
In computing, Serial Attached SCSI (SAS) is a point-to-point serial protocol that moves data to and from computer-storage devices such as hard disk drives and tape drives. SAS replaces the older Parallel SCSI (Parallel Small Computer System Interface, usually pronounced "scuzzy" or "sexy") bus technology that first appeared in the mid-1980s. SAS, like its predecessor, uses the standard SCSI command set. SAS offers optional compatibility with Serial ATA (SATA), versions 2 and later. This allows the connection of SATA drives to most SAS backplanes or controllers. The reverse, connecting SAS drives to SATA backplanes, is not possible. The T10 technical committee of the International Committee for Information Technology Standards (INCITS) develops and maintains the SAS protocol; the SCSI Trade Association (SCSITA) promotes the technology. Introduction A typical Serial Attached SCSI system consists of the following basic components: An initiator: a device that originates device-service and task-management requests for processing by a target device and receives responses for the same requests from other target devices. Initiators may be provided as an on-board component on the motherboard (as is the case with many server-oriented motherboards) or as an add-on host bus adapter. A target: a device containing logical units and target ports that receives device service and task management requests for processing and sends responses for the same requests to initiator devices. A target device could be a hard disk drive or a disk array system. A service delivery subsystem: the part of an I/O system that transmits information between an initiator and a target. Typically cables connecting an initiator and target with or without expanders and backplanes constitute a service delivery subsystem. Expanders: devices that form part of a service delivery subsystem and facilitate communication between SAS devices. Expanders facilitate the connection of multiple SAS End devices to
https://en.wikipedia.org/wiki/Peptidomimetic
A peptidomimetic is a small protein-like chain designed to mimic a peptide. They typically arise either from modification of an existing peptide, or by designing similar systems that mimic peptides, such as peptoids and β-peptides. Irrespective of the approach, the altered chemical structure is designed to advantageously adjust the molecular properties such as stability or biological activity. This can have a role in the development of drug-like compounds from existing peptides. Peptidomimetics can be prepared by cyclization of linear peptides or coupling of stable unnatural amino acids. These modifications involve changes to the peptide that will not occur naturally (such as altered backbones and the incorporation of nonnatural amino acids). Unnatural amino acids can be generated from their native analogs via modifications such as amine alkylation, side chain substitution, structural bond extension cyclization, and isosteric replacements within the amino acid backbone. Based on their similarity with the precursor peptide, peptidomimetics can be grouped into four classes (A – D) where A features the most and D the least similarities. Classes A and B involve peptide-like scaffolds, while classes C and D include small molecules (Figure 1). Uses of Peptidomimetics The use of peptides as drugs has some disadvantages because of their bioavailability and biostability. Rapid degradation, poor oral availability, difficult transportation through cell membranes, nonselective receptor binding, and challenging multistep preparation are the major limitations of peptides as active pharmaceutical ingredients. Therefore, small protein-like chains called peptidomimetics could be designed and used to mimic native analogs and conceivably exhibit better pharmacological properties. Many peptidomimetics are utilized as FDA-approved drugs, such as Romidepsin (Istodax), Atazanavir (Reyataz), Saquinavir (Invirase), Oktreotid (Sandostatin), Lanreotide (Somatuline), Plecanatide (Trulance)
https://en.wikipedia.org/wiki/Memory%20B%20cell
In immunology, a memory B cell (MBC) is a type of B lymphocyte that forms part of the adaptive immune system. These cells develop within germinal centers of the secondary lymphoid organs. Memory B cells circulate in the blood stream in a quiescent state, sometimes for decades. Their function is to memorize the characteristics of the antigen that activated their parent B cell during initial infection such that if the memory B cell later encounters the same antigen, it triggers an accelerated and robust secondary immune response. Memory B cells have B cell receptors (BCRs) on their cell membrane, identical to the one on their parent cell, that allow them to recognize antigen and mount a specific antibody response. Development and activation T cell dependent mechanisms In a T-cell dependent development pathway, naïve follicular B cells are activated by antigen presenting follicular B helper T cells (TFH) during the initial infection, or primary immune response. Naïve B cells circulate through follicles in secondary lymphoid organs (i.e. spleen and lymph nodes) where they can be activated by a floating foreign peptide brought in through the lymph or by antigen presented by antigen presenting cells (APCs) such as dendritic cells (DCs). B cells may also be activated by binding foreign antigen in the periphery where they then move into the secondary lymphoid organs. A signal transduced by the binding of the peptide to the B cell causes the cells to migrate to the edge of the follicle bordering the T cell area. The B cells internalize the foreign peptides, break them down, and express them on class II major histocompatibility complexes (MHCII), which are cell surface proteins. Within the secondary lymphoid organs, most of the B cells will enter B-cell follicles where a germinal center will form. Most B cells will eventually differentiate into plasma cells or memory B cells within the germinal center. The TFHs that express T cell receptors (TCRs) cognate to the peptide (
https://en.wikipedia.org/wiki/Auramine%E2%80%93rhodamine%20stain
The auramine–rhodamine stain (AR), also known as the Truant auramine–rhodamine stain, is a histological technique used to visualize acid-fast bacilli using fluorescence microscopy, notably species in the Mycobacterium genus. Acid-fast organisms display a reddish-yellow fluorescence. Although the auramine–rhodamine stain is not as specific for acid-fast organisms (e.g. Mycobacterium tuberculosis or Nocardia) as the Ziehl–Neelsen stain, it is more affordable and more sensitive, therefore it is often utilized as a screening tool. AR stain is a mixture of auramine O and rhodamine B. It is carcinogenic. See also Auramine phenol stain (AP stain) Biological stains References Acid-fast bacilli Fluorescent dyes Histology Microbiology techniques Staining
https://en.wikipedia.org/wiki/Snubber
A snubber is a device used to suppress ("snub") a phenomenon such as voltage transients in electrical systems, pressure transients in fluid systems (caused by for example water hammer) or excess force or rapid movement in mechanical systems. Electrical systems Snubbers are frequently used in electrical systems with an inductive load where the sudden interruption of current flow leads to a large counter-electromotive force: a rise in voltage across the current switching device that opposes the change in current, in accordance with Faraday's law. This transient can be a source of electromagnetic interference (EMI) in other circuits. Additionally, if the voltage generated across the device is beyond what the device is intended to tolerate, it may damage or destroy it. The snubber provides a short-term alternative current path around the current switching device so that the inductive element may be safely discharged. Inductive elements are often unintentional, arising from the current loops implied by physical circuitry like long and/or tortuous wires. While current switching is everywhere, snubbers will generally only be required where a major current path is switched, such as in power supplies. Snubbers are also often used to prevent arcing across the contacts of relays and switches, or the electrical interference, or the welding of the contacts that can occur (see also arc suppression). Resistor-capacitor (RC) A simple RC snubber uses a small resistor (R) in series with a small capacitor (C). This combination can be used to suppress the rapid rise in voltage across a thyristor, preventing the erroneous turn-on of the thyristor; it does this by limiting the rate of rise in voltage ( ) across the thyristor to a value which will not trigger it. An appropriately designed RC snubber can be used with either DC or AC loads. This sort of snubber is commonly used with inductive loads such as electric motors. The voltage across a capacitor cannot change instantaneously, s
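As a rough, first-order illustration of the rate-of-rise limiting idea described above (not a complete snubber design procedure; the supply voltage and component values below are assumptions chosen for the example): for an ideal voltage step V into a series RC, the capacitor voltage follows V(1 − exp(−t/RC)), whose steepest slope is V/(RC) at t = 0, so the product RC sets an upper bound on how fast that voltage can rise.

```python
# Simplified first-order sketch (assumed values, not a full snubber design procedure):
# with an ideal voltage step V into a series RC, the capacitor voltage is
# v(t) = V * (1 - exp(-t / (R*C))), whose steepest slope is V / (R*C) at t = 0.
# Keeping that slope below the switching device's dv/dt rating is one rough design check.
import math

V = 325.0        # peak of a 230 V AC supply, assumed for illustration (volts)
R = 100.0        # snubber resistance (ohms), assumed
C = 100e-9       # snubber capacitance (farads), assumed

tau = R * C
dv_dt_initial = V / tau                                     # volts per second
print(f"initial dv/dt ~ {dv_dt_initial / 1e6:.1f} V/us")    # ~32.5 V/us

t = 2e-6
v = V * (1 - math.exp(-t / tau))
print(f"capacitor voltage after {t*1e6:.0f} us: {v:.1f} V")
```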
https://en.wikipedia.org/wiki/Lead%20zirconate%20titanate
Lead zirconate titanate, also called lead zirconium titanate and commonly abbreviated as PZT, is an inorganic compound with the chemical formula It is a ceramic perovskite material that shows a marked piezoelectric effect, meaning that the compound changes shape when an electric field is applied. It is used in a number of practical applications such as ultrasonic transducers and piezoelectric resonators. It is a white to off-white solid. Lead zirconium titanate was first developed around 1952 at the Tokyo Institute of Technology. Compared to barium titanate, a previously discovered metallic-oxide-based piezoelectric material, lead zirconium titanate exhibits greater sensitivity and has a higher operating temperature. Piezoelectric ceramics are chosen for applications because of their physical strength, chemical inertness and their relatively low manufacturing cost. PZT ceramic is the most commonly used piezoelectric ceramic because it has an even greater sensitivity and higher operating temperature than other piezoceramics. Recently, there has been a large push towards finding alternatives to PZT due to legislations in many countries restricting the use of lead alloys and compounds in commercial products. Electroceramic properties Being piezoelectric, lead zirconate titanate develops a voltage (or potential difference) across two of its faces when compressed (useful for sensor applications), and physically changes shape when an external electric field is applied (useful for actuator applications). The relative permittivity of lead zirconate titanate can range from 300 to 20000, depending upon orientation and doping. Being pyroelectric, this material develops a voltage difference across two of its faces under changing temperature conditions; consequently, lead zirconate titanate can be used as a heat sensor. Lead zirconate titanate is also ferroelectric, which means that it has a spontaneous electric polarization (electric dipole) that can be reversed in the pre
https://en.wikipedia.org/wiki/Group%20identifier
In Unix-like systems, multiple users can be put into groups. POSIX and conventional Unix file system permissions are organized into three classes, user, group, and others. The use of groups allows additional abilities to be delegated in an organized fashion, such as access to disks, printers, and other peripherals. This method, among others, also enables the superuser to delegate some administrative tasks to normal users, similar to the Administrators group on Microsoft Windows NT and its derivatives. A group identifier, often abbreviated to GID, is a numeric value used to represent a specific group. The range of values for a GID varies amongst different systems; at the very least, a GID can be between 0 and 32,767, with one restriction: the login group for the superuser must have GID 0. This numeric value is used to refer to groups in the /etc/passwd and /etc/group files or their equivalents. Shadow password files and Network Information Service also refer to numeric GIDs. The group identifier is a necessary component of Unix file systems and processes. Supplementary groups In Unix systems, every user must be a member of at least one group, the primary group, which is identified by the numeric GID of the user's entry in the passwd database, which can be viewed with the command getent passwd (usually stored in /etc/passwd or LDAP). This group is referred to as the primary group ID. A user may be listed as member of additional groups in the relevant entries in the group database, which can be viewed with getent group (usually stored in /etc/group or LDAP); the IDs of these groups are referred to as supplementary group IDs. Effective vs. real Unix processes have an effective (EUID, EGID), a real (UID, GID) and a saved (SUID, SGID) ID. Normally these are identical, but in setuid and setgid processes they are different. Conventions Type Originally, a signed 16-bit integer was used. Since the sign was not necessary – negative numbers do not make valid group IDs
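The IDs discussed above can be inspected from Python's standard library on a Unix-like system; a minimal sketch (illustration only) follows.

```python
# Illustrative sketch (Unix-like systems only): reading the IDs discussed above with the
# Python standard library. The grp module reads the group database (/etc/group, NSS, etc.).
import os
import grp

real_gid = os.getgid()          # real GID (the user's primary group)
effective_gid = os.getegid()    # effective GID (differs in setgid programs)
supplementary = os.getgroups()  # supplementary group IDs

print("real:", real_gid, grp.getgrgid(real_gid).gr_name)
print("effective:", effective_gid)
print("supplementary:", supplementary)
```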
https://en.wikipedia.org/wiki/BIBO%20stability
In signal processing, specifically control theory, bounded-input, bounded-output (BIBO) stability is a form of stability for signals and systems that take inputs. If a system is BIBO stable, then the output will be bounded for every input to the system that is bounded. A signal is bounded if there is a finite value such that the signal magnitude never exceeds , that is For discrete-time signals: For continuous-time signals: Time-domain condition for linear time-invariant systems Continuous-time necessary and sufficient condition For a continuous time linear time-invariant (LTI) system, the condition for BIBO stability is that the impulse response, , be absolutely integrable, i.e., its L1 norm exists. Discrete-time sufficient condition For a discrete time LTI system, the condition for BIBO stability is that the impulse response be absolutely summable, i.e., its norm exists. Proof of sufficiency Given a discrete time LTI system with impulse response the relationship between the input and the output is where denotes convolution. Then it follows by the definition of convolution Let be the maximum value of , i.e., the -norm. (by the triangle inequality) If is absolutely summable, then and So if is absolutely summable and is bounded, then is bounded as well because . The proof for continuous-time follows the same arguments. Frequency-domain condition for linear time-invariant systems Continuous-time signals For a rational and continuous-time system, the condition for stability is that the region of convergence (ROC) of the Laplace transform includes the imaginary axis. When the system is causal, the ROC is the open region to the right of a vertical line whose abscissa is the real part of the "largest pole", or the pole that has the greatest real part of any pole in the system. The real part of the largest pole defining the ROC is called the abscissa of convergence. Therefore, all poles of the system must be in the strict left half of the s
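A standard textbook illustration of the discrete-time condition above (not taken from this article): for the impulse response h[n] = a^n for n ≥ 0, the l1 norm is finite exactly when |a| < 1, where it equals 1/(1 − |a|). The Python sketch below approximates the sum with finitely many terms.

```python
# Illustrative sketch (standard textbook example, not from the article): testing the
# discrete-time BIBO condition by (approximately) summing |h[n]|. For h[n] = a**n, n >= 0,
# the l1 norm is finite exactly when |a| < 1, where it equals 1 / (1 - |a|).

def l1_norm(a: float, terms: int = 10_000) -> float:
    return sum(abs(a) ** n for n in range(terms))

for a in (0.5, 0.99, 1.0):
    print(f"a = {a}: partial l1 sum over 10000 terms = {l1_norm(a):.1f}")
# a = 0.5 and a = 0.99 converge (to 2 and 100); a = 1.0 keeps growing with the number of
# terms, so that impulse response is not absolutely summable and the system is not BIBO stable.
```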
https://en.wikipedia.org/wiki/Online%20codes
In computer science, online codes are an example of rateless erasure codes. These codes can encode a message into a number of symbols such that knowledge of any fraction of them allows one to recover the original message (with high probability). Rateless codes produce an arbitrarily large number of symbols which can be broadcast until the receivers have enough symbols. The online encoding algorithm consists of several phases. First the message is split into n fixed-size message blocks. Then the outer encoding is an erasure code which produces auxiliary blocks that are appended to the message blocks to form a composite message. From this the inner encoding generates check blocks. Upon receiving a certain number of check blocks, some fraction of the composite message can be recovered. Once enough has been recovered, the outer decoding can be used to recover the original message.

Detailed discussion
Online codes are parameterised by the block size and two scalars, q and ε. The authors suggest q=3 and ε=0.01. These parameters set the balance between the complexity and performance of the encoding. A message of n blocks can be recovered, with high probability, from (1+3ε)n check blocks. The probability of failure is (ε/2)^(q+1).

Outer encoding
Any erasure code may be used as the outer encoding, but the author of online codes suggests the following. For each message block, pseudo-randomly choose q auxiliary blocks (from a total of 0.55qεn auxiliary blocks) to attach it to. Each auxiliary block is then the XOR of all the message blocks which have been attached to it.

Inner encoding
The inner encoding takes the composite message and generates a stream of check blocks. A check block is the XOR of all the blocks from the composite message that it is attached to. The degree of a check block is the number of blocks that it is attached to. The degree is determined by sampling a random distribution, p, which is defined as: for Once the degree of the check block is known, th
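A much-simplified Python sketch of the XOR relationships described above follows; it illustrates how message, auxiliary and check blocks are tied together, and deliberately omits the real degree distribution, the decoder and the parameter tuning of actual online codes.

```python
# Much-simplified sketch of the XOR structure described above (illustration only: the real
# scheme's degree distribution, decoder and parameter choices are not reproduced here).
import os
import random

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

BLOCK = 16
rng = random.Random(0)

message = [os.urandom(BLOCK) for _ in range(8)]          # n = 8 message blocks
q, eps = 3, 0.01
num_aux = max(1, int(0.55 * q * eps * len(message)))     # 0.55*q*eps*n auxiliary blocks

# Outer encoding: XOR each message block into q (here, capped) auxiliary blocks.
aux = [bytes(BLOCK) for _ in range(num_aux)]
for blk in message:
    for i in rng.sample(range(num_aux), min(q, num_aux)):
        aux[i] = xor(aux[i], blk)

composite = message + aux

# Inner encoding: a check block is the XOR of the composite blocks it is attached to.
def check_block(degree: int) -> tuple[list[int], bytes]:
    attached = rng.sample(range(len(composite)), degree)
    out = bytes(BLOCK)
    for i in attached:
        out = xor(out, composite[i])
    return attached, out

attached, chk = check_block(degree=3)
print("check block", chk.hex(), "built from composite blocks", attached)
```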
https://en.wikipedia.org/wiki/Pairing%20function
In mathematics, a pairing function is a process to uniquely encode two natural numbers into a single natural number. Any pairing function can be used in set theory to prove that integers and rational numbers have the same cardinality as natural numbers. Definition A pairing function is a bijection More generally, a pairing function on a set A is a function that maps each pair of elements from A into an element of A, such that any two pairs of elements of A are associated with different elements of A, or a bijection from to A. Hopcroft and Ullman pairing function Hopcroft and Ullman (1979) define the following pairing function: , where . This is the same as the Cantor pairing function below, shifted to exclude 0 (i.e., , , and ). Cantor pairing function The Cantor pairing function is a primitive recursive pairing function defined by where . It can also be expressed as . It is also strictly monotonic w.r.t. each argument, that is, for all , if , then ; similarly, if , then . The statement that this is the only quadratic pairing function is known as the Fueter–Pólya theorem. Whether this is the only polynomial pairing function is still an open question. When we apply the pairing function to and we often denote the resulting number as . This definition can be inductively generalized to the for as with the base case defined above for a pair: Inverting the Cantor pairing function Let be an arbitrary natural number. We will show that there exist unique values such that and hence that the function is invertible. It is helpful to define some intermediate values in the calculation: where is the triangle number of . If we solve the quadratic equation for as a function of , we get which is a strictly increasing and continuous function when is non-negative real. Since we get that and thus where is the floor function. So to calculate and from , we do: Since the Cantor pairing function is invertible, it must be one-to-one and onto. E
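The Cantor pairing function and the inversion procedure sketched above can be made concrete in a few lines of Python; the function names and the round-trip test are mine, not the article's, and math.isqrt is used so the floor of the square root is computed exactly.

# Sketch of the Cantor pairing function and its inverse, following the
# inversion steps outlined above (triangular numbers and a floored square root).
from math import isqrt

def cantor_pair(k1: int, k2: int) -> int:
    return (k1 + k2) * (k1 + k2 + 1) // 2 + k2

def cantor_unpair(z: int) -> tuple[int, int]:
    w = (isqrt(8 * z + 1) - 1) // 2      # w = floor((sqrt(8z + 1) - 1) / 2)
    t = w * (w + 1) // 2                 # t = triangular number of w
    k2 = z - t
    k1 = w - k2
    return k1, k2

# Round-trip check over a small grid of natural numbers.
assert all(cantor_unpair(cantor_pair(a, b)) == (a, b)
           for a in range(50) for b in range(50))
print(cantor_pair(47, 32), cantor_unpair(cantor_pair(47, 32)))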
https://en.wikipedia.org/wiki/Lucas%E2%80%93Carmichael%20number
In mathematics, a Lucas–Carmichael number is a positive composite integer n such that if p is a prime factor of n, then p + 1 is a factor of n + 1; n is odd and square-free. The first condition resembles the Korselt's criterion for Carmichael numbers, where -1 is replaced with +1. The second condition eliminates from consideration some trivial cases like cubes of prime numbers, such as 8 or 27, which otherwise would be Lucas–Carmichael numbers (since n3 + 1 = (n + 1)(n2 − n + 1) is always divisible by n + 1). They are named after Édouard Lucas and Robert Carmichael. Properties The smallest Lucas–Carmichael number is 399 = 3 × 7 × 19. It is easy to verify that 3+1, 7+1, and 19+1 are all factors of 399+1 = 400. The smallest Lucas–Carmichael number with 4 factors is 8855 = 5 × 7 × 11 × 23. The smallest Lucas–Carmichael number with 5 factors is 588455 = 5 × 7 × 17 × 23 × 43. It is not known whether any Lucas–Carmichael number is also a Carmichael number. Thomas Wright proved in 2016 that there are infinitely many Lucas–Carmichael numbers. If we let denote the number of Lucas–Carmichael numbers up to , Wright showed that there exists a positive constant such that . List of Lucas–Carmichael numbers The first few Lucas–Carmichael numbers and their prime factors are listed below. References External links Eponymous numbers in mathematics Integer sequences
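The definition translates directly into a brute-force search. The sketch below assumes the third-party sympy library for integer factorization; everything else simply encodes the conditions above (composite, odd, square-free, and p + 1 dividing n + 1 for every prime factor p).

# Sketch of a brute-force search for Lucas–Carmichael numbers.
from sympy import factorint   # assumed dependency for integer factorization

def is_lucas_carmichael(n: int) -> bool:
    if n < 3 or n % 2 == 0:
        return False
    factors = factorint(n)                       # {prime: exponent}
    if len(factors) < 2 or any(e > 1 for e in factors.values()):
        return False                             # must be composite and square-free
    return all((n + 1) % (p + 1) == 0 for p in factors)

print([n for n in range(3, 10000, 2) if is_lucas_carmichael(n)])
# Expected to start with 399, 935, 2015, 2915, 4991, 5719, 7055, 8855, ...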
https://en.wikipedia.org/wiki/Erasure%20code
In coding theory, an erasure code is a forward error correction (FEC) code under the assumption of bit erasures (rather than bit errors), which transforms a message of k symbols into a longer message (code word) with n symbols such that the original message can be recovered from a subset of the n symbols. The fraction r = k/n is called the code rate. The fraction k’/k, where k’ denotes the number of symbols required for recovery, is called reception efficiency. Optimal erasure codes Optimal erasure codes have the property that any k out of the n code word symbols are sufficient to recover the original message (i.e., they have optimal reception efficiency). Optimal erasure codes are maximum distance separable codes (MDS codes). Parity check Parity check is the special case where n = k + 1. From a set of k values , a checksum is computed and appended to the k source values: The set of k + 1 values is now consistent with regard to the checksum. If one of these values, , is erased, it can be easily recovered by summing the remaining variables: Polynomial oversampling Example: Err-mail (k = 2) In the simple case where k = 2, redundancy symbols may be created by sampling different points along the line between the two original symbols. This is pictured with a simple example, called err-mail: Alice wants to send her telephone number (555629) to Bob using err-mail. Err-mail works just like e-mail, except About half of all the mail gets lost. Messages longer than 5 characters are illegal. It is very expensive (similar to air-mail). Instead of asking Bob to acknowledge the messages she sends, Alice devises the following scheme. She breaks her telephone number up into two parts a = 555, b = 629, and sends 2 messages – "A=555" and "B=629" – to Bob. She constructs a linear function, , in this case , such that and . She computes the values f(3), f(4), and f(5), and then transmits three redundant messages: "C=703", "D=777" and "E=851". Bob knows that the form of f(
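The err-mail example can be replayed in a few lines of Python. The numbers follow the example above; the use of exact rational arithmetic and the helper names are assumptions of this sketch.

# Sketch of the err-mail example: oversample a line through the two message
# values and recover them from any two surviving samples.
from fractions import Fraction

a, b = 555, 629
f = lambda i: a + (b - a) * (i - 1)           # f(1) = a, f(2) = b
sent = {i: f(i) for i in range(1, 6)}         # A..E = 555, 629, 703, 777, 851

def recover(received):
    """Recover (a, b) from any two (i, f(i)) pairs."""
    (i1, v1), (i2, v2) = list(received.items())[:2]
    slope = Fraction(v2 - v1, i2 - i1)
    intercept = v1 - slope * i1
    return int(intercept + slope * 1), int(intercept + slope * 2)

# Suppose only messages "D" and "E" arrive:
print(recover({4: sent[4], 5: sent[5]}))      # -> (555, 629)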
https://en.wikipedia.org/wiki/Linux%20on%20embedded%20systems
Computer operating systems based on the Linux kernel are used in embedded systems such as consumer electronics (e.g. set-top boxes, smart TVs and personal video recorders (PVRs)), in-vehicle infotainment (IVI), networking equipment (such as routers, switches, wireless access points (WAPs) or wireless routers), machine control, industrial automation, navigation equipment, spacecraft flight software, and medical instruments in general. Because of their versatility, operating systems based on the Linux kernel can also be found in mobile devices that are effectively touchscreen-based embedded devices, such as smartphones and tablets, together with personal digital assistants (PDAs) and portable media players that also include a touchscreen. Embedded Linux is a challenge for many learners because their computing experience is based mainly on GUI (graphical user interface) interaction and high-level programming on the one hand, or on low-level programming of small microcontrollers (MCUs) on the other, while the command-line interfaces central to embedded Linux work are widely unfamiliar. History The Linux kernel has been ported to a variety of CPUs beyond those primarily used as the processor of a desktop or server computer, including ARC, ARM, AVR32, ETRAX CRIS, FR-V, H8300, IP7000, m68k, MIPS, mn10300, PowerPC, SuperH, and Xtensa processors. Linux is also used as an alternative to a proprietary operating system and its associated toolchain. Variants The Embeddable Linux Kernel Subset is a Linux distribution that fits on a floppy disk for outdated or low-resource hardware. Devices coverage Due to its low cost (freely available source code) and ease of customization, Linux has been shipped in many consumer devices. Devices covered include PDAs (like the Sharp Zaurus family), TomTom GPS navigation devices, residential gateways like the Linksys WRT54G series or smartphones such as the Motorola EZX series, Openmoko handsets, devices running Sailfish OS developed by Jolla like Jolla
https://en.wikipedia.org/wiki/Cesspit
Cesspit, cesspool and soak pit in some contexts are terms with various meanings: they are used to describe either an underground holding tank (sealed at the bottom) or a soak pit (not sealed at the bottom). A cesspit can be used for the temporary collection and storage of feces, excreta or fecal sludge as part of an on-site sanitation system and has some similarities with septic tanks or with soak pits. Traditionally, it was a deep cylindrical chamber dug into the ground, having approximate dimensions of 1 metre (3') diameter and 2–3 metres (6' to 10') depth. Their appearance was similar to that of a hand-dug water well. The pit can be lined with bricks or concrete, covered with a slab and needing to be emptied frequently when it is used like an underground holding tank. In other cases (if soil and groundwater conditions allow), it is not constructed watertight, to allow liquid to leach out (similar to a pit latrine or to a soak pit). Uses Holding tank In the UK a cesspit is a closed tank for the reception and temporary storage of sewage; in North America this is simply referred to as a "holding tank". Because it is sealed, the tank must be emptied frequently – on average every 6 weeks – but frequency varies a great deal and can be as often as weekly or as rarely as quarterly. Because of the need for frequent emptying, the cost of maintenance of a cesspit can be high. If owners in the UK do not maintain their cesspits they can be fined up to £20,000. Infiltration systems A cesspool was at one time built like a dry well lined with loose-fitting brick or stone, used for the disposal of sewage via infiltration into the soil. Liquids leaked out through the soil as conditions allowed, while solids decayed and collected as composted matter in the base of the cesspool. As the solids accumulated, eventually the particulate solids blocked the escape of liquids, causing the cesspool to drain more slowly or to overflow. A biofilm forms in the loose soil surrounding
https://en.wikipedia.org/wiki/Cartan%20matrix
In mathematics, the term Cartan matrix has three meanings. All of these are named after the French mathematician Élie Cartan. Amusingly, the Cartan matrices in the context of Lie algebras were first investigated by Wilhelm Killing, whereas the Killing form is due to Cartan. Lie algebras A (symmetrizable) generalized Cartan matrix is a square matrix with integer entries such that For diagonal entries, . For non-diagonal entries, . if and only if can be written as , where is a diagonal matrix, and is a symmetric matrix. For example, the Cartan matrix for G2 can be decomposed as such: The third condition is not independent but is really a consequence of the first and fourth conditions. We can always choose a D with positive diagonal entries. In that case, if S in the above decomposition is positive definite, then A is said to be a Cartan matrix. The Cartan matrix of a simple Lie algebra is the matrix whose elements are the scalar products (sometimes called the Cartan integers) where ri are the simple roots of the algebra. The entries are integral from one of the properties of roots. The first condition follows from the definition, the second from the fact that for is a root which is a linear combination of the simple roots ri and rj with a positive coefficient for rj and so, the coefficient for ri has to be nonnegative. The third is true because orthogonality is a symmetric relation. And lastly, let and . Because the simple roots span a Euclidean space, S is positive definite. Conversely, given a generalized Cartan matrix, one can recover its corresponding Lie algebra. (See Kac–Moody algebra for more details). Classification An matrix A is decomposable if there exists a nonempty proper subset such that whenever and . A is indecomposable if it is not decomposable. Let A be an indecomposable generalized Cartan matrix. We say that A is of finite type if all of its principal minors are positive, that A is of affine type if its proper princi
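A quick numeric check (not from the article) shows one way the G2 example works out: with a common sign convention for the G2 Cartan matrix and one valid choice of D and S (both are assumptions of this sketch), the matrix satisfies the generalized Cartan matrix conditions and S is positive definite, so it is a Cartan matrix of finite type.

# Numeric check of the generalized Cartan matrix conditions for G2.
import numpy as np

A = np.array([[2, -3],
              [-1, 2]])           # a common convention for the G2 Cartan matrix
D = np.diag([3.0, 1.0])           # diagonal with positive entries (one valid choice)
S = np.array([[2/3, -1.0],
              [-1.0, 2.0]])       # symmetric

assert (np.diag(A) == 2).all()                    # diagonal entries equal 2
assert (A[~np.eye(2, dtype=bool)] <= 0).all()     # off-diagonal entries are non-positive
assert np.allclose(D @ S, A)                      # A = DS, so A is symmetrizable
assert (np.linalg.eigvalsh(S) > 0).all()          # S positive definite -> Cartan matrix
print("G2 matrix satisfies the (finite-type) Cartan matrix conditions")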
https://en.wikipedia.org/wiki/Arnold%20Engineering%20Development%20Complex
The Arnold Engineering Development Complex (AEDC), Arnold Engineering Development Center before July 2012, is an Air Force Materiel Command facility under the control of the Air Force Test Center (AFTC). Named for General Henry "Hap" Arnold, the father of the U.S. Air Force, AEDC is the most advanced and largest complex of flight simulation test facilities in the world. Headquartered at Arnold Air Force Base, Tennessee, the Complex also operates from geographically separated units at Ames Research Center, Mountain View and Edwards AFB, California; Peterson AFB, Colorado; Eglin AFB, Florida; the Federal Research Center at White Oak, Maryland; Holloman AFB, Kirtland AFB, and White Sands Missile Range, New Mexico; Wright-Patterson AFB, Ohio, and Hill AFB, Utah. AEDC operates more than 68 test facilities, including, but not limited to, aerodynamic and propulsion wind tunnels, rocket and turbine engine test cells, environmental chambers, arc heaters, ballistic ranges, sled tracks, centrifuges, and other specialized test units. AEDC conducts developmental testing and evaluation through modeling, simulation, ground, and flight testing. Testing aims to evaluate aircraft, missile, and space systems/subsystems at the flight conditions they will experience during a mission. The complex aims to be the best value U.S. ground test and analysis source for aerospace and defense systems. Staff agencies Test Division Turbine Engine Ground Test Complex AEDC Sea Level Test Cells Aero-propulsion Systems Test Facility AEDC Engine Test Facility Propulsion Wind Tunnel Ground Test Complex Propulsion Wind Tunnel Facility Von Karman Gas Dynamics Facility Space and Missile Ground Test Complex AEDC Space Chambers Test Facility High-Enthalpy Arc Heated Facility Aero-ballistic Range Facility Aerodynamic and Propulsion Test Unit Maintenance Division Support Asset Branch Test Asset Branch Mission Support Division Communications Branch Civil Engineer Branch Services Test Syst
https://en.wikipedia.org/wiki/Raku%20%28programming%20language%29
Raku is a member of the Perl family of programming languages. Formerly known as Perl 6, it was renamed in October 2019. Raku introduces elements of many modern and historical languages. Compatibility with Perl was not a goal, though a compatibility mode is part of the specification. The design process for Raku began in 2000. History The Raku design process was first announced on 19 July 2000, on the fourth day of that year's Perl Conference, by Larry Wall in his State of the Onion 2000 talk. At that time, the primary goals were to remove "historical warts" from the language; "easy things should stay easy, hard things should get easier, and impossible things should get hard"; a general cleanup of the internal design and APIs. The process began with a series of requests for comments or "RFCs". This process was open to all contributors, and left no aspect of the language closed to change. Once the RFC process was complete, Wall reviewed and classified each of the 361 requests received. He then began the process of writing several "Apocalypses", using the original meaning of the term, "revealing". While the original goal was to write one Apocalypse for each chapter of Programming Perl, it became obvious that, as each Apocalypse was written, previous Apocalypses were being invalidated by later changes. For this reason, a set of Synopses were published, each one relating the contents of an Apocalypse, but with any subsequent changes reflected in updates. Today, the Raku specification is managed through the "roast" testing suite, while the Synopses are kept as a historical reference. There are also a series of Exegeses written by Damian Conway that explain the content of each Apocalypse in terms of practical usage. Each Exegesis consists of code examples along with discussion of the usage and implications of the examples. There are three primary methods of communication used in the development of Raku today. The first is the raku IRC channel on Libera Chat. The second
https://en.wikipedia.org/wiki/Buddhabrot
The Buddhabrot is the probability distribution over the trajectories of points that escape the Mandelbrot fractal. Its name reflects its pareidolic resemblance to classical depictions of Gautama Buddha, seated in a meditation pose with a forehead mark (tikka), a traditional oval crown (ushnisha), and ringlet of hair. Discovery The Buddhabrot rendering technique was discovered by Melinda Green, who later described it in a 1993 Usenet post to sci.fractals. Previous researchers had come very close to finding the precise Buddhabrot technique. In 1988, Linas Vepstas relayed similar images to Cliff Pickover for inclusion in Pickover's then-forthcoming book Computers, Pattern, Chaos, and Beauty. This led directly to the discovery of Pickover stalks. Noel Griffin also implemented this idea in the 1993 "Mandelcloud" option in the Fractint renderer. However, these researchers did not filter out non-escaping trajectories required to produce the ghostly forms reminiscent of Hindu art. The inverse, "Anti-Buddhabrot" filter produces images similar to no filtering. Green first named this pattern Ganesh, since an Indian co-worker "instantly recognized it as the god 'Ganesha' which is the one with the head of an elephant." The name Buddhabrot was coined later by Lori Gardi. Rendering method Mathematically, the Mandelbrot set consists of the set of points in the complex plane for which the iteratively defined sequence does tend to infinity as goes to infinity for . The Buddhabrot image can be constructed by first creating a 2-dimensional array of boxes, each corresponding to a final pixel in the image. Each box for and has size in complex coordinates of and , where and for an image of width and height . For each box, a corresponding counter is initialized to zero. Next, a random sampling of points are iterated through the Mandelbrot function. For points which escape within a chosen maximum number of iterations, and therefore are not in the Mandelbrot set, the
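A minimal (and deliberately slow) Python sketch of the accumulation procedure is shown below. The resolution, sample count, iteration limit and plane-to-pixel mapping are arbitrary choices of this sketch; the essential point is that only the trajectories of escaping points are recorded in the counters.

# Minimal Buddhabrot sketch following the rendering method above.
import numpy as np

width = height = 200
max_iter = 500
samples = 200_000
counts = np.zeros((height, width), dtype=np.int64)
rng = np.random.default_rng(1)

def to_pixel(z):
    x = int((z.real + 2.0) / 3.0 * width)     # map real part in [-2, 1] to a column
    y = int((z.imag + 1.5) / 3.0 * height)    # map imaginary part in [-1.5, 1.5] to a row
    return (x, y) if 0 <= x < width and 0 <= y < height else None

for _ in range(samples):
    c = complex(rng.uniform(-2.0, 1.0), rng.uniform(-1.5, 1.5))
    z, path = 0j, []
    for _ in range(max_iter):
        z = z * z + c
        path.append(z)
        if abs(z) > 2.0:                      # escaped: c is outside the Mandelbrot set
            for p in path:                    # only escaping trajectories are recorded
                if (px := to_pixel(p)) is not None:
                    counts[px[1], px[0]] += 1
            break

print("brightest bin count:", counts.max())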
https://en.wikipedia.org/wiki/Boo%20%28programming%20language%29
Boo is an object-oriented, statically typed, general-purpose programming language that seeks to make use of the Common Language Infrastructure's support for Unicode, internationalization, and web applications, while using a Python-inspired syntax and a special focus on language and compiler extensibility. Some features of note include type inference, generators, multimethods, optional duck typing, macros, true closures, currying, and first-class functions. Boo was one of the three scripting languages for the Unity game engine (Unity Technologies employed De Oliveira, its designer), until official support was dropped in 2014 due to the small userbase. The Boo Compiler was removed from the engine in 2017. Boo has since been abandoned by De Oliveira, with development being taken over by Mason Wheeler. Boo is free software released under the BSD 3-Clause license. It is compatible with the Microsoft .NET and Mono frameworks. Syntax
print("Hello World")

def fib():
    a, b = 0L, 1L  # The 'L's make the numbers double word length (typically 64 bits)
    while true:
        yield b
        a, b = b, a + b

# Print the first 5 numbers in the series:
for index as int, element in zip(range(5), fib()):
    print("${index+1}: ${element}")
See also Fantom Apache Groovy IronPython IronRuby Nemerle REBOL StaDyn References External links Official website How To Think Like a Computer Scientist: Learning to Program with Boo Boo Succinctly Revealed Bootorial Programming languages .NET programming languages Brazilian inventions Class-based programming languages Free compilers and interpreters Object-oriented programming languages Procedural programming languages Programming languages created in 2003 Software using the BSD license Statically typed programming languages 2003 software
https://en.wikipedia.org/wiki/Current%20yield
The current yield, interest yield, income yield, flat yield, market yield, mark to market yield or running yield is a financial term used in reference to bonds and other fixed-interest securities such as gilts. It is the ratio of the annual interest (coupon) payment and the bond's price: According to Investopedia, the clean market price of the bond should be the denominator in this calculation. Example The current yield of a bond with a face value (F) of $100 and a coupon rate (r) of 5.00% that is selling at $95.00 (clean; not including accrued interest) (P) is calculated as follows. Shortcomings of current yield The current yield refers only to the yield of the bond at the current moment. It does not reflect the total return over the life of the bond, or the factors affecting total return, such as: the length of time over which the bond produces cash flows for the investor (the maturity date of the bond), interest earned on reinvested coupon payments, or reinvestment risk (the uncertainty about the rate at which future cash flows can be reinvested), and fluctuations in the market price of a bond prior to maturity. Relationship between yield to maturity and coupon rate The concept of current yield is closely related to other bond concepts, including yield to maturity (YTM), and coupon yield. When a coupon-bearing bond sells at; a discount: YTM > current yield > coupon yield a premium: coupon yield > current yield > YTM par: YTM = current yield = coupon yield. For zero-coupon bonds selling at a discount, the coupon yield and current yield are zero, and the YTM is positive. See also Adjusted current yield References Current Yield at investopedia.com Mathematical finance Bond valuation Fixed income analysis
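The worked example above can be reproduced in a couple of lines of Python; the variable names are mine, and the clean price is used as the denominator, as noted above.

# The worked example: a $100 face-value bond with a 5.00% coupon trading at $95.00 clean.
face_value = 100.0
coupon_rate = 0.05
clean_price = 95.0

annual_coupon = face_value * coupon_rate          # $5.00 per year
current_yield = annual_coupon / clean_price
print(f"current yield = {current_yield:.4%}")     # roughly 5.2632%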
https://en.wikipedia.org/wiki/Exponential%20backoff
Exponential backoff is an algorithm that uses feedback to multiplicatively decrease the rate of some process, in order to gradually find an acceptable rate. These algorithms find usage in a wide range of systems and processes, with radio networks and computer networks being particularly notable. Exponential backoff algorithm An exponential backoff algorithm is a form of closed-loop control system that reduces the rate of a controlled process in response to adverse events. For example, if a smartphone app fails to connect to its server, it might try again 1 second later, then if it fails again, 2 seconds later, then 4, etc. Each time the pause is multiplied by a fixed amount (in this case 2). In this case, the adverse event is failing to connect to the server. Other examples of adverse events include collisions of network traffic, an error response from a service, or an explicit request to reduce the rate (i.e. "back off"). The rate reduction can be modelled as an exponential function: or Here, is the time delay applied between actions, is the multiplicative factor or "base", is the number of adverse events observed, and is the frequency (or rate) of the process (i.e. number of actions per unit of time). The value of is incremented each time an adverse event is observed, leading to an exponential rise in delay and, therefore, an inversely proportionate rate. An exponential backoff algorithm where is referred to as a binary exponential backoff algorithm. When the rate has been reduced in response to an adverse event, it usually does not remain at that reduced level forever. If no adverse events are observed for some period of time, often referred to as the recovery time or cooling-off period, the rate may be increased again. The time period that must elapse before attempting to increase the rate again may, itself, be determined by an exponential backoff algorithm. Typically, recovery of the rate occurs more slowly than reduction of the rate due to backoff,
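A minimal sketch of a binary exponential backoff retry loop (b = 2) is shown below; the base delay, the retry cap, and the deliberately flaky example operation are assumptions of this sketch, not part of the article. The delays it produces for successive failures (1 s, 2 s, 4 s, ...) match the smartphone example above.

# Minimal binary exponential backoff retry helper.
import random
import time

def call_with_backoff(operation, base_delay=1.0, factor=2.0, max_retries=6):
    for attempt in range(max_retries + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_retries:
                raise                                  # give up after the final attempt
            delay = base_delay * factor ** attempt     # t = base * b**c
            time.sleep(delay)

def flaky():
    """Example operation that fails randomly about 70% of the time."""
    if random.random() < 0.7:
        raise RuntimeError("transient failure")
    return 42

print(call_with_backoff(flaky))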
https://en.wikipedia.org/wiki/Symmetry%20in%20biology
Symmetry in biology refers to the symmetry observed in organisms, including plants, animals, fungi, and bacteria. External symmetry can be easily seen by just looking at an organism. For example, the face of a human being has a plane of symmetry down its centre, or a pine cone displays a clear symmetrical spiral pattern. Internal features can also show symmetry, for example the tubes in the human body (responsible for transporting gases, nutrients, and waste products) which are cylindrical and have several planes of symmetry. Biological symmetry can be thought of as a balanced distribution of duplicate body parts or shapes within the body of an organism. Importantly, unlike in mathematics, symmetry in biology is always approximate. For example, plant leaves – while considered symmetrical – rarely match up exactly when folded in half. Symmetry is one class of patterns in nature whereby there is near-repetition of the pattern element, either by reflection or rotation. While sponges and placozoans represent two groups of animals which do not show any symmetry (i.e. are asymmetrical), the body plans of most multicellular organisms exhibit, and are defined by, some form of symmetry. There are only a few types of symmetry which are possible in body plans. These are radial (cylindrical), bilateral, biradial and spherical symmetry. While the classification of viruses as an "organism" remains controversial, viruses also contain icosahedral symmetry. The importance of symmetry is illustrated by the fact that groups of animals have traditionally been defined by this feature in taxonomic groupings. The Radiata, animals with radial symmetry, formed one of the four branches of Georges Cuvier's classification of the animal kingdom. Meanwhile, Bilateria is a taxonomic grouping still used today to represent organisms with embryonic bilateral symmetry. Radial symmetry Organisms with radial symmetry show a repeating pattern around a central axis such that they can be separated in
https://en.wikipedia.org/wiki/Egyptian%20numerals
The system of ancient Egyptian numerals was used in Ancient Egypt from around 3000 BCE until the early first millennium CE. It was a system of numeration based on multiples of ten, often rounded off to the higher power, written in hieroglyphs. The Egyptians had no concept of a positional notation such as the decimal system. The hieratic form of numerals stressed an exact finite series notation, ciphered one-to-one onto the Egyptian alphabet. Digits and numbers The following hieroglyphs were used to denote powers of ten: Multiples of these values were expressed by repeating the symbol as many times as needed. For instance, a stone carving from Karnak shows the number 4,622 as: Egyptian hieroglyphs could be written in both directions (and even vertically). In this example the symbols decrease in value from top to bottom and from left to right. On the original stone carving, it is right-to-left, and the signs are thus reversed. Zero and negative numbers By 1740 BCE, the Egyptians had a symbol for zero in accounting texts. The symbol nfr (𓄤), meaning beautiful, was also used to indicate the base level in drawings of tombs and pyramids and distances were measured relative to the base line as being above or below this line. Fractions Rational numbers could also be expressed, but only as sums of unit fractions, i.e., sums of reciprocals of positive integers, except for and . The hieroglyph indicating a fraction looked like a mouth, which meant "part": Fractions were written with this fractional solidus, i.e., the numerator 1, and the positive denominator below. Thus, was written as: Special symbols were used for and for the non-unit fractions and, less frequently, : If the denominator became too large, the "mouth" was just placed over the beginning of the "denominator": Written numbers As with most modern day languages, the ancient Egyptian language could also write out numerals as words phonetically, just like one can write thirty instead of "30" in English
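The additive, repeat-the-symbol structure can be sketched as a simple decomposition into powers of ten. The English symbol names below are descriptive stand-ins for the actual hieroglyphs, and the Karnak value 4,622 is reused as the example.

# Decompose a number into counts of each power-of-ten symbol.
SYMBOLS = {1_000_000: "god with raised arms", 100_000: "tadpole", 10_000: "finger",
           1_000: "lotus", 100: "coil of rope", 10: "heel bone", 1: "stroke"}

def egyptian_digits(n: int) -> dict:
    counts = {}
    for value in sorted(SYMBOLS, reverse=True):
        counts[SYMBOLS[value]], n = divmod(n, value)
    return {name: c for name, c in counts.items() if c}

print(egyptian_digits(4622))
# {'lotus': 4, 'coil of rope': 6, 'heel bone': 2, 'stroke': 2}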
https://en.wikipedia.org/wiki/TPK%20algorithm
The TPK algorithm is a simple program introduced by Donald Knuth and Luis Trabb Pardo to illustrate the evolution of computer programming languages. In their 1977 work "The Early Development of Programming Languages", Trabb Pardo and Knuth introduced a small program that involved arrays, indexing, mathematical functions, subroutines, I/O, conditionals and iteration. They then wrote implementations of the algorithm in several early programming languages to show how such concepts were expressed. To explain the name "TPK", the authors referred to Grimm's law (which concerns the consonants 't', 'p', and 'k'), the sounds in the word "typical", and their own initials (Trabb Pardo and Knuth). In a talk based on the paper, Knuth said: The algorithm Knuth describes it as follows: In pseudocode:
ask for 11 numbers to be read into a sequence S
reverse sequence S
for each item in sequence S
    call a function to do an operation
    if result overflows
        alert user
    else
        print result
The algorithm reads eleven numbers from an input device, stores them in an array, and then processes them in reverse order, applying a user-defined function to each value and reporting either the value of the function or a message to the effect that the value has exceeded some threshold. Implementations Implementations in the original paper In the original paper, which covered "roughly the first decade" of the development of high-level programming languages (from 1945 up to 1957), they gave the following example implementation "in a dialect of ALGOL 60", noting that ALGOL 60 was a later development than the languages actually discussed in the paper:
TPK: begin integer i; real y; real array a[0:10];
    real procedure f(t); real t; value t;
    f := sqrt(abs(t)) + 5 × t ↑ 3;
    for i := 0 step 1 until 10 do read(a[i]);
    for i := 10 step -1 until 0 do
        begin y := f(a[i]);
            if y > 400 then write(i, 'TOO LARGE')
            else write(i, y);
        end
end T
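For comparison with the ALGOL 60 version, a rendering of the same algorithm in modern Python might look as follows; reading the eleven values from a list rather than an input device is a simplification of this sketch.

# TPK in Python: read 11 values, process them in reverse, report f(t) or an overflow message.
from math import sqrt

def f(t: float) -> float:
    return sqrt(abs(t)) + 5 * t ** 3

def tpk(values):
    assert len(values) == 11
    for i, t in reversed(list(enumerate(values))):
        y = f(t)
        if y > 400:
            print(i, "TOO LARGE")
        else:
            print(i, y)

tpk([0.1 * k for k in range(10)] + [5.0])   # the last value triggers the overflow branch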
https://en.wikipedia.org/wiki/Comparison%20of%20e-readers
An e-reader, also known as an e-book reader, is a portable electronic device that is designed primarily for the purpose of reading e-books and periodicals. E-readers have a similar form factor to a tablet and usually refers to devices that use electronic paper resulting in better screen readability, especially in bright sunlight, and longer battery life when compared to a tablet. An e-reader's battery will typically last for multiple weeks. In contrast to an e-reader, a tablet has a screen capable of higher refresh rates which make them more suitable for interaction such as playing a video game or watching a video clip. Types of electronic-paper displays All electronic paper types offer lower power consumption and better sunlight contrast than LCDs. Some offer a backlight to allow low-light reading. With the backlight turned off, all have a similar appearance to "ink on paper" and are readable in bright environments. E ink: E Ink Corporation's 1st-generation technology, also known as E Ink Vizplex. Although "eInk" may be used to talk about all electronic paper displays, "eInk" and "E Ink" are trademarked by E Ink, which provides the majority of the electronic paper displays used in devices. E Ink Pearl: E Ink's 2nd-generation technology, which has higher contrast and a greater number of different levels of gray than their earlier technology. A further revision of Pearl is Mobius, which uses flexible plastic in the display. E Ink Triton: E Ink's 3rd-generation technology that featured the ability to show color in the display. E Ink Carta: E Ink's 4th-generation technology, which has higher contrast and a greater number of different levels of gray than their earlier technology; the displays have a pixel density between 212 and 300 ppi. SiPix: A former competitor to E Ink, which had displays with almost instantaneous page turns. It was acquired by E Ink in December 2012. LG Flex: Electronic paper that allows a flexible screen; is still in development. With thes
https://en.wikipedia.org/wiki/Rotary%20converter
A rotary converter is a type of electrical machine which acts as a mechanical rectifier, inverter or frequency converter. Rotary converters were used to convert alternating current (AC) to direct current (DC), or DC to AC power, before the advent of chemical or solid state power rectification and inverting. They were commonly used to provide DC power for commercial, industrial and railway electrification from an AC power source. Principles of operation The rotary converter can be thought of as a motor-generator, where the two machines share a single rotating armature and set of field coils. The basic construction of the rotary converter consists of a DC generator (dynamo) with a set of slip rings tapped into its rotor windings at evenly spaced intervals. When a dynamo is spun the electric currents in its rotor windings alternate as it rotates in the magnetic field of the stationary field windings. This alternating current is rectified by means of a commutator, which allows direct current to be extracted from the rotor. This principle is taken advantage of by energizing the same rotor windings with AC power, which causes the machine to act as a synchronous AC motor. The rotation of the energized coils excites the stationary field windings producing part of the direct current. The other part is alternating current from the slip rings, which is directly rectified into DC by the commutator. This makes the rotary converter a hybrid dynamo and mechanical rectifier. When used in this way it is referred to as a synchronous rotary converter or simply a synchronous converter. The AC slip rings also allow the machine to act as an alternator. The device can be reversed and DC applied to the field and commutator windings to spin the machine and produce AC power. When operated as a DC to AC machine it is referred to as an inverted rotary converter. One way to envision what is happening in an AC-to-DC rotary converter is to imagine a rotary reversing switch that is being dr
https://en.wikipedia.org/wiki/Computer%20tower
In personal computing, a tower is a form factor of desktop computer case whose height is much greater than its width, thus having the appearance of an upstanding tower block, as opposed to a traditional "pizza box" computer case whose width is greater than its height and appears lying flat. Compared to a pizza box case, the tower tends to be larger and offers more potential for internal volume for the same desk area occupied, and therefore allows more hardware installation and theoretically better airflow for cooling. Multiple size subclasses of the tower form factor have been established to differentiate their varying heights, including full-tower, mid-tower, midi-tower and mini-tower; these classifications are however nebulously defined and inconsistently applied by different manufacturers. Although the traditional layout for a tower system is to have the case placed on top of the desk alongside the monitor and other peripherals, a far more common configuration is to place the case on the floor below the desk or in an under-desk compartment, in order to free up desktop space for other items. Computer systems housed in the horizontal "pizza box" form factor — once popularized by the IBM PC in the 1980s but fallen out of mass use since the late 1990s — have been given the term desktops to contrast them with the often underdesk-situated towers. Subclasses Tower cases are often categorized as mini-tower, midi-tower, mid-tower, or full-tower. The terms are subjective and inconsistently defined by different manufacturers. Full-tower Full-tower cases, typically or more in height, are designed for maximum scalability. For case modding enthusiasts and gamers wanting to play the most technically challenging video games, the full-tower case also makes for an ideal gaming PC case because of their ability to accommodate extensive water cooling setups and larger case fans. Traditionally, full-tower systems had between four and six externally accessible half-height 5.25-
https://en.wikipedia.org/wiki/Instant-on
In computing, instant-on is the ability to boot nearly instantly, allowing to go online or to use a specific application without waiting for a PC's traditional operating system to launch. Instant-on technology is today mostly used on laptops, netbooks, and nettops because the user can boot up one program, instead of waiting for the PC's operating system to boot. This allows a user to launch a single program, such as a movie-playing program or a web browser, without the need of the whole operating system. There still remain a few true instant-on machines such as the Atari ST, as described in the Booting article. These machines had complete Operating Systems resident in ROM similar to the way in which the BIOS function is conventionally provided on current computer architectures. The "instant-on" concept as used here results from loading an OS, such as a legacy system DOS, with a small hard drive footprint. Latency inherent to mechanical drive performance can also be eliminated by using Live USB or Live SD flash memory to load systems at electronic speeds which are orders of magnitude faster. List of systems Acer InstaBoot Netbook (based on Splashtop) Acer RevoBoot Nettop (based on Splashtop) Asus Express Gate motherboards, notebooks, Eee Box (nettop), and EeePCs (based on Splashtop) Canonical product announced in early 2010 Dell Latitude ON, Latitude On Reader (based on Splashtop), Latitude On Flash (based on Splashtop) Google ChromeOS HP QuickWeb Probook notebook (based on Splashtop) HP Instant On Solution Voodoo & Envy notebook (based on Splashtop) HP Instant Web netbook (based on Splashtop) Lenovo QuickStart (based on Splashtop) LG SmartOn (based on Splashtop) Mandriva InstantOn MSI Winki Palm Foleo Phoenix HyperSpace Sony Quick Web Access (based on Splashtop) Splashtop Inc. Splashtop Xandros Presto Timeline In October 2007, ASUS introduced an instant-on capability branded "Express Gate" on select motherboards, using DeviceVM's Splashtop software and dedicat
https://en.wikipedia.org/wiki/Slurry
A slurry is a mixture of denser solids suspended in liquid, usually water. The most common use of slurry is as a means of transporting solids or separating minerals, the liquid being a carrier that is pumped on a device such as a centrifugal pump. The size of solid particles may vary from 1 micrometre up to hundreds of millimetres. The particles may settle below a certain transport velocity and the mixture can behave like a Newtonian or non-Newtonian fluid. Depending on the mixture, the slurry may be abrasive and/or corrosive. Examples Examples of slurries include: Cement slurry, a mixture of cement, water, and assorted dry and liquid additives used in the petroleum and other industries Soil/cement slurry, also called Controlled Low-Strength Material (CLSM), flowable fill, controlled density fill, flowable mortar, plastic soil-cement, K-Krete, and other names A mixture of thickening agent, oxidizers, and water used to form a gel explosive A mixture of pyroclastic material, rocky debris, and water produced in a volcanic eruption and known as a lahar A mixture of bentonite and water used to make slurry walls Coal slurry, a mixture of coal waste and water, or crushed coal and water Slip, a mixture of clay and water used for joining, glazing and decoration of ceramics and pottery. Slurry oil, the highest boiling fraction distilled from the effluent of an FCC unit in an oil refinery. It contains a large amount of catalyst, in form of sediments hence the denomination of slurry. A mixture of wood pulp and water used to make paper Manure slurry, a mixture of animal waste, organic matter, and sometimes water often known simply as "slurry" in agricultural use, used as fertilizer after aging in a slurry pit Meat slurry, a mixture of finely ground meat and water, centrifugally dewatered and used as a food ingredient. An abrasive substance used in chemical-mechanical polishing Slurry ice, a mixture of ice crystals, freezing point depressant, and water A mixture of raw materials
https://en.wikipedia.org/wiki/157%20%28number%29
157 (one hundred [and] fifty-seven) is the number following 156 and preceding 158. In mathematics 157 is: the 37th prime number. The next prime is 163 and the previous prime is 151. a balanced prime, because the arithmetic mean of those primes yields 157. an emirp. a Chen prime. the largest known prime p which is also prime. (see ). the least irregular prime with index 2. a palindromic number in bases 7 (3137) and 12 (11112). a repunit in base 12, so it is a unique prime in the same base. a prime whose digits sum to a prime. (see ). a prime index prime. In base 10, 1572 is 24649, and 1582 is 24964, which uses the same digits. Numbers having this property are listed in . The previous entry is 13, and the next entry after 157 is 913. The simplest right angle triangle with rational sides that has area 157 has the longest side with a denominator of 45 digits. In the military was a United States Coast Guard cutter built in 1926 was a United States Navy Type T2 tanker during World War II was a United States Navy Alamosa-class cargo ship during World War II was a United States Navy Admirable-class minesweeper during World War II was a United States Navy Wickes-class destroyer during World War II was a United States Navy Buckley-class destroyer escort during World War II was a United States Navy General G. O. Squier-class transport ship during World War II was a United States Navy LST-542-class tank landing ship during World War II was a United States Navy ship during World War II was a United States Navy transport military ship during World War II was a United States Navy yacht during World War I ZIL-157 is a 2.5-ton truck produced in post-World War II Russia In music "157 Riverside Avenue" is a song by REO Speedwagon from their debut album, REO Speedwagon in 1971. Its title refers to a Westport, Connecticut address where the band stayed while recording it. Piano Sonata No. 1 in E major, D. 157 is a piano sonata in three movements
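The squared-digit observation is easy to verify directly; this tiny check (not from the article) also confirms the previous entry, 13.

# 157**2 and 158**2 use the same multiset of digits; so do 13**2 and 14**2.
def same_digits(a: int, b: int) -> bool:
    return sorted(str(a)) == sorted(str(b))

print(157**2, 158**2, same_digits(157**2, 158**2))   # 24649 24964 True
print(13**2, 14**2, same_digits(13**2, 14**2))       # 169 196 True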
https://en.wikipedia.org/wiki/Radioimmunoassay
A radioimmunoassay (RIA) is an immunoassay that uses radiolabeled molecules in a stepwise formation of immune complexes. A RIA is a very sensitive in vitro assay technique used to measure concentrations of substances, usually measuring antigen concentrations (for example, hormone levels in blood) by use of antibodies. Although the RIA technique is extremely sensitive and extremely specific, requiring specialized equipment, it remains among the least expensive methods to perform such measurements. It requires special precautions and licensing, since radioactive substances are used. In contrast, an immunoradiometric assay (IRMA) is an immunoassay that uses radiolabeled molecules but in an immediate rather than stepwise way. A radioallergosorbent test (RAST) is an example of radioimmunoassay. It is used to detect the causative allergen for an allergy. Method Classically, to perform a radioimmunoassay, a known quantity of an antigen is made radioactive, frequently by labeling it with gamma-radioactive isotopes of iodine, such as 125-I, or tritium attached to tyrosine. This radiolabeled antigen is then mixed with a known amount of antibody for that antigen, and as a result, the two specifically bind to one another. Then, a sample of serum from a patient containing an unknown quantity of that same antigen is added. This causes the unlabeled (or "cold") antigen from the serum to compete with the radiolabeled antigen ("hot") for antibody binding sites. As the concentration of "cold" antigen is increased, more of it binds to the antibody, displacing the radiolabeled variant, and reducing the ratio of antibody-bound radiolabeled antigen to free radiolabeled antigen. The bound antigens are then separated and the radioactivity of the free(unbound) antigen remaining in the supernatant is measured using a gamma counter. This value is then compared to a standardised calibration curve to work out the concentration of the unlabelled antigen in the patient serum sample. RIAs c
https://en.wikipedia.org/wiki/Nicolaus%20II%20Bernoulli
Nicolaus II Bernoulli (also spelled as Niklaus or Nikolaus; 6 February 1695 in Basel – 31 July 1726 in Saint Petersburg) was a Swiss mathematician as were his father Johann Bernoulli and one of his brothers, Daniel Bernoulli. He was one of the many prominent mathematicians in the Bernoulli family. Work Nicolaus worked mostly on curves, differential equations, and probability. He was a friend and contemporary of Leonhard Euler, who studied under Nicolaus' father. He also contributed to fluid dynamics. He was older brother of Daniel Bernoulli, to whom he also taught mathematics. Even in his youth he had learned several languages. From the age of 13, he studied mathematics and law at the University of Basel. In 1711 he received his Master's of Philosophy; in 1715 he received a Doctorate in Law. In 1716-17 he was a private tutor in Venice. From 1719 he had the Chair in Mathematics at the University of Padua, as the successor of Giovanni Poleni. He served as an assistant to his father, among other areas, in the correspondence over the priority dispute between Isaac Newton and Leibniz, and also in the priority dispute between his father and the English mathematician Brook Taylor. In 1720 he posed the problem of reciprocal orthogonal trajectories, which was intended as a challenge for the English Newtonians. From 1723 he was a law professor at the Berner Oberen Schule. In 1725 he together with his brother Daniel, with whom he was touring Italy and France at this time, was invited by Peter the Great to the newly founded St. Petersburg Academy. Eight months after his appointment he came down with a fever and died. His professorship was succeeded in 1727 by Leonhard Euler, whom the Bernoulli brothers had recommended. His early death cut short a promising career. See also Bernoulli distribution Bernoulli process Bernoulli trial St. Petersburg paradox External links Further reading 1695 births 1726 deaths Mathematical analysts Probability theorists Swiss Calvin
https://en.wikipedia.org/wiki/FL%20%28complexity%29
In computational complexity theory, the complexity class FL is the set of function problems which can be solved by a deterministic Turing machine in a logarithmic amount of memory space. As in the definition of L, the machine reads its input from a read-only tape and writes its output to a write-only tape; the logarithmic space restriction applies only to the read/write working tape. Loosely speaking, a function problem takes a complicated input and produces a (perhaps equally) complicated output. Function problems are distinguished from decision problems, which produce only Yes or No answers and corresponds to the set L of decision problems which can be solved in deterministic logspace. FL is a subset of FP, the set of function problems which can be solved in deterministic polynomial time. FL is known to contain several natural problems, including arithmetic on numbers. Addition, subtraction and multiplication of two numbers are fairly simple, but achieving division is a far deeper problem which was open for decades. Similarly one may define FNL, which has the same relation with NL as FNP has with NP. References Complexity classes
https://en.wikipedia.org/wiki/Polydimethylsiloxane
Polydimethylsiloxane (PDMS), also known as dimethylpolysiloxane or dimethicone, is a silicone polymer with a wide variety of uses, from cosmetics to industrial lubrication. It is particularly known for its unusual rheological (or flow) properties. PDMS is optically clear and, in general, inert, non-toxic, and non-flammable. It is one of several types of silicone oil (polymerized siloxane). Its applications range from contact lenses and medical devices to elastomers; it is also present in shampoos (as it makes hair shiny and slippery), food (antifoaming agent), caulk, lubricants and heat-resistant tiles. Structure The chemical formula of PDMS is , where n is the number of repeating monomer units. Industrial synthesis can begin from dimethyldichlorosilane and water by the following net reaction: The polymerization reaction evolves hydrochloric acid. For medical and domestic applications, a process was developed in which the chlorine atoms in the silane precursor were replaced with acetate groups. In this case, the polymerization produces acetic acid, which is less chemically aggressive than HCl. As a side-effect, the curing process is also much slower in this case. The acetate is used in consumer applications, such as silicone caulk and adhesives. Branching and capping Hydrolysis of generates a polymer that is terminated with silanol groups (). These reactive centers are typically "capped" by reaction with trimethylsilyl chloride: Silane precursors with more acid-forming groups and fewer methyl groups, such as methyltrichlorosilane, can be used to introduce branches or cross-links in the polymer chain. Under ideal conditions, each molecule of such a compound becomes a branch point. This can be used to produce hard silicone resins. In a similar manner, precursors with three methyl groups can be used to limit molecular weight, since each such molecule has only one reactive site and so forms the end of a siloxane chain. Well-defined PDMS with a low polydispersity
https://en.wikipedia.org/wiki/Pregnancy%20over%20age%2050
Pregnancy over the age of 50 has become possible for more women due to advances in assisted reproductive technology, in particular egg donation. Typically, a woman's fecundity ends with menopause, which, by definition, is 12 consecutive months without having had any menstrual flow at all. During perimenopause, the menstrual cycle and the periods become irregular and eventually stop altogether. The female biological clock can vary greatly from woman to woman. A woman's individual level of fertility can be tested through a variety of methods. In the United States, between 1997 and 1999, 539 births were reported among mothers over age 50 (four per 100,000 births), with 194 being over 55. The oldest recorded mother to date to conceive was 73 years. According to statistics from the Human Fertilisation and Embryology Authority, in the UK more than 20 babies are born to women over age 50 per year through in vitro fertilization with the use of donor oocytes (eggs). Maria del Carmen Bousada de Lara formerly held the record of oldest verified mother; she was aged 66 years 358 days when she gave birth to twins; she was 130 days older than Adriana Iliescu, who gave birth in 2005 to a baby girl. In both cases, the children were conceived through IVF with donor eggs. The oldest verified mother to conceive naturally (listed currently in the Guinness Records) is Dawn Brooke (Guernsey); she conceived a son at the age of 59 in 1997. Erramatti Mangamma currently holds the record for being the oldest living mother who gave birth at the age of 73 through in-vitro fertilisation via caesarean section in the city of Hyderabad, India. She delivered twin baby girls, making her also the oldest mother to give birth to twins. The previous record for being the oldest living mother was held by Daljinder Kaur Gill from Amritsar, India who gave birth to a baby boy at age 72 through in-vitro fertilisation. Age considerations Menopause typically occurs between 44 and 58 years of age. DNA testi
https://en.wikipedia.org/wiki/Linksys%20WRT54G%20series
The Linksys WRT54G Wi-Fi series is a series of Wi-Fi–capable residential gateways marketed by Linksys, a subsidiary of Cisco, from 2003 until acquired by Belkin in 2013. A residential gateway connects a local area network (such as a home network) to a wide area network (such as the Internet). Models in this series use one of various 32-bit MIPS processors. All WRT54G models support Fast Ethernet for wired data links, and 802.11b/g for wireless data links. Hardware and revisions WRT54G The original WRT54G was first released in December 2002. It has a 4+1 port network switch (the Internet/WAN port is part of the same internal network switch, but on a different VLAN). The devices have two removable antennas connected through Reverse Polarity TNC connectors. The WRT54GC router is an exception and has an internal antenna with optional external antenna. As a cost-cutting measure, as well as to satisfy FCC rules that prohibit fitting external antennas with higher gain, the design of the latest version of the WRT54G no longer has detachable antennas or TNC connectors. Instead, version 8 routers simply route thin wires into antenna 'shells' eliminating the connector. As a result, Linksys HGA7T and similar external antennas are no longer compatible with this model. Until version 5, WRT54G shipped with Linux-based firmware. WRT54GS The WRT54GS is nearly identical to the WRT54G except for additional RAM, flash memory, and SpeedBooster software. Versions 1 to 3 of this router have 8 MB of flash memory. Since most third parties' firmware only use up to 4 MB flash, a JFFS2-based read/write filesystem can be created and used on the remaining 4 MB free flash. This allows for greater flexibility of configurations and scripting, enabling this small router to both load-balance multiple ADSL lines (multi-homed) or to be run as a hardware layer-2 load balancer (with appropriate third party firmware). WRT54GL Linksys released the WRT54GL (the best-selling router of all time)
https://en.wikipedia.org/wiki/Admissible%20rule
In logic, a rule of inference is admissible in a formal system if the set of theorems of the system does not change when that rule is added to the existing rules of the system. In other words, every formula that can be derived using that rule is already derivable without that rule, so, in a sense, it is redundant. The concept of an admissible rule was introduced by Paul Lorenzen (1955). Definitions Admissibility has been systematically studied only in the case of structural (i.e. substitution-closed) rules in propositional non-classical logics, which we will describe next. Let a set of basic propositional connectives be fixed (for instance, in the case of superintuitionistic logics, or in the case of monomodal logics). Well-formed formulas are built freely using these connectives from a countably infinite set of propositional variables p0, p1, .... A substitution σ is a function from formulas to formulas that commutes with applications of the connectives, i.e., for every connective f, and formulas A1, ... , An. (We may also apply substitutions to sets Γ of formulas, making ) A Tarski-style consequence relation is a relation between sets of formulas, and formulas, such that for all formulas A, B, and sets of formulas Γ, Δ. A consequence relation such that for all substitutions σ is called structural. (Note that the term "structural" as used here and below is unrelated to the notion of structural rules in sequent calculi.) A structural consequence relation is called a propositional logic. A formula A is a theorem of a logic if . For example, we identify a superintuitionistic logic L with its standard consequence relation generated by modus ponens and axioms, and we identify a normal modal logic with its global consequence relation generated by modus ponens, necessitation, and (as axioms) the theorems of the logic. A structural inference rule (or just rule for short) is given by a pair (Γ, B), usually written as where Γ = {A1, ... , An} is a finite set
https://en.wikipedia.org/wiki/Codex%20Alimentarius
The is a collection of internationally recognized standards, codes of practice, guidelines, and other recommendations published by the Food and Agriculture Organization of the United Nations relating to food, food production, food labeling, and food safety. History and governance Its name is derived from the Codex Alimentarius Austriacus. Its texts are developed and maintained by the Codex Alimentarius Commission (CAC), a body established in early November 1961 by the Food and Agriculture Organization of the United Nations (FAO), was joined by the World Health Organization (WHO) in June 1962, and held its first session in Rome in October 1963. The Commission's main goals are to protect the health of consumers, to facilitate international trade, and ensure fair practices in the international food trade. The CAC is an intergovernmental organization; the member states of the FAO and WHO send delegations to the CAC. As of 2021, there were 189 members of the CAC (188 member countries plus one member organization, the European Union (EU) and 239 Codex observers (59 intergovernmental organizations, 164 non-governmental organizations, and 16 United Nations organizations). The CAC develops food standards on scientific evidence furnished by the scientific committees of the FAO and WHO; the oldest of these, the Joint FAO/WHO Expert Committee on Food Additives (JECFA), was established in 1956 and predates the establishment of the CAC itself. According to a 2013 study, the CAC's primary functions are "establishing international food standards for approved food additives providing maximum levels in foods, maximum limits for contaminants and toxins, maximum residue limits for pesticides and for veterinary drugs used in veterinary animals, and establishing hygiene and technological function practice codes". The CAC does not have regulatory authority, and the Codex Alimentarius is a reference guide, not an enforceable standard on its own. However, several nations adopt the Co
https://en.wikipedia.org/wiki/Logic%20in%20computer%20science
Logic in computer science covers the overlap between the field of logic and that of computer science. The topic can essentially be divided into three main areas: Theoretical foundations and analysis Use of computer technology to aid logicians Use of concepts from logic for computer applications Theoretical foundations and analysis Logic plays a fundamental role in computer science. Some of the key areas of logic that are particularly significant are computability theory (formerly called recursion theory), modal logic and category theory. The theory of computation is based on concepts defined by logicians and mathematicians such as Alonzo Church and Alan Turing. Church first showed the existence of algorithmically unsolvable problems using his notion of lambda-definability. Turing gave the first compelling analysis of what can be called a mechanical procedure and Kurt Gödel asserted that he found Turing's analysis "perfect." In addition some other major areas of theoretical overlap between logic and computer science are: Gödel's incompleteness theorem proves that any logical system powerful enough to characterize arithmetic will contain statements that can neither be proved nor disproved within that system. This has direct application to theoretical issues relating to the feasibility of proving the completeness and correctness of software. The frame problem is a basic problem that must be overcome when using first-order logic to represent the goals and state of an artificial intelligence agent. The Curry–Howard correspondence is a relation between logical systems and software. This theory established a precise correspondence between proofs and programs. In particular it showed that terms in the simply typed lambda calculus correspond to proofs of intuitionistic propositional logic. Category theory represents a view of mathematics that emphasizes the relations between structures. It is intimately tied to many aspects of computer science: type systems fo
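As a small illustration of the proofs-as-programs reading of the Curry–Howard correspondence described above, using Python's optional type annotations as a stand-in for the simply typed lambda calculus (the function name is arbitrary):

from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    # Read as a proof: from A -> B and B -> C one obtains A -> C;
    # the body of the function is the proof term witnessing the implication.
    return lambda a: g(f(a))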
https://en.wikipedia.org/wiki/Mathematics%20Genealogy%20Project
The Mathematics Genealogy Project (MGP) is a web-based database for the academic genealogy of mathematicians. it contained information on 274,575 mathematical scientists who contributed to research-level mathematics. For a typical mathematician, the project entry includes graduation year, thesis title (in its Mathematics Subject Classification), alma mater, doctoral advisor, and doctoral students. Origin of the database The project grew out of founder Harry Coonce's desire to know the name of his advisor's advisor. Coonce was Professor of Mathematics at Minnesota State University, Mankato, at the time of the project's founding, and the project went online there in fall 1997. Coonce retired from Mankato in 1999, and in fall 2002 the university decided that it would no longer support the project. The project relocated at that time to North Dakota State University. Since 2003, the project has also operated under the auspices of the American Mathematical Society and in 2005 it received a grant from the Clay Mathematics Institute. Harry Coonce has been assisted by Mitchel T. Keller, Assistant Professor at Morningside College. Keller is currently the Managing Director of the project. Mission and scope The Mathematics Genealogy Mission statement: "Throughout this project when we use the word 'mathematics' or 'mathematician' we mean that word in a very inclusive sense. Thus, all relevant data from statistics, computer science, philosophy or operations research is welcome." Scope The genealogy information is obtained from sources such as Dissertation Abstracts International and Notices of the American Mathematical Society, but may be supplied by anyone via the project's website. The searchable database contains the name of the mathematician, university which awarded the degree, year when the degree was awarded, title of the dissertation, names of the advisor and second advisor, a flag of the country where the degree was awarded, a listing of doctoral students, and a cou
https://en.wikipedia.org/wiki/IP%20Multimedia%20Subsystem
The IP Multimedia Subsystem or IP Multimedia Core Network Subsystem (IMS) is a standardised architectural framework for delivering IP multimedia services. Historically, mobile phones have provided voice call services over a circuit-switched-style network, rather than strictly over an IP packet-switched network. Alternative methods of delivering voice (VoIP) or other multimedia services have become available on smartphones, but they have not become standardized across the industry. IMS is an architectural framework that provides such standardization. IMS was originally designed by the wireless standards body 3rd Generation Partnership Project (3GPP), as a part of the vision for evolving mobile networks beyond GSM. Its original formulation (3GPP Rel-5) represented an approach for delivering Internet services over GPRS. This vision was later updated by 3GPP, 3GPP2 and ETSI TISPAN by requiring support of networks other than GPRS, such as Wireless LAN, CDMA2000 and fixed lines. IMS uses IETF protocols wherever possible, e.g., the Session Initiation Protocol (SIP). According to the 3GPP, IMS is not intended to standardize applications, but rather to aid the access of multimedia and voice applications from wireless and wireline terminals, i.e., to create a form of fixed-mobile convergence (FMC). This is done by having a horizontal control layer that isolates the access network from the service layer. From a logical architecture perspective, services need not have their own control functions, as the control layer is a common horizontal layer. However, in implementation this does not necessarily map into greater reduced cost and complexity. Alternative and overlapping technologies for access and provisioning of services across wired and wireless networks include combinations of Generic Access Network, softswitches and "naked" SIP. Since it is becoming increasingly easier to access content and contacts using mechanisms outside the control of traditional wireless/fixed o
https://en.wikipedia.org/wiki/Expectiminimax
The expectiminimax algorithm is a variation of the minimax algorithm, for use in artificial intelligence systems that play two-player zero-sum games, such as backgammon, in which the outcome depends on a combination of the player's skill and chance elements such as dice rolls. In addition to "min" and "max" nodes of the traditional minimax tree, this variant has "chance" ("move by nature") nodes, which take the expected value of a random event occurring. In game theory terms, an expectiminimax tree is the game tree of an extensive-form game of perfect, but incomplete information. In the traditional minimax method, the levels of the tree alternate from max to min until the depth limit of the tree has been reached. In an expectiminimax tree, the "chance" nodes are interleaved with the max and min nodes. Instead of taking the max or min of the utility values of their children, chance nodes take a weighted average, with the weight being the probability that child is reached. The interleaving depends on the game. Each "turn" of the game is evaluated as a "max" node (representing the AI player's turn), a "min" node (representing a potentially-optimal opponent's turn), or a "chance" node (representing a random effect or player). For example, consider a game in which each round consists of a single die throw, and then decisions made by first the AI player, and then another intelligent opponent. The order of nodes in this game would alternate between "chance", "max" and then "min". Pseudocode The expectiminimax algorithm is a variant of the minimax algorithm and was firstly proposed by Donald Michie in 1966. Its pseudocode is given below. function expectiminimax(node, depth) if node is a terminal node or depth = 0 return the heuristic value of node if the adversary is to play at node // Return value of minimum-valued child node let α := +∞ foreach child of node α := min(α, expectiminimax(child, depth-1))
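A compact Python sketch of the recursion described above, covering the min, max and chance cases, under an assumed node interface (is_terminal, heuristic_value, to_move, children and chance_children are hypothetical helpers, not part of any particular library):

def expectiminimax(node, depth):
    # Terminal positions and the depth cutoff are scored heuristically.
    if node.is_terminal() or depth == 0:
        return node.heuristic_value()
    if node.to_move() == "min":
        # Opponent's turn: value of the minimum-valued child.
        return min(expectiminimax(child, depth - 1) for child in node.children())
    if node.to_move() == "max":
        # AI player's turn: value of the maximum-valued child.
        return max(expectiminimax(child, depth - 1) for child in node.children())
    # Chance node: probability-weighted average over the random outcomes.
    return sum(p * expectiminimax(child, depth - 1)
               for p, child in node.chance_children())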
https://en.wikipedia.org/wiki/HPE%20Integrity%20Servers
HPE Integrity Servers is a series of server computers produced by Hewlett Packard Enterprise (formerly Hewlett-Packard) since 2003, based on the Itanium processor. The Integrity brand name was inherited by HP from Tandem Computers via Compaq. In 2015 HP released the Superdome X line of Integrity Servers based on the x86 Architecture. It is a 'small' Box holding up to 8 dual Socket Blades and supporting up to 16 processors/240 cores (when populated with Intel Xeon E7-2890 or E7-2880 Processors). General Over the years, Integrity systems have supported Windows Server, HP-UX 11i, OpenVMS, NonStop, Red Hat Enterprise Linux and SUSE Linux Enterprise Server operating systems on Integrity servers. As of 2020 the operating systems that are supported are HP-UX 11i, OpenVMS and NonStop. Early Integrity servers were based on two closely related chipsets. The zx1 chipset supported up to 4 CPUs and up to 8 PCI-X busses. They consisted of three distinct application-specific integrated circuits; a memory and I/O controller, a scalable memory adapter and an I/O adapter. The PA-8800 and PA-8900 microprocessors use the same bus as the Itanium 2 processors, allowing HP to also use this chipset for the HP 9000 servers and C8000 workstations. The memory and I/O controller can be attached directly to up to 12 DDR SDRAM slots. If more slots than this are needed, two scalable memory adapters can be attached instead, allowing up to 48 memory slots. The chipset supports DIMM sizes up to 4 GB, theoretically allowing a machine to support up to 192 GB of RAM, although the largest supported configuration was 128 GB. The sx1000 chipset supported up to 64 CPUs and up to 192 PCI-X buses. The successor chipsets were the zx2 and sx2000 respectively. Entry-level servers rx1600 series The 1U rx1600 server is based on the zx1 chipset and has support for one or two 1 GHz Deerfield Itanium 2 CPUs. The 1U rx1620 server is based on the zx1 chipset and has support for one or two 1.3/1.6 GHz Fanwood I
https://en.wikipedia.org/wiki/Runway%20bus
The Runway bus is a front-side bus developed by Hewlett-Packard for use by its PA-RISC microprocessor family. The Runway bus is a 64-bit wide, split transaction, time multiplexed address and data bus running at 120 MHz. This scheme was chosen by HP as they determined that a bus using separate address and data wires would have only delivered 20% more bandwidth for a 50% increase in pin count, which would have made microprocessors using the bus more expensive. The Runway bus was introduced with the release of the PA-7200 and was subsequently used by the PA-8000, PA-8200, PA-8500, PA-8600 and PA-8700 microprocessors. Early implementations of the bus used in the PA-7200, PA-8000 and PA-8200 had a theoretical bandwidth of 960 MB/s. Beginning with the PA-8500, the Runway bus was revised to transmit on both rising and falling edges of a 125 MHz clock signal, which increased its theoretical bandwidth to 2 GB/s. The Runway bus was succeeded with the introduction of the PA-8800, which used the Itanium 2 bus. Bus features 64-bit multiplexed address/data 20 bus protocol signals Supports cache coherency Three frequency options (1.0, 0.75 and 0.67 of CPU clock — 0.50 apparently was later added) Parity protection on address/data and control signal Each attached device contains its own arbitrator logic Split transactions, up to six transactions can be pending at once Snooping cache coherency protocol 1-4 processors "glueless" multi-processing (no support chips needed) 768 MB/s sustainable throughput, peak 960 MB/s at 120 MHz Runway+/Runway DDR: On PA-8500, PA-8600 and PA-8700, the bus operates in DDR (double data rate) mode, resulting in a peak bandwidth of about 2.0 GB/s (Runway+ or Runway DDR) with 125 MHz Most machines use the Runway bus to connect the CPUs directly to the IOMMU (Astro, U2/Uturn or Java) and memory. However, the N class and L3000 servers use an interface chip called Dew to bridge the Runway bus to the Merced bus that connects to the IOMMU and mem
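As a worked check of the bandwidth figures quoted above (treating MB/s and GB/s as decimal units):

\[ 8\ \text{bytes} \times 120 \times 10^{6}\ \text{transfers/s} = 960\ \text{MB/s} \]
\[ 8\ \text{bytes} \times 125 \times 10^{6}\ \text{cycles/s} \times 2\ \text{transfers/cycle} = 2000\ \text{MB/s} \approx 2\ \text{GB/s} \]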
https://en.wikipedia.org/wiki/Double%20data%20rate
In computing, double data rate (DDR) describes a computer bus that transfers data on both the rising and falling edges of the clock signal. This is also known as double pumped, dual-pumped, and double transition. The term toggle mode is used in the context of NAND flash memory. Overview The simplest way to design a clocked electronic circuit is to make it perform one transfer per full cycle (rise and fall) of a clock signal. This, however, requires that the clock signal changes twice per transfer, while the data lines change at most once per transfer. When operating at a high bandwidth, signal integrity limitations constrain the clock frequency. By using both edges of the clock, the data signals operate with the same limiting frequency, thereby doubling the data transmission rate. This technique has been used for microprocessor front-side busses, Ultra-3 SCSI, expansion buses (AGP, PCI-X), graphics memory (GDDR), main memory (both RDRAM and DDR1 through DDR5), and the HyperTransport bus on AMD's Athlon 64 processors. It is more recently being used for other systems with high data transfer speed requirements as an example, for the output of analog-to-digital converters (ADCs). DDR should not be confused with dual channel, in which each memory channel accesses two RAM modules simultaneously. The two technologies are independent of each other and many motherboards use both, by using DDR memory in a dual channel configuration. An alternative to double or quad pumping is to make the link self-clocking. This tactic was chosen by InfiniBand and PCI Express. Relation of bandwidth and frequency Describing the bandwidth of a double-pumped bus can be confusing. Each clock edge is referred to as a beat, with two beats (one upbeat and one downbeat) per cycle. Technically, the hertz is a unit of cycles per second, but many people refer to the number of transfers per second. Careful usage generally talks about "500 MHz, double data rate" or "1000 MT/s", but many refe
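A minimal Python sketch of the clock-rate versus transfer-rate relation discussed above; the example numbers are purely illustrative:

def ddr_bandwidth(clock_hz, bus_width_bits, transfers_per_cycle=2):
    # A double-pumped bus completes two transfers per clock cycle, so the
    # transfer rate in MT/s is twice the clock rate in MHz.
    transfers_per_second = clock_hz * transfers_per_cycle
    bytes_per_second = transfers_per_second * bus_width_bits // 8
    return transfers_per_second, bytes_per_second

# Example: a 64-bit bus clocked at 100 MHz, double data rate.
mt, bw = ddr_bandwidth(100_000_000, 64)
# mt == 200_000_000 transfers/s (200 MT/s); bw == 1_600_000_000 bytes/s (1.6 GB/s)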
https://en.wikipedia.org/wiki/Planimetrics
Planimetrics is the study of plane measurements, including angles, distances, and areas. History To measure planimetrics a planimeter or dot planimeter is used. This rather advanced analog technology is being taken over by simple image measurement software tools like, ImageJ, Adobe Acrobat, Google Earth Pro, Gimp, Photoshop and KLONK Image Measurement which can help do this kind of work from digitalized images. In geography Planimetric elements in geography are those features that are independent of elevation, such as roads, building footprints, and rivers and lakes. They are represented on two-dimensional maps as they are seen from the air, or in aerial photography. These features are often digitized from orthorectified aerial photography into data layers that can be used in analysis and cartographic outputs. A planimetric map is one that does not include relief data. See also Horizontal position representation Two-dimensional space References Cartography Infographics Surveying
https://en.wikipedia.org/wiki/EIA-608
EIA-608, also known as "line 21 captions" and "CEA-608", was once the standard for closed captioning for NTSC TV broadcasts in the United States, Canada and Mexico. It was developed by the Electronic Industries Alliance and required by law to be implemented in most television receivers made in the United States. It specifies an "Extended Data Service", which is a means for including a VCR control service with an electronic program guide for NTSC transmissions that operates on the even line 21 field, similar to the TeleText based VPS that operates on line 16 which is used in PAL countries. EIA-608 captions are transmitted on either the odd or even fields of line 21 with an odd parity bit in the non-visible active video data area in NTSC broadcasts, and are also sometimes present in the picture user data in ATSC transmissions. It uses a fixed bandwidth of 480 bit/s per line 21 field for a maximum of 32 characters per line per caption (maximum four captions) for a 30 frame broadcast. The odd field captions relate to the primary audio track and the even field captions related to the SAP or secondary audio track which is generally a second language translation of the primary audio, such as a French or Spanish translation of an English-speaking TV show. Raw EIA-608 caption byte pairs are becoming less prevalent as digital television replaces analog. ATSC broadcasts instead use the EIA-708 caption protocol to encapsulate both the EIA-608 caption pairs as well as add a native EIA-708 stream. EIA-608 has had revisions with the addition of extended character sets to fully support the representation of the Spanish, French, German languages, and cross section of other Western European languages. EIA-608 was also extended to support two byte characters for the Korean and Japanese markets. The full version of EIA-708 has support for more character sets and better caption positioning options; however, because of existing EIA-608 hardware and revisions to the format, there h
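A minimal Python sketch of the parity handling described above, assuming the odd-parity bit occupies the most significant bit of each transmitted byte and ignoring the control-code handling a real decoder needs:

def has_odd_parity(byte):
    # Each line-21 byte carries 7 data bits plus one odd-parity bit.
    return bin(byte).count("1") % 2 == 1

def decode_pair(b1, b2):
    # Strip the parity bits from one caption byte pair; invalid parity is
    # treated as a transmission error.
    if not (has_odd_parity(b1) and has_odd_parity(b2)):
        raise ValueError("parity error in caption byte pair")
    return b1 & 0x7F, b2 & 0x7F

decode_pair(0x94, 0x2C)  # both bytes have valid odd parity -> (0x14, 0x2C)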
https://en.wikipedia.org/wiki/Pyrosequencing
Pyrosequencing is a method of DNA sequencing (determining the order of nucleotides in DNA) based on the "sequencing by synthesis" principle, in which the sequencing is performed by detecting the nucleotide incorporated by a DNA polymerase. Pyrosequencing relies on light detection based on a chain reaction when pyrophosphate is released. Hence, the name pyrosequencing. The principle of pyrosequencing was first described in 1993 by, Bertil Pettersson, Mathias Uhlen and Pål Nyren by combining the solid phase sequencing method using streptavidin coated magnetic beads with recombinant DNA polymerase lacking 3´to 5´exonuclease activity (proof-reading) and luminescence detection using the firefly luciferase enzyme. A mixture of three enzymes (DNA polymerase, ATP sulfurylase and firefly luciferase) and a nucleotide (dNTP) are added to single stranded DNA to be sequenced and the incorporation of nucleotide is followed by measuring the light emitted. The intensity of the light determines if 0, 1 or more nucleotides have been incorporated, thus showing how many complementary nucleotides are present on the template strand. The nucleotide mixture is removed before the next nucleotide mixture is added. This process is repeated with each of the four nucleotides until the DNA sequence of the single stranded template is determined. A second solution-based method for pyrosequencing was described in 1998 by Mostafa Ronaghi, Mathias Uhlen and Pål Nyren. In this alternative method, an additional enzyme apyrase is introduced to remove nucleotides that are not incorporated by the DNA polymerase. This enabled the enzyme mixture including the DNA polymerase, the luciferase and the apyrase to be added at the start and kept throughout the procedure, thus providing a simple set-up suitable for automation. An automated instrument based on this principle was introduced to the market the following year by the company Pyrosequencing. A third microfluidic variant of the pyrosequencing method wa
https://en.wikipedia.org/wiki/Hemachandra
Hemachandra was a 12th century () Indian Jain saint, scholar, poet, mathematician, philosopher, yogi, grammarian, law theorist, historian, lexicographer, rhetorician, logician, and prosodist. Noted as a prodigy by his contemporaries, he gained the title kalikālasarvajña, "the knower of all knowledge in his times" and father of Gujarati language. Born as Changadeva, he was ordained in the Śvētāmbara school of Jainism in 1110 and took the name Somachandra. In 1125 he became an adviser to King Kumarapala and wrote Arhanniti, a work on politics from a Jain perspective. He also produced Trishashti-shalaka-purusha-charita (“Deeds of the 63 Illustrious Men”), a Sanskrit epic poem on the history of important figures of Jainism. Later in his life, he changed his name to Hemachandra. Early life Hemachandra was born in Dhandhuka, in present-day Gujarat, on Kartika Sud Purnima (the full moon day of Kartika month). His date of birth differs according to sources but 1088 is generally accepted. His father, Chachiga-deva was a Modh Bania Vaishnava. His mother, Pahini, was a Jain. Hemchandra's original given name was Changadeva. In his childhood, the Jain monk Devachandra Suri visited Dhandhuka and was impressed by the young Hemachandra's intellect. His mother and maternal uncle concurred with Devachandra, in opposition to his father, that Hemachandra be a disciple of his. Devachandra took Hemachandra to Khambhat, where Hemachandra was placed under the care of the local governor Udayana. Chachiga came to Udayana's place to take his son back, but was so overwhelmed by the kind treatment he received, that he decided to willingly leave his son with Devachandra. Some years later, Hemachandra was initiated a Jain monk on Magha Sud Chauth (4th day of the bright half of Magha month) and was given a new name, Somchandra. Udayana helped Devchandra Suri in the ceremony. He was trained in religious discourse, philosophy, logic and grammar and became well versed in Jain and non–Jain scriptur
https://en.wikipedia.org/wiki/LPX%20%28form%20factor%29
LPX (short for Low Profile eXtension), originally developed by Western Digital, was a loosely defined motherboard format (form factor) widely used in the 1990s. There was never any official LPX specification, but the design normally featured a motherboard with the main I/O ports mounted on the back (something that was later adopted by the ATX form factor), and a riser card in the center of the motherboard, on which the PCI and ISA slots were mounted. Due to the lack of standardized specification, riser cards were seldom compatible from one motherboard design to another, much less one manufacturer to another. The internal PSU connector was of the same type used in the AT form factor. One of the more successful features to come out of the LPX specification was its use of more compact power supplies, which later became widely used on Baby AT and even full size AT cases. Because LPX form factor power supplies became ubiquitous in most computer cases prior to the ATX standard, it was not unusual for manufacturers to refer to them as "AT" power supplies (or occasionally as "PS/2" power supplies due to its use by the IBM PS/2), even though the actual AT and Baby AT power supply form factors were larger in size. The LPX form factor power supply eventually formed the basis for the ATX form factor power supply, which is the same width and height. The specification was very popular in the early-mid 1990s, and briefly displaced the AT form factor as the most commonly used. However, the release of the Pentium II in 1997 highlighted the flaws of the format, as a good airflow was important in Pentium II systems, owing to the relatively high heat dispersal requirements of the processor. LPX systems suffered a restricted airflow due to the centrally placed riser cards. The introduction of the AGP format further complicated matters, as the design not only increased the pin count on riser cards, but it limited most cards to one AGP, one PCI and one ISA slot, which was too restr
https://en.wikipedia.org/wiki/Mix-minus
In audio engineering, a mix-minus or clean feed is a particular setup of a mixing console or matrix mixer, such that an output of the mixer contains everything except a designated input. Mix-minus is often used to prevent echoes or feedback in broadcast or sound reinforcement systems. Examples A common situation in which a mix-minus is used is when a telephone hybrid is connected to a console, usually at a radio station. The person on the telephone hears all relevant feeds, usually an identical mix to that of the master bus, including the DJ's mic feed, except that the caller does not hear their own voice. Mix-minus is also often used together with IFB systems in electronic news gathering (ENG) for television news reporters and interview subjects speaking to a host from a remote outside broadcast (OB) location. Because of the delay that is introduced in most means of transmission (including satellite feeds and audio over IP connections), the remote subject's voice has to be removed from their earpiece. Otherwise, the subject would hear themselves with a slight (but very distracting) delay. Another common example is in the field of sound reinforcement, when people need to hear a full mix except their own microphone. Legislative bodies of government may use a large mix-minus system, for instance houses of parliament or congressional groups that have a small loudspeaker and a microphone at each seat. From the desktop loudspeaker, each person hears every microphone except their own. This enables all participants to hear each other clearly but minimizes problems with acoustic feedback. In 1994, the first digital audio implementation of such a system was installed at the United States Senate building, with more than 100 mix-minus outputs, one for each senator and also guest seats. Mix-minus is also used with VoIP communication when recording for podcasts: mix-minus removes the caller's voice from the VoIP call, but allows them to hear all other channels availabl
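A toy Python sketch of the idea, summing every input channel except the designated one (channel names and sample values are illustrative only):

def mix_minus(channels, excluded):
    # channels: mapping from input name to a list of samples.
    names = [name for name in channels if name != excluded]
    length = len(next(iter(channels.values())))
    return [sum(channels[name][i] for name in names) for i in range(length)]

# The caller's feed omits the caller's own microphone:
feed_for_caller = mix_minus({"dj_mic": [2, 1], "music": [5, 4], "caller": [3, 3]},
                            excluded="caller")
# feed_for_caller == [7, 5]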
https://en.wikipedia.org/wiki/USENIX%20Annual%20Technical%20Conference
The USENIX Annual Technical Conference (USENIX ATC, or, canonically, USENIX) is a conference of computing professions sponsored by the USENIX association. The conference includes computing tutorials, and a single track technical session for presenting refereed research papers, SIG meetings, and BoFs. There have been several notable announcements and talks at USENIX. In 1995, James Gosling announced "Oak", which was to become the Java Programming Language. John Ousterhout first presented TCL here, and Usenet was announced here. It is considered one of the most prestigious operating systems venues and has an 'A' rating from the Australian Ranking of ICT Conferences (ERA). List of conferences USENIX ATC 2018 — Boston, Massachusetts, July 11–13, 2018. USENIX ATC 2017 — Santa Clara, California, July 12–14, 2017. USENIX ATC 2016 — Denver, Colorado, June 22–24, 2016. USENIX ATC 2015 — Santa Clara, California, July 8–10, 2015. USENIX ATC 2014 — Philadelphia, Pennsylvania, June 19–20, 2014. USENIX ATC 2013 — San Jose, California, June 26–28, 2013. USENIX ATC 2012 — Boston, Massachusetts, June 13–15, 2012. USENIX ATC 2011 — Portland, Oregon, June 15–17, 2011. USENIX ATC 2010 — Boston, Massachusetts, June 23–25, 2010. USENIX ATC 2009 — San Diego, California, June 14–19, 2009. USENIX ATC 2008 — Boston, Massachusetts, June 22–June 27, 2008. USENIX ATC 2007 — Santa Clara, California, June 17–22, 2007. USENIX ATC 2006 — Boston, Massachusetts, May 30–June 3, 2006. USENIX ATC 2005 — Anaheim, California, April 10–15, 2005. USENIX ATC 2004 — Boston, Massachusetts, June 27–July 2, 2004. USENIX ATC 2003 — San Antonio, Texas, June 9–14, 2003. USENIX ATC 2002 — Monterey, California, June 10–15, 2002. USENIX ATC 2001 — Boston, Massachusetts, June 25–30, 2001. USENIX ATC 2000 — San Diego, California, June 18–23, 2000. USENIX ATC 1999 — Monterey, California, June 6–11, 1999. USENIX ATC 1998 — New Orleans, Louisiana, June 15–19, 1998. USENIX ATC 1997 — Anaheim, Cal
https://en.wikipedia.org/wiki/Soft%20error
In electronics and computing, a soft error is a type of error where a signal or datum is wrong. Errors may be caused by a defect, usually understood either to be a mistake in design or construction, or a broken component. A soft error is also a signal or datum which is wrong, but is not assumed to imply such a mistake or breakage. After observing a soft error, there is no implication that the system is any less reliable than before. One cause of soft errors is single event upsets from cosmic rays. In a computer's memory system, a soft error changes an instruction in a program or a data value. Soft errors typically can be remedied by cold booting the computer. A soft error will not damage a system's hardware; the only damage is to the data that is being processed. There are two types of soft errors, chip-level soft error and system-level soft error. Chip-level soft errors occur when particles hit the chip, e.g., when secondary particles from cosmic rays land on the silicon die. If a particle with certain properties hits a memory cell it can cause the cell to change state to a different value. The atomic reaction in this example is so tiny that it does not damage the physical structure of the chip. System-level soft errors occur when the data being processed is hit with a noise phenomenon, typically when the data is on a data bus. The computer tries to interpret the noise as a data bit, which can cause errors in addressing or processing program code. The bad data bit can even be saved in memory and cause problems at a later time. If detected, a soft error may be corrected by rewriting correct data in place of erroneous data. Highly reliable systems use error correction to correct soft errors on the fly. However, in many systems, it may be impossible to determine the correct data, or even to discover that an error is present at all. In addition, before the correction can occur, the system may have crashed, in which case the recovery procedure must include a reboo
https://en.wikipedia.org/wiki/Jon%20Freeman%20%28game%20designer%29
Jon Freeman is a game designer and co-founder of software developer Automated Simulations, which was later renamed to Epyx and became a major company during the 8-bit era of home computing. He is married to game programmer Anne Westfall, and they work together as Free Fall Associates. Free Fall is best known for Archon: The Light and the Dark, one of the earliest titles from Electronic Arts. Career Automated Simulations and Epyx Freeman worked as a game designer for video game developer and publisher, Epyx, which he co-founded with Jim Connelley in 1978 as Automated Simulations. Their first game, Starfleet Orion, was a two-player only game developed mainly so Connelley could write off the cost of his Commodore PET computer. Freeman provided design while Connelley handled the programming in BASIC. Freeman was amazed when they actually had a finished product and they had to create a company to publish it. So, both he and Connelley fell into the computer game industry by accident. It was while with this company, still known as Automated Simulations in 1980, that Freeman met his future wife, Anne Westfall, at a computer fair. Starfleet Orion was quickly followed by Invasion Orion. What followed was a slew of very successful titles for various platforms. Freeman designed or co-designed a number of Epyx games, such as Crush, Crumble and Chomp! and Rescue at Rigel. Freeman tired of what he called "office politics" and yearned to get away from the now much larger company. The Complete Book of Wargames In 1980, Freeman, in collaboration with the editors of Consumer Guide, wrote The Complete Book of Wargames, which was published by Simon & Schuster under their "Fireside" imprint. In the book, Freeman, explained the history of wargames to that point, the notable companies, the usual components, and evaluated most of the major wargames in print at the time, as well as the role that computer games would play in this field. Free Fall Associates In 1981, Freeman and A
https://en.wikipedia.org/wiki/Business%20process%20automation
Business process automation (BPA), also known as business automation, is the technology-enabled automation of business processes. It can streamline a business for simplicity, achieve digital transformation, increase service quality, improve service delivery, or contain costs. BPA consists of integrating applications, restructuring labor resources, and using software applications throughout the organization. Robotic process automation is an emerging field within BPA. Deployment Toolsets vary in sophistication, but there is an increasing trend towards the use of artificial intelligence technologies that can understand natural language and unstructured data sets, interact with human beings, and adapt to new types of problems without human-guided training. In order to automate the processes, connectors are needed to fit these systems/solutions together with a data exchange layer to transfer the information. A process driven messaging service is an option for optimizing data exchange layer. By mapping the end-to-end process workflow, an integration between individual platforms using a process driven messaging platform can be built. A business process management implementation A business process management system is different from BPA. However, it is possible to build automation on the back of a BPM implementation. The actual tools to achieve this vary, from writing custom application code to using specialist BPA tools. The advantages and disadvantages of this approach are inextricably linked – the BPM implementation provides an architecture for all processes in the business to be mapped, but this in itself delays the automation of individual processes and so benefits may be lost in the meantime. Robotic process automation The practice of performing robotic process automation (RPA) results in the deployment of attended or unattended software agents into an organization's environment. These software agents, or robots, are deployed to perform pre-defined structured and
https://en.wikipedia.org/wiki/Graded%20category
If A is a category, then an A-graded category is a category C together with a functor F : C → A. Monoids and groups can be thought of as categories with a single object. A monoid-graded or group-graded category is therefore one in which to each morphism is attached an element of a given monoid (resp. group), its grade. This must be compatible with composition, in the sense that compositions have the product grade. Definition There are various different definitions of a graded category, up to the most abstract one given above. A more concrete definition of a graded abelian category is as follows: Let A be an abelian category and G a monoid. Let S be a set of functors from A to itself. If S1 is the identity functor on A, Sg ∘ Sh = Sgh for all g, h in G, and Sg is a full and faithful functor for every g in G, we say that A is a G-graded category. See also Differential graded category Graded (mathematics) Graded algebra Slice category References Category theory
https://en.wikipedia.org/wiki/Graded%20vector%20space
In mathematics, a graded vector space is a vector space that has the extra structure of a grading or gradation, which is a decomposition of the vector space into a direct sum of vector subspaces, generally indexed by the integers. For "pure" vector spaces, the concept has been introduced in homological algebra, and it is widely used for graded algebras, which are graded vector spaces with additional structures. Integer gradation Let ℕ be the set of non-negative integers. An ℕ-graded vector space, often called simply a graded vector space without the prefix ℕ, is a vector space V together with a decomposition into a direct sum of the form V = V0 ⊕ V1 ⊕ V2 ⊕ ⋯ where each Vn is a vector space. For a given n the elements of Vn are then called homogeneous elements of degree n. Graded vector spaces are common. For example, the set of all polynomials in one or several variables forms a graded vector space, where the homogeneous elements of degree n are exactly the linear combinations of monomials of degree n. General gradation The subspaces of a graded vector space need not be indexed by the set of natural numbers, and may be indexed by the elements of any set I. An I-graded vector space V is a vector space together with a decomposition into a direct sum of subspaces Vi indexed by elements i of the set I: V = ⊕i∈I Vi. Therefore, an ℕ-graded vector space, as defined above, is just an I-graded vector space where the set I is ℕ (the set of natural numbers). The case where I is the ring ℤ/2ℤ (the elements 0 and 1) is particularly important in physics. A ℤ/2ℤ-graded vector space is also known as a supervector space. Homomorphisms For general index sets I, a linear map f : V → W between two I-graded vector spaces V and W is called a graded linear map if it preserves the grading of homogeneous elements. A graded linear map is also called a homomorphism (or morphism) of graded vector spaces, or homogeneous linear map: f(Vi) ⊆ Wi for all i in I. For a fixed field and a fixed index set, the graded vector spaces form a category whose morphisms
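A small illustration of the integer gradation described above, using polynomials over a field K:

\[ K[x] = \bigoplus_{n \ge 0} K\,x^{n} \]

Here 7x^3 is homogeneous of degree 3, while 1 + x^2 is not homogeneous; its homogeneous components are 1 (degree 0) and x^2 (degree 2).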
https://en.wikipedia.org/wiki/Computer%20reservation%20system
Computer reservation systems, or central reservation systems (CRS), are computerized systems used to store and retrieve information and conduct transactions related to air travel, hotels, car rental, or other activities. Originally designed and operated by airlines, CRSs were later extended for use by travel agencies, and global distribution systems (GDSs) to book and sell tickets for multiple airlines. Most airlines have outsourced their CRSs to GDS companies, which also enable consumer access through Internet gateways. Modern GDSs typically also allow users to book hotel rooms, rental cars, airline tickets as well as other activities and tours. They also provide access to railway reservations and bus reservations in some markets, although these are not always integrated with the main system. These are also used to relay computerized information for users in the hotel industry, making reservation and ensuring that the hotel is not overbooked. Airline reservations systems may be integrated into a larger passenger service system, which also includes an airline inventory system and a departure control system. The current centralised reservation systems are vulnerable to network-wide system disruptions. History Origins In 1946, American Airlines installed the first automated booking system, the experimental electromechanical Reservisor. A newer machine with temporary storage based on a magnetic drum, the Magnetronic Reservisor, soon followed. This system proved successful, and was soon being used by several airlines, as well as Sheraton Hotels and Goodyear for inventory control. It was seriously hampered by the need for local human operators to do the actual lookups; ticketing agents would have to call a booking office, whose operators would direct a small team operating the Reservisor and then read the results over the telephone. There was no way for agents to directly query the system. The MARS-1 train ticket reservation system was designed and planned in the 19
https://en.wikipedia.org/wiki/Sabre%20%28travel%20reservation%20system%29
Sabre Global Distribution System, owned by Sabre Corporation, is a travel reservation system used by travel agents and companies to search, price, book, and ticket travel services provided by airlines, hotels, car rental companies, rail providers and tour operators. Originally developed by American Airlines under CEO C.R. Smith with the assistance of IBM in 1960, the booking service became available for use by external travel agents in 1976 and became independent of the airline in March 2000. Overview The system's parent company is organized into three business units: Sabre Travel Network: global distribution system Sabre Airline Solutions: airline technology Sabre Hospitality Solutions: hotel technology solutions Sabre is headquartered in Southlake, Texas, and has employees in various locations around the world. History The name of the travel reservation system is an abbreviation for "Semi-automated Business Research Environment", and was originally styled in all-capital letters as SABRE. It was developed to automate the way American Airlines booked reservations. In the 1950s, American Airlines was facing a serious challenge in its ability to quickly handle airline reservations in an era that witnessed high growth in passenger volumes in the airline industry. Before the introduction of SABRE, the airline's system for booking flights was entirely manual, having developed from the techniques originally developed at its Little Rock, Arkansas, reservations center in the 1920s. In this manual system, a team of eight operators would sort through a rotating file with cards for every flight. When a seat was booked, the operators would place a mark on the side of the card, and knew visually whether it was full. This part of the process was not all that slow, at least when there were not that many planes, but the entire end-to-end task of looking for a flight, reserving a seat, and then writing up the ticket could take up to three hours in some cases, and 90 minutes on
https://en.wikipedia.org/wiki/Sabre%20Corporation
Sabre Corporation is a travel technology company based in Southlake, Texas. It is the largest global distribution systems provider for air bookings in North America. American Airlines founded the company in 1960, and it was spun off in 2000. In 2007, Texas Pacific Group and Silver Lake Partners acquired what was then Sabre Holdings. Sabre began publicly trading on the NASDAQ in 2014. History Early history In 1953, C.R. Smith, the president of American Airlines, met Blair Smith, an IBM salesman, on a flight and developed the Sabre (the Semi-Automatic Business Research Environment) concept. The system was based on SAGE, the first major system to use interactive real-time computing, which IBM had developed for military use. Sabre Corporation was founded in 1960 by American Airlines. Sabre Corporation installed the first Sabre reservation system in Briarcliff Manor, New York that year. The system consisted of two IBM 7090 mainframe computers and processed 84,000 calls per day. In 1964, Sabre's nationwide network was completed and became the largest commercial real-time data-processing system in the world. Sabre Corporation handled 7500 passenger reservations per hour in 1965. The Sabre system upgraded to IBM System/360 and moved to a new center in Tulsa, Oklahoma in 1972. In 1976, the Sabre system was installed into a travel agency for the first time. This allowed travel agents to have instant access to flights. By the end of the year, 130 locations installed the Sabre system. Sabre introduced BargainFinder, the industry's first automated low-fare search capability, in 1984. The following year, easySabre was launched. It gave consumers with personal computers access to the Sabre system to make airline, hotel and car rental reservations. In 1989 The New York Times reported Sabre having "about 38 percent of the reservations market." In 1996, the company launched Travelocity, an online travel agency. Sabre formed a joint venture with Abacus International in 1998 to
https://en.wikipedia.org/wiki/Varicode
Varicode is a self-synchronizing code for use in PSK31. It supports all ASCII characters, but the characters used most frequently in English have shorter codes. The space between characters is indicated by a 00 sequence, an implementation of Fibonacci coding. Originally created for speeding up real-time keyboard-to-keyboard exchanges over low bandwidth links, Varicode is freely available. Limitations Varicode provides somewhat weaker compression in languages other than English that use same characters as in English. Varicode table Control characters Printable characters Character lengths Beginning with the single-bit code "1", valid varicode values may be formed by prefixing a "1" or "10" to a shorter code. Thus, the number of codes of length n is equal to the Fibonacci number Fn. Varicode uses the 88 values of lengths up to 9 bits, and 40 of the 55 codes of length 10. As transmitted, the codes are two bits longer due to the trailing delimiter 00. References Character encoding
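The prefixing rule just described can be turned into a short Python sketch that enumerates the codeword shapes of each length and recovers the Fibonacci counts; the assignment of codewords to particular characters follows the published table and is not reproduced here:

def varicode_codewords(max_len):
    # Build all valid codeword shapes by prefixing "1" or "10" to shorter
    # codes, starting from the single-bit code "1".
    by_length = {1: ["1"]}
    for n in range(2, max_len + 1):
        codes = ["1" + c for c in by_length.get(n - 1, [])]
        codes += ["10" + c for c in by_length.get(n - 2, [])]
        by_length[n] = codes
    return by_length

counts = {n: len(c) for n, c in varicode_codewords(10).items()}
# counts[1..10] follow the Fibonacci sequence: 1, 1, 2, 3, 5, 8, 13, 21, 34, 55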
https://en.wikipedia.org/wiki/Detection%20theory
Detection theory or signal detection theory is a means to measure the ability to differentiate between information-bearing patterns (called stimulus in living organisms, signal in machines) and random patterns that distract from the information (called noise, consisting of background stimuli and random activity of the detection machine and of the nervous system of the operator). In the field of electronics, signal recovery is the separation of such patterns from a disguising background. According to the theory, there are a number of determiners of how a detecting system will detect a signal, and where its threshold levels will be. The theory can explain how changing the threshold will affect the ability to discern, often exposing how adapted the system is to the task, purpose or goal at which it is aimed. When the detecting system is a human being, characteristics such as experience, expectations, physiological state (e.g., fatigue) and other factors can affect the threshold applied. For instance, a sentry in wartime might be likely to detect fainter stimuli than the same sentry in peacetime due to a lower criterion, however they might also be more likely to treat innocuous stimuli as a threat. Much of the early work in detection theory was done by radar researchers. By 1954, the theory was fully developed on the theoretical side as described by Peterson, Birdsall and Fox and the foundation for the psychological theory was made by Wilson P. Tanner, David M. Green, and John A. Swets, also in 1954. Detection theory was used in 1966 by John A. Swets and David M. Green for psychophysics. Green and Swets criticized the traditional methods of psychophysics for their inability to discriminate between the real sensitivity of subjects and their (potential) response biases. Detection theory has applications in many fields such as diagnostics of any kind, quality control, telecommunications, and psychology. The concept is similar to the signal-to-noise ratio used in the
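Under the commonly used equal-variance Gaussian model, the sensitivity and response bias mentioned above are separated as:

\[ d' = z(H) - z(F), \qquad c = -\tfrac{1}{2}\,[\,z(H) + z(F)\,] \]

where H is the hit rate, F the false-alarm rate, and z the inverse of the standard normal cumulative distribution function; d′ measures discriminability and c the placement of the decision criterion.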
https://en.wikipedia.org/wiki/Biofilter
Biofiltration is a pollution control technique using a bioreactor containing living material to capture and biologically degrade pollutants. Common uses include processing waste water, capturing harmful chemicals or silt from surface runoff, and microbiotic oxidation of contaminants in air. Industrial biofiltration can be classified as the process of utilizing biological oxidation to remove volatile organic compounds, odors, and hydrocarbons. Examples of biofiltration Examples of biofiltration include: Bioswales, biostrips, biobags, bioscrubbers, Vermifilters and trickling filters Constructed wetlands and natural wetlands Slow sand filters Treatment ponds Green belts Green walls Riparian zones, riparian forests, bosques Bivalve bioaccumulation Control of air pollution When applied to air filtration and purification, biofilters use microorganisms to remove air pollution. The air flows through a packed bed and the pollutant transfers into a thin biofilm on the surface of the packing material. Microorganisms, including bacteria and fungi are immobilized in the biofilm and degrade the pollutant. Trickling filters and bioscrubbers rely on a biofilm and the bacterial action in their recirculating waters. The technology finds the greatest application in treating malodorous compounds and volatile organic compounds (VOCs). Industries employing the technology include food and animal products, off-gas from wastewater treatment facilities, pharmaceuticals, wood products manufacturing, paint and coatings application and manufacturing and resin manufacturing and application, etc. Compounds treated are typically mixed VOCs and various sulfur compounds, including hydrogen sulfide. Very large airflows may be treated and although a large area (footprint) has typically been required—a large biofilter (>200,000 acfm) may occupy as much or more land than a football field—this has been one of the principal drawbacks of the technology. Since the early 1990s, engineered biofil
https://en.wikipedia.org/wiki/Implicant
In Boolean logic, the term implicant has either a generic or a particular meaning. In the generic use, it refers to the hypothesis of an implication (implicant). In the particular use, a product term (i.e., a conjunction of literals) P is an implicant of a Boolean function F, denoted , if P implies F (i.e., whenever P takes the value 1 so does F). For instance, implicants of the function include the terms , , , , as well as some others. Prime implicant A prime implicant of a function is an implicant (in the above particular sense) that cannot be covered by a more general, (more reduced, meaning with fewer literals) implicant. W. V. Quine defined a prime implicant to be an implicant that is minimal—that is, the removal of any literal from P results in a non-implicant for F. Essential prime implicants (also known as core prime implicants) are prime implicants that cover an output of the function that no combination of other prime implicants is able to cover. Using the example above, one can easily see that while (and others) is a prime implicant, and are not. From the latter, multiple literals can be removed to make it prime: , and can be removed, yielding . Alternatively, and can be removed, yielding . Finally, and can be removed, yielding . The process of removing literals from a Boolean term is called expanding the term. Expanding by one literal doubles the number of input combinations for which the term is true (in binary Boolean algebra). Using the example function above, we may expand to or to without changing the cover of . The sum of all prime implicants of a Boolean function is called its complete sum, minimal covering sum, or Blake canonical form. See also Quine–McCluskey algorithm Karnaugh map Petrick's method References External links Slides explaining implicants, prime implicants and essential prime implicants Examples of finding essential prime implicants using K-map Boolean algebra
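A brute-force Python sketch of the two definitions above, using a small hypothetical function f = xy + w over the variables w, x, y, z (chosen for illustration; it is not the article's example):

from itertools import product

def is_implicant(term, f, variables):
    # term maps a variable to its required value; an implicant forces f to 1
    # on every assignment that satisfies the term.
    for values in product([0, 1], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(env[v] == b for v, b in term.items()) and not f(env):
            return False
    return True

def is_prime_implicant(term, f, variables):
    # Prime: removing any single literal yields a non-implicant.
    if not is_implicant(term, f, variables):
        return False
    return all(
        not is_implicant({v: b for v, b in term.items() if v != dropped}, f, variables)
        for dropped in term
    )

variables = ["w", "x", "y", "z"]
f = lambda e: (e["x"] and e["y"]) or e["w"]
print(is_prime_implicant({"x": 1, "y": 1}, f, variables))          # True
print(is_prime_implicant({"x": 1, "y": 1, "z": 1}, f, variables))  # False: z can be removed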
https://en.wikipedia.org/wiki/Population%20ecology
Population ecology is a sub-field of ecology that deals with the dynamics of species populations and how these populations interact with the environment, such as birth and death rates, and by immigration and emigration. The discipline is important in conservation biology, especially in the development of population viability analysis which makes it possible to predict the long-term probability of a species persisting in a given patch of habitat. Although population ecology is a subfield of biology, it provides interesting problems for mathematicians and statisticians who work in population dynamics. History In the 1940s ecology was divided into autecology—the study of individual species in relation to the environment—and synecology—the study of groups of species in relation to the environment. The term autecology (from Ancient Greek: αὐτο, aúto, "self"; οίκος, oíkos, "household"; and λόγος, lógos, "knowledge"), refers to roughly the same field of study as concepts such as life cycles and behaviour as adaptations to the environment by individual organisms. Eugene Odum, writing in 1953, considered that synecology should be divided into population ecology, community ecology and ecosystem ecology, renaming autecology as 'species ecology' (Odum regarded "autecology" as an archaic term), thus that there were four subdivisions of ecology. Terminology A population is defined as a group of interacting organisms of the same species. A demographic structure of a population is how populations are often quantified. The total number of individuals in a population is defined as a population size, and how dense these individuals are is defined as population density. There is also a population’s geographic range, which has limits that a species can tolerate (such as temperature). Population size can be influenced by the per capita population growth rate (rate at which the population size changes per individual in the population.) Births, deaths, emigration, and immigration rates
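A standard way to make the listed rates quantitative (an illustration, not a quotation from the article) is to write the change in population size N over time t as:

\[ \frac{dN}{dt} = B - D + I - E \]

where B, D, I and E are the numbers of births, deaths, immigrants and emigrants per unit time; dividing by N gives the per capita growth rate, and a constant per capita rate r yields exponential growth N(t) = N(0)e^{rt}.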
https://en.wikipedia.org/wiki/Volatile%20memory
Volatile memory, in contrast to non-volatile memory, is computer memory that requires power to maintain the stored information; it retains its contents while powered on but when the power is interrupted, the stored data is quickly lost. Volatile memory has several uses including as primary storage. In addition to usually being faster than forms of mass storage such as a hard disk drive, volatility can protect sensitive information, as it becomes unavailable on power-down. Most general-purpose random-access memory (RAM) is volatile. Types There are two kinds of volatile RAM: dynamic and static. Even though both types need continuous electrical current to retain data, there are some important differences between them. Dynamic RAM (DRAM) is very popular due to its cost-effectiveness. DRAM stores each bit of information in a different capacitor within the integrated circuit. DRAM chips need just one single capacitor and one transistor to store each bit of information. This makes it space-efficient and inexpensive. The main advantage of static RAM (SRAM) is that it is much faster than dynamic RAM. Its disadvantage is its high price. SRAM does not need continuous electrical refreshes, but it still requires constant current to sustain the difference in voltage. Every single bit in a static RAM chip needs a cell of six transistors, whereas dynamic RAM requires only one capacitor and one transistor. As a result, SRAM is unable to accomplish the storage capabilities of the DRAM family. SRAM is commonly used as CPU cache and for processor registers and in networking devices. References Computer memory
https://en.wikipedia.org/wiki/K42
K42 is a discontinued open-source research operating system (OS) for cache-coherent 64-bit multiprocessor systems. It was developed primarily at IBM Thomas J. Watson Research Center in collaboration with the University of Toronto and University of New Mexico. The main focus of this OS is to address performance and scalability issues of system software on large-scale, shared memory, non-uniform memory access (NUMA) multiprocessing computers. K42 uses a microkernel architecture rather than the traditional monolithic kernel design. K42 consists of a small exception-handling component that serves as the microkernel, a fast inter-process communication (IPC) mechanism named protected procedure call (PPC), and servers for most other components of the operating system. These servers exist in separate address spaces and rely upon the fast IPC mechanism for communication with the microkernel and other servers. History The core of K42 is based on the University of Toronto's operating system Tornado. K42 is the university's third generation of research on scalable operating systems. Tornado OS on a nuMachine Multiprocessor was the second generation and Hurricane OS on a Hector Multiprocessor was the first generation. Features K42 supports the Linux PowerPC 64 and 32 application binary interfaces (ABIs), so most PowerPC Linux binary files can run on K42 without modification, including the relational database DB2. K42 has some device drivers implemented specifically for it, but it gets most of its hardware support by directly linking in Linux device drivers to a special server. Another goal of the K42 design is to achieve a customizable and maintainable system. Being built with an object-oriented programming design, it allows applications to customize and thus optimize the OS services required, and then on the fly, hot swap kernel object implementations. This is particularly important for applications, such as databases and web servers, where the ability to control physical r
https://en.wikipedia.org/wiki/W%5EX
W^X ("write xor execute", pronounced W xor X) is a security feature in operating systems and virtual machines. It is a memory protection policy whereby every page in a process's or kernel's address space may be either writable or executable, but not both. Without such protection, a program can write (as data "W") CPU instructions in an area of memory intended for data and then run (as executable "X"; or read-execute "RX") those instructions. This can be dangerous if the writer of the memory is malicious. W^X is the Unix-like terminology for a strict use of the general concept of executable space protection, controlled via the system call. W^X is relatively simple on processors that support fine-grained page permissions, such as Sun's SPARC and SPARC64, AMD's AMD64, Hewlett-Packard's PA-RISC, HP's (originally Digital Equipment Corporation's) Alpha, and ARM. The term W^X has also been applied to file system write/execute permissions to mitigate file write vulnerabilities (as with in memory) and attacker persistence. Enforcing restrictions on file permissions can also close gaps in W^X enforcement caused by memory mapped files. Outright forbidding the usage of arbitrary native code can also mitigate kernel and CPU vulnerabilities not exposed via the existing code on the computer. A less intrusive approach is to lock a file for the duration of any mapping into executable memory, which suffices to prevent post-inspection bypasses. Compatibility Some early Intel 64 processors lacked the NX bit required for W^X, but this appeared in later chips. On more limited processors such as the Intel i386, W^X requires using the CS code segment limit as a "line in the sand", a point in the address space above which execution is not permitted and data is located, and below which it is allowed and executable pages are placed. This scheme was used in Exec Shield. Linker changes are generally required to separate data from code (such as trampolines that are needed for linker and l
https://en.wikipedia.org/wiki/Evolutionary%20medicine
Evolutionary medicine or Darwinian medicine is the application of modern evolutionary theory to understanding health and disease. Modern biomedical research and practice have focused on the molecular and physiological mechanisms underlying health and disease, while evolutionary medicine focuses on the question of why evolution has shaped these mechanisms in ways that may leave us susceptible to disease. The evolutionary approach has driven important advances in the understanding of cancer, autoimmune disease, and anatomy. Medical schools have been slower to integrate evolutionary approaches because of limitations on what can be added to existing medical curricula. The International Society for Evolution, Medicine and Public Health coordinates efforts to develop the field. It owns the Oxford University Press journal Evolution, Medicine and Public Health and The Evolution and Medicine Review. Core principles Utilizing the Delphi method, 56 experts from a variety of disciplines, including anthropology, medicine, nursing, and biology agreed upon 14 core principles intrinsic to the education and practice of evolutionary medicine. These 14 principles can be further grouped into five general categories: question framing, evolution I and II (with II involving a higher level of complexity), evolutionary trade-offs, reasons for vulnerability, and culture. Additional information regarding these principles may be found in the table below. Human adaptations Adaptation works within constraints, makes compromises and trade-offs, and occurs in the context of different forms of competition. Constraints Adaptations can only occur if they are evolvable. Some adaptations which would prevent ill health are therefore not possible. DNA cannot be totally prevented from undergoing somatic replication corruption; this has meant that cancer, which is caused by somatic mutations, has not (so far) been eliminated by natural selection. Humans cannot biosynthesize vitamin C, and so risk
https://en.wikipedia.org/wiki/Mixed-signal%20integrated%20circuit
A mixed-signal integrated circuit is any integrated circuit that has both analog circuits and digital circuits on a single semiconductor die. Their usage has grown dramatically with the increased use of cell phones, telecommunications, portable electronics, and automobiles with electronics and digital sensors. Overview Integrated circuits (ICs) are generally classified as digital (e.g. a microprocessor) or analog (e.g. an operational amplifier). Mixed-signal ICs contain both digital and analog circuitry on the same chip, and sometimes embedded software. Mixed-signal ICs process both analog and digital signals together. For example, an analog-to-digital converter (ADC) is a typical mixed-signal circuit. Mixed-signal ICs are often used to convert analog signals to digital signals so that digital devices can process them. For example, mixed-signal ICs are essential components for FM tuners in digital products such as media players, which have digital amplifiers. Any analog signal can be digitized using a very basic ADC, and the smallest and most energy-efficient converters are mixed-signal ICs. Mixed-signal ICs are more difficult to design and manufacture than analog-only or digital-only integrated circuits. For example, an efficient mixed-signal IC may have its digital and analog components share a common power supply. However, analog and digital components have very different power needs and consumption characteristics, which makes this a non-trivial goal in chip design. Mixed-signal functionality requires both traditional active elements (like transistors) and well-performing passive elements (like coils, capacitors, and resistors) on the same chip, which demands additional device-modelling expertise and additional options from the manufacturing technologies. High-voltage transistors might be needed in the power management functions on a chip with digital functionality, possibly with a low-power CMOS processor system. Some advanced mixed-signal technologies may enable combining an
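As a rough software illustration of the conversion an ADC performs, the sketch below samples a sine wave and maps each sample onto an 8-bit code with an ideal uniform quantizer. The reference voltage, sample rate, and tone frequency are arbitrary values chosen for the illustration; a real mixed-signal converter also involves sample-and-hold circuitry, reference generation, and noise that this toy model ignores.

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Ideal uniform quantizer: map a voltage in [-vref, +vref] to an n-bit code. */
static uint32_t adc_sample(double v, double vref, unsigned bits) {
    uint32_t levels = (1u << bits) - 1u;
    if (v >  vref) v =  vref;          /* clip, as a real converter saturates */
    if (v < -vref) v = -vref;
    double normalized = (v + vref) / (2.0 * vref);   /* 0.0 .. 1.0 */
    return (uint32_t)lround(normalized * levels);
}

int main(void) {
    const double vref = 1.0;     /* full-scale reference voltage (assumed) */
    const unsigned bits = 8;
    const double fs = 48000.0;   /* sample rate in Hz (assumed) */
    const double f  = 1000.0;    /* input tone in Hz (assumed) */

    for (int n = 0; n < 8; n++) {
        double v = 0.9 * sin(2.0 * M_PI * f * n / fs);   /* "analog" input */
        printf("sample %d: %+.4f V -> code %u\n", n, v, adc_sample(v, vref, bits));
    }
    return 0;
}
```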
https://en.wikipedia.org/wiki/Depolarization%20ratio
In Raman spectroscopy, the depolarization ratio is the intensity ratio between the perpendicular component and the parallel component of Raman scattered light. Early work in this field was carried out by George Placzek, who developed the theoretical treatment of bond polarizability. The Raman scattered light is emitted through excitation by the electric field of the incident light. Therefore, the direction of vibration of the electric field, or polarization direction, of the scattered light might be expected to be the same as that of the incident light. In reality, however, some fraction of the Raman scattered light has a polarization direction that is perpendicular to that of the incident light; this component is called the perpendicular component. Naturally, the component of the Raman scattered light whose polarization direction is parallel to that of the incident light is called the parallel component, and the Raman scattered light consists of the parallel component and the perpendicular component. The ratio of the peak intensity of the perpendicular component to that of the parallel component is known as the depolarization ratio (ρ), defined as ρ = I⊥/I∥ (equation 1). For example, a spectral band with a peak of intensity 10 units when the polarizers are parallel, and an intensity of 1 unit when the polarizers are perpendicular, would have a depolarization ratio of 1/10 = 0.1, which corresponds to a highly polarized band. The value of the depolarization ratio of a Raman band depends on the symmetry of the molecule and the normal vibrational mode; in other words, on the point group of the molecule and the irreducible representation to which the normal mode belongs. Under Placzek's polarizability approximation, it is known that the depolarization ratio of a totally symmetric vibrational mode is less than 0.75, and that of the other modes equals 0.75. A Raman band whose depolarization ratio is less than 0.75 is called a polarized band, and a band with a depolarization ratio equal to or greater than 0.75 is called a depolarized band. For a spherical top molecule in
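Written out, the definition referred to above as equation 1 takes the standard form below, with I⊥ and I∥ the peak intensities measured with the analyzer perpendicular and parallel to the incident polarization; the second expression simply restates the worked example from the text.

```latex
\rho \;=\; \frac{I_{\perp}}{I_{\parallel}},
\qquad\text{e.g.}\qquad
\rho \;=\; \frac{1}{10} \;=\; 0.1 \;<\; 0.75 \quad\text{(polarized band)}.
```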
https://en.wikipedia.org/wiki/Data%20security
Data security means protecting digital data, such as those in a database, from destructive forces and from the unwanted actions of unauthorized users, such as a cyberattack or a data breach. Technologies Disk encryption Disk encryption refers to encryption technology that encrypts data on a hard disk drive. Disk encryption typically takes the form of either software (see disk encryption software) or hardware (see disk encryption hardware). Disk encryption is often referred to as on-the-fly encryption (OTFE) or transparent encryption. Software versus hardware-based mechanisms for protecting data Software-based security solutions encrypt the data to protect it from theft. However, a malicious program or a hacker could corrupt the data to make it unrecoverable, making the system unusable. Hardware-based security solutions can prevent unauthorized read and write access to data, which provides very strong protection against tampering and unauthorized access. Hardware-based security or assisted computer security offers an alternative to software-only computer security. Security tokens such as those using PKCS#11 or a mobile phone may be more secure, because physical access is required in order to compromise them. Access is enabled only when the token is connected and the correct PIN is entered (see two-factor authentication). However, dongles can be used by anyone who can gain physical access to them. Newer hardware-based security technologies aim to solve this problem by offering stronger protection for the data. In such a scheme, a hardware device allows a user to log in, log out, and set different privilege levels through manual actions. The device uses biometric technology to prevent malicious users from logging in, logging out, and changing privilege levels. The current state of a user of the device is read by controllers in peripheral devices such as hard disks. Illegal access by a malicious user or a malicious program is interrupted based on the current state of a user by har
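The passage above describes peripheral controllers consulting the current user state held by a security device before honoring I/O. The sketch below is purely illustrative: every type and function name (user_state_t, read_user_state, drive_read) is invented for the example and does not belong to any real device or API. It only shows the gating logic the text implies, in which a request is refused unless the device reports an authenticated state with sufficient privilege.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical states a security device might report (illustrative only). */
typedef enum { STATE_LOGGED_OUT = 0, STATE_USER = 1, STATE_ADMIN = 2 } user_state_t;

/* Stand-in for reading the current state from the hardware token;
   a real controller would query the device rather than return a constant. */
static user_state_t read_user_state(void) {
    return STATE_USER;
}

/* Gate a (hypothetical) disk read on the state reported by the device. */
static bool drive_read(unsigned long block, unsigned char *buf, user_state_t required) {
    user_state_t current = read_user_state();
    if (current == STATE_LOGGED_OUT || current < required) {
        fprintf(stderr, "access denied: insufficient privilege for block %lu\n", block);
        return false;    /* the controller interrupts the access */
    }
    (void)buf;           /* a real driver would transfer data here */
    return true;
}

int main(void) {
    unsigned char buf[512];
    drive_read(0, buf, STATE_USER);    /* allowed in this toy example */
    drive_read(1, buf, STATE_ADMIN);   /* denied: current state is STATE_USER */
    return 0;
}
```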
https://en.wikipedia.org/wiki/Hasse%E2%80%93Weil%20zeta%20function
In mathematics, the Hasse–Weil zeta function attached to an algebraic variety V defined over an algebraic number field K is a meromorphic function on the complex plane defined in terms of the number of points on the variety after reducing modulo each prime number p. It is a global L-function defined as an Euler product of local zeta functions. Hasse–Weil L-functions form one of the two major classes of global L-functions, alongside the L-functions associated to automorphic representations. Conjecturally, these two types of global L-functions are actually two descriptions of the same type of global L-function; this would be a vast generalisation of the Taniyama–Weil conjecture, itself an important result in number theory. For an elliptic curve over a number field K, the Hasse–Weil zeta function is conjecturally related to the group of rational points of the elliptic curve over K by the Birch and Swinnerton-Dyer conjecture. Definition The description of the Hasse–Weil zeta function up to finitely many factors of its Euler product is relatively simple. This follows the initial suggestions of Helmut Hasse and André Weil, motivated by the case in which V is a single point, and the Riemann zeta function results. Taking the case of K the rational number field Q, and V a non-singular projective variety, we can for almost all prime numbers p consider the reduction of V modulo p, an algebraic variety Vp over the finite field Fp with p elements, just by reducing equations for V. Scheme-theoretically, this reduction is just the pullback of V along the canonical map Spec Fp → Spec Z. Again, for almost all p it will be non-singular. We define ζ(V, s) to be the Dirichlet series of the complex variable s which is the infinite product of the local zeta functions ζ(Vp, p−s). Then ζ(V, s), according to our definition, is well-defined only up to multiplication by rational functions in a finite number of the quantities p−s. Since the indeterminacy is relatively harmless, and ζ(V, s) has meromorphic continuation everywhere, the
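For reference, the Euler product sketched above can be written out as follows over the primes of good reduction (the finitely many remaining factors are exactly the indeterminacy mentioned in the text); here #Vp(F_{p^m}) counts the points of the reduction over the field with p^m elements. This is the standard formulation, restated here for convenience.

```latex
\zeta(V, s) \;=\; \prod_{p\ \text{good}} \zeta\!\left(V_p,\, p^{-s}\right),
\qquad
\zeta(V_p, T) \;=\; \exp\!\Big( \sum_{m \ge 1} \frac{\#V_p(\mathbf{F}_{p^m})}{m}\, T^m \Big).
```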
https://en.wikipedia.org/wiki/Tryggve%20Fossum
Tryggve Fossum is a Norwegian computer architect at Intel. He transferred there from DEC, where he was a lead architect of Alpha processors, after working on several VAX processors.
https://en.wikipedia.org/wiki/Pingala
Acharya Pingala (c. 3rd–2nd century BCE) was an ancient Indian poet and mathematician, and the author of the Chandaḥśāstra, also called the Pingala-sutras, the earliest known treatise on Sanskrit prosody. The Chandaḥśāstra is a work of eight chapters in the late Sūtra style, not fully comprehensible without a commentary. It has been dated to the last few centuries BCE. In the 10th century CE, Halayudha wrote a commentary elaborating on the Chandaḥśāstra. According to some historians, Maharshi Pingala was the brother of Pāṇini, the famous Sanskrit grammarian, considered the first descriptive linguist. Another tradition identifies him as Patanjali, the 2nd century CE scholar who authored the Mahabhashya. Combinatorics The Chandaḥśāstra presents a systematic enumeration of metres with fixed patterns of short and long syllables using a binary representation. Pingala's work also includes material related to the Fibonacci numbers, called mātrāmeru. Pingala is credited with the use of light (laghu) and heavy (guru) syllables to describe the combinatorics of Sanskrit metre. Because of this, Pingala is sometimes also credited with the first use of zero, as he used the Sanskrit word śūnya to explicitly refer to the number. Pingala's binary representation increases towards the right, and not to the left as modern binary numbers usually do. In Pingala's system, the numbers start from number one, and not zero. Four short syllables "0000" is the first pattern and corresponds to the value one. The numerical value is obtained by adding one to the sum of place values. Editions A. Weber, Indische Studien 8, Leipzig, 1863. See also Chandas Sanskrit prosody Indian mathematics Indian mathematicians History of the binomial theorem List of Indian mathematicians References Amulya Kumar Bag, 'Binomial theorem in ancient India', Indian J. Hist. Sci. 1 (1966), 68–74. George Gheverghese Joseph (2000). The Crest of the Peacock, p. 254, 355. Princeton University Press. Klaus Mylius, Geschichte der altindischen Literatur, Wiesbaden (1983). Exte
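The place-value rule described in the combinatorics paragraph can be made concrete with a small sketch. Treating a light syllable (laghu) as 0 and a heavy syllable (guru) as 1 is an encoding choice made for the illustration; following the text, place values grow towards the right (leftmost syllable is the lowest place) and one is added to the weighted sum, so the all-light pattern "0000" gets the value one.

```c
#include <stdio.h>
#include <string.h>

/* Value of a metre pattern under the place-value rule described above:
   the leftmost syllable is the lowest place, and one is added to the sum,
   so "0000" -> 1 and "1111" -> 16. */
static unsigned pingala_value(const char *pattern) {
    unsigned sum = 0, place = 1;
    for (size_t i = 0; i < strlen(pattern); i++) {
        if (pattern[i] == '1')      /* guru (heavy) syllable */
            sum += place;
        place *= 2;                 /* places grow towards the right */
    }
    return sum + 1;                 /* numbering starts from one, not zero */
}

int main(void) {
    const char *patterns[] = { "0000", "1000", "0100", "1100", "1111" };
    for (size_t i = 0; i < sizeof patterns / sizeof patterns[0]; i++)
        printf("%s -> %u\n", patterns[i], pingala_value(patterns[i]));
    return 0;
}
```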
https://en.wikipedia.org/wiki/Mayer%20f-function
The Mayer f-function is an auxiliary function that often appears in the series expansion of thermodynamic quantities related to classical many-particle systems. It is named after chemist and physicist Joseph Edward Mayer. Definition Consider a system of classical particles interacting through a pair-wise potential u(i, j), where the bold labels i and j denote the continuous degrees of freedom associated with the particles, e.g., i = ri for spherically symmetric particles and i = (ri, Ωi) for rigid non-spherical particles, where ri denotes the position and Ωi the orientation, parametrized e.g. by Euler angles. The Mayer f-function is then defined as f(i, j) = exp[−βu(i, j)] − 1, where β = 1/(kBT) is the inverse absolute temperature in units of energy−1. See also Virial coefficient Cluster expansion Excluded volume
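In display form, the definition just given is the standard one, with u the pair potential and β the inverse temperature as above:

```latex
f(\mathbf{i}, \mathbf{j}) \;=\; e^{-\beta\, u(\mathbf{i}, \mathbf{j})} - 1,
\qquad \beta = \frac{1}{k_{\mathrm{B}} T}.
```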
https://en.wikipedia.org/wiki/DNA%20sequencing
DNA sequencing is the process of determining the nucleic acid sequence – the order of nucleotides in DNA. It includes any method or technology that is used to determine the order of the four bases: adenine, guanine, cytosine, and thymine. The advent of rapid DNA sequencing methods has greatly accelerated biological and medical research and discovery. Knowledge of DNA sequences has become indispensable for basic biological research, DNA Genographic Projects and in numerous applied fields such as medical diagnosis, biotechnology, forensic biology, virology and biological systematics. Comparing healthy and mutated DNA sequences can diagnose different diseases including various cancers, characterize antibody repertoire, and can be used to guide patient treatment. Having a quick way to sequence DNA allows for faster and more individualized medical care to be administered, and for more organisms to be identified and cataloged. The rapid speed of sequencing attained with modern DNA sequencing technology has been instrumental in the sequencing of complete DNA sequences, or genomes, of numerous types and species of life, including the human genome and other complete DNA sequences of many animal, plant, and microbial species. The first DNA sequences were obtained in the early 1970s by academic researchers using laborious methods based on two-dimensional chromatography. Following the development of fluorescence-based sequencing methods with a DNA sequencer, DNA sequencing has become easier and orders of magnitude faster. Applications DNA sequencing may be used to determine the sequence of individual genes, larger genetic regions (i.e. clusters of genes or operons), full chromosomes, or entire genomes of any organism. DNA sequencing is also the most efficient way to indirectly sequence RNA or proteins (via their open reading frames). In fact, DNA sequencing has become a key technology in many areas of biology and other sciences such as medicine, forensics, and anthropolog
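As a small computational aside to the mention of open reading frames above, the sketch below scans the forward strand of a DNA string for the first ATG start codon followed, in frame, by a stop codon (TAA, TAG, or TGA). It ignores the reverse strand and alternative reading frames, and the example sequence is invented for the illustration rather than taken from any real genome.

```c
#include <stdio.h>
#include <string.h>

/* Return 1 if the codon starting at s is a stop codon (TAA, TAG, TGA). */
static int is_stop(const char *s) {
    return strncmp(s, "TAA", 3) == 0 || strncmp(s, "TAG", 3) == 0 ||
           strncmp(s, "TGA", 3) == 0;
}

/* Print the first open reading frame on the forward strand:
   an ATG start codon followed, in frame, by a stop codon. */
static void first_orf(const char *dna) {
    size_t n = strlen(dna);
    for (size_t i = 0; i + 3 <= n; i++) {
        if (strncmp(dna + i, "ATG", 3) != 0)
            continue;
        for (size_t j = i + 3; j + 3 <= n; j += 3) {
            if (is_stop(dna + j)) {
                printf("ORF at %zu..%zu: %.*s\n",
                       i, j + 3, (int)(j + 3 - i), dna + i);
                return;
            }
        }
    }
    printf("no ORF found\n");
}

int main(void) {
    /* Invented example sequence; contains ATG ... TAA in frame. */
    first_orf("GGCATGAAATTTGGGTAACCC");
    return 0;
}
```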
https://en.wikipedia.org/wiki/Transport%20Neutral%20Encapsulation%20Format
Transport Neutral Encapsulation Format or TNEF is a proprietary email attachment format used by Microsoft Outlook and Microsoft Exchange Server. An attached file with TNEF encoding is most often named winmail.dat or win.dat, and has a MIME type of Application/MS-TNEF. The official (IANA) media type, however, is application/vnd.ms-tnef. Overview Some TNEF files contain information used only by Outlook to generate a richly formatted view of the message, such as embedded (OLE) documents or Outlook-specific features such as forms, voting buttons, and meeting requests. Other TNEF files may contain files which have been attached to an e-mail message. Within the Outlook e-mail client, TNEF encoding cannot be explicitly enabled or disabled (except via a registry setting). Selecting RTF as the format for sending an e-mail implicitly enables TNEF encoding, using it instead of the more common and widely compatible MIME standard. When sending plain text or HTML format messages, some versions of Outlook (apparently including Outlook 2000) prefer MIME, but may still use TNEF under some circumstances (for example, if an Outlook feature requires it). TNEF attachments can contain security-sensitive information such as user login name and file paths, from which access controls could possibly be inferred. Exchange Server Native-mode Microsoft Exchange 2000 organizations will, in some circumstances, send entire messages as TNEF-encoded raw binary independent of what is advertised by the receiving SMTP server. As documented in Microsoft KBA #323483, this technique is not RFC-compliant because these messages have the following characteristics: They may include non-ASCII characters outside the 0–127 US-ASCII range. The lines in these messages are often too long for transport via SMTP. They do not follow the CRLF.CRLF message termination semantics as specified in RFC 821. Internal communications between Exchange Servers (2000 and later) over SMTP encode the message in S/TNEF (Su
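As a small, hedged illustration of how a mail client might recognize the attachment format described above, the sketch below reads the first four bytes of a candidate winmail.dat file and compares them against the TNEF signature value 0x223E9F78, which is stored little-endian at the start of the stream. The constant should be verified against the MS-OXTNEF specification before being relied on, and real TNEF parsing involves far more than this check.

```c
#include <stdint.h>
#include <stdio.h>

/* TNEF stream signature as commonly documented (verify against MS-OXTNEF). */
#define TNEF_SIGNATURE 0x223E9F78u

int main(int argc, char **argv) {
    const char *path = (argc > 1) ? argv[1] : "winmail.dat";
    FILE *f = fopen(path, "rb");
    if (!f) { perror(path); return 1; }

    unsigned char b[4];
    if (fread(b, 1, 4, f) != 4) {
        fprintf(stderr, "%s: too short to be a TNEF stream\n", path);
        fclose(f);
        return 1;
    }
    fclose(f);

    /* The signature is stored little-endian, so assemble it byte by byte. */
    uint32_t sig = (uint32_t)b[0] | ((uint32_t)b[1] << 8) |
                   ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);

    printf("%s: %s (0x%08X)\n", path,
           sig == TNEF_SIGNATURE ? "looks like a TNEF stream"
                                 : "does not start with the TNEF signature",
           sig);
    return sig == TNEF_SIGNATURE ? 0 : 2;
}
```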