https://en.wikipedia.org/wiki/Acetogen
An acetogen is a microorganism that generates acetate (CH3COO−) as an end product of anaerobic respiration or fermentation. However, this term is usually employed in a narrower sense to refer only to those bacteria and archaea that perform anaerobic respiration and carbon fixation simultaneously through the reductive acetyl coenzyme A (acetyl-CoA) pathway (also known as the Wood-Ljungdahl pathway). These genuine acetogens are also known as "homoacetogens" and they can produce acetyl-CoA (and from that, in most cases, acetate as the end product) from two molecules of carbon dioxide (CO2) and four molecules of molecular hydrogen (H2). This process is known as acetogenesis, and is different from acetate fermentation, although both occur in the absence of molecular oxygen (O2) and produce acetate. Although it was previously thought that only bacteria were acetogens, some archaea can also be considered acetogens. Acetogens are found in a variety of habitats, generally those that are anaerobic (lack oxygen). Acetogens can use a variety of compounds as sources of energy and carbon; the best studied form of acetogenic metabolism involves the use of carbon dioxide as a carbon source and hydrogen as an energy source. Carbon dioxide reduction is carried out by the key enzyme acetyl-CoA synthase. Together with methane-forming archaea, acetogens constitute the last limbs in the anaerobic food web that leads to the production of methane from polymers in the absence of oxygen. Acetogens may represent ancestors of the first bioenergetically active cells in evolution. Metabolic roles Acetogens have diverse metabolic roles, which help them thrive in different environments. One of their metabolic products is acetate, which is an important nutrient for the host and its resident microbial community, most notably in termite guts. Acetogens also serve as "hydrogen sinks" in the termite GI tract. Hydrogen gas inhibits biodegradation, and acetogens consume this hydrogen in the anaerobic environment.
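For reference, the overall stoichiometry commonly cited for this hydrogen-dependent acetogenesis (a textbook summary added here for clarity, not a quotation from the article above) is: 2 CO2 + 4 H2 → CH3COOH + 2 H2O, i.e. two molecules of carbon dioxide and four molecules of hydrogen yield one molecule of acetic acid and two of water.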
https://en.wikipedia.org/wiki/Utility%20computing
Utility computing, or computer utility, is a service provisioning model in which a service provider makes computing resources and infrastructure management available to the customer as needed, and charges them for specific usage rather than a flat rate. Like other types of on-demand computing (such as grid computing), the utility model seeks to maximize the efficient use of resources and/or minimize associated costs. Utility is the packaging of system resources, such as computation, storage and services, as a metered service. This model has the advantage of a low or no initial cost to acquire computer resources; instead, resources are essentially rented. This repackaging of computing services became the foundation of the shift to "on demand" computing, software as a service and cloud computing models that further propagated the idea of computing, application and network as a service. There was some initial skepticism about such a significant shift. However, the new model of computing caught on and eventually became mainstream. IBM, HP and Microsoft were early leaders in the new field of utility computing, with their business units and researchers working on the architecture, payment and development challenges of the new computing model. Google, Amazon and others started to take the lead in 2008, as they established their own utility services for computing, storage and applications. Utility computing can support grid computing which has the characteristic of very large computations or sudden peaks in demand which are supported via a large number of computers. "Utility computing" has usually envisioned some form of virtualization so that the amount of storage or computing power available is considerably larger than that of a single time-sharing computer. Multiple servers are used on the "back end" to make this possible. These might be a dedicated computer cluster specifically built for the purpose of being rented out, or even an under-utilized supercomputer.
https://en.wikipedia.org/wiki/List%20of%20ICD-9%20codes
The following is a list of codes for International Statistical Classification of Diseases and Related Health Problems. List of ICD-9 codes 001–139: infectious and parasitic diseases List of ICD-9 codes 140–239: neoplasms List of ICD-9 codes 240–279: endocrine, nutritional and metabolic diseases, and immunity disorders List of ICD-9 codes 280–289: diseases of the blood and blood-forming organs List of ICD-9 codes 290–319: mental disorders List of ICD-9 codes 320–389: diseases of the nervous system and sense organs List of ICD-9 codes 390–459: diseases of the circulatory system List of ICD-9 codes 460–519: diseases of the respiratory system List of ICD-9 codes 520–579: diseases of the digestive system List of ICD-9 codes 580–629: diseases of the genitourinary system List of ICD-9 codes 630–679: complications of pregnancy, childbirth, and the puerperium List of ICD-9 codes 680–709: diseases of the skin and subcutaneous tissue List of ICD-9 codes 710–739: diseases of the musculoskeletal system and connective tissue List of ICD-9 codes 740–759: congenital anomalies List of ICD-9 codes 760–779: certain conditions originating in the perinatal period List of ICD-9 codes 780–799: symptoms, signs, and ill-defined conditions List of ICD-9 codes 800–999: injury and poisoning List of ICD-9 codes E and V codes: external causes of injury and supplemental classification See also International Statistical Classification of Diseases and Related Health Problems: ICD-9 – provides multiple external links for looking up ICD codes MS Access MDB file at United States Department of Health and Human Services in the downloads section at the bottom References International Classification of Diseases Medical lists
https://en.wikipedia.org/wiki/Poka-yoke
Poka-yoke is a Japanese term that means "mistake-proofing" or "error prevention". A poka-yoke is any mechanism in a process that helps an equipment operator avoid mistakes and defects by preventing, correcting, or drawing attention to human errors as they occur. The concept was formalized, and the term adopted, by Shigeo Shingo as part of the Toyota Production System. Etymology Poka-yoke was originally baka-yoke, but as this means "fool-proofing" (or "idiot-proofing") the name was changed to the milder poka-yoke. Poka-yoke is derived from poka o yokeru, a term in shogi that means avoiding an unthinkably bad move. Usage More broadly, the term can refer to any behavior-shaping constraint designed into a process to prevent incorrect operation by the user. A simple poka-yoke example is demonstrated when the driver of a car equipped with a manual gearbox must press on the clutch pedal (a process step, therefore a poka-yoke) prior to starting the car. The interlock serves to prevent unintended movement of the car. Another example of a poka-yoke is a car equipped with an automatic transmission, which has a switch that requires the car to be in "Park" or "Neutral" before the car can be started (some automatic transmissions require the brake pedal to be depressed as well). These serve as behavior-shaping constraints, as the action of "car in Park (or Neutral)" or "foot depressing the clutch/brake pedal" must be performed before the car is allowed to start. The requirement of a depressed brake pedal to shift most cars with an automatic transmission from "Park" to any other gear is yet another example of a poka-yoke application. Over time, the driver's behavior conforms to the requirements through repetition and habit. History The term poka-yoke was applied by Shigeo Shingo in the 1960s to industrial processes designed to prevent human errors. Shingo redesigned a process in which factory workers, while assembling a small switch, would often forget to insert the required spring.
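As an illustration of the interlock idea described above, the sketch below models the start-permission check in software. It is a hypothetical example for explanation only, not part of any real vehicle system; the function and parameter names are invented.

```python
def may_start_engine(transmission: str, clutch_depressed: bool,
                     brake_depressed: bool, gear: str) -> bool:
    """Illustrative poka-yoke interlock: the starter is only enabled
    when the preconditions described in the article are satisfied."""
    if transmission == "manual":
        # Manual gearbox: the clutch must be pressed before cranking.
        return clutch_depressed
    if transmission == "automatic":
        # Automatic: must be in Park or Neutral; some designs also
        # require the brake pedal to be depressed.
        return gear in ("P", "N") and brake_depressed
    return False

# Example: an automatic car left in Drive cannot be started.
assert may_start_engine("automatic", False, True, "D") is False
assert may_start_engine("manual", True, False, "1") is True
```

The point of the design is that the incorrect action is made impossible (or at least conspicuous) rather than relying on the operator's memory.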
https://en.wikipedia.org/wiki/Simply%20typed%20lambda%20calculus
The simply typed lambda calculus (λ→), a form of type theory, is a typed interpretation of the lambda calculus with only one type constructor (→) that builds function types. It is the canonical and simplest example of a typed lambda calculus. The simply typed lambda calculus was originally introduced by Alonzo Church in 1940 as an attempt to avoid paradoxical use of the untyped lambda calculus. The term simple type is also used to refer to extensions of the simply typed lambda calculus such as products, coproducts or natural numbers (System T) or even full recursion (like PCF). In contrast, systems which introduce polymorphic types (like System F) or dependent types (like the Logical Framework) are not considered simply typed. The simple types, except for full recursion, are still considered simple because the Church encodings of such structures can be done using only → and suitable type variables, while polymorphism and dependency cannot. Syntax In this article, the symbols σ and τ are used to range over types. Informally, the function type σ → τ refers to the type of functions that, given an input of type σ, produce an output of type τ. By convention, → associates to the right: σ → τ → ρ is read as σ → (τ → ρ). To define the types, a set of base types, B, must first be defined. These are sometimes called atomic types or type constants. With this fixed, the syntax of types is: τ ::= τ → τ | T, where T ∈ B. For example, B = {a, b} generates an infinite set of types starting with a, b, a → a, a → b, b → a, b → b, a → (a → a), and so on. A set of term constants is also fixed for the base types. For example, it might be assumed that one of the base types is nat, and its term constants could be the natural numbers. In the original presentation, Church used only two base types: o for "the type of propositions" and ι for "the type of individuals". The type o has no term constants, whereas ι has one term constant. Frequently the calculus with only one base type, usually o, is considered. The syntax of the simply typed lambda calculus is essentially that of the lambda calculus itself. The term x:τ denotes that the variable x has type τ.
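To make the syntax and typing discipline above concrete, here is a minimal sketch of a type checker for the simply typed lambda calculus in Python. It assumes a single base type and the standard rules for variables, abstraction, and application; the class and function names are illustrative, not taken from the article.

```python
from dataclasses import dataclass

# Types: a base type, or a function type sigma -> tau built with the arrow constructor.
@dataclass(frozen=True)
class Base:
    name: str          # e.g. "iota" or "o"

@dataclass(frozen=True)
class Arrow:
    arg: object        # input type (sigma)
    res: object        # output type (tau)

# Terms: variables, lambda-abstractions with an annotated argument type, and applications.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    var: str
    var_type: object
    body: object

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

def type_of(term, ctx=None):
    """Return the type of `term` in context `ctx` (a dict mapping variable names to types)."""
    ctx = ctx or {}
    if isinstance(term, Var):
        return ctx[term.name]
    if isinstance(term, Lam):
        body_t = type_of(term.body, {**ctx, term.var: term.var_type})
        return Arrow(term.var_type, body_t)
    if isinstance(term, App):
        fun_t = type_of(term.fun, ctx)
        arg_t = type_of(term.arg, ctx)
        if isinstance(fun_t, Arrow) and fun_t.arg == arg_t:
            return fun_t.res
        raise TypeError("ill-typed application")
    raise TypeError("unknown term")

# Example: the identity function on the base type iota has type iota -> iota.
iota = Base("iota")
ident = Lam("x", iota, Var("x"))
assert type_of(ident) == Arrow(iota, iota)
```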
https://en.wikipedia.org/wiki/Symbolic%20dynamics
In mathematics, symbolic dynamics is the practice of modeling a topological or smooth dynamical system by a discrete space consisting of infinite sequences of abstract symbols, each of which corresponds to a state of the system, with the dynamics (evolution) given by the shift operator. Formally, a Markov partition is used to provide a finite cover for the smooth system; each set of the cover is associated with a single symbol, and the sequences of symbols result as a trajectory of the system moves from one covering set to another. History The idea goes back to Jacques Hadamard's 1898 paper on the geodesics on surfaces of negative curvature. It was applied by Marston Morse in 1921 to the construction of a nonperiodic recurrent geodesic. Related work was done by Emil Artin in 1924 (for the system now called Artin billiard), Pekka Myrberg, Paul Koebe, Jakob Nielsen, G. A. Hedlund. The first formal treatment was developed by Morse and Hedlund in their 1938 paper. George Birkhoff, Norman Levinson and the pair Mary Cartwright and J. E. Littlewood have applied similar methods to qualitative analysis of nonautonomous second order differential equations. Claude Shannon used symbolic sequences and shifts of finite type in his 1948 paper A mathematical theory of communication that gave birth to information theory. During the late 1960s the method of symbolic dynamics was developed to hyperbolic toral automorphisms by Roy Adler and Benjamin Weiss, and to Anosov diffeomorphisms by Yakov Sinai who used the symbolic model to construct Gibbs measures. In the early 1970s the theory was extended to Anosov flows by Marina Ratner, and to Axiom A diffeomorphisms and flows by Rufus Bowen. A spectacular application of the methods of symbolic dynamics is Sharkovskii's theorem about periodic orbits of a continuous map of an interval into itself (1964). Examples Concepts such as heteroclinic orbits and homoclinic orbits have a particularly simple representation in symbolic dynamics.
https://en.wikipedia.org/wiki/Subshift%20of%20finite%20type
In mathematics, subshifts of finite type are used to model dynamical systems, and in particular are the objects of study in symbolic dynamics and ergodic theory. They also describe the set of all possible sequences executed by a finite state machine. The most widely studied shift spaces are the subshifts of finite type. Definition Let V be a finite set of n symbols (the alphabet). Let X = V^Z denote the set of all bi-infinite sequences of elements of V together with the shift operator T. We endow V with the discrete topology and X with the product topology. A symbolic flow or subshift is a closed T-invariant subset Y of X, and the associated language L_Y is the set of finite subsequences of Y. Now let A be an n × n adjacency matrix with entries in {0, 1}. Using these elements we construct a directed graph G = (V, E) with V the set of vertices and E the set of edges, where E contains the directed edge v_i → v_j if and only if A_{ij} = 1. Let Y be the set of all infinite admissible sequences of edges, where by admissible it is meant that the sequence is a walk of the graph, and the sequence can be either one-sided or two-sided infinite. Let T be the left shift operator on such sequences; it plays the role of the time-evolution operator of the dynamical system. A subshift of finite type is then defined as a pair (Y, T) obtained in this way. If the sequence extends to infinity in only one direction, it is called a one-sided subshift of finite type, and if it is bilateral, it is called a two-sided subshift of finite type. Formally, one may define the one-sided sequences as Σ_A^+ = {(x_0, x_1, x_2, ...) : x_j ∈ V, A_{x_j, x_{j+1}} = 1 for all j ≥ 0}. This is the space of all sequences of symbols such that the symbol x_j can be followed by the symbol x_{j+1} only if the (x_j, x_{j+1})-th entry of the matrix A is 1. The space of all bi-infinite sequences is defined analogously: Σ_A = {(..., x_{-1}, x_0, x_1, ...) : x_j ∈ V, A_{x_j, x_{j+1}} = 1 for all j ∈ Z}. The shift operator T maps a sequence in the one- or two-sided shift to another by shifting all symbols to the left, i.e. (T x)_j = x_{j+1}. Clearly this map is only invertible in the case of the two-sided shift. A subshift of finite type is called transitive if G is strongly connected.
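The following short sketch (illustrative, with invented names) shows the admissibility condition in code: a finite word over the alphabet {0, ..., n−1} is allowed by the subshift defined by an adjacency matrix A exactly when every adjacent pair of symbols is permitted by A, and the one-sided shift map simply drops the first symbol.

```python
# Adjacency matrix over the alphabet {0, 1}: the "golden mean shift",
# in which the symbol 1 may not be followed by another 1.
A = [
    [1, 1],   # 0 -> 0 and 0 -> 1 allowed
    [1, 0],   # 1 -> 0 allowed, 1 -> 1 forbidden
]

def admissible(word, A):
    """True if every adjacent pair of symbols is permitted by A."""
    return all(A[a][b] == 1 for a, b in zip(word, word[1:]))

def shift(word):
    """One-sided shift operator: drop the leftmost symbol."""
    return word[1:]

assert admissible([0, 1, 0, 0, 1], A)       # contains no "11" block
assert not admissible([0, 1, 1, 0], A)      # contains the forbidden block "11"
assert shift([0, 1, 0, 0]) == [1, 0, 0]
```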
https://en.wikipedia.org/wiki/Ferranti%20Pegasus
Pegasus was an early British vacuum-tube (valve) computer built by Ferranti, Ltd that pioneered design features to make life easier for both engineers and programmers. Originally it was named the Ferranti Package Computer as its hardware design followed that of the Elliott 401 with modular plug-in packages. Much of the development was the product of three men: W. S. (Bill) Elliott (hardware); Christopher Strachey (software) and Bernard Swann (marketing and customer support). It was Ferranti's most popular valve computer with 38 being sold. The first Pegasus was delivered in 1956 and the last was delivered in 1959. Ferranti received funding for the development from the National Research Development Corporation (NRDC). At least two Pegasus machines survive, one in The Science Museum, London and one which was displayed in the Science and Industry Museum, Manchester but which has now been removed to the storage in the Science Museum archives at Wroughton. The Pegasus in The Science Museum, London ran its first program in December 1959 and was regularly demonstrated until 2009 when it developed a severe electrical fault. In early 2014, the Science Museum decided to retire it permanently, effectively ending the life of one of the world's oldest working computers. The Pegasus officially held the title of the world's oldest computer until 2012, when the restoration of the Harwell computer was completed at the National Museum of Computing. Design In those days it was common for it to be unclear whether a failure was due to the hardware or the program. As a consequence, Christopher Strachey of NRDC, who was himself a brilliant programmer, recommended the following design objectives: The necessity for optimum programming (favoured by Alan Turing) was to be minimised, "because it tended to become a time-wasting intellectual hobby of the programmers". The needs of the programmer were to be a governing factor in selecting the instruction set. It was to be cheap and reliable.
https://en.wikipedia.org/wiki/Anticaking%20agent
An anticaking agent is an additive placed in powdered or granulated materials, such as table salt or confectioneries, to prevent the formation of lumps (caking) and for easing packaging, transport, flowability, and consumption. Caking mechanisms depend on the nature of the material. Crystalline solids often cake by formation of liquid bridge and subsequent fusion of microcrystals. Amorphous materials can cake by glass transitions and changes in viscosity. Polymorphic phase transitions can also induce caking. Some anticaking agents function by absorbing excess moisture or by coating particles and making them water-repellent. Calcium silicate (CaSiO3), a commonly used anti-caking agent, added to e.g. table salt, absorbs both water and oil. Anticaking agents are also used in non-food items such as road salt, fertilisers, cosmetics, and detergents. Some studies suggest that anticaking agents may have a negative effect on the nutritional content of food; one such study indicated that most anti-caking agents result in the additional degradation of vitamin C added to food. Examples An anticaking agent in salt is denoted in the ingredients, for example, as "anti-caking agent (554)", which is sodium aluminosilicate. This product is present in many commercial table salts as well as dried milk, egg mixes, sugar products, flours and spices. In Europe, sodium ferrocyanide (535) and potassium ferrocyanide (536) are more common anticaking agents in table salt. "Natural" anticaking agents used in more expensive table salt include calcium carbonate and magnesium carbonate. Diatomaceous earth, mostly consisting of silicon dioxide (SiO2), may also be used as an anticaking agent in animal foods, typically mixed at 2% rate of a product dry weight. List of anticaking agents The most widely used anticaking agents include the stearates of calcium and magnesium, silica and various silicates, talc, as well as flour and starch. Ferrocyanides are used for table salt. The following a
https://en.wikipedia.org/wiki/Shaft%20sinking
Shaft mining or shaft sinking is the action of excavating a mine shaft from the top down, where there is initially no access to the bottom. Shallow shafts, typically sunk for civil engineering projects, differ greatly in execution method from deep shafts, typically sunk for mining projects. Shaft sinking is one of the most difficult of all mine development methods: restricted space, gravity, groundwater and specialized procedures make the task quite formidable. Shafts may be sunk by conventional drill and blast or mechanised means. Historically, mine shaft sinking has been among the most dangerous of all the mining occupations and the preserve of mining contractors called sinkers. Today shaft sinking contractors are concentrated in Canada, Germany, China and South Africa. The modern shaft sinking industry is gradually shifting further towards greater mechanisation. Recent innovations in the form of full-face shaft boring (akin to a vertical tunnel boring machine) have shown promise but the use of this method is, as of 2019, not widespread. Mine shafts Mine shafts are vertical or near-vertical tunnels, which are "sunk" as a means of accessing an underground ore body, during the development of an underground mine. The shape (in plan view), dimensions and depth of mine shafts vary greatly in response to the specific needs of the mine they are part of and the geology they are sunk through. For example, in North and South America, smaller shafts are designed to be rectangular in plan view with timber supports. Larger shafts are round in plan and are concrete lined. Mine shafts may be used for a variety of purposes, including as a means of escape in the event of an emergency underground and allowing for the movement of: People Materials Mine services (such as compressed air, water, backfill, power, communications and fuel) Ventilation air Broken rock (in the form of payable ore, or non payable waste) Or any combination of the above When the top of the excav
https://en.wikipedia.org/wiki/OpenURL
An OpenURL is similar to a web address, but instead of referring to a physical website, it refers to an article, book, patent, or other resource within a website. OpenURLs are similar to permalinks because they are permanently connected to a resource, regardless of which website the resource is connected to. Libraries and other resource centers are the most common place to find OpenURLs because an OpenURL can help Internet users find a copy of a resource that they may otherwise have limited access to. The source that generates an OpenURL is often a bibliographic citation or bibliographic record in a database. Examples of these databases include Ovid Technologies, Web of Science, Chemical Abstracts Service, Modern Language Association and Google Scholar. The National Information Standards Organization (NISO) has developed standards for OpenURL and its data container as American National Standards Institute (ANSI) standard ANSI/NISO Z39.88-2004. OpenURL standards create a clear structure for links that go from information resource databases (sources) to library services (targets). A target is a resource or service that helps satisfy a user's information needs. Examples of targets include full-text repositories, online journals, online library catalogs and other Web resources and services. OpenURL knowledge bases provide links to the appropriate targets available. History OpenURL was created by Herbert Van de Sompel, a librarian at the University of Ghent, in the late 1990s. His link-server software, SFX, was purchased by the library automation company Ex Libris Group which popularized OpenURL in the information industry. In 2005, a revised version of OpenURL (version 1.0) became ANSI/NISO standard Z39.88-2004, with Van de Sompel's version designated as version 0.1. The new standard provided a framework for describing new formats, as well as defining XML versions of the various formats. In 2006 a research report found some problems affecting the efficiency o
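As a rough illustration of the kind of link structure the standard describes, the sketch below builds a key/encoded-value (KEV) style query string for a journal-article citation and appends it to a hypothetical link-resolver base URL. The resolver address is invented, and the exact field names should be checked against the ANSI/NISO Z39.88-2004 registry; this shows the general shape, not an authoritative encoding.

```python
from urllib.parse import urlencode

# Hypothetical institutional link resolver (placeholder address).
resolver = "https://resolver.example.edu/openurl"

# A journal-article citation expressed as key/value pairs in the spirit
# of the OpenURL 1.0 KEV format (field names shown for illustration).
citation = {
    "url_ver": "Z39.88-2004",
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    "rft.genre": "article",
    "rft.jtitle": "Example Journal of Things",
    "rft.atitle": "An example article title",
    "rft.volume": "12",
    "rft.issue": "3",
    "rft.spage": "45",
    "rft.date": "2004",
}

openurl = resolver + "?" + urlencode(citation)
print(openurl)
```

The resolver (the target side) would parse these keys and decide which copy of the article the requesting user is entitled to see.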
https://en.wikipedia.org/wiki/Abutment
An abutment is the substructure at the ends of a bridge span or dam supporting its superstructure. Single-span bridges have abutments at each end that provide vertical and lateral support for the span, as well as acting as retaining walls to resist lateral movement of the earthen fill of the bridge approach. Multi-span bridges require piers to support ends of spans unsupported by abutments. Dam abutments are generally the sides of a valley or gorge, but may be artificial in order to support arch dams such as Kurobe Dam in Japan. The civil engineering term may also refer to the structure supporting one side of an arch, or masonry used to resist the lateral forces of a vault. The impost or abacus of a column in classical architecture may also serve as an abutment to an arch. The word derives from the verb "abut", meaning to "touch by means of a mutual border". Use An abutment may be used to transfer loads from a superstructure to its foundation, to resist or transfer self weight, lateral loads (such as the earth pressure) and wind loads, to support one end of an approach slab or to balance vertical and horizontal forces in an arch bridge. Types Types of abutments include: Gravity abutment, resists horizontal earth pressure with its own dead weight U abutment, U-shaped gravity abutment Cantilever abutment, cantilever retaining wall designed for large vertical loads Full height abutment, cantilever abutment that extends from the underpass grade line to the grade line of the overpass roadway Stub abutment, short abutments at the top of an embankment or slope, usually supported on piles Semi-stub abutment, size between full height and stub abutment Counterfort abutment, similar to counterfort retaining walls Spill-through abutment, vertical buttresses with open spaces between them MSE systems, "Reinforced Earth" system: modular units with metallic reinforcement Pile bent abutment, similar to spill-through abutment References External links Ohio Departme
https://en.wikipedia.org/wiki/LogMeIn%20Hamachi
LogMeIn Hamachi is a virtual private network (VPN) application developed and released in 2004 by Alex Pankratov. It is capable of establishing direct links between computers that are behind network address translation (NAT) firewalls without requiring reconfiguration (when the user's PC can be accessed directly without relays from the Internet/WAN side). Like other VPNs, it establishes a connection over the Internet that emulates the connection that would exist if the computers were connected over a local area network (LAN). Hamachi became a LogMeIn product after the acquisition of Applied Networking Inc. in 2006. It is currently available as a production version for Microsoft Windows and macOS, as a beta version for Linux, and as a system-VPN-based client compatible with Android and iOS. For paid subscribers Hamachi runs in the background on idle computers. The feature was previously available to all users but became restricted to paid subscribers only as of November 19, 2012. Operational summary Hamachi is a proprietary centrally-managed VPN system, consisting of the server cluster managed by the vendor of the system and the client software, which is installed on end-user devices. Client software adds a virtual network interface to a computer, and it is used for intercepting outbound as well as injecting inbound VPN traffic. Outbound traffic sent by the operating system to this interface is delivered to the client software, which encrypts and authenticates it and then sends it to the destination VPN peer over a specially initiated UDP connection. Hamachi currently handles tunneling of IP traffic including broadcasts and multicast. The Windows version also recognizes and tunnels IPX traffic. Each client establishes and maintains a control connection to the server cluster. When the connection is established, the client goes through a login sequence, followed by the discovery process and state synchronization. The login step authenticates the client to the serve
https://en.wikipedia.org/wiki/Savitzky%E2%80%93Golay%20filter
A Savitzky–Golay filter is a digital filter that can be applied to a set of digital data points for the purpose of smoothing the data, that is, to increase the precision of the data without distorting the signal tendency. This is achieved, in a process known as convolution, by fitting successive sub-sets of adjacent data points with a low-degree polynomial by the method of linear least squares. When the data points are equally spaced, an analytical solution to the least-squares equations can be found, in the form of a single set of "convolution coefficients" that can be applied to all data sub-sets, to give estimates of the smoothed signal (or derivatives of the smoothed signal) at the central point of each sub-set. The method, based on established mathematical procedures, was popularized by Abraham Savitzky and Marcel J. E. Golay, who published tables of convolution coefficients for various polynomials and sub-set sizes in 1964. Some errors in the tables have been corrected. The method has been extended for the treatment of 2- and 3-dimensional data. Savitzky and Golay's paper is one of the most widely cited papers in the journal Analytical Chemistry and is classed by that journal as one of its "10 seminal papers", saying "it can be argued that the dawn of the computer-controlled analytical instrument can be traced to this article". Applications The data consist of a set of points {xj, yj}, j = 1, ..., n, where xj is an independent variable and yj is an observed value. They are treated with a set of m convolution coefficients, Ci, according to the expression Yj = Σ Ci yj+i, where the index i runs from −(m − 1)/2 to (m − 1)/2. Selected convolution coefficients are shown in the tables, below. For example, for smoothing by a 5-point quadratic polynomial, m = 5, i = −2, −1, 0, 1, 2 and the jth smoothed data point, Yj, is given by Yj = C−2 yj−2 + C−1 yj−1 + C0 yj + C1 yj+1 + C2 yj+2, where C−2 = −3/35, C−1 = 12/35, etc. There are numerous applications of smoothing, which is performed primarily to make the data appear to be less noisy than it really is. The following are applications.
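The 5-point quadratic case mentioned above can be written out directly. The sketch below applies the coefficients (−3, 12, 17, 12, −3)/35 to a noisy series with NumPy; the signal is an arbitrary example, and only interior points (those with two neighbours on each side) are smoothed.

```python
import numpy as np

# Convolution coefficients for 5-point quadratic (or cubic) smoothing,
# C_i for i = -2, -1, 0, 1, 2; they sum to 1.
coeffs = np.array([-3, 12, 17, 12, -3]) / 35.0

# An arbitrary noisy example signal.
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 50)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)

# Smoothed values at the central point of each 5-point window
# (interior points only; the kernel is symmetric, so its orientation
# in the convolution does not matter).
y_smooth = np.convolve(y, coeffs, mode="valid")   # length 50 - 4 = 46
print(y_smooth[:5])
```

For the interior points, scipy.signal.savgol_filter(y, window_length=5, polyorder=2) should give the same values while computing the coefficients automatically.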
https://en.wikipedia.org/wiki/L%C3%A1szl%C3%B3%20Kalm%C3%A1r
László Kalmár (27 March 1905, Edde – 2 August 1976, Mátraháza) was a Hungarian mathematician and Professor at the University of Szeged. Kalmár is considered the founder of mathematical logic and theoretical computer science in Hungary. Biography Kalmár was of Jewish ancestry. His early life mixed promise and tragedy. His father died when he was young, and his mother died when he was 17, the year he entered the University of Budapest, making him essentially an orphan. Kalmár's brilliance manifested itself while in Budapest schools. At the University of Budapest, his teachers included Kürschák and Fejér. His fellow students included the future logician Rózsa Péter. Kalmár graduated in 1927. He discovered mathematical logic, his chosen field, while visiting Göttingen in 1929. Upon completing his doctorate at Budapest, he took up a position at the University of Szeged. That university was mostly made up of staff from the former University of Kolozsvár, a major Hungarian university before World War I that found itself after the War in Romania. Kolozsvár was renamed Cluj. The Hungarian university moved to Szeged in 1920, where there had previously been no university. The appointment of Haar and Riesz turned Szeged into a major research center for mathematics. Kalmár began his career as a research assistant to Haar and Riesz. Kalmár was appointed a full professor at Szeged in 1947. He was the inaugural holder of Szeged's chair for the Foundations of Mathematics and Computer Science. He also founded Szeged's Cybernetic Laboratory and the Research Group for Mathematical Logic and Automata Theory. In mathematical logic, Kalmár proved that certain classes of formulas of the first-order predicate calculus were decidable. In 1936, he proved that the predicate calculus could be formulated using a single binary predicate, if the recursive definition of a term was sufficiently rich. (This result is commonly attributed to a 1954 paper of Quine's.) He discovered an alternative fo
https://en.wikipedia.org/wiki/Western%20Digital%20FD1771
The FD1771, sometimes WD1771, is the first in a line of floppy disk controllers produced by Western Digital. It uses single density FM encoding introduced in the IBM 3740. Later models in the series added support for MFM encoding and increasingly added onboard circuitry that formerly had to be implemented in external components. Originally packaged as 40-pin dual in-line package (DIP) format, later models moved to a 28-pin format that further lowered implementation costs. Derivatives The FD1771 was succeeded by many derivatives that were mostly software-compatible: The FD1781 was designed for double density, but required external modulation and demodulation circuitry, so it could support MFM, M2FM, GCR or other double-density encodings. The FD1791-FD1797 series added internal support for double density (MFM) modulation, compatible with the IBM System/34 disk format. They required an external data separator. The WD1761-WD1767 series were versions of the FD179x series rated for a maximum clock frequency of 1 MHz, resulting in a data rate limit of 125 kbit/s for single density and 250 kbit/s for double density, thus preventing them from being used for 8-in (200 mm) floppy drives or the later "high-density" or 90 mm floppy drives. These were sold at a lower price point and widely used in home computer floppy drives. The WD2791-WD2797 series added an internal data separator using an analog phase-locked loop, with some external passive components required for the VCO. They took a 1 MHz or 2 MHz clock and were intended for and drives. The WD1770, WD1772, and WD1773 added an internal digital data separator and write precompensator, eliminating the need for external passive components but raising the clock rate requirement to 8 MHz. They supported double density, despite the apparent regression of the part number, and were packaged in 28-pin DIP packages. The WD1772PH02-02 was a version of the chip that Atari fitted to the Atari STE which supported high density (500
https://en.wikipedia.org/wiki/Computational%20cognition
Computational cognition (sometimes referred to as computational cognitive science or computational psychology or cognitive simulation) is the study of the computational basis of learning and inference by mathematical modeling, computer simulation, and behavioral experiments. In psychology, it is an approach which develops computational models based on experimental results. It seeks to understand the basis behind the human method of processing information. Early on, computational cognitive scientists sought to bring back and create a scientific form of Brentano's psychology. Artificial intelligence There are two main purposes for the production of artificial intelligence: to produce intelligent behaviors regardless of the quality of the results, and to model after intelligent behaviors found in nature. In the beginning of its existence, there was no need for artificial intelligence to emulate the same behavior as human cognition. In the 1960s, economist Herbert Simon and Allen Newell attempted to formalize human problem-solving skills by using the results of psychological studies to develop programs that implement the same problem-solving techniques as people would. Their work laid the foundation for symbolic AI and computational cognition, and even some advancements for cognitive science and cognitive psychology. The field of symbolic AI is based on the physical symbol systems hypothesis by Simon and Newell, which states that expressing aspects of cognitive intelligence can be achieved through the manipulation of symbols. However, John McCarthy focused more on the initial purpose of artificial intelligence, which is to break down the essence of logical and abstract reasoning regardless of whether or not humans employ the same mechanism. Over the next decades, the progress made in artificial intelligence started to be focused more on developing logic-based and knowledge-based programs, veering away from the original purpose of symbolic AI. Researchers started
https://en.wikipedia.org/wiki/Tatung%20Company
Tatung Company is a multinational corporation established in 1918 and headquartered in Zhongshan, Taipei, Taiwan. Description Established in 1918 and headquartered in Taipei, Tatung Company holds 3 business groups, which include 8 business units: Industrial Appliance BU, Motor BU, Wire & Cable BU, Solar BU, Smart Meter BU, System Integration BU, Appliance BU, and Advanced Electronics BU. As a conglomerate, Tatung's investees are involved in major industries such as optoelectronics, energy, system integration, industrial systems, branded retail channels, and asset development. History Xie Zhi Business Enterprise, the forerunner of Tatung Company, was established in 1918 by Shang-Zhi Lin. It was involved in high-profile construction projects, including the Tamsui River embankment project and the Executive Yuan building. In 1939, Tatung Iron Works was established as the company ventured into iron and steel manufacturing. Following the arrival of the ROC administration in 1945, Tatung Iron Works was renamed Tatung Steel and Machinery Manufacturing Company. The company began mass production of electrical motors and appliances 10 years later, in 1949. In 1962 the company became publicly listed on the Taiwan Stock Exchange, and was renamed Tatung Company in 1968. A year later, Tatung began production of color TVs, and adopted the "Tatung Boy" mascot, which became a Taiwanese cultural symbol. Timeline 1970 Revenues exceeded NT$2.2 billion, making Tatung Taiwan's foremost private company. 1972 W. S. Lin, the grandson of Shang-Zhi Lin, was appointed as president of Tatung. Shortly thereafter he was implicated in a case of embezzlement at Tatung which would take more than ten years to litigate. 1977 Participated in the Ten Major Construction Projects with the construction of a slag treatment facility for China Steel and provision for Chiang Kai-shek International Airport's power control station. 2000 Chunghwa Picture Tubes was listed on the OTC market
https://en.wikipedia.org/wiki/8x8
8x8 Inc. is an American provider of Voice over IP products. Its products include cloud-based voice, contact center, video, mobile and unified communications for businesses. Since 2018, 8x8 manages Jitsi. History The company was founded in 1987 by Chi-Shin Wang and Y. W. Sing formerly of Weitek as Integrated Information Technology, Inc., or IIT. The name was changed in the mid-1990s. According to the company, IIT began as an integrated circuit designer. The company produced math coprocessors for x86 microprocessors, as well as Graphics accelerator cards for the personal computer market during the late 1980s. The company later changed its name to 8x8, and began producing products for the videoconferencing market. 8x8 went public on the NASDAQ market in 1997. The company moved their trading to NYSE in 2017, under the ticker symbol EGHT. In 1999, 8x8 acquired two companies, Odisei and U|Force, to acquire network and server VoIP technologies. In March 2000, 8x8 relaunched itself as a VoIP service provider under the name Netergy Networks. The company changed its name back to 8x8 in July 2001. 8x8 began trading on the Nasdaq SmallCap Market on 26 July 2002. The company's stock was listed on the New York Stock Exchange for a time before switching back to Nasdaq in November 2022. In 2003, the company launched a videophone service. In July 2007, after startup SunRocket was liquidated, 8x8 entered an agreement to accept 200,000 of its customers. Gartner has listed 8x8 several times as a Leader for UCaaS (Unified Communications as a Service) within its Gartner Magic Quadrant, a series of technology market reports. 8x8 has been awarded 128 patents related to semiconductors, computer architecture, video processing algorithms, videophones and communications technologies and security. In 2018, the company acquired Jitsi and Jitsi Meet. Acquisitions In May 2010, 8x8 acquired Central Host, a California-based managed hosting company. In June 2011, the company announced the acq
https://en.wikipedia.org/wiki/Indeterminate%20growth
In biology and botany, indeterminate growth is growth that is not terminated in contrast to determinate growth that stops once a genetically pre-determined structure has completely formed. Thus, a plant that grows and produces flowers and fruit until killed by frost or some other external factor is called indeterminate. For example, the term is applied to tomato varieties that grow in a rather gangly fashion, producing fruit throughout the growing season. In contrast, a determinate tomato plant grows in a more bushy shape and is most productive for a single, larger harvest, then either tapers off with minimal new growth or fruit or dies. Inflorescences In reference to an inflorescence (a shoot specialised for bearing flowers, and bearing no leaves other than bracts), an indeterminate type (such as a raceme) is one in which the first flowers to develop and open are from the buds at the base, followed progressively by buds nearer to the growing tip. The growth of the shoot is not impeded by the opening of the early flowers or development of fruits and its appearance is of growing, producing, and maturing flowers and fruit indefinitely. In practice the continued growth of the terminal end necessarily peters out sooner or later, though without producing any definite terminal flower, and in some species it may stop growing before any of the buds have opened. Not all plants produce indeterminate inflorescences however; some produce a definite terminal flower that terminates the development of new buds towards the tip of that inflorescence. In most species that produce a determinate inflorescence in this way, all of the flower buds are formed before the first ones begin to open, and all open more or less at the same time. In some species with determinate inflorescences however, the terminal flower blooms first, which stops the elongation of the main axis, but side buds develop lower down. One type of example is Dianthus; another type is exemplified by Allium; and yet ot
https://en.wikipedia.org/wiki/Aura%20Battler%20Dunbine
Aura Battler Dunbine is an anime television series created by Yoshiyuki Tomino and produced by Sotsu and Sunrise. Forty-nine episodes aired on Nagoya TV from February 5, 1983, to January 21, 1984. A three-episode anime OVA sequel entitled New Story of Aura Battler Dunbine (also known as The Tale of Neo Byston Well) was released in 1988. The series was later dubbed by ADV Films and was released to DVD in North America, along with the original Japanese version, in 2003. It soon went out of print and, until 2018, was only available as a digital purchase from the now-defunct Daisuki site; Sentai Filmworks then licensed the series. Premise The story is set in Byston Well, a parallel world that resembles the countryside of medieval Europe, with kingdoms ruled by monarchs in castles, armies of unicorn-riding cavalry armed with swords and crossbows, and little winged creatures called Ferario flying about offering help or hindrance depending on their mood. The main draw of the series was the insect-like Aura Battlers, used by the population of Byston Well to fight their wars. These fighting suits are powered by a powerful energy called "aura" or "life energy." Certain people are strong enough in aura-energy to act as a power supply for these mecha, making them Aura Warriors. Plot The series follows Shō Zama as he finds himself pulled into the world of Byston Well during a vehicular incident with one of his rivals. Byston Well lies in another dimension, located between the sea and the land, and is populated with dragons, castles, knights, and powerful robots known as Aura Battlers. Once Shō is discovered to possess a formidable "aura", he is drafted into the Byston Well conflict as the pilot of the lavender-colored Dunbine. As in other works by Tomino, a young man is caught in the midst of an ongoing war that threatens to destabilize the world. There are romances that cut across battle lines, and non-stop battles between elaborate fighting craft on land and in the air. Most
https://en.wikipedia.org/wiki/Game%20demo
A game demo is a trial version of a video game that is limited to a certain time period or a point in progress. A game demo comes in forms such as shareware, demo disc, downloadable software, and tech demos. Distribution In the early 1990s, shareware distribution was a popular method for publishing games for smaller developers, including then-fledgling companies such as Apogee Software (now 3D Realms), Epic MegaGames (now Epic Games), and id Software. It gave consumers the chance to try a trial portion of the game, usually restricted to the game's complete first section or "episode", before purchasing the rest of the adventure. Racks of games on single 5" and later 3.5" floppy disks were common in many stores, often very cheaply. Since the shareware versions were essentially free, the cost only needed to cover the disk and minimal packaging. Sometimes, the demo disks were packaged within the box of another game by the same company. As the increasing size of games in the mid-1990s made them impractical to fit on floppy disks, and retail publishers and developers began to earnestly mimic the practice, shareware games were replaced by shorter demos that were either distributed free on CDs with gaming magazines or as free downloads over the Internet, in some cases becoming exclusive content for specific websites. Shareware was also the distribution method of choice of early modern first-person shooters (FPS). There is a technical difference between shareware and demos. Up to the early 1990s, shareware could easily be upgraded to the full version by adding the "other episodes" or full portion of the game; this would leave the existing shareware files intact. Demos are different in that they are "self-contained" programs that cannot be upgraded to the full version. An example is the Descent shareware versus the Descent II demo; players were able to retain their saved games on the former but not the latter. Magazines that include the demos on a CD or DVD and likewise
https://en.wikipedia.org/wiki/Phagosome
In cell biology, a phagosome is a vesicle formed around a particle engulfed by a phagocyte via phagocytosis. Professional phagocytes include macrophages, neutrophils, and dendritic cells (DCs). A phagosome is formed by the fusion of the cell membrane around a microorganism, a senescent cell or an apoptotic cell. Phagosomes have membrane-bound proteins to recruit and fuse with lysosomes to form mature phagolysosomes. The lysosomes contain hydrolytic enzymes and reactive oxygen species (ROS) which kill and digest the pathogens. Phagosomes can also form in non-professional phagocytes, but they can only engulf a smaller range of particles, and do not contain ROS. The useful materials (e.g. amino acids) from the digested particles are moved into the cytosol, and waste is removed by exocytosis. Phagosome formation is crucial for tissue homeostasis and both innate and adaptive host defense against pathogens. However, some bacteria can exploit phagocytosis as an invasion strategy. They either reproduce inside of the phagolysosome (e.g. Coxiella spp.) or escape into the cytoplasm before the phagosome fuses with the lysosome (e.g. Rickettsia spp.). Many Mycobacteria, including Mycobacterium tuberculosis and Mycobacterium avium paratuberculosis, can manipulate the host macrophage to prevent lysosomes from fusing with phagosomes and creating mature phagolysosomes. Such incomplete maturation of the phagosome maintains an environment favorable to the pathogens inside it. Formation Phagosomes are large enough to degrade whole bacteria, or apoptotic and senescent cells, which are usually >0.5μm in diameter. This means a phagosome is several orders of magnitude bigger than an endosome, which is measured in nanometres. Phagosomes are formed when pathogens or opsonins bind to a transmembrane receptor, which are randomly distributed on the phagocyte cell surface. Upon binding, "outside-in" signalling triggers actin polymerisation and pseudopodia formation, which surrounds and fus
https://en.wikipedia.org/wiki/Digital%20Data%20Communications%20Message%20Protocol
Digital Data Communications Message Protocol (DDCMP) is a byte-oriented communications protocol devised by Digital Equipment Corporation in 1974 to allow communication over point-to-point network links for the company's DECnet Phase I network protocol suite. The protocol works over full- or half-duplex synchronous and asynchronous links and allows errors introduced in transmission to be detected and corrected. It was retained and extended for later versions of the DECnet protocol suite. DDCMP has been described as the "most popular and pervasive of the commercial byte-count data link protocols". See also DEX References Overview of the protocol Protocol specification (courtesy of DEC) Notes Network protocols
https://en.wikipedia.org/wiki/Euclidean%20plane%20isometry
In geometry, a Euclidean plane isometry is an isometry of the Euclidean plane, or more informally, a way of transforming the plane that preserves geometrical properties such as length. There are four types: translations, rotations, reflections, and glide reflections (see below under Classification). The set of Euclidean plane isometries forms a group under composition: the Euclidean group in two dimensions. It is generated by reflections in lines, and every element of the Euclidean group is the composite of at most three distinct reflections. Informal discussion Informally, a Euclidean plane isometry is any way of transforming the plane without "deforming" it. For example, suppose that the Euclidean plane is represented by a sheet of transparent plastic sitting on a desk. Examples of isometries include: Shifting the sheet one inch to the right. Rotating the sheet by ten degrees around some marked point (which remains motionless). Turning the sheet over to look at it from behind. Notice that if a picture is drawn on one side of the sheet, then after turning the sheet over, we see the mirror image of the picture. These are examples of translations, rotations, and reflections respectively. There is one further type of isometry, called a glide reflection (see below under classification of Euclidean plane isometries). However, folding, cutting, or melting the sheet are not considered isometries. Neither are less drastic alterations like bending, stretching, or twisting. Formal definition An isometry of the Euclidean plane is a distance-preserving transformation of the plane. That is, it is a map M such that d(M(p), M(q)) = d(p, q) for any points p and q in the plane, where d(p, q) is the usual Euclidean distance between p and q. Classification It can be shown that there are four types of Euclidean plane isometries. (Note: the notations for the types of isometries listed below are not completely standardised.) Reflections Reflections, or mirror isometries, denoted by Fc,v, where c is a point in the plane and v is a unit vector
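To illustrate the formal definition, the short sketch below constructs a rotation about a point (one of the four types) and checks numerically that it preserves the Euclidean distance d(p, q) between arbitrary points; the names and sample points are illustrative.

```python
import math

def distance(p, q):
    """Usual Euclidean distance d(p, q) in the plane."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def rotation_about(c, theta):
    """Return the isometry that rotates the plane by angle theta about point c."""
    def T(p):
        x, y = p[0] - c[0], p[1] - c[1]
        return (c[0] + x * math.cos(theta) - y * math.sin(theta),
                c[1] + x * math.sin(theta) + y * math.cos(theta))
    return T

T = rotation_about((1.0, 2.0), math.radians(10))
p, q = (3.5, -1.0), (-2.0, 4.0)
# Distance is preserved: d(T(p), T(q)) == d(p, q) up to rounding error.
assert abs(distance(T(p), T(q)) - distance(p, q)) < 1e-9
```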
https://en.wikipedia.org/wiki/William%20Nierenberg
William Aaron Nierenberg (February 13, 1919 – September 10, 2000) was an American physicist who worked on the Manhattan Project and was director of the Scripps Institution of Oceanography from 1965 through 1986. He was a co-founder of the George C. Marshall Institute in 1984. Background Nierenberg was born on February 13, 1919, at 213 E. 13th Street, on the Lower East Side of New York, the son of very poor Jewish immigrants from Austro-Hungary. He went to Townsend Harris High School and then the City College of New York (CCNY), where he won a scholarship to spend his junior year abroad in France at the University of Paris. In 1939, he became the first recipient of a William Lowell Putnam fellowship from the City College. Also in 1939, he participated in research at Columbia University, where he took a course in statistical mechanics from his future mentor, I. I. Rabi. He went on to graduate work at Columbia, but from 1941 spent the war years seconded to the Manhattan Project, working on isotope separation, before returning to Columbia to complete his PhD. Career In 1948, Nierenberg took up his first academic staff position, as Assistant Professor of Physics at the University of Michigan. From 1950 to 1965, he was Associate and then Professor of Physics at the University of California, Berkeley, where he had a very large and productive low energy nuclear physics laboratory, graduating 40 PhD’s during this time and publishing about 100 papers. He was responsible for the determination of more nuclear moments than any other single individual. This work was cited when he was elected to the National Academy of Sciences in 1971. During this period, in 1953, Nierenberg took a one-year leave to serve as the director of the Columbia University Hudson Laboratories, working on naval warfare problems. Later, he oversaw the design and construction of the “new” physics building at Berkeley. Much later (1960–1962) he took leave once again as Assistant Secretary General of the
https://en.wikipedia.org/wiki/Database%20audit
Database auditing involves observing a database to be aware of the actions of database users. Database administrators and consultants often set up auditing for security purposes, for example, to ensure that those without the permission to access information do not access it. References Further reading Gallegos, F. C. Gonzales, D. Manson, and S. Senft. Information Technology Control and Audit. Second Edition. Boca Raton, Florida: CRC Press LLC, 2000. Ron Ben-Natan, IBM Gold Consultant and Guardium CTO. Implementing Database Security and Auditing. Digital Press, 2005. KK Mookhey (2005). IT Audit. Vol. 8. Auditing MS SQL Server Security. IT Audit. Vol. 8 Murray Mazer. Database Auditing-Essential Business Practice for Today’s Risk Management May 19, 2005. Audit Types of auditing Computer access control
https://en.wikipedia.org/wiki/GunZ%3A%20The%20Duel
GunZ: The Duel (), or simply GunZ, was an online third-person shooting game, created by South Korean-based MAIET Entertainment. It was free-to-play, with a microtransaction business model for purchasing premium in-game items. The game allowed players to perform exaggerated, gravity-defying action moves, including wall running, stunning, tumbling, and blocking bullets with swords, in the style of action films and anime. Gameplay In Quest mode, players, in a group of up to 4 members, went through parts of a map for a certain number of stages, which were determined by the quest level. In each stage, players were required to kill 18 to 44 creatures, and the game ended when every member of the player team died or completed all of the stages. Quests could take place in the Prison, Mansion, or Dungeon map. Players could make the quests tougher and more profitable by using special quest items to increase the quest level that could be bought from the in-game store or obtained during a quest. Quest items in-game were stored in glowing chests that spawned where the monster that it came from died; certain items could have been dropped depending on the monster killed. Players ran through these to obtain an item randomly selected from the possibilities of that monster. The items obtained depended on the monster that the chest came from. By sacrificing certain items in combination, players could enter a boss quest. Boss items were obtained through pages and other boss quests, and pages were obtained through the in-game shop. The quest system was designed to reduce the amount of time needed to prepare for boss raids that are typical in many other online games. A significant and unique part of the gameplay was the movement system. Players could run on walls, perform flips off of them, and do quick mid-air dodges in any horizontal direction. Advanced movement and combat techniques were commonly referred to as "K-Style" or Korean style; a variety of techniques fell under this cat
https://en.wikipedia.org/wiki/Regulator%20%28automatic%20control%29
In automatic control, a regulator is a device which has the function of maintaining a designated characteristic. It performs the activity of managing or maintaining a range of values in a machine. The measurable property of a device is managed closely by specified conditions or an advance set value; or it can be a variable according to a predetermined arrangement scheme. It can be used generally to connote any set of various controls or devices for regulating or controlling items or objects. Examples are a voltage regulator (which can be a transformer whose voltage ratio of transformation can be adjusted, or an electronic circuit that produces a defined voltage), a pressure regulator, such as a diving regulator, which maintains its output at a fixed pressure lower than its input, and a fuel regulator (which controls the supply of fuel). Regulators can be designed to control anything from gases or fluids, to light or electricity. Speed can be regulated by electronic, mechanical, or electro-mechanical means. Such instances include; Electronic regulators as used in modern railway sets where the voltage is raised or lowered to control the speed of the engine Mechanical systems such as valves as used in fluid control systems. Purely mechanical pre-automotive systems included such designs as the Watt centrifugal governor whereas modern systems may have electronic fluid speed sensing components directing solenoids to set the valve to the desired rate. Complex electro-mechanical speed control systems used to maintain speeds in modern cars (cruise control) - often including hydraulic components, An aircraft engine's constant speed unit changes the propeller pitch to maintain engine speed. See also Controller (control theory) Governor (device) Process control Control engineering
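As a toy illustration of the regulating idea (not a model of any particular device named above), the sketch below runs a simple proportional controller that nudges a measured value toward a fixed set point on each step; all numbers are arbitrary example values.

```python
def regulate(measured, setpoint, gain=0.5):
    """Proportional regulator: return a corrective adjustment
    proportional to the error between set point and measurement."""
    error = setpoint - measured
    return gain * error

# Simulate a quantity drifting toward a set point of 12.0 (e.g. volts).
value = 9.0
for step in range(10):
    value += regulate(value, setpoint=12.0)
print(round(value, 3))   # approaches 12.0
```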
https://en.wikipedia.org/wiki/Surface%20plasmon%20resonance
Surface plasmon resonance (SPR) is a phenomenon that occurs where electrons in a thin metal sheet become excited by light that is directed to the sheet with a particular angle of incidence, and then travel parallel to the sheet. Assuming a constant light source wavelength and that the metal sheet is thin, the angle of incidence that triggers SPR is related to the refractive index of the material and even a small change in the refractive index will cause SPR to not be observed. This makes SPR a possible technique for detecting particular substances (analytes) and SPR biosensors have been developed to detect various important biomarkers. Explanation The surface plasmon polariton is a non-radiative electromagnetic surface wave that propagates in a direction parallel to the negative permittivity/dielectric material interface. Since the wave is on the boundary of the conductor and the external medium (air, water or vacuum for example), these oscillations are very sensitive to any change of this boundary, such as the adsorption of molecules to the conducting surface. To describe the existence and properties of surface plasmon polaritons, one can choose from various models (quantum theory, Drude model, etc.). The simplest way to approach the problem is to treat each material as a homogeneous continuum, described by a frequency-dependent relative permittivity between the external medium and the surface. This quantity, hereafter referred to as the materials' "dielectric function", is the complex permittivity. In order for the terms that describe the electronic surface plasmon to exist, the real part of the dielectric constant of the conductor must be negative and its magnitude must be greater than that of the dielectric. This condition is met in the infrared-visible wavelength region for air/metal and water/metal interfaces (where the real dielectric constant of a metal is negative and that of air or water is positive). LSPRs (localized surface plasmon resonances) are co
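The condition stated above can be checked numerically. The sketch below evaluates the standard surface plasmon polariton propagation constant k_sp = (ω/c)·sqrt(ε_m·ε_d/(ε_m + ε_d)) for an illustrative (made-up) metal permittivity and a water-like dielectric, and tests the sign and magnitude condition on the real part of the metal's dielectric function; the numbers are examples, not tabulated material data.

```python
import cmath

c = 2.998e8              # speed of light, m/s
wavelength = 633e-9      # illustrative wavelength, m
omega = 2 * cmath.pi * c / wavelength

eps_metal = -16.0 + 0.5j   # illustrative complex permittivity of the metal
eps_diel = 1.77            # roughly water-like dielectric constant (example)

# Condition from the text: Re(eps_metal) < 0 and |Re(eps_metal)| > eps_diel.
supports_spp = eps_metal.real < 0 and abs(eps_metal.real) > eps_diel
print("surface plasmon supported:", supports_spp)

# Surface plasmon polariton propagation constant along the interface.
k_sp = (omega / c) * cmath.sqrt(eps_metal * eps_diel / (eps_metal + eps_diel))
print("k_sp =", k_sp)
```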
https://en.wikipedia.org/wiki/Geordie%20lamp
The Geordie lamp was a safety lamp for use in flammable atmospheres, invented by George Stephenson in 1815 as a miner's lamp to prevent explosions due to firedamp in coal mines. Origin In 1815, Stephenson was the engine-wright at the Killingworth Colliery in Northumberland and had been experimenting for several years with candles close to firedamp emissions in the mine. In August, he ordered an oil lamp, which was delivered on 21 October and tested by him in the mine in the presence of explosive gases. He improved this over several weeks with the addition of capillary tubes at the base so that it gave more light, and tried new versions on 4 and 30 November. This was presented to the Literary and Philosophical Society of Newcastle upon Tyne (Lit & Phil) on 5 December 1815. Although controversy arose between Stephenson's design and the Davy lamp (invented by Humphry Davy in the same year), Stephenson's original design worked on significantly different principles from Davy's final design. If the lamp were sealed except for a restricted air ingress (and a suitably sized chimney) then the presence of dangerous amounts of firedamp in the incoming air would (by its combustion) reduce the oxygen concentration inside the lamp so much that the flame would be extinguished. Stephenson had convinced himself of the validity of this approach by his experiments with candles near lit blowers: as lit candles were placed upwind of the blower, the blower flame grew duller; with enough upwind candles, the blower flame went out. To guard against the possibility of a flame travelling back through the incoming gases (an explosive backblast), air ingress was by a number of small-bore tubes through which the ingress air flowed at a higher velocity than the velocity of a flame fueled by a mixture of firedamp (mostly methane) and air. These ingress tubes were physically separate from the exhaust chimney. The body of the lamp was lengthened to give the flame a greater convective draw, and
https://en.wikipedia.org/wiki/Harem%20%28zoology%29
A harem is an animal group consisting of one or two males, a number of females, and their offspring. The dominant male drives off other males and maintains the unity of the group. If present, the second male is subservient to the dominant male. As juvenile males grow, they leave the group and roam as solitary individuals or join bachelor herds. Females in the group may be inter-related. The dominant male mates with the females as they become sexually active and drives off competitors, until he is displaced by another male. In some species, incoming males that achieve dominant status may commit infanticide. For the male, the primary benefit of the harem system is obtaining exclusive access to a group of mature females. The females benefit from being in a stable social group and the associated benefits of grooming, predator avoidance and cooperative defense of territory. The disadvantages for the male are the energetic costs of gaining or defending a harem which may leave him with reduced reproductive success. The females are disadvantaged if their offspring are killed during dominance battles or by incoming males. Overview The term harem is used in zoology to distinguish social organization consisting of a group of females, their offspring, and one to two males. The single male, called the dominant male, may be accompanied by another young male, called a "follower" male. Females that closely associate with the dominant male are called "central females," while females who associate less frequently with the dominant male are called "peripheral females." Juvenile male offspring leave the harem and live either solitarily, or, with other young males in groups known as bachelor herds. Sexually mature female offspring may stay within their natal harem, or may join another harem. The females in a harem may be, but are not exclusively, genetically related. For instance, the females in hamadryas baboon harems are not usually genetically related because their harems are for
https://en.wikipedia.org/wiki/Vlasov%20equation
The Vlasov equation is a differential equation describing time evolution of the distribution function of plasma consisting of charged particles with long-range interaction, such as the Coulomb interaction. The equation was first suggested for the description of plasma by Anatoly Vlasov in 1938 and later discussed by him in detail in a monograph. Difficulties of the standard kinetic approach First, Vlasov argues that the standard kinetic approach based on the Boltzmann equation has difficulties when applied to a description of the plasma with long-range Coulomb interaction. He mentions the following problems arising when applying the kinetic theory based on pair collisions to plasma dynamics: Theory of pair collisions disagrees with the discovery by Rayleigh, Irving Langmuir and Lewi Tonks of natural vibrations in electron plasma. Theory of pair collisions is formally not applicable to Coulomb interaction due to the divergence of the kinetic terms. Theory of pair collisions cannot explain experiments by Harrison Merrill and Harold Webb on anomalous electron scattering in gaseous plasma. Vlasov suggests that these difficulties originate from the long-range character of Coulomb interaction. He starts with the collisionless Boltzmann equation (sometimes called the Vlasov equation, anachronistically in this context), in generalized coordinates, $\frac{\partial f}{\partial t} + \dot{\mathbf{r}} \cdot \frac{\partial f}{\partial \mathbf{r}} + \dot{\mathbf{p}} \cdot \frac{\partial f}{\partial \mathbf{p}} = 0$, explicitly a PDE, $\frac{\partial f}{\partial t} + \mathbf{v} \cdot \frac{\partial f}{\partial \mathbf{r}} + \mathbf{F} \cdot \frac{\partial f}{\partial \mathbf{p}} = 0$, and adapted it to the case of a plasma, leading to the systems of equations shown below. Here $f = f(\mathbf{r}, \mathbf{p}, t)$ is a general distribution function of particles with momentum $\mathbf{p}$ at coordinates $\mathbf{r}$ and given time $t$. Note that the term $\mathbf{F}(\mathbf{r}, t)$ is the force acting on the particle. The Vlasov–Maxwell system of equations (Gaussian units) Instead of collision-based kinetic description for interaction of charged particles in plasma, Vlasov utilizes a self-consistent collective field created by the charged plasma particles. Such a description uses distribution functions $f_e(\mathbf{r}, \mathbf{p}, t)$ and $f_i(\mathbf{r}, \mathbf{p}, t)$ for electrons and (positive) plasma ions. The distribution functi
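Because the excerpt above is cut off before the equations themselves, here is the standard textbook form of the Vlasov–Maxwell system in Gaussian units (a sketch of the usual statement, with $-e$ the electron charge and $Z_i e$ the ion charge; sign and normalization conventions vary between texts):

$$\frac{\partial f_e}{\partial t} + \mathbf{v}\cdot\frac{\partial f_e}{\partial \mathbf{r}} - e\left(\mathbf{E} + \frac{\mathbf{v}}{c}\times\mathbf{B}\right)\cdot\frac{\partial f_e}{\partial \mathbf{p}} = 0,$$
$$\frac{\partial f_i}{\partial t} + \mathbf{v}\cdot\frac{\partial f_i}{\partial \mathbf{r}} + Z_i e\left(\mathbf{E} + \frac{\mathbf{v}}{c}\times\mathbf{B}\right)\cdot\frac{\partial f_i}{\partial \mathbf{p}} = 0,$$

where the self-consistent fields $\mathbf{E}$ and $\mathbf{B}$ satisfy Maxwell's equations with charge density $\rho = e\int (Z_i f_i - f_e)\,d^3p$ and current density $\mathbf{j} = e\int (Z_i f_i - f_e)\,\mathbf{v}\,d^3p$, so the particles generate the very fields through which they interact.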
https://en.wikipedia.org/wiki/RepRap
RepRap (a contraction of replicating rapid prototyper) is a project to develop low-cost 3D printers that can print most of their own components. As an open design, all of the designs produced by the project are released under a free software license, the GNU General Public License. Due to the ability of these machines to make some of their own parts, authors envisioned the possibility of cheap RepRap units, enabling the manufacture of complex products without the need for extensive industrial infrastructure. They intended for the RepRap to demonstrate evolution in this process as well as for it to increase in number exponentially. A preliminary study claimed that using RepRaps to print common products results in economic savings. The RepRap project started in England in 2005 as a University of Bath initiative, but it is now made up of hundreds of collaborators worldwide. History RepRap was founded in 2005 by Dr Adrian Bowyer, a Senior Lecturer in mechanical engineering at the University of Bath in England. Funding was obtained from the Engineering and Physical Sciences Research Council. On 13 September 2006, the RepRap 0.2 prototype printed the first part identical to its own, which was then substituted for the original part created by a commercial 3D printer. On 9 February 2008, RepRap 1.0 "Darwin" made at least one instance of over half its rapid-prototyped parts. On 14 April 2008, RepRap made an end-user item: a clamp to hold an iPod to the dashboard of a Ford Fiesta car. By September that year, at least 100 copies had been produced in various countries. On 29 May 2008, Darwin achieved self replication by making a complete copy of all its rapid-prototyped parts (which represent 48% of all the parts, excluding fasteners). A couple hours later the "child" machine had made its first part: a timing-belt tensioner. In April 2009, electronic circuit boards were produced automatically with a RepRap, using an automated control system and a swappable head system ca
https://en.wikipedia.org/wiki/The%20Laws%20of%20Thought
An Investigation of the Laws of Thought on Which are Founded the Mathematical Theories of Logic and Probabilities by George Boole, published in 1854, is the second of Boole's two monographs on algebraic logic. Boole was a professor of mathematics at what was then Queen's College, Cork (now University College Cork), in Ireland. Review of the contents The historian of logic John Corcoran wrote an accessible introduction to Laws of Thought and a point by point comparison of Prior Analytics and Laws of Thought. According to Corcoran, Boole fully accepted and endorsed Aristotle's logic. Boole's goals were “to go under, over, and beyond” Aristotle's logic by: Providing it with mathematical foundations involving equations; Extending the class of problems it could treat from assessing validity to solving equations, and; Expanding the range of applications it could handle — e.g. from propositions having only two terms to those having arbitrarily many. More specifically, Boole agreed with what Aristotle said; Boole's ‘disagreements’, if they might be called that, concern what Aristotle did not say. First, in the realm of foundations, Boole reduced the four propositional forms of Aristotle's logic to formulas in the form of equations—by itself a revolutionary idea. Second, in the realm of logic's problems, Boole's addition of equation solving to logic—another revolutionary idea—involved Boole's doctrine that Aristotle's rules of inference (the “perfect syllogisms”) must be supplemented by rules for equation solving. Third, in the realm of applications, Boole's system could handle multi-term propositions and arguments whereas Aristotle could handle only two-termed subject-predicate propositions and arguments. For example, Aristotle's system could not deduce “No quadrangle that is a square is a rectangle that is a rhombus” from “No square that is a quadrangle is a rhombus that is a rectangle” or from “No rhombus that is a rectangle is a square that is a quadrangle”.
https://en.wikipedia.org/wiki/Factiva
Factiva is a business information and research tool owned by Dow Jones & Company. Factiva aggregates content from both licensed and free sources, providing organizations with search, alerting, dissemination, and other information management capabilities. Factiva products claim to provide access to more than 32,000 sources such as newspapers, journals, magazines, television and radio transcripts, photos, etc. These are sourced from nearly every country in the world in 28 languages, including more than 600 continuously updated newswires. History The company was founded as a joint-venture between Reuters and Dow Jones & Company in May 1999 under the Dow Jones Reuters Business Interactive name, and renamed Factiva six months later. Timothy M. Andrews, a longtime Dow Jones executive, was founding president and chief executive of the venture. Andrews was succeeded in January 2000 by Clare Hart, another longtime Dow Jones executive, who had been serving as Factiva's vice president and director of global sales. It developed modules with Microsoft, Oracle Corp., IBM and Yahoo!. Factiva has also partnered with EuroSpider, Comintelli, PeopleSoft, Media Map, Biz360, Choice Point, BTRadianz, AtHoc, and Reuters. As of March 2003, Factiva has also been included in Microsoft's Office 2003 program as one of the News research options within the Research Pane. In 2005, Factiva acquired two private companies: London-based 2B Reputation Intelligence Ltd. and Denver, Colorado-based taxonomy services and software firm, Synapse, the Knowledge Link Corporation. 2B was a technology and consulting business, specializing in media monitoring and reputation management. Synapse provided taxonomy management software, pre-built taxonomies and taxonomy-building and indexing services. This acquisition brought with it Synaptica, the taxonomy management software tool developed by Synapse, and Taxonomy Warehouse, a website developed by Synapse. Both Synaptica and Taxonomy Warehouse were developed b
https://en.wikipedia.org/wiki/Swedish%20grid
The Swedish grid (in Swedish Rikets Nät, RT 90) is a coordinate system that was previously used for government maps in Sweden. RT 90 is a slightly modified version of the RT 38 from 1938. RT 90 has been replaced with SWEREF 99 as the official Swedish spatial reference system. While the system could be used with negative numbers to represent all four "quarters" of the earth (NE, NW, SE, and SW hemispheres), the standard application of RT 90 is only useful for the northern half of the eastern hemisphere where numbers are positive. The coordinate system is based on metric measures originating from the crossing of the Prime Meridian and the Equator at 0,0. The Central Meridian used to be based on a meridian located at the old observatory in Stockholm, but today it is based on the Prime Meridian at Greenwich. The numbering system's first digit represents the largest distance, followed by what can be seen as fractional decimal digits (though without an explicit decimal point). Therefore, X 65 is located halfway between X 6 and X 7. The coordinate grid is specified using two numbers, named X and Y, X being the south–north axis and Y the west–east axis. Two seven-digit numbers are sufficient to specify a location with one-metre resolution. Example: X=6620000 Y=1317000 (X is the northing and Y is the easting) denotes a position 6620 km north of the Equator and 183 km west of the Central Meridian (1317 km - 1500 km = -183 km), which happens to be somewhere near the town center of Arvika. RT90 Map Projection Parameters References Three-dimensional systems: SWEREF 99 Lantmäteriet (Swedish Land Survey). Two-dimensional systems: RT 90 Lantmäteriet (Swedish Land Survey). Geography of Sweden Geographic coordinate systems
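The digit convention and the worked Arvika example above can be checked with a short script. This is an illustrative sketch only: it treats the coordinates as plain metric offsets from the Equator and from the Central Meridian's 1500 km value used in the example, and does not perform a real geodetic conversion to latitude and longitude, which would require the full RT 90 projection parameters.

```python
# Interpret an RT 90 grid coordinate pair as metric offsets.
# Assumes the 1,500 km offset on the Central Meridian used in the example above;
# a real conversion to latitude/longitude needs the full projection parameters.

CENTRAL_MERIDIAN_Y_M = 1_500_000  # metres; Y values west of the meridian stay positive

def interpret_rt90(x_northing: int, y_easting: int) -> tuple[float, float]:
    """Return (km north of the Equator, km east of the Central Meridian)."""
    north_km = x_northing / 1000
    east_km = (y_easting - CENTRAL_MERIDIAN_Y_M) / 1000
    return north_km, east_km

north, east = interpret_rt90(6_620_000, 1_317_000)
print(north, east)  # 6620.0 -183.0  -> 6620 km north, 183 km west (near Arvika)
```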
https://en.wikipedia.org/wiki/Diffractometer
A diffractometer is a measuring instrument for analyzing the structure of a material from the scattering pattern produced when a beam of radiation or particles (such as X-rays or neutrons) interacts with it. Principle A typical diffractometer consists of a source of radiation, a monochromator to choose the wavelength, slits to adjust the shape of the beam, a sample and a detector. In a more complicated apparatus, a goniometer can also be used for fine adjustment of the sample and the detector positions. When an area detector is used to monitor the diffracted radiation, a beamstop is usually needed to stop the intense primary beam that has not been diffracted by the sample, otherwise the detector might be damaged. The beamstop can be either completely impenetrable to the X-rays or semitransparent. A semitransparent beamstop makes it possible to determine how much radiation the sample absorbs, using the intensity observed through the beamstop. There are several types of X-ray diffractometer, depending on the research field (materials science, powder diffraction, life sciences, structural biology, etc.) and the experimental environment, whether a laboratory with its own X-ray source or a synchrotron. In the laboratory, diffractometers are usually "all in one" instruments, combining the diffractometer, the video microscope and the X-ray source. Many companies manufacture such "all in one" equipment for the X-ray home laboratory, including Rigaku, PANalytical, Thermo Fisher Scientific, Bruker, and many others. There are fewer diffractometer manufacturers for synchrotrons, owing to the small number of X-ray beamlines to equip and the solid expertise required of the manufacturer. For materials science, Huber diffractometers are widely known and, for structural biology, Arinax diffractometers are the reference. Nonetheless, because there are few manufacturers, a large number of synchrotron diffractometers are "homemade" instruments, realized by sync
https://en.wikipedia.org/wiki/Imus%20in%20the%20Morning
Imus in the Morning was a long-running radio show hosted by Don Imus. The show originated on June 2, 1968, on various stations in the Western United States and Cleveland, Ohio, before settling on WNBC radio in New York City in 1971. In October 1988, the show moved to WFAN when that station took over WNBC's dial position following an ownership change. It was later syndicated to 60 other stations across the country by Westwood One, a division of CBS Radio, airing weekdays from 5:30 to 10 am Eastern time. Beginning September 3, 1996, the 6 to 9 am portion was simulcast on the cable television network MSNBC. The show had been broadcast almost every weekday morning for 36 years on radio and 11 years on MSNBC until it was canceled on April 12, 2007, due to controversial comments made on the April 4, 2007, broadcast. Imus in the Morning program returned to the morning drive on New York radio station WABC on December 3, 2007. WABC is the flagship station of ABC Radio Networks (which itself was eventually subsumed into Westwood One in 2012), which syndicates the show nationally. From 2007 to August 2009, the show was simulcast on television nationwide on RFD-TV and rebroadcast each evening on RFD HD in high-definition. After Imus and RFD reached a mutual agreement to prematurely terminate the five-year deal, Fox Business Network began simulcasting the program on October 5, 2009, an arrangement which ended on May 29, 2015. In March 2018, Cumulus Media, in the middle of a bankruptcy process, told Imus they were going to stop paying him, and as a result, Imus ended the show. The final broadcast of Imus in the Morning was March 29, 2018. History Following a successful run as an on-air personality in Cleveland, Don Imus was hired by WNBC to host Imus in the Morning in late 1971. Imus is credited with introducing New York, and the larger Top 40 radio community, to the shock jock style of hosting. His initial run in New York ended in August 1977, when NBC management ordered a
https://en.wikipedia.org/wiki/International%20Chemical%20Identifier
The International Chemical Identifier (InChI or ) is a textual identifier for chemical substances, designed to provide a standard way to encode molecular information and to facilitate the search for such information in databases and on the web. Initially developed by the International Union of Pure and Applied Chemistry (IUPAC) and National Institute of Standards and Technology (NIST) from 2000 to 2005, the format and algorithms are non-proprietary. Since May 2009, it has been developed by the InChI Trust, a nonprofit charity from the United Kingdom which works to implement and promote the use of InChI. The identifiers describe chemical substances in terms of layers of information — the atoms and their bond connectivity, tautomeric information, isotope information, stereochemistry, and electronic charge information. Not all layers have to be provided; for instance, the tautomer layer can be omitted if that type of information is not relevant to the particular application. The InChI algorithm converts input structural information into a unique InChI identifier in a three-step process: normalization (to remove redundant information), canonicalization (to generate a unique number label for each atom), and serialization (to give a string of characters). InChIs differ from the widely used CAS registry numbers in three respects: firstly, they are freely usable and non-proprietary; secondly, they can be computed from structural information and do not have to be assigned by some organization; and thirdly, most of the information in an InChI is human readable (with practice). InChIs can thus be seen as akin to a general and extremely formalized version of IUPAC names. They can express more information than the simpler SMILES notation and, in contrast to SMILES strings, every structure has a unique InChI string, which is important in database applications. Information about the 3-dimensional coordinates of atoms is not represented in InChI; for this purpose a format such
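As an illustration of computing the identifier rather than having it assigned, the sketch below uses the open-source RDKit toolkit, which wraps the reference InChI library; the ethanol example and the specific calls reflect a typical RDKit installation and are not part of the InChI standard itself.

```python
# Generate a standard InChI for ethanol with RDKit, which wraps the
# reference InChI library (install with e.g. `pip install rdkit`).
from rdkit import Chem

mol = Chem.MolFromSmiles("CCO")   # ethanol, given here as a SMILES string
print(Chem.MolToInchi(mol))       # expected: InChI=1S/C2H6O/c1-2-3/h3H,2H2,1H3
```

The expected output illustrates the layered structure described above: a formula layer (C2H6O), a connectivity layer (c1-2-3), and a hydrogen layer (h3H,2H2,1H3).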
https://en.wikipedia.org/wiki/Business.com
Business.com is a digital media company and B2B web destination which offers various performance-based marketing and advertising products, including lead generation on a pay-per-lead and pay-per-click basis, directory listings, and display advertising. The site covers business industry news and trends for growth companies and the B2B community to stay up-to-date, and hosted more than 15,000 pieces of content as of November 2014. Business.com has operated as a subsidiary of the Purch Group since being acquired in 2016. After Purch sold its brands to Future, its remaining B2B assets were later reorganised into Business.com. History Business.com, Inc. was founded in 1999 by Jake Winebaum, previously chairman of the Walt Disney Internet Group; and Sky Dayton, founder of Earthlink, Boingo Wireless, and Helio, among others. Around that time, the Business.com domain name was purchased from Marc Ostrofsky by Winebaum's eCompanies Ventures for $7.5 million. In addition to investment by eCompanies, early funding in the amount of $61 million was provided in 2000 by Pearson PLC, Reed Business Information, McGraw Hill, and others. In its initial form, Business.com aimed to be the Internet's leading search engine for small business and corporate information. Business.com struggled through the Dot-com bubble years. The company retooled beginning in 2002 after massive layoffs and a new focus on developing a pay for performance ad network model. In April 2003, the company achieved profitability, and on November 8, 2004, the company secured an additional $10 million in venture capital funding from Benchmark Capital. On October 9, 2006, Business.com launched Work.com, a site with business how-to guides contributed by the small business community. Work.com was sold in March 2012 and is now owned by Salesforce.com. Then on July 26, 2007, after beating out Dow Jones & Company, the New York Times Company, IAC/InterActiveCorp, and News Corp, print and interactive marketing company R.H. Donnelley Corpor
https://en.wikipedia.org/wiki/Thermalisation
In physics, thermalisation (or thermalization) is the process of physical bodies reaching thermal equilibrium through mutual interaction. In general the natural tendency of a system is towards a state of equipartition of energy and uniform temperature that maximizes the system's entropy. Thermalisation, thermal equilibrium, and temperature are therefore important fundamental concepts within statistical physics, statistical mechanics, and thermodynamics; all of which are a basis for many other specific fields of scientific understanding and engineering application. Examples of thermalisation include: the achievement of equilibrium in a plasma. the process undergone by high-energy neutrons as they lose energy by collision with a moderator. the process of heat or phonon emission by charge carriers in a solar cell, after a photon that exceeds the semiconductor band gap energy is absorbed. The hypothesis, foundational to most introductory textbooks treating quantum statistical mechanics, assumes that systems go to thermal equilibrium (thermalisation). The process of thermalisation erases local memory of the initial conditions. The eigenstate thermalisation hypothesis is a hypothesis about when quantum states will undergo thermalisation and why. Not all quantum states undergo thermalisation. Some states have been discovered which do not (see below), and their reasons for not reaching thermal equilibrium are unclear . Theoretical description The process of equilibration can be described using the H-theorem or the relaxation theorem, see also entropy production. Systems resisting thermalisation Some such phenomena resisting the tendency to thermalize include (see, e.g., a quantum scar): Conventional quantum scars, which refer to eigenstates with enhanced probability density along unstable periodic orbits much higher than one would intuitively predict from classical mechanics. Perturbation-induced quantum scarring: despite the similarity in appearance to conven
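For reference, the H-theorem invoked above can be stated compactly in its standard Boltzmann form (a textbook statement, with $f(\mathbf{r},\mathbf{v},t)$ the single-particle distribution function of a dilute gas):

$$H(t) = \int f(\mathbf{r},\mathbf{v},t)\,\ln f(\mathbf{r},\mathbf{v},t)\,d^3r\,d^3v, \qquad \frac{dH}{dt} \le 0,$$

so that, under the Boltzmann equation's molecular-chaos assumption, $H$ decreases monotonically until $f$ reaches the Maxwell–Boltzmann equilibrium form; since $-H$ is proportional to the entropy (up to constants), this expresses the tendency toward equipartition and maximum entropy described above.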
https://en.wikipedia.org/wiki/Grain%20boundary
In materials science, a grain boundary is the interface between two grains, or crystallites, in a polycrystalline material. Grain boundaries are two-dimensional defects in the crystal structure, and tend to decrease the electrical and thermal conductivity of the material. Most grain boundaries are preferred sites for the onset of corrosion and for the precipitation of new phases from the solid. They are also important to many of the mechanisms of creep. On the other hand, grain boundaries disrupt the motion of dislocations through a material, so reducing crystallite size is a common way to improve mechanical strength, as described by the Hall–Petch relationship. High and low angle boundaries It is convenient to categorize grain boundaries according to the extent of misorientation between the two grains. Low-angle grain boundaries (LAGB) or subgrain boundaries are those with a misorientation less than about 15 degrees. Generally speaking they are composed of an array of dislocations and their properties and structure are a function of the misorientation. In contrast the properties of high-angle grain boundaries, whose misorientation is greater than about 15 degrees (the transition angle varies from 10–15 degrees depending on the material), are normally found to be independent of the misorientation. However, there are 'special boundaries' at particular orientations whose interfacial energies are markedly lower than those of general high-angle grain boundaries. The simplest boundary is that of a tilt boundary where the rotation axis is parallel to the boundary plane. This boundary can be conceived as forming from a single, contiguous crystallite or grain which is gradually bent by some external force. The energy associated with the elastic bending of the lattice can be reduced by inserting a dislocation, which is essentially a half-plane of atoms that act like a wedge, that creates a permanent misorientation between the two sides. As the grain is bent further, more
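Two standard relations make the points above concrete (textbook forms, with symbols not taken from this article): the Hall–Petch relationship for strengthening by grain refinement, and the geometric relation between the misorientation of a low-angle tilt boundary and the spacing of the dislocations that compose it:

$$\sigma_y = \sigma_0 + \frac{k_y}{\sqrt{d}}, \qquad \theta \approx \frac{b}{D},$$

where $\sigma_y$ is the yield stress, $\sigma_0$ and $k_y$ are material constants, $d$ is the average grain diameter, $\theta$ is the tilt misorientation, $b$ is the magnitude of the Burgers vector of the inserted dislocations, and $D$ is their spacing in the boundary. The second relation is why the structure and energy of low-angle boundaries vary with misorientation, as noted above, and it breaks down near the 10–15 degree transition where the dislocation cores begin to overlap.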
https://en.wikipedia.org/wiki/Nucleation
In thermodynamics, nucleation is the first step in the formation of either a new thermodynamic phase or structure via self-assembly or self-organization within a substance or mixture. Nucleation is typically defined to be the process that determines how long an observer has to wait before the new phase or self-organized structure appears. For example, if a volume of water is cooled (at atmospheric pressure) below 0°C, it will tend to freeze into ice, but volumes of water cooled only a few degrees below 0°C often stay completely free of ice for long periods (supercooling). At these conditions, nucleation of ice is either slow or does not occur at all. However, at lower temperatures nucleation is fast, and ice crystals appear after little or no delay. Nucleation is a common mechanism which generates first-order phase transitions, and it is the start of the process of forming a new thermodynamic phase. In contrast, new phases at continuous phase transitions start to form immediately. Nucleation is often very sensitive to impurities in the system. These impurities may be too small to be seen by the naked eye, but still can control the rate of nucleation. Because of this, it is often important to distinguish between heterogeneous nucleation and homogeneous nucleation. Heterogeneous nucleation occurs at nucleation sites on surfaces in the system. Homogeneous nucleation occurs away from a surface. Characteristics Nucleation is usually a stochastic (random) process, so even in two identical systems nucleation will occur at different times. A common mechanism is illustrated in the animation to the right. This shows nucleation of a new phase (shown in red) in an existing phase (white). In the existing phase microscopic fluctuations of the red phase appear and decay continuously, until an unusually large fluctuation of the new red phase is so large it is more favourable for it to grow than to shrink back to nothing. This nucleus of the red phase then grows and converts th
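The competition between growth and shrinkage of fluctuations described above is quantified by classical nucleation theory. In its standard form for homogeneous nucleation of a spherical nucleus of radius $r$ (a textbook sketch, not specific to ice or any system in this article), the free-energy cost is

$$\Delta G(r) = \frac{4}{3}\pi r^3\,\Delta G_v + 4\pi r^2\,\gamma,$$

where $\Delta G_v<0$ is the free-energy change per unit volume of the new phase and $\gamma>0$ is the interfacial energy. Maximizing gives the critical radius $r^{*} = -2\gamma/\Delta G_v$ and the barrier $\Delta G^{*} = 16\pi\gamma^{3}/\bigl(3\,\Delta G_v^{2}\bigr)$: fluctuations larger than $r^{*}$ lower their free energy by growing, which is the "unusually large fluctuation" of the description above, while smaller ones tend to shrink back.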
https://en.wikipedia.org/wiki/Microtubule-associated%20protein
In cell biology, microtubule-associated proteins (MAPs) are proteins that interact with the microtubules of the cellular cytoskeleton. MAPs are integral to the stability of the cell and its internal structures and the transport of components within the cell. Function MAPs bind to the tubulin subunits that make up microtubules to regulate their stability. A large variety of MAPs have been identified in many different cell types, and they have been found to carry out a wide range of functions. These include both stabilizing and destabilizing microtubules, guiding microtubules towards specific cellular locations, cross-linking microtubules and mediating the interactions of microtubules with other proteins in the cell. Within the cell, MAPs bind directly to the tubulin dimers of microtubules. This binding can occur with either polymerized or depolymerized tubulin, and in most cases leads to the stabilization of microtubule structure, further encouraging polymerization. Usually, it is the C-terminal domain of the MAP that interacts with tubulin, while the N-terminal domain can bind with cellular vesicles, intermediate filaments or other microtubules. MAP-microtubule binding is regulated through MAP phosphorylation. This is accomplished through the function of the microtubule-affinity-regulating-kinase (MARK) protein. Phosphorylation of the MAP by the MARK causes the MAP to detach from any bound microtubules. This detachment is usually associated with a destabilization of the microtubule causing it to fall apart. In this way the stabilization of microtubules by MAPs is regulated within the cell through phosphorylation. Types MAPs have been divided into several different categories and sub-categories. There are "structural" MAPs which bind along the microtubules and "+TIP" MAPs which bind to the growing end of the microtubules. Structural MAPs have been divided into MAP1, MAP2, MAP4, and Tau families. +TIP MAPs are motor proteins such as kinesin, dyneins, and other MAPs
https://en.wikipedia.org/wiki/Information%20security%20audit
An information security audit is an audit of the level of information security in an organization. It is an independent review and examination of system records, activities, and related documents. These audits are intended to improve the level of information security, avoid improper information security designs, and optimize the efficiency of the security safeguards and security processes. Within the broad scope of auditing information security there are multiple types of audits, multiple objectives for different audits, etc. Most commonly the controls being audited can be categorized as technical, physical and administrative. Auditing information security covers topics from auditing the physical security of data centers to auditing the logical security of databases, and highlights key components to look for and different methods for auditing these areas. When centered on the Information technology (IT) aspects of information security, it can be seen as a part of an information technology audit. It is often then referred to as an information technology security audit or a computer security audit. However, information security encompasses much more than IT. The audit process Step 1: Preliminary audit assessment The auditor is responsible for assessing the current technological maturity level of a company during the first stage of the audit. This stage is used to assess the current status of the company and helps identify the required time, cost and scope of an audit. First, you need to identify the minimum security requirements: Security policy and standards Organizational and Personal security Communication, Operation and Asset management Physical and environmental security Access control and Compliance IT systems development and maintenance IT security incident management Disaster recovery and business continuity management Risk management Step 2: Planning & preparation The auditor should plan a company's audit based on the information found in the p
https://en.wikipedia.org/wiki/Crystal%20twinning
Crystal twinning occurs when two or more adjacent crystals of the same mineral are oriented so that they share some of the same crystal lattice points in a symmetrical manner. The result is an intergrowth of two separate crystals that are tightly bonded to each other. The surface along which the lattice points are shared in twinned crystals is called a composition surface or twin plane. Crystallographers classify twinned crystals by a number of twin laws. These twin laws are specific to the crystal structure. The type of twinning can be a diagnostic tool in mineral identification. Deformation twinning, in which twinning develops in a crystal in response to a shear stress, is an important mechanism for permanent shape changes in a crystal. Definition Twinning is a form of symmetrical intergrowth between two or more adjacent crystals of the same mineral. It differs from the ordinary random intergrowth of mineral grains in a mineral deposit, because the relative orientations of the two crystal segments show a fixed relationship that is characteristic of the mineral structure. The relationship is defined by a symmetry operation called a twin operation. The twin operation is not one of the normal symmetry operations of the untwinned crystal structure. For example, the twin operation may be reflection across a plane that is not a symmetry plane of the single crystal. On the microscopic level, the twin boundary is characterized by a set of atomic positions in the crystal lattice that are shared between the two orientations. These shared lattice points give the junction between the crystal segments much greater strength than that between randomly oriented grains, so that the twinned crystals do not easily break apart. Twin laws Twin laws are symmetry operations that define the orientation between twin crystal segments. These are as characteristic of the mineral as are its crystal face angles. For example, crystals of staurolite show twinning at angles of almost prec
https://en.wikipedia.org/wiki/Damper%20%28flow%29
A damper is a valve or plate that stops or regulates the flow of air inside a duct, chimney, VAV box, air handler, or other air-handling equipment. A damper may be used to cut off central air conditioning (heating or cooling) to an unused room, or to regulate it for room-by-room temperature and climate control -- for example in the case of Volume Control Dampers. Its operation can be manual or automatic. Manual dampers are turned by a handle on the outside of a duct. Automatic dampers are used to regulate airflow constantly and are operated by electric or pneumatic motors, in turn controlled by a thermostat or building automation system. Automatic or motorized dampers may also be controlled by a solenoid, and the degree of air-flow calibrated, perhaps according to signals from the thermostat going to the actuator of the damper in order to modulate the flow of air-conditioned air in order to effect climate control. In a chimney flue, a damper closes off the flue to keep the weather (and birds and other animals) out and warm or cool air in. This is usually done in the summer, but also sometimes in the winter between uses. In some cases, the damper may also be partly closed to help control the rate of combustion. The damper may be accessible only by reaching up into the fireplace by hand or with a woodpoker, or sometimes by a lever or knob that sticks down or out. On a wood-burning stove or similar device, it is usually a handle on the vent duct as in an air conditioning system. Forgetting to open a damper before beginning a fire can cause serious smoke damage to the interior of a home, if not a house fire. Automated zone dampers A zone damper (also known as a Volume Control Damper or VCD) is a specific type of damper used to control the flow of air in an HVAC heating or cooling system. In order to improve efficiency and occupant comfort, HVAC systems are commonly divided up into multiple zones. For example, in a house, the main floor may be served by one he
https://en.wikipedia.org/wiki/Turbomachinery
Turbomachinery, in mechanical engineering, describes machines that transfer energy between a rotor and a fluid, including both turbines and compressors. While a turbine transfers energy from a fluid to a rotor, a compressor transfers energy from a rotor to a fluid. These two types of machines are governed by the same basic relationships including Newton's second law of motion and Euler's pump and turbine equation for compressible fluids. Centrifugal pumps are also turbomachines that transfer energy from a rotor to a fluid, usually a liquid, while turbines and compressors usually work with a gas. History The first turbomachines could be identified as water wheels, which appeared between the 3rd and 1st centuries BCE in the Mediterranean region. These were used throughout the medieval period and began the first Industrial Revolution. When steam power, the first power source driven by the combustion of a fuel rather than by renewable natural sources, came into use, it took the form of reciprocating engines. Primitive turbines and conceptual designs for them, such as the smoke jack, appeared intermittently but the temperatures and pressures required for a practically efficient turbine exceeded the manufacturing technology of the time. The first patent for a gas turbine was filed in 1791 by John Barber. Practical hydroelectric water turbines and steam turbines did not appear until the 1880s. Gas turbines appeared in the 1930s. The first impulse type turbine was created by Carl Gustaf de Laval in 1883. This was closely followed by the first practical reaction type turbine in 1884, built by Charles Parsons. Parsons’ first design was a multi-stage axial-flow unit, which George Westinghouse acquired and began manufacturing in 1895, while General Electric acquired de Laval's designs in 1897. Since then, development has skyrocketed from Parsons’ early design, producing 0.746 kW, to modern nuclear steam turbines producing upwards of 1500 MW. Furthermore, steam turbines ac
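The Euler pump and turbine equation mentioned above can be written compactly. In one common textbook convention, the specific work exchanged between the rotor and the fluid is

$$\frac{W}{\dot m} = u_2\,c_{\theta 2} - u_1\,c_{\theta 1},$$

where $u_1$ and $u_2$ are the blade speeds at inlet and outlet and $c_{\theta 1}$, $c_{\theta 2}$ are the corresponding tangential (swirl) components of the absolute fluid velocity; a positive value corresponds to work done on the fluid (pump or compressor) and a negative value to work extracted from it (turbine). The symbols here follow general turbomachinery usage rather than anything defined in this article.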
https://en.wikipedia.org/wiki/Seed%20ball
Seed balls, also known as earth balls or , consist of seeds rolled within a ball of clay and other matter to assist germination. They are then thrown into vacant lots and over fences as a form of 'guerilla gardening'. Matter such as humus and compost is often placed around the seeds to provide microbial inoculants. Cotton-fibres or liquefied paper are sometimes added to further protect the clay ball in particularly harsh habitats. An ancient technique, it was re-discovered by Japanese natural farming pioneer Masanobu Fukuoka. Development of technique The technique for creating seed balls was rediscovered by Japanese natural farming pioneer Masanobu Fukuoka. The technique was also used, for instance, in ancient Egypt to repair farms after the annual spring flooding of the Nile. Masanobu Fukuoka developed his technique during the period of the Second World War, while working in a Japanese government lab as a plant scientist on the mountainous island of Shikoku. He wanted to find a technique that would increase food production without taking away from the land already allocated for traditional rice production which thrived in the volcanic rich soils of Japan. Construction To make a seed ball, generally about five measures of red clay by volume are combined with one measure of seeds. The balls are formed between 10 mm and 80 mm (about 0.4 to 3 inches) in diameter. After the seed balls have been formed, they must dry for 24–48 hours before use. Seed bombing Seed bombing is the practice of introducing vegetation to land by throwing or dropping seed balls. It is used in modern aerial seeding as a way to deter seed predation. It has also been popularized by green movements such as guerrilla gardening as a way to introduce new plants to an environment. Guerrilla gardening The term "seed green-aide" was first used by Liz Christy in 1973 when she started the Green Guerillas. The first seed green-aides were made from condoms filled with tomato seeds, and fertilizer. They were t
https://en.wikipedia.org/wiki/Online%20identity
Internet identity (IID), also online identity, online personality or internet persona, is a social identity that an Internet user establishes in online communities and websites. It may also be an actively constructed presentation of oneself. Although some people choose to use their real names online, some Internet users prefer to be anonymous, identifying themselves by means of pseudonyms, which reveal varying amounts of personally identifiable information. An online identity may even be determined by a user's relationship to a certain social group they are a part of online. Some can be deceptive about their identity. In some online contexts, including Internet forums, online chats, and massively multiplayer online role-playing games (MMORPGs), users can represent themselves visually by choosing an avatar, an icon-sized graphic image. Avatars are one way users express their online identity. Through interaction with other users, an established online identity acquires a reputation, which enables other users to decide whether the identity is worthy of trust. Online identities are associated with users through authentication, which typically requires registration and logging in. Some websites also use the user's IP address or tracking cookies to identify users. The concept of the self, and how this is influenced by emerging technologies, are a subject of research in fields such as education, psychology, and sociology. The online disinhibition effect is a notable example, referring to a concept of unwise and uninhibited behavior on the Internet, arising as a result of anonymity and audience gratification. Online personal identity Triangular relationships of personal online identity There are three key interaction conditions in the identity processes: Fluid Nature of Online and Offline, overlapping social networks, and expectations of accuracy. Social actors accomplish the ideal-authentic balance through self-triangulation, presenting a coherent image in multiple ar
https://en.wikipedia.org/wiki/Reconstruction%20from%20zero%20crossings
The problem of reconstruction from zero crossings can be stated as: given the zero crossings of a continuous signal, is it possible to reconstruct the signal (to within a constant factor)? Worded differently, what are the conditions under which a signal can be reconstructed from its zero crossings? This problem has two parts. The first is proving that there is a unique reconstruction of the signal from the zero crossings, and the second is how to actually go about reconstructing the signal. Though there have been quite a few attempts, no conclusive solution has yet been found. Ben Logan from Bell Labs wrote an article in 1977 in the Bell System Technical Journal giving some criteria under which unique reconstruction is possible. Though this has been a major step towards the solution, many people are dissatisfied with the type of condition that results from his article. According to Logan, a signal is uniquely reconstructible from its zero crossings if: The signal x(t) and its Hilbert transform x̂(t) have no zeros in common with each other. The frequency-domain representation of the signal is at most 1 octave long; in other words, it is bandpass-limited between some frequencies B and 2B. Further reading External links Signal processing
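As a small illustration of the data the problem starts from (not a reconstruction algorithm, which as noted above remains unresolved in general), the sketch below builds a roughly one-octave band-limited test signal and locates its zero crossings; the sample rate, band edges, and signal are arbitrary assumptions for the example.

```python
# Locate zero crossings of a synthetic one-octave band-pass signal.
# Signal parameters are illustrative; this does NOT reconstruct the signal.
import numpy as np

fs = 1000.0                        # sample rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)    # one second of samples
B = 50.0                           # lower band edge, so the band is [B, 2B]
x = np.sin(2 * np.pi * 1.2 * B * t) + 0.5 * np.sin(2 * np.pi * 1.9 * B * t + 0.7)

# A zero crossing lies between consecutive samples whose signs differ.
signs = np.sign(x)
crossing_indices = np.where(np.diff(signs) != 0)[0]
crossing_times = t[crossing_indices]

print(len(crossing_times))     # number of sign changes found in one second
print(crossing_times[:5])      # first few crossing locations (in seconds)
```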
https://en.wikipedia.org/wiki/Simple%20precedence%20grammar
A simple precedence grammar is a context-free formal grammar that can be parsed with a simple precedence parser. The concept was first created in 1964 by Claude Pair, and was later rediscovered, from ideas due to Robert Floyd, by Niklaus Wirth and Helmut Weber, who published it in a paper entitled EULER: a generalization of ALGOL, and its formal definition, in the Communications of the ACM in 1966. Formal definition G = (N, Σ, P, S) is a simple precedence grammar if all the production rules in P comply with the following constraints: There are no erasing rules (ε-productions) There are no useless rules (unreachable symbols or unproductive rules) For each pair of symbols X, Y (X, Y ∈ N ∪ Σ) there is only one Wirth–Weber precedence relation. G is uniquely inversible Examples precedence table Notes References Alfred V. Aho, Jeffrey D. Ullman (1977). Principles of Compiler Design. 1st Edition. Addison–Wesley. William A. Barrett, John D. Couch (1979). Compiler construction: Theory and Practice. Science Research Associate. Jean-Paul Tremblay, P. G. Sorenson (1985). The Theory and Practice of Compiler Writing. McGraw–Hill. External links "Simple Precedence Relations" at Clemson University Formal languages
https://en.wikipedia.org/wiki/Mac%20OS%20X%20Leopard
Mac OS X Leopard (version 10.5) is the sixth major release of macOS, Apple's desktop and server operating system for Macintosh computers. Leopard was released on October 26, 2007 as the successor of Mac OS X Tiger, and is available in two editions: a desktop version suitable for personal computers, and a server version, Mac OS X Server. It retailed for $129 for the desktop version and $499 for Server. Leopard was superseded by Mac OS X Snow Leopard (version 10.6) in 2009. Mac OS X Leopard is the last version of macOS that supports the PowerPC architecture as its successor, Mac OS X Snow Leopard, functions solely on Intel based Macs. According to Apple, Leopard contains over 300 changes and enhancements compared to its predecessor, Mac OS X Tiger, covering core operating system components as well as included applications and developer tools. Leopard introduces a significantly revised desktop, with a redesigned Dock, Stacks, a semitransparent menu bar, and an updated Finder that incorporates the Cover Flow visual navigation interface first seen in iTunes. Other notable features include support for writing 64-bit graphical user interface applications, an automated backup utility called Time Machine, support for Spotlight searches across multiple machines, and the inclusion of Front Row and Photo Booth, which were previously included with only some Mac models. Apple missed Leopard's release time frame as originally announced by Apple's CEO Steve Jobs. When first discussed in June 2005, Jobs had stated that Apple intended to release Leopard at the end of 2006 or early 2007. A year later, this was amended to Spring 2007; however, on April 12, 2007, Apple issued a statement that its release would be delayed until October 2007 because of the development of the iPhone. Mac OS X Leopard is the first version of Mac OS X to run on the MacBook Air. New and changed features End-user features Apple advertised that Mac OS X Leopard has 300+ new features, including: A new and
https://en.wikipedia.org/wiki/Hotspot%20gateway
A hotspot gateway is a device that provides authentication, authorization and accounting for a wireless network. This can keep malicious users off a private network even if they are able to break the encryption. A wireless hotspot gateway helps solve guest user connectivity problems by offering instant Internet access without the need for configuration changes to the client computer or any resident client-side software. This means that even if client configuration such as the network IP address (including gateway IP and DNS) or HTTP proxy settings differs from that of the provided network, the client can still get access to the network instantly with their existing network configuration. Some prominent hotspot gateway brands are WiJungle, Nomadix, and Wavertech. See also Hotspot (Wi-Fi) References Wireless access points
https://en.wikipedia.org/wiki/Computer-aided%20production%20engineering
Computer-aided production engineering (CAPE) is a relatively new and significant branch of engineering. Global manufacturing has changed the environment in which goods are produced. Meanwhile, the rapid development of electronics and communication technologies has required design and manufacturing to keep pace. Description of CAPE CAPE is seen as a new type of computer-aided engineering environment which will improve the productivity of manufacturing/industrial engineers. This environment would be used by engineers to design and implement future manufacturing systems and subsystems. Work is currently underway at the United States National Institute of Standards and Technology (NIST) on CAPE systems. The NIST project is aimed at advancing the development of software environments and tools for the design and engineering of manufacturing systems. CAPE and the Future of Manufacturing The future of manufacturing will be determined by the efficiency with which it can incorporate new technologies. The current process in engineering manufacturing systems is often ad hoc, with computerized tools being used on a limited basis. Given the costs and resources involved in the construction and operation of manufacturing systems, the engineering process must be made more efficient. New computing environments for engineering manufacturing systems could help achieve that objective. Why is CAPE important? In much the same way that product designers need computer-aided design systems, manufacturing and industrial engineers need sophisticated computing capabilities to solve complex problems and manage the vast data associated with the design of a manufacturing system. In order to solve these complex problems and manage design data, computerized tools must be used in the application of scientific and engineering methods to the problem of the design and implementation of manufacturing systems. Engineers must address the entire factory as a system and the interactions of that sy
https://en.wikipedia.org/wiki/QuickTransit
QuickTransit was a cross-platform virtualization program developed by Transitive Corporation. It allowed software compiled for one specific processor and operating system combination to be executed on a different processor and/or operating system architecture without source code or binary changes. QuickTransit was an extension of the Dynamite technology developed by the University of Manchester Parallel Architectures and Languages research group, which now forms part of the university's Advanced Processor Technologies research group. Silicon Graphics announced QuickTransit's first availability in October 2004 on its Prism visualization systems. These systems, based on Itanium 2 processors and the Linux operating system, used QuickTransit to transparently run application binaries compiled for previous SGI systems based on the MIPS processor and IRIX operating system. This technology was also licensed by Apple Computer in its transition from PowerPC to Intel (x86) CPUs, starting in 2006. Apple marketed this technology as "Rosetta". In August 2006, IBM announced a partnership with Transitive to run Linux/x86 binaries on its Power ISA-based Power Systems servers. IBM named this software System p AVE during its beta phase, but it was renamed to PowerVM Lx86 upon release. In November 2006, Transitive launched QuickTransit for Solaris/SPARC-to-Linux/x86-64, which enabled unmodified Solaris applications compiled for SPARC systems to run on 64-bit x86-based systems running Linux. This was followed in October 2007 by QuickTransit for Solaris/SPARC-to-Linux/Itanium, which enabled Solaris/SPARC applications to run on Itanium systems running Linux. A third product, QuickTransit for Solaris/SPARC-to-Solaris/x86-64, was released in December 2007, enabling Solaris/SPARC applications to run on 64-bit x86 systems running Solaris. IBM acquired Transitive in June 2009 and merged the company into its Power Systems division. IBM announced in September 2011 it would discontinue mar
https://en.wikipedia.org/wiki/Flavelle%20Medal
The Flavelle Medal is an award of the Royal Society of Canada "for an outstanding contribution to biological science during the preceding ten years or for significant additions to a previous outstanding contribution to biological science". It is named in honour of Joseph Wesley Flavelle and is awarded bi-annually. The award consists of a gold plated silver medal. Recipients Source: Royal Society of Canada 2022 - Graham Bell, FRSC 2020 - Marla Sokolowski, FRSC 2018 - Frank Plummer, FRSC 2016 - 2014 - Spencer Barrett, FRSC 2012 - Siegfried Hekimi, FRSC 2010 - Kenneth B. Storey, FRSC 2008 - John Smol, FRSC 2006 - Brett Finlay, FRSC 2004 - Brian D. Sykes, FRSC 2002 - Lewis E. Kay 2000 - David R. Jones, FRSC 1998 - Anthony Pawson, FRSC 1996 - Ian C. P. Smith, FRSC 1994 - Robert J. Cedergren, MSRC 1992 - Michael Smith, FRSC 1990 - Peter W. Hochachka, FRSC 1988 - Robert Haynes, FRSC 1986 - Neil Towers, FRSC 1984 - Robert G.E. Murray, FRSC 1982 - Clayton Oscar Person, FRSC 1980 - Gordon Dixon, FRSC 1978 - Louis Siminovitch, FRSC 1976 - Michael Shaw, FRSC 1974 - Juda Hirsh Quastel, FRSC 1972 - Douglas Harold Copp, FRSC 1970 - William Edwin Ricker, FRSC 1968 - Jacques Genest, MSRC 1966 - Erich Baer, FRSC 1965 - William Stewart Hoar, FRSC 1964 - Gleb Krotkov, FRSC 1963 - Robert James Rossiter, FRSC 1962 - Frederick Ernest Joseph Fry, FRSC 1961 - Charles Philippe Leblond, FRSC 1960 - Edmund Murton Walker, FRSC 1959 - Murray L. Barr, FRSC 1958 - Allan Grant Lochhead, FRSC 1957 - Thomas Wright M. Cameron, FRSC 1956 - George Lyman Duff, FRSC 1955 - Charles Samuel Hanes, FRSC 1954 - David Alymer Scott, FRSC 1953 - Everitt George Dunne Murray, FRSC 1952 - Archibald G. Huntsman, FRSC 1951 - Wilder Penfield, FRSC 1950 - Charles Best, FRSC 1949 - W. P. Thompson, FRSC 1948 - Margaret Newton, FRSC 1947 - Guilford Bevil Reed, FRSC 1946 - William Rowan, FRSC 1945 - Robert Boyd Thomson, FRSC 1944 - Velyien Ewart Henderson, FRSC 1943 - B.
https://en.wikipedia.org/wiki/CCNP
A Cisco Certified Network Professional (CCNP) is a person in the IT industry who has achieved the professional level of Cisco Career Certification. Professional certifications Prior to February 2020 there were approximately eight professional-level certification programs within Cisco Career Certifications. CCDP CCNP Cloud CCNP Collaboration CCNP Data Center CCNP Routing and Switching CCNP Security CCNP Service Provider CCNP Wireless Cisco has announced that as of February 2020, the above format has been retired and replaced with the following: CCNP Enterprise (integrating CCNP Routing and Switching, CCDP and CCNP Wireless) CCNP Data Center (integrating CCNP Cloud) CCNP Security CCNP Service Provider CCNP Collaboration Cisco Certified DevNet Professional Migration guides to the newer certification exams are available from Cisco at its CCNP Migration Tools page. Required exams Starting February 2020, no entry-level certification is required to attempt the CCNP exams. Previously, candidates had to pass the relevant entry-level certification before attempting a professional-level exam. The associate-level certification programs were: CCDA, CCNA Cloud, CCNA Collaboration, CCNA Cyber Ops, CCNA Data Center, CCNA Industrial, CCNA Routing and Switching, CCNA Security, CCNA Service Provider and CCNA Wireless. Each area of expertise requires passing the relevant exams, demonstrating a professional-level understanding of and capability in networking. For example, the CCNP Routing and Switching consists of three exams: Implementing IP Routing (ROUTE), Implementing IP Switched Networks (SWITCH) and Troubleshooting and Maintaining IP Networks (TSHOOT). Validity A CCNP certification is valid for three years. Renewal requires certification holders to register for and pass the same or higher-level Cisco recertification exam(s) every three years. Related certifications Associate-level certification: CCNA Expert-level Certification: CCIE
https://en.wikipedia.org/wiki/CCNA
CCNA (Cisco Certified Network Associate) is an information technology (IT) certification from Cisco Systems. CCNA certification is an associate-level Cisco Career certification. The Cisco exams have changed several times in response to changing IT trends. In 2020, Cisco announced an update to its certification program that "Consolidated and updated associate-level training and certification." Cisco consolidated the previously separate types of Cisco Certified Network Associate into a general CCNA certification. The content of the exams is proprietary. Cisco and its learning partners offer a variety of different training methods, including books published by Cisco Press, and online and classroom courses available under the title "Interconnecting Cisco Network Devices". Exam To achieve a CCNA certification, candidates must earn a passing score on Cisco exam No. 200-301. After the exam, candidates receive a score report along with a score breakdown by exam section and the passing score for the given exam. The exam tests a candidate's knowledge and skills required to install, operate, and troubleshoot a small to medium-size enterprise branch network. The exam covers a broad range of fundamentals, including network fundamentals, network access, IP connectivity, IP services, security fundamentals, automation, and programmability. Prerequisites There are no prerequisites to take the CCNA certification exam. The CCT (Cisco Certified Technician) certification also serves as a starting point in networking. Validity A CCNA certification is valid for three years. Renewal requires certification holders to register for and pass the same or higher level Cisco re-certification exam(s) every three years. See also Cisco Networking Academy Cisco certifications DevNet Cyber Ops CCNP CCIE Certification Information technology qualifications Computer sec
https://en.wikipedia.org/wiki/Logical%20security
Logical security consists of software safeguards for an organization's systems, including user identification and password access, authentication, access rights and authority levels. These measures are intended to ensure that only authorized users are able to perform actions or access information in a network or a workstation. It is a subset of computer security. Elements Elements of logical security are: User IDs, also known as logins, user names, logons or accounts, are unique personal identifiers for agents of a computer program or network that is accessible by more than one agent. These identifiers are based on short strings of alphanumeric characters, and are either assigned or chosen by the users. Authentication is the process used by a computer program, computer, or network to attempt to confirm the identity of a user. Blind credentials (anonymous users) have no identity, but are allowed to enter the system. The confirmation of identities is essential to the concept of access control, which gives access to the authorized and excludes the unauthorized. Biometric authentication is the measuring of a user’s physiological or behavioral features to attempt to confirm his/her identity. Physiological aspects that are used include fingerprints, eye retinas and irises, voice patterns, facial patterns, and hand measurements. Behavioral aspects that are used include signature recognition, gait recognition, speaker recognition and typing pattern recognition. When a user registers with the system which he/she will attempt to access later, one or more of his/her physiological characteristics are obtained and processed by a numerical algorithm. This number is then entered into a database, and the features of a user later attempting access must match the stored features to within a certain error rate. Token authentication Security tokens are small devices that authorized users of computer systems or networks carry to assist in identifying who is logging into a computer
https://en.wikipedia.org/wiki/TinyURL
TinyURL is a URL shortening web service, which provides short aliases for redirection of long URLs. Kevin Gilbertson, a web developer, launched the service in January 2002 as a way to post links in newsgroup postings which frequently had long, cumbersome addresses. TinyURL was the first notable URL shortening service and is one of the oldest still currently operating. Service The TinyURL homepage includes a form which is used to submit a long URL for shortening. For each URL entered, the server adds a new alias in its hashed database and returns a short URL. According to the website, the shortened URLs will never expire. TinyURL offers an API which allows applications to automatically create short URLs. Short URL aliases are seen as useful because they are easier to write down, remember or distribute. They also fit in text boxes with a limited number of characters allowed. Some examples of limited text boxes are IRC channel topics, email signatures, microblogs (such as Twitter, which notably limits all posts to first 140 and later 280 characters), certain printed newspapers (such as .net magazine or even Nature), and email clients that impose line breaks on messages at a certain length. Starting in 2008, TinyURL allowed users to create custom, more meaningful aliases. This means that a user can create descriptive URLs rather than a randomly generated address. For example, https://tinyurl.com/wp-tinyurl leads to the Wikipedia article about the website. Preview short URLs To preview the full URL from the short TinyURL, the user can visit TinyURL first and enable previews as a default browser cookie setting or copy and paste the short URL into the browser address bar, and prepend the short tinyurl.com/x with preview.tinyurl.com/x. Another preview feature is not well documented at the TinyURL site, but the alternative shortened URL with preview capability is also offered to shortcut creators as an option at the time of the creation of the link. Impact Similar s
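The alias mechanism described above (a long URL submitted, a short alias stored and returned, with optional custom aliases) can be illustrated with a small sketch. TinyURL's actual implementation is proprietary; the class, domain name and collision-handling strategy below are assumptions made purely for illustration, with an in-memory dictionary standing in for the service's hashed database.

```python
import hashlib
import itertools

class UrlShortener:
    """Minimal sketch of a TinyURL-style alias store (not the real service)."""

    def __init__(self, domain="short.example"):
        self.domain = domain
        self.aliases = {}   # alias -> long URL

    def shorten(self, long_url, custom_alias=None):
        """Return a short URL; honour a custom alias if it is still free."""
        if custom_alias:
            if custom_alias in self.aliases and self.aliases[custom_alias] != long_url:
                raise ValueError("alias already taken")
            alias = custom_alias
        else:
            # Derive a candidate alias from a hash of the URL; lengthen it on collision.
            digest = hashlib.sha256(long_url.encode("utf-8")).hexdigest()
            for length in itertools.count(7):
                alias = digest[:length]
                if self.aliases.get(alias, long_url) == long_url:
                    break
        self.aliases[alias] = long_url
        return f"https://{self.domain}/{alias}"

    def resolve(self, alias):
        """Look up the long URL that a short alias redirects to."""
        return self.aliases[alias]

shortener = UrlShortener()
short = shortener.shorten("https://en.wikipedia.org/wiki/TinyURL", custom_alias="wp-tinyurl")
print(short, "->", shortener.resolve("wp-tinyurl"))
```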
https://en.wikipedia.org/wiki/Context-based%20access%20control
Context-based access control (CBAC) is a feature of firewall software, which intelligently filters TCP and UDP packets based on application layer protocol session information. It can be used for intranets, extranets and internets. CBAC can be configured to permit specified TCP and UDP traffic through a firewall only when the connection is initiated from within the network needing protection. (In other words, CBAC inspects traffic for sessions that originate from within the protected network and opens the firewall for the corresponding return traffic.) However, while this example discusses inspecting traffic for sessions that originate from the internal network, CBAC can inspect traffic for sessions that originate from either side of the firewall. This is the basic function of a stateful inspection firewall. Without CBAC, traffic filtering is limited to access list implementations that examine packets at the network layer, or at most, the transport layer. However, CBAC examines not only network layer and transport layer information but also examines the application-layer protocol information (such as FTP connection information) to learn about the state of the TCP or UDP session. This allows support of protocols that involve multiple channels created as a result of negotiations in the FTP control channel. Most of the multimedia protocols, as well as some other protocols (such as FTP, RPC, and SQL*Net), involve multiple control channels. CBAC inspects traffic that travels through the firewall to discover and manage state information for TCP and UDP sessions. This state information is used to create temporary openings in the firewall's access lists to allow return traffic and additional data connections for permissible sessions (sessions that originated from within the protected internal network). CBAC works through deep packet inspection and hence Cisco calls it 'IOS firewall' in their Internetwork Operating System (IOS). CBAC also provides the following benefits: Denial-of-service prevention and detection Real-time alerts and
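The stateful idea behind CBAC, where sessions initiated from the protected side open temporary holes that only matching return traffic may use, can be sketched as follows. This is an illustrative model, not Cisco IOS configuration or code; the class and field names are assumptions made for the example.

```python
# Sketch of stateful inspection: outbound sessions are recorded, and inbound
# packets are permitted only if they match an established session.

class StatefulInspector:
    def __init__(self):
        self.sessions = set()  # (proto, inside_ip, inside_port, outside_ip, outside_port)

    def outbound(self, proto, src_ip, src_port, dst_ip, dst_port):
        """Record a session initiated from the protected (inside) network."""
        self.sessions.add((proto, src_ip, src_port, dst_ip, dst_port))
        return True

    def inbound(self, proto, src_ip, src_port, dst_ip, dst_port):
        """Permit inbound packets only if they belong to a recorded session."""
        return (proto, dst_ip, dst_port, src_ip, src_port) in self.sessions

fw = StatefulInspector()
fw.outbound("tcp", "10.0.0.5", 51000, "203.0.113.7", 80)       # inside host opens a web session
print(fw.inbound("tcp", "203.0.113.7", 80, "10.0.0.5", 51000))  # True: return traffic allowed
print(fw.inbound("tcp", "198.51.100.9", 80, "10.0.0.5", 51000)) # False: unsolicited traffic dropped
```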
https://en.wikipedia.org/wiki/Conner%20Peripherals
Conner Peripherals, Inc. (commonly referred to as Conner), was a company that manufactured hard drives for personal computers. Conner Peripherals was founded in 1985 by Seagate Technology co-founder and San Jose State University alumnus Finis Conner (1943– ). In 1986, they merged with CoData, a Colorado start-up founded by MiniScribe founders Terry Johnson and John Squires. CoData was developing a new type of small hard disk that put the capacity of a 5.25-inch drive into the smaller (and now commonplace) 3.5-inch format. The CoData drive was the first Conner Peripherals product. The company was partially financed by Compaq, who was also a major customer for many years. Hard disks Design concepts Conner's drives were notable for eschewing the "tub" type of head-disk assembly, where the disks are inside a large base casting shaped like a square bowl or vault with a flat lid; instead, they preferred the flat base plate approach, which was more resistant to shock and less likely to warp or deform when heated. Their first drives had the base plate carrying the disks, head arm and actuator enclosed inside a long aluminum cartridge, fixed to a bulkhead on the other side with two screws and sealed with a large, square O-ring. Conner's 1/3-height (1-inch thick) drives used a domed, cast aluminum lid with four screws, one on each corner, sealed to the base plate with a rubber gasket. The printed circuit board was bolted to the bottom of the base plate, with the mounting holes for the drive drilled into tabs cast into the sides of the base plate. This design would be Conner's trademark look well into the 1990s. Logically, Conner's drives had some of the characteristics of the original MiniScribe drives (of which John Squires had also been a designer), with a large amount of intelligence built into the drive's central processing unit (CPU); Conner drives used a single Motorola 68HC11 microcontroller, and ran a proprietary real-time operating system that implemented the tr
https://en.wikipedia.org/wiki/Signaling%20protocol
A signaling protocol is a type of communications protocol for encapsulating the signaling between communication endpoints and switching systems to establish or terminate a connection and to identify the state of connection. The following is a list of signaling protocols: ALOHA Digital Subscriber System No. 1 (EDSS1) Dual-tone multi-frequency signaling H.248 H.323 H.225.0 Jingle Media Gateway Control Protocol (MGCP) Megaco Regional System R1 NBAP (Node B Application Part) Signalling System R2 Session Initiation Protocol Signaling System No. 5 Signaling System No. 6 Signaling System No. 7 Skinny Client Control Protocol (SCCP, Skinny) Q.931 QSIG Network protocols Telephony signals
https://en.wikipedia.org/wiki/Redback%20Networks
Redback Networks provided hardware and software used by Internet service providers to manage broadband services. The company's products included the SMS (Subscriber Management System), SmartEdge, and SmartMetro product lines. In January 2007, the company was acquired by Ericsson. History Redback Networks was founded in August 1996 by Gaurav Garg, Asher Waldfogel, and William M. Salkewicz. The company received seed money from Sequoia Capital. In May 1999, during the dot-com bubble, the company became a public company via an initial public offering. After pricing at $23 each, shares soared 266% on the first day of trading. In November 1999, the company acquired Siara Systems, which at the time only had products in the prototype stage, for $4.3 billion in stock. In 2000, its share price peaked at $198 but fell to $0.27 in October 2002, after the burst of the dot-com bubble. In August 2000, the company acquired Abatis Systems. In October 2000, the company opened a regional headquarters in Hong Kong. In January 2007, the company was acquired by Ericsson for $1.9 billion, or $25 per share. References 1996 establishments in California 1999 initial public offerings 2007 mergers and acquisitions Companies formerly listed on the Nasdaq Dot-com bubble
https://en.wikipedia.org/wiki/Music%20scheduling%20system
Music scheduling systems are employed to sequence music at radio stations. Although these systems were originally implemented by manual index card methods, since the late 1970s they have exploited the efficiency and speed of digital computers. They are essential tools for broadcasting by music radio stations. These systems are databases of the songs in active rotation at a radio station, plus an ample set of rules for sequencing them in accordance with specific policies. For example, there may be restrictions on how much time must pass between two songs by the same artist, or whether a song played during noontime today may be heard at noontime tomorrow (or not). There are also rules for what kinds of songs may succeed another according to tempos or other characteristics. Many people believe that disc jockeys at radio stations are responsible for choosing the music which is heard on their shows. In reality, playlists for each hour of the day have usually been generated in advance by a radio station's program director using a music scheduling system. This ensures that the station programming is optimal and adheres to the policies and objectives of the station's management. These policies and objectives are usually designed to please the greatest number of people inside the radio station's demographic target, and garner the best ratings possible for the radio station. However, there are some radio stations, for example those of BBC, which do allow most (but not all) disc jockeys to choose the music themselves without obligations, as these stations cover an eclectic range of genres. In addition, shows from resident/guest disc jockeys (particularly on mainstream stations) also do not need the program director's opinion for their playlists. Music scheduling is simply the function of generating a playlist. Other systems are responsible for actually reproducing the music. The first widely used commercial music scheduler for radio is Selector, originally written by Dr.
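One of the sequencing rules described above, a minimum separation between songs by the same artist, can be sketched in a few lines. This is a deliberately simplified illustration, not how Selector or any commercial scheduler works; the function name, the greedy strategy and the sample playlist are assumptions for the example. Real systems weigh many rules at once (tempo flow, daypart rotation, category balance) rather than a single constraint.

```python
# Greedy playlist builder enforcing one rule: no repeat of an artist within
# the last `artist_separation` scheduled slots.

def schedule(songs, artist_separation=2):
    """songs is a list of (title, artist) tuples; returns an ordered playlist."""
    playlist, pending = [], list(songs)
    while pending:
        for i, (title, artist) in enumerate(pending):
            recent_artists = [a for _, a in playlist[-artist_separation:]]
            if artist not in recent_artists:
                playlist.append(pending.pop(i))
                break
        else:
            # No remaining song satisfies the rule; relax it rather than stall.
            playlist.append(pending.pop(0))
    return playlist

songs = [("Song A", "Artist 1"), ("Song B", "Artist 1"),
         ("Song C", "Artist 2"), ("Song D", "Artist 3")]
for title, artist in schedule(songs):
    print(title, "-", artist)
```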
https://en.wikipedia.org/wiki/RC%204000%20multiprogramming%20system
The RC 4000 Multiprogramming System (also termed Monitor or RC 4000 depending on reference) is a discontinued operating system developed for the RC 4000 minicomputer in 1969. For clarity, this article mostly uses the term Monitor. Overview The RC 4000 Multiprogramming System is historically notable for being the first attempt to break down an operating system into a group of interacting programs communicating via a message passing kernel. RC 4000 was not widely used, but was highly influential, sparking the microkernel concept that dominated operating system research through the 1970s and 1980s. Monitor was created largely by one programmer, Per Brinch Hansen, who worked at Regnecentralen where the RC 4000 was being designed. Leif Svalgaard participated in implementing and testing Monitor. Brinch Hansen found that no existing operating system was suited to the new machine, and was tired of having to adapt existing systems. He felt that a better solution was to build an underlying kernel, which he referred to as the nucleus, that could be used to build up an operating system from interacting programs. Unix, for instance, uses small interacting programs for many tasks, transferring data through a system called pipelines or pipes. However, a large amount of fundamental code is integrated into the kernel, notably things like file systems and program control. Monitor would relocate such code also, making almost the entire system a set of interacting programs, reducing the kernel (nucleus) to a communications and support system only. Monitor used a pipe-like system of shared memory as the basis of its inter-process communication (IPC). Data to be sent from one process to another was copied into an empty memory data buffer, and when the receiving program was ready, back out again. The buffer was then returned to the pool. Programs had a very simple application programming interface (API) for passing data, using an asynchronous set of four methods. Client applications se
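The buffer-pool message passing described above (a message copied into an empty buffer, delivered when the receiver is ready, and the buffer then returned to the pool) can be sketched with ordinary queues. The names and structure below are illustrative assumptions, not the actual RC 4000 nucleus API, and the sketch is synchronous rather than asynchronous for brevity.

```python
# Sketch of buffer-pool message passing: a fixed pool of message buffers,
# copy-in on send, copy-out on receive, buffer returned to the pool afterwards.

from collections import deque

class Nucleus:
    def __init__(self, pool_size=8):
        self.free_buffers = deque(range(pool_size))  # indices of empty buffers
        self.buffers = [None] * pool_size
        self.queues = {}                             # receiver name -> queue of buffer indices

    def send_message(self, receiver, data):
        if not self.free_buffers:
            raise RuntimeError("no free message buffers")
        buf = self.free_buffers.popleft()
        self.buffers[buf] = data                     # copy the message into the buffer
        self.queues.setdefault(receiver, deque()).append(buf)

    def wait_message(self, receiver):
        buf = self.queues.setdefault(receiver, deque()).popleft()
        data = self.buffers[buf]
        self.buffers[buf] = None                     # buffer goes back to the pool
        self.free_buffers.append(buf)
        return data

nucleus = Nucleus()
nucleus.send_message("spooler", "print job 42")
print(nucleus.wait_message("spooler"))
```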
https://en.wikipedia.org/wiki/Septic%20drain%20field
Septic drain fields, also called leach fields or leach drains, are subsurface wastewater disposal facilities used to remove contaminants and impurities from the liquid that emerges after anaerobic digestion in a septic tank. Organic materials in the liquid are catabolized by a microbial ecosystem. A septic drain field, a septic tank, and associated piping compose a septic system. The drain field typically consists of an arrangement of trenches containing perforated pipes and porous material (often gravel) covered by a layer of soil to prevent animals (and surface runoff) from reaching the wastewater distributed within those trenches. Primary design considerations are both hydraulic for the volume of wastewater requiring disposal and catabolic for the long-term biochemical oxygen demand of that wastewater. The land area that is set aside for the septic drain field may be called a septic reserve area (SRA). Sewage farms similarly dispose of wastewater through a series of ditches and lagoons (often with little or no pre-treatment). These are more often found in arid countries as the waterflow on the surface allows for irrigation (and fertilization) of agricultural land. Design Many health departments require a percolation test ("perc" test) to establish the suitability of drain field soil to receive septic tank effluent. An engineer, soil scientist, or licensed designer may be required to work with the local governing agency to design a system that conforms to these criteria. A more progressive way to determine leach field sizing is by direct observation of the soil profile. In this observation, the engineer evaluates many features of the soil such as texture, structure, consistency, pores/roots, etc. The goal of percolation testing is to ensure the soil is permeable enough for septic tank effluent to percolate away from the drain field, but fine grained enough to filter out pathogenic bacteria and viruses before they travel far enough to reach a water well or
https://en.wikipedia.org/wiki/BRLESC
The BRLESC I (Ballistic Research Laboratories Electronic Scientific Computer) was one of the last of the first-generation electronic computers. It was built by the United States Army's Ballistic Research Laboratory (BRL) at Aberdeen Proving Ground with assistance from the National Bureau of Standards (now the National Institute of Standards and Technology), and was designed to take over the computational workload of EDVAC and ORDVAC, which themselves were successors of ENIAC. It began operation in 1962. The Ballistic Research Laboratory became a part of the U.S. Army Research Laboratory in 1992. BRLESC was designed primarily for scientific and military tasks requiring high precision and high computational speed, such as ballistics problems, army logistical problems, and weapons systems evaluations. It contained 1727 vacuum tubes and 853 transistors and had a memory of 4096 72-bit words. BRLESC employed punched cards, magnetic tape, and a magnetic drum as input-output devices, which could be operated simultaneously. It was capable of five million (bitwise) operations per second. A fixed-point addition took 5 microseconds, a floating-point addition took 5 to 10 microseconds, a multiplication (fixed- or floating-point) took 25 microseconds, and a division (fixed- or floating-point) took 65 microseconds. (These times are including the memory access time, which was 4-5 microseconds.) It was the fastest computer in the world until the CDC 6600 was introduced in 1964. BRLESC and its predecessor, ORDVAC, used their own unique notation for hexadecimal numbers. Instead of the sequence A B C D E F universally used today, the digits 10 to 15 were represented by the letters K S N J F L, corresponding to the teletypewriter characters on five-track paper tape. The mnemonic phrase "King Size Numbers Just For Laughs" was used to remember the letter sequence. BRLESC II, using integrated circuits, became operational in November 1967; it was designed to be 200 times faster than
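The K S N J F L digit notation mentioned above can be demonstrated with a short conversion routine. This is purely an illustration of the notation, not software associated with BRLESC itself.

```python
# Convert a number to BRLESC/ORDVAC-style hexadecimal, where the digits
# 10..15 are written K S N J F L ("King Size Numbers Just For Laughs")
# instead of the modern A..F.

BRLESC_DIGITS = "0123456789KSNJFL"

def to_brlesc_hex(n):
    if n == 0:
        return "0"
    digits = []
    while n:
        digits.append(BRLESC_DIGITS[n % 16])
        n //= 16
    return "".join(reversed(digits))

print(to_brlesc_hex(255))    # LL   (modern FF)
print(to_brlesc_hex(43981))  # KSNJ (modern ABCD)
```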
https://en.wikipedia.org/wiki/Ring%20oscillator
A ring oscillator is a device composed of an odd number of NOT gates in a ring, whose output oscillates between two voltage levels, representing true and false. The NOT gates, or inverters, are attached in a chain and the output of the last inverter is fed back into the first. Details Because a single inverter computes the logical NOT of its input, it can be shown that the last output of a chain of an odd number of inverters is the logical NOT of the first input. The final output is asserted a finite amount of time after the first input is asserted and the feedback of the last output to the input causes oscillation. A circular chain composed of an even number of inverters cannot be used as a ring oscillator. The last output in this case is the same as the input. However, this configuration of inverter feedback can be used as a storage element and it is the basic building block of static random access memory or SRAM. The stages of the ring oscillator are often differential stages, that are more immune to external disturbances. This renders available also non-inverting stages. A ring oscillator can be made with a mix of inverting and non-inverting stages, provided the total number of inverting stages is odd. The oscillator period is in all cases equal to twice the sum of the individual delays of all stages. A ring oscillator only requires power to operate. Above a certain voltage, typical well below the threshold voltage of the MOSFETs used, oscillations begin spontaneously. To increase the frequency of oscillation, two methods are commonly used. First, making the ring from a smaller number of inverters results in a higher frequency of oscillation, with about the same power consumption. Second, the supply voltage may be increased. In circuits where this method can be applied, it reduces the propagation delay through the chain of stages, increasing both the frequency of the oscillation and the current consumed. Operation To understand the operation of a ring
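The statement above that the period equals twice the sum of the individual stage delays can be written as a short calculation: for n identical stages with propagation delay t_d, the period is T = 2·n·t_d and the frequency f = 1/(2·n·t_d). The delay value used below is purely illustrative.

```python
# Frequency of an n-stage ring oscillator from the per-stage propagation delay,
# using T = 2 * n * t_d as stated in the text.

def ring_oscillator_frequency(num_stages, stage_delay_s):
    assert num_stages % 2 == 1, "an inverter-only ring needs an odd number of stages"
    period = 2 * num_stages * stage_delay_s
    return 1.0 / period

# e.g. 5 inverters with a (hypothetical) 100 ps propagation delay each:
print(ring_oscillator_frequency(5, 100e-12))  # 1e9 Hz = 1 GHz
```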
https://en.wikipedia.org/wiki/Arpent
An arpent (, sometimes called arpen) is a unit of length and a unit of area. It is a pre-metric French unit based on the Roman actus. It is used in Quebec, some areas of the United States that were part of French Louisiana, and in Mauritius and the Seychelles. Etymology The word arpent is believed to derive from the Late Latin arepennis (equal to half a jugerum), which in turn comes from the Gaulish *are-penno- ("end, extremity of a field"). Unit of length There were various standard arpents. The most common were the arpent used in North America, which was defined as 180 French feet (, of approximately ), and the arpent used in Paris, which was defined as 220 French feet. In North America, 1 arpent = 180 French feet = about 192 English feet = about 58.47 metres In Paris, 1 arpent = 220 French feet = about 234 English feet = about 71.46 metres Unit of area Historically, in North America, 1 (square) arpent (), also known as a French acre, was 180 French feet × 180 French feet = 32,400 French square feet = about 3419 square metres = about 0.845 English acres. Certain U.S. states have official definitions of the arpent which vary slightly: In Louisiana, Mississippi, Alabama, and Florida, the official conversion is 1 arpent = . In Arkansas and Missouri, the official conversion is 1 arpent = . In Paris, the square arpent was 220 French feet × 220 French feet = 48,400 French square feet, about . In Mauritius and Seychelles, an arpent is about 4220.87 square metres, 0.4221 hectares, 1.043 acres. Louisiana In Louisiana, parcels of land known as arpent sections or French arpent land grants also pre-date the Public Land Survey System (PLSS), but are treated as PLSS sections. An arpent can mean a linear measurement of approximately or an area measurement of about . The area measurement is also sometimes referred to as an arpent carré (square arpent) or an arpent de surface. French arpent land divisions are long narrow parcels of land, also called ribbon farms,
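The conversions quoted above can be collected into a couple of helper functions. The sketch below assumes the North American definition (linear arpent about 58.47 m, square arpent about 3419 m², roughly 0.845 acre); the constants are taken from the figures in the text, rounded as given there.

```python
# Unit conversions for the North American arpent, using the approximate
# figures quoted above.

LINEAR_ARPENT_M = 58.47            # metres per (North American) linear arpent
SQUARE_ARPENT_M2 = 3419.0          # square metres per square arpent
SQUARE_METRES_PER_ACRE = 4046.86

def arpents_to_metres(arpents):
    return arpents * LINEAR_ARPENT_M

def square_arpents_to_acres(square_arpents):
    return square_arpents * SQUARE_ARPENT_M2 / SQUARE_METRES_PER_ACRE

print(arpents_to_metres(10))        # ~584.7 m
print(square_arpents_to_acres(1))   # ~0.845 acres
```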
https://en.wikipedia.org/wiki/Tiling%20with%20rectangles
A tiling with rectangles is a tiling which uses rectangles as its parts. The domino tilings are tilings with rectangles of 1 × 2 side ratio. Tilings with straight polyominoes of shapes such as 1 × 3 and 1 × 4, and tilings with polyominoes of other rectangular shapes, also fall into this category. Congruent rectangles Some tilings use congruent rectangles, such as the familiar brickwork patterns. Tilings with non-congruent rectangles The smallest square that can be cut into (m × n) rectangles, such that all m and n are different integers, is the 11 × 11 square, and the tiling uses five rectangles. The smallest rectangle that can be cut into (m × n) rectangles, such that all m and n are different integers, is the 9 × 13 rectangle, and the tiling uses five rectangles. See also Squaring the square Tessellation Tiling puzzle Notes Tessellation
https://en.wikipedia.org/wiki/Unicode%20and%20email
Many email clients now offer some support for Unicode. Some clients will choose between a legacy encoding and Unicode depending on the mail's content, either automatically or when the user requests it. Technical requirements for sending messages containing non-ASCII characters by email include: encoding of certain header fields (subject, sender's and recipient's names, sender's organization and reply-to name) and, optionally, of the body in a content-transfer encoding; encoding of non-ASCII characters in one of the Unicode transformation formats; negotiating the use of UTF-8 encoding in email addresses and reply codes (SMTPUTF8); and sending the information about the content-transfer encoding and the Unicode transform used so that the message can be correctly displayed by the recipient (see Mojibake). If the sender's or recipient's email address contains non-ASCII characters, sending a message also requires encoding these to a format that can be understood by mail servers. Unicode support in protocols The SMTPUTF8 extension provides a mechanism for allowing non-ASCII email addresses encoded as UTF-8 in an SMTP or LMTP protocol. Unicode support in message header To use Unicode in certain email header fields, e.g. subject lines, sender and recipient names, the Unicode text has to be encoded using a MIME "Encoded-Word" with a Unicode encoding as the charset. To use Unicode in the domain part of email addresses, IDNA encoding must traditionally be used. Alternatively, SMTPUTF8 allows the use of UTF-8 encoding in email addresses (both in a local part and in domain name) as well as in a mail header section. Various standards have been created to retrofit the handling of non-ASCII data onto the originally ASCII-only email protocol: MIME encoded-words allow the encoding of non-ASCII values such as real names and subject lines in email headers; IDNA provides support for encoding non-ASCII domain names in the Domain Name System; and a further extension to the message format allows the use of UTF-8 in a mail header section. Unicode support in message bodies
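Two of the mechanisms mentioned above, MIME "Encoded-Word" headers and IDNA domain encoding, can be demonstrated with Python's standard library. The subject text and domain below are example values only.

```python
# MIME encoded-word for a non-ASCII Subject header, and IDNA (Punycode)
# encoding of a non-ASCII domain label, using the standard library.

from email.header import Header, decode_header

# Encode a non-ASCII subject as a MIME encoded-word with UTF-8 as the charset.
subject = Header("Grüße aus Köln", charset="utf-8").encode()
print(subject)                 # =?utf-8?...?= encoded-word form
print(decode_header(subject))  # decodes back to (bytes, charset)

# Encode the domain part of an address with IDNA.
domain = "bücher.example"
print(domain.encode("idna").decode("ascii"))  # xn--bcher-kva.example
```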
https://en.wikipedia.org/wiki/Plan%20%28archaeology%29
In archaeological excavation, a plan is a drawn record of features and artifacts in the horizontal plane. Overview Archaeological plan can either take the form of a "multi context" plan, which is drawn with many contexts on it to show relationships between these features as part of some phase, or alternatively a single context plan with a single feature is drawn . Excavated features are drawn in three dimensions with the help of drawing conventions such as hachures. Single context planning developed by the Museum of London has become the professional norm. The basic advantage of single context planning is context plans draw on "transparent perma-trace paper" can be overlaid for re-interpretation at a later date. Multi-context Plans as opposed to single context plans can be made of complete sites, trenches or individual features. In the United Kingdom, the scale of the plans is usually 1:20. They are linked to the site recording system by the inclusion of known grid points and height readings, taken with a dumpy level or a total station (see surveying). Excavation of a site by the removal of human made deposits in the reverse order they were created is the preferred method of excavation and is referred to as stratigraphic area excavation "in plan" as opposed to excavation "in section". Plan and section drawings have an interpretive function as well as being part of the recording system, because the draughts-person makes conscious decisions about what should be included or emphasised. Archaeological plan topics The grid It is common and good practice on excavations to lay out a grid of 5m squares so as to facilitate planning. This grid is marked out on-site with grid pegs that form the baselines for tapes and other planning tools to aid the drawing of plans. It is also common practice that planning is done for each context on a separate piece of perma-trace that conforms to these 5m grid squares. This is part of the single context recording system (see
https://en.wikipedia.org/wiki/Regularization%20%28mathematics%29
In mathematics, statistics, finance, and computer science, particularly in machine learning and inverse problems, regularization is a process that changes the answer of a problem to a "simpler" one. It is often used to obtain results for ill-posed problems or to prevent overfitting. Although regularization procedures can be divided in many ways, the following delineation is particularly helpful: Explicit regularization is regularization whenever one explicitly adds a term to the optimization problem. These terms could be priors, penalties, or constraints. Explicit regularization is commonly employed with ill-posed optimization problems. The regularization term, or penalty, imposes a cost on the optimization function to make the optimal solution unique. Implicit regularization is all other forms of regularization. This includes, for example, early stopping, using a robust loss function, and discarding outliers. Implicit regularization is essentially ubiquitous in modern machine learning approaches, including stochastic gradient descent for training deep neural networks, and ensemble methods (such as random forests and gradient boosted trees). In explicit regularization, independent of the problem or model, there is always a data term, which corresponds to a likelihood of the measurement, and a regularization term, which corresponds to a prior. By combining both using Bayesian statistics, one can compute a posterior that includes both information sources and therefore stabilizes the estimation process. By trading off both objectives, one chooses to adhere more closely to the data or to enforce generalization (to prevent overfitting). There is a whole research branch dealing with all possible regularizations. In practice, one usually tries a specific regularization and then figures out the probability density that corresponds to that regularization to justify the choice. It can also be physically motivated by common sense or intuition. In machine learning, the data term correspo
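Explicit regularization, adding a penalty term to the optimization problem, can be illustrated with ridge (L2-penalized) linear regression, whose objective is ||Xw − y||² + λ||w||². The sketch below uses synthetic data invented for the example; it is one standard instance of explicit regularization, not a description of any particular system mentioned above.

```python
# Ridge regression as a concrete example of an explicit penalty term:
# the closed-form solution is w = (X^T X + lam * I)^{-1} X^T y.

import numpy as np

def ridge_fit(X, y, lam=1.0):
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = X @ np.array([1.0, 0.5, 0.0, 0.0, -2.0]) + 0.1 * rng.normal(size=50)

print(ridge_fit(X, y, lam=0.0))   # ordinary least squares (no penalty)
print(ridge_fit(X, y, lam=10.0))  # penalized weights are shrunk toward zero
```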
https://en.wikipedia.org/wiki/Geosmin
Geosmin ( ) is an irregular sesquiterpenoid, produced from the universal sesquiterpene precursor farnesyl pyrophosphate (also known as farnesyl diphosphate), in a two-step -dependent reaction. Geosmin, along with the irregular monoterpene 2-methylisoborneol, together account for the majority of biologically-caused taste and odor outbreaks in drinking water worldwide. Geosmin has a distinct earthy or musty odor, which most people can easily smell. The geosmin odor detection threshold in humans is very low, ranging from 0.006 to 0.01 micrograms per liter in water. Geosmin is also responsible for the earthy taste of beetroots and a contributor to the strong scent (petrichor) that occurs in the air when rain falls after a spell of dry weather or when soil is disturbed. In chemical terms, geosmin is a bicyclic alcohol with formula , a derivative of decalin. Its name is derived from the Ancient Greek words (), meaning "earth", and (), meaning "smell". The word was coined in 1965 by the American biochemist Nancy N. Gerber (1929–1985) and the French-American biologist Hubert A. Lechevalier (1926–2015). Production Geosmin is produced by various blue-green algae (cyanobacteria) and filamentous bacteria in the class Actinomyces, and also some other prokaryotes and eukaryotes. The main genera in the cyanobacteria that have been shown to produce geosmin include Anabaena, Phormidium, and Planktothrix, while the main genus in the Actinomyces that produces geosmin is Streptomyces. Communities whose water supplies depend on surface water can periodically experience episodes of unpleasant-tasting water when a sharp drop in the population of these bacteria releases geosmin into the local water supply. Under acidic conditions, geosmin decomposes into odorless substances. In 2006, geosmin was biosynthesized by a bifunctional Streptomyces coelicolor enzyme. A single enzyme, geosmin synthase, converts farnesyl diphosphate to geosmin in a two-step reaction. Not all blue-green alga
https://en.wikipedia.org/wiki/Stackless%20Python
Stackless Python, or Stackless, is a Python programming language interpreter, so named because it avoids depending on the C call stack for its own stack. In practice, Stackless Python uses the C stack, but the stack is cleared between function calls. The most prominent feature of Stackless is microthreads, which avoid much of the overhead associated with usual operating system threads. In addition to Python features, Stackless also adds support for coroutines, communication channels, and task serialization. Design With Stackless Python, a running program is split into microthreads that are managed by the language interpreter itself, not the operating system kernel—context switching and task scheduling is done purely in the interpreter (these are thus also regarded as a form of green thread). Microthreads manage the execution of different subtasks in a program on the same CPU core. Thus, they are an alternative to event-based asynchronous programming and also avoid the overhead of using separate threads for single-core programs (because no mode switching between user mode and kernel mode needs to be done, so CPU usage can be reduced). Although microthreads make it easier to deal with running subtasks on a single core, Stackless Python does not remove CPython's Global Interpreter Lock, nor does it use multiple threads and/or processes. So it allows only cooperative multitasking on a shared CPU and not parallelism (preemption was originally not available but is now in some form). To use multiple CPU cores, one would still need to build an interprocess communication system on top of Stackless Python processes. Due to the considerable number of changes in the source, Stackless Python cannot be installed on a preexisting Python installation as an extension or library. It is instead a complete Python distribution in itself. The majority of Stackless's features have also been implemented in PyPy, a self-hosting Python interpreter and JIT compiler. Use Although the whole
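The microthread model described above, tasklets communicating over channels under a cooperative scheduler, looks roughly like the sketch below. It is written against the Stackless API as commonly documented (stackless.tasklet, stackless.channel, stackless.run) and only runs under the Stackless interpreter, not standard CPython; treat it as an illustrative example rather than a definitive reference.

```python
# Minimal tasklet/channel example in the style of Stackless Python.
# channel.send blocks the sending tasklet until a receiver is ready,
# which is how the cooperative scheduling described above is driven.

import stackless

def producer(channel):
    for i in range(3):
        channel.send(i)

def consumer(channel):
    for _ in range(3):
        print("received", channel.receive())

ch = stackless.channel()
stackless.tasklet(producer)(ch)
stackless.tasklet(consumer)(ch)
stackless.run()   # run the scheduler until all tasklets finish
```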
https://en.wikipedia.org/wiki/Comparison%20of%20open-source%20and%20closed-source%20software
Free/open-source software – the source availability model used by free and open-source software (FOSS) – and closed source are two approaches to the distribution of software. Background Under the closed-source model source code is not released to the public. Closed-source software is maintained by a team who produces their product in a compiled-executable state, which is what the market is allowed access to. Microsoft, the owner and developer of Windows and Microsoft Office, along with other major software companies, have long been proponents of this business model, although in August 2010, Microsoft interoperability general manager Jean Paoli said Microsoft "loves open source" and its anti-open-source position was a mistake. The FOSS model allows for able users to view and modify a product's source code, but most of such code is not in the public domain. Common advantages cited by proponents for having such a structure are expressed in terms of trust, acceptance, teamwork and quality. A non-free license is used to limit what free software movement advocates consider to be the essential freedoms. A license, whether providing open-source code or not, that does not stipulate the "four software freedoms", are not considered "free" by the free software movement. A closed source license is one that limits only the availability of the source code. By contrast a copyleft license claims to protect the "four software freedoms" by explicitly granting them and then explicitly prohibiting anyone to redistribute the package or reuse the code in it to make derivative works without including the same licensing clauses. Some licenses grant the four software freedoms but allow redistributors to remove them if they wish. Such licenses are sometimes called permissive software licenses. An example of such a license is the FreeBSD License which allows derivative software to be distributed as non-free or closed source, as long as they give credit to the original designers. A miscon
https://en.wikipedia.org/wiki/Bid%20shading
In an auction, bid shading is the practice of a bidder placing a bid that is below what they believe the good is worth. Bid shading is used for one of two purposes. In a common value auction with incomplete information, bid shading is used to compensate for the winner's curse. In such auctions, the good is worth the same amount to all bidders, but bidders don't know the value of the good and must independently estimate it. Since all bidders value the good equally, the winner will generally be the bidder whose estimate of the value is largest. But if we assume that in general bidders estimate the value accurately, then the highest bidder has overestimated the good's value and will end up paying more than it is worth. In other words, winning the auction carries bad news about a bidder's value estimate. A savvy bidder will anticipate this, and reduce their bid accordingly. Bid shading is also used in first-price auctions, where the winning bidder pays the amount of his bid. If a participant bids an amount equal to their value for the good, they would gain nothing by winning the auction, since they are indifferent between the money and the good. Bidders will optimize their expected value by accepting a lower chance of winning in return for a higher payoff if they win. In a first-price common value auction, a savvy bidder should shade for both of the above purposes. Bid shading is not only a normative theoretical construct; it has been detected empirically in real-world auction markets. Earlier theoretical work on sequential auctions focused either on bid shading in an exogenous sequence of auctions, or on strategic auctioning to short-lived buyers, who never want to shade their bids. Later work provides a model of a sequential auction with both endogenous strategic selling and forward-looking, longer-lived buyers who can shade their bids. That model's contribution is the analysis of the best response of the seller to strategic bid shading, and the exp
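The trade-off described above, accepting a lower chance of winning in return for a higher payoff when winning, can be made concrete with a stylized first-price example. Assume, purely for illustration, that a bidder with value v faces n − 1 rival bids that are independent and uniform on [0, 1]; expected profit from bidding b is then (v − b)·b^(n−1), and maximizing it gives b = v·(n − 1)/n, strictly below the bidder's value. The code just checks this numerically; it is not a model of any particular real auction.

```python
# Stylized illustration of bid shading in a first-price auction.

import numpy as np

def expected_profit(b, v, n):
    # (value - bid) * probability that bid b beats n-1 uniform rival bids
    return (v - b) * b ** (n - 1)

v, n = 0.8, 4
bids = np.linspace(0.0, v, 10001)
best_bid = bids[np.argmax(expected_profit(bids, v, n))]

print(best_bid)           # ~0.6: the numerically optimal (shaded) bid
print(v * (n - 1) / n)    # 0.6: the closed-form optimum under these assumptions
```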
https://en.wikipedia.org/wiki/Sparsely%20totient%20number
In mathematics, a sparsely totient number is a certain kind of natural number. A natural number, n, is sparsely totient if for all m > n, where is Euler's totient function. The first few sparsely totient numbers are: 2, 6, 12, 18, 30, 42, 60, 66, 90, 120, 126, 150, 210, 240, 270, 330, 420, 462, 510, 630, 660, 690, 840, 870, 1050, 1260, 1320, 1470, 1680, 1890, 2310, 2730, 2940, 3150, 3570, 3990, 4620, 4830, 5460, 5610, 5670, 6090, 6930, 7140, 7350, 8190, 9240, 9660, 9870, ... . The concept was introduced by David Masser and Peter Man-Kit Shiu in 1986. As they showed, every primorial is sparsely totient. Properties If P(n) is the largest prime factor of n, then . holds for an exponent . It is conjectured that . References Integer sequences
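The defining property can be checked by brute force for small n. The sketch below uses the elementary bound φ(m) ≥ √(m/2), which implies that once m exceeds 2·φ(n)² every larger m automatically has a larger totient, so only finitely many m need to be tested; the search limit of 160 is an arbitrary choice for the example.

```python
# Brute-force check of the definition: n is sparsely totient when every m > n
# has a strictly larger totient.  phi(m) >= sqrt(m/2) lets us stop at 2*phi(n)**2.

def totient_sieve(limit):
    """phi[i] = Euler's totient of i, for 0 <= i <= limit."""
    phi = list(range(limit + 1))
    for p in range(2, limit + 1):
        if phi[p] == p:                      # p is untouched, hence prime
            for k in range(p, limit + 1, p):
                phi[k] -= phi[k] // p
    return phi

SEARCH_TO = 160
phi = totient_sieve(2 * SEARCH_TO ** 2)      # large enough for every bound below

def is_sparsely_totient(n):
    bound = 2 * phi[n] ** 2
    return all(phi[m] > phi[n] for m in range(n + 1, bound + 1))

print([n for n in range(2, SEARCH_TO) if is_sparsely_totient(n)])
# [2, 6, 12, 18, 30, 42, 60, 66, 90, 120, 126, 150] — matching the list above
```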
https://en.wikipedia.org/wiki/Lists%20of%20prehistoric%20fish
Prehistoric fish are early fish that are known only from fossil records. They are the earliest known vertebrates, and include the first fish and extinct fish that lived from the Cambrian through the Quaternary. The study of prehistoric fish is called paleoichthyology. A few living forms, such as the coelacanth, are also referred to as prehistoric fish, or even living fossils, due to their current rarity and similarity to extinct forms. Fish which have become recently extinct are not usually referred to as prehistoric fish. Prehistoric fish often differed markedly from modern fish: many early groups, for example, bore heavy bony plates or thick armoured scales rather than the light scales of most living fish. Lists Lists of various prehistoric fishes include: List of prehistoric jawless fish List of placoderms List of acanthodians List of prehistoric cartilaginous fish List of prehistoric bony fish List of sarcopterygians See also Evolution of fish Prehistoric life Vertebrate paleontology References Further reading Janvier, Philippe (1998) Early Vertebrates, Oxford, New York: Oxford University Press. Long, John A. (1996) The Rise of Fishes: 500 Million Years of Evolution, Baltimore: The Johns Hopkins University Press. External links Fossil Fish Paleontology lists
https://en.wikipedia.org/wiki/Steady%20state%20%28biochemistry%29
In biochemistry, steady state refers to the maintenance of constant internal concentrations of molecules and ions in the cells and organs of living systems. Living organisms remain at a dynamic steady state where their internal composition at both cellular and gross levels are relatively constant, but different from equilibrium concentrations. A continuous flux of mass and energy results in the constant synthesis and breakdown of molecules via chemical reactions of biochemical pathways. Essentially, steady state can be thought of as homeostasis at a cellular level. Maintenance of steady state Metabolic regulation achieves a balance between the rate of input of a substrate and the rate that it is degraded or converted, and thus maintains steady state. The rate of metabolic flow, or flux, is variable and subject to metabolic demands. However, in a metabolic pathway, steady state is maintained by balancing the rate of substrate provided by a previous step and the rate that the substrate is converted into product, keeping substrate concentration relatively constant. Thermodynamically speaking, living organisms are open systems, meaning that they constantly exchange matter and energy with their surroundings. A constant supply of energy is required for maintaining steady state, as maintaining a constant concentration of a molecule preserves internal order and thus is entropically unfavorable. When a cell dies and no longer utilizes energy, its internal composition will proceed toward equilibrium with its surroundings. In some occurrences, it is necessary for cells to adjust their internal composition in order to reach a new steady state. Cell differentiation, for example, requires specific protein regulation that allows the differentiating cell to meet new metabolic requirements. ATP The concentration of ATP must be kept above equilibrium level so that the rates of ATP-dependent biochemical reactions meet metabolic demands. A decrease in ATP will result in a decre
https://en.wikipedia.org/wiki/Ground%20expression
In mathematical logic, a ground term of a formal system is a term that does not contain any variables. Similarly, a ground formula is a formula that does not contain any variables. In first-order logic with identity with constant symbols a and b, the sentence a = b is a ground formula. A ground expression is a ground term or ground formula. Examples Consider the following expressions in first order logic over a signature containing the constant symbols 0 and 1 for the numbers 0 and 1, respectively, a unary function symbol s for the successor function and a binary function symbol + for addition. 0 and 1 are ground terms; s(0) and s(s(1)) are ground terms; 0 + 1 and s(0) + 1 are ground terms; x + s(1) and s(x) are terms, but not ground terms; 0 = 1 and s(0) + 1 = s(s(0)) are ground formulae. Formal definitions What follows is a formal definition for first-order languages. Let a first-order language be given, with C the set of constant symbols, F the set of functional operators, and P the set of predicate symbols. Ground term A ground term is a term that contains no variables. Ground terms may be defined by logical recursion (formula-recursion): Elements of C are ground terms; If f ∈ F is an n-ary function symbol and t1, t2, ..., tn are ground terms, then f(t1, t2, ..., tn) is a ground term. Every ground term can be given by a finite application of the above two rules (there are no other ground terms; in particular, predicates cannot be ground terms). Roughly speaking, the Herbrand universe is the set of all ground terms. Ground atom A ground predicate, ground atom or ground literal is an atomic formula all of whose argument terms are ground terms. If p ∈ P is an n-ary predicate symbol and t1, t2, ..., tn are ground terms, then p(t1, t2, ..., tn) is a ground predicate or ground atom. Roughly speaking, the Herbrand base is the set of all ground atoms, while a Herbrand interpretation assigns a truth value to each ground atom in the base. Ground formula A ground formula or ground clause is a formula without variables. Ground formulas may be defined by syntactic recursion as follows: A ground atom is a ground formula. If φ and ψ are ground formulas, then ¬φ, φ ∨ ψ, and φ ∧ ψ are ground formulas. Ground f
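The recursive definition of ground terms translates directly into a short check. The term representation below (strings for constants and variables, tuples for function applications) is an assumption made purely for this example.

```python
# Groundness check following the recursion above: constants are ground,
# variables are not, and f(t1, ..., tn) is ground iff every argument is.

CONSTANTS = {"0", "1"}

def is_ground(term):
    """A term is ground iff it contains no variables."""
    if isinstance(term, tuple):                  # f(t1, ..., tn)
        return all(is_ground(arg) for arg in term[1:])
    return term in CONSTANTS                     # bare strings: constant or variable

print(is_ground(("s", ("s", "0"))))              # s(s(0))   -> True
print(is_ground(("+", "0", ("s", "1"))))         # 0 + s(1)  -> True
print(is_ground(("+", "x", ("s", "1"))))         # x + s(1)  -> False (x is a variable)
```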
https://en.wikipedia.org/wiki/Wirth%E2%80%93Weber%20precedence%20relationship
In computer science, a Wirth–Weber relationship between a pair of symbols is necessary to determine if a formal grammar is a simple precedence grammar. In such a case, the simple precedence parser can be used. The relationship is named after computer scientists Niklaus Wirth and Helmut Weber. The goal is to identify when the viable prefixes have the pivot and must be reduced. A means that the pivot is found, a means that a potential pivot is starting, and a means that a relationship remains in the same pivot. Formal definition Precedence relations computing algorithm We will define three sets for a symbol: The pseudocode for computing relations is: RelationTable := ∅ For each production For each two adjacent symbols in add(RelationTable, ) add(RelationTable, ) add(RelationTable, ) add(RelationTable, ) where is the initial non terminal of the grammar, and $ is a limit marker add(RelationTable, ) where is the initial non terminal of the grammar, and $ is a limit marker Examples Head(a) = ∅ Head(S) = {a, c} Head(b) = ∅ Head(c) = ∅ Tail(a) = ∅ Tail(S) = {b, c} Tail(b) = ∅ Tail(c) = ∅ Head(a) = a Head(S) = {a, c} Head(b) = b Head(c) = c a Next to S S Next to S S Next to b there is only one symbol, so no relation is added. precedence table Further reading Formal languages
https://en.wikipedia.org/wiki/Operator-precedence%20grammar
An operator precedence grammar is a kind of grammar for formal languages. Technically, an operator precedence grammar is a context-free grammar that has the property (among others) that no production has either an empty right-hand side or two adjacent nonterminals in its right-hand side. These properties allow precedence relations to be defined between the terminals of the grammar. A parser that exploits these relations is considerably simpler than more general-purpose parsers such as LALR parsers. Operator-precedence parsers can be constructed for a large class of context-free grammars. Precedence relations Operator precedence grammars rely on the following three precedence relations between the terminals: These operator precedence relations allow to delimit the handles in the right sentential forms: marks the left end, appears in the interior of the handle, and marks the right end. Contrary to other shift-reduce parsers, all nonterminals are considered equal for the purpose of identifying handles. The relations do not have the same properties as their un-dotted counterparts; e. g. does not generally imply , and does not follow from . Furthermore, does not generally hold, and is possible. Let us assume that between the terminals and there is always exactly one precedence relation. Suppose that $ is the end of the string. Then for all terminals we define: and . If we remove all nonterminals and place the correct precedence relation: , , between the remaining terminals, there remain strings that can be analyzed by an easily developed bottom-up parser. Example For example, the following operator precedence relations can be introduced for simple expressions: They follow from the following facts: + has lower precedence than * (hence and ). Both + and * are left-associative (hence and ). The input string after adding end markers and inserting precedence relations becomes Operator precedence parsing Having precedence relations allows to iden
https://en.wikipedia.org/wiki/Karp%27s%2021%20NP-complete%20problems
In computational complexity theory, Karp's 21 NP-complete problems are a set of computational problems which are NP-complete. In his 1972 paper, "Reducibility Among Combinatorial Problems", Richard Karp used Stephen Cook's 1971 theorem that the boolean satisfiability problem is NP-complete (also called the Cook-Levin theorem) to show that there is a polynomial time many-one reduction from the boolean satisfiability problem to each of 21 combinatorial and graph theoretical computational problems, thereby showing that they are all NP-complete. This was one of the first demonstrations that many natural computational problems occurring throughout computer science are computationally intractable, and it drove interest in the study of NP-completeness and the P versus NP problem. The problems Karp's 21 problems are shown below, many with their original names. The nesting indicates the direction of the reductions used. For example, Knapsack was shown to be NP-complete by reducing Exact cover to Knapsack. Satisfiability: the boolean satisfiability problem for formulas in conjunctive normal form (often referred to as SAT) 0–1 integer programming (A variation in which only the restrictions must be satisfied, with no optimization) Clique (see also independent set problem) Set packing Vertex cover Set covering Feedback node set Feedback arc set Directed Hamilton circuit (Karp's name, now usually called Directed Hamiltonian cycle) Undirected Hamilton circuit (Karp's name, now usually called Undirected Hamiltonian cycle) Satisfiability with at most 3 literals per clause (equivalent to 3-SAT) Chromatic number (also called the Graph Coloring Problem) Clique cover Exact cover Hitting set Steiner tree 3-dimensional matching Knapsack (Karp's definition of Knapsack is closer to Subset sum) Job sequencing Partition Max cut Approximations As time went on it was discovered that many of the problems can be solved efficiently if restricted to special cases, or can
https://en.wikipedia.org/wiki/Password%20synchronization
Password synchronization is a process, usually supported by software such as password managers, through which a user maintains a single password across multiple IT systems. Provided that all the systems enforce mutually-compatible password standards (e.g. concerning minimum and maximum password length, supported characters, etc.), the user can choose a new password at any time and deploy the same password on his or her own login accounts across multiple, linked systems. Where different systems have mutually incompatible standards regarding what can be stored in a password field, the user may be forced to choose more than one (but still fewer than the number of systems) passwords. This may happen, for example, where the maximum password length on one system is shorter than the minimum length in another, or where one system requires use of a punctuation mark but another forbids it. Password synchronization is a function of certain identity management systems and it is considered easier to implement than enterprise single sign-on (SSO), as there is normally no client software deployment or need for active user enrollment. Uses Password synchronization makes it easier for IT users to recall passwords and so manage their access to multiple systems, for example on an enterprise network. Since they only have to remember one or at most a few passwords, users are less likely to forget them or write them down, resulting in fewer calls to the IT Help Desk and less opportunity for coworkers, intruders or thieves to gain improper access. Through suitable security awareness, automated policy enforcement and training activities, users can be encouraged or forced to choose stronger passwords as they have fewer to remember. Security If the single, synchronized password is compromised (for example, if it is guessed, disclosed, determined by cryptanalysis from one of the systems, intercepted on an insecure communications path, or if the user is socially engineered into reset
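The requirement that a single new password satisfy every linked system's rules can be sketched as follows. The systems, their policies and the helper names are hypothetical, invented for the example; real products integrate with directories, mainframes and applications rather than a Python dictionary.

```python
# Sketch of password synchronization: deploy one new password to several
# (hypothetical) systems only if it satisfies every system's password policy.

import re

SYSTEMS = {
    "directory": {"min_len": 8, "max_len": 64, "require_punct": False},
    "mainframe": {"min_len": 6, "max_len": 8,  "require_punct": False},
    "erp":       {"min_len": 8, "max_len": 30, "require_punct": True},
}

def satisfies(policy, password):
    if not policy["min_len"] <= len(password) <= policy["max_len"]:
        return False
    if policy["require_punct"] and not re.search(r"[^\w\s]", password):
        return False
    return True

def synchronize(password):
    failures = [name for name, policy in SYSTEMS.items() if not satisfies(policy, password)]
    if failures:
        return "rejected: incompatible with " + ", ".join(failures)
    return "password deployed to all linked accounts"

print(synchronize("Tr0uble!"))              # 8 chars with punctuation: fits every policy
print(synchronize("longpassphrase-2024"))   # too long for the mainframe's 8-char limit
```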
https://en.wikipedia.org/wiki/Load%20cell
A load cell converts a force such as tension, compression, pressure, or torque into a signal (electrical, pneumatic or hydraulic pressure, or mechanical displacement indicator) that can be measured and standardized. It is a force transducer. As the force applied to the load cell increases, the signal changes proportionally. The most common types of load cells are pneumatic, hydraulic, and strain gauge types for industrial applications. Typical non-electronic bathroom scales are a widespread example of a mechanical displacement indicator where the applied weight (force) is indicated by measuring the deflection of springs supporting the load platform, technically a "load cell". Strain gauge load cell Strain gauge load cells are the kind most often found in industrial settings. It is ideal as it is highly accurate, versatile, and cost-effective. Structurally, a load cell has a metal body to which strain gauges have been secured.  The body is usually made of aluminum, alloy steel, or stainless steel which makes it very sturdy but also minimally elastic. This elasticity gives rise to the term "spring element", referring to the body of the load cell.  When force is exerted on the load cell, the spring element is slightly deformed, and unless overloaded, always returns to its original shape. As the spring element deforms, the strain gauges also change shape. The resulting alteration to the resistance in the strain gauges can be measured as voltage. The change in voltage is proportional to the amount of force applied to the cell, thus the amount of force can be calculated from the load cell's output. Strain Gauges A strain gauge is constructed of very fine wire, or foil, set up in a grid pattern and attached to a flexible backing. When the shape of the strain gauge is altered, a change in its electrical resistance occurs. The wire or foil in the strain gauge is arranged in a way that, when force is applied in one direction, a linear change in resistance results. Tension
https://en.wikipedia.org/wiki/Self-service%20password%20reset
Self-service password reset (SSPR) is defined as any process or technology that allows users who have either forgotten their password or triggered an intruder lockout to authenticate with an alternate factor, and repair their own problem, without calling the help desk. It is a common feature in identity management software and often bundled in the same software package as a password synchronization capability. Typically users who have forgotten their password launch a self-service application from an extension to their workstation login prompt, using their own or another user's web browser, or through a telephone call. Users establish their identity, without using their forgotten or disabled password, by answering a series of personal questions, using a hardware authentication token, responding to a notification e-mail or, less often, by providing a biometric sample such as voice recognition. Users can then either specify a new, unlocked password, or ask that a randomly generated one be provided. Self-service password reset expedites problem resolution for users "after the fact", and thus reduces help desk call volume. It can also be used to ensure that password problems are only resolved after adequate user authentication, eliminating an important weakness of many help desks: social engineering attacks, where an intruder calls the help desk, pretends to be the intended victim user, claims to have forgotten the account password, and asks for a new password. Multi-factor authentication Rather than merely asking users to answer security questions, modern password reset systems may also leverage a sequence of authentication steps: Ask users to complete a CAPTCHA, to demonstrate that they are human. Ask users to enter a PIN which is sent to their personal e-mail address or mobile phone. Require use of another technology, such as a one-time-password token. Leverage biometrics, such as a voice print. An authenticator, such as Google Authenticator or an SMS code.
https://en.wikipedia.org/wiki/Klotski
Klotski (from ) is a sliding block puzzle thought to have originated in the early 20th century. The name may refer to a specific layout of ten blocks, or in a more global sense to refer to a whole group of similar sliding-block puzzles where the aim is to move a specific block to some predefined location. Rules Like other sliding-block puzzles, several different-sized block pieces are placed inside a box, which is normally 4×5 in size. Among the blocks, there is a special one (usually the largest) which must be moved to a special area designated by the game board. The player is not allowed to remove blocks, and may only slide blocks horizontally and vertically. Common goals are to solve the puzzle with a minimum number of moves or in a minimum amount of time. Naming The earliest known reference of the name Klotski originates from the computer version for Windows 3.x by ZH Computing in 1991, which was also included in Microsoft Windows Entertainment Pack. The sliding puzzle had already been trademarked and sold under different names for decades, including Psychoteaze Square Root, Intreeg, and Ego Buster. There was no known widely used name for the category of sliding puzzles described before Klotski appeared. History It is still unknown which version of the puzzle is the original. There are many confusing and conflicting claims, and several countries claim to be the ultimate origin of this game. One game—lacking the 5 × 4 design of Pennant, Klotski, and Chinese models but a likely inspiration—is the 19th century 15-puzzle, where fifteen wooden squares had to be rearranged. It is suggested that unless a 19th-century Asian evidence is found, the most reasonably likely path of transmission is from the late 19th century square designs to the early 20th century rectangular, such as Pennant, thence to Klotski and Huarong Road. United States The 15-puzzle enjoyed immense popularity in western countries during the late 19th century. Around this time, patents appeared f
https://en.wikipedia.org/wiki/SPECint
SPECint is a computer benchmark specification for CPU integer processing power. It is maintained by the Standard Performance Evaluation Corporation (SPEC). SPECint is the integer performance testing component of the SPEC test suite. The first SPEC test suite, CPU92, was announced in 1992. It was followed by CPU95, CPU2000, and CPU2006. The latest standard is SPEC CPU 2017 (also known as SPECCPU_2017), which consists of SPECspeed and SPECrate. SPECint 2006 CPU2006 is a set of benchmarks designed to test the CPU performance of a modern server computer system. It is split into two components: CINT2006 for integer testing and CFP2006 (SPECfp) for floating-point testing. SPEC defines a base runtime for each of the 12 benchmark programs. For SPECint2006, that number ranges from 1000 to 3000 seconds. The timed test is run on the system under test, its runtime is compared to the reference time, and a ratio is computed. That ratio becomes the SPECint score for that test. (This differs from the rating in SPECINT2000, which multiplies the ratio by 100.) As an example for SPECint2006, consider a processor that can run 400.perlbench in 2000 seconds. The time it takes the reference machine to run the benchmark is 9770 seconds. Thus the ratio is 4.885. Each ratio is computed, and then the geometric mean of those ratios is computed to produce an overall value. Background For a fee, SPEC distributes source code files to users wanting to test their systems. These files are written in a standard programming language, which is then compiled for each particular CPU architecture and operating system. Thus, the performance measured is that of the CPU, RAM, and compiler, and does not test I/O, networking, or graphics. Two metrics are reported for a particular benchmark, "base" and "peak". Compiler options account for the difference between the two numbers. As the SPEC benchmarks are distributed as source code, it is up to the party performing the test to compile this
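The scoring arithmetic can be reproduced in a few lines. The sketch below uses the 400.perlbench figures from the example above; the other benchmark names are real CINT2006 components, but the times given for them here are illustrative placeholders, not published reference or measured values.

```python
from math import prod

# Each benchmark's ratio is reference_time / measured_time (seconds);
# the composite score is the geometric mean of the per-benchmark ratios.
reference_times = {"400.perlbench": 9770, "401.bzip2": 9000, "403.gcc": 8000}  # latter two are placeholders
measured_times  = {"400.perlbench": 2000, "401.bzip2": 2100, "403.gcc": 1900}  # hypothetical system under test

ratios = {name: reference_times[name] / measured_times[name] for name in reference_times}
print(ratios["400.perlbench"])  # 4.885, matching the worked example above

# Geometric mean of the ratios gives the overall value.
score = prod(ratios.values()) ** (1 / len(ratios))
print(round(score, 3))
```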
https://en.wikipedia.org/wiki/Lead%20time
A lead time is the latency between the initiation and completion of a process. For example, the lead time between the placement of an order and delivery of new cars by a given manufacturer might be between 2 weeks and 6 months, depending on various factors. One business dictionary defines "manufacturing lead time" as the total time required to manufacture an item, including order preparation time, queue time, setup time, run time, move time, inspection time, and put-away time. For make-to-order products, it is the time between release of an order and the production and shipment that fulfill that order. For make-to-stock products, it is the time taken from the release of an order to production and receipt into finished goods inventory. Supply chain management A conventional definition of lead time in a supply chain management context is the time from the moment the customer places an order (the moment the supplier learns of the requirement) to the moment it is ready for delivery. In the absence of finished goods or intermediate (work in progress) inventory, it is the time it takes to actually manufacture the order without any inventory other than raw materials. The Chartered Institute of Procurement & Supply identifies "total lead time" as a combination of "internal lead time" (the time required for the buying organisation's internal processes to progress from identification of a need to the issue of a purchase order) and "external lead time" (the time required for the supplying organisation's processes, including any development required, manufacture, dispatch and delivery). Manufacturing In the manufacturing environment, lead time has the same definition as in supply chain management, but it includes the time required to ship the parts from the supplier. Shipping time is included because the manufacturing company needs to know when the parts will be available for material requirements planning purposes. It is also possible to include within lead time
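A small worked example may make the two breakdowns concrete. The durations below are invented for illustration; the calculation simply sums the manufacturing components listed above, and adds internal and external lead time to obtain a total in the CIPS sense.

```python
# Illustrative lead-time arithmetic with made-up durations (days).
manufacturing_components_days = {
    "order preparation": 1,
    "queue": 3,
    "setup": 0.5,
    "run": 2,
    "move": 0.5,
    "inspection": 1,
    "put-away": 0.5,
}
manufacturing_lead_time = sum(manufacturing_components_days.values())
print(f"manufacturing lead time: {manufacturing_lead_time} days")  # 8.5 days

internal_lead_time_days = 4    # buyer's own processes: need identified -> purchase order issued
external_lead_time_days = 21   # supplier's processes: development, manufacture, dispatch, delivery
total_lead_time_days = internal_lead_time_days + external_lead_time_days
print(f"total lead time: {total_lead_time_days} days")             # 25 days
```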
https://en.wikipedia.org/wiki/Project%20Genoa
Project Genoa was a software project commissioned by the United States' DARPA which was designed to analyze large amounts of data and metadata to help human analysts counter terrorism. Program synopsis Genoa's primary function was intelligence analysis in order to assist human analysts. The program was designed to support both top-down and bottom-up approaches; a policy maker could hypothesize a possible attack and use Genoa to look for supporting evidence of such a plot, or it would compile pieces of intelligence into a diagram and suggest possible outcomes. Human analysts would then be able to modify the diagram to test various cases. Companies such as Integral Visuals, Saffron Technology, and Syntek Technologies were involved in Genoa's development. It cost a total of $42 million to complete the program. History Genoa was conceived in late 1995 by retired Rear Admiral John Poindexter, a chief player in the Iran-Contra Affair. At the time, Poindexter was working at Syntek, a company often contracted to do work for the Department of Defense. He proposed a computer system that would help humans crunch large amounts of data in order to more effectively predict potential national security threats. Poindexter brought his ideas to former colleagues working with the United States National Security Council. That year, a team of researchers was assembled for the project and began studying various historical events to which Genoa could be applied. The Tokyo subway sarin attack in March was the primary focus. Instead of analyzing the attack itself, the researchers looked into the history of Aum Shinrikyo, the group that perpetrated the attack, to find evidence that could have suggested their intentions. In order to pitch their ideas, the researchers set up a mock crisis command center in DARPA's main building, full of monitors staffed by actors. An audience would watch as a fictitious scenario would unfold before them, guided along by an animated video segment. Poindex
https://en.wikipedia.org/wiki/Darwin%27s%20tubercle
Darwin's tubercle (or auricular tubercle) is a congenital ear condition which often presents as a thickening on the helix at the junction of the upper and middle thirds. History This atavistic feature is so called because its description was first published by Charles Darwin in the opening pages of The Descent of Man, and Selection in Relation to Sex, as evidence of a vestigial feature indicating common ancestry among primates which have pointy ears. However, Darwin himself named it the Woolnerian tip, after Thomas Woolner, a British sculptor who had depicted it in one of his sculptures and had first theorised that it was an atavistic feature. Prevalence The feature is present in approximately 10.4% of the Spanish adult population, 40% of adults in India, and 58% of Swedish school children. This acuminate nodule represents the point of the mammalian ear. The trait can potentially be bilateral, meaning present on both ears, or unilateral, where it is present on only one ear. There is mixed evidence as to whether bilateral or unilateral expression is related to population or other factors. Some populations express the trait fully bilaterally, while others may express it either unilaterally or bilaterally. However, bilateral expression appears to be more common than unilateral expression. Inheritance The gene for Darwin's tubercle was once thought to be inherited in an autosomal dominant pattern with incomplete penetrance, meaning that those who possess the allele (version of a gene) will not necessarily present with the phenotype. However, genetic and family studies have demonstrated that the presence of Darwin's tubercle may be more likely to be influenced by one's environment or developmental accidents than it is by genetics alone. There is no clear argument for whether the trait has significance in sexual dimorphism studies or age-related studies. In some studies, there is clear evidence that Darwin's tubercle is not associated with sex. In contr
https://en.wikipedia.org/wiki/Two-hybrid%20screening
Two-hybrid screening (originally known as yeast two-hybrid system or Y2H) is a molecular biology technique used to discover protein–protein interactions (PPIs) and protein–DNA interactions by testing for physical interactions (such as binding) between two proteins or a single protein and a DNA molecule, respectively. The premise behind the test is the activation of downstream reporter gene(s) by the binding of a transcription factor onto an upstream activating sequence (UAS). For two-hybrid screening, the transcription factor is split into two separate fragments, called the DNA-binding domain (DBD or often also abbreviated as BD) and activating domain (AD). The BD is the domain responsible for binding to the UAS and the AD is the domain responsible for the activation of transcription. The Y2H is thus a protein-fragment complementation assay. History Pioneered by Stanley Fields and Ok-Kyu Song in 1989, the technique was originally designed to detect protein–protein interactions using the Gal4 transcriptional activator of the yeast Saccharomyces cerevisiae. The Gal4 protein activated transcription of a gene involved in galactose utilization, which formed the basis of selection. Since then, the same principle has been adapted to describe many alternative methods, including some that detect protein–DNA interactions or DNA-DNA interactions, as well as methods that use different host organisms such as Escherichia coli or mammalian cells instead of yeast. Basic premise The key to the two-hybrid screen is that in most eukaryotic transcription factors, the activating and binding domains are modular and can function in proximity to each other without direct binding. This means that even though the transcription factor is split into two fragments, it can still activate transcription when the two fragments are indirectly connected. The most common screening approach is the yeast two-hybrid assay. In this approach the researcher knows where each prey is located on the us
https://en.wikipedia.org/wiki/IBM%20702
The IBM 702 was an early generation tube-based digital computer produced by IBM in the early to mid-1950s. It was the company's response to Remington Rand's UNIVAC—the first mainframe computer to use magnetic tapes. As these machines were aimed at the business market, they lacked the leading-edge computational power of the IBM 701 and ERA 1103, which were favored for scientific computing, weather forecasting, the aircraft industry, and the military and intelligence communities. Within IBM, the 702 was notable for adapting the new technology of magnetic-core memory for random-access applications. The 702 was announced September 25, 1953, and withdrawn October 1, 1954, but the first production model was not installed until July 1955. It was superseded by the IBM 705. History Fourteen 702s were built. The first one was used at IBM. Due to problems with the Williams tubes, the decision was made to switch to magnetic-core memory instead. The fourteenth 702 was built using magnetic-core memory, and the others were retrofitted with magnetic-core memory. The successor to the 702 in the 700/7000 series was the IBM 705, which marked the transition to magnetic-core memory. Overview The 702 was designed for business data processing. Therefore, the memory of the computer was oriented toward storing characters. The system used electrostatic storage, consisting of 14, 28, 42, 56, or 70 Williams tubes with a capacity of 1000 bits each for the main memory, giving a memory of 2,000 to 10,000 characters of seven bits each (in increments of 2,000 characters), and 14 Williams tubes with a capacity of 512 bits each for the two 512-character accumulators. A complete system included the following units: IBM 702 Central Processing Unit IBM 712 Card Reader IBM 756 Card Reader Control Unit IBM 717 Printer IBM 757 Printer Control Unit IBM 722 Card Punch IBM 758 Card Punch Control Unit IBM 727 Magnetic Tape Unit IBM 752 Tape Control Unit IBM 732 Magnetic Drum Storage Unit Total weig
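The character capacities quoted above follow directly from the tube arithmetic: each bank of 14 Williams tubes holds 14,000 bits, or 2,000 seven-bit characters. A quick sketch confirming the figures, using only the numbers given in the text:

```python
# Verify the IBM 702 main-memory figures: 1,000-bit Williams tubes storing
# seven-bit characters, in banks of 14 tubes per 2,000 characters.
BITS_PER_TUBE = 1000
BITS_PER_CHAR = 7

for tubes in (14, 28, 42, 56, 70):
    characters = tubes * BITS_PER_TUBE // BITS_PER_CHAR
    print(f"{tubes} tubes -> {characters} characters")
# 14 tubes -> 2000 characters, ..., 70 tubes -> 10000 characters
```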
https://en.wikipedia.org/wiki/List%20of%20knot%20theory%20topics
Knot theory is the study of mathematical knots. While inspired by knots which appear in daily life in shoelaces and rope, a mathematician's knot differs in that the ends are joined so that it cannot be undone. In precise mathematical language, a knot is an embedding of a circle in 3-dimensional Euclidean space, R3. Two mathematical knots are equivalent if one can be transformed into the other via a deformation of R3 upon itself (known as an ambient isotopy); these transformations correspond to manipulations of a knotted string that do not involve cutting the string or passing the string through itself. History Knots, links, braids Knot (mathematics) gives a general introduction to the concept of a knot. Two classes of knots: torus knots and pretzel knots Cinquefoil knot, also known as the (5, 2) torus knot. Figure-eight knot (mathematics), the only 4-crossing knot Granny knot (mathematics) and Square knot (mathematics) are each a connected sum of two Trefoil knots Perko pair, two entries in a knot table that were later shown to be identical. Stevedore knot (mathematics), a prime knot with crossing number 6 Three-twist knot is the twist knot with three half-twists, also known as the 52 knot. Trefoil knot A knot with crossing number 3 Unknot Knot complement, a compact 3-manifold obtained by removing an open neighborhood of a proper embedding of a tame knot from the 3-sphere. Knots and graphs general introduction to knots with mention of Reidemeister moves Notation used in knot theory: Conway notation Dowker–Thistlethwaite notation (DT notation) Gauss code (see also Gauss diagrams) continued fraction regular form General knot types 2-bridge knot Alternating knot; a knot that can be represented by an alternating diagram (i.e. the crossings alternate over and under as one traverses the knot). Berge knot, a class of knots related to lens space surgeries and defined in terms of their properties with respect to a genus 2 Heegaard surface. Cable knot, see Sate
https://en.wikipedia.org/wiki/Operational-level%20agreement
An operational-level agreement (OLA) defines interdependent relationships in support of a service-level agreement (SLA). The agreement describes the responsibilities of each internal support group toward other support groups, including the process and timeframe for delivery of their services. The objective of the OLA is to present a clear, concise and measurable description of the service provider's internal support relationships. OLA is sometimes expanded to other phrases, but they all have the same meaning: organizational-level agreement, operating-level agreement, or operations-level agreement. OLA is not a substitute for an SLA. The purpose of the OLA is to help ensure that the underpinning activities that are performed by several support team components are aligned to provide the intended SLA. If the underpinning OLA is not in place, it is often very difficult for organizations to go back and engineer agreements between the support teams to deliver the SLA. The OLA has to be seen as the foundation of good practice and common agreement. See also IT service management (ITSM) ITIL Service-level objective (SLO)
https://en.wikipedia.org/wiki/Extended%20System%20Configuration%20Data
The Extended System Configuration Data (ESCD) is a specification for configuring x86 computers of the ISA PnP era. The specification was developed by Compaq, Intel and Phoenix Technologies. It consists of a method for storing configuration information in nonvolatile BIOS memory and three BIOS functions for working with that data. The ESCD data may at one time have been stored in the latter portion of the 128-byte extended bank of battery-backed CMOS RAM, but eventually it became too large and so was moved to BIOS flash. It stores information about ISA PnP devices and is used by the BIOS to allocate resources for devices like expansion cards. The ESCD data is stored using the data serialization format used for EISA. Its data starts with the "ACFG" signature in ASCII. PCI configuration can also be stored in ESCD, using virtual slots. Typical storage usage for ESCD data is 2–4 KB. The BIOS also updates the ESCD each time the hardware configuration changes, after deciding how to re-allocate resources like IRQ and memory mapping ranges. After the ESCD has been updated, the decision need not be made again, which thereafter results in faster startup without conflicts until the next hardware configuration change.
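As a purely illustrative sketch (the surrounding ESCD record layout is not modelled here), the snippet below merely scans a raw NVRAM or flash dump for the ASCII "ACFG" signature that the text says marks the start of the ESCD data:

```python
# Illustrative only: locate the "ACFG" signature in a raw dump; the rest of the
# ESCD structure (EISA-style serialization, virtual slots) is not parsed here.
def find_escd(dump: bytes) -> int | None:
    """Return the offset of the ESCD 'ACFG' signature, or None if absent."""
    offset = dump.find(b"ACFG")
    return None if offset == -1 else offset

# Example with a fabricated 4 KB dump containing the signature at offset 0x100.
dump = bytearray(4096)
dump[0x100:0x104] = b"ACFG"
print(hex(find_escd(bytes(dump))))  # 0x100
```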