https://en.wikipedia.org/wiki/Sound%20level%20meter
A sound level meter (also called sound pressure level (SPL) meter) is used for acoustic measurements. It is commonly a hand-held instrument with a microphone. The best type of microphone for sound level meters is the condenser microphone, which combines precision with stability and reliability. The diaphragm of the microphone responds to changes in air pressure caused by sound waves; that is why the instrument is sometimes referred to as a sound pressure level meter. This movement of the diaphragm, i.e. the sound pressure (unit pascal, Pa), is converted into an electrical signal (unit volt, V). When describing sound in terms of sound pressure, a logarithmic conversion is usually applied and the sound pressure level is stated instead, in decibels (dB), with 0 dB SPL equal to 20 micropascals. A microphone is characterized by the voltage it produces when a known, constant root-mean-square sound pressure is applied; this is known as the microphone sensitivity. The instrument needs to know the sensitivity of the particular microphone being used. Using this information, the instrument can accurately convert the electrical signal back to sound pressure and display the resulting sound pressure level (in decibels, dB). Sound level meters are commonly used in noise pollution studies for the quantification of different kinds of noise, especially industrial, environmental, mining and aircraft noise. The current international standard that specifies sound level meter functionality and performance is IEC 61672-1:2013. However, the reading from a sound level meter does not correlate well with human-perceived loudness, which is better measured by a loudness meter. Specific loudness is a compressive nonlinearity that varies with both level and frequency, and these metrics can be calculated in a number of different ways. The world's first hand-held and transistorized sound level meter was released in 1960 and developed by the Danish company
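The pressure-to-decibel conversion and the use of microphone sensitivity described above can be illustrated with a short sketch. This is a minimal Python example using assumed values (a hypothetical 50 mV/Pa sensitivity and a 5 mV RMS reading); it is not taken from the article or from any particular meter.

```python
import math

P_REF = 20e-6  # reference pressure: 20 micropascals (0 dB SPL)

def spl_from_pressure(p_rms_pa: float) -> float:
    """Convert an RMS sound pressure in pascals to sound pressure level in dB SPL."""
    return 20.0 * math.log10(p_rms_pa / P_REF)

def pressure_from_voltage(v_rms: float, sensitivity_v_per_pa: float) -> float:
    """Recover the RMS sound pressure from the microphone's output voltage,
    using the microphone sensitivity (volts produced per pascal applied)."""
    return v_rms / sensitivity_v_per_pa

# Example: a microphone with an assumed sensitivity of 50 mV/Pa outputs 5 mV RMS.
pressure = pressure_from_voltage(5e-3, 50e-3)   # 0.1 Pa
print(round(spl_from_pressure(pressure), 1))    # ~74.0 dB SPL
```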
https://en.wikipedia.org/wiki/Wow%20and%20flutter%20measurement
Measurement of wow and flutter is carried out on audio tape machines, cassette recorders and players, and other analog recording and reproduction devices with rotary components (e.g. movie projectors, turntables (vinyl recording), etc.) This measurement quantifies the amount of 'frequency wobble' (caused by speed fluctuations) present in subjectively valid terms. Turntables tend to suffer mainly slow wow. In digital systems, which are locked to crystal oscillators, variations in clock timing are referred to as wander or jitter, depending on speed. While the terms wow and flutter were formerly used separately (for wobbles at a rate below and above 4 Hz respectively), they tend to be combined now that universal standards exist for measurement which take both into account simultaneously. Listeners find flutter most objectionable when the actual frequency of wobble is 4 Hz; it is less audible above and below this rate. This fact forms the basis for the weighting curve shown here. The weighting curve is misleading, inasmuch as it presumes inaudibility of flutters above 200 Hz, when actually faster flutters are quite damaging to the sound. A flutter of 200 Hz at a level of −50 dB will create 0.3% intermodulation distortion, which would be considered unacceptable in a preamp or amplifier. Measurement techniques Measuring instruments use a frequency discriminator to translate the pitch variations of a recorded tone into a flutter waveform, which is then passed through the weighting filter, before being full-wave rectified to produce a slowly varying signal which drives a meter or recording device. The maximum meter indication should be read as the flutter value. The following standards all specify the weighting filter shown above, together with a special slow-quasi-peak full-wave rectifier designed to register any brief speed excursions. As with many audio standards, these are identical derivatives of a common specification. IEC 386 DIN45507 BS4847 CCIR 409-3 AES6-2008 Mea
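The measurement chain described above (extract pitch deviation, weight it, rectify, read the peak) can be sketched briefly. The Python fragment below assumes the instantaneous frequency of the replayed test tone has already been extracted (e.g. by a frequency discriminator), and it omits the standardized weighting filter and quasi-peak rectifier entirely, so it only illustrates the basic deviation-to-peak structure, not the IEC/DIN specification.

```python
import numpy as np

def flutter_percent(inst_freq_hz, nominal_hz=3150.0):
    """Unweighted peak wow-and-flutter from instantaneous-frequency samples.

    A real meter would first pass the deviation signal through the
    standardized weighting filter and a slow quasi-peak rectifier; this
    sketch only shows the basic deviation -> rectify -> peak structure.
    """
    deviation = (np.asarray(inst_freq_hz) - nominal_hz) / nominal_hz
    return 100.0 * np.abs(deviation).max()

# Example: a 4 Hz wobble of +/-0.1% superimposed on a 3150 Hz test tone.
t = np.arange(0.0, 2.0, 0.01)
freq_track = 3150.0 * (1.0 + 0.001 * np.sin(2 * np.pi * 4.0 * t))
print(round(flutter_percent(freq_track), 3))   # -> 0.1 (percent)
```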
https://en.wikipedia.org/wiki/Ruderal%20species
A ruderal species is a plant species that is first to colonize disturbed lands. The disturbance may be natural (for example, wildfires or avalanches) or a consequence of human activities, such as construction (of roads, of buildings, mining, etc.) or agriculture (abandoned fields, irrigation, etc.). The term ruderal originates from the Latin word rudus, meaning "rubble". Ruderal species typically dominate the disturbed area for a few years, gradually losing the competition to other native species. However, in extreme disturbance circumstances, such as when the natural topsoil is covered with a foreign substance, a single-species ruderal community may become permanently established. In addition, some ruderal invasive species may have such a competitive advantage over the native species that they, too, may permanently prevent a disturbed area from returning to its original state even where natural topsoil remains. Features Features contributing to a species' success as ruderal are: Massive seed production Seedlings whose nutritional requirements are modest Fast-growing roots Independence of mycorrhizae Polyploidy Quantification Ecologists have proposed various scales for quantifying ruderality, which can be defined as the "ability to thrive where there is disturbance through partial or total destruction of plant biomass" (Grime, Hodgson & Hunt, 1988). The ruderality scale of Grime presents values that are readily available, and it takes into account disturbance factors as well as other indicators such as the annual or perennial character of the plants. See also Edge effect Hemeroby Pioneer species Restoration ecology Supertramp (ecology) Examples of ruderal species: Cannabis ruderalis (family Cannabaceae) Conyza bonariensis (family Asteraceae) Dittrichia viscosa (Asteraceae) Nicotiana glauca (Solanaceae) References External links St. John TV. 1987. SOIL DISTURBANCE AND THE MINERAL NUTRITION OF NATIVE PLANTS in Proceedings of the 2nd Native Plant Reve
https://en.wikipedia.org/wiki/KEGG
KEGG (Kyoto Encyclopedia of Genes and Genomes) is a collection of databases dealing with genomes, biological pathways, diseases, drugs, and chemical substances. KEGG is utilized for bioinformatics research and education, including data analysis in genomics, metagenomics, metabolomics and other omics studies, modeling and simulation in systems biology, and translational research in drug development. The KEGG database project was initiated in 1995 by Minoru Kanehisa, professor at the Institute for Chemical Research, Kyoto University, under the then ongoing Japanese Human Genome Program. Foreseeing the need for a computerized resource that can be used for biological interpretation of genome sequence data, he started developing the KEGG PATHWAY database. It is a collection of manually drawn KEGG pathway maps representing experimental knowledge on metabolism and various other functions of the cell and the organism. Each pathway map contains a network of molecular interactions and reactions and is designed to link genes in the genome to gene products (mostly proteins) in the pathway. This has enabled the analysis called KEGG pathway mapping, whereby the gene content in the genome is compared with the KEGG PATHWAY database to examine which pathways and associated functions are likely to be encoded in the genome. According to the developers, KEGG is a "computer representation" of the biological system. It integrates building blocks and wiring diagrams of the system—more specifically, genetic building blocks of genes and proteins, chemical building blocks of small molecules and reactions, and wiring diagrams of molecular interaction and reaction networks. This concept is realized in the following databases of KEGG, which are categorized into systems, genomic, chemical, and health information. Systems information PATHWAY: pathway maps for cellular and organismal functions MODULE: modules or functional units of genes BRITE: hierarchical classifications of biological ent
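KEGG pathway mapping, as described above, links gene identifiers to pathway maps. A minimal sketch of that idea using KEGG's public REST interface is shown below; the endpoint URL and the example gene identifier (hsa:7157, human TP53) are assumptions to be checked against the current KEGG documentation, not details taken from this article.

```python
import urllib.request

def kegg_pathways_for_gene(gene_id: str):
    """Return KEGG pathway IDs linked to a gene, e.g. gene_id='hsa:7157' (human TP53).

    Uses the public KEGG REST 'link' operation; the URL format is assumed
    from KEGG's REST documentation and may change.
    """
    url = f"https://rest.kegg.jp/link/pathway/{gene_id}"
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8")
    # Each line looks like: "<gene_id>\t<pathway_id>"
    return [line.split("\t")[1] for line in text.strip().splitlines() if "\t" in line]

# Example (requires network access):
# print(kegg_pathways_for_gene("hsa:7157"))
```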
https://en.wikipedia.org/wiki/Dipmeter%20Advisor
The Dipmeter Advisor was an early expert system developed in the 1980s by Schlumberger with the help of artificial-intelligence workers at MIT to aid in the analysis of data gathered during oil exploration. The Advisor was not merely an inference engine with a knowledge base of ~90 rules, but a full-fledged workstation, running on one of Xerox's 1100 Dolphin Lisp machines (or in general on Xerox's "1100 Series Scientific Information Processors" line) and written in INTERLISP-D, with a pattern recognition layer which in turn fed a GUI menu-driven interface. It was developed by a number of people, including Reid G. Smith, James D. Baker, and Robert L. Young. It was primarily influential not because of any great technical leaps, but rather because it was so successful for Schlumberger's oil divisions and because it was one of the few success stories of the AI bubble to receive wide publicity before the AI winter. The AI rules of the Dipmeter Advisor were primarily derived from Al Gilreath, a Schlumberger interpretation engineer who developed the "red, green, blue" pattern method of dipmeter interpretation. Unfortunately this method had limited application in more complex geological environments outside the Gulf Coast, and the Dipmeter Advisor was primarily used within Schlumberger as a graphical display tool to assist interpretation by trained geoscientists, rather than as an AI tool for use by novice interpreters. However, the tool pioneered a new approach to workstation-assisted graphical interpretation of geological information. References Other sources The AI Business: The commercial uses of artificial intelligence, ed. Patrick Winston and Karen A. Prendergast. "The Dipmeter Advisor: Interpretation of Geological Signals" – Randall Davis, Howard Austin, Ingrid Carlbom, Bud Frawley, Paul Pruchnik, Rich Sneiderman, J. A. Gilreath. External links "The design of the Dipmeter Advisor system" -(at the ACM's website) Petroleum engineering
https://en.wikipedia.org/wiki/International%20Conference%20on%20Mobile%20Computing%20and%20Networking
MobiCom, the International Conference on Mobile Computing and Networking, is a series of annual conferences sponsored by ACM SIGMOBILE dedicated to addressing the challenges in the areas of mobile computing and wireless and mobile networking. Although no rating system for computer networking conferences exists, MobiCom is generally considered to be the best conference in these areas, and it is the fifth highest-impact venue in all of Computer Science. Papers published at the conference are regarded as being of very high quality; the acceptance rate of MobiCom is typically around 10%, meaning that only one tenth of all submitted papers make it through the tough peer review filter. According to SIGMOBILE, "the MobiCom conference series serves as the premier international forum addressing networks, systems, algorithms, and applications that support the symbiosis of mobile computers and wireless networks. MobiCom is a highly selective conference focusing on all issues in mobile computing and wireless and mobile networking at the link layer and above." MobiCom Conferences have been held at the following locations: MobiCom 2020, London, UK, 14-18 September 2020 MobiCom 2019, Los Cabos, Mexico, 21-25 October 2019 MobiCom 2018, New Delhi, India, 29 October-2 November 2018 MobiCom 2017, Snowbird, United States, 16-20 October 2017 MobiCom 2016, New York City, United States, 3–7 October 2016 MobiCom 2015, Paris, France, 7–11 September 2015 MobiCom 2014, Maui, Hawaii, United States, 7–11 September 2014 MobiCom 2013, Miami, Florida, United States, 30 September-4 October 2013 MobiCom 2012, Istanbul, Turkey, 22–26 August 2012 MobiCom 2011, Las Vegas, Nevada, United States, 19–23 September 2011 MobiCom 2010, Chicago, Illinois, United States, 20–24 September 2010 MobiCom 2009, Beijing, China, 20–25 September 2009 MobiCom 2008, San Francisco, California, United States, 13–19 September 2008 MobiCom 2007, Montreal, Quebec, Canada, 9–14 September 2007 MobiCom 2006, Los Angeles, Califo
https://en.wikipedia.org/wiki/SCR-584%20radar
The SCR-584 (short for Set, Complete, Radio # 584) was an automatic-tracking microwave radar developed by the MIT Radiation Laboratory during World War II. It was one of the most advanced ground-based radars of its era, and became one of the primary gun-laying radars used worldwide well into the 1950s. A trailer-mounted mobile version was the SCR-784. America's first fire-control radar, the SCR-268 of 1937, had proven to be insufficiently accurate, due in part to its long wavelength. In 1940, Vannevar Bush, heading the National Defense Research Committee, established the "Microwave Committee" (section D-1) and the "Fire Control" division (D-2) to develop a more advanced radar anti-aircraft system in time to assist the British air-defense effort. In September of that year, a British delegation, the Tizard Mission, revealed to US and Canadian researchers that they had developed a magnetron oscillator operating at the top end of the UHF band (10 cm wavelength/3 GHz), allowing greatly increased accuracy. Bush organized the Radiation Laboratory (Rad Lab) at MIT to develop applications using it. This included a new short-range air-defense radar. Alfred Lee Loomis, running the Rad Lab, advocated the development of an entirely automatic tracking system controlled by servomechanisms. This greatly eased the task of tracking targets and reduced the manpower needed to do it. They were also able to take advantage of a newly developed microwave switch that allowed them to use a single antenna for broadcast and reception, greatly simplifying the mechanical layout. The resulting design fit into a single trailer, could provide all-sky search and single target tracking, and followed the targets automatically. In close contact with the Rad Lab, Bell Telephone Laboratories was developing an electronic analog gun-director that would be used in conjunction with the radar and servo-actuated 90 mm anti-aircraft guns. The radar was intended to be introduced in late 1943, but de
https://en.wikipedia.org/wiki/Deterministic%20pushdown%20automaton
In automata theory, a deterministic pushdown automaton (DPDA or DPA) is a variation of the pushdown automaton. The class of deterministic pushdown automata accepts the deterministic context-free languages, a proper subset of context-free languages. Machine transitions are based on the current state and input symbol, and also the current topmost symbol of the stack. Symbols lower in the stack are not visible and have no immediate effect. Machine actions include pushing, popping, or replacing the stack top. A deterministic pushdown automaton has at most one legal transition for the same combination of input symbol, state, and top stack symbol. This is where it differs from the nondeterministic pushdown automaton. Formal definition A (not necessarily deterministic) PDA $M$ can be defined as a 7-tuple $M = (Q, \Sigma, \Gamma, q_0, Z_0, F, \delta)$ where $Q$ is a finite set of states, $\Sigma$ is a finite set of input symbols, $\Gamma$ is a finite set of stack symbols, $q_0 \in Q$ is the start state, $Z_0 \in \Gamma$ is the starting stack symbol, $F \subseteq Q$ is the set of accepting, or final, states, and $\delta \colon Q \times (\Sigma \cup \{\varepsilon\}) \times \Gamma \to \mathcal{P}(Q \times \Gamma^*)$ is a transition function, where $\Gamma^*$ is the Kleene star of $\Gamma$, meaning that $\Gamma^*$ is "the set of all finite strings (including the empty string $\varepsilon$) of elements of $\Gamma$", $\varepsilon$ denotes the empty string, and $\mathcal{P}(X)$ is the power set of a set $X$. M is deterministic if it satisfies both the following conditions: For any $q \in Q$, $a \in \Sigma \cup \{\varepsilon\}$, $X \in \Gamma$, the set $\delta(q, a, X)$ has at most one element. For any $q \in Q$, $X \in \Gamma$, if $\delta(q, \varepsilon, X) \neq \emptyset$, then $\delta(q, a, X) = \emptyset$ for every $a \in \Sigma$. There are two possible acceptance criteria: acceptance by empty stack and acceptance by final state. The two are not equivalent for the deterministic pushdown automaton (although they are for the non-deterministic pushdown automaton). The languages accepted by empty stack are those languages that are accepted by final state and are prefix-free: no word in the language is the prefix of another word in the language. The usual acceptance criterion is final state, and it is this acceptance criterion which is used to define the deterministic context-free languages. Languages recognized If $L(A)$ is a language accepted b
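As an illustration of determinism (at most one applicable transition per configuration), here is a small Python sketch of a DPDA accepting the deterministic context-free language { aⁿbⁿ : n ≥ 1 } by final state; the state names and transition encoding are invented for this example, not taken from the article.

```python
# DPDA for { a^n b^n : n >= 1 }, acceptance by final state.
# delta maps (state, input symbol, stack top) -> (next state, string pushed in place of the top).
delta = {
    ("q0", "a", "Z"): ("q0", "AZ"),   # push an A for each leading 'a'
    ("q0", "a", "A"): ("q0", "AA"),
    ("q0", "b", "A"): ("q1", ""),     # start matching 'b's: pop one A
    ("q1", "b", "A"): ("q1", ""),
    ("q1", None, "Z"): ("qf", "Z"),   # epsilon move once all A's are popped
}
FINAL = {"qf"}

def accepts(word: str) -> bool:
    state, stack, i = "q0", ["Z"], 0   # Z is the starting stack symbol
    while True:
        top = stack[-1] if stack else None
        # Determinism: an epsilon move and an input move never compete here.
        if (state, None, top) in delta:
            state, push = delta[(state, None, top)]
            stack.pop(); stack.extend(reversed(push))
        elif i < len(word) and (state, word[i], top) in delta:
            state, push = delta[(state, word[i], top)]
            stack.pop(); stack.extend(reversed(push))
            i += 1
        else:
            break
    return i == len(word) and state in FINAL

print(accepts("aaabbb"), accepts("aab"))   # True False
```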
https://en.wikipedia.org/wiki/Z8%20Encore%21
The Zilog Z8 Encore! is a microcontroller based on the popular Z8 microcontroller. The Z8 Encore! offers a wide range of features for use in embedded applications, most notably three DMA channels, which can be used, for example, to read from the analog-to-digital converter (ADC). The Z8 Encore! instruction set is compatible with that of the Z8, but it provides some extensions for use with high-level languages. The Z8 Encore! features a single-pin debugging interface. External links Official site of ZiLOG, Inc. Z8 Encore! page on Zilog site Microcontrollers Zilog microprocessors
https://en.wikipedia.org/wiki/Sun-4
Sun-4 is a series of Unix workstations and servers produced by Sun Microsystems, launched in 1987. The original Sun-4 series were VMEbus-based systems similar to the earlier Sun-3 series, but employing microprocessors based on Sun's own SPARC V7 RISC architecture in place of the 68k family processors of previous Sun models. The Sun 4/280 was the base system used for building the first RAID prototype. Models Models are listed in approximately chronological order. {| class="wikitable sortable" |- !Model !Codename !CPU board !CPU !CPU MHz !Max. RAM !Chassis |- |4/260 |Sunrise |Sun 4200 |Fujitsu SF9010 IU,Weitek 1164/1165 FPU |16.67 MHz |128 MB |12-slot VME (deskside) |- |4/280 |Sunrise |Sun 4200 |Fujitsu SF9010 IU,Weitek 1164/1165 FPU |16.67 MHz |128 MB |12-slot VME (rackmount) |- |4/110 |Cobra |Sun 4100 |Fujitsu MB86900 IU,Weitek 1164/1165 FPU(optional) |14.28 MHz |32 MB |3-slot VME (desktop/side) |- |4/150 |Cobra |Sun 4100 |Fujitsu MB86900 IU,Weitek 1164/1165 FPU(optional) |14.28 MHz |32 MB |6-slot VME (deskside) |- |4/310 |Stingray |Sun 4300 |Cypress Semiconductor CY7C601,Texas Instruments 8847 FPU |25 MHz |32 MB |3-slot VME (desktop/side) |- |4/330 |Stingray |Sun 4300 |Cypress Semiconductor CY7C601,Texas Instruments 8847 FPU |25 MHz |96 MB |3-slot VME w 2 memory slots (deskside) |- |4/350 |Stingray |Sun 4300 |Cypress Semiconductor CY7C601,Texas Instruments 8847 FPU |25 MHz |224 MB |5-slot VME (desktop/side) |- |4/360 |Stingray |Sun 4300 |Cypress Semiconductor CY7C601,Texas Instruments 8847 FPU |25 MHz |224 MB |12-slot VME (deskside) |- |4/370 |Stingray |Sun 4300 |Cypress Semiconductor CY7C601,Texas Instruments 8847 FPU |25 MHz |224 MB |12-slot VME (deskside) |- |4/380 |Stingray |Sun 4300 |Cypress Semiconductor CY7C601,Texas Instruments 8847 FPU |25 MHz |224 MB |12-slot VME (rackmount) |- |4/390 |Stingray |Sun 4300 |Cypress Semiconductor CY7C601,Texas Instruments 8847 FPU |25 MHz |224 MB |16-slot VME (rackmount) |- |4/470 |Sunray |Sun 4400 |Cypress Sem
https://en.wikipedia.org/wiki/Taylor%20expansions%20for%20the%20moments%20of%20functions%20of%20random%20variables
In probability theory, it is possible to approximate the moments of a function f of a random variable X using Taylor expansions, provided that f is sufficiently differentiable and that the moments of X are finite. First moment Given $\mu_X = \operatorname{E}[X]$ and $\sigma_X^2 = \operatorname{Var}[X]$, the mean and the variance of $X$, respectively, a Taylor expansion of the expected value of $f(X)$ can be found via $\operatorname{E}[f(X)] \approx \operatorname{E}\big[f(\mu_X) + f'(\mu_X)(X-\mu_X) + \tfrac{1}{2} f''(\mu_X)(X-\mu_X)^2\big]$. Since $\operatorname{E}[X-\mu_X]=0$, the second term vanishes. Also, $\operatorname{E}[(X-\mu_X)^2]$ is $\sigma_X^2$. Therefore, $\operatorname{E}[f(X)] \approx f(\mu_X) + \tfrac{1}{2} f''(\mu_X)\,\sigma_X^2$. It is possible to generalize this to functions of more than one variable using multivariate Taylor expansions. For example, $\operatorname{E}\big[\tfrac{X}{Y}\big] \approx \tfrac{\operatorname{E}[X]}{\operatorname{E}[Y]} - \tfrac{\operatorname{Cov}[X,Y]}{\operatorname{E}[Y]^2} + \tfrac{\operatorname{E}[X]}{\operatorname{E}[Y]^3}\operatorname{Var}[Y]$. Second moment Similarly, $\operatorname{Var}[f(X)] \approx \big(f'(\operatorname{E}[X])\big)^2 \operatorname{Var}[X] - \tfrac{1}{4}\big(f''(\operatorname{E}[X])\big)^2 \operatorname{Var}[X]^2$. The above is obtained using a second order approximation, following the method used in estimating the first moment. It will be a poor approximation in cases where $f$ is highly non-linear. This is a special case of the delta method. Indeed, we take $f(X) \approx f(\mu_X) + f'(\mu_X)(X-\mu_X)$. With $\operatorname{E}[f(X)] \approx f(\mu_X)$, we get $\operatorname{Var}[f(X)] \approx \big(f'(\mu_X)\big)^2 \sigma_X^2$. The variance is then computed using the formula $\operatorname{Var}[f(X)] = \operatorname{E}[f(X)^2] - \big(\operatorname{E}[f(X)]\big)^2$. The second order approximation, when X follows a normal distribution, is: $\operatorname{Var}[f(X)] \approx \big(f'(\operatorname{E}[X])\big)^2 \operatorname{Var}[X] + \tfrac{1}{2}\big(f''(\operatorname{E}[X])\big)^2 \operatorname{Var}[X]^2$. First product moment To find a second-order approximation for the covariance of functions of two random variables (with the same function applied to both), one can proceed as follows. First, note that $\operatorname{Cov}[f(X),f(Y)] = \operatorname{E}[f(X)f(Y)] - \operatorname{E}[f(X)]\operatorname{E}[f(Y)]$. Since a second-order expansion for $\operatorname{E}[f(X)]$ has already been derived above, it only remains to find $\operatorname{E}[f(X)f(Y)]$. Treating $f(X)f(Y)$ as a two-variable function, the second-order Taylor expansion is as follows: $f(X)f(Y) \approx f(\mu_X)f(\mu_Y) + (X-\mu_X) f'(\mu_X) f(\mu_Y) + (Y-\mu_Y) f(\mu_X) f'(\mu_Y) + \tfrac{1}{2}\big[(X-\mu_X)^2 f''(\mu_X) f(\mu_Y) + 2 (X-\mu_X)(Y-\mu_Y) f'(\mu_X) f'(\mu_Y) + (Y-\mu_Y)^2 f(\mu_X) f''(\mu_Y)\big]$. Taking expectation of the above and simplifying—making use of the identities $\operatorname{E}[(X-\mu_X)^2] = \operatorname{Var}[X]$ and $\operatorname{E}[(X-\mu_X)(Y-\mu_Y)] = \operatorname{Cov}[X,Y]$—leads to $\operatorname{E}[f(X)f(Y)] \approx f(\mu_X)f(\mu_Y) + f'(\mu_X)f'(\mu_Y)\operatorname{Cov}[X,Y] + \tfrac{1}{2} f''(\mu_X)f(\mu_Y)\operatorname{Var}[X] + \tfrac{1}{2} f(\mu_X)f''(\mu_Y)\operatorname{Var}[Y]$. Hence, $\operatorname{Cov}[f(X),f(Y)] \approx f'(\mu_X)f'(\mu_Y)\operatorname{Cov}[X,Y]$, up to a higher-order term $-\tfrac{1}{4} f''(\mu_X)f''(\mu_Y)\operatorname{Var}[X]\operatorname{Var}[Y]$. Random vectors If X is a random vector, the approximations for the mean and variance of $f(X)$ are given by $\operatorname{E}[f(X)] \approx f(\mu_X) + \tfrac{1}{2}\operatorname{trace}\big(H_f(\mu_X)\,\Sigma_X\big)$ and $\operatorname{Var}[f(X)] \approx \nabla f(\mu_X)^{\mathsf T}\,\Sigma_X\,\nabla f(\mu_X)$. Here $\nabla f$ and $H_f$ denote the gradient and the Hessian matrix respectively, and $\Sigma_X$ is the covariance matrix of X. See also Propagation of uncertainty WKB approximation Delta method Notes Further reading Statistical approximations Algebra of random variables Moment (mathematics)
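A quick way to sanity-check the first-moment approximation E[f(X)] ≈ f(μ) + ½ f''(μ) σ² is a small Monte Carlo comparison. The function f(x) = exp(x) and the normal parameters below are arbitrary choices for illustration, not from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 0.3
f = np.exp                      # f(x) = e^x, so f'' = e^x as well

x = rng.normal(mu, sigma, size=1_000_000)
monte_carlo = f(x).mean()                          # "ground truth" by simulation
taylor2 = f(mu) + 0.5 * f(mu) * sigma**2           # f(mu) + (1/2) f''(mu) sigma^2
exact = np.exp(mu + sigma**2 / 2)                  # lognormal mean, for reference

print(monte_carlo, taylor2, exact)   # approx 2.843, 2.841, 2.843
```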
https://en.wikipedia.org/wiki/Plastic%20bending
Plastic bending is a nonlinear behavior particular to members made of ductile materials that frequently achieve much greater ultimate bending strength than indicated by a linear elastic bending analysis. In both the plastic and elastic bending analyses of a straight beam, it is assumed that the strain distribution is linear about the neutral axis (plane sections remain plane). In an elastic analysis this assumption leads to a linear stress distribution, but in a plastic analysis the resulting stress distribution is nonlinear and is dependent on the beam's material. The limiting plastic bending strength (see Plastic moment) can generally be thought of as an upper limit to a beam's load-carrying capability, as it only represents the strength at a particular cross-section and not the load-carrying capability of the overall beam. A beam may fail due to global or local instability before the plastic moment is reached at any point on its length. Therefore, beams should also be checked for local buckling, local crippling, and global lateral-torsional buckling modes of failure. Note that the deflections necessary to develop the stresses indicated in a plastic analysis are generally excessive, frequently to the point of incompatibility with the function of the structure. Therefore, separate analysis may be required to ensure design deflection limits are not exceeded. Also, since working materials into the plastic range can lead to permanent deformation of the structure, additional analyses may be required at limit load to ensure no detrimental permanent deformations occur. The large deflections and stiffness changes usually associated with plastic bending can significantly change the internal load distribution, particularly in statically indeterminate beams. The internal load distribution associated with the deformed shape and stiffness should be used for calculations. Plastic bending begins when an applied moment causes the outside fibers of a cross-section to exceed the material's yield
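For a rectangular cross-section the elastic (first-yield) and fully plastic moment capacities have simple closed forms, which makes the gap between the elastic and plastic analyses easy to see. The sketch below uses an assumed section size and yield stress purely as example inputs.

```python
def rectangular_section_moments(b_mm: float, h_mm: float, fy_mpa: float):
    """Elastic (first-yield) and fully plastic moments of a rectangular b x h section.

    M_y = fy * S, with S = b*h^2/6 (elastic section modulus)
    M_p = fy * Z, with Z = b*h^2/4 (plastic section modulus)
    """
    S = b_mm * h_mm**2 / 6.0
    Z = b_mm * h_mm**2 / 4.0
    M_y = fy_mpa * S / 1e6   # N*mm -> kN*m
    M_p = fy_mpa * Z / 1e6
    return M_y, M_p, M_p / M_y   # the ratio is the shape factor (1.5 for a rectangle)

# Example: 100 mm x 200 mm section, 250 MPa yield stress (assumed values).
print(rectangular_section_moments(100.0, 200.0, 250.0))  # (~166.7 kN*m, 250.0 kN*m, 1.5)
```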
https://en.wikipedia.org/wiki/Psophometric%20weighting
Psophometric weighting refers to any weighting curve used in the measurement of noise. In the field of audio engineering it has a more specific meaning, referring to noise weightings used especially in measuring noise on telecommunications circuits. Key standards are ITU-T O.41 and C-message weighting as shown here. Use A major use of noise weighting is in the measurement of residual noise in audio equipment, usually present as hiss or hum in quiet moments of programme material. The purpose of weighting here is to emphasise the parts of the audible spectrum that ears perceive most readily, and attenuate the parts that contribute less to perception of loudness, in order to get a measured figure that correlates well with subjective effect. See also Audio system measurements Equal-loudness contour Fletcher–Munson curves Noise measurement Headroom Psophometric voltage Rumble measurement ITU-R 468 noise weighting A-weighting Weighting filter Weighting Weighting curve References Noise (electronics)
https://en.wikipedia.org/wiki/Electrophoretic%20mobility%20shift%20assay
An electrophoretic mobility shift assay (EMSA) or mobility shift electrophoresis, also referred to as a gel shift assay, gel mobility shift assay, band shift assay, or gel retardation assay, is a common affinity electrophoresis technique used to study protein–DNA or protein–RNA interactions. This procedure can determine if a protein or mixture of proteins is capable of binding to a given DNA or RNA sequence, and can sometimes indicate if more than one protein molecule is involved in the binding complex. Gel shift assays are often performed in vitro concurrently with DNase footprinting, primer extension, and promoter-probe experiments when studying transcription initiation, DNA replication, DNA repair or RNA processing and maturation, as well as pre-mRNA splicing. Although precursors can be found in earlier literature, most current assays are based on methods described by Garner and Revzin and Fried and Crothers. Principle A mobility shift assay is electrophoretic separation of a protein–DNA or protein–RNA mixture on a polyacrylamide or agarose gel for a short period (about 1.5-2 hr for a 15- to 20-cm gel). The speed at which different molecules (and combinations thereof) move through the gel is determined by their size and charge, and to a lesser extent, their shape (see gel electrophoresis). The control lane (DNA probe without protein present) will contain a single band corresponding to the unbound DNA or RNA fragment. However, assuming that the protein is capable of binding to the fragment, the lane with the binding protein present will contain another band that represents the larger, less mobile complex of nucleic acid probe bound to protein, which is 'shifted' up on the gel (since it has moved more slowly). Under the correct experimental conditions, the interaction between the DNA (or RNA) and protein is stabilized and the ratio of bound to unbound nucleic acid on the gel reflects the fraction of free and bound probe molecules as the binding reaction ent
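The last point above, that the ratio of bound to unbound probe reflects the fractions of bound and free molecules, is the basis for quantitative EMSA. Below is a small illustrative Python sketch using invented band intensities and a simple single-site binding model; it is not a protocol from the article, and the numbers are made up for demonstration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented densitometry data: protein concentration (nM) and band intensities.
protein_nM = np.array([1, 3, 10, 30, 100, 300], dtype=float)
bound_intensity = np.array([120, 300, 620, 880, 1010, 1060], dtype=float)
free_intensity = np.array([980, 810, 500, 240, 95, 45], dtype=float)
fraction_bound = bound_intensity / (bound_intensity + free_intensity)

def single_site(p, kd):
    """Fraction bound for a simple 1:1 binding model with protein in excess."""
    return p / (kd + p)

(kd_fit,), _ = curve_fit(single_site, protein_nM, fraction_bound, p0=[10.0])
print(f"Apparent Kd ≈ {kd_fit:.1f} nM")
```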
https://en.wikipedia.org/wiki/VAX-11
The VAX-11 is a discontinued family of 32-bit superminicomputers, running the Virtual Address eXtension (VAX) instruction set architecture (ISA), developed and manufactured by Digital Equipment Corporation (DEC). Development began in 1976. In addition to being powerful machines in their own right, they also offer the ability to run user-mode PDP-11 code (thus the -11 in VAX-11), offering an upward compatible path for existing customers. The first machine in the series, the VAX-11/780, was announced in October 1977. Its former competitors in the minicomputer space, like Data General and Hewlett-Packard, were unable to successfully respond to the introduction and rapid update of the VAX design. DEC followed the VAX-11/780 with the lower-cost 11/750, and the even lower cost 11/730 and 11/725 models in 1982. More powerful models, initially known as the VAX-11/790 and VAX-11/795, were instead rebranded as the VAX 8600 series. The VAX-11 line was discontinued in 1988, having been supplanted by the MicroVAX family on the low end, and the VAX 8000 family on the high end. The VAX-11/780 is one of the most successful and most studied computers in history. VAX-11/780 The VAX-11/780, code-named "Star", was introduced on 25 October 1977 at DEC's Annual Meeting of Shareholders. It is the first computer to implement the VAX architecture. The KA780 central processing unit (CPU) is built from Schottky transistor-transistor logic (TTL) devices and has a 200 ns cycle time (5 MHz) and a 2 KB cache. Memory and I/O are accessed via the Synchronous Backplane Interconnect (SBI). The CPU is microprogrammed. The microcode is loaded at boot time from an 8" floppy disk controlled by a front-end processor, a PDP-11/03, which is used to run local and remote diagnostics. The VAX-11/780 originally supported up to 8 MB of memory through one or two MS780-C memory controllers, with each controller supporting between 128 KB and 4 MB of memory. The later MS780-E memory controller
https://en.wikipedia.org/wiki/Path%20expression
In query languages, path expressions identify an object by describing how to navigate to it in some graph (possibly implicit) of objects. For example, the path expression p.Manager.Home.City might refer to the city of residence of someone's manager. Path expressions have been extended to support regular expression-like flexibility. XPath is an example of a path expression language. In concurrency control, path expressions are a mechanism for expressing permitted sequences of execution. For example, a path expression like "{read}, write" might specify that either multiple simultaneous executions of read or a single execution of write, but not both, are allowed at any point in time. Path expressions are a mechanism for the synchronization of processes at the monitor level in software; this provides a clear and structured approach to describing shared data and the coordination and communication between concurrent processes. This method is flexible in its ability to express timing, and can be used in different ways. In addition, path expressions are useful for process synchronization for two reasons: first, the close relationship between stream expressions and regular expressions simplifies the task of writing and reasoning about programs that use this synchronization mechanism; second, synchronization in many concurrent programs is finite-state, and therefore can be adequately described by regular expressions. For precisely the same reasons, path expressions are useful for controlling the behavior of complicated asynchronous circuits. In fact, the finite state assumption may be even more reasonable at the hardware level than at the monitor level. Path expressions provide a high level of descriptive synchronization that aids in the prevention and detection of design errors in complex systems and overcomes some of the dangers, such as certain forms of coding errors. See also Object database References Concurr
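The query-language sense of a path expression (navigating an object graph segment by segment, as in p.Manager.Home.City) is easy to sketch. The following Python fragment and its sample data are invented for illustration; they are not part of any particular query engine.

```python
def eval_path(obj, path: str):
    """Follow a dotted path expression through nested dicts/objects."""
    for segment in path.split("."):
        if isinstance(obj, dict):
            obj = obj.get(segment)
        else:
            obj = getattr(obj, segment, None)
        if obj is None:
            return None   # navigation failed partway through
    return obj

# Example object graph (hypothetical data).
p = {"Manager": {"Home": {"City": "Copenhagen"}, "Name": "Alice"}}
print(eval_path(p, "Manager.Home.City"))   # -> Copenhagen
```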
https://en.wikipedia.org/wiki/Inverse%20demand%20function
In economics, an inverse demand function is the mathematical relationship that expresses price as a function of quantity demanded (it is therefore also known as a price function). Historically, economists first expressed the price of a good as a function of demand (holding the other economic variables, like income, constant), and plotted the price-demand relationship with demand on the x (horizontal) axis (the demand curve). Later, additional variables, like prices of other goods, came into the analysis, and it became more convenient to express demand as a multivariate function (the demand function) $Q = f(P, \text{prices of other goods}, \text{income}, \ldots)$, so the original demand curve now depicts the inverse demand function with the extra variables held fixed. Definition In mathematical terms, if the demand function is $Q = f(P)$, then the inverse demand function is $P = f^{-1}(Q)$. The value of the inverse demand function is the highest price that could be charged and still generate the quantity demanded. This is useful because economists typically place price (P) on the vertical axis and quantity (demand, Q) on the horizontal axis in supply-and-demand diagrams, so it is the inverse demand function that depicts the graphed demand curve in the way the reader expects to see. The inverse demand function is the same as the average revenue function, since P = AR. To compute the inverse demand function, simply solve for P from the demand function. For example, if the demand function has the linear form $Q = c - dP$, then the inverse demand function would be $P = (c - Q)/d$. Note that although price is the dependent variable in the inverse demand function, it is still the case that the equation represents how the price determines the quantity demanded, not the reverse. Relation to marginal revenue There is a close relationship between any inverse demand function for a linear demand equation and the marginal revenue function. For any linear demand function with an inverse demand equation of the form P = a - bQ, the marginal revenue function has the form MR = a - 2bQ. The invers
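The inverse-demand/marginal-revenue relationship for a linear demand curve (P = a − bQ gives MR = a − 2bQ) can be checked numerically. The coefficients below are arbitrary example values, not figures from the article.

```python
# For inverse demand P(Q) = a - b*Q, total revenue is TR(Q) = Q*P(Q) = a*Q - b*Q**2,
# so marginal revenue is MR(Q) = dTR/dQ = a - 2*b*Q: same intercept, twice the slope.
a, b = 100.0, 2.0   # arbitrary example coefficients

def price(q):
    return a - b * q

def marginal_revenue(q):
    return a - 2 * b * q

def mr_numeric(q, h=1e-6):
    """Numerical derivative of total revenue TR(q) = q * price(q), to confirm MR."""
    tr = lambda x: x * price(x)
    return (tr(q + h) - tr(q - h)) / (2 * h)

q = 10.0
print(price(q), marginal_revenue(q), round(mr_numeric(q), 4))   # 80.0 60.0 60.0
```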
https://en.wikipedia.org/wiki/Mohammad-Ali%20Najafi
Mohammad-Ali Najafi (; born 13 January 1952) is an Iranian mathematician and reformist politician who was the Mayor of Tehran, serving in the post for eight months, until April 2018. He held cabinet portfolios during the 1980s, 1990s and 2010s. He is also a retired professor of mathematics at Sharif University of Technology. Early life and education Najafi was born in Tehran on 13 January 1952. He ranked first in Iranian national university entrance exam and enrolled in Sharif University of Technology (then known as Aryamehr University of Technology). He earned a Bachelor of Science degree in mathematics from the Sharif University of Technology. Following his bachelors, he enrolled in the graduate program at the Massachusetts Institute of Technology. He received his Master of Science degree in mathematics with the final grade of A+ in 1976 but dropped out of PhD program in 1978 during the Iranian revolution to return to Iran. Career Following the Iranian revolution of 1979, Najafi returned to Iran and became a faculty member at Isfahan University of Technology in 1979 and he was the chair of the university from 1980 to 1981. He was a faculty member at department of mathematical sciences in Sharif University of Technology from 1984 to 1988, when he moved to government. At the end of the reformist government of Mohammad Khatami and following Mahmoud Ahmadinejad's election Najafi moved back to university and has been faculty in the department of mathematics at Sharif University of Technology working on representation theory. He served as an advisor to Mostafa Chamran. He was the minister of higher education from 1981 to 1984 in the cabinet of then Prime Minister Mir-Hossein Mousavi. In 1989, he became the minister of education under then President Hashemi Rafsanjani and served until 1997. In 1997, he was appointed vice president and head of the Planning and Budget Organization by President Mohammad Khatami, but after a merge of the organization with another he was
https://en.wikipedia.org/wiki/Correlation%20sum
In chaos theory, the correlation sum is the estimator of the correlation integral, which reflects the mean probability that the states at two different times are close: $C(\varepsilon) = \frac{2}{N(N-1)} \sum_{i<j} \Theta(\varepsilon - \| \vec{x}_i - \vec{x}_j \|)$, where $N$ is the number of considered states $\vec{x}_i$, $\varepsilon$ is a threshold distance, $\| \cdot \|$ a norm (e.g. Euclidean norm) and $\Theta$ the Heaviside step function. If only a time series is available, the phase space can be reconstructed by using a time delay embedding (see Takens' theorem): $\vec{x}_i = (u_i, u_{i+\tau}, \ldots, u_{i+(m-1)\tau})$, where $u_i$ is the time series, $m$ the embedding dimension and $\tau$ the time delay. The correlation sum is used to estimate the correlation dimension. See also Recurrence quantification analysis References Chaos theory Dynamical systems Dimension theory
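A direct implementation of the correlation sum defined above is short. The Python sketch below uses a logistic-map time series and example values of the embedding dimension, delay and threshold purely for illustration.

```python
import numpy as np

def correlation_sum(series, eps, m=2, tau=1):
    """Correlation sum C(eps) of a scalar time series using a delay embedding."""
    u = np.asarray(series, dtype=float)
    n_vec = len(u) - (m - 1) * tau
    # Delay-embedded state vectors x_i = (u_i, u_{i+tau}, ..., u_{i+(m-1)tau})
    x = np.column_stack([u[j * tau : j * tau + n_vec] for j in range(m)])
    dists = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    close = dists < eps
    iu = np.triu_indices(n_vec, k=1)          # pairs with i < j only
    return 2.0 * close[iu].sum() / (n_vec * (n_vec - 1))

# Example: chaotic logistic-map data (parameters chosen for illustration).
u = np.empty(2000); u[0] = 0.4
for i in range(1, len(u)):
    u[i] = 3.9 * u[i - 1] * (1 - u[i - 1])
print(correlation_sum(u, eps=0.05, m=2, tau=1))
```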
https://en.wikipedia.org/wiki/Building%20information%20modeling
Building information modeling (BIM) is a process involving the generation and management of digital representations of the physical and functional characteristics of places. BIM is supported by various tools, technologies and contracts. Building information models (BIMs) are computer files (often but not always in proprietary formats and containing proprietary data) which can be extracted, exchanged or networked to support decision-making regarding a built asset. BIM software is used by individuals, businesses and government agencies who plan, design, construct, operate and maintain buildings and diverse physical infrastructures, such as water, refuse, electricity, gas, communication utilities, roads, railways, bridges, ports and tunnels. The concept of BIM has been in development since the 1970s, but it only became an agreed term in the early 2000s. The development of standards and the adoption of BIM has progressed at different speeds in different countries. Standards developed in the United Kingdom from 2007 onwards have formed the basis of the international standard ISO 19650, launched in January 2019. History The concept of BIM has existed since the 1970s. The first software tools developed for modeling buildings emerged in the late 1970s and early 1980s, and included workstation products such as Chuck Eastman's Building Description System and GLIDE, RUCAPS, Sonata, Reflex and Gable 4D Series. The early applications, and the hardware needed to run them, were expensive, which limited widespread adoption. The pioneering role of applications such as RUCAPS, Sonata and Reflex has been recognized by Laiserin as well as the UK's Royal Academy of Engineering; former GMW employee Jonathan Ingram worked on all three products. What became known as BIM products differed from architectural drafting tools such as AutoCAD by allowing the addition of further information (time, cost, manufacturers' details, sustainability, and maintenance information, etc.) to the buildi
https://en.wikipedia.org/wiki/IEEE%20802.1ag
IEEE 802.1ag is an amendment to the IEEE 802.1Q networking standard which introduces Connectivity Fault Management (CFM). This defines protocols and practices for the operations, administration, and maintenance (OAM) of paths through 802.1 bridges and local area networks (LANs). The final version was approved by the IEEE in 2007. IEEE 802.1ag is a subset of the earlier ITU-T Recommendation Y.1731, which additionally addresses performance monitoring. The standard: Defines maintenance domains, their constituent maintenance points, and the managed objects required to create and administrate them Defines the relationship between maintenance domains and the services offered by VLAN-aware bridges and provider bridges Describes the protocols and procedures used by maintenance points to maintain and diagnose connectivity faults within a maintenance domain; Provides means for future expansion of the capabilities of maintenance points and their protocols Definitions The document defines various terms: Maintenance Domain (MD) A Maintenance Domain is a management space on a network, typically owned and operated by a single entity. MDs are configured with Names and Levels, where the eight levels range from 0 to 7. A hierarchical relationship exists between domains based on levels. The larger the domain, the higher the level value. Recommended values of levels are as follows: Customer Domain: Largest (e.g., 7) Provider Domain: In between (e.g., 3) Operator Domain: Smallest (e.g., 1) Maintenance Association (MA) Defined as a "set of MEPs, all of which are configured with the same MAID (Maintenance Association Identifier) and MD Level, each of which is configured with a MEPID unique within that MAID and MD Level, and all of which are configured with the complete list of MEPIDs." Maintenance association End Point (MEP) Points at the edge of the domain that define the boundary for the domain. A MEP sends and receives CFM frames through the relay function, drops all CFM frames of its
https://en.wikipedia.org/wiki/Exogenote
An exogenote is a piece of donor DNA that is involved in the mating of prokaryotic organisms. DNA transferred from an Hfr (high frequency of recombination) donor is called the exogenote, and the homologous part of the F− (recipient) genophore is called the endogenote. An exogenote is genetic material that is released into the environment by prokaryotic cells, usually upon their lysis. This exogenous genetic material is then free to be taken up by other competent bacteria, and used as a template for protein synthesis or broken down for its molecules to be used elsewhere in the cell. Taking up genetic material into the cell from the surrounding environment is a form of bacterial transformation. Exogenotes can also be transferred directly from donor to recipient bacteria as an F'-plasmid in a process known as bacterial conjugation. F'-plasmids only form if the F factor is incorrectly excised from the Hfr chromosome, which results in a small amount of donor chromosomal DNA being transferred to the recipient with very high efficiency. References Genomics Prokaryotes
https://en.wikipedia.org/wiki/Shadow%20and%20highlight%20enhancement
Shadow and highlight enhancement refers to an image processing technique used to correct exposure. The use of this technique has been gaining popularity, making its way onto magazine covers, digital media, and photos. It is, however, considered by some to be akin to other destructive Photoshop filters, such as the Watercolor filter, or the Mosaic filter. Shadow recovery A conservative application of the shadow/highlight tool can be very useful in recovering shadows, though it tends to leave a telltale halo around the boundary between highlight and shadow if used incorrectly. A way to avoid this is to use the bracketing technique, although this usually requires a tripod. Highlight recovery Recovering highlights with this tool, however, has mixed results, especially when using it on images with skin in them, and often makes people look like they have been "sprayed with fake tan". Shadow brightening - manual One way to brighten shadows in image editing software such as GIMP or Adobe Photoshop is to duplicate the background layer, invert the copy and set the blend modes of that top layer to "Soft Light". You can also use an inverted black and white copy of the image as a mask on a brightening layer, such as Curves or Levels. Shadow brightening - automatic Several automatic computer image processing-based shadow recovery and dynamic range compression methods can yield a similar effect. Some of these methods include the retinex method and homomorphic range compression. The retinex method is based on work from 1963 by Edwin Land, the founder of Polaroid. Shadow enhancement can also be accomplished using adaptive image processing algorithms such as adaptive histogram equalization or contrast limiting adaptive histogram equalization (CLAHE). See also HDR PhotoStudio — an HDR image editing tool that implements an advanced Shadow/highlight algorithm with halo reduction technique. Tone mapping References External links Shadow/Highlight Tool Overview at Dpr
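The manual shadow-brightening recipe described above (duplicate the image, invert the copy, blend it back in soft-light mode) can be expressed directly as array math. The soft-light formula below is one common variant (the one a given editor uses may differ), and the file names are placeholders.

```python
import numpy as np
from PIL import Image

def soft_light(base, blend):
    """One common soft-light blend formula, operating on float arrays in [0, 1]."""
    return np.where(blend <= 0.5,
                    base - (1 - 2 * blend) * base * (1 - base),
                    base + (2 * blend - 1) * (np.sqrt(base) - base))

def brighten_shadows(img_float):
    """Blend the inverted image over itself in soft-light mode to lift shadows."""
    inverted = 1.0 - img_float
    return np.clip(soft_light(img_float, inverted), 0.0, 1.0)

img = np.asarray(Image.open("photo.jpg")).astype(np.float32) / 255.0  # placeholder path
out = (brighten_shadows(img) * 255).astype(np.uint8)
Image.fromarray(out).save("photo_shadows_lifted.jpg")
```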
https://en.wikipedia.org/wiki/Computational%20scientist
A computational scientist is a person skilled in scientific computing. This person is usually a scientist, a statistician, an applied mathematician, or an engineer who applies high-performance computing and sometimes cloud computing in different ways to advance the state-of-the-art in their respective applied discipline: physics, chemistry, social sciences and so forth. Thus scientific computing has increasingly influenced many areas such as economics, biology, law, and medicine, to name a few. Because a computational scientist's work is generally applied to science and other disciplines, they are not necessarily trained in computer science specifically, though concepts of computer science are often used. Computational scientists are typically researchers at academic universities, national labs, or tech companies. One of the tasks of a computational scientist is to analyze large amounts of data, often from astrophysics or related fields, as these can generate huge amounts of data. Computational scientists often have to clean up and calibrate the data to a usable form for an effective analysis. Computational scientists are also tasked with creating artificial data through computer models and simulations. References Computational science Computer occupations Science occupations Computational fields of study
https://en.wikipedia.org/wiki/Longest%20repeated%20substring%20problem
In computer science, the longest repeated substring problem is the problem of finding the longest substring of a string that occurs at least twice. This problem can be solved in linear time and space by building a suffix tree for the string (with a special end-of-string symbol like '$' appended), and finding the deepest internal node in the tree with more than one child. Depth is measured by the number of characters traversed from the root. The string spelled by the edges from the root to such a node is a longest repeated substring. The problem of finding the longest substring with at least occurrences can be solved by first preprocessing the tree to count the number of leaf descendants for each internal node, and then finding the deepest node with at least leaf descendants. To avoid overlapping repeats, you can check that the list of suffix lengths has no consecutive elements with less than prefix-length difference. In the figure with the string "ATCGATCGA$", the longest substring that repeats at least twice is "ATCGA". External links C implementation of Longest Repeated Substring using Suffix Tree Online Demo: Longest Repeated Substring Problems on strings Formal languages Combinatorics
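The suffix-tree approach above runs in linear time; a simpler (if asymptotically slower) way to get the same answer is to sort the suffixes and take the longest common prefix of adjacent ones. The short Python sketch below illustrates the idea; it is not the linear-time algorithm described in the article.

```python
def longest_repeated_substring(s: str) -> str:
    """Longest substring occurring at least twice, via sorted suffixes (O(n^2 log n))."""
    suffixes = sorted(s[i:] for i in range(len(s)))
    best = ""
    for a, b in zip(suffixes, suffixes[1:]):
        # Longest common prefix of two adjacent suffixes in sorted order.
        k = 0
        while k < min(len(a), len(b)) and a[k] == b[k]:
            k += 1
        if k > len(best):
            best = a[:k]
    return best

print(longest_repeated_substring("ATCGATCGA"))   # -> "ATCGA"
```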
https://en.wikipedia.org/wiki/Epsilon%20number
In mathematics, the epsilon numbers are a collection of transfinite numbers whose defining property is that they are fixed points of an exponential map. Consequently, they are not reachable from 0 via a finite series of applications of the chosen exponential map and of "weaker" operations like addition and multiplication. The original epsilon numbers were introduced by Georg Cantor in the context of ordinal arithmetic; they are the ordinal numbers ε that satisfy the equation $\varepsilon = \omega^{\varepsilon}$, in which ω is the smallest infinite ordinal. The least such ordinal is ε0 (pronounced epsilon nought or epsilon zero), which can be viewed as the "limit" obtained by transfinite recursion from a sequence of smaller limit ordinals: $\varepsilon_0 = \sup\{\omega, \omega^{\omega}, \omega^{\omega^{\omega}}, \omega^{\omega^{\omega^{\omega}}}, \ldots\}$, where $\sup$ is the supremum function, which is equivalent to set union in the case of the von Neumann representation of ordinals. Larger ordinal fixed points of the exponential map are indexed by ordinal subscripts, resulting in the sequence $\varepsilon_0, \varepsilon_1, \varepsilon_2, \ldots, \varepsilon_\omega, \ldots, \varepsilon_{\varepsilon_0}, \ldots$. The ordinal ε0 is still countable, as is any epsilon number whose index is countable (there exist uncountable ordinals, and uncountable epsilon numbers whose index is an uncountable ordinal). The smallest epsilon number ε0 appears in many induction proofs, because for many purposes, transfinite induction is only required up to ε0 (as in Gentzen's consistency proof and the proof of Goodstein's theorem). Its use by Gentzen to prove the consistency of Peano arithmetic, along with Gödel's second incompleteness theorem, shows that Peano arithmetic cannot prove the well-foundedness of this ordering (it is in fact the least ordinal with this property, and as such, in proof-theoretic ordinal analysis, is used as a measure of the strength of the theory of Peano arithmetic). Many larger epsilon numbers can be defined using the Veblen function. A more general class of epsilon numbers has been identified by John Horton Conway and Donald Knuth in the surreal number system, consisting of all surreals that are fixed points of the base ω exponential map
https://en.wikipedia.org/wiki/German%20locomotive%20classification
The different railway companies in Germany have used various schemes to classify their rolling stock. From the beginning As widely known the first few locomotives had names. The first locomotive in public service in Germany from 1835 was named Adler. The first railway lines were built by privately owned companies. That changed later when many railway companies were taken over or founded by the respective German states such as Prussia, Bavaria, etc. Different numbering schemes prior to 1924 The fast-growing number of locomotives made a numbering scheme inevitable. Most of the various state-owned German railway companies (called Länderbahnen in German) developed their own schemes, e. g. the Prussian state railways (preußische Staatseisenbahnen sometimes erroneously referred to as the Königlich Preussische Eisenbahn-Verwaltung or KPEV) introduced P for passenger train locomotives (the P 8 was one of the most important locomotive types with a total of over 3,000 units built), S for Schnellzug (express train) locomotives (e. g. the famous S 10), G for Güterzug (freight train) locomotives and T for Tenderlokomotive (tank locomotive). Basically the numbers were used continuously. As the Prussians also standardised technical standards, some of the smaller companies also used the Prussian numbering scheme or a similar one. Bavaria's state-owned railway chose a different way: They also used P, S, or G to indicate the train type, but combined with the numbers of driving axles and of the axles in total, separated by a slash (similar to the Swiss system). E. g., the famous S 3/6 was a 2'C1' or 4-6-2 Pacific, meaning that of a total of 6 axles, 3 were driving axles. These various state-owned companies and thus their numbering schemes were retained after German unification in 1871 and kept until well after World War I. The first uniform scheme The Deutsche Reichsbahn-Gesellschaft DRG was founded in 1924 by the amalgamation of the various state-owned Länderbahnen. One of its
https://en.wikipedia.org/wiki/Turret%20lathe
A turret lathe is a form of metalworking lathe that is used for repetitive production of duplicate parts, which by the nature of their cutting process are usually interchangeable. It evolved from earlier lathes with the addition of the turret, which is an indexable toolholder that allows multiple cutting operations to be performed, each with a different cutting tool, in easy, rapid succession, with no need for the operator to perform set-up tasks in between (such as installing or uninstalling tools) or to control the toolpath. The latter is due to the toolpath's being controlled by the machine, either in jig-like fashion, via the mechanical limits placed on it by the turret's slide and stops, or via digitally-directed servomechanisms for computer numerical control lathes. The name derives from the way early turrets took the general form of a flattened cylindrical block mounted to the lathe's cross-slide, capable of rotating about the vertical axis and with toolholders projecting out to all sides, and thus vaguely resembled a swiveling gun turret. Capstan lathe is the usual name in the UK and Commonwealth, though the two terms are also used in contrast: see below, Capstan versus turret. History Turret lathes became indispensable to the production of interchangeable parts and for mass production. The first turret lathe was built by Stephen Fitch in 1845 to manufacture screws for pistol percussion parts. In the mid-nineteenth century, the need for interchangeable parts for Colt revolvers enhanced the role of turret lathes in achieving this goal as part of the "American system" of manufacturing arms. Clock-making and bicycle manufacturing had similar requirements. Christopher Spencer invented the first fully automated turret lathe in 1873, which led to designs using cam action or hydraulic mechanisms. From the late-19th through mid-20th centuries, turret lathes, both manual and automatic (i.e., screw machines and chuckers), were one of the most important class
https://en.wikipedia.org/wiki/Lattice%20of%20subgroups
In mathematics, the lattice of subgroups of a group $G$ is the lattice whose elements are the subgroups of $G$, with the partial order relation being set inclusion. In this lattice, the join of two subgroups is the subgroup generated by their union, and the meet of two subgroups is their intersection. Example The dihedral group Dih4 has ten subgroups, counting itself and the trivial subgroup. Five of the eight group elements generate subgroups of order two, and the other two non-identity elements both generate the same cyclic subgroup of order four. In addition, there are two subgroups of the form Z2 × Z2, generated by pairs of order-two elements. The lattice formed by these ten subgroups is shown in the illustration. This example also shows that the lattice of all subgroups of a group is not a modular lattice in general. Indeed, this particular lattice contains the forbidden "pentagon" N5 as a sublattice. Properties For any A, B, and C subgroups of a group with A ≤ C (A subgroup of C) then AB ∩ C = A(B ∩ C); the multiplication here is the product of subgroups. This property has been called the modular property of groups or (Dedekind's) modular law. Since for two normal subgroups the product is actually the smallest subgroup containing the two, the normal subgroups form a modular lattice. The Lattice theorem establishes a Galois connection between the lattice of subgroups of a group and that of its quotients. The Zassenhaus lemma gives an isomorphism between certain combinations of quotients and products in the lattice of subgroups. In general, there is no restriction on the shape of the lattice of subgroups, in the sense that every lattice is isomorphic to a sublattice of the subgroup lattice of some group. Furthermore, every finite lattice is isomorphic to a sublattice of the subgroup lattice of some finite group. Characteristic lattices Subgroups with certain properties form lattices, but other properties do not. Normal subgroups always form a modul
https://en.wikipedia.org/wiki/List%20of%20contaminated%20cell%20lines
Many cell lines that are widely used for biomedical research have been overgrown by other, more aggressive cells. For example, supposed thyroid lines were actually melanoma cells, supposed prostate tissue was actually bladder cancer, and supposed normal uterine cultures were actually breast cancer. This is a list of cell lines that have been cross-contaminated and overgrown by other cells. Estimates based on screening of leukemia-lymphoma cell lines suggest that about 15% of these cell lines are not representative of what they are usually assumed to be. A project is currently underway to enumerate and rename contaminated cell lines to avoid errors in research caused by misattribution. Contaminated cell lines have been extensively used in research without knowledge of their true character. For example, most if not all research on the endothelium ECV-304 or the megakaryocyte DAMI cell lines has in reality been conducted on bladder carcinoma and erythroleukemia cells, respectively. Thus, all research on endothelium- or megakaryocyte-specific functions utilizing these cell lines has turned out to be misguided, serving more of a warning example. There are two principal ways a cell line can become contaminated: cell cultures are often exchanged between research groups; if, during handling, a sample gets contaminated and then passed on, subsequent exchanges of cells will lead to the contaminating population being established, although parts of the supposed cell line are still genuine. More serious is contamination at the source: during establishment of the original cell line, some contaminating cells are accidentally introduced into the cultures, where they in time outgrow the desired cells. The initial testing, in this case, still suggested that the cell line is genuine and novel, but in reality, it has disappeared soon after being established and all samples of such cell lines are actually the contaminant cells. It requires lengthy research to determine the precise poi
https://en.wikipedia.org/wiki/Linear%20group
In mathematics, a matrix group is a group G consisting of invertible matrices over a specified field K, with the operation of matrix multiplication. A linear group is a group that is isomorphic to a matrix group (that is, admitting a faithful, finite-dimensional representation over K). Any finite group is linear, because it can be realized by permutation matrices using Cayley's theorem. Among infinite groups, linear groups form an interesting and tractable class. Examples of groups that are not linear include groups which are "too big" (for example, the group of permutations of an infinite set), or which exhibit some pathological behavior (for example, finitely generated infinite torsion groups). Definition and basic examples A group G is said to be linear if there exists a field K, an integer d and an injective homomorphism from G to the general linear group GLd(K) (a faithful linear representation of dimension d over K): if needed one can mention the field and dimension by saying that G is linear of degree d over K. Basic instances are groups which are defined as subgroups of a linear group, for example: The group GLn(K) itself; The special linear group SLn(K) (the subgroup of matrices with determinant 1); The group of invertible upper (or lower) triangular matrices If gi is a collection of elements in GLn(K) indexed by a set I, then the subgroup generated by the gi is a linear group. In the study of Lie groups, it is sometimes pedagogically convenient to restrict attention to Lie groups that can be faithfully represented over the field of complex numbers. (Some authors require that the group be represented as a closed subgroup of the GLn(C).) Books that follow this approach include Hall (2015) and Rossmann (2002). Classes of linear groups Classical groups and related examples The so-called classical groups generalize the examples 1 and 2 above. They arise as linear algebraic groups, that is, as subgroups of GLn defined by a finite number of equations.
https://en.wikipedia.org/wiki/Omega%20language
In formal language theory within theoretical computer science, an infinite word is an infinite-length sequence (specifically, an ω-length sequence) of symbols, and an ω-language is a set of infinite words. Here, ω refers to the first infinite ordinal number, the set of natural numbers. Formal definition Let Σ be a set of symbols (not necessarily finite). Following the standard definition from formal language theory, Σ* is the set of all finite words over Σ. Every finite word has a length, which is a natural number. Given a word w of length n, w can be viewed as a function from the set {0,1,...,n−1} → Σ, with the value at i giving the symbol at position i. The infinite words, or ω-words, can likewise be viewed as functions from ω to Σ. The set of all infinite words over Σ is denoted Σω. The set of all finite and infinite words over Σ is sometimes written Σ∞ or Σ≤ω. Thus an ω-language L over Σ is a subset of Σω. Operations Some common operations defined on ω-languages are: Intersection and union Given ω-languages L and M, both L ∩ M and L ∪ M are ω-languages. Left concatenation Let L be an ω-language, and K be a language of finite words only. Then K can be concatenated on the left, and only on the left, to L to yield the new ω-language KL. Omega (infinite iteration) As the notation hints, the operation Lω is the infinite version of the Kleene star operator on finite-length languages. Given a formal language L, Lω is the ω-language of all infinite sequences of words from L; in the functional view, of all functions from ω to L. Prefixes Let w be an ω-word. Then the formal language Pref(w) contains every finite prefix of w. Limit Given a finite-length language L, an ω-word w is in the limit of L if and only if Pref(w) ∩ L is an infinite set. In other words, for an arbitrarily large natural number n, it is always possible to choose some word in L, whose length is greater than n, and which is a prefix of w. The limit operation on L can be written Lδ. Distance between ω-words The set Σω can be m
https://en.wikipedia.org/wiki/Ensonido
Ensonido is a real-time post-processing algorithm that allows users to play back MP3 Surround files in standard headphones. Ensonido was developed by the Fraunhofer Society. It simulates the natural reception of surround sound by the human ear, which usually receives sound from surrounding loudspeakers as well as from reflections and echoes of the listening room. The out-of-head localization achieved in this way noticeably improves listening comfort compared with conventional stereo headphone listening, in which all sounds are localized inside the head. In version 3.0 of the Fraunhofer IIS MP3 Surround Player, Ensonido was replaced with the newer mp3HD. External links all4mp3.com Software, demos, information, and various mp3 resources mp3surround.com - Demo content, information and evaluation software The Register news story Press Releases mp3surrounded.com - First blog on the internet about MP3 Surround - MP3 Surround samples Audio codecs Digital audio
https://en.wikipedia.org/wiki/Sun-3
Sun-3 is a series of UNIX computer workstations and servers produced by Sun Microsystems, launched on September 9, 1985. The Sun-3 series are VMEbus-based systems similar to some of the earlier Sun-2 series, but using the Motorola 68020 microprocessor, in combination with the Motorola 68881 floating-point co-processor (optional on the Sun 3/50) and a proprietary Sun MMU. Sun-3 systems were supported in SunOS versions 3.0 to 4.1.1_U1 and also have current support in NetBSD and Linux. Sun-3 models Models are listed in approximately chronological order.
Model | Codename | CPU board | CPU MHz                    | Max. RAM | Chassis
3/75  | Carrera  | Sun 3004  | 16.67 MHz                  | 8 MB     | 2-slot VME (desktop)
3/140 | Carrera  | Sun 3004  | 16.67 MHz                  | 16 MB    | 3-slot VME (desktop/side)
3/160 | Carrera  | Sun 3004  | 16.67 MHz                  | 16 MB    | 12-slot VME (deskside)
3/180 | Carrera  | Sun 3004  | 16.67 MHz                  | 16 MB    | 12-slot VME (rackmount)
3/150 | Carrera  | Sun 3004  | 16.67 MHz                  | 16 MB    | 6-slot VME (deskside)
3/50  | Model 25 | —         | 15.7 MHz                   | 4 MB     | "wide Pizza-box" desktop
3/110 | Prism    | —         | 16.67 MHz                  | 12 MB    | 3-slot VME (desktop/side)
3/260 | Sirius   | Sun 3200  | 25 MHz (CPU), 20 MHz (FPU) | 32 MB    | 12-slot VME (deskside)
3/280 | Sirius   | Sun 3200  | 25 MHz (CPU), 20 MHz (FPU) | 32 MB    | 12-slot VME (rackmount)
3/60  | Ferrari  | —         | 20 MHz                     | 24 MB    | "wide Pizza-box" desktop
3/E   | Polaris  | Sun 3/E   | 20 MHz                     | 16 MB    | none (6U VME board)
(Max. RAM sizes may be greater when third-party memory boards are used.) Keyboard The Sun Type 3 keyboard is split into three blocks: special keys, main block, and numeric pad. It shipped with Sun-3 systems. Sun-3x In 1989, coincident with the launch of the SPARCstation 1, Sun launched three new Sun-3 models, the 3/80, 3/470 and 3/480. Unlike previous Sun-3s, these use a Motorola 68030 processor, 68882 floating-point unit, and the 68030's integral MMU. This 68030-based architecture is called
https://en.wikipedia.org/wiki/Ripple%20%28electrical%29
Ripple (specifically ripple voltage) in electronics is the residual periodic variation of the DC voltage within a power supply which has been derived from an alternating current (AC) source. This ripple is due to incomplete suppression of the alternating waveform after rectification. Ripple voltage originates as the output of a rectifier or from generation and commutation of DC power. Ripple (specifically ripple current or surge current) may also refer to the pulsed current consumption of non-linear devices like capacitor-input rectifiers. As well as these time-varying phenomena, there is a frequency domain ripple that arises in some classes of filter and other signal processing networks. In this case the periodic variation is a variation in the insertion loss of the network against increasing frequency. The variation may not be strictly linearly periodic. In this meaning also, ripple is usually to be considered an incidental effect, its existence being a compromise between the amount of ripple and other design parameters. Ripple is wasted power, and has many undesirable effects in a DC circuit: it heats components, causes noise and distortion, and may cause digital circuits to operate improperly. Ripple may be reduced by an electronic filter, and eliminated by a voltage regulator. Voltage ripple A non-ideal DC voltage waveform can be viewed as a composite of a constant DC component (offset) with an alternating (AC) voltage—the ripple voltage—overlaid. The ripple component is often small in magnitude relative to the DC component, but in absolute terms, ripple (as in the case of HVDC transmission systems) may be thousands of volts. Ripple itself is a composite (non-sinusoidal) waveform consisting of harmonics of some fundamental frequency which is usually the original AC line frequency, but in the case of switched-mode power supplies, the fundamental frequency can be tens of kilohertz to megahertz. The characteristics and components of ripple depend on its so
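As a rough companion to the description above, the sketch below (not from the article) estimates the peak-to-peak ripple of a full-wave rectifier with a capacitor-input filter using the common small-ripple approximation Vpp ≈ I / (2·f·C); the load current, mains frequency and capacitance are hypothetical example values.

# Rough estimate of peak-to-peak ripple voltage for a full-wave rectifier
# with a capacitor-input filter, using the approximation
#   V_ripple(pp) ~ I_load / (2 * f_line * C)
# (valid when the ripple is small compared with the DC output voltage).
def ripple_vpp(i_load_a, f_line_hz, c_farads):
    """Approximate peak-to-peak ripple voltage in volts."""
    return i_load_a / (2.0 * f_line_hz * c_farads)

# Hypothetical example: 1 A load, 50 Hz mains, 4700 uF reservoir capacitor.
vpp = ripple_vpp(1.0, 50.0, 4700e-6)
print(f"estimated ripple: {vpp:.2f} V peak-to-peak")   # about 2.13 V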
https://en.wikipedia.org/wiki/Broadcast%20Markup%20Language
Broadcast Markup Language, or BML, is an XML-based standard developed by Japan's Association of Radio Industries and Businesses as a data broadcasting specification for digital television broadcasting. It is a data-transmission service allowing text to be displayed on a 1seg TV screen. The text contains news, sports, weather forecasts, emergency warnings such as Earthquake Early Warning, etc., free of charge. It was finalized in 1999, becoming ARIB STD-B24 Data Coding and Transmission Specification for Digital Broadcasting. The STD-B24 specification is derived from an early draft of XHTML 1.0 strict, which it extends and alters. Some subset of CSS 1 and 2 is supported, as well as ECMAScript. Example BML header: <?xml version="1.0" encoding="EUC-JP" ?> <!DOCTYPE bml PUBLIC "+//ARIB STD-B24:1999//DTD BML Document//JA" "bml_1_0.dtd"> <?bml bml-version="1.0" ?> Since version 1.0 in 1999, the BML standard has gone through several revisions, reaching version 5.0. However, due to a large installed user base of receivers which only support the original 1.0 specification, broadcasters are not able to introduce new features defined in later revisions. See also ARIB STD B24 character set Integrated Services Digital Broadcasting 1seg Ginga (SBTVD Middleware) Further reading Broadcast Markup Language (BML) at OASIS External links Official changelog for ARIB STD-B24 STD-B24 and others, List of ARIB Standards in the Field of Broadcasting (ARIB) Broadcast engineering Digital television High-definition television Industry-specific XML-based standards Interactive television ISDB Satellite television Japanese inventions Telecommunications-related introductions in 1999
https://en.wikipedia.org/wiki/Unimate
Unimate was the first industrial robot, which worked on a General Motors assembly line at the Inland Fisher Guide Plant in Ewing Township, New Jersey, in 1961. It was invented by George Devol in the 1950s using his original patent filed in 1954 and granted in 1961 (). The patent begins: The present invention relates to the automatic operation of machinery, particularly the handling apparatus, and to automatic control apparatus suited for such machinery. Devol, together with Joseph Engelberger, his business associate, started the world's first robot manufacturing company, Unimation. The machine undertook the job of transporting die castings from an assembly line and welding these parts on auto bodies, a dangerous task for workers, who might be poisoned by toxic fumes or lose a limb if they were not careful. The original Unimate consisted of a large computer-like box, joined to another box and was connected to an arm, with systematic tasks stored in a drum memory. The Unimate also appeared on The Tonight Show hosted by Johnny Carson on which it knocked a golf ball into a cup, poured a beer, waved the orchestra conductor's baton and grasped an accordion and waved it around. In 2003 the Unimate was inducted into the Robot Hall of Fame. In popular culture Fictional robots called Unimate, designed by the character Alan von Neumann, Jr., appeared in comic books from DC Comics. References External links Electronic robot 'Unimate' works in a building in Connecticut, United States. Newsreel footage Industrial robots Historical robots 1956 robots Robotics at Unimation
https://en.wikipedia.org/wiki/Logical%20truth
Logical truth is one of the most fundamental concepts in logic. Broadly speaking, a logical truth is a statement which is true regardless of the truth or falsity of its constituent propositions. In other words, a logical truth is a statement which is not only true, but one which is true under all interpretations of its logical components (other than its logical constants). Thus, logical truths such as "if p, then p" can be considered tautologies. Logical truths are thought to be the simplest case of statements which are analytically true (or in other words, true by definition). All of philosophical logic can be thought of as providing accounts of the nature of logical truth, as well as logical consequence. Logical truths are generally considered to be necessarily true. This is to say that they are such that no situation could arise in which they could fail to be true. The view that logical statements are necessarily true is sometimes treated as equivalent to saying that logical truths are true in all possible worlds. However, the question of which statements are necessarily true remains the subject of continued debate. Treating logical truths, analytic truths, and necessary truths as equivalent, logical truths can be contrasted with facts (which can also be called contingent claims or synthetic claims). Contingent truths are true in this world, but could have turned out otherwise (in other words, they are false in at least one possible world). Logically true propositions such as "If p and q, then p" and "All married people are married" are logical truths because they are true due to their internal structure and not because of any facts of the world (whereas "All married people are happy", even if it were true, could not be true solely in virtue of its logical structure). Rationalist philosophers have suggested that the existence of logical truths cannot be explained by empiricism, because they hold that it is impossible to account for our knowledge of logical tru
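The idea that a logical truth is true under all interpretations of its non-logical components can be illustrated mechanically with a small truth-table check; the following sketch is an illustration, not part of the article.

from itertools import product

def is_logical_truth(formula, variables):
    """True if the formula holds under every assignment of truth values."""
    return all(formula(*assignment)
               for assignment in product([True, False], repeat=len(variables)))

# "if p, then p" is true under all interpretations of p: a logical truth.
print(is_logical_truth(lambda p: (not p) or p, ["p"]))                   # True
# "if p and q, then p" is likewise true in every row of its truth table.
print(is_logical_truth(lambda p, q: (not (p and q)) or p, ["p", "q"]))   # True
# "p and q" is contingent: it is false under some interpretations.
print(is_logical_truth(lambda p, q: p and q, ["p", "q"]))                # False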
https://en.wikipedia.org/wiki/Syndetic%20set
In mathematics, a syndetic set is a subset of the natural numbers having the property of "bounded gaps": the sizes of the gaps between consecutive elements of the set are bounded. Definition A set S ⊆ ℕ is called syndetic if ⋃_{n ∈ F} (S − n) = ℕ for some finite subset F of ℕ, where S − n = {m ∈ ℕ : m + n ∈ S}. Thus syndetic sets have "bounded gaps"; for a syndetic set S, there is an integer p such that [a, a + p] ∩ S ≠ ∅ for any a ∈ ℕ. See also Ergodic Ramsey theory Piecewise syndetic set Thick set References Semigroup theory Ergodic theory
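A small illustration (not from the article), restricted to an initial segment of the natural numbers: multiples of 3 show the bounded gaps characteristic of a syndetic set, while the squares do not.

# Check the "bounded gaps" characterization on an initial segment of the naturals.
# Truncation to [0, limit) is only an illustration; syndeticity is a property of
# the whole infinite set.
def max_gap(elements):
    """Largest gap between consecutive elements of a sorted finite set."""
    xs = sorted(elements)
    return max(b - a for a, b in zip(xs, xs[1:]))

limit = 10_000
multiples_of_3 = [n for n in range(limit) if n % 3 == 0]   # syndetic: gaps of 3
squares = [n * n for n in range(int(limit ** 0.5))]        # not syndetic: gaps grow

print(max_gap(multiples_of_3))   # 3 - bounded, consistent with a syndetic set
print(max_gap(squares))          # grows with the limit - the gaps are unbounded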
https://en.wikipedia.org/wiki/Useful%20conversions%20and%20formulas%20for%20air%20dispersion%20modeling
Various governmental agencies involved with environmental protection and with occupational safety and health have promulgated regulations limiting the allowable concentrations of gaseous pollutants in the ambient air or in emissions to the ambient air. Such regulations involve a number of different expressions of concentration. Some express the concentrations as ppmv and some express the concentrations as mg/m3, while others require adjusting or correcting the concentrations to reference conditions of moisture content, oxygen content or carbon dioxide content. This article presents a set of useful conversions and formulas for air dispersion modeling of atmospheric pollutants and for complying with the various regulations as to how to express the concentrations obtained by such modeling. Converting air pollutant concentrations The conversion equations depend on the temperature at which the conversion is wanted (usually about 20 to 25 degrees Celsius). At an ambient air pressure of 1 atmosphere (101.325 kPa), the general equation is: concentration (mg/m3) = concentration (ppmv) × (molecular weight of the pollutant in g/mol) / (0.08205 × T) and for the reverse conversion: concentration (ppmv) = concentration (mg/m3) × (0.08205 × T) / (molecular weight of the pollutant in g/mol), where T is the ambient air temperature in kelvins. Notes: Pollution regulations in the United States typically reference their pollutant limits to an ambient temperature of 20 to 25 °C as noted above. In most other nations, the reference ambient temperature for pollutant limits may be 0 °C or other values. 1 percent by volume = 10,000 ppmv (i.e., parts per million by volume). atm = absolute atmospheric pressure in atmospheres mol = gram mole Correcting concentrations for altitude Atmospheric pollutant concentrations expressed as mass per unit volume of atmospheric air (e.g., mg/m3, µg/m3, etc.) at sea level will decrease with increasing altitude because the atmospheric pressure decreases with increasing altitude. The change of atmospheric pressure with altitude can be obtained from this equation: Pa = 0.9877^a, where Pa is the absolute pressure in atmospheres and a is the altitude in hundreds of metres. Given an atmospheric pollutant concentration at an atmospheric pressure of 1 atmosphere (i.e., at sea level altitude), the concentration at other alt
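A minimal sketch of the 1-atmosphere conversion above, in Python; the pollutant (NO2, molecular weight about 46 g/mol) and the 25 °C temperature are example inputs, not values mandated by any regulation.

# ppmv <-> mg/m3 conversion at 1 atm, using the ideal-gas molar volume
# 0.08205 * T litres per gram-mole (T in kelvins).
def ppmv_to_mg_per_m3(ppmv, mol_weight_g_per_mol, temp_c=25.0):
    t_kelvin = temp_c + 273.15
    return ppmv * mol_weight_g_per_mol / (0.08205 * t_kelvin)

def mg_per_m3_to_ppmv(mg_m3, mol_weight_g_per_mol, temp_c=25.0):
    t_kelvin = temp_c + 273.15
    return mg_m3 * (0.08205 * t_kelvin) / mol_weight_g_per_mol

# Example: 1 ppmv of NO2 (molecular weight about 46 g/mol) at 25 degrees Celsius.
print(round(ppmv_to_mg_per_m3(1.0, 46.0), 2))    # about 1.88 mg/m3
print(round(mg_per_m3_to_ppmv(1.88, 46.0), 2))   # about 1.0 ppmv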
https://en.wikipedia.org/wiki/Parallel%20SCSI
Parallel SCSI (formally, SCSI Parallel Interface, or SPI) is the earliest of the interface implementations in the SCSI family. SPI is a parallel bus; there is one set of electrical connections stretching from one end of the SCSI bus to the other. A SCSI device attaches to the bus but does not interrupt it. Both ends of the bus must be terminated. SCSI is a peer-to-peer peripheral interface. Every device attaches to the SCSI bus in a similar manner. Depending on the version, up to 8 or 16 devices can be attached to a single bus. There can be multiple hosts and multiple peripheral devices but there should be at least one host. The SCSI protocol defines communication from host to host, host to a peripheral device, and peripheral device to a peripheral device. The Symbios Logic 53C810 chip is an example of a PCI host interface that can act as a SCSI target. SCSI-1 and SCSI-2 have the option of parity bit error checking. Starting with SCSI-U160 (part of SCSI-3) all commands and data are error checked by a cyclic redundancy check. History The first two formal SCSI standards, SCSI-1 and SCSI-2, described parallel SCSI. The SCSI-3 standard then split the framework into separate layers which allowed the introduction of other data interfaces beyond parallel SCSI. The original SCSI-1 version of the parallel bus was 8 bits wide (plus a ninth parity bit). The SCSI-2 standard allowed for faster operation (10 MHz) and wider buses (16-bit or 32-bit). The 16-bit option became the most popular. At 10 MHz with a bus width of 16 bits it is possible to achieve a data rate of 20 MB/s. Subsequent extensions to the SCSI standard allowed for faster speeds: 20 MHz, 40 MHz, 80 MHz, 160 MHz and finally 320 MHz. At 320 MHz x 16 bits there is a theoretical maximum peak data rate of 640 MB/s. Due to the technical constraints of a parallel bus system, SCSI has since evolved into faster serial interfaces, mainly Serial Attached SCSI and Fibre Channel. The iSCSI protocol doesn't describe
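The throughput figures quoted above follow directly from multiplying the transfer rate by the bus width in bytes; a two-function sketch (not from the article) reproduces the quoted numbers.

# Peak parallel-SCSI data rate = transfers per second x bytes per transfer.
def peak_mb_per_s(mega_transfers_per_s, bus_width_bits):
    return mega_transfers_per_s * (bus_width_bits // 8)

print(peak_mb_per_s(10, 16))    # 20 MB/s, as quoted for the 10 MHz wide bus
print(peak_mb_per_s(320, 16))   # 640 MB/s, the theoretical peak quoted above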
https://en.wikipedia.org/wiki/The%20Music%20of%20the%20Primes
The Music of the Primes (British subtitle: Why an Unsolved Problem in Mathematics Matters; American subtitle: Searching to Solve the Greatest Mystery in Mathematics) is a 2003 book by Marcus du Sautoy, a professor in mathematics at the University of Oxford, on the history of prime number theory. In particular he examines the Riemann hypothesis, the proof of which would revolutionize our understanding of prime numbers. He traces the prime number theorem back through history, highlighting the work of some of the greatest mathematical minds along the way. The cover design for the hardback version of the book contains several pictorial depictions of prime numbers, such as the number 73 bus. It also has an image of a clock, referring to clock arithmetic, which is a significant theme in the text. References 2003 non-fiction books Mathematics books Analytic number theory Prime numbers Fourth Estate books
https://en.wikipedia.org/wiki/Course-of-values%20recursion
In computability theory, course-of-values recursion is a technique for defining number-theoretic functions by recursion. In a definition of a function f by course-of-values recursion, the value of f(n) is computed from the sequence f(0), f(1), ..., f(n−1). The fact that such definitions can be converted into definitions using a simpler form of recursion is often used to prove that functions defined by course-of-values recursion are primitive recursive. Contrary to course-of-values recursion, in primitive recursion the computation of a value of a function requires only the previous value; for example, for a 1-ary primitive recursive function g the value of g(n+1) is computed only from g(n) and n. Definition and examples The factorial function n! is recursively defined by the rules 0! = 1, (n+1)! = (n+1) · n!. This recursion is a primitive recursion because it computes the next value (n+1)! of the function based on the value of n and the previous value n! of the function. On the other hand, the function Fib(n), which returns the nth Fibonacci number, is defined with the recursion equations Fib(0) = 0, Fib(1) = 1, Fib(n+2) = Fib(n+1) + Fib(n). In order to compute Fib(n+2), the last two values of the Fib function are required. Finally, consider the function g defined with the recursion equations To compute g(n+1) using these equations, all the previous values of g must be computed; no fixed finite number of previous values is sufficient in general for the computation of g. The functions Fib and g are examples of functions defined by course-of-values recursion. In general, a function f is defined by course-of-values recursion if there is a fixed primitive recursive function h such that for all n, f(n) = h(n, ⟨f(0), f(1), ..., f(n−1)⟩), where ⟨f(0), f(1), ..., f(n−1)⟩ is a Gödel number encoding the indicated sequence. In particular f(0) = h(0, ⟨⟩), the value of h on the empty sequence, provides the initial value of the recursion. The function h might test its first argument to provide explicit initial values, for instance for Fib one could use the function defined by h(n, s) = n if n < 2 and h(n, s) = s[n−1] + s[n−2] otherwise, where s[i] denotes extraction of the element i from an encoded sequence s; this is easily seen to be a primitive
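A small Python sketch (not part of the article) of the scheme just described: the helper h is handed the entire history of previously computed values, standing in for the Gödel-encoded sequence. The second helper is a hypothetical function in the spirit of g above, since the excerpt does not show g's actual equations.

# Course-of-values recursion: f(n) is computed from the whole history
# f(0), ..., f(n-1), supplied here as a Python list instead of an encoded sequence.
def course_of_values(h, n):
    history = []                  # plays the role of the sequence <f(0), ..., f(i-1)>
    for i in range(n + 1):
        history.append(h(i, history))
    return history[n]

# Fibonacci needs only the last two values of the history...
def h_fib(i, history):
    return i if i < 2 else history[i - 1] + history[i - 2]

# ...whereas this hypothetical function needs all of them.
def h_sum_all(i, history):
    return 1 if i == 0 else sum(history)

print([course_of_values(h_fib, n) for n in range(10)])      # 0 1 1 2 3 5 8 13 21 34
print([course_of_values(h_sum_all, n) for n in range(6)])   # 1 1 2 4 8 16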
https://en.wikipedia.org/wiki/161%20%28number%29
161 (one hundred [and] sixty-one) is the natural number following 160 and preceding 162. In mathematics 161 is the sum of five consecutive prime numbers: 23, 29, 31, 37, and 41 161 is a hexagonal pyramidal number. 161 is a semiprime. Since its prime factors 7 and 23 are both Gaussian primes, 161 is a Blum integer. 161 is a palindromic number 161/72 is a commonly used rational approximation of the square root of 5 and is the closest fraction with denominator <300 to that number. In the military was a U.S. Navy Type T2 tanker during World War II was a U.S. Navy during World War II was a U.S. Navy Trefoil-class concrete barge during World War II was a U.S. Navy during World War II was a U.S. Navy during World War II was a U.S. Navy during World War II was a U.S. Navy wooden yacht during World War I was a U.S. Navy during World War II was a U.S. Navy Achomawi-class fleet ocean tug following World War II was a U.S. Navy fourth-group S-class submarine between 1920 and 1931 is a fictional U.S. Navy diesel engine submarine featured in the 1996 film Down Periscope The 161st Intelligence Squadron, a unit of the Kansas Air National Guard. Its parent unit is the 184th Intelligence Wing In music The Bose 161 Speaker System (2001) The Kay K-161 ThinTwin guitar In transportation MTA Maryland commuter bus 161 New Jersey Bus Route 161 London Bus route 161 In other fields 161 is also: The year AD 161 or 161 BC 161 AH is a year in the Islamic calendar that corresponds to 777 – 778 CE 161 Athor is an M-type Main belt asteroid E.161 is an ITU-T recommendation that assigns letters to the 12-key telephone keypad Fiorina Fury 161 is a foundry facility and penal colony from the film Alien 3 161 is used by Anti Fascist Action as a code for AFA (A=1, F=6, by order of the alphabet), sometimes used in 161>88 (88 is code for Heil Hitler among neo-nazis, as H=8) See also Anti-Fascist Action List of highways numbered 161 United Nations Security Council Resolution 161 Unite
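The arithmetic claims above are easy to check directly; the short sketch below is an illustration (not from the article), using the standard formula n(n+1)(4n−1)/6 for the nth hexagonal pyramidal number.

# Checking some of the arithmetic claims about 161.
from math import isqrt, sqrt

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, isqrt(n) + 1))

consecutive = [23, 29, 31, 37, 41]
# They are consecutive primes (no prime lies strictly between neighbours) and sum to 161.
assert all(is_prime(p) for p in consecutive)
assert not any(is_prime(m) for a, b in zip(consecutive, consecutive[1:]) for m in range(a + 1, b))
print(sum(consecutive))                       # 161

print(7 * 23)                                 # 161 is a semiprime, 7 x 23
n = 6
print(n * (n + 1) * (4 * n - 1) // 6)         # 161 is the 6th hexagonal pyramidal number
print(round(161 / 72, 5), round(sqrt(5), 5))  # 2.23611 vs 2.23607 - a close approximation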
https://en.wikipedia.org/wiki/Madge%20Networks
Madge Networks NV was a networking technology company founded by Robert Madge, and is best known for its work with Token Ring. It was a global leader and pioneer of high-speed networking solutions in the mid-1990s, and also made significant contributions to technologies such as Asynchronous Transfer Mode (ATM) and Ethernet. The company filed for bankruptcy in April 2003. The operational business of the company is currently trading as Madge Ltd. in the UK. Under a deal with Network Technology PLC, the company acquired the rights and copyright to Madge's products, brand and website, as well as the remaining inventory. The assets will be absorbed by Network Technology PLC subsidiary Ringdale Limited, making them the world's largest supplier of Token Ring technology. Technology development Madge Networks was once one of the world's leading suppliers of networking hardware. Headquartered in Wexham, England, Madge Networks developed Token Ring, Ethernet, ATM, ISDN, and other products providing extensive networking solutions. The company's products ranged from ISA/PCI network adapters for personal computers to work group switching hubs, routers, and ISDN backbone carriers. Madge focus was to provide convergence solutions in Ethernet, Token Ring, ISDN, and the then emerging ATM networking technologies. In addition to its Wexham headquarters, Madge operated main offices in Eatontown, New Jersey and San Jose, California, as well as offices in more than 25 countries throughout the world. Founded in 1986, Madge Networks was a pioneer in the networking market, the emergence of which went on to define internal and external communications among corporations in every industry. Madge Networks was one of the world's leading proponents of Token Ring technology, producing ISA, PCI, and PC card adapters, switches, stacks, and other devices required for its implementation of Token Ring technology. In the late 1990s Madge Networks had taken a leading role in developing the standards
https://en.wikipedia.org/wiki/Stringent%20response
The stringent response, also called stringent control, is a stress response of bacteria and plant chloroplasts in reaction to amino-acid starvation, fatty acid limitation, iron limitation, heat shock and other stress conditions. The stringent response is signaled by the alarmone (p)ppGpp, and modulates transcription of up to 1/3 of all genes in the cell. This in turn causes the cell to divert resources away from growth and division and toward amino acid synthesis in order to promote survival until nutrient conditions improve. Response In Escherichia coli, (p)ppGpp production is mediated by the ribosomal protein L11 (rplK resp. relC) and the ribosome-associated (p)ppGpp synthetase I, RelA; deacylated tRNA bound in the ribosomal A-site is the primary induction signal. RelA converts GTP and ATP into pppGpp by adding the pyrophosphate from ATP onto the 3' carbon of the ribose in GTP, releasing AMP. pppGpp is converted to ppGpp by the gpp gene product, releasing Pi. ppGpp is converted to GDP by the spoT gene product, releasing pyrophosphate (PPi). GDP is converted to GTP by the ndk gene product. Nucleoside triphosphate (NTP) provides the Pi, and is converted to Nucleoside diphosphate (NDP). In other bacteria, the stringent response is mediated by a variety of RelA/SpoT Homologue (RSH) proteins, with some having only synthetic, or hydrolytic or both (Rel) activities. During the stringent response, (p)ppGpp accumulation affects the resource-consuming cell processes replication, transcription, and translation. (p)ppGpp is thought to bind RNA polymerase and alter the transcriptional profile, decreasing the synthesis of translational machinery (such as rRNA and tRNA), and increasing the transcription of biosynthetic genes. Additionally, the initiation of new rounds of replication is inhibited and the cell cycle arrests until nutrient conditions improve. Translational GTPases involved in protein biosynthesis are also affected by ppGpp, with Initiation Factor 2 (IF2) being
https://en.wikipedia.org/wiki/CPU%20modes
CPU modes (also called processor modes, CPU states, CPU privilege levels and other names) are operating modes for the central processing unit of some computer architectures that place restrictions on the type and scope of operations that can be performed by certain processes being run by the CPU. This design allows the operating system to run with more privileges than application software. Ideally, only highly trusted kernel code is allowed to execute in the unrestricted mode; everything else (including non-supervisory portions of the operating system) runs in a restricted mode and must use a system call (via interrupt) to request the kernel perform on its behalf any operation that could damage or compromise the system, making it impossible for untrusted programs to alter or damage other programs (or the computing system itself). In practice, however, system calls take time and can hurt the performance of a computing system, so it is not uncommon for system designers to allow some time-critical software (especially device drivers) to run with full kernel privileges. Multiple modes can be implemented—allowing a hypervisor to run multiple operating system supervisors beneath it, which is the basic design of many virtual machine systems available today. Mode types The unrestricted mode is often called kernel mode, but many other designations exist (master mode, supervisor mode, privileged mode, etc.). Restricted modes are usually referred to as user modes, but are also known by many other names (slave mode, problem state, etc.). Kernel In kernel mode, the CPU may perform any operation allowed by its architecture; any instruction may be executed, any I/O operation initiated, any area of memory accessed, and so on. In the other CPU modes, certain restrictions on CPU operations are enforced by the hardware. Typically, certain instructions are not permitted (especially those—including I/O operations—that could alter the global state of the machine), some memory are
https://en.wikipedia.org/wiki/Boot%20ROM
The boot ROM is a type of ROM that is used for booting a computer system. There are two types: a mask boot ROM that cannot be changed afterwards and a boot EEPROM, which can contain an UEFI implementation. Purpose Upon power up, hardware usually starts uninitialized. To continue booting, the system may need to read a bootloader from some peripheral device. It is often easier to implement routines for reading from external storage devices in software than in hardware. A boot ROM provides a place to store this initial loading code, at a fixed location immediately available to the processor when execution starts. Operation The boot ROM is mapped into memory at a fixed location, and the processor is designed to start executing from this location after reset. Usually, it is placed on the same die as the CPU, but it can also be an external ROM chip, as is common in older systems. The boot ROM will then initialize the hardware busses and peripherals needed to boot. In some cases the boot ROM is capable of initializing RAM, and in other cases it is up to the bootloader to do that. At the end of the hardware initialization, the boot ROM will try to load a bootloader from external peripheral(s) (like an eMMC, a microSD card, an external EEPROM, and so on) or through specific protocol(s) on a bus for data transmission (like USB, UART, etc). In many systems on a chip, the peripherals or buses from which the boot ROM tries to load the bootloader (such as eMMC for embedded bootloader, or external EEPROM for UEFI implementation), and the order in which they are loaded, can be configured. This configuration can be done by blowing some electronic fuses inside the system on a chip to encode that information, or by having specific pins or jumpers of the system on a chip high or low. Some boot ROMs are capable of checking the digital signature of the bootloader and will refuse to run the bootloader and stop the boot if the signature is not valid or has not been signed with an
https://en.wikipedia.org/wiki/Oldest%20people
This is a list of tables of the oldest people in the world in ordinal ranks. To avoid including false or unconfirmed claims of old age, names here are restricted to those people whose ages have been validated by an international body dealing in longevity research, such as the Gerontology Research Group (GRG) or Guinness World Records (GWR), and others who have otherwise been reliably sourced. The longest documented and verified human lifespan is that of Jeanne Calment of France (1875–1997), a woman who lived to age 122 years and 164 days. She claimed to have met Vincent van Gogh when she was 12 or 13. She received news media attention in 1985, after turning 110. Calment's claim was investigated and authenticated by Jean-Marie Robine and Dr Michel Allard for the GRG. Her longevity claim was put into question in 2018, but the original assessing team stood by their judgement. As females live longer than males on average, women predominate in combined records. The longest lifespan for a man is that of Jiroemon Kimura of Japan (1897–2013), who lived to age 116 years and 54 days. The oldest living person in the world whose age has been validated is Maria Branyas of Spain, born 4 March 1907. The world's oldest known living man is Juan Vicente Pérez of Venezuela, born 27 May 1909. Academics have hypothesized the existence of a number of blue zones around the world where people live longer than average. Ten oldest verified people ever Systematic verification of longevity has only been practiced since the 1950s and only in certain parts of the world. All ten oldest verified people ever are female. Oldest people (all women) Oldest men a Mortensen was born in Denmark. b Kristal was born in Maleniec, Końskie County, then part of the Russian Empire, now in Poland. Ten oldest living people Oldest living people c Branyas was born in the United States. Oldest living men Chronological list of the oldest known living person since 1954 This table l
https://en.wikipedia.org/wiki/Argus%20%E2%80%93%20Audit%20Record%20Generation%20and%20Utilization%20System
Argus – the Audit Record Generation and Utilization System is the first implementation of network flow monitoring, and is an ongoing open source network flow monitor project. Started by Carter Bullard in 1984 at Georgia Tech, and developed for cyber security at Carnegie Mellon University in the early 1990s, Argus has been an important contributor to Internet cyber security technology over its 30 years. . The Argus Project is focused on developing all aspects of large scale network situational awareness and network audit trail establishment in support of Network Operations (NetOps), Performance and Security Management. Motivated by the telco Call detail record (CDR), Argus attempts to generate network metadata that can be used to perform a large number of network management tasks. Argus is used by many universities, corporations and government entities including US DISA, DoD, DHS, FFRDCs, GLORIAD and is a Top 100 Internet Security Tool. Argus is designed to be a real-time situational awareness system, and its data can be used to track, alarm and alert on wire-line network conditions. The data can also be used to establish a comprehensive audit of all network traffic, as described in the Red Book, US DoD NCSC-TG-005, supplementing traditional Intrusion detection system (IDS) based network security. The audit trail is traditionally used as historical network traffic measurement data for network forensics and Network Behavior Anomaly Detection (NBAD). Argus has been used extensively in cybersecurity, end-to-end performance analysis, and more recently, software-defined networking (SDN) research. Argus has also been a topic in network management standards development. RMON (1995) and IPFIX (2001). Argus is composed of an advanced comprehensive network flow data generator, the Argus monitor, which processes packets (either capture files or live packet data) and generates detailed network traffic flow status reports of all the flows in the packet stream. Arg
https://en.wikipedia.org/wiki/The%20Sacred%20Armour%20of%20Antiriad
The Sacred Armour of Antiriad is an action-adventure game published by Palace Software in 1986 for Amstrad CPC, Commodore 64, DOS, TRS-80, and ZX Spectrum. In North America, the game was published by Epyx as Rad Warrior. The original game came with a 16-page comic book created by graphic artist Daniel Malone. The game is an early example of the Metroidvania genre, being developed without knowledge of and concurrently with Metroid. Plot In 2086, civilization destroys itself in a nuclear Armageddon, as two factions who both develop an anti-radiation battlesuit completely immune to conventional weapons go to war against each other when diplomatic peace talks break down. In the following millennia, the survivors develop into a hardy but peaceful race, living a quiet agricultural existence. One day, mysterious alien forces emerge from an old volcano containing a pre-war military base and attack, quickly conquering and enslaving the new breed of humans, and forcing the populace to work in mines. Many rebel against the mysterious overlords and one of these rebels, Tal, is instructed by his elders to seek out a legendary armoured suit - the Sacred Armour of Antiriad (the last word being a corruption of "anti-radiation"), which is in fact one of the pre-war battlesuits whose development originally instigated the diplomatic crisis that started the nuclear war. This armour is rumoured to render the wearer impervious to attack and, with its help, Tal hopes to defeat and overthrow the alien rulers of Earth. However, the armour requires other equipment to be added to it in order to make it function fully. These include anti-gravity boots, a particle negator, a pulsar beam, and an implosion mine. The last add-on is the most important as it is the one needed to destroy the volcano the enemy uses as its base. Gameplay The Sacred Armour of Antiriad is a mixture of a platform and maze game. The player controls Tal who, at the start, is simply a man dressed in a loincloth with t
https://en.wikipedia.org/wiki/Network%20Computer%20Reference%20Profile
Network Computer Reference Profile (NC reference profile, NCRP) was a specification for a network computer put forward by Oracle Corporation, endorsed by Sun Microsystems, IBM, Apple Computer, and Netscape, and finalized in 1996. NC1 The first version of this specification was known as the NC1 Reference Profile. NCRP specified minimum hardware requirements and software protocols. Among the software requirements were support for IP-based protocols (TCP/IP, FTP, etc.), web standards (HTTP, HTML, Java), email protocols, multimedia file formats, and security standards. Operating systems used were NCOS or JavaOS. The minimum hardware requirements were: a minimum screen resolution of 640 x 480 (VGA) or equivalent; a pointing device; text input ability; and audio output. Although this initial NC standard was intended to promote the diskless workstation model of computing, it did not preclude computers with additional features, such as the ability to operate either as a diskless workstation or a conventional fat client. Thus, an ordinary personal computer (PC) having all the required features could technically be classified as a Network Computer; indeed, Sun noted that contemporary PCs did indeed meet the NC reference requirements. StrongARM The reference profile was subsequently revised to use the StrongARM processor. Intel After a trip by Larry Ellison to Acer Group headquarters in 1996, he realised the importance to industry of having products based on Intel (x86-compatible) processors. NCI president Jerry Baker noted that "nobody [corporate users] had ever heard of the ARM chip". Options Many NCs operated via protocols such as BOOTP, DHCP, RARP and NFS. Both for ISP-bound and LAN-based reference implementation NCs, a smartcard option was available. This allowed user authentication to be performed in a secure manner, with SSL providing transport security. The smartcard also provided minimal local storage for ISP dialup configuration settings. This configuration data was not
https://en.wikipedia.org/wiki/Enclosure%20Services%20Interface
The Enclosure Services Interface (ESI) is a computer protocol used in SCSI enclosures. This is part of a chain of connections that allows a host computer to communicate with the enclosure to access its power, cooling, and other non-data characteristics. This overall approach is called SCSI attached enclosure services: The host computer communicates with the disks in the enclosure via a Serial SCSI interface (which may be either FC-AL or SAS). One of the disk devices located in the enclosure is set up to allow SCSI Enclosure Services (SES) communication through a LUN. The disk-drive then communicates with the SES processor in the enclosure via ESI. The data sent over the ESI interface is simply the contents of a SCSI command and the response to that command. In fault-tolerant enclosures, more than one disk-drive slot has ESI enabled to allow SES communications to continue even after the failure of any of the disk-drives. ESI electrical interface The ESI interface was designed to make use of the seven existing "SEL_n" address signals which are used at power-on time for establishing the address (ALPA) of a disk-drive. An extra eighth signal called "-PARALLEL ESI" is used to switch the function of the SEL_n signals. ESI command sequence A SCSI Send Diagnostic command or Receive Diagnostic Results command is sent from the host computer to the disk-drive to initiate an SES transfer. The Disk-drive then asserts "-PARALLEL ESI" to begin this sequence of ESI bus phases: Finally, the disk-drive deasserts "-PARALLEL ESI". The above sequence is just a simple implementation of a 4-bit wide parallel interface which is used to execute a SCSI transaction. If the CDB is for a Send Diagnostic command then the data is sent to a SCSI diagnostic page in the enclosure. If the CDB is for a SCSI Receive Diagnostic Results command then the data is received from a SCSI diagnostic page. No other CDB types are allo
https://en.wikipedia.org/wiki/Alt%20code
On personal computers with numeric keypads that use Microsoft operating systems, such as Windows, many characters that do not have a dedicated key combination on the keyboard may nevertheless be entered using the Alt code (the Alt numpad input method). This is done by pressing and holding the key, then typing a number on the keyboard's numeric keypad that identifies the character and then releasing . History and description MS DOS On IBM PC compatible personal computers from the 1980s, the BIOS allowed the user to hold down the key and type a decimal number on the keypad. It would place the corresponding code into the keyboard buffer so that it would look (almost) as if the code had been entered by a single keystroke. Applications reading keystrokes from the BIOS would behave according to what action they associate with that code. Some would interpret the code as a command, but often it would be interpreted as an 8-bit character from the current code page that was inserted into the text the user was typing. On the original IBM PC the code page was CP437. Some Eastern European, Arabic and Asian computers used other hardware code pages, and MS-DOS was able to switch between them at runtime with commands like KEYB, CHCP or MODE. This causes the Alt combinations to produce different characters (as well as changing the display of any previously-entered text in the same manner). A common choice in locales using variants of the Latin alphabet was CP850, which provided more Latin character variants. (There were, however, many more code pages; for a more complete list, see code page). PC keyboards designed for non-English use included other methods of inserting these characters, such as national keyboard layouts, the AltGr key or dead keys, but the Alt key was the only method of inserting some characters, and the only method that was the same on all machines, so it remained very popular. This input method is emulated by many pieces of software (such as later versions
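For example, under code page 437 the Alt code 130 produces é; the byte-to-character mapping can be reproduced with Python's codecs (an illustration, not part of the original text), which also shows why the displayed character depends on the active code page.

# The Alt code is interpreted as a byte value in the active code page.
# Alt+130 under code page 437 (the original IBM PC code page) yields e with an acute accent.
print(bytes([130]).decode("cp437"))   # é
# The same byte decodes identically under code page 850, but differently under
# other code pages, which is why the result depends on the locale.
print(bytes([130]).decode("cp850"))   # é
print(bytes([130]).decode("cp866"))   # В (Cyrillic), illustrating the code-page dependence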
https://en.wikipedia.org/wiki/AMPRNet
The AMPRNet (AMateur Packet Radio Network) or Network 44 is used in amateur radio for packet radio and digital communications between computer networks managed by amateur radio operators. Like other amateur radio frequency allocations, an IP range of 44.0.0.0/8 was provided in 1981 for Amateur Radio Digital Communications (a generic term) and self-administered by radio amateurs. In 2001, undocumented dual-use of the address block as a network telescope began, recording the spread of the Code Red II worm in July 2001. In mid-2019, part of the IPv4 range was sold off for conventional use, due to IPv4 address exhaustion. Amateur Radio Digital Communications (mode) Beginning on 1 May 1978, the Canadian authorities allowed radio amateurs on the 1.25-meter band (220 MHz) to use packet radio, and later in 1978 announced the "Amateur Digital Radio Operator's Certificate". Discussion on digital communication amateur radio modes, using the Internet protocol suite and IPv4 addresses, followed subsequently. By 1988, one thousand assignments of address space had been made. Approximately 1% of inbound traffic volume to the network was legitimate radio amateur traffic that could be routed onwards, with the remaining 2‒100 gigabytes per day of Internet background noise being diverted and logged by the University of California San Diego (UCSD) internet telescope for research purposes. By 2016, the European-based High-speed Amateur-radio Multimedia NETwork (HAMNET) offered a multi-megabit Internet Protocol network with 4,000 nodes, covering central Europe. History and design The use of the Internet protocols TCP/IP on amateur (ham) radio occurred early in Internet history, preceding the public Internet by over a decade. In 1981, Hank Magnuski obtained the class A netblock of 16.7 million IP addresses for amateur radio users worldwide. This was prior to Internet flag day (1 January 1983) when the ARPANET Network Control Protocol (NCP) was replaced by the Transmission Control Protocol (TCP). The initial
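A small sketch (not from the article) using Python's ipaddress module to show the size of the Network 44 block and to test whether an address falls inside it.

import ipaddress

# "Network 44": the class A block historically assigned for amateur packet radio.
amprnet = ipaddress.ip_network("44.0.0.0/8")
print(amprnet.num_addresses)                            # 16777216 (about 16.7 million)
print(ipaddress.ip_address("44.10.20.30") in amprnet)   # True
print(ipaddress.ip_address("45.10.20.30") in amprnet)   # False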
https://en.wikipedia.org/wiki/Carrier-to-noise%20ratio
In telecommunications, the carrier-to-noise ratio, often written CNR or C/N, is the signal-to-noise ratio (SNR) of a modulated signal. The term is used to distinguish the CNR of the radio frequency passband signal from the SNR of an analog base band message signal after demodulation. For example, with FM radio, the strength of the 100 MHz carrier with modulations would be considered for CNR, whereas the audio frequency analogue message signal would be for SNR; in each case, compared to the apparent noise. If this distinction is not necessary, the term SNR is often used instead of CNR, with the same definition. Digitally modulated signals (e.g. QAM or PSK) are basically made of two CW carriers (the I and Q components, which are out-of-phase carriers). In fact, the information (bits or symbols) is carried by given combinations of phase and/or amplitude of the I and Q components. It is for this reason that, in the context of digital modulations, digitally modulated signals are usually referred to as carriers. Therefore, the term carrier-to-noise-ratio (CNR), instead of signal-to-noise-ratio (SNR), is preferred to express the signal quality when the signal has been digitally modulated. High C/N ratios provide good quality of reception, for example low bit error rate (BER) of a digital message signal, or high SNR of an analog message signal. Definition The carrier-to-noise ratio is defined as the ratio of the received modulated carrier signal power C to the received noise power N after the receiver filters: CNR = C/N. When both carrier and noise are measured across the same impedance, this ratio can equivalently be given as: CNR = (V_C / V_N)^2, where V_C and V_N are the root mean square (RMS) voltage levels of the carrier signal and noise respectively. C/N ratios are often specified in decibels (dB): CNR_dB = 10 log10(C/N), or in terms of voltage: CNR_dB = 20 log10(V_C / V_N). Measurements and estimation The C/N ratio is measured in a manner similar to the way the signal-to-noise ratio (S/N) is measured, and both specifications give an indication of
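The decibel forms above translate directly into code; a brief sketch (not from the article) with made-up power and voltage figures.

import math

def cnr_db_from_power(carrier_power_w, noise_power_w):
    """Carrier-to-noise ratio in decibels from power measurements."""
    return 10.0 * math.log10(carrier_power_w / noise_power_w)

def cnr_db_from_voltage(v_carrier_rms, v_noise_rms):
    """Same ratio from RMS voltages across the same impedance (hence the factor 20)."""
    return 20.0 * math.log10(v_carrier_rms / v_noise_rms)

# Example: a carrier 1000 times stronger than the noise in power terms is 30 dB;
# the corresponding RMS voltage ratio is sqrt(1000), about 31.6.
print(cnr_db_from_power(1e-6, 1e-9))           # 30.0
print(cnr_db_from_voltage(31.6228e-3, 1e-3))   # ~30.0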
https://en.wikipedia.org/wiki/Piecewise%20syndetic%20set
In mathematics, piecewise syndeticity is a notion of largeness of subsets of the natural numbers. A set S ⊆ ℕ is called piecewise syndetic if there exists a finite subset G of ℕ such that for every finite subset F of ℕ there exists an x ∈ ℕ such that x + F ⊆ ⋃_{n ∈ G} (S − n), where S − n = {m ∈ ℕ : m + n ∈ S}. Equivalently, S is piecewise syndetic if there is a constant b such that there are arbitrarily long intervals of ℕ where the gaps in S are bounded by b. Properties A set is piecewise syndetic if and only if it is the intersection of a syndetic set and a thick set. If S is piecewise syndetic then S contains arbitrarily long arithmetic progressions. A set S is piecewise syndetic if and only if there exists some ultrafilter U which contains S and U is in the smallest two-sided ideal of βℕ, the Stone–Čech compactification of the natural numbers. Partition regularity: if S is piecewise syndetic and S = C1 ∪ C2 ∪ ... ∪ Cn, then for some i ≤ n, Ci contains a piecewise syndetic set. (Brown, 1968) If A and B are subsets of ℕ with positive upper Banach density, then A + B = {a + b : a ∈ A, b ∈ B} is piecewise syndetic. Other notions of largeness There are many alternative definitions of largeness that also usefully distinguish subsets of natural numbers: Cofiniteness IP set member of a nonprincipal ultrafilter positive upper density syndetic set thick set See also Ergodic Ramsey theory Notes References Semigroup theory Ergodic theory Ramsey theory Combinatorics
https://en.wikipedia.org/wiki/Richard%20Klein%20%28paleoanthropologist%29
Richard G. Klein (born April 11, 1941) is a Professor of Biology and Anthropology at Stanford University. He is the Anne T. and Robert M. Bass Professor in the School of Humanities and Sciences. He earned his PhD at the University of Chicago in 1966, and was elected to the National Academy of Sciences in April 2003. His research interests include paleoanthropology, Africa and Europe. His primary thesis is that modern humans evolved in East Africa, perhaps 100,000 years ago and, starting 50,000 years ago, began spreading throughout the non-African world, replacing archaic human populations over time. He is a critic of the idea that behavioral modernity arose gradually over the course of tens of thousands, hundreds of thousands of years or millions of years, instead supporting the view that modern behavior arose suddenly in the transition from the Middle Stone Age to the Later Stone Age around 50-40,000 years ago. Early life and education Klein was born in 1941 in Chicago, and went to college at the University of Michigan, Ann Arbor. In 1962, he enrolled as a graduate student at the University of Chicago to study with the Neanderthal expert, Francis Clark Howell. Of the two theories in vogue then, that Neanderthals had evolved into the Cro-Magnons of Europe or that they had been replaced by the Cro-Magnons, Klein favored the replacement theory. Klein completed a master's degree in 1964, and then studied at the University of Bordeaux with François Bordes, who specialized in prehistory. There he visited the La Quina and La Ferrassie caves in southwest France, containing Cro-Magnon artifacts layered on top of Neanderthal ones. These visits influenced him into believing the shift from Neanderthal to modern humans 40,000 to 35,000 years ago was sudden rather than gradual. Klein also visited Russia to examine artifacts. Klein briefly held positions at the University of Wisconsin–Milwaukee, Northwestern University in Evanston, Illinois, and the University of Washington, S
https://en.wikipedia.org/wiki/Thick%20set
In mathematics, a thick set is a set of integers that contains arbitrarily long intervals. That is, given a thick set T, for every p ∈ ℕ, there is some n ∈ ℕ such that {n, n+1, n+2, ..., n+p} ⊆ T. Examples Trivially, ℕ itself is a thick set. Other well-known sets that are thick include non-primes and non-squares. Thick sets can also be sparse; for example, a set containing ever longer runs of consecutive integers separated by ever larger gaps is thick but has density zero. Generalisations The notion of a thick set can also be defined more generally for a semigroup, as follows. Given a semigroup S and A ⊆ S, A is said to be thick if for any finite subset F ⊆ S, there exists x ∈ S such that Fx = {fx : f ∈ F} ⊆ A. It can be verified that when the semigroup under consideration is the natural numbers ℕ with the addition operation +, this definition is equivalent to the one given above. See also Cofinal (mathematics) Cofiniteness Ergodic Ramsey theory Piecewise syndetic set Syndetic set References J. McLeod, "Some Notions of Size in Partial Semigroups", Topology Proceedings, Vol. 25 (Summer 2000), pp. 317–332. Vitaly Bergelson, "Minimal Idempotents and Ergodic Ramsey Theory", Topics in Dynamics and Ergodic Theory 8–39, London Math. Soc. Lecture Note Series 310, Cambridge Univ. Press, Cambridge, (2003) Vitaly Bergelson, N. Hindman, "Partition regular structures contained in large sets are abundant", Journal of Combinatorial Theory, Series A 93 (2001), pp. 18–36 N. Hindman, D. Strauss. Algebra in the Stone-Čech Compactification. p104, Def. 4.45. Semigroup theory Ergodic theory
https://en.wikipedia.org/wiki/WDC%2065C21
The W65C21S is a very flexible Peripheral Interface Adapter (PIA) for use with WDC’s 65xx and other 8-bit microprocessor families. It is produced by Western Design Center (WDC). The W65C21S provides programmed microprocessor control of up to two peripheral devices (Port A and Port B). Peripheral device control is accomplished through two 8-bit bidirectional I/O Ports, with individually designed Data Direction Registers. The Data Direction Registers provide selection of data flow direction (input or output) at each respective I/O Port. Data flow direction may be selected on a line-by-line basis with intermixed input and output lines within the same port. The “handshake” interrupt control feature is provided by four peripheral control lines. This capability provides enhanced control over data transfer functions between the microprocessor and peripheral devices, as well as bidirectional data transfer between W65C21S Peripheral Interface Adapters in multiprocessor systems. The PIA interfaces to the 65xx microprocessor family with a reset line, a ϕ2 clock line, a read/write line, two interrupt request lines, two register select lines, three chip select lines and an 8-bit bidirectional data bus. The PIA interfaces to the peripheral devices with four interrupt/control lines and two 8-bit bidirectional buses. The W65C21S PIA is organized into two independent sections referred to as the A Side and the B Side. Each section consists of Control Register (CRA, CRB), Data Direction Register (DDRA, DDRB), Output Register (ORA, ORB), Interrupt Status Control (ISCA, ISCB) and the buffers necessary to drive the Peripheral Interface buses. Data Bus Buffers (DBB) interface data from the two sections to the data bus, while the Date Input Register (DIR) interfaces data from the DBB to the PIA registers. Chip Select and RWB control circuitry interface to the processor bus control lines. Features of the W65C21S Low power CMOS N-well silicon gate technology High speed/Low power replace
https://en.wikipedia.org/wiki/Vop%C4%9Bnka%27s%20principle
In mathematics, Vopěnka's principle is a large cardinal axiom. The intuition behind the axiom is that the set-theoretical universe is so large that in every proper class, some members are similar to others, with this similarity formalized through elementary embeddings. Vopěnka's principle was first introduced by Petr Vopěnka and independently considered by H. Jerome Keisler, and was written up by . According to , Vopěnka's principle was originally intended as a joke: Vopěnka was apparently unenthusiastic about large cardinals and introduced his principle as a bogus large cardinal property, planning to show later that it was not consistent. However, before publishing his inconsistency proof he found a flaw in it. Definition Vopěnka's principle asserts that for every proper class of binary relations (each with set-sized domain), there is one elementarily embeddable into another. This cannot be stated as a single sentence of ZFC as it involves a quantification over classes. A cardinal κ is called a Vopěnka cardinal if it is inaccessible and Vopěnka's principle holds in the rank Vκ (allowing arbitrary S ⊂ Vκ as "classes"). Many equivalent formulations are possible. For example, Vopěnka's principle is equivalent to each of the following statements. For every proper class of simple directed graphs, there are two members of the class with a homomorphism between them. For any signature Σ and any proper class of Σ-structures, there are two members of the class with an elementary embedding between them. For every predicate P and proper class S of ordinals, there is a non-trivial elementary embedding j:(Vκ, ∈, P) → (Vλ, ∈, P) for some κ and λ in S. The category of ordinals cannot be fully embedded in the category of graphs. Every subfunctor of an accessible functor is accessible. (In a definable classes setting) For every natural number n, there exists a C(n)-extendible cardinal. Strength Even when restricted to predicates and proper classes definable in first o
https://en.wikipedia.org/wiki/WDC%2065C22
The W65C22 versatile interface adapter (VIA) is an input/output device for use with the 65xx series microprocessor family. Overview Designed by the Western Design Center, the W65C22 is made in two versions, both of which are rated for 14 megahertz operation, and available in DIP-40 or PLCC-44 packages. W65C22N: This device is fully compatible with the NMOS 6522 produced by MOS Technology and others, and includes current-limiting resistors on its output lines. The W65C22N has an open-drain interrupt output (the pin) that is compatible with a wired-OR interrupt circuit. Hence the DIP-40 version is a "drop-in" replacement for the NMOS part. W65C22S: This device is fully software– and partially hardware–compatible with the NMOS part. The W65C22S' output is a totem pole configuration, and thus cannot be directly connected to a wired-OR interrupt circuit. As with the NMOS 6522, the W65C22 includes functions for programmed control of two peripheral ports (ports A and B). Two program–controlled 8-bit bi-directional peripheral I/O ports allow direct interfacing between the microprocessor and selected peripheral units. Each port has input data latching capability. Two programmable data direction registers (A and B) allow selection of data direction (input or output) on an individual I/O line basis. Also provided are two programmable 16-bit interval timer/counters with latches. Timer 1 may be operated in a one-shot or free-run mode. In either mode, a timer can generate an interrupt when it has counted down to zero. Timer 2 functions as an interval counter or a pulse counter. If operating as an interval counter, timer 2 is driven by the microprocessor's PHI2 clock source. As a pulse counter, timer 2 is triggered by an external pulse source on the chip's line. Serial data transfers are provided by a serial to parallel/parallel to serial shift register, with bit transfers synchronized with the PHI2 clock. Application versatility is further increased by various con
https://en.wikipedia.org/wiki/WDC%2065C51
The CMOS W65C51 Asynchronous Communications Interface Adapter (ACIA) provides an easily implemented, program controlled interface between microprocessor based systems and serial communication data sets and modems. It is produced by Western Design Center (WDC) and is a drop-in replacement for the MOS Technology 6551. The ACIA has an internal baud rate generator, eliminating the need for multiple component support circuits. The transmitter rate can be selected under program control to be 1 of 15 different rates from 50 to 19,200 bits per second, or at 1/16 times an external clock rate. The receiver rate may be selected under program control to be either the Transmitter rate or at 1/16 times the external clock rate. The ACIA has programmable word lengths of 5, 6, 7 or 8 bits; even, odd or no parity 1, 1½ or 2 stop bits. The ACIA is designed for maximum programmed control from the microprocessor (MPU) to simplify hardware implementation. Three separate registers permit an MPU to easily select the W65C51 operating modes, data checking parameters and determine operational status. The command register controls parity, receiver echo mode, transmitter interrupt control, the state of the RTS line, receiver interrupt control and the state of the DTR line. The control register controls the number of stop bits, the word length, receiver clock source and transmit/receive rate. The status register indicates the status of the IRQ, DSR and DCD lines, transmitter and receiver data Registers, and overrun, framing and parity error conditions. Transmitter and receiver data Registers are used for temporary data storage by the transmit and receiver circuits, each able to hold one byte. Known bugs The N version datasheet has a note regarding the Transmitter Data Register Empty flag: "The W65C51N loads the Transmitter Data Register (TDR) and Transmitter Shift Register (TSR) at the same time. A delay should be used to insure that the shift register is empty before the TDR/TSR is rel
https://en.wikipedia.org/wiki/Mercator%20series
In mathematics, the Mercator series or Newton–Mercator series is the Taylor series for the natural logarithm: In summation notation, The series converges to the natural logarithm (shifted by 1) whenever . History The series was discovered independently by Johannes Hudde and Isaac Newton. It was first published by Nicholas Mercator, in his 1668 treatise Logarithmotechnia. Derivation The series can be obtained from Taylor's theorem, by inductively computing the nth derivative of at , starting with Alternatively, one can start with the finite geometric series () which gives It follows that and by termwise integration, If , the remainder term tends to 0 as . This expression may be integrated iteratively k more times to yield where and are polynomials in x. Special cases Setting in the Mercator series yields the alternating harmonic series Complex series The complex power series is the Taylor series for , where log denotes the principal branch of the complex logarithm. This series converges precisely for all complex number . In fact, as seen by the ratio test, it has radius of convergence equal to 1, therefore converges absolutely on every disk B(0, r) with radius r < 1. Moreover, it converges uniformly on every nibbled disk , with δ > 0. This follows at once from the algebraic identity: observing that the right-hand side is uniformly convergent on the whole closed unit disk. See also John Craig References Anton von Braunmühl (1903) Vorlesungen über Geschichte der Trigonometrie, Seite 134, via Internet Archive Eriksson, Larsson & Wahde. Matematisk analys med tillämpningar, part 3. Gothenburg 2002. p. 10. Some Contemporaries of Descartes, Fermat, Pascal and Huygens from A Short Account of the History of Mathematics (4th edition, 1908) by W. W. Rouse Ball Mathematical series Logarithms
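The displayed formulas do not survive in the text above; a standard statement of the series, its summation form, and the alternating harmonic series special case, reconstructed here for reference, is:

```latex
\ln(1+x) \;=\; x - \frac{x^{2}}{2} + \frac{x^{3}}{3} - \frac{x^{4}}{4} + \cdots
        \;=\; \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}\, x^{n},
\qquad -1 < x \le 1,
\qquad\text{and at } x = 1:\quad
\ln 2 \;=\; \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}.
```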
https://en.wikipedia.org/wiki/Sun-2
The Sun-2 series of UNIX workstations and servers was launched by Sun Microsystems in November 1983. As the name suggests, the Sun-2 represented the second generation of Sun systems, superseding the original Sun-1 series. The Sun-2 series used a 10 MHz Motorola 68010 microprocessor with a proprietary Sun-2 Memory Management Unit (MMU), which enabled it to be the first Sun architecture to run a full virtual memory UNIX implementation, SunOS 1.0, based on 4.1BSD. Early Sun-2 models were based on the Intel Multibus architecture, with later models using VMEbus, which continued to be used in the successor Sun-3 and Sun-4 families. Sun-2 systems were supported in SunOS until version 4.0.3. A port to support Multibus Sun-2 systems in NetBSD was begun in January 2001 from the Sun-3 support in the NetBSD 1.5 release. Code supporting the Sun-2 began to be merged into the NetBSD tree in April 2001. sun2 is considered a tier 2 support platform as of NetBSD 7.0.1. Sun-2 models Models are listed in approximately chronological order. A desktop disk and tape sub-system was introduced for the Sun-2/50 desktop workstation. It could hold a 5 ¼" disk drive and 5 ¼" tape drive. It used DD-50 (sometimes erroneously referred to as DB-50) connectors for its SCSI cables, a Sun-specific design. It was often referred to as a "Sun Shoebox". Sun-1 systems upgraded with Sun-2 Multibus CPU boards were sometimes referred to as the 2/100U (upgraded Sun-100) or 2/150U (upgraded Sun-150). A typical configuration of a monochrome 2/120 with 4 MB of memory, 71 MB SCSI disk and 20 MB 1/4" SCSI tape cost $29,300 (1986 US price list). A color 2/160 with 8 MB of memory, two 71 MB SCSI disks and 60 MB 1/4" SCSI tape cost $48,800 (1986 US price list). A Sun 2/170 server with 4 MB of memory, no display, two Fujitsu Eagle 380 MB disk drives, one Xylogics 450 SMD disk controller, a 6250 bpi 1/2 inch tape drive and a 72" rack cost $79,500 (1986 US price list). Sun-2 hardware Sun 2 Multibus systems
https://en.wikipedia.org/wiki/RIAA%20equalization
RIAA equalization is a specification for the recording and playback of phonograph records, established by the Recording Industry Association of America (RIAA). The purposes of the equalization are to permit greater recording times (by decreasing the mean width of each groove), to improve sound quality, and to reduce the groove damage that would otherwise arise during playback. The RIAA equalization curve was intended to operate as a de facto global industry standard for records since 1954, but when the change actually took place is difficult to determine. Before then, especially from 1940, each record company applied its own equalization; over 100 combinations of turnover and rolloff frequencies were in use, the main ones being Columbia-78, Decca-U.S., European (various), Victor-78 (various), Associated, BBC, NAB, Orthacoustic, World, Columbia LP, FFRR-78 and microgroove, and AES. The obvious consequence was that different reproduction results were obtained if the recording and playback filtering were not matched. The RIAA curve RIAA equalization is a form of pre-emphasis on recording and de-emphasis on playback. A recording is made with the low frequencies reduced and the high frequencies boosted, and on playback, the opposite occurs. The net result is a flat frequency response, but with attenuation of high-frequency noise such as hiss and clicks that arise from the recording medium. Reducing the low frequencies also limits the excursions the cutter needs to make when cutting a groove. Groove width is thus reduced, allowing more grooves to fit into a given surface area, permitting longer recording times. This also reduces physical stresses on the stylus, which might otherwise cause distortion or groove damage during playback. A potential drawback of the system is that rumble from the playback turntable's drive mechanism is amplified by the low-frequency boost that occurs on playback. Players must, therefore, be designed to limit rumble, more so than if RIAA eq
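As a concrete illustration of the playback de-emphasis described above, the sketch below evaluates the magnitude response of the usual three-time-constant replay network (3180 µs, 318 µs and 75 µs; these values are assumptions here, not quoted in the excerpt), normalised to 0 dB at 1 kHz. Treat it as a minimal sketch rather than a reference implementation.

```python
import math

# Standard RIAA time constants in seconds (assumed values, not stated in the text above).
T1, T2, T3 = 3180e-6, 318e-6, 75e-6

def riaa_playback_gain_db(freq_hz, ref_hz=1000.0):
    """Playback (de-emphasis) gain in dB relative to the level at ref_hz.

    The replay response is modelled as |1 + jwT2| / (|1 + jwT1| * |1 + jwT3|),
    which boosts low frequencies and attenuates highs -- the inverse of the
    recording pre-emphasis described above.
    """
    def mag(f):
        w = 2 * math.pi * f
        return abs(complex(1, w * T2)) / (abs(complex(1, w * T1)) * abs(complex(1, w * T3)))
    return 20 * math.log10(mag(freq_hz) / mag(ref_hz))

if __name__ == "__main__":
    for f in (20, 100, 1000, 10000, 20000):
        print(f"{f:>6} Hz: {riaa_playback_gain_db(f):+6.1f} dB")
```

With these constants the sketch gives roughly +19 dB at 20 Hz and about -20 dB at 20 kHz relative to 1 kHz, matching the intended low-frequency boost and high-frequency cut on playback.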
https://en.wikipedia.org/wiki/MicroMUSE
MicroMUSE is a MUD started in 1990. It is based on the TinyMUSE system, which allows members to interact in a virtual environment called Cyberion City, as well as to create objects and modify their environment. MicroMUSE was conceived as an environment to allow people in far-flung locations to interact with each other, particularly college students with Internet access. A core group of users remain active. History 1990 MicroMUSE was founded as MicroMUSH by the user known as "Jin" in the summer of 1990. Based upon TinyMUSH, MicroMUSH was centered around Cyberion City, a space station orbiting Earth of the 24th century. The initial MicroMUSH database was largely due to the efforts of Jin and the Wizards who went by the online aliases "Trout_Complex", "Coyote", "Opera_Ghost", "Snooze", "Wai", "Star" and "Mama.Bear". Larry "Leet" Foard and "Bard" (later known as "Michael") were, along with Jin, the primary programmers. The focus, at the time, primarily was communication and creativity. Users were encouraged to build "objects" and were given extensive leeway to create and communicate with other members. At times, it could be compared to a high-tech version of the wild west. 1991 Typical problems of growth and success, over time, led to issues with computing resources. In April 1991, MicroMUSH moved to MIT. The name was officially changed to MicroMUSE during this same time period. 1992 Through 1992, the focus of MicroMUSE continued to change, though not very noticeably to existing users. New users were given a smaller "quota" of object which they could build. The game was extremely popular at this point. Users could log in at almost any time of day and find at least thirty active people. 1993 By the end of 1993, the space engine, which had been developed within the original theme of MicroMUSE, was moved out of MicroMUSE. The focus was shifting; it became less about creativity and communication between random people across the internet, and more about bring
https://en.wikipedia.org/wiki/Max%20q
The max q, or maximum dynamic pressure, condition is the point when an aerospace vehicle's atmospheric flight reaches the maximum difference between the fluid dynamics total pressure and the ambient static pressure. For an airplane, this occurs at the maximum speed at minimum altitude corner of the flight envelope. For a space vehicle launch, this occurs at the crossover point between dynamic pressure increasing with speed and static pressure decreasing with increasing altitude. This is an important design factor of aerospace vehicles, since the aerodynamic structural load on the vehicle is proportional to dynamic pressure. Dynamic pressure Dynamic pressure q is defined in incompressible fluid dynamics as where ρ is the local air density, and v is the vehicle's velocity. The dynamic pressure can be thought of as the kinetic energy density of the air with respect to the vehicle, and for incompressible flow equals the difference between total pressure and static pressure. This quantity appears notably in the lift and drag equations. For a car traveling at at sea level (where the air density is about ,) the dynamic pressure on the front of the car is , about 0.38% of the static pressure ( at sea level). For an airliner cruising at at an altitude of (where the air density is about ), the dynamic pressure on the front of the plane is , about 41% of the static pressure (). In rocket launches For a launch of a space vehicle from the ground, dynamic pressure is: zero at lift-off, when the air density ρ is high but the vehicle's speed v = 0; zero outside the atmosphere, where the speed v is high, but the air density ρ = 0; always non-negative, given the quantities involved. During the launch, the vehicle speed increases but the air density decreases as the vehicle rises. Therefore, by Rolle's theorem, there is a point where the dynamic pressure is maximal. In other words, before reaching max q, the dynamic pressure increase due to increasing velocity is greater
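The defining formula is elided in the text above; for incompressible flow it is the familiar expression

```latex
q \;=\; \tfrac{1}{2}\,\rho\, v^{2},
```

where ρ is the local air density and v the vehicle's speed, consistent with the prose definition given above.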
https://en.wikipedia.org/wiki/IP%20set
In mathematics, an IP set is a set of natural numbers which contains all finite sums of some infinite set. The finite sums of a set D of natural numbers are all those numbers that can be obtained by adding up the elements of some finite nonempty subset of D. The set of all finite sums over D is often denoted as FS(D). Slightly more generally, for a sequence of natural numbers (ni), one can consider the set of finite sums FS((ni)), consisting of the sums of all finite length subsequences of (ni). A set A of natural numbers is an IP set if there exists an infinite set D such that FS(D) is a subset of A. Equivalently, one may require that A contains all finite sums FS((ni)) of a sequence (ni). Some authors give a slightly different definition of IP sets: They require that FS(D) equal A instead of just being a subset. The term IP set was coined by Hillel Furstenberg and Benjamin Weiss to abbreviate "infinite-dimensional parallelepiped". Serendipitously, the abbreviation IP can also be expanded to "idempotent" (a set is an IP if and only if it is a member of an idempotent ultrafilter). Hindman's theorem If is an IP set and , then at least one is an IP set. This is known as Hindman's theorem or the finite sums theorem. In different terms, Hindman's theorem states that the class of IP sets is partition regular. Since the set of natural numbers itself is an IP set and partitions can also be seen as colorings, one can reformulate a special case of Hindman's theorem in more familiar terms: Suppose the natural numbers are "colored" with n different colors; each natural number gets one and only one color. Then there exists a color c and an infinite set D of natural numbers, all colored with c, such that every finite sum over D also has color c. Hindman's theorem is named for mathematician Neil Hindman, who proved it in 1974. The Milliken–Taylor theorem is a common generalisation of Hindman's theorem and Ramsey's theorem. Semigroups The definition of being IP has be
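For intuition, the finite-sums operation FS(D) is easy to enumerate when D itself is finite; the following sketch is purely illustrative (the actual definition concerns infinite D) and lists all sums of nonempty finite subsets:

```python
from itertools import combinations

def finite_sums(D):
    """All sums over nonempty subsets of a finite set D, i.e. FS(D) restricted to finite D."""
    D = list(D)
    return {sum(c) for r in range(1, len(D) + 1) for c in combinations(D, r)}

# Example: D = {1, 2, 10} gives {1, 2, 3, 10, 11, 12, 13}.
print(sorted(finite_sums({1, 2, 10})))
```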
https://en.wikipedia.org/wiki/Partition%20regularity
In combinatorics, a branch of mathematics, partition regularity is one notion of largeness for a collection of sets. Given a set , a collection of subsets is called partition regular if every set A in the collection has the property that, no matter how A is partitioned into finitely many subsets, at least one of the subsets will also belong to the collection. That is, for any , and any finite partition , there exists an i ≤ n such that belongs to . Ramsey theory is sometimes characterized as the study of which collections are partition regular. Examples The collection of all infinite subsets of an infinite set X is a prototypical example. In this case partition regularity asserts that every finite partition of an infinite set has an infinite cell (i.e. the infinite pigeonhole principle.) Sets with positive upper density in : the upper density of is defined as (Szemerédi's theorem) For any ultrafilter on a set , is partition regular: for any , if , then exactly one . Sets of recurrence: a set R of integers is called a set of recurrence if for any measure-preserving transformation of the probability space (Ω, β, μ) and of positive measure there is a nonzero so that . Call a subset of natural numbers a.p.-rich if it contains arbitrarily long arithmetic progressions. Then the collection of a.p.-rich subsets is partition regular (Van der Waerden, 1927). Let be the set of all n-subsets of . Let . For each n, is partition regular. (Ramsey, 1930). For each infinite cardinal , the collection of stationary sets of is partition regular. More is true: if is stationary and for some , then some is stationary. The collection of -sets: is a -set if contains the set of differences for some sequence . The set of barriers on : call a collection of finite subsets of a barrier if: and for all infinite , there is some such that the elements of X are the smallest elements of I; i.e. and . This generalizes Ramsey's theorem, as each is a barrier. (
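The displayed definition of upper density is missing from the text above; the usual formulation is

```latex
\overline{d}(A) \;=\; \limsup_{n\to\infty} \frac{\bigl|\,A \cap \{1, 2, \ldots, n\}\,\bigr|}{n}.
```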
https://en.wikipedia.org/wiki/Luminescent%20bacteria
Luminescent bacteria emit light as the result of a chemical reaction during which chemical energy is converted to light energy. Luminescent bacteria exist as symbiotic organisms carried within a larger organism, such as many deep sea organisms, including the Lantern Fish, the Angler fish, certain jellyfish, certain clams and the Gulper eel. The light is generated by an enzyme-catalyzed chemoluminescence reaction, wherein the pigment luciferin is oxidised by the enzyme luciferase. The expression of genes related to bioluminescence is controlled by an operon called the lux operon. Some species of luminescent bacteria possess quorum sensing, the ability to determine local population by the concentration of chemical messengers. Species which have quorum sensing can turn on and off certain chemical pathways, commonly luminescence; in this way, once population levels reach a certain point the bacteria switch on light-production Characteristics of the phenomenon Bioluminescence is a form of luminescence, or "cold light" emission; less than 20% of the light generates thermal radiation. It should not be confused with fluorescence, phosphorescence or refraction of light. Most forms of bioluminescence are brighter (or only exist) at night, following a circadian rhythm. See also Dinoflagellates Vibrionaceae (e.g. Vibrio fischeri, Vibrio harveyi, Vibrio phosphoreum) References External links Bioluminescence Lecture Notes Bioluminescence Webpage Isolation of Vibrio phosphoreum Luminescent Bacteria Scripps Institution of Oceanography: Bioluminescence ' Bacteria Bioluminescent bacteria
https://en.wikipedia.org/wiki/Grigore%20Moisil
Grigore Constantin Moisil (; 10 January 1906 – 21 May 1973) was a Romanian mathematician, computer pioneer, and titular member of the Romanian Academy. His research was mainly in the fields of mathematical logic (Łukasiewicz–Moisil algebra), algebraic logic, MV-algebra, and differential equations. He is viewed as the father of computer science in Romania. Moisil was also a member of the Academy of Sciences of Bologna and of the International Institute of Philosophy. In 1996, the IEEE Computer Society awarded him posthumously the Computer Pioneer Award. Biography Grigore Moisil was born in 1906 in Tulcea into an intellectual family. His great-grandfather, Grigore Moisil (1814–1891), a clergyman, was one of the founders of the first Romanian high school in Năsăud. His father, Constantin Moisil (1876–1958), was a history professor, archaeologist and numismatist; as a member of the Romanian Academy, he filled the position of Director of the Numismatics Office of the Academy. His mother, Elena (1863–1949), was a teacher in Tulcea, later the director of "Maidanul Dulapului" school in Bucharest (now "Ienăchiță Văcărescu" school). Grigore Moisil attended primary school in Bucharest, then high school in Vaslui and Bucharest (at ) between 1916 and 1922. In 1924 he was admitted to the Civil Engineering School of the Polytechnic University of Bucharest, and also the Mathematics School of the University of Bucharest. He showed a stronger interest in mathematics, so he quit the Polytechnic University in 1929, despite already having passed all the third-year exams. In 1929 he defended his Ph.D. thesis, La mécanique analytique des systemes continus (Analytical mechanics of continuous systems), before a commission led by Gheorghe Țițeica, with Dimitrie Pompeiu and Anton Davidoglu as members. The thesis was published the same year by the Gauthier-Villars publishing house in Paris, and received favourable comments from Vito Volterra, Tullio Levi-Civita, and Paul Lévy. In 1930 Mo
https://en.wikipedia.org/wiki/Power-on%20reset
A power-on reset (PoR, POR) generator is a microcontroller or microprocessor peripheral that generates a reset signal when power is applied to the device. It ensures that the device starts operating in a known state. PoR generator In VLSI devices, the power-on reset (PoR) is an electronic device incorporated into the integrated circuit that detects the power applied to the chip and generates a reset impulse that goes to the entire circuit placing it into a known state. A simple PoR uses the charging of a capacitor, in series with a resistor, to measure a time period during which the rest of the circuit is held in a reset state. A Schmitt trigger may be used to deassert the reset signal cleanly, once the rising voltage of the RC network passes the threshold voltage of the Schmitt trigger. The resistor and capacitor values should be determined so that the charging of the RC network takes long enough that the supply voltage will have stabilised by the time the threshold is reached. One of the issues with using RC network to generate PoR pulse is the sensitivity of the R and C values to the power-supply ramp characteristics. When the power supply ramp is rapid, the R and C values can be calculated so that the time to reach the switching threshold of the schmitt trigger is enough to apply a long enough reset pulse. When the power supply ramp itself is slow, the RC network tends to get charged up along with the power-supply ramp up. So when the input schmitt stage is all powered up and ready, the input voltage from the RC network would already have crossed the schmitt trigger point. This means that there might not be a reset pulse supplied to the core of the VLSI. Power-on reset on IBM mainframes On an IBM mainframe, a power-on reset (POR) is a sequence of actions that the processor performs either due to a POR request from the operator or as part of turning on power. The operator requests a POR for configuration changes that cannot be recognized by a simple Syste
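Under the favourable assumption of a fast (effectively step) supply ramp discussed above, the time for the RC node to reach the Schmitt-trigger threshold follows from the capacitor charging law V(t) = Vdd·(1 − e^(−t/RC)). The sketch below solves this for t; the component values are illustrative only, not taken from any particular datasheet.

```python
import math

def por_delay(r_ohms, c_farads, vdd, v_threshold):
    """Time for an RC node charging toward vdd (step supply) to cross v_threshold.

    Derived from V(t) = vdd * (1 - exp(-t / (R*C)))  =>  t = -R*C * ln(1 - Vth/vdd).
    Valid only for v_threshold < vdd and an effectively instantaneous supply ramp.
    """
    if not 0 < v_threshold < vdd:
        raise ValueError("threshold must lie between 0 V and the supply voltage")
    return -r_ohms * c_farads * math.log(1 - v_threshold / vdd)

# Illustrative values: 100 kOhm, 1 uF, 3.3 V supply, 1.8 V rising threshold -> about 79 ms.
print(f"{por_delay(100e3, 1e-6, 3.3, 1.8) * 1e3:.1f} ms")
```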
https://en.wikipedia.org/wiki/Logarithmic%20growth
In mathematics, logarithmic growth describes a phenomenon whose size or cost can be described as a logarithm function of some input. e.g. y = C log (x). Any logarithm base can be used, since one can be converted to another by multiplying by a fixed constant. Logarithmic growth is the inverse of exponential growth and is very slow. A familiar example of logarithmic growth is a number, N, in positional notation, which grows as logb (N), where b is the base of the number system used, e.g. 10 for decimal arithmetic. In more advanced mathematics, the partial sums of the harmonic series grow logarithmically. In the design of computer algorithms, logarithmic growth, and related variants, such as log-linear, or linearithmic, growth are very desirable indications of efficiency, and occur in the time complexity analysis of algorithms such as binary search. Logarithmic growth can lead to apparent paradoxes, as in the martingale roulette system, where the potential winnings before bankruptcy grow as the logarithm of the gambler's bankroll. It also plays a role in the St. Petersburg paradox. In microbiology, the rapidly growing exponential growth phase of a cell culture is sometimes called logarithmic growth. During this bacterial growth phase, the number of new cells appearing is proportional to the population. This terminological confusion between logarithmic growth and exponential growth may be explained by the fact that exponential growth curves may be straightened by plotting them using a logarithmic scale for the growth axis. See also (an even slower growth model) References Logarithms
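A quick way to see the positional-notation example above is to compare the digit count of N with log10(N); a short sketch:

```python
import math

def digit_count(n):
    """Number of decimal digits of a positive integer n, which grows like log10(n)."""
    return math.floor(math.log10(n)) + 1

for n in (9, 10, 999, 1000, 10**12):
    print(n, digit_count(n), len(str(n)))  # the two counts agree
```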
https://en.wikipedia.org/wiki/Biogenic%20silica
Biogenic silica (bSi), also referred to as opal, biogenic opal, or amorphous opaline silica, forms one of the most widespread biogenic minerals. For example, microscopic particles of silica called phytoliths can be found in grasses and other plants. Silica is an amorphous metalloid oxide formed by complex inorganic polymerization processes. This is opposed to the other major biogenic minerals, comprising carbonate and phosphate, which occur in nature as crystalline iono-covalent solids (e.g. salts) whose precipitation is dictated by solubility equilibria. Chemically, bSi is hydrated silica (SiO2·nH2O), which is essential to many plants and animals. Diatoms in both fresh and salt water extract dissolved silica from the water to use as a component of their cell walls. Likewise, some holoplanktonic protozoa (Radiolaria), some sponges, and some plants (leaf phytoliths) use silicon as a structural material. Silicon is known to be required by chicks and rats for growth and skeletal development. Silicon is in human connective tissues, bones, teeth, skin, eyes, glands and organs. Silica in marine environments Silicate, or silicic acid (H4SiO4), is an important nutrient in the ocean. Unlike the other major nutrients such as phosphate, nitrate, or ammonium, which are needed by almost all marine plankton, silicate is an essential chemical requirement for very specific biota, including diatoms, radiolaria, silicoflagellates, and siliceous sponges. These organisms extract dissolved silicate from open ocean surface waters for the buildup of their particulate silica (SiO2), or opaline, skeletal structures (i.e. the biota's hard parts). Some of the most common siliceous structures observed at the cell surface of silica-secreting organisms include: spicules, scales, solid plates, granules, frustules, and other elaborate geometric forms, depending on the species considered. Marine sources of silica Five major sources of dissolved silica to the marine environment can be distingu
https://en.wikipedia.org/wiki/Sun-1
Sun-1 was the first generation of UNIX computer workstations and servers produced by Sun Microsystems, launched in May 1982. These were based on a CPU board designed by Andy Bechtolsheim while he was a graduate student at Stanford University and funded by DARPA. The Sun-1 systems ran SunOS 0.9, a port of UniSoft's UniPlus V7 port of Seventh Edition UNIX to the Motorola 68000 microprocessor, with no window system. Affixed to the case of early Sun-1 workstations and servers is a red bas relief emblem with the word SUN spelled using only symbols shaped like the letter U. This is the original Sun logo, rather than the more familiar purple diamond shape used later. The first Sun-1 workstation was sold to Solo Systems in May 1982. The Sun-1/100 was used in the original Lucasfilm EditDroid non-linear editing system. Models Hardware The Sun-1 workstation was based on the Stanford University SUN workstation designed by Andy Bechtolsheim (advised by Vaughan Pratt and Forest Baskett), a graduate student and co-founder of Sun Microsystems. At the heart of this design were the Multibus CPU, memory, and video display cards. The cards used in the Sun-1 workstation were a second-generation design with a private memory bus allowing memory to be expanded to 2 MB without performance degradation. The Sun 68000 board introduced in 1982 was a powerful single-board computer. It combined a 10 MHz Motorola 68000 microprocessor, a Sun-designed memory management unit (MMU), 256 KB of zero wait state memory with parity, up to 32 KB of EPROM memory, two serial ports, a 16-bit parallel port and an Intel Multibus (IEEE 796 bus) interface in a single , Multibus form factor. By using the Motorola 68000 processor tightly coupled with the Sun-1 MMU, the Sun 68000 CPU board was able to support a multi-tasking operating system such as UNIX. It included an advanced Sun-designed multi-process two-level MMU with facilities for memory protection, code sharing and demand paging of memory. The Sun
https://en.wikipedia.org/wiki/Dynamic%20synchronous%20transfer%20mode
Dynamic synchronous transfer mode (DTM) is an optical networking technology standardized by the European Telecommunications Standards Institute (ETSI) in 2001 beginning with specification ETSI ES 201 803-1. DTM is a time-division multiplexing and a circuit-switching network technology that combines switching and transport. It is designed to provide a guaranteed quality of service (QoS) for streaming video services, but can be used for packet-based services as well. It is marketed for professional media networks, mobile TV networks, digital terrestrial television (DTT) networks, in content delivery networks and in consumer oriented networks, such as "triple play" networks. History The DTM architecture was conceived in 1985 and developed at the Royal Institute of Technology (KTH) in Sweden. It was published in February 1996. The research team was split into two spin-off companies, reflecting two different approaches to use the technology. One of these companies remains active in the field and delivers commercial products based on the DTM technology. Its name is Net Insight. See also Broadband Integrated Services Digital Network References Further reading External links IHS web page listing for ETSI ES 201 803- 6 Paper from the founder of the Topology (in postscript format) Network protocols Link protocols
https://en.wikipedia.org/wiki/Multiscale%20modeling
Multiscale modeling or multiscale mathematics is the field of solving problems that have important features at multiple scales of time and/or space. Important problems include multiscale modeling of fluids, solids, polymers, proteins, nucleic acids as well as various physical and chemical phenomena (like adsorption, chemical reactions, diffusion). An example of such problems involve the Navier–Stokes equations for incompressible fluid flow. In a wide variety of applications, the stress tensor is given as a linear function of the gradient . Such a choice for has been proven to be sufficient for describing the dynamics of a broad range of fluids. However, its use for more complex fluids such as polymers is dubious. In such a case, it may be necessary to use multiscale modeling to accurately model the system such that the stress tensor can be extracted without requiring the computational cost of a full microscale simulation. History Horstemeyer 2009, 2012 presented a historical review of the different disciplines (mathematics, physics, and materials science) for solid materials related to multiscale materials modeling. The aforementioned DOE multiscale modeling efforts were hierarchical in nature. The first concurrent multiscale model occurred when Michael Ortiz (Caltech) took the molecular dynamics code, Dynamo, (developed by Mike Baskes at Sandia National Labs) and with his students embedded it into a finite element code for the first time. Martin Karplus, Michael Levitt, Arieh Warshel 2013 were awarded a Nobel Prize in Chemistry for the development of a multiscale model method using both classical and quantum mechanical theory which were used to model large complex chemical systems and reactions. Areas of research In physics and chemistry, multiscale modeling is aimed at the calculation of material properties or system behavior on one level using information or models from different levels. On each level, particular approaches are used for the description of
https://en.wikipedia.org/wiki/Quotient%20category
In mathematics, a quotient category is a category obtained from another category by identifying sets of morphisms. Formally, it is a quotient object in the category of (locally small) categories, analogous to a quotient group or quotient space, but in the categorical setting. Definition Let C be a category. A congruence relation R on C is given by: for each pair of objects X, Y in C, an equivalence relation RX,Y on Hom(X,Y), such that the equivalence relations respect composition of morphisms. That is, if are related in Hom(X, Y) and are related in Hom(Y, Z), then g1f1 and g2f2 are related in Hom(X, Z). Given a congruence relation R on C we can define the quotient category C/R as the category whose objects are those of C and whose morphisms are equivalence classes of morphisms in C. That is, Composition of morphisms in C/R is well-defined since R is a congruence relation. Properties There is a natural quotient functor from C to C/R which sends each morphism to its equivalence class. This functor is bijective on objects and surjective on Hom-sets (i.e. it is a full functor). Every functor F : C → D determines a congruence on C by saying f ~ g iff F(f) = F(g). The functor F then factors through the quotient functor C → C/~ in a unique manner. This may be regarded as the "first isomorphism theorem" for categories. Examples Monoids and groups may be regarded as categories with one object. In this case the quotient category coincides with the notion of a quotient monoid or a quotient group. The homotopy category of topological spaces hTop is a quotient category of Top, the category of topological spaces. The equivalence classes of morphisms are homotopy classes of continuous maps. Let k be a field and consider the abelian category Mod(k) of all vector spaces over k with k-linear maps as morphisms. To "kill" all finite-dimensional spaces, we can call two linear maps f,g : X → Y congruent iff their difference has finite-dimensional image. In the resulting quo
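The displayed definition of the hom-sets is missing after "That is," in the text above; written out, the intended identity is

```latex
\operatorname{Hom}_{\mathcal{C}/R}(X, Y) \;=\; \operatorname{Hom}_{\mathcal{C}}(X, Y)\, /\, R_{X,Y}.
```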
https://en.wikipedia.org/wiki/Belarusian%20State%20Technological%20University
Belarusian State Technological University (; ) is a university in Minsk, Belarus specialized in engineering and technology. It was established in Gomel in 1930 as the Forestry Institute. In 1941, it was evacuated to Sverdlovsk, now Yekaterinburg. Returned to Gomel in 1944, but in 1946 relocated to Minsk as the Belarusian Institute of Technology. Structure 47 departments 11 faculties Dean's office for foreign students Pre-University courses Negoreloe forestry experimental station Botanical garden Meteorological station Educational-production forest-processing complex Borisov educational-scientific experimental station Technological gymnasium Prominent alumni Yuri Puntus - former BATE Borisov and Belarus national football team coach. References Universities and institutes established in the Soviet Union Universities in Minsk Universities and colleges established in 1930 1930 establishments in Belarus Engineering universities and colleges
https://en.wikipedia.org/wiki/Transactional%20memory
In computer science and engineering, transactional memory attempts to simplify concurrent programming by allowing a group of load and store instructions to execute in an atomic way. It is a concurrency control mechanism analogous to database transactions for controlling access to shared memory in concurrent computing. Transactional memory systems provide high-level abstraction as an alternative to low-level thread synchronization. This abstraction allows for coordination between concurrent reads and writes of shared data in parallel systems. Motivation In concurrent programming, synchronization is required when parallel threads attempt to access a shared resource. Low-level thread synchronization constructs such as locks are pessimistic and prohibit threads that are outside a critical section from running the code protected by the critical section. The process of applying and releasing locks often functions as an additional overhead in workloads with little conflict among threads. Transactional memory provides optimistic concurrency control by allowing threads to run in parallel with minimal interference. The goal of transactional memory systems is to transparently support regions of code marked as transactions by enforcing atomicity, consistency and isolation. A transaction is a collection of operations that can execute and commit changes as long as a conflict is not present. When a conflict is detected, a transaction will revert to its initial state (prior to any changes) and will rerun until all conflicts are removed. Before a successful commit, the outcome of any operation is purely speculative inside a transaction. In contrast to lock-based synchronization where operations are serialized to prevent data corruption, transactions allow for additional parallelism as long as few operations attempt to modify a shared resource. Since the programmer is not responsible for explicitly identifying locks or the order in which they are acquired, programs that utilize t
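To make the optimistic read–validate–commit cycle described above concrete, here is a minimal, purely illustrative sketch of version-validated commits in Python. It is not a real transactional memory system; the Ref class, the version counters and the retry policy are all assumptions introduced for the example.

```python
import threading

class Ref:
    """A shared cell with a version counter used for optimistic conflict detection."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self._lock = threading.Lock()

def transfer(src, dst, amount, max_retries=100):
    """Speculatively compute new balances, then commit only if neither cell changed.

    On conflict (a version moved under us) the speculative result is discarded and
    the whole transaction is retried, mirroring the abort-and-rerun behaviour of a
    transactional memory system.
    """
    for _ in range(max_retries):
        v_src, v_dst = src.version, dst.version            # read phase (speculative)
        new_src, new_dst = src.value - amount, dst.value + amount
        first, second = sorted((src, dst), key=id)         # consistent lock order
        with first._lock, second._lock:                    # short commit phase
            if src.version == v_src and dst.version == v_dst:
                src.value, dst.value = new_src, new_dst
                src.version += 1
                dst.version += 1
                return True
        # conflict detected: fall through and retry with fresh reads
    return False

a, b = Ref(100), Ref(0)
transfer(a, b, 30)
print(a.value, b.value)   # 70 30
```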
https://en.wikipedia.org/wiki/Gene%20expression%20profiling
In the field of molecular biology, gene expression profiling is the measurement of the activity (the expression) of thousands of genes at once, to create a global picture of cellular function. These profiles can, for example, distinguish between cells that are actively dividing, or show how the cells react to a particular treatment. Many experiments of this sort measure an entire genome simultaneously, that is, every gene present in a particular cell. Several transcriptomics technologies can be used to generate the necessary data to analyse. DNA microarrays measure the relative activity of previously identified target genes. Sequence based techniques, like RNA-Seq, provide information on the sequences of genes in addition to their expression level. Background Expression profiling is a logical next step after sequencing a genome: the sequence tells us what the cell could possibly do, while the expression profile tells us what it is actually doing at a point in time. Genes contain the instructions for making messenger RNA (mRNA), but at any moment each cell makes mRNA from only a fraction of the genes it carries. If a gene is used to produce mRNA, it is considered "on", otherwise "off". Many factors determine whether a gene is on or off, such as the time of day, whether or not the cell is actively dividing, its local environment, and chemical signals from other cells. For instance, skin cells, liver cells and nerve cells turn on (express) somewhat different genes and that is in large part what makes them different. Therefore, an expression profile allows one to deduce a cell's type, state, environment, and so forth. Expression profiling experiments often involve measuring the relative amount of mRNA expressed in two or more experimental conditions. This is because altered levels of a specific sequence of mRNA suggest a changed need for the protein coded by the mRNA, perhaps indicating a homeostatic response or a pathological condition. For example, higher leve
https://en.wikipedia.org/wiki/Immunogenicity
Immunogenicity is the ability of a foreign substance, such as an antigen, to provoke an immune response in the body of a human or other animal. It may be wanted or unwanted: Wanted immunogenicity typically relates to vaccines, where the injection of an antigen (the vaccine) provokes an immune response against the pathogen, protecting the organism from future exposure. Immunogenicity is a central aspect of vaccine development. Unwanted immunogenicity is an immune response by an organism against a therapeutic antigen. This reaction leads to production of anti-drug-antibodies (ADAs), inactivating the therapeutic effects of the treatment and potentially inducing adverse effects. A challenge in biotherapy is predicting the immunogenic potential of novel protein therapeutics. For example, immunogenicity data from high-income countries are not always transferable to low-income and middle-income countries. Another challenge is considering how the immunogenicity of vaccines changes with age. Therefore, as stated by the World Health Organization, immunogenicity should be investigated in a target population since animal testing and in vitro models cannot precisely predict immune response in humans. Antigenicity is the capacity of a chemical structure (either an antigen or hapten) to bind specifically with a group of certain products that have adaptive immunity: T cell receptors or antibodies (a.k.a. B cell receptors). Antigenicity was more commonly used in the past to refer to what is now known as immunogenicity, and the two are still often used interchangeably. However, strictly speaking, immunogenicity refers to the ability of an antigen to induce an adaptive immune response. Thus an antigen might bind specifically to a T or B cell receptor, but not induce an adaptive immune response. If the antigen does induce a response, it is an 'immunogenic antigen', which is referred to as an immunogen. Antigenic immunogenic potency Many lipids and nucleic acids are relatively s
https://en.wikipedia.org/wiki/Viral%20eukaryogenesis
Viral eukaryogenesis is the hypothesis that the cell nucleus of eukaryotic life forms evolved from a large DNA virus in a form of endosymbiosis within a methanogenic archaeon or a bacterium. The virus later evolved into the eukaryotic nucleus by acquiring genes from the host genome and eventually usurping its role. The hypothesis was first proposed by Philip Bell in 2001 and was further popularized with the discovery of large, complex DNA viruses (such as Mimivirus) that are capable of protein biosynthesis. Viral eukaryogenesis has been controversial for several reasons. For one, it is sometimes argued that the posited evidence for the viral origins of the nucleus can be conversely used to suggest the nuclear origins of some viruses. Secondly, this hypothesis has further inflamed the longstanding debate over whether viruses are living organisms. Hypothesis The viral eukaryogenesis hypothesis posits that eukaryotes are composed of three ancestral elements: a viral component that became the modern nucleus; a prokaryotic cell (an archaeon according to the eocyte hypothesis) which donated the cytoplasm and cell membrane of modern cells; and another prokaryotic cell (here bacterium) that, by endocytosis, became the modern mitochondrion or chloroplast. In 2006, researchers suggested that the transition from RNA to DNA genomes first occurred in the viral world. A DNA-based virus may have provided storage for an ancient host that had previously used RNA to store its genetic information (such host is called ribocell or ribocyte). Viruses may initially have adopted DNA as a way to resist RNA-degrading enzymes in the host cells. Hence, the contribution from such a new component may have been as significant as the contribution from chloroplasts or mitochondria. Following this hypothesis, archaea, bacteria, and eukaryotes each obtained their DNA informational system from a different virus. In the original paper it was also an RNA cell at the origin of eukaryotes, but eventu
https://en.wikipedia.org/wiki/Edinburgh%20Parallel%20Computing%20Centre
EPCC, formerly the Edinburgh Parallel Computing Centre, is a supercomputing centre based at the University of Edinburgh. Since its foundation in 1990, its stated mission has been to accelerate the effective exploitation of novel computing throughout industry, academia and commerce. The University has supported high performance computing (HPC) services since 1982. , through EPCC, it supports the UK's national high-end computing system, ARCHER (Advanced Research Computing High End Resource), and the UK Research Data Facility (UK-RDF). Overview EPCC's activities include: consultation and software development for industry and academia; research into high-performance computing; hosting advanced computing facilities and supporting their users; training and education . The Centre offers two Masters programmes: MSc in High-Performance Computing and MSc in High-Performance Computing with Data Science . It is a member of the Globus Alliance and, through its involvement with the OGSA-DAI project, it works with the Open Grid Forum DAIS-WG. Around half of EPCC's annual turnover comes from collaborative projects with industry and commerce. In addition to privately funded projects with businesses, EPCC receives funding from Scottish Enterprise, the Engineering and Physical Sciences Research Council and the European Commission. History EPCC was established in 1990, following on from the earlier Edinburgh Concurrent Supercomputer Project and chaired by Jeffery Collins from 1991. From 2002 to 2016 EPCC was part of the University's School of Physics & Astronomy, becoming an independent Centre of Excellence within the University's College of Science and Engineering in August 2016. It was extensively involved in all aspects of Grid computing including: developing Grid middleware and architecture tools to facilitate the uptake of e-Science; developing business applications and collaborating in scientific applications and demonstration projects. The Centre was a founder member o
https://en.wikipedia.org/wiki/F-score
In statistical analysis of binary classification, the F-score or F-measure is a measure of a test's accuracy. It is calculated from the precision and recall of the test, where the precision is the number of true positive results divided by the number of all positive results, including those not identified correctly, and the recall is the number of true positive results divided by the number of all samples that should have been identified as positive. Precision is also known as positive predictive value, and recall is also known as sensitivity in diagnostic binary classification. The F1 score is the harmonic mean of the precision and recall. It thus symmetrically represents both precision and recall in one metric. The more generic score applies additional weights, valuing one of precision or recall more than the other. The highest possible value of an F-score is 1.0, indicating perfect precision and recall, and the lowest possible value is 0, if either precision or recall are zero. Etymology The name F-measure is believed to be named after a different F function in Van Rijsbergen's book, when introduced to the Fourth Message Understanding Conference (MUC-4, 1992). Definition The traditional F-measure or balanced F-score (F1 score) is the harmonic mean of precision and recall: . Fβ score A more general F score, , that uses a positive real factor , where is chosen such that recall is considered times as important as precision, is: . In terms of Type I and type II errors this becomes: . Two commonly used values for are 2, which weighs recall higher than precision, and 0.5, which weighs recall lower than precision. The F-measure was derived so that "measures the effectiveness of retrieval with respect to a user who attaches times as much importance to recall as precision". It is based on Van Rijsbergen's effectiveness measure . Their relationship is where . Diagnostic testing This is related to the field of binary classification where recall i
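The displayed formulas are missing from the text above; the conventional statements of the F1 score, the general Fβ score, and the Fβ score in terms of true positives, false negatives (Type II errors) and false positives (Type I errors) are

```latex
F_1 \;=\; \frac{2}{\frac{1}{\text{precision}} + \frac{1}{\text{recall}}}
    \;=\; 2 \cdot \frac{\text{precision}\cdot\text{recall}}{\text{precision}+\text{recall}},
\qquad
F_\beta \;=\; (1+\beta^{2})\cdot
        \frac{\text{precision}\cdot\text{recall}}{\beta^{2}\cdot\text{precision}+\text{recall}},
\qquad
F_\beta \;=\; \frac{(1+\beta^{2})\,\mathrm{TP}}
                   {(1+\beta^{2})\,\mathrm{TP} + \beta^{2}\,\mathrm{FN} + \mathrm{FP}}.
```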
https://en.wikipedia.org/wiki/Pseudomonas%20virus%20phi6
Φ6 (Phi 6) is the best-studied bacteriophage of the virus family Cystoviridae. It infects Pseudomonas bacteria (typically plant-pathogenic P. syringae). It has a three-part, segmented, double-stranded RNA genome, totalling ~13.5 kb in length. Φ6 and its relatives have a lipid membrane around their nucleocapsid, a rare trait among bacteriophages. It is a lytic phage, though under certain circumstances has been observed to display a delay in lysis which may be described as a "carrier state". Proteins The genome of Φ6 codes for 12 proteins. P1 is a major capsid protein which is responsible of forming the skeleton of the polymerase complex. In the interior of the shell formed by P1 is the P2 viral replicase and transcriptase protein. The spikes binding to receptors on the Φ6 virion are formed by the protein P3. P4 is a nucleoside-triphosphatase which is required for the genome packaging and transcription. P5 is a lytic enzyme. The spike protein P3 is anchored to a fusogenic envelope protein in P6. P7 is a minor capsid protein, P8 is responsible of forming the nucleocapsid surface shell and P9 is a major envelope protein. P12 is a non-structural morphogenic protein shown to be a part of the envelope assembly. P10 and P13 are proteins coding genes that are associated with the viral envelope and P14 is a non-structural protein. Life cycle Φ6 typically attaches to the Type IV pilus of P. syringae with its attachment protein, P3. It is thought that the cell then retracts its pilus, pulling the phage toward the bacterium. Fusion of the viral envelope with the bacterial outer membrane is facilitated by the phage protein, P6. The muralytic (peptidoglycan-digesting) enzyme, P5, then digests a portion of the cell wall, and the nucleocapsid enters the cell coated with the bacterial outer membrane. A copy of the sense strand of the large genome segment (6374 bases) is then synthesized (transcription) on the vertices of the capsid, with the RNA-dependent RNA polymerase, P2,
https://en.wikipedia.org/wiki/Normalization%20%28image%20processing%29
In image processing, normalization is a process that changes the range of pixel intensity values. Applications include photographs with poor contrast due to glare, for example. Normalization is sometimes called contrast stretching or histogram stretching. In more general fields of data processing, such as digital signal processing, it is referred to as dynamic range expansion. The purpose of dynamic range expansion in the various applications is usually to bring the image, or other type of signal, into a range that is more familiar or normal to the senses, hence the term normalization. Often, the motivation is to achieve consistency in dynamic range for a set of data, signals, or images to avoid mental distraction or fatigue. For example, a newspaper will strive to make all of the images in an issue share a similar range of grayscale. Normalization transforms an n-dimensional grayscale image with intensity values in the range , into a new image with intensity values in the range . The linear normalization of a grayscale digital image is performed according to the formula For example, if the intensity range of the image is 50 to 180 and the desired range is 0 to 255 the process entails subtracting 50 from each of pixel intensity, making the range 0 to 130. Then each pixel intensity is multiplied by 255/130, making the range 0 to 255. Normalization might also be non linear, this happens when there isn't a linear relationship between and . An example of non-linear normalization is when the normalization follows a sigmoid function, in that case, the normalized image is computed according to the formula Where defines the width of the input intensity range, and defines the intensity around which the range is centered. Auto-normalization in image processing software typically normalizes to the full dynamic range of the number system specified in the image file format. See also Audio normalization, audio analog Histogram equalization References Ext
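The linear formula referred to above (elided in the text) maps an input range [Min, Max] onto [newMin, newMax] as I_N = (I − Min)·(newMax − newMin)/(Max − Min) + newMin. The sketch below reproduces the worked 50–180 to 0–255 example with NumPy:

```python
import numpy as np

def normalize(img, new_min=0.0, new_max=255.0):
    """Linear contrast stretch: I_N = (I - Min) * (newMax - newMin) / (Max - Min) + newMin."""
    img = img.astype(np.float64)
    old_min, old_max = img.min(), img.max()
    return (img - old_min) * (new_max - new_min) / (old_max - old_min) + new_min

pixels = np.array([50, 115, 180])
print(normalize(pixels))   # [  0.   127.5 255. ]
```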
https://en.wikipedia.org/wiki/Systematic%20reconnaissance%20flight
Systematic Reconnaissance Flight (SRF) is a scientific method in wildlife survey for assessing the distribution and abundance of wild animals. It is widely used in Africa, Australia and North America for assessment of plains and woodland wildlife and other species. The method involves systematic or random flight lines (transects) over the target area at a constant height above ground, with at least one observer recording wildlife in a calibrated strip on at least one side of the aircraft. The method has been often criticised for low accuracy and precision, but is considered to be the best option for relatively inexpensive coverage of large game areas. References Surveying Wildlife
https://en.wikipedia.org/wiki/Frans%C3%A9n%E2%80%93Robinson%20constant
The Fransén–Robinson constant, sometimes denoted F, is the mathematical constant that represents the area between the graph of the reciprocal Gamma function, , and the positive x axis. That is, Other expressions The Fransén–Robinson constant has numerical value , and continued fraction representation [2; 1, 4, 4, 1, 18, 5, 1, 3, 4, 1, 5, 3, 6, ...] . The constant is somewhat close to Euler's number This fact can be explained by approximating the integral by a sum: and this sum is the standard series for e. The difference is or equivalently The Fransén–Robinson constant can also be expressed using the Mittag-Leffler function as the limit It is however unknown whether F can be expressed in closed form in terms of other known constants. Calculation history A fair amount of effort has been made to calculate the numerical value of the Fransén–Robinson constant with high accuracy. The value was computed to 36 decimal places by Herman P. Robinson using 11 point Newton–Cotes quadrature, to 65 digits by A. Fransén using Euler–Maclaurin summation, and to 80 digits by Fransén and S. Wrigge using Taylor series and other methods. William A. Johnson computed 300 digits, and Pascal Sebah was able to compute 1025 digits using Clenshaw–Curtis integration. References Mathematical constants Gamma and related functions
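The defining integral and the sum used above to explain the closeness to e are elided in the text; they read

```latex
F \;=\; \int_{0}^{\infty} \frac{1}{\Gamma(x)}\,dx
\;\approx\; \sum_{n=1}^{\infty} \frac{1}{\Gamma(n)}
\;=\; \sum_{n=0}^{\infty} \frac{1}{n!}
\;=\; e,
```

since Γ(n) = (n − 1)! for positive integers n.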
https://en.wikipedia.org/wiki/Packet%20crafting
Packet crafting is a technique that allows network administrators to probe firewall rule-sets and find entry points into a targeted system or network. This is done by manually generating packets to test network devices and behaviour, instead of using existing network traffic. Testing may target the firewall, IDS, TCP/IP stack, router or any other component of the network. Packets are usually created by using a packet generator or packet analyzer which allows for specific options and flags to be set on the created packets. The act of packet crafting can be broken into four stages: Packet Assembly, Packet Editing, Packet Play and Packet Decoding. Tools exist for each of the stages - some tools are focused only on one stage while others such as Ostinato try to encompass all stages. Packet assembly Packet Assembly is the creation of the packets to be sent. Some popular programs used for packet assembly are Hping, Nemesis, Ostinato, Cat Karat packet builder, Libcrafter, libtins, PcapPlusPlus, Scapy, Wirefloss and Yersinia. Packets may be of any protocol and are designed to test specific rules or situations. For example, a TCP packet may be created with a set of erroneous flags to ensure that the target machine sends a RESET command or that the firewall blocks any response. Packet editing Packet Editing is the modification of created or captured packets. This involves modifying packets in manners which are difficult or impossible to do in the Packet Assembly stage, such as modifying the payload of a packet. Programs such as Scapy, Ostinato, Netdude allow a user to modify recorded packets' fields, checksums and payloads quite easily. These modified packets can be saved in packet streams which may be stored in pcap files to be replayed later. Packet play Packet Play or Packet Replay is the act of sending a pre-generated or captured series of packets. Packets may come from Packet Assembly and Editing or from captured network attacks. This allows for testing of a given us
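As an illustration of the assembly stage, the sketch below uses Scapy (one of the tools named above) to build a TCP segment with a deliberately unusual flag combination and send it toward a test address. The destination and port are placeholders chosen for illustration, and sending raw packets generally requires root privileges and a network you are authorised to probe.

```python
from scapy.all import IP, TCP, sr1  # Scapy is among the assembly tools listed above

# Craft a TCP packet with the FIN, PSH and URG flags set (an "Xmas"-style probe),
# aimed at a documentation/test address -- replace with a host you may legitimately test.
probe = IP(dst="192.0.2.1") / TCP(dport=80, flags="FPU")

# Send it and wait briefly for a single reply (a RST, an ICMP error, or silence),
# which is the kind of behaviour a firewall or TCP/IP-stack test is looking for.
reply = sr1(probe, timeout=2, verbose=False)
print(reply.summary() if reply is not None else "no response (filtered or dropped)")
```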
https://en.wikipedia.org/wiki/Coordinative%20definition
A coordinative definition is a postulate which assigns a partial meaning to the theoretical terms of a scientific theory by correlating the mathematical objects of the pure or formal/syntactical aspects of a theory with physical objects in the world. The idea was formulated by the logical positivists and arises out of a formalist vision of mathematics as pure symbol manipulation. Formalism In order to get a grasp on the motivations which inspired the development of the idea of coordinative definitions, it is important to understand the doctrine of formalism as it is conceived in the philosophy of mathematics. For the formalists, mathematics, and particularly geometry, is divided into two parts: the pure and the applied. The first part consists in an uninterpreted axiomatic system, or syntactic calculus, in which terms such as point, straight line and between (the so-called primitive terms) have their meanings assigned to them implicitly by the axioms in which they appear. On the basis of deductive rules eternally specified in advance, pure geometry provides a set of theorems derived in a purely logical manner from the axioms. This part of mathematics is therefore a priori but devoid of any empirical meaning, not synthetic in the sense of Kant. It is only by connecting these primitive terms and theorems with physical objects such as rulers or rays of light that, according to the formalist, pure mathematics becomes applied mathematics and assumes an empirical meaning. The method of correlating the abstract mathematical objects of the pure part of theories with physical objects consists in coordinative definitions. It was characteristic of logical positivism to consider a scientific theory to be nothing more than a set of sentences, subdivided into the class of theoretical sentences, the class of observational sentences, and the class of mixed sentences. The first class contains terms which refer to theoretical entities, that is to entities not directly observabl
https://en.wikipedia.org/wiki/KYK-13
The KYK-13 Electronic Transfer Device is a common fill device designed by the United States National Security Agency for the transfer and loading of cryptographic keys with their corresponding check word. The KYK-13 is battery powered and uses the DS-102 protocol for key transfer. Its National Stock Number is 5810-01-026-9618. Even though the KYK-13 was first introduced in 1976 and was supposed to have been made obsolete by the AN/CYZ-10 Data Transfer Device, it is still widely used because of its simplicity and reliability. A simpler device than the CYZ-10, the KIK-30 "Really Simple Key Loader" (RASKL) is now planned to replace the KYK-13s, with up to $200 million budgeted to procure them in quantity. Components P1 and J1 Connectors - Electrically the same connection. Used to connect to a fill cable, COMSEC device, KOI-18, KYX-15, another KYK-13, or AN/CYZ-10. Battery Compartment - Holds battery which powers KYK-13. Mode Switch - Three-position rotary switch used to select operation modes. "Z" - Used to zeroize selected keys. ON - Used to fill and transfer keys. OFF CHECK - Used to conduct parity checks. Parity Lamp - Blinks when parity is checked or fill is transferred. Initiate Push Button - Push this button when loading or zeroizing the KYK-13. Address Select Switch - Seven-position rotary switch. "Z" ALL - Zeroizes all six storage registers when Mode Switch is set to "Z." 1 THROUGH 6 - Six storage registers for storing keys in KYK-13. References Key management National Security Agency encryption devices Military equipment introduced in the 1970s
https://en.wikipedia.org/wiki/Password%20notification%20email
Password notification email is a common password recovery technique used by websites. If a user forgets their password, then a password notification email is sent containing enough information for the user to access their account again. This method of password retrieval relies on the assumption that only the legitimate owner of the account has access to the inbox for that particular email address. The process is often initiated by the user clicking a forgotten-password link on the website where, after entering their username or email address, the password notification email is automatically sent to the inbox of the account holder. This email may contain a temporary password or a URL that can be followed to enter a new password for that account. The new password or the URL often contains a randomly generated string of text that can only be obtained by reading that particular email. Another method used is to send all or part of the original password in the email. Sending only a few characters of the password can help the user remember their original password without revealing the whole password to them.

Security concerns

The main issue is that the contents of the password notification email can easily be discovered by anyone with access to the inbox of the account owner. This could be the result of shoulder surfing or of the inbox itself not being password protected. The contents could then be used to compromise the security of the account. The user therefore has the responsibility of either securely deleting the email or ensuring that its contents are not revealed to anyone else. A partial solution to this problem is to cause any links contained within the email to expire after a period of time, making the email useless if it is not used quickly after it is sent. Any method that sends part of the original password means that the password is stored in plain text and leaves the password open to attack from hackers. This is why it is typi
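As a rough illustration of the randomly generated, expiring reset link described above, the following Python sketch issues a single-use token and validates it on redemption. The function names, the 30-minute lifetime and the in-memory store are assumptions made for the example, not the behaviour of any particular website.

# A minimal sketch of the reset-link approach: generate a single-use random
# token with an expiry time, and store only a hash of it server side.
import hashlib
import secrets
from datetime import datetime, timedelta, timezone

# pretend server-side store: token hash -> (username, expiry time)
pending_resets = {}

def issue_reset_token(username: str, lifetime_minutes: int = 30) -> str:
    """Create a random reset token; the raw token goes into the emailed URL."""
    token = secrets.token_urlsafe(32)  # unguessable random string
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    expires = datetime.now(timezone.utc) + timedelta(minutes=lifetime_minutes)
    pending_resets[token_hash] = (username, expires)
    return token

def redeem_reset_token(token: str):
    """Return the username if the token is valid and unexpired, else None."""
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    entry = pending_resets.pop(token_hash, None)  # single use: remove on first try
    if entry is None:
        return None
    username, expires = entry
    if datetime.now(timezone.utc) > expires:
        return None
    return username

# Example: the emailed URL might look like https://example.com/reset?token=<raw token>
raw = issue_reset_token("alice")
print(redeem_reset_token(raw))   # "alice"
print(redeem_reset_token(raw))   # None (already used)

Storing only the hash means that even someone who later reads the server's database cannot reconstruct a usable reset link, and the expiry limits the window in which a discovered email remains dangerous.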
https://en.wikipedia.org/wiki/Filamentation
Filamentation is the anomalous growth of certain bacteria, such as Escherichia coli, in which cells continue to elongate but do not divide (no septa formation). The cells that result from elongation without division have multiple chromosomal copies. In the absence of antibiotics or other stressors, filamentation occurs at a low frequency in bacterial populations (4–8% short filaments and 0–5% long filaments in 1- to 8-hour cultures). The increased cell length can protect bacteria from protozoan predation and neutrophil phagocytosis by making ingestion of cells more difficult. Filamentation is also thought to protect bacteria from antibiotics, and is associated with other aspects of bacterial virulence such as biofilm formation. The number and length of filaments within a bacterial population increase when the bacteria are exposed to different physical, chemical and biological agents (e.g. UV light, DNA synthesis-inhibiting antibiotics, bacteriophages). This is termed conditional filamentation. Some of the key genes involved in filamentation in E. coli include sulA, minCD and damX.

Filament formation

Antibiotic-induced filamentation

Some peptidoglycan synthesis inhibitors (e.g. cefuroxime, ceftazidime) induce filamentation by inhibiting the penicillin-binding proteins (PBPs) responsible for crosslinking peptidoglycan at the septal wall (e.g. PBP3 in E. coli and P. aeruginosa). Because the PBPs responsible for lateral wall synthesis are relatively unaffected by cefuroxime and ceftazidime, cell elongation proceeds without any cell division and filamentation is observed. DNA synthesis-inhibiting and DNA-damaging antibiotics (e.g. metronidazole, mitomycin C, the fluoroquinolones, novobiocin) induce filamentation via the SOS response. The SOS response inhibits septum formation until the DNA can be repaired; this delay stops the transmission of damaged DNA to progeny. Bacteria inhibit septation by synthesizing the protein SulA, an FtsZ inhibitor that halts Z-ring formation.
https://en.wikipedia.org/wiki/Charge-transfer%20amplifier
The charge-transfer amplifier (CTA) is an electronic amplifier circuit. Also known as transconveyance amplifiers, CTAs amplify electronic signals by dynamically conveying charge between capacitive nodes in proportion to the size of a differential input voltage. By appropriately selecting the relative node capacitances, voltage amplification is obtained through the charge–voltage relationship of capacitors (Q = CV). CTAs are clocked, or sampling, amplifiers. They consume zero static power and can be designed to consume (theoretically) arbitrarily low dynamic power, proportional to the size of the input signals being sampled. CMOS technology is most commonly used for implementation. CTAs were introduced in memory circuits in the 1970s, and more recently have been applied in multi-bit analog-to-digital converters (ADCs). They are also used in dynamic voltage comparator circuits.

See also
Comparator
Mixed-signal integrated circuit
Charge amplifier

References

Electronic amplifiers
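The capacitance-ratio gain implied by the charge–voltage relationship can be illustrated numerically. The following Python sketch is an idealised model under assumed component values (1 pF transfer capacitance, 0.1 pF output capacitance); it ignores charge injection, parasitics and clocking non-idealities present in real CTAs.

# Idealised sketch of the capacitance-ratio gain principle: a charge packet
# proportional to the input voltage is conveyed from a transfer capacitor C_T
# onto a smaller output capacitor C_O, so the output voltage step is scaled
# by C_T / C_O. Component values are arbitrary assumptions for the example.

C_T = 1.0e-12   # transfer (input-side) capacitance, 1 pF (assumed)
C_O = 0.1e-12   # output node capacitance, 0.1 pF (assumed)

def cta_output_step(delta_v_in: float) -> float:
    """Ideal output voltage change for a given differential input step."""
    delta_q = C_T * delta_v_in   # charge conveyed during the transfer phase (Q = C * V)
    return delta_q / C_O         # same charge on a smaller capacitor -> larger voltage

for dv in (1e-3, 5e-3, 10e-3):   # small differential inputs, in volts
    print(f"dVin = {dv*1e3:.1f} mV -> dVout = {cta_output_step(dv)*1e3:.1f} mV")

In this ideal case the voltage gain equals C_T / C_O = 10, i.e. it is set purely by a ratio of capacitances rather than by transistor transconductance, which is why the circuit can be made to consume so little dynamic power.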
https://en.wikipedia.org/wiki/Sicherman%20dice
Sicherman dice are a pair of 6-sided dice with non-standard numbers: one with the sides 1, 2, 2, 3, 3, 4 and the other with the sides 1, 3, 4, 5, 6, 8. They are notable as the only pair of 6-sided dice that are not normal dice, bear only positive integers, and have the same probability distribution for the sum as normal dice. They were invented in 1978 by George Sicherman of Buffalo, New York.

Mathematics

A standard exercise in elementary combinatorics is to calculate the number of ways of rolling any given value with a pair of fair six-sided dice (by taking the sum of the two rolls). The table shows the number of such ways of rolling a given value:

Total:           2  3  4  5  6  7  8  9  10  11  12
Number of ways:  1  2  3  4  5  6  5  4  3   2   1

Crazy dice is a mathematical exercise in elementary combinatorics, involving a re-labeling of the faces of a pair of six-sided dice to reproduce the same frequency of sums as the standard labeling. The Sicherman dice are crazy dice that are re-labeled with only positive integers. (If the integers need not be positive, to get the same probability distribution, the number on each face of one die can be decreased by k and that of the other die increased by k, for any natural number k, giving infinitely many solutions.) The table below lists all possible totals of dice rolls with standard dice and Sicherman dice. One Sicherman die is coloured for clarity: 1–2–2–3–3–4, and the other is all black, 1–3–4–5–6–8.

History

The Sicherman dice were discovered by George Sicherman of Buffalo, New York and were originally reported by Martin Gardner in a 1978 article in Scientific American. The numbers can be arranged so that all pairs of numbers on opposing sides sum to equal numbers, 5 for the first and 9 for the second. Later, in a letter to Sicherman, Gardner mentioned that a magician he knew had anticipated Sicherman's discovery. For generalizations of the Sicherman dice to more than two dice and noncubical dice, see Broline (1979), Gallian and Rusin (1979), Brunson and Swift (1997/1998), and Fowler and Swift (1999). M
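The equality of the two sum distributions can be checked directly by enumerating all 36 ordered face pairs, as in the short Python sketch below (standard library only); the variable names are chosen for the example.

# Verify that the Sicherman pair and two standard dice give identical
# distributions of sums over all 36 equally likely face pairs.
from collections import Counter
from itertools import product

standard_die = [1, 2, 3, 4, 5, 6]
sicherman_a  = [1, 2, 2, 3, 3, 4]
sicherman_b  = [1, 3, 4, 5, 6, 8]

def sum_distribution(die1, die2):
    """Count how many of the 36 ordered face pairs give each total."""
    return Counter(a + b for a, b in product(die1, die2))

standard  = sum_distribution(standard_die, standard_die)
sicherman = sum_distribution(sicherman_a, sicherman_b)

print(standard == sicherman)          # True: identical distributions
for total in range(2, 13):
    print(total, standard[total], sicherman[total])

Equivalently, in terms of generating functions, the face polynomials factor the same way: (x + x^2 + x^3 + x^4 + x^5 + x^6)^2 = (x + 2x^2 + 2x^3 + x^4)(x + x^3 + x^4 + x^5 + x^6 + x^8), which is the standard way of proving that the Sicherman labeling is the only positive-integer alternative.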
https://en.wikipedia.org/wiki/Reliable%20Server%20Pooling
Reliable Server Pooling (RSerPool) is a computer protocol framework for management of and access to multiple, coordinated (pooled) servers. RSerPool is an IETF standard, which has been developed by the IETF RSerPool Working Group and documented in RFC 5351, RFC 5352, RFC 5353, RFC 5354, RFC 5355 and RFC 5356.

Introduction

In the terminology of RSerPool, a server is denoted as a Pool Element (PE). In its Pool, it is identified by its Pool Element Identifier (PE ID), a 32-bit number. The PE ID is randomly chosen upon a PE's registration to its pool. The set of all pools is denoted as the Handlespace. In older literature, it may be denoted as Namespace. This denomination has been dropped in order to avoid confusion with the Domain Name System (DNS). Each pool in a handlespace is identified by a unique Pool Handle (PH), which is represented by an arbitrary byte vector. Usually, this is an ASCII or Unicode name of the pool, e.g. "DownloadPool" or "WebServerPool". Each handlespace has a certain scope (e.g. an organization or company), denoted as Operation Scope. It is explicitly not a goal of RSerPool to manage the global Internet's pools within a single handlespace. Due to the localisation of Operation Scopes, it is possible to keep the handlespace "flat". That is, PHs do not have any hierarchy, in contrast to the Domain Name System with its top-level and sub-domains. This constraint results in a significant simplification of handlespace management. Within an operation scope, the handlespace is managed by redundant Pool Registrars (PR), also denoted as ENRP servers or Name Servers (NS). PRs have to be redundant in order to avoid a PR becoming a Single Point of Failure (SPoF). Each PR of an operation scope is identified by its Registrar ID (PR ID), which is a 32-bit random number. It is not necessary to ensure uniqueness of PR IDs. A PR contains a complete copy of the operation scope's handlespace. PRs of an operation scope synchronize their view of the handlespace using the Endpoint Handlespace Redundancy Protocol (ENRP).
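To make the terminology concrete, the following Python sketch models a registrar's handlespace as pools keyed by a Pool Handle (an arbitrary byte vector), each holding Pool Elements under randomly chosen 32-bit PE IDs. The class and field names and the address format are invented for illustration; this is not an implementation of the ASAP or ENRP protocols.

# Illustrative data-structure sketch of a handlespace: Pool Handle -> PE ID -> PE.
import secrets
from dataclasses import dataclass, field

@dataclass
class PoolElement:
    pe_id: int     # 32-bit Pool Element Identifier, chosen randomly at registration
    address: str   # placeholder transport address, e.g. "10.0.0.5:8000"

@dataclass
class Handlespace:
    pools: dict = field(default_factory=dict)

    def register(self, pool_handle: bytes, address: str) -> int:
        """Register a server in a pool, choosing a random 32-bit PE ID."""
        pool = self.pools.setdefault(pool_handle, {})
        pe_id = secrets.randbits(32)
        pool[pe_id] = PoolElement(pe_id, address)
        return pe_id

    def deregister(self, pool_handle: bytes, pe_id: int) -> None:
        """Remove a Pool Element from its pool, if present."""
        self.pools.get(pool_handle, {}).pop(pe_id, None)

# Example: two servers join the "DownloadPool" within one operation scope.
hs = Handlespace()
hs.register(b"DownloadPool", "10.0.0.5:8000")
hs.register(b"DownloadPool", "10.0.0.6:8000")
print(len(hs.pools[b"DownloadPool"]))  # 2

Because the Pool Handle is just a flat byte vector, lookup is a single dictionary access here; there is no hierarchical resolution step of the kind the Domain Name System requires.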
https://en.wikipedia.org/wiki/Aggregate%20Server%20Access%20Protocol
As a communications protocol, the Aggregate Server Access Protocol is used by the Reliable server pooling (RSerPool) framework for the communication between
Pool Elements and Pool Registrars (Application Layer)
Pool Users and Pool Registrars (Application Layer)
Pool Users and Pool Elements (Session Layer)

Standards Documents
Aggregate Server Access Protocol (ASAP)
Aggregate Server Access Protocol (ASAP) and Endpoint Handlespace Redundancy Protocol (ENRP) Parameters
Threats Introduced by Reliable Server Pooling (RSerPool) and Requirements for Security in Response to Threats
Reliable Server Pooling Policies

External links
Thomas Dreibholz's Reliable Server Pooling (RSerPool) Page
IETF RSerPool Working Group

Internet protocols
Internet Standards
Session layer protocols
https://en.wikipedia.org/wiki/Endpoint%20Handlespace%20Redundancy%20Protocol
The Endpoint Handlespace Redundancy Protocol is used by the Reliable server pooling (RSerPool) framework for the communication between Pool Registrars to maintain and synchronize a handlespace. Like the Aggregate Server Access Protocol, it operates at the application layer. It was developed within the IETF RSerPool Working Group and is specified in RFC 5353.

External links
Thomas Dreibholz's Reliable Server Pooling (RSerPool) Page
IETF RSerPool Working Group
Endpoint Handlespace Redundancy Protocol (ENRP)
Aggregate Server Access Protocol (ASAP) and Endpoint Handlespace Redundancy Protocol (ENRP) Parameters
Threats Introduced by Reliable Server Pooling (RSerPool) and Requirements for Security in Response to Threats
Reliable Server Pooling Policies

Internet protocols
Internet Standards
Session layer protocols