https://en.wikipedia.org/wiki/Coherent%20control
Coherent control is a quantum mechanics-based method for controlling dynamic processes by light. The basic principle is to control quantum interference phenomena, typically by shaping the phase of laser pulses. The basic ideas have proliferated, finding vast application in spectroscopy, mass spectra, quantum information processing, laser cooling, ultracold physics and more. Brief History The initial idea was to control the outcome of chemical reactions. Two approaches were pursued: in the time domain, a "pump-dump" scheme where the control is the time delay between pulses; in the frequency domain, interfering pathways controlled by one and three photons. The two basic methods eventually merged with the introduction of optimal control theory. Experimental realizations soon followed in the time domain and in the frequency domain. Two interlinked developments accelerated the field of coherent control: experimentally, it was the development of pulse shaping by a spatial light modulator and its employment in coherent control. The second development was the idea of automatic feedback control and its experimental realization. Controllability Coherent control aims to steer a quantum system from an initial state to a target state via an external field. For given initial and final (target) states, the coherent control is termed state-to-state control. A generalization is steering simultaneously an arbitrary set of initial pure states to an arbitrary set of final states, i.e. controlling a unitary transformation. Such an application sets the foundation for a quantum gate operation. Controllability of a closed quantum system has been addressed by Tarn and Clark. Their theorem, based in control theory, states that for a finite-dimensional, closed quantum system, the system is completely controllable, i.e. an arbitrary unitary transformation of the system can be realized by an appropriate application of the controls if the control operators and the unperturbed Hamiltonian
https://en.wikipedia.org/wiki/Robinson%20arithmetic
In mathematics, Robinson arithmetic is a finitely axiomatized fragment of first-order Peano arithmetic (PA), first set out by Raphael M. Robinson in 1950. It is usually denoted Q. Q is almost PA without the axiom schema of mathematical induction. Q is weaker than PA but it has the same language, and both theories are incomplete. Q is important and interesting because it is a finitely axiomatized fragment of PA that is recursively incompletable and essentially undecidable. Axioms The background logic of Q is first-order logic with identity, denoted by infix '='. The individuals, called natural numbers, are members of a set called N with a distinguished member 0, called zero. There are three operations over N: a unary operation called successor, denoted by prefix S, and two binary operations, addition and multiplication, denoted by infix + and ·, respectively. The following axioms for Q are numbered Q1–Q7 (cf. also the axioms of first-order arithmetic). Variables not bound by an existential quantifier are bound by an implicit universal quantifier.
(1) Sx ≠ 0. 0 is not the successor of any number.
(2) (Sx = Sy) → x = y. If the successor of x is identical to the successor of y, then x and y are identical. (1) and (2) yield the minimum of facts about N (it is an infinite set bounded by 0) and S (it is an injective function whose domain is N) needed for non-triviality. The converse of (2) follows from the properties of identity.
(3) y = 0 ∨ ∃x (Sx = y). Every number is either 0 or the successor of some number. The axiom schema of mathematical induction present in arithmetics stronger than Q turns this axiom into a theorem.
(4) x + 0 = x
(5) x + Sy = S(x + y). (4) and (5) are the recursive definition of addition.
(6) x·0 = 0
(7) x·Sy = (x·y) + x. (6) and (7) are the recursive definition of multiplication.
Variant axiomatizations The axioms in Robinson (1950) are numbered (1)–(13). The first 6 of Robinson's 13 axioms are required only when, unlike here, the background logic does not include identity. The usual stri
https://en.wikipedia.org/wiki/Capacitance%20multiplier
A capacitance multiplier is designed to make a capacitor function like a much larger capacitor. This can be achieved in at least two ways: an active circuit, using a device such as a transistor or operational amplifier, or a passive circuit, using autotransformers. Passive multipliers are typically used for calibration standards; the General Radio / IET Labs 1417 is one such example. Capacitance multipliers make low-frequency filters and long-duration timing circuits possible that would be impractical with actual capacitors. Another application is in DC power supplies where very low ripple voltage (under load) is of paramount importance, such as in class-A amplifiers. Transistor-based Here the capacitance of capacitor C1 is multiplied by approximately the transistor's current gain (β). Without Q, R2 would be the load on the capacitor. With Q in place, the loading imposed upon C1 is simply the load current reduced by a factor of (β + 1). Consequently, C1 appears multiplied by a factor of (β + 1) when viewed by the load. Another way to look at this circuit is as an emitter follower in which capacitor C1 holds the base voltage constant; the load presented to C1 is the input impedance of Q1, roughly R2 multiplied by (1 + β), so the output is stabilized much more effectively against power-line voltage noise. Operational amplifier based Here, the capacitance of capacitor C1 is multiplied by the ratio of resistances: C = C1 * R1 / R2 at the Vi node. The synthesized capacitance also brings a series resistance approximately equal to R2, and a leakage current appears across the capacitance because of the input offsets of OP. These problems can be avoided by a circuit with two op amps. In this circuit the input to OP1 can be a.c.-coupled if necessary, and the capacitance can be made variable by making the ratio of R1 to R2 variable. C = C1 * (1 + (R2 / R1)). In the circuits described above the capacitance is grounded, but floating capacitance multipliers are possible. A negative capacitance multiplier can be create
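For a quick numerical feel for the multiplication factors just described, here is a minimal Python sketch; the component values (10 µF, β = 200, R1 = 1 kΩ, R2 = 100 kΩ) are illustrative assumptions rather than values taken from the text.

```python
# Illustrative sketch of the capacitance-multiplication factors described above.
# Component values are assumptions for the example, not taken from the article.

def transistor_multiplier(c1_farads, beta):
    """Effective capacitance seen by the load: C1 appears multiplied by (beta + 1)."""
    return c1_farads * (beta + 1)

def two_opamp_multiplier(c1_farads, r1_ohms, r2_ohms):
    """Two-op-amp circuit: C = C1 * (1 + R2 / R1)."""
    return c1_farads * (1 + r2_ohms / r1_ohms)

if __name__ == "__main__":
    c1 = 10e-6  # 10 uF physical capacitor (assumed)
    print(transistor_multiplier(c1, beta=200))                    # ~2.01 mF
    print(two_opamp_multiplier(c1, r1_ohms=1e3, r2_ohms=100e3))   # ~1.01 mF
```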
https://en.wikipedia.org/wiki/Evolutionarily%20stable%20state
A population can be described as being in an evolutionarily stable state when that population's "genetic composition is restored by selection after a disturbance, provided the disturbance is not too large" (Maynard Smith, 1982). This population as a whole can be either monomorphic or polymorphic. This is now referred to as convergent stability. History & connection to evolutionarily stable strategy While related to the concept of an evolutionarily stable strategy (ESS), evolutionarily stable states are not identical and the two terms cannot be used interchangeably. An ESS is a strategy that, if adopted by all individuals within a population, cannot be invaded by alternative or mutant strategies. This strategy becomes fixed in the population because alternatives provide no fitness benefit that would be selected for. In comparison, an evolutionarily stable state describes a population that returns as a whole to its previous composition even after being disturbed. In short: the ESS refers to the strategy itself, uninterrupted and supported through natural selection, while the evolutionarily stable state refers more broadly to a population-wide balance of one or more strategies that may be subjected to temporary change. The term ESS was first used by John Maynard Smith in an essay from the 1972 book On Evolution. Maynard Smith developed the ESS drawing in part from game theory and Hamilton's work on the evolution of sex ratio. The ESS was later expanded upon in his book Evolution and the Theory of Games in 1982, which also discussed the evolutionarily stable state. Mixed v. single strategies There has been variation in how the term is used and exploration of under what conditions an evolutionarily stable state might exist. In 1984, Bernhard Thomas compared "discrete" models in which all individuals use only one strategy to "continuous" models in which individuals employ mixed strategies. While Maynard Smith had originally defined an ESS as being a single "uninvadable
https://en.wikipedia.org/wiki/English%20in%20computing
The English language is sometimes described as the lingua franca of computing. In comparison to other sciences, where Latin and Greek are often the principal sources of vocabulary, computer science borrows more extensively from English. In the past, due to the technical limitations of early computers, and the lack of international standardization on the Internet, computer users were limited to using English and the Latin alphabet. However, this historical limitation is less present today, due to innovations in internet infrastructure and increases in computer speed. Most software products are localized in numerous languages and the invention of the Unicode character encoding has resolved problems with non-Latin alphabets. Some limitations have only been changed recently, such as with domain names, which previously allowed only ASCII characters. English is seen as having this role due to the prominence of the United States and the United Kingdom, both English-speaking countries, in the development and popularization of computer systems, computer networks, software and information technology. History Computer science has an ultimately mathematical foundation which was laid by non-English speaking cultures. The first mathematically literate societies in the Ancient Near East recorded methods for solving mathematical problems in steps. The word 'algorithm' comes from the name of a famous medieval Arabic mathematician who contributed to the spread of Hindu-Arabic numerals, al-Khwārizmī, and the first systematic treatment of binary numbers was completed by Leibniz, a German mathematician. Leibniz wrote his treatise on the topic in French, the lingua franca of science at the time, and innovations in what is now called computer hardware occurred outside of an English tradition, with Pascal inventing the first mechanical calculator, and Leibniz improving it. Interest in building computing machines first emerged in the 19th century, with the coming of the Second Industrial
https://en.wikipedia.org/wiki/National%20Archive%20of%20Computerized%20Data%20on%20Aging
The National Archive of Computerized Data on Aging (NACDA), located within ICPSR, is funded by the National Institute on Aging (NIA). NACDA's mission is to advance research on aging by helping researchers to profit from the under-exploited potential of a broad range of datasets. NACDA acquires and preserves data relevant to gerontological research, processes it as needed to promote effective research use, disseminates it to researchers, and facilitates its use. By preserving and making available the largest library of electronic data on aging in the United States, NACDA offers opportunities for secondary analysis on major issues of scientific and policy relevance. Description NACDA is a program within the Inter-university Consortium for Political and Social Research (ICPSR) at the University of Michigan. The NACDA collection consists of over sixteen hundred datasets relevant to gerontological research and represents the world's largest collection of publicly available research data on the aging lifecourse. History The NACDA Program on Aging began over 45 years ago under the sponsorship of the United States Administration on Aging (AoA). At that time NACDA was seen as a novel experiment - neither the concept of a research archive devoted to aging issues nor the idea of making research data freely available to the public was well established. Over the years, NACDA's mission has changed both in scope and in direction. Originally conceived as a storehouse for data, NACDA has aggressively pursued a role of increasing involvement in the research community by actively promoting and distributing data. In 1984, the NIA became the sponsor of the National Archive of Computerized Data on Aging, and NACDA has flourished under its support. Over the years, NACDA has evolved and grown in response to changes in technology, in many instances leading the pace of change in methodology related to the storage, protection, and distribution of data. NACDA was one of the first organizations
https://en.wikipedia.org/wiki/Heyting%20arithmetic
In mathematical logic, Heyting arithmetic (HA) is an axiomatization of arithmetic in accordance with the philosophy of intuitionism. It is named after Arend Heyting, who first proposed it. Axiomatization Heyting arithmetic can be characterized just like the first-order theory of Peano arithmetic (PA), except that it uses the intuitionistic predicate calculus for inference. In particular, this means that the double-negation elimination principle, as well as the principle of the excluded middle (PEM), do not hold. Note that to say PEM does not hold exactly means that the excluded middle statement φ ∨ ¬φ is not automatically provable for all propositions φ - indeed many such statements are still provable in HA and the negation of any such disjunction is inconsistent. PA is strictly stronger than HA in the sense that all HA-theorems are also PA-theorems. Heyting arithmetic comprises the axioms of Peano arithmetic and the intended model is the collection of natural numbers ℕ. The signature includes zero "0" and the successor "S", and the theories characterize addition and multiplication. This impacts the logic: since the axioms rule out 0 = S0, it is a metatheorem that the absurdity ⊥ can be defined as 0 = S0, so that ¬φ can be taken to be φ → (0 = S0) for every proposition φ. The negation of 0 = S0 is then of the form (0 = S0) → (0 = S0) and thus a trivial proposition. For numbers, write n for the numeral Sⁿ0. For a fixed number n, the equality n = n is true by reflexivity, and a proposition φ is equivalent to φ ∧ (n = n). It may be shown that the disjunction φ ∨ ψ can then be defined as ∃n ((n = 0 → φ) ∧ (¬(n = 0) → ψ)). This formal elimination of disjunctions was not possible in the quantifier-free primitive recursive arithmetic (PRA). The theory may be extended with function symbols for any primitive recursive function, making PRA also a fragment of this theory. For a total function f, one often considers predicates of the form f(x) = y. Theorems Double negations With explosion valid in any intuitionistic theory, if ¬φ is a theorem for some φ, then by definition φ is provable if and only if the theory is inconsistent. Indeed, in Heyting arithmetic the double negation ¬¬φ explicitly expresses (φ → (0 = S0)) → (0 = S0). For a predicate,
https://en.wikipedia.org/wiki/Luminous%20energy
In photometry, luminous energy is the perceived energy of light. This is sometimes called the quantity of light. Luminous energy is not the same as radiant energy, the corresponding objective physical quantity. This is because the human eye can only see light in the visible spectrum and has different sensitivities to light of different wavelengths within the spectrum. When adapted for bright conditions (photopic vision), the eye is most sensitive to light at a wavelength of 555 nm. Light with a given amount of radiant energy will have more luminous energy if the wavelength is 555 nm than if the wavelength is longer or shorter. Light whose wavelength is well outside the visible spectrum has a luminous energy of zero, regardless of the amount of radiant energy present. The SI unit of luminous energy is the lumen second, which is unofficially known as the talbot in honor of William Henry Fox Talbot. In other systems of units, luminous energy may be expressed in basic units of energy. Explanation Luminous energy is related to radiant energy by the expression Q_v = 683 lm/W · ∫ V(λ) Q_e,λ dλ. Here λ is the wavelength of light, Q_e,λ is the spectral radiant energy per unit wavelength, and V(λ) is the luminosity function, which represents the eye's sensitivity to different wavelengths of light. Luminous energy is the integrated luminous flux Φ_v in a given period of time: Q_v = ∫ Φ_v(t) dt. See also Coefficient of utilization Radiant energy References Physical quantities Photometry
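As a rough illustration of the integral above, the following Python sketch numerically integrates an assumed flat spectral radiant energy distribution against a Gaussian approximation of the luminosity function V(λ); the real CIE V(λ) is tabulated, so both the example spectrum and the Gaussian shape are assumptions for demonstration only.

```python
import numpy as np

# Sketch: luminous energy from a spectral radiant energy distribution.
# V(lambda) is approximated by a Gaussian centred at 555 nm; the CIE function
# is tabulated in practice, so this is only illustrative.

K_M = 683.0  # lm/W, peak photopic luminous efficacy

def v_lambda(wavelength_nm):
    return np.exp(-0.5 * ((wavelength_nm - 555.0) / 45.0) ** 2)  # rough approximation

def luminous_energy(wavelengths_nm, spectral_radiant_energy_j_per_nm):
    """Q_v = K_m * integral of V(lambda) * Q_e,lambda dlambda, in lumen seconds."""
    return K_M * np.trapz(v_lambda(wavelengths_nm) * spectral_radiant_energy_j_per_nm,
                          wavelengths_nm)

wl = np.linspace(400, 700, 301)
q_e = np.full_like(wl, 1e-3)   # flat 1 mJ/nm spectrum (assumed example)
print(luminous_energy(wl, q_e), "lm*s")
```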
https://en.wikipedia.org/wiki/AMTOR
AMTOR (Amateur Teleprinting Over Radio) is a type of telecommunications system that consists of two or more electromechanical teleprinters in different locations that send and receive messages to one another. AMTOR is a specialized form of RTTY protocol. The term is an acronym for Amateur Teleprinting Over Radio; the protocol is derived from ITU-R recommendation 476-1 and is known commercially as SITOR (Simplex Telex Over Radio), developed primarily for maritime use in the 1970s. AMTOR was developed in 1978 by Peter Martinez, G3PLX, with the first contact taking place in September 1978 with G3YYD on the 2m Amateur band. It was developed on homemade Motorola 6800-based microcomputers in assembler code. It was used extensively by amateur radio operators in the 1980s and 1990s but has now fallen out of use, as improved PC-based data modes have replaced it and teleprinters have gone out of fashion. AMTOR improves on RTTY by incorporating error detection or error correction techniques. The protocol remains relatively uncomplicated and AMTOR performs well even in poor and noisy HF conditions. AMTOR operates in one of two modes: an error detection mode and an automatic repeat request (ARQ) mode. The AMTOR protocol utilizes a 7-bit code for each character, with each code word having four mark and three space bits. If the received code does not match a four-to-three (4:3) ratio, the receiver assumes an error has occurred. In error detection mode, the code word will be dropped; in automatic repeat request mode, the receiver requests that the original data be resent. AMTOR also supports FEC (forward error correction), in which simple bit errors can be corrected. AMTOR utilizes FSK, with a frequency shift of 170 Hz, and a symbol rate of 100 baud. AMTOR is rarely used today, as other protocols such as PSK31 are becoming favoured by amateur operators for real-time text communications. The ARRL has announced that as of August 17, 2009, it will be dropping AMTOR bulletin service in favor of the more popular MFSK16 and P
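A minimal Python sketch of the constant-ratio check described above; the two sample words are arbitrary bit patterns chosen to illustrate a valid and an invalid case, not actual AMTOR character assignments.

```python
# Sketch of the AMTOR/SITOR constant-ratio check: every valid 7-bit code word
# carries exactly four mark bits and three space bits, so a received word that
# breaks the 4:3 ratio is flagged as an error.

def is_valid_amtor_word(word: int) -> bool:
    """Return True if the 7-bit word has exactly 4 mark (1) bits."""
    if not 0 <= word < 128:
        raise ValueError("expected a 7-bit code word")
    return bin(word).count("1") == 4

print(is_valid_amtor_word(0b1011100))  # True  (four marks, three spaces)
print(is_valid_amtor_word(0b1111100))  # False (five marks -> error detected)
```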
https://en.wikipedia.org/wiki/Zero-sum%20problem
In number theory, zero-sum problems are certain kinds of combinatorial problems about the structure of a finite abelian group. Concretely, given a finite abelian group G and a positive integer n, one asks for the smallest value of k such that every sequence of elements of G of size k contains n terms that sum to 0. The classic result in this area is the 1961 theorem of Paul Erdős, Abraham Ginzburg, and Abraham Ziv. They proved that for the group of integers modulo n, the smallest such value of k is 2n − 1. Explicitly this says that any multiset of 2n − 1 integers has a subset of size n the sum of whose elements is a multiple of n, but that the same is not true of multisets of size 2n − 2. (Indeed, the lower bound is easy to see: the multiset containing n − 1 copies of 0 and n − 1 copies of 1 contains no n-subset summing to a multiple of n.) This result is known as the Erdős–Ginzburg–Ziv theorem after its discoverers. It may also be deduced from the Cauchy–Davenport theorem. More general results than this theorem exist, such as Olson's theorem, Kemnitz's conjecture (proved by Christian Reiher in 2003), and the weighted EGZ theorem (proved by David J. Grynkiewicz in 2005). See also Davenport constant Subset sum problem References External links PlanetMath Erdős, Ginzburg, Ziv Theorem Sun, Zhi-Wei, "Covering Systems, Restricted Sumsets, Zero-sum Problems and their Unification" Further reading Zero-sum problems - A survey (open-access journal article) Zero-Sum Ramsey Theory: Graphs, Sequences and More (workshop homepage) Arie Bialostocki, "Zero-sum trees: a survey of results and open problems" N.W. Sauer (ed.) R.E. Woodrow (ed.) B. Sands (ed.), Finite and Infinite Combinatorics in Sets and Logic, Nato ASI Ser., Kluwer Acad. Publ. (1993) pp. 19–29 Y. Caro, "Zero-sum problems: a survey" Discrete Math., 152 (1996) pp. 93–113 Ramsey theory Combinatorics Paul Erdős Mathematical problems
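The theorem is easy to check by brute force for small n; the Python sketch below does so for n = 5, including the extremal multiset from the text.

```python
from itertools import combinations

# Brute-force illustration of the Erdős–Ginzburg–Ziv theorem for small n:
# among any 2n-1 integers there are n whose sum is divisible by n.

def has_zero_sum_subset(values, n):
    return any(sum(c) % n == 0 for c in combinations(values, n))

n = 5
# The extremal multiset: n-1 zeros and n-1 ones (size 2n-2) has no such subset.
print(has_zero_sum_subset([0] * (n - 1) + [1] * (n - 1), n))   # False
# Any multiset of size 2n-1 does, e.g. after adding one more element.
print(has_zero_sum_subset([0] * (n - 1) + [1] * n, n))         # True
```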
https://en.wikipedia.org/wiki/I-drive
i-drive was a file hosting service that operated from 1998 to 2002. The name derived from the words "Internet drive". History Based in San Francisco, the company was founded in 1998 with seed investors and launched its first product, an online file storage service in August 1999. The idea originated from an early company Jeff Bonforte co-founded in 1996 called ShellServer.net, which provided 10 MB of space for IRC users. Bonforte compiled the founding team, which included Chris Lindland, Patrick Fenton, Tim Craycroft, Rich MacAlmon, John Reddig and Lou Perrelli (the last three were also the company's first angel investors). Originally presented as i-drive.com, the company acquired the domain idrive.com around October 1999. The initial product offered a limited amount of free file storage space, and later enhanced the offering with 'sideloading' – storing files such as MP3 files collected on the World Wide Web without the need for the user to download them to their individual computer. In January 2000, the company began offering unlimited storage space and an application called Filo. In 2001 the company transitioned from offering the free storage service and transformed the underlying software architecture into a middleware storage mechanism and product, seeking to sell into various markets including the 3G marketplace, targeting companies such as DoCoMo and Earthlink. In January 2002 the company name was changed to Anuvio Technologies. i-drive's assets were acquired by the EMC Corporation in 2002. Certain assets (including the idrive.com domain name) were acquired by Pro Softnet Corp which also offered online storage services. At its height, i-drive hosted over 10 million registered users, employed 110 people, and held partnerships with MP3.com, ZDnet.com, and 40 major universities. The service was rated as a "Top 5 Web Application" by CNET in 2000 and one of the "3 Top Technologies to Watch" by Fortune Magazine in 2000. The company raised over US$30 million fro
https://en.wikipedia.org/wiki/Chromalveolata
Chromalveolata was a eukaryote supergroup present in a major classification of 2005, then regarded as one of the six major groups within the eukaryotes. It was a refinement of the kingdom Chromista, first proposed by Thomas Cavalier-Smith in 1981. Chromalveolata was proposed to represent the organisms descended from a single secondary endosymbiosis involving a red alga and a bikont. The plastids in these organisms are those that contain chlorophyll c. However, the monophyly of the Chromalveolata has been rejected. Two papers published in 2008 present phylogenetic trees in which the chromalveolates are split up, and recent studies continue to support this view. Groups and classification Historically, many chromalveolates were considered plants, because of their cell walls, photosynthetic ability, and in some cases their morphological resemblance to the land plants (Embryophyta). However, when the five-kingdom system (proposed in 1969) took prevalence over the animal–plant dichotomy, most of what we now call chromalveolates were put into the kingdom Protista, but the water molds and slime nets were put into the kingdom Fungi, while the brown algae stayed in the plant kingdom. These various organisms were later grouped together and given the name Chromalveolata by Cavalier-Smith. He believed them to be a monophyletic group, but this is not the case. In 2005, in a classification reflecting the consensus at the time, the Chromalveolata were regarded as one of the six major clades of eukaryotes. Although not given a formal taxonomic status in this classification, elsewhere the group had been treated as a Kingdom. The Chromalveolata were divided into four major subgroups: Cryptophyta, Haptophyta, Stramenopiles (or Heterokontophyta), and Alveolata. Other groups that may be included within, or related to, chromalveolates are: Centrohelids, Katablepharids, Telonemia. Though several groups, such as the ciliates and the water molds, have lost the ability to photosynthesize,
https://en.wikipedia.org/wiki/Blood%20volume
Blood volume (volemia) is the volume of blood (blood cells and plasma) in the circulatory system of any individual. Humans A typical adult has a blood volume of approximately 5 liters, with females and males having approximately the same blood percentage by weight (approximately 7 to 8%). Blood volume is regulated by the kidneys. Blood volume (BV) can be calculated given the hematocrit (HC; the fraction of blood that is red blood cells) and plasma volume (PV), with the hematocrit being regulated via the blood oxygen content regulator: BV = PV / (1 − HC). Blood volume measurement may be used in people with congestive heart failure, chronic hypertension, kidney failure and critical care. The use of relative blood volume changes during dialysis is of questionable utility. Total Blood Volume can be measured manually via the Dual Isotope or Dual Tracer Technique, a classic technique, available since the 1950s. This technique requires double labeling of the blood; that is, 2 injections and 2 standards (51Cr-RBC for tagging red blood cells and I-HSA for tagging plasma volume) as well as withdrawing and re-infusing patients with their own blood for blood volume analysis results. This method may take up to 6 hours for accurate results. The blood volume is 70 ml/kg body weight in adult males, 65 ml/kg in adult females and 70-75 ml/kg in children (1 year old and over). Semi-automated system Blood volume may also be measured semi-automatically. The BVA-100, a product of Daxor Corporation, is an FDA-cleared diagnostic used at leading medical centers in the United States which consists of an automated well counter interfaced with a computer. It is able to report with 98% accuracy within 60 minutes the Total Blood Volume (TBV), Plasma Volume (PV) and Red Cell Volume (RCV) using the indicator dilution principle, microhematocrit centrifugation and the Ideal Height and Weight Method. The indicator, or tracer, is an I-131 albumin injection. An equal amount of the tracer is injected into a known and unknown
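A small Python sketch of the two relations quoted above (BV = PV / (1 − HC) and the ml/kg rules of thumb); the plasma volume, hematocrit and body weight used in the example are assumed values.

```python
# Sketch of the blood-volume relations described above. Example inputs are
# assumed values chosen only to illustrate the arithmetic.

def blood_volume_from_plasma(plasma_volume_ml, hematocrit_fraction):
    """BV = PV / (1 - HC), since plasma is the non-cellular fraction of blood."""
    return plasma_volume_ml / (1.0 - hematocrit_fraction)

def blood_volume_by_weight(weight_kg, ml_per_kg=70):
    """70 ml/kg for adult males, 65 ml/kg for adult females, 70-75 ml/kg for children."""
    return weight_kg * ml_per_kg

print(blood_volume_from_plasma(3000, 0.40))  # 5000 ml
print(blood_volume_by_weight(70))            # 4900 ml
```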
https://en.wikipedia.org/wiki/Graphplan
Graphplan is an algorithm for automated planning developed by Avrim Blum and Merrick Furst in 1995. Graphplan takes as input a planning problem expressed in STRIPS and produces, if one is possible, a sequence of operations for reaching a goal state. The name Graphplan comes from its use of a novel planning graph to reduce the amount of search needed to find a solution, compared with straightforward exploration of the state space graph. In the state space graph, the nodes are possible states, and the edges indicate reachability through a certain action. By contrast, in Graphplan's planning graph the nodes are actions and atomic facts, arranged into alternating levels, and the edges are of two kinds: from an atomic fact to the actions for which it is a condition, and from an action to the atomic facts it makes true or false. The first level contains the true atomic facts identifying the initial state. Lists of incompatible facts that cannot be true at the same time and incompatible actions that cannot be executed together are also maintained. The algorithm then iteratively extends the planning graph, proving that there are no solutions of length l-1 before looking for plans of length l by backward chaining: supposing the goals are true, Graphplan looks for the actions and previous states from which the goals can be reached, pruning as many of them as possible thanks to incompatibility information. A closely related approach to planning is Planning as Satisfiability (Satplan). Both reduce the automated planning problem to a search for plans of different fixed horizon lengths. References A. Blum and M. Furst (1997). Fast planning through planning graph analysis. Artificial intelligence. 90:281-300. External links Avrim Blum's Graphplan home page PLPLAN: A Java GraphPlan Implementation NPlanner: A .NET GraphPlan Implementation Emplan and JavaGP: C++ and Java implementations of Graphplan MIT OpenCourseWare lecture on GraphPlan and making planning graphs Automat
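The following Python sketch illustrates, in a simplified way, how one level of a planning graph can be extended from the facts of the previous level; the STRIPS-style action representation and the single example operator are assumptions for illustration, and mutex bookkeeping and the backward search are omitted.

```python
# Minimal sketch of extending one level of a planning graph: from the facts
# true at level i, collect the applicable STRIPS-style actions, then the facts
# they make true at level i+1. Mutex lists and backward chaining are omitted.

from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    preconds: frozenset
    adds: frozenset
    deletes: frozenset = frozenset()

def extend_level(facts, actions):
    applicable = [a for a in actions if a.preconds <= facts]
    # Implicit "no-op" actions carry every fact forward to the next level.
    next_facts = set(facts)
    for a in applicable:
        next_facts |= a.adds
    return applicable, next_facts

# Hypothetical example operator, not from the article.
move = Action("move(A,B)", frozenset({"at(A)"}), frozenset({"at(B)"}), frozenset({"at(A)"}))
level0 = {"at(A)"}
acts, level1 = extend_level(level0, [move])
print([a.name for a in acts])  # ['move(A,B)']
print(sorted(level1))          # ['at(A)', 'at(B)']
```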
https://en.wikipedia.org/wiki/Static%20timing%20analysis
Static timing analysis (STA) is a method of computing the expected timing of a synchronous digital circuit without requiring a simulation of the full circuit. High-performance integrated circuits have traditionally been characterized by the clock frequency at which they operate. Measuring the ability of a circuit to operate at the specified speed requires an ability to measure, during the design process, its delay at numerous steps. Moreover, delay calculation must be incorporated into the inner loop of timing optimizers at various phases of design, such as logic synthesis, layout (placement and routing), and in in-place optimizations performed late in the design cycle. While such timing measurements can theoretically be performed using a rigorous circuit simulation, such an approach is liable to be too slow to be practical. Static timing analysis plays a vital role in facilitating the fast and reasonably accurate measurement of circuit timing. The speedup comes from the use of simplified timing models and from mostly ignoring logical interactions in circuits. This has become a mainstay of design over the last few decades. One of the earliest descriptions of a static timing approach was based on the Program Evaluation and Review Technique (PERT), in 1966. More modern versions and algorithms appeared in the early 1980s. Purpose In a synchronous digital system, data is supposed to move in lockstep, advancing one stage on each tick of the clock signal. This is enforced by synchronizing elements such as flip-flops or latches, which copy their input to their output when instructed to do so by the clock. Only two kinds of timing errors are possible in such a system: A Max time violation, when a signal arrives too late, and misses the time when it should advance. These are more commonly known as setup violations/checks which actually are a subset of max time violations involving a cycle shift on synchronous paths. A Min time violation, when an input signal ch
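The core arrival-time computation can be sketched as a longest-path traversal of the circuit graph in topological order, much like PERT; the tiny example netlist and its gate delays below are assumed values for illustration.

```python
# Sketch of the core static-timing step: propagate latest arrival times through
# a combinational netlist in topological order (longest path, as in PERT).
# The example circuit and delays are assumptions, not from the article.

from collections import defaultdict

def arrival_times(edges, primary_inputs):
    """edges: list of (src, dst, delay). Returns latest arrival time per node."""
    graph = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set()
    for u, v, d in edges:
        graph[u].append((v, d))
        indeg[v] += 1
        nodes |= {u, v}
    arrival = {n: 0.0 for n in primary_inputs}
    ready = [n for n in nodes if indeg[n] == 0]
    while ready:
        u = ready.pop()
        for v, d in graph[u]:
            arrival[v] = max(arrival.get(v, 0.0), arrival.get(u, 0.0) + d)
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return arrival

edges = [("in1", "g1", 1.0), ("in2", "g1", 1.2), ("g1", "g2", 0.8), ("in2", "g2", 0.5)]
print(arrival_times(edges, {"in1", "in2"}))  # g1 arrives at 1.2, g2 at 2.0
```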
https://en.wikipedia.org/wiki/Digital%20signage
Digital signage is a segment of electronic signage. Digital displays use technologies such as LCD, LED, projection and e-paper to display digital images, video, web pages, weather data, restaurant menus, or text. They can be found in public spaces, transportation systems, museums, stadiums, retail stores, hotels, restaurants and corporate buildings etc., to provide wayfinding, exhibitions, marketing and outdoor advertising. They are used as a network of electronic displays that are centrally managed and individually addressable for the display of text, animated or video messages for advertising, information, entertainment and merchandising to targeted audiences. Roles and function The many different uses of digital signage allow a business to accomplish a variety of goals. Some of the most common applications include: Public information – news, weather, traffic and local (location specific) information, such as building directory with a map, fire exits and traveler information. Internal information - corporate messages, such as health & safety items, news and so forth. Product information – pricing, photos, raw materials or ingredients, suggested applications and other product information - especially useful in food marketing where signage may include nutritional facts or suggested uses or recipes. Information to enhance the customer service experience - interpretive signage in museums, galleries, zoos, parks and gardens, exhibitions, tourist and cultural attractions. Advertising and Promotion – promoting products or services, may be related to the location of the sign or using the screen's audience reach for general advertising. Brand building – in-store digital sign to promote the brand and build a brand identity. Influencing customer behavior – navigation, directing customers to different areas, increasing the "dwell time" on the store premises and a wide range of other uses in service of such influence. Influencing product or brand decision-making - Signage a
https://en.wikipedia.org/wiki/DNA%20bank
DNA banking is the secure, long-term storage of an individual's genetic material. DNA is most commonly extracted from blood, but can also be obtained from saliva and other tissues. DNA banks allow for conservation of genetic material and comparative analysis of an individual's genetic information. Analyzing an individual's DNA can allow scientists to predict genetic disorders, as used in preventive genetics or gene therapy, and prove that person's identity, as used in the criminal justice system. There are multiple methods for testing and analyzing genetic information including restriction fragment length polymorphism (RFLP) and the polymerase chain reaction (PCR). Uses DNA banking is used to conserve genetic material, especially that of organisms that face extinction. This is a more prominent issue today due to deforestation and climate change, which serve as a threat to biodiversity. The genetic information can be stored within lambda phage and plasmid vectors. The National Institute of Agrobiological Sciences (NIAS) DNA Bank, for example, collects the DNA of agricultural organisms, such as rice and fish, for scientific research. Most DNA provided by DNA banks is used for studies to attempt to develop more productive or more environmentally friendly agricultural species. Some DNA banks also store the DNA of rare or endangered species to ensure their survival. The DNA bank can be used to compare and analyze DNA samples. Comparison of DNA samples allowed scientists to work on the Human Genome Project, which maps out many of the genes on human DNA. It has also led to the development of preventive genetics. Samples from the DNA bank have been used to identify patterns and determine which genes lead to specific disorders. Once people know which genes lead to disorders, they can take steps to lessen the effects of those disorders. This can occur through adjustments in lifestyle, as demonstrated in preventive healthcare, or even through gene therapy. DNA can be banked at
https://en.wikipedia.org/wiki/Synthetic%20setae
Synthetic setae emulate the setae found on the toes of a gecko, and scientific research in this area is driven towards the development of dry adhesives. Geckos have no difficulty mastering vertical walls and are apparently capable of adhering themselves to just about any surface. The five-toed feet of a gecko are covered with elastic hairs called setae, and the ends of these hairs are split into nanoscale structures called spatulae (because of their resemblance to actual spatulas). The sheer abundance and proximity to the surface of these spatulae make it sufficient for van der Waals forces alone to provide the required adhesive strength. Following the discovery of the gecko's adhesion mechanism in 2002, which is based on van der Waals forces, biomimetic adhesives have become the topic of a major research effort. These developments are poised to yield families of novel adhesive materials with superior properties which are likely to find uses in industries ranging from defense and nanotechnology to healthcare and sport. Basic principles Geckos are renowned for their exceptional ability to stick and run on any vertical and inverted surface (excluding Teflon). However, gecko toes are not sticky in the usual way like chemical adhesives. Instead, they can detach from the surface quickly and remain quite clean around everyday contaminants even without grooming. Extraordinary adhesion The two front feet of a tokay gecko can withstand 20.1 N of force parallel to the surface with 227 mm² of pad area, a force as much as 40 times the gecko's weight. Scientists have been investigating the secret of this extraordinary adhesion ever since the 19th century, and at least seven possible mechanisms for gecko adhesion have been discussed over the past 175 years. There have been hypotheses of glue, friction, suction, electrostatics, micro-interlocking and intermolecular forces. Sticky secretions were ruled out first, early in the study of gecko adhesion, since geckos lack glandular tiss
https://en.wikipedia.org/wiki/Microsoft%20RPC
Microsoft RPC (Microsoft Remote Procedure Call) is a modified version of DCE/RPC. Additions include partial support for UCS-2 (but not Unicode) strings, implicit handles, and complex calculations in the variable-length string and structure paradigms already present in DCE/RPC. Example The DCE 1.0 reference implementation only allows such constructs as , or possibly . MSRPC allows much more complex constructs such as and even , a common expression in DCOM IDL files. Use MSRPC was used by Microsoft to seamlessly create a client/server model in Windows NT, with very little effort. For example, the Windows Server domains protocols are entirely MSRPC based, as is Microsoft's DNS administrative tool. Microsoft Exchange Server 5.5's administrative front-ends are all MSRPC client/server applications, and its MAPI was made more secure by "proxying" MAPI over a set of simple MSRPC functions that enable encryption at the MSRPC layer without involving the MAPI protocol. History MSRPC is derived from the Distributed Computing Environment 1.2 reference implementation from the Open Software Foundation, but has been copyrighted by Microsoft. DCE/RPC was originally commissioned by the Open Software Foundation, an industry consortium to set vendor- and technology-neutral open standards for computing infrastructure. None of the Unix vendors (now represented by the Open Group), wanted to use the complex DCE or such components as DCE/RPC at the time. Microsoft's Component Object Model is based heavily on MSRPC, adding interfaces and inheritance. The marshalling semantics of DCE/RPC are used to serialize method calls and results between processes with separate address spaces, albeit COM did not initially allow network calls between different machines. With Distributed Component Object Model (DCOM), COM was extended to software components distributed across several networked computers. DCOM, which originally was called "Network OLE", extends Microsoft's COM, and provides the com
https://en.wikipedia.org/wiki/DCE/RPC
DCE/RPC, short for "Distributed Computing Environment / Remote Procedure Calls", is the remote procedure call system developed for the Distributed Computing Environment (DCE). This system allows programmers to write distributed software as if it were all working on the same computer, without having to worry about the underlying network code. History DCE/RPC was commissioned by the Open Software Foundation in a "Request for Technology" (1993 David Chappell). One of the key companies that contributed was Apollo Computer, who brought in NCA ("Network Computing Architecture"), which became Network Computing System (NCS) and then a major part of DCE/RPC itself. The naming convention for transports that can be designed (as architectural plugins) and then made available to DCE/RPC echoes these origins, e.g. ncacn_np (SMB Named Pipes transport), ncacn_tcp (DCE/RPC over TCP/IP) and ncacn_http, to name a few. DCE/RPC's history is such that it's sometimes cited as an example of design by committee. It is also frequently noted for its complexity; however, this complexity is often a result of features that target large distributed systems and which are often unmatched by more recent RPC implementations such as SOAP. Software license Previously, the DCE source was only available under a proprietary license. As of January 12, 2005, it is available under a recognized open source license (LGPL), which permits a broader community to work on the source to expand its features and keep it current. The source may be downloaded over the web. The release consists of about 100 ".tar.gz" files that take up 170 megabytes. (Note that they include the PostScript of all the documentation, for example.) The Open Group has stated it will work with the DCE community to make DCE available to the open source development community, as well as continuing to offer the source through The Open Group's web site. DCE/RPC's reference implementation (version 1.1) was previously available unde
https://en.wikipedia.org/wiki/Dynkin%20system
A Dynkin system, named after Eugene Dynkin, is a collection of subsets of another universal set Ω satisfying a set of axioms weaker than those of a σ-algebra. Dynkin systems are sometimes referred to as λ-systems (Dynkin himself used this term) or d-systems. These set families have applications in measure theory and probability. A major application of λ-systems is the π-λ theorem, see below. Definition Let Ω be a nonempty set, and let D be a collection of subsets of Ω (that is, D is a subset of the power set of Ω). Then D is a Dynkin system if
(1) Ω ∈ D;
(2) D is closed under complements of subsets in supersets: if A, B ∈ D and A ⊆ B, then B ∖ A ∈ D;
(3) D is closed under countable increasing unions: if A_1 ⊆ A_2 ⊆ A_3 ⊆ ... is an increasing sequence of sets in D, then the union ∪_n A_n ∈ D.
It is easy to check that any Dynkin system D satisfies:
(4) D is closed under complements in Ω: if A ∈ D, then Ω ∖ A ∈ D (take B := Ω in condition 2). Taking A := Ω shows that
(5) ∅ ∈ D;
(6) D is closed under countable unions of pairwise disjoint sets: if A_1, A_2, A_3, ... is a sequence of pairwise disjoint sets in D (meaning that A_i ∩ A_j = ∅ for all i ≠ j), then ∪_n A_n ∈ D. To be clear, this property also holds for finite sequences of pairwise disjoint sets (by letting A_i := ∅ for all remaining indices i).
Conversely, it is easy to check that a family of sets that satisfies conditions 4-6 is a Dynkin class. For this reason, a small group of authors have adopted conditions 4-6 to define a Dynkin system as they are easier to verify. An important fact is that any Dynkin system that is also a π-system (that is, closed under finite intersections) is a σ-algebra. This can be verified by noting that conditions 2 and 3 together with closure under finite intersections imply closure under finite unions, which in turn implies closure under countable unions. Given any collection J of subsets of Ω, there exists a unique Dynkin system denoted D(J) which is minimal with respect to containing J. That is, if E is any Dynkin system containing J, then D(J) ⊆ E. D(J) is called the Dynkin system generated by J. For instance, D({∅}) = {∅, Ω}. For another example, let Ω = {1, 2, 3, 4} and J = {{1}}; then D(J) = {∅, {1}, {2, 3, 4}, Ω}. Sierpiński–Dynkin's π-λ theorem Sierpiński-Dynkin's π-λ theorem: If P is a π-system and D is a Dynkin system with P ⊆ D, then σ(P) ⊆ D. In
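On a finite universal set the definition can be checked mechanically; the Python sketch below verifies the defining conditions for a small example family (closure under increasing unions is automatic for finite chains, as noted in the comment), and the example families are chosen only for illustration.

```python
# Check the defining conditions of a Dynkin system on a finite universal set.
# Example families are illustrative; they are not a general-purpose library.

def is_dynkin(omega, family):
    fam = {frozenset(s) for s in family}
    omega = frozenset(omega)
    if omega not in fam:                       # condition 1: omega is in the family
        return False
    for a in fam:                              # condition 2: proper differences stay inside
        for b in fam:
            if a <= b and (b - a) not in fam:
                return False
    # Condition 3 (closure under increasing countable unions) is automatic here:
    # an increasing chain of subsets of a finite set is eventually constant, so
    # its union is already one of its members.
    return True

omega = {1, 2, 3, 4}
print(is_dynkin(omega, [set(), {1}, {2, 3, 4}, omega]))    # True
print(is_dynkin(omega, [set(), {1}, {2}, {3, 4}, omega]))  # False: {2,3,4} = omega\{1} missing
```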
https://en.wikipedia.org/wiki/Finite%20character
In mathematics, a family F of sets is of finite character if for each A, A belongs to F if and only if every finite subset of A belongs to F. That is,
(1) For each A ∈ F, every finite subset of A belongs to F.
(2) If every finite subset of a given set A belongs to F, then A belongs to F.
Properties A family F of sets of finite character enjoys the following properties:
(1) For each A ∈ F, every (finite or infinite) subset of A belongs to F.
(2) Every nonempty family F of finite character has a maximal element with respect to inclusion (Tukey's lemma): In F, partially ordered by inclusion, the union of every chain of elements of F also belongs to F; therefore, by Zorn's lemma, F contains at least one maximal element.
Example Let V be a vector space, and let F be the family of linearly independent subsets of V. Then F is a family of finite character (because a subset X ⊆ V is linearly dependent if and only if X has a finite subset which is linearly dependent). Therefore, in every vector space, there exists a maximal family of linearly independent elements. As a maximal family is a vector basis, every vector space has a (possibly infinite) vector basis. See also Hereditarily finite set References Families of sets
https://en.wikipedia.org/wiki/DIN%20sync
DIN sync, also called Sync24, is a synchronization interface for electronic musical instruments. It was introduced in 1980 by Roland Corporation and has been superseded by MIDI. Definition and history DIN sync was introduced in 1980 by Roland Corporation with the release of the TR-808 drum machine. The intended use was the synchronization of music sequencers, drum machines, arpeggiators and similar devices. It was superseded by MIDI in the mid-to-late 1980s. DIN sync consists of two signals, clock (tempo) and run/stop. Both signals are TTL compatible, meaning the low state is 0 V and the high state is about +5 V. The clock signal is a low-frequency pulse wave suggesting the tempo. Instead of measuring the waveform's frequency, the machine receiving the signal merely has to count the number of pulses to work out when to increment its position in the music. Roland equipment uses 24 pulses per quarter note, known as Sync24. Therefore, a Roland-compatible device playing sixteenth notes would have to advance to the next note every time it receives 6 pulses. Korg equipment uses 48 pulses per quarter note. The run/stop signal indicates whether the sequence is playing or not. If a device is a DIN sync sender, the positive slope of start/stop must reset the clock signal, and the clock signal must start with a delay of 9 ms. A detailed description on how to implement a DIN sync sender with Play, Pause, Continue and Stop functionality was published by E-RM Erfindungsbuero. Pinouts DIN sync is so named because it uses 5-pin DIN connectors, the same as used for MIDI. DIN sync itself is not a DIN standard. Note that despite using the same connectors as MIDI, it uses different pins on these connectors (1, 2, and 3 rather than MIDI's 2, 4 and 5), so a cable made specifically for MIDI will not necessarily have the pins required for DIN sync connected. In some applications the remaining DIN sync pins (4 and 5) are used as tap and fill in or reset and start, but this di
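The clock arithmetic described above is simple enough to sketch in a few lines of Python; the 120 BPM tempo used in the example is an assumed value.

```python
# Sketch of the Sync24 clock arithmetic: 24 pulses per quarter note, so a
# sixteenth note corresponds to 6 pulses. Tempo value is an assumed example.

PULSES_PER_QUARTER = 24  # Roland Sync24 (Korg gear uses 48)

def pulse_period_seconds(bpm):
    """Time between DIN sync clock pulses at a given tempo."""
    quarter_note_seconds = 60.0 / bpm
    return quarter_note_seconds / PULSES_PER_QUARTER

def pulses_per_note(fraction_of_quarter):
    """e.g. a sixteenth note is 1/4 of a quarter note -> 6 pulses."""
    return int(PULSES_PER_QUARTER * fraction_of_quarter)

print(round(pulse_period_seconds(120) * 1000, 3), "ms")    # 20.833 ms at 120 BPM
print(pulses_per_note(0.25), "pulses per sixteenth note")  # 6
```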
https://en.wikipedia.org/wiki/Bioswale
Bioswales are channels designed to concentrate and convey stormwater runoff while removing debris and pollution. Bioswales can also be beneficial in recharging groundwater. Bioswales are typically vegetated, mulched, or xeriscaped. They consist of a swaled drainage course with gently sloped sides (less than 6%). Bioswale design is intended to safely maximize the time water spends in the swale, which aids the collection and removal of pollutants, silt and debris. Depending on the site topography, the bioswale channel may be straight or meander. Check dams are also commonly added along the bioswale to increase stormwater infiltration. A bioswale's make-up can be influenced by many different variables, including climate, rainfall patterns, site size, budget, and vegetation suitability. It is important to maintain bioswales to ensure the best possible efficiency and effectiveness in removal of pollutants from stormwater runoff. Planning for maintenance is an important step, which can include the introduction of filters or large rocks to prevent clogging. Annual maintenance through soil testing, visual inspection, and mechanical testing is also crucial to the health of a bioswale. Bioswales are commonly applied along streets and around parking lots, where substantial automotive pollution settles on the pavement and is flushed by the first instance of rain, known as the first flush. Bioswales, or other types of biofilters, can be created around the edges of parking lots to capture and treat stormwater runoff before releasing it to the watershed or storm sewer. Contaminants addressed Bioswales work to remove pollutants through vegetation and the soil. As the storm water runoff flows through the bioswale, the pollutants are captured and settled by the leaves and stems of the plants. The pollutants then enter the soil where they decompose or can be broken down by bacteria in healthy soil. There are several classes of water pollutants that may be collected or arre
https://en.wikipedia.org/wiki/Reaction%20norm
In ecology and genetics, a reaction norm, also called a norm of reaction, describes the pattern of phenotypic expression of a single genotype across a range of environments. One use of reaction norms is in describing how different species—especially related species—respond to varying environments. But differing genotypes within a single species may also show differing reaction norms relative to a particular phenotypic trait and environment variable. For every genotype, phenotypic trait, and environmental variable, a different reaction norm can exist; in other words, an enormous complexity can exist in the interrelationships between genetic and environmental factors in determining traits. The concept was introduced by Richard Woltereck in 1909. A monoclonal example Scientifically analyzing norms of reaction in natural populations can be very difficult, simply because natural populations of sexually reproductive organisms usually do not have cleanly separated or superficially identifiable genetic distinctions. However, seed crops produced by humans are often engineered to contain specific genes, and in some cases seed stocks consist of clones. Accordingly, distinct seed lines present ideal examples of differentiated norms of reaction. In fact, agricultural companies market seeds for use in particular environments based on exactly this. Suppose the seed line A contains an allele a, and a seed line B of the same crop species contains an allele b, for the same gene. With these controlled genetic groups, we might cultivate each variety (genotype) in a range of environments. This range might be either natural or controlled variations in environment. For example, an individual plant might receive either more or less water during its growth cycle, or the average temperature the plants are exposed to might vary across a range. A simplification of the norm of reaction might state that seed line A is good for "high water conditions" while a seed line B is good for
https://en.wikipedia.org/wiki/Logical%20Unit%20Number%20masking
Logical Unit Number Masking or LUN masking is an authorization process that makes a Logical Unit Number available to some hosts and unavailable to other hosts. LUN masking is a level of security that makes a LUN available only to selected hosts and unavailable to all others. This kind of security is applied at the SAN level and is based on the host HBA; that is, access to a specific LUN on the SAN can be granted to a specific host with a specific HBA. LUN masking is mainly implemented at the host bus adapter (HBA) level. The security benefits of LUN masking implemented at HBAs are limited, since with many HBAs it is possible to forge source addresses (WWNs/MACs/IPs) and compromise the access. Many storage controllers also support LUN masking. When LUN masking is implemented at the storage controller level, the controller itself enforces the access policies to the device and as a result it is more secure. However, it is mainly implemented not as a security measure per se, but rather as a protection against misbehaving servers which may corrupt disks belonging to other servers. For example, Windows servers attached to a SAN will, under some conditions, corrupt non-Windows (Unix, Linux, NetWare) volumes on the SAN by attempting to write Windows volume labels to them. By hiding the other LUNs from the Windows server, this can be prevented, since the Windows server does not even realize the other LUNs exist. See also Persistent binding External links LUN Masking LUN Masking and Zoning Computer storage buses
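Conceptually, a masking table is just a mapping from host identifiers to the LUNs they are allowed to see; the Python sketch below uses made-up WWNs and LUN numbers purely for illustration.

```python
# Illustrative sketch of a LUN masking table: each host (identified here by its
# HBA's WWN) is allowed to see only the LUNs listed for it. The WWNs and LUN
# numbers are made-up example values, not a real configuration.

MASKING_TABLE = {
    "10:00:00:00:c9:2a:b1:01": {0, 1},   # Unix server: LUNs 0 and 1
    "10:00:00:00:c9:2a:b1:02": {2},      # Windows server: LUN 2 only
}

def visible_luns(initiator_wwn):
    """Return the set of LUNs exposed to the requesting host; none if unknown."""
    return MASKING_TABLE.get(initiator_wwn, set())

print(visible_luns("10:00:00:00:c9:2a:b1:02"))  # {2} - cannot see (or corrupt) LUNs 0 and 1
print(visible_luns("10:00:00:00:de:ad:be:ef"))  # set() - unlisted hosts see nothing
```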
https://en.wikipedia.org/wiki/Virtual%20tape%20library
A virtual tape library (VTL) is a data storage virtualization technology used typically for backup and recovery purposes. A VTL presents a storage component (usually hard disk storage) as tape libraries or tape drives for use with existing backup software. Virtualizing the disk storage as tape allows integration of VTLs with existing backup software and existing backup and recovery processes and policies. The benefits of such virtualization include storage consolidation and faster data restore processes. For most mainframe data centers the storage capacity varies; however, protecting business and mission critical data is always vital. Most current VTL solutions use SAS or SATA disk arrays as the primary storage component due to their relatively low cost. The use of array enclosures increases the scalability of the solution by allowing the addition of more disk drives and enclosures to increase the storage capacity. The shift to VTL also eliminates streaming problems that often impair efficiency in tape drives, as disk technology does not rely on streaming and can write effectively regardless of data transfer speeds. By backing up data to disks instead of tapes, VTL often increases performance of both backup and recovery operations. Restore processes are found to be faster than backup regardless of implementation. In some cases, the data stored on the VTL's disk array is exported to other media, such as physical tapes, for disaster recovery purposes (a scheme called disk-to-disk-to-tape, or D2D2T). Alternatively, most contemporary backup software products have also introduced direct usage of file system storage (especially network-attached storage, accessed through NFS and CIFS protocols over IP networks), not requiring tape library emulation at all. They also often offer a disk staging feature: moving the data from disk to a physical tape for long-term storage. While a virtual tape library is very fast, the disk storage within is not designed to be remo
https://en.wikipedia.org/wiki/CEO%20%28Data%20General%29
Comprehensive Electronic Office, often referred to by its initialism CEO, was a suite of office automation software from Data General introduced in 1981. It included word processing, e-mail, spreadsheets, business graphics and desktop accessories. The software was developed mostly in PL/I on and for the AOS and AOS/VS operating systems. Overview CEO was considered office automation software, which was an attempt to create a paperless office. CEO has also been cited as an example of an executive information system and as a decision support system. It included a main program known as the Control Program, which offered a menu driven interface on the assorted dumb terminals which existed at the time. The Control Program communicated with separate "Services" like the Mail Server, Calendar Server, File Server (for documents). There was also a Word Processor and a data management program which was also accessible from the Control Program. In 1985, Data General announced a complementary product, TEO (Technical Electronic Office), focused on the office automation needs of engineering professionals. In later years, CEO offerings grew to include various products to connect to CEO from early personal computers. The first such product was called CEO Connection. Later a product named CEO Object Office shipped which repackaged HP NewWave (an object oriented graphical interface). CEO code was heavily dependent on the INFOS II database. When Data General moved from the Eclipse MV platform to the AViiON, CEO was not ported to the new platform as the cost would have been prohibitive. CEO was often compared with IBM's offering, commonly called PROFS. CEO offered integration with DISOSS and SNADS. CEO also supported Xodiac, Data General's proprietary networking system. In 1989, Data General unveiled an email gateway product, Communications Server, which provided interoperability of CEO with X.400 email systems and X.500 directories. One early CEO site, Deutsche Credit in Chi
https://en.wikipedia.org/wiki/Knotted%20cord
A knotted cord was a primitive surveyor's tool for measuring distances. It is a length of cord with knots at regular intervals. They were eventually replaced by surveyor's chains, which being made of metal were less prone to stretching and thus were more accurate and consistent. Knotted cords were used by many ancient cultures. The Greek schoenus is referred to as a rope used to measure land. Ropes generally became cables and chains with Pythagoras making the Greek agros a chain of 10 stadia equal to a nautical mile c 540 BC. The Romans used a waxed cord for measuring distances. A knotted cord 12 lengths long (the units do not matter) closed into a loop can be used to lay out a right angle by forming the loop of cord into a 3–4–5 triangle. This could be used for laying out the corner of a field or a building foundation, for instance. Ancient Egypt Knotted cords were used by rope stretchers, royal surveyors who measured out the sides of fields (Egyptian 3ht). The knotted cords (Egyptian ht) were 100 royal cubits in length, with a knot every hayt or 10 royal cubits. The rope stretchers stretched the rope in order to take the sag out, as well as to keep the measures uniform. Since land in ancient Egypt was measured using several different units, there would have been knotted cords with the knots spaced in each unit. Among these were the mh t3 or land cubits, remen royal cubits, rods or ha3t, generally the lengths in multiples of 100 units. The longest measured length listed in the Rhind Mathematical Papyrus is a circumference of about a Roman mile with a diameter of 9 khet. Despite many popular claims, there is no surviving evidence that the 3-4-5 triangle, and by implication the Pythagoras' theorem, was used in Ancient Egypt to lay out right angles, such as for the pyramids. The historian Moritz Cantor first made the conjecture in 1882. Right angles were certainly laid out accurately in Ancient Egypt; their surveyors did use knotted cords for measurement; Plut
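The arithmetic behind the 3-4-5 construction described above is easy to verify; the short Python check below confirms that a 12-interval loop divided 3 + 4 + 5 yields a 90° angle between the two shorter sides.

```python
import math

# Check of the 3-4-5 construction: a 12-unit loop divided 3 + 4 + 5 closes into
# a triangle whose angle between the 3- and 4-unit sides is a right angle
# (by the law of cosines).

a, b, c = 3, 4, 5
assert a + b + c == 12            # twelve equal knot intervals
assert a**2 + b**2 == c**2        # Pythagorean triple

angle = math.degrees(math.acos((a**2 + b**2 - c**2) / (2 * a * b)))
print(angle)  # 90.0
```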
https://en.wikipedia.org/wiki/Windows%20Preinstallation%20Environment
Windows Preinstallation Environment (also known as Windows PE and WinPE) is a lightweight version of Windows used for the deployment of PCs, workstations, and servers, or troubleshooting an operating system while it is offline. It is intended to replace MS-DOS boot disks and can be booted via USB flash drive, PXE, iPXE, CD, DVD, or hard disk. Traditionally used by large corporations and OEMs (to preinstall Windows client operating systems on PCs during manufacturing), it is now widely available free of charge via Windows Assessment and Deployment Kit (WADK) (formerly Windows Automated Installation Kit (WAIK)). Overview WinPE was originally intended to be used only as a pre-installation platform for deploying Microsoft Windows operating systems, specifically to replace MS-DOS in this respect. WinPE has the following uses: Deployment of workstations and servers in large corporations as well as pre-installation by system builders of workstations and servers to be sold to end users. Recovery platform to run 32-bit or 64-bit recovery tools such as Winternals ERD Commander or Windows Recovery Environment (Windows RE). Platform for running third-party 32-bit or 64-bit disk cloning utilities. The package can be used for developer testing or as a recovery CD/DVD for system administrators. Many customized WinPE boot CDs packaged with third-party applications for different uses are now available from volunteers via the Internet. The package can also be used as the base of a forensics investigation to either capture a disk image or run analysis tools without mounting any available disks and thus changing state. Version 2.0 introduced a number of improvements and extended the availability of WinPE to all customers, not just corporate enterprise customers by downloading and installing Microsoft's Windows Automated Installation Kit (WAIK). It was originally designed and built by a small team of engineers in Microsoft's Windows Deployment team, including Vijay Jayaseelan,
https://en.wikipedia.org/wiki/Weill%20Cornell%20Graduate%20School%20of%20Medical%20Sciences
The Weill Cornell Graduate School of Medical Sciences (WCGS) (formerly known as the Cornell University Graduate School of Medical Sciences) is a graduate college of Cornell University that was founded in 1952 as an academic partnership between two major medical institutions in New York City: the Weill Cornell Medical College and the Sloan Kettering Institute. Cornell is involved in the Tri-Institutional MD-PhD Program with Rockefeller University and the Sloan Kettering Institute; each of these three institutions is part of a large biomedical center extending along York Avenue between 65th and 72nd Streets on the Upper East Side of Manhattan. Programs of study Weill Cornell Graduate School of Medical Sciences (WCGS) partners with neighboring institutions along York Avenue, also known as the “corridor of science” in New York City. Such partnerships with Memorial Sloan Kettering Cancer Center, New York-Presbyterian Hospital, the Hospital for Special Surgery and The Rockefeller University offer specialized learning opportunities. WCGS offers a variety of programs at both the Masters and Doctoral levels. As a partnership between the Sloan Kettering Institute and Weill Cornell Medical College, WCGS offers seven PhD programs as well as four distinct Masters programs. Additionally, the school offers two Tri-Institutional PhDs, a Tri-Institutional MD/PhD and the opportunity for students to participate in an Accelerated PhD/MBA program. PhD Programs: Biochemistry and Structural Biology Molecular Biology Cell and Developmental Biology Immunology and Microbial Pathogenesis Pharmacology Neuroscience Physiology Biophysics and Systems Biology Tri-Institutional PhD Programs Chemical Biology Computational Biology and Medicine Tri-I MD / PhD Program See also Weill Cornell Medical College Tri-Institutional MD-PhD Program References Universities and colleges in Manhattan Colleges and schools of Cornell University 1952 establishments in New York City Cornell Universi
https://en.wikipedia.org/wiki/LMS%20Scientific%20Research%20Laboratory
The LMS Scientific Research Laboratory was set up following the formation of the London, Midland and Scottish Railway in 1923. In 1929, the Company President, Lord Stamp read a paper Scientific Research in Transport to the Institute of Transport, and, in 1930 he founded the Advisory Committee on Scientific Research for Railways. The Scientific Research Laboratory was formed by the Vice-President and Director of Scientific Research, Sir Harold Hartley. Purpose-built accommodation was provided on the west side of London Road, Derby which opened in December 1935. The various paint and varnish laboratories were amalgamated and relocated there, joining the textile research from Calvert Street and the metallurgy and general engineering research in the locomotive works. In addition the laboratory liaised with various university departments, its remit covering all areas of railway operation. In 1936 an aerodynamics laboratory was formed, located in the locomotive works, using 1/24 scale models. It was involved in the design work for Stanier's Coronation locomotives, and went on to assess smoke deflectors, carriage ventilation and the effect of passing trains on structures and passengers in stations. This passed to the Derby College of Technology in 1960. The Rugby Testing Station was opened in 1948 as a joint venture with the LNER. Work continued into the nationalised era, when a decision was made to concentrate various headquarters functions, particularly that of the Chief Mechanical and Electrical Engineer, in one national centre. This produced the Railway Technical Centre (RTC) on the opposite side of London Road, on a site which had formerly been part of the Way and Works sidings. Part of the RTC would be occupied by a new British Rail Research Division, reporting directly to the Board, under the Chief Civil Engineer's Department. This went on, in addition to its other work into all aspects of the railway, to design the experimental version of the Advanced Passeng
https://en.wikipedia.org/wiki/Semi-symmetric%20graph
In the mathematical field of graph theory, a semi-symmetric graph is an undirected graph that is edge-transitive and regular, but not vertex-transitive. In other words, a graph is semi-symmetric if each vertex has the same number of incident edges, and there is a symmetry taking any of the graph's edges to any other of its edges, but there is some pair of vertices such that no symmetry maps the first into the second. Properties A semi-symmetric graph must be bipartite, and its automorphism group must act transitively on each of the two vertex sets of the bipartition (in fact, regularity is not required for this property to hold). For instance, in the diagram of the Folkman graph shown here, green vertices cannot be mapped to red ones by any automorphism, but every two vertices of the same color are symmetric with each other. History Semi-symmetric graphs were first studied by E. Dauber, a student of F. Harary, in a paper, no longer available, titled "On line- but not point-symmetric graphs". This was seen by Jon Folkman, whose paper, published in 1967, includes the smallest semi-symmetric graph, now known as the Folkman graph, on 20 vertices. The term "semi-symmetric" was first used by Klin et al. in a paper they published in 1978. Cubic graphs The smallest cubic semi-symmetric graph (that is, one in which each vertex is incident to exactly three edges) is the Gray graph on 54 vertices. It was first observed to be semi-symmetric by . It was proven to be the smallest cubic semi-symmetric graph by Dragan Marušič and Aleksander Malnič. All the cubic semi-symmetric graphs on up to 10000 vertices are known. According to Conder, Malnič, Marušič and Potočnik, the four smallest possible cubic semi-symmetric graphs after the Gray graph are the Iofinova–Ivanov graph on 110 vertices, the Ljubljana graph on 112 vertices, a graph on 120 vertices with girth 8 and the Tutte 12-cage. References External links Algebraic graph theory Graph families Regular graphs
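The defining properties can be checked by brute force on small graphs. The sketch below (assuming the networkx package, and taking the LCF description [5, -7, -7, 5]^5 of the Folkman graph as given) enumerates all automorphisms and tests regularity, edge-transitivity and vertex-transitivity; it is only practical for small graphs.

import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def is_semi_symmetric(G):
    # regular: every vertex has the same degree
    if len({d for _, d in G.degree()}) != 1:
        return False
    autos = list(GraphMatcher(G, G).isomorphisms_iter())  # all automorphisms
    # vertex-transitive: the orbit of one vertex is the whole vertex set
    v0 = next(iter(G.nodes()))
    vertex_transitive = len({a[v0] for a in autos}) == G.number_of_nodes()
    # edge-transitive: the orbit of one edge is the whole edge set
    e0 = next(iter(G.edges()))
    edge_orbit = {frozenset((a[e0[0]], a[e0[1]])) for a in autos}
    edge_transitive = edge_orbit == {frozenset(e) for e in G.edges()}
    return edge_transitive and not vertex_transitive

G = nx.LCF_graph(20, [5, -7, -7, 5], 5)  # Folkman graph (LCF notation assumed)
print(is_semi_symmetric(G))  # expected: True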
https://en.wikipedia.org/wiki/Tinplate
Tinplate consists of sheets of steel coated with a thin layer of tin to impede rusting. Before the advent of cheap milled steel, the backing metal was wrought iron. While once more widely used, the primary use of tinplate now is the manufacture of tin cans. Tinplate is made by rolling the steel (or formerly iron) in a rolling mill, removing any mill scale by pickling it in acid and then coating it with a thin layer of tin. Plates were once produced individually (or in small groups) in what became known as a pack mill. In the late 1920s pack mills began to be replaced by strip mills which produced larger quantities more economically. Formerly, tinplate was used for tin ceiling, and holloware (cheap pots and pans), also known as tinware. The people who made tinware (metal spinning) were tinplate workers. For many purposes, tinplate has been replaced by galvanised (zinc-coated or tinned) vessels, though not for cooking as zinc is poisonous. The zinc layer prevents the iron from rusting through sacrificial protection with the zinc oxidizing instead of the iron, whereas tin will only protect the iron if the tin-surface remains unbroken. History of production processes and markets The practice of tin mining likely began circa 3000 B.C. in Western Asia, British Isles and Europe. Tin was an essential ingredient of bronze production during the Bronze Age. The practice of tinning ironware to protect it against rust is an ancient one. This may have been the work of the whitesmith. This was done after the article was fabricated, whereas tinplate was tinned before fabrication. Tinplate was apparently produced in the 1620s at a mill of (or under the patronage of) the Earl of Southampton, but it is not clear how long this continued. The first production of tinplate was probably in Bohemia, from where the trade spread to Saxony, and was well-established there by the 1660s. Andrew Yarranton and Ambrose Crowley (a Stourbridge blacksmith and father of the more famous Sir Ambros
https://en.wikipedia.org/wiki/Overhead%20%28computing%29
In computer science, overhead is any combination of excess or indirect computation time, memory, bandwidth, or other resources that are required to perform a specific task. It is a special case of engineering overhead. Overhead can be a deciding factor in software design, with regard to structure, error correction, and feature inclusion. Examples of computing overhead may be found in Object Oriented Programming (OOP), functional programming, data transfer, and data structures. Software design Choice of implementation A programmer/software engineer may have a choice of several algorithms, encodings, data types or data structures, each of which has known characteristics. When choosing among them, their respective overhead should also be considered. Tradeoffs In software engineering, overhead can influence the decision whether or not to include features in new products, or indeed whether to fix bugs. A feature with high overhead may not be included – or may require a significant financial incentive to justify it. Often, even though software providers are well aware of bugs in their products, the payoff of fixing them does not justify the overhead involved. For example, an implicit data structure or succinct data structure may provide low space overhead, but at the cost of slow performance (space/time tradeoff). Run-time complexity of software Algorithmic complexity is generally specified using Big O notation. This says nothing about how long an operation takes to run or how much memory it uses, but describes how its running time or memory use grows with the size of the input. Overhead is deliberately not part of this calculation, since it varies from one machine to another, whereas the fundamental running time of an algorithm does not. This should be contrasted with algorithmic efficiency, which takes into account all kinds of resources – a combination (though not a trivial one) of complexity and overhead. Examples Computer programming (run-time and computational overhead) Invoking a function
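As a small illustration (not from the article), the snippet below times the same addition done inline and through a function call; the difference is pure call overhead, which changes the constant factor but not the Big O growth rate.

import timeit

def add(a, b):
    return a + b

inline = timeit.timeit("3 + 4", number=1_000_000)
called = timeit.timeit("add(3, 4)", globals=globals(), number=1_000_000)
print(f"inline: {inline:.3f} s, with function-call overhead: {called:.3f} s")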
https://en.wikipedia.org/wiki/Aux-send
An aux-send (auxiliary send) is an electronic signal-routing output used on multi-channel sound mixing consoles used in recording and broadcasting settings and on PA system amplifier-mixers used in music concerts. The signal from the auxiliary send is often routed through outboard audio processing effects units (e.g., reverb, digital delay, compression, etc.) and then returned to the mixer using an auxiliary return input jack, thus creating an effects loop. This allows effects to be added to an audio source or channel within the mixing console. Another common use of the aux send mix is to create monitor mixes for the onstage performers' monitor speakers or in-ear monitors. The aux send's monitor mix is usually different from the front of house mix the audience is hearing. Purpose The routing configuration and usage of an aux-send will vary depending on the application. Two types of aux-sends commonly exist: pre-fader and post-fader. Pre-fader sends are not affected by the main fader for the channel, while post-fader sends are affected by the position of the main fader slider control for the channel. In a common configuration, a post-fader aux-send output is connected to the audio input of an outboard (i.e., an external [usually rack-mounted] unit that is not part of the mixer console) audio effects unit (most commonly a temporal/time-based effect such as reverb or delay; compressors and other dynamic processors would normally be on an insert, instead). The audio output of the outboard unit is then connected to the aux-return input on the mixing console (if the recording console has one), or, alternatively, it can be looped back to one of the console's unused input channels. A post-fader output is used in order to prevent channels whose faders are at zero gain from "contaminating" the effects-return loop with hiss and hum. Mixing consoles most commonly have a group of aux-send knobs in each channel strip, or, on small mixers, a single aux-send knob per channel, w
https://en.wikipedia.org/wiki/Overhead%20%28engineering%29
In engineering, some methods or components make special demands on the system. The extra design features necessary to meet these demands are called overhead. For instance, in electrical engineering, a particular integrated circuit might draw large current, requiring a robust power delivery circuit and a heat-dissipation mechanism. Example An example from software engineering is the encoding of information and data. The date and time "2011-07-12 07:18:47" can be expressed as Unix time with the 32-bit signed integer 1310447927, consuming only 4 bytes. Represented as an ISO 8601 formatted, UTF-8 encoded string, 2011-07-12 07:18:47, the date would consume 19 bytes, a size overhead of 375% over the binary integer representation. As XML this date can be written as follows with an overhead of 218 characters, while adding the semantic context that it is a CHANGEDATE with index 1. <?xml version="1.0" encoding="UTF-8"?> <DATETIME qualifier="CHANGEDATE" index="1"> <YEAR>2011</YEAR> <MONTH>07</MONTH> <DAY>12</DAY> <HOUR>07</HOUR> <MINUTE>18</MINUTE> <SECOND>47</SECOND> </DATETIME> The 349 bytes resulting from the UTF-8 encoded XML correspond to a size overhead of 8625% over the original integer representation. See also Overhead (business) Engineering concepts
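The size comparison can be reproduced in a few lines of Python (a sketch; the 349-byte figure for the XML depends on the exact serialization and is taken as given).

import struct

unix_time = struct.pack(">i", 1310447927)            # 32-bit signed integer
iso_string = "2011-07-12 07:18:47".encode("utf-8")   # ISO 8601 style text

def overhead_percent(size, baseline=4):
    return 100 * (size - baseline) / baseline

print(len(unix_time), len(iso_string))    # 4 19
print(overhead_percent(len(iso_string)))  # 375.0
print(overhead_percent(349))              # 8625.0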
https://en.wikipedia.org/wiki/Prova
Prova is an open source programming language that combines Prolog with Java. Description Prova is a rule-based scripting system that is used for middleware. The language combines imperative and declarative programming by using a prolog syntax that allows calls to Java functions. In this way a strong Java code base is combined with Prolog features such as backtracking. Prova is derived from Mandarax, a Java-based inference system developed by Jens Dietrich. Prova extends Mandarax by providing a proper language syntax, native syntax integration with Java, agent messaging and reaction rules. The development of this language was supported by the grant provided within the EU projects GeneStream and BioGRID. In the project, the language is used as a rule-based backbone for distributed web applications in biomedical data integration, in particular, the GoPubMed system. The design goals of Prova: Combine declarative and object-oriented programming. Expose logic and agent behavior as rules. Access data sources via wrappers written in Java or command-line shells like Perl. Make the Java API of various packages accessible as rules. Run within the Java runtime. Enable rapid prototyping of applications. Offer a rule-based platform for distributed agent programming. Prova aims to provide support for data integration tasks when the following is important: Location transparency (local, remote, mirrors); Format transparency (database, RDF, XML, HTML, flat files, computation resource); Resilience to change (databases and web sites change often); Use of open and open source technologies; Understandability and modifiability by a non-IT specialist; Economical knowledge representation; Extensibility with additional functionality; Leveraging ontologies. Prova has been used as the key service integration engine in the Xcalia product where it is used for computing efficient global execution plans across multiple data sources such as Web services, TP monitors transactio
https://en.wikipedia.org/wiki/TV-out
The term TV-out is commonly used to label the connector of equipment providing an analog video signal acceptable for a television AV input. TV-out is different from AV-out in that it only provides video, no audio. Types of signals and their respective connectors include: Composite video S-video Component video See AV input for more information. Television technology Graphics cards
https://en.wikipedia.org/wiki/Crypto%20API%20%28Linux%29
Crypto API is a cryptography framework in the Linux kernel, for various parts of the kernel that deal with cryptography, such as IPsec and dm-crypt. It was introduced in kernel version 2.5.45 and has since expanded to include essentially all popular block ciphers and hash functions. Userspace interfaces Many platforms that provide hardware acceleration of AES encryption expose this to programs through an extension of the instruction set architecture (ISA) of the various chipsets (e.g. AES instruction set for x86). With this sort of implementation any program (kernel-mode or user-space) may utilize these features directly. On some platforms, however, such as the ARM Kirkwood SheevaPlug and AMD Geode processors, the acceleration is not implemented as an ISA extension and is only accessible through kernel-mode drivers. In order for user-mode applications that utilize encryption, such as wolfSSL, OpenSSL or GnuTLS, to take advantage of such acceleration, they must interface with the kernel. AF_ALG A netlink-based interface that adds an AF_ALG address family; it was merged into version 2.6.38 of the Linux kernel mainline. There was once a plugin to OpenSSL to support AF_ALG, which was submitted for merging. In version 1.1.0, OpenSSL landed another patch for AF_ALG contributed by Intel. wolfSSL can make use of AF_ALG and cryptodev. cryptodev The /dev/crypto interface of the OpenBSD Cryptographic Framework was ported to Linux, but never merged. See also Microsoft CryptoAPI References Application programming interfaces Cryptographic software Linux security software Linux kernel features
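On a Linux kernel with AF_ALG available, user space can reach the kernel's own hash and cipher implementations without extra libraries. A minimal sketch (Python 3.6+ on Linux; algorithm names as exposed by the kernel) asking the kernel for a SHA-256 digest:

import socket

with socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0) as algo:
    algo.bind(("hash", "sha256"))      # (algorithm type, algorithm name)
    op, _ = algo.accept()              # per-operation socket
    with op:
        op.sendall(b"hello, kernel crypto")
        print(op.recv(32).hex())       # SHA-256 digest is 32 bytes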
https://en.wikipedia.org/wiki/Paul%20Bernays
Paul Isaac Bernays (17 October 1888 – 18 September 1977) was a Swiss mathematician who made significant contributions to mathematical logic, axiomatic set theory, and the philosophy of mathematics. He was an assistant and close collaborator of David Hilbert. Biography Bernays was born into a distinguished German-Jewish family of scholars and businessmen. His great-grandfather, Isaac ben Jacob Bernays, served as chief rabbi of Hamburg from 1821 to 1849. Bernays spent his childhood in Berlin, and attended the Köllner Gymnasium, 1895–1907. At the University of Berlin, he studied mathematics under Issai Schur, Edmund Landau, Ferdinand Georg Frobenius, and Friedrich Schottky; philosophy under Alois Riehl, Carl Stumpf and Ernst Cassirer; and physics under Max Planck. At the University of Göttingen, he studied mathematics under David Hilbert, Edmund Landau, Hermann Weyl, and Felix Klein; physics under Voigt and Max Born; and philosophy under Leonard Nelson. In 1912, the University of Berlin awarded him a Ph.D. in mathematics for a thesis, supervised by Landau, on the analytic number theory of binary quadratic forms. That same year, the University of Zurich awarded him habilitation for a thesis on complex analysis and Picard's theorem. The examiner was Ernst Zermelo. Bernays was Privatdozent at the University of Zurich, 1912–17, where he came to know George Pólya. His collected communications with Kurt Gödel span many decades. Starting in 1917, David Hilbert employed Bernays to assist him with his investigations of the foundation of arithmetic. Bernays also lectured on other areas of mathematics at the University of Göttingen. In 1918, that university awarded him a second habilitation for a thesis on the axiomatics of the propositional calculus of Principia Mathematica. In 1922, Göttingen appointed Bernays extraordinary professor without tenure. His most successful student there was Gerhard Gentzen. After Nazi Germany enacted the Law for the Restoration of the Professi
https://en.wikipedia.org/wiki/Sherman%E2%80%93Morrison%20formula
In mathematics, in particular linear algebra, the Sherman–Morrison formula, named after Jack Sherman and Winifred J. Morrison, computes the inverse of the sum of an invertible matrix A and the outer product u v^T of vectors u and v. The Sherman–Morrison formula is a special case of the Woodbury formula. Though named after Sherman and Morrison, it appeared already in earlier publications. Statement Suppose A is an invertible square matrix and u, v are column vectors. Then A + u v^T is invertible iff 1 + v^T A^{-1} u ≠ 0. In this case, (A + u v^T)^{-1} = A^{-1} − (A^{-1} u v^T A^{-1}) / (1 + v^T A^{-1} u). Here, u v^T is the outer product of the two vectors u and v. The general form shown here is the one published by Bartlett. Proof (⇐) To prove that the backward direction (1 + v^T A^{-1} u ≠ 0 implies that A + u v^T is invertible with inverse given as above) is true, we verify the properties of the inverse. A matrix X (in this case the right-hand side of the Sherman–Morrison formula) is the inverse of a matrix Y (in this case A + u v^T) if and only if XY = YX = I. We first verify that the right hand side (X) satisfies YX = I: writing λ = 1 + v^T A^{-1} u, we have YX = (A + u v^T)(A^{-1} − A^{-1} u v^T A^{-1} / λ) = I + u v^T A^{-1} − (u v^T A^{-1} + u (v^T A^{-1} u) v^T A^{-1}) / λ = I + u v^T A^{-1} − u (1 + v^T A^{-1} u) v^T A^{-1} / λ = I + u v^T A^{-1} − u v^T A^{-1} = I. To end the proof of this direction, we need to show that XY = I in a similar way as above. (In fact, the last step can be avoided since for square matrices X and Y, YX = I is equivalent to XY = I.) (⇒) Reciprocally, if 1 + v^T A^{-1} u = 0, then via the matrix determinant lemma, det(A + u v^T) = (1 + v^T A^{-1} u) det(A) = 0, so A + u v^T is not invertible. Application If the inverse of A is already known, the formula provides a numerically cheap way to compute the inverse of A corrected by the matrix u v^T (depending on the point of view, the correction may be seen as a perturbation or as a rank-1 update). The computation is relatively cheap because the inverse of A + u v^T does not have to be computed from scratch (which in general is expensive), but can be computed by correcting (or perturbing) A^{-1}. Using unit columns (columns from the identity matrix) for u or v, individual columns or rows of A may be manipulated and a correspondingly updated inverse computed relatively cheaply in this way. In the general case, where A is a n-by-n matrix and u and v are arbitrary vectors of dimension n, the whole matrix is updated and the computation takes
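A direct numpy transcription of the update (a sketch; the function name and the test matrices are chosen here): given A_inv = A^{-1}, it returns (A + u v^T)^{-1} in O(n^2) work instead of re-inverting from scratch.

import numpy as np

def sherman_morrison(A_inv, u, v):
    u = u.reshape(-1, 1)
    v = v.reshape(-1, 1)
    denom = 1.0 + (v.T @ A_inv @ u).item()
    if np.isclose(denom, 0.0):
        raise ValueError("A + u v^T is not invertible")
    return A_inv - (A_inv @ u) @ (v.T @ A_inv) / denom

# quick check against a full inversion
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 4 * np.eye(4)
u, v = rng.normal(size=4), rng.normal(size=4)
assert np.allclose(sherman_morrison(np.linalg.inv(A), u, v),
                   np.linalg.inv(A + np.outer(u, v)))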
https://en.wikipedia.org/wiki/Lagrange%27s%20identity
In algebra, Lagrange's identity, named after Joseph Louis Lagrange, is: which applies to any two sets {a1, a2, ..., an} and {b1, b2, ..., bn} of real or complex numbers (or more generally, elements of a commutative ring). This identity is a generalisation of the Brahmagupta–Fibonacci identity and a special form of the Binet–Cauchy identity. In a more compact vector notation, Lagrange's identity is expressed as: where a and b are n-dimensional vectors with components that are real numbers. The extension to complex numbers requires the interpretation of the dot product as an inner product or Hermitian dot product. Explicitly, for complex numbers, Lagrange's identity can be written in the form: involving the absolute value. Since the right-hand side of the identity is clearly non-negative, it implies Cauchy's inequality in the finite-dimensional real coordinate space Rn and its complex counterpart Cn. Geometrically, the identity asserts that the square of the volume of the parallelepiped spanned by a set of vectors is the Gram determinant of the vectors. Lagrange's identity and exterior algebra In terms of the wedge product, Lagrange's identity can be written Hence, it can be seen as a formula which gives the length of the wedge product of two vectors, which is the area of the parallelogram they define, in terms of the dot products of the two vectors, as Lagrange's identity and vector calculus In three dimensions, Lagrange's identity asserts that if a and b are vectors in R3 with lengths |a| and |b|, then Lagrange's identity can be written in terms of the cross product and dot product: Using the definition of angle based upon the dot product (see also Cauchy–Schwarz inequality), the left-hand side is where is the angle formed by the vectors a and b. The area of a parallelogram with sides and and angle is known in elementary geometry to be so the left-hand side of Lagrange's identity is the squared area of the parallelogram. The cross product appeari
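In three dimensions the identity reads |a × b|² + (a · b)² = |a|²|b|², which can be spot-checked numerically (a small sketch on random vectors, assuming numpy).

import numpy as np

rng = np.random.default_rng(1)
a, b = rng.normal(size=3), rng.normal(size=3)
cross, dot = np.cross(a, b), np.dot(a, b)
lhs = np.dot(cross, cross) + dot ** 2   # |a x b|^2 + (a . b)^2
rhs = np.dot(a, a) * np.dot(b, b)       # |a|^2 |b|^2
assert np.isclose(lhs, rhs)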
https://en.wikipedia.org/wiki/Baguenaudier
Baguenaudier (French for "time-waster"), also known as the Chinese rings, Cardan's suspension, Cardano's rings, Devil's needle or five pillars puzzle, is a disentanglement puzzle featuring a loop which must be disentangled from a sequence of rings on interlinked pillars. The loop can be either string or a rigid structure. It is thought to have been invented originally in China. The origins are obscure. The American ethnographer Stewart Culin related a tradition attributing the puzzle's invention to the 2nd/3rd century Chinese general Zhuge Liang. It was used by French peasants as a locking mechanism. Variations of this include the Devil's staircase, Devil's Halo and the impossible staircase. Another similar puzzle is the Giant's causeway which uses a separate pillar with an embedded ring. Mathematical solution The 19th-century French mathematician Édouard Lucas, the inventor of the Tower of Hanoi puzzle, was known to have come up with an elegant solution which used binary and Gray codes, in the same way that his puzzle can be solved. The minimum number of moves to solve an n-ringed problem has been found to be (2^(n+1) − 2)/3 when n is even and (2^(n+1) − 1)/3 when n is odd. For other formulae, see . See also ABACABA pattern Disentanglement puzzle Towers of Hanoi References Chinese ancient games Chinese inventions Educational toys Mechanical puzzles
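A short Python helper (names chosen here, not from the article) evaluates the closed form above and cross-checks it against the recurrence the minimal move counts satisfy.

def min_moves(n):
    # closed form: (2**(n+1) - 2) / 3 for even n, (2**(n+1) - 1) / 3 for odd n,
    # which is just integer division of 2**(n+1) by 3
    return 2 ** (n + 1) // 3

def min_moves_recurrence(n):
    # recurrence satisfied by the minimal move counts
    if n <= 2:
        return n
    return min_moves_recurrence(n - 1) + 2 * min_moves_recurrence(n - 2) + 1

assert all(min_moves(n) == min_moves_recurrence(n) for n in range(1, 16))
print([min_moves(n) for n in range(1, 8)])   # [1, 2, 5, 10, 21, 42, 85]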
https://en.wikipedia.org/wiki/Leibniz%20integral%20rule
In calculus, the Leibniz integral rule for differentiation under the integral sign states that for an integral of the form where and the integrands are functions dependent on the derivative of this integral is expressible as where the partial derivative indicates that inside the integral, only the variation of with is considered in taking the derivative. It is named after Gottfried Leibniz. In the special case where the functions and are constants and with values that do not depend on this simplifies to: If is constant and , which is another common situation (for example, in the proof of Cauchy's repeated integration formula), the Leibniz integral rule becomes: This important result may, under certain conditions, be used to interchange the integral and partial differential operators, and is particularly useful in the differentiation of integral transforms. An example of such is the moment generating function in probability theory, a variation of the Laplace transform, which can be differentiated to generate the moments of a random variable. Whether Leibniz's integral rule applies is essentially a question about the interchange of limits. General form: differentiation under the integral sign The right hand side may also be written using Lagrange's notation as: Stronger versions of the theorem only require that the partial derivative exist almost everywhere, and not that it be continuous. This formula is the general form of the Leibniz integral rule and can be derived using the fundamental theorem of calculus. The (first) fundamental theorem of calculus is just the particular case of the above formula where is constant, and does not depend on If both upper and lower limits are taken as constants, then the formula takes the shape of an operator equation: where is the partial derivative with respect to and is the integral operator with respect to over a fixed interval. That is, it is related to the symmetry of second derivatives, but invol
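A symbolic spot-check of the variable-limits rule, using sympy with an integrand and limits chosen here purely for illustration: direct differentiation of the integral is compared with the boundary terms f(x, b(x))·b'(x) − f(x, a(x))·a'(x) plus the integral of ∂f/∂x.

import sympy as sp

x, t = sp.symbols("x t")
f = x**2 * t**3          # example integrand (chosen for illustration)
a, b = x, x**2           # example variable limits

direct = sp.diff(sp.integrate(f, (t, a, b)), x)
leibniz = (f.subs(t, b) * sp.diff(b, x)
           - f.subs(t, a) * sp.diff(a, x)
           + sp.integrate(sp.diff(f, x), (t, a, b)))
print(sp.simplify(direct - leibniz))   # 0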
https://en.wikipedia.org/wiki/Phyletic%20gradualism
Phyletic gradualism is a model of evolution which theorizes that most speciation is slow, uniform and gradual. When evolution occurs in this mode, it is usually by the steady transformation of a whole species into a new one (through a process called anagenesis). In this view no clear line of demarcation exists between an ancestral species and a descendant species, unless splitting occurs. The theory is contrasted with punctuated equilibrium. History The word phyletic derives from the Greek φυλετικός phūletikos, which conveys the meaning of a line of descent. Phyletic gradualism contrasts with the theory of punctuated equilibrium, which proposes that most evolution occurs isolated in rare episodes of rapid evolution, when a single species splits into two distinct species, followed by a long period of stasis or non-change. These models both contrast with variable-speed evolution ("variable speedism"), which maintains that different species evolve at different rates, and that there is no reason to stress one rate of change over another. Evolutionary biologist Richard Dawkins argues that constant-rate gradualism is not present in the professional literature, thereby the term serves only as a straw-man for punctuated-equilibrium advocates. In his book The Blind Watchmaker, Dawkins observes that Charles Darwin himself was not a constant-rate gradualist, as suggested by Niles Eldredge and Stephen Jay Gould. In the first edition of On the Origin of Species, Darwin stated that "Species of different genera and classes have not changed at the same rate, or in the same degree. In the oldest tertiary beds a few living shells may still be found in the midst of a multitude of extinct forms... The Silurian Lingula differs but little from the living species of this genus". Lingula is among the few brachiopods surviving today but also known from fossils over 500 million years old. In the fifth edition of The Origin of Species, Darwin wrote that "the periods during which species
https://en.wikipedia.org/wiki/Environmental%20mitigation
Environmental mitigation, compensatory mitigation, or mitigation banking, are terms used primarily by the United States government and the related environmental industry to describe projects or programs intended to offset known impacts to an existing historic or natural resource such as a stream, wetland, endangered species, archeological site, paleontological site or historic structure. Environmental mitigation is typically a part of an environmental crediting system established by governing bodies which involves allocating debits and credits. Debits occur in situations where a natural resource has been destroyed or severely impaired and credits are given in situations where a natural resource has been deemed to be improved or preserved. Therefore, when an entity such as a business or individual has a "debit" they are required to purchase a "credit". In some cases credits are bought from "mitigation banks" which are large mitigation projects established to provide credit to multiple parties in advance of development when such compensation cannot be achieved at the development site or is not seen as beneficial to the environment. Crediting systems can allow credit to be generated in different ways. For example, in the United States, projects are valued based on what the intentions of the project are which may be to preserve, enhance, restore or create (PERC) a natural resource. Advantages Environmental mitigation and crediting systems are often praised for the following reasons: Development-friendly Mitigation is a more development-friendly alternative to strict environmental laws because it allows development to occur where environmental laws might prohibit it. Mitigation industry Mitigation inevitably creates a "mitigation industry". By requiring those who impact natural resources to purchase credits, a demand for mitigation credit is formed. Businesses related to environmental work typically benefit from such a system. Targeting ecological value Mitiga
https://en.wikipedia.org/wiki/Knee%20wall
A knee wall is a short wall, typically under three feet (one metre) in height, used to support the rafters in timber roof construction. In his book A Visual Dictionary of Architecture, Francis D. K. Ching defines a knee wall as "a short wall supporting rafters at some intermediate position along their length." The knee wall provides support to rafters which therefore need not be large enough to span from the ridge to the eaves. Typically the knee wall is covered with plaster or gypsum board. The term is derived from the association with a human knee, partly bent. Knee walls are common in houses in which the ceiling on the top floor is an attic, i.e. the ceiling is the underside of the roof and slopes down on one or more sides. Knee wall height Since there is no legal definition of the knee wall height, further specifications are required as to how it is to be measured. It is generally accepted that the knee wall begins at the upper edge of the soffit of the floor below. There is no uniform convention for where the knee wall ends. At its greatest extent, the knee wall extends to the imaginary intersection of the outer wall with the upper edge of the rafter. The smallest dimension results when the knee wall is understood to mean only the wall that goes beyond the attic ceiling (excluding the base purlin). Between these possibilities, other methods of measurement are in use. See also Sleeper wall – a short wall used to support floor joists of a ground floor References Types of wall Building engineering
https://en.wikipedia.org/wiki/Secure%20Real-time%20Transport%20Protocol
The Secure Real-time Transport Protocol (SRTP) is a profile for Real-time Transport Protocol (RTP) intended to provide encryption, message authentication and integrity, and replay attack protection to the RTP data in both unicast and multicast applications. It was developed by a small team of Internet Protocol and cryptographic experts from Cisco and Ericsson. It was first published by the IETF in March 2004 as . Since RTP is accompanied by the RTP Control Protocol (RTCP) which is used to control an RTP session, SRTP has a sister protocol, called Secure RTCP (SRTCP); it securely provides the same functions to SRTP as the ones provided by RTCP to RTP. Utilization of SRTP or SRTCP is optional in RTP or RTCP applications; but even if SRTP or SRTCP are used, all provided features (such as encryption and authentication) are optional and can be separately enabled or disabled. The only exception is the message authentication feature which is indispensable and required when using SRTCP. Data flow encryption SRTP and SRTCP use Advanced Encryption Standard (AES) as the default cipher. There are two cipher modes defined which allow the AES block cipher to be used as a stream cipher: Segmented Integer Counter Mode A typical counter mode, which allows random access to any blocks, which is essential for RTP traffic running over unreliable network with possible loss of packets. In the general case, almost any function can be used in the role of counter, assuming that this function does not repeat for a large number of iterations. But the standard for encryption of RTP data is just a usual integer incremental counter. AES running in this mode is the default encryption algorithm, with a default key size of 128 bits and a default session salt key length of 112 bits. f8-mode A variation of output feedback mode, enhanced to be seekable and with an altered initialization function. The default values of the encryption key and salt key are the same as for AES in counter mode. (AE
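For illustration only (generic AES in counter mode using the cryptography package, not SRTP's packet format or key derivation), encryption and decryption with the same keystream look like this:

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = bytes(16)             # 128-bit session key (all zeros, for the sketch only)
counter_block = bytes(16)   # initial counter; SRTP derives this per packet
encryptor = Cipher(algorithms.AES(key), modes.CTR(counter_block)).encryptor()
ciphertext = encryptor.update(b"RTP payload bytes") + encryptor.finalize()
# decryption regenerates the identical keystream
decryptor = Cipher(algorithms.AES(key), modes.CTR(counter_block)).decryptor()
assert decryptor.update(ciphertext) + decryptor.finalize() == b"RTP payload bytes"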
https://en.wikipedia.org/wiki/Grundz%C3%BCge%20der%20Mengenlehre
(German for "Basics of Set Theory") is a book on set theory written by Felix Hausdorff. First published in April 1914, was the first comprehensive introduction to set theory. Besides the systematic treatment of known results in set theory, the book also contains chapters on measure theory and topology, which were then still considered parts of set theory. Hausdorff presented and developed original material which was later to become the basis for those areas. In 1927 Hausdorff published an extensively revised second edition under the title Mengenlehre (German for "Set Theory"), with many of the topics of the first edition omitted. In 1935 there was a third German edition, which in 1957 was translated by John R. Aumann et al. into English under the title Set Theory. Chelsea Publishing Company reprinted the German 1914 edition in New York City in German in 1944, 1949, 1965, 1978 and 1991 but never issued an English translation of this first edition (or the 1927 second edition) to date. When the American Mathematical Society took over and set up AMS Chelsea Publishing it published editions in 2005 and 2021. References . Reprinted by Chelsea Publishing Company in 1944, 1949 and 1965 . Republished by Dover Publications, New York, N. Y., 1944 Republished by AMS-Chelsea 2005. . Extended edition of a chapter in The Princeton Companion to Mathematics. 1914 non-fiction books Mathematics books
https://en.wikipedia.org/wiki/Intertec%20Superbrain
The Intertec SuperBrain was an all-in-one commercial microcomputer that was first sold by Intertec Data Systems Corporation of Columbia, South Carolina, USA in 1979. The machine ran the operating system CP/M and was somewhat unusual in that it used dual Z80 CPUs, the second being used as a disk controller. In 1983, the basic machine sold for about . There were several variants, including the SuperBrain II (released in 1982), SuperBrain II Jr., "QD" (quad density disk drives) and "SD" (super density) models. Intertec also released a similar looking dumb terminal, the Intertube, and smart terminal, the Emulator. The SuperBrain is notable for being at the user end of the first Kermit connection in 1981. The machine was practical and useful in the office environment, but somewhat limited until the arrival of the first 5 MB hard drive in one of the floppy drive bays. This was soon replaced by the 10 MB hard drive. Up to 255 CompuStar workstations could be daisy-chained together via DC-37 "Chaining Adaptor" parallel ports to share the "central disk system" (one of the three hard drive peripheral options below). Each computer, or VPU (Video Processing Unit), was assigned a unique number from 1 to 255 by setting an eight-position DIP switch. Specifications Peripherals CompuStar DSS-10 10 MB Hard Drive (CompuStar Disk Storage System) CDC 96 MB Hard Drive (80 MB fixed disk with 16 MB removable platter) Priam 14" 144 MB Hard Drive Applications Microsoft BASIC 8080 Assembler Microsoft COBOL 74 APL In pop culture The Superbrain can be seen in two episodes of Knight Rider: one in Season 1, Episode 10, "The Final Verdict" (1982), and the second in Season 1, Episode 18, "White Bird" (1983). In John Carpenter’s The Thing, Dr. Blair uses a Superbrain to analyse samples from The Thing from which he estimates that it will take over the world in about three years. References External links Intertec SuperBrain DAVES OLD COMPUTERS Superbrain at Old Computer Museum Ma
https://en.wikipedia.org/wiki/Platform%20screen%20doors
Platform screen doors (PSDs), also known as platform edge doors (PEDs), are used at some train, rapid transit and people mover stations to separate the platform from train tracks, as well as on some bus rapid transit, tram and light rail systems. Primarily used for passenger safety, they are a relatively new addition to many metro systems around the world, some having been retrofitted to established systems. They are widely used in newer Asian and European metro systems, and Latin American bus rapid transit systems. History The idea for platform edge doors dates from as early as 1908, when Charles S. Shute of Boston was granted a patent for "Safety fence and gate for railway-platforms". The invention consisted of "a fence for railway platform edges", composed of a series of pickets bolted to the platform edge, and vertically movable pickets that could retract into a platform edge when there was a train in the station. In 1917, Carl Albert West was granted a patent for "Gate for subrailways and the like". The invention provided for spaced guides secured to a tunnel's side wall, with "a gate having its ends guided in the guides, the ends and intermediate portions of the gate having rollers engaging the side wall". Pneumatic cylinders with pistons would be used to raise the gates above the platform when a train was in the station. Unlike Shute's invention, the entire platform gate was movable, and was to retract upward. The first stations in the world with platform screen doors were the ten stations of the Saint Petersburg Metro's Line 2 that opened between 1961 and 1972. The platform "doors" are actually openings in the station wall, which supports the ceiling of the platform. The track tunnels adjoining the ten stations' island platforms were built with tunnel boring machines (TBMs), and the island platforms were actually located in a separate vault between the two track tunnels. Usually, TBMs bore the deep-level tunnels between stations, while the station vaults
https://en.wikipedia.org/wiki/International%20Computer%20Science%20Institute
The International Computer Science Institute (ICSI) is an independent, non-profit research organization located in Berkeley, California, United States. Since its founding in 1988, ICSI has maintained an affiliation agreement with the University of California, Berkeley, where several of its members hold faculty appointments. Research areas ICSI's research activities include Internet architecture, network security, network routing, speech and speaker recognition, spoken and text-based natural language processing, computer vision, multimedia, privacy and biological system modeling. Research groups and leaders The Institute's director is Dr. Lea Shanley. SIGCOMM Award winner Professor Scott Shenker, one of the most-cited authors in computer science, is the Chief Scientist and head of the New Initiatives group. SIGCOMM Award winner Professor Vern Paxson leads the network security efforts and previously chaired the Internet Research Task Force. Professor Jerry Feldman is the head of the Artificial Intelligence Group. Adjunct Professor Gerald Friedland is the head of the Audio and Multimedia Group. Dr. Stella Yu is head of the Computer Vision Group. Dr. Serge Egelman is head of the Usable Security and Privacy Group. Dr. Steven Wegman is head of the Speech Group. Notable members and alumni Turing Award and Kyoto Prize winner Professor Richard Karp is an alumnus and former head of the Algorithms Group. Professor Nelson Morgan is a former director and former head of the speech group. Professor Trevor Darrell is an alumnus and former head of the Computer Vision Group. Professor Krste Asanovic, an ACM Distinguished Scientist, is an alumnus and former head of the Computer Architecture Group. IEEE Internet Award winner Sally Floyd; connectionist pioneer Jerry Feldman; frame semantics and construction grammar pioneer Charles J. Fillmore and Collin F. Baker, who lead the FrameNet semantic parsing project; and Paul Kay, who published an influential study on t
https://en.wikipedia.org/wiki/TelecityGroup
Telecity Group plc (formerly TelecityRedbus and before that Telecity) was a European carrier-neutral datacentre and colocation centre provider. It specialised in the design, build and management of datacentre space. It was listed on the London Stock Exchange until it was acquired by Equinix in January 2016. History Telecity Group plc was the result of the uniting of three separate companies – TeleCity Limited, Redbus Interhouse Limited and Globix Holdings (UK) Limited. TeleCity Limited was founded by Mike Kelly and Anish Kapoor from Manchester University in April 1998 and opened its first data centre in Manchester. At that time 3i Group made an investment of £24 million in the Company. In July 1998, Redbus Interhouse Limited was incorporated, and commenced operations in its first data centre in London Docklands in July 1999. By March 2000, Redbus Interhouse Limited floated on the main market of the London Stock Exchange and in June 2000, TeleCity Limited’s parent company, TeleCity plc, floated on the London Stock Exchange. In September 2005, TeleCity plc was taken private by 3i and Oak Hill and by October of that year Telecity Group plc was incorporated and became the holding company of Telecity plc and its group companies in November 2005. In January 2006 Telecity Group acquired Redbus Interhouse plc, a rival business, resulting in the two businesses, TeleCity and Redbus, trading under the name of TelecityRedbus. Later in 2006 Telecity Group plc bought the European assets of the US-based Globix Corporation. Following a rebranding exercise implemented in August 2007, TeleCity, Redbus and Globix (UK) began to trade under the name TelecityGroup. In October Telecity Group plc listed on the main market of the London Stock Exchange. In August 2010, TelecityGroup acquired Internet Facilitators Limited (IFL), a provider of carrier-neutral data centres in Manchester. In August 2011 TelecityGroup acquired Data Electronics, which operates two carrier-neutral data centres i
https://en.wikipedia.org/wiki/Limen
In physiology, psychology, or psychophysics, a limen or a liminal point is a sensory threshold of a physiological or psychological response. Such points delineate boundaries of perception; that is, a limen defines a sensory threshold beyond which a particular stimulus becomes perceivable, and below which it remains unperceivable. Liminal, as an adjective, means situated at a sensory threshold, hence barely perceptible. Subliminal means below perception. The absolute threshold is the lowest amount of sensation detectable by a sense organ. See also Just noticeable difference (least perceptible difference) Threshold of pain, the boundary where perception becomes pain Weber–Fechner law (Weber's law) References Physiology Perception
https://en.wikipedia.org/wiki/George%20Szekeres
George Szekeres AM FAA (; 29 May 1911 – 28 August 2005) was a Hungarian–Australian mathematician. Early years Szekeres was born in Budapest, Hungary, as Szekeres György and received his degree in chemistry at the Technical University of Budapest. He worked six years in Budapest as an analytical chemist. He married Esther Klein in 1937. Being Jewish, the family had to escape from the Nazi persecution so Szekeres took a job in Shanghai, China. There they lived through World War II, the Japanese occupation and the beginnings of the Communist revolution. Career In 1948, he was offered a position at the University of Adelaide, Australia, that he gladly accepted. After all the troubles he had had, he began flourishing as a mathematician. In 1963, the family moved to Sydney, where Szekeres took a position at the University of New South Wales, and taught there until his retirement in 1975. He also devised problems for secondary school mathematical olympiads run by the university where he taught, and for a yearly undergraduate competition run by the Sydney University Mathematics Society. Szekeres worked closely with many prominent mathematicians throughout his life, including Paul Erdős, his wife Esther, Pál Turán, Béla Bollobás, Ronald Graham, Alf van der Poorten, Miklós Laczkovich, and John Coates. Honours In 1968 he was the winner of the Thomas Ranken Lyle Medal of the Australian Academy of Science. In May 2001, a festschrift was held in honour of his ninetieth birthday at the University of New South Wales. In January 2001 he was awarded the Australian Centenary Medal "for service to Australian society and science in pure mathematics". In 2001, the Australian Mathematical Society created the George Szekeres Medal in his honour. In June 2002, he was made a Member of the Order of Australia (AM) 'for service to mathematics and science, particularly as a contributor to education and research, to the support and development of the University of New South Wales Mathema
https://en.wikipedia.org/wiki/Reliability%20%28computer%20networking%29
In computer networking, a reliable protocol is a communication protocol that notifies the sender whether or not the delivery of data to intended recipients was successful. Reliability is a synonym for assurance, which is the term used by the ITU and ATM Forum. Reliable protocols typically incur more overhead than unreliable protocols, and as a result, function more slowly and with less scalability. This often is not an issue for unicast protocols, but it may become a problem for reliable multicast protocols. Transmission Control Protocol (TCP), the main protocol used on the Internet, is a reliable unicast protocol; it provides the abstraction of a reliable byte stream to applications. UDP is an unreliable protocol and is often used in computer games, streaming media or in other situations where speed is an issue and some data loss may be tolerated because of the transitory nature of the data. Often, a reliable unicast protocol is also connection oriented. For example, TCP is connection oriented, with the virtual-circuit ID consisting of source and destination IP addresses and port numbers. However, some unreliable protocols are connection oriented, such as Asynchronous Transfer Mode and Frame Relay. In addition, some connectionless protocols, such as IEEE 802.11, are reliable. History Building on the packet switching concepts proposed by Donald Davies, the first communication protocol on the ARPANET was a reliable packet delivery procedure to connect its hosts via the 1822 interface. A host computer simply arranged the data in the correct packet format, inserted the address of the destination host computer, and sent the message across the interface to its connected Interface Message Processor (IMP). Once the message was delivered to the destination host, an acknowledgment was delivered to the sending host. If the network could not deliver the message, the IMP would send an error message back to the sending host. Meanwhile, the developers of CYCLADES and of ALO
https://en.wikipedia.org/wiki/TO-92
The TO-92 is a widely used style of semiconductor package mainly used for transistors. The case is often made of epoxy or plastic, and offers compact size at a very low cost. History and origin The JEDEC TO-92 descriptor is derived from the original full name for the package: Transistor Outline Package, Case Style 92. The package is also known by the designation SOT54. By 1966 the package was being used by Motorola for their 2N3904 devices among others. Construction and orientation The case is molded around the transistor elements in two parts; the face is flat, usually bearing a machine-printed part number (some early examples had the part number printed on the top surface instead). The back is semi-circularly-shaped. A line of moulding flash from the injection-moulding process can be seen around the case. The leads protrude from the bottom of the case. When looking at the face of the transistor, the leads are commonly configured from left-to-right as the emitter, base, and collector for 2N series (JEDEC) transistors, however, other configurations are possible, such as emitter, collector, and base commonly used for 2S series (Japanese) transistors or collector, base, and emitter for many of the BC series (Pro Electron) types. If the face has a part name made up of only one letter and a few numbers, it can be either a Japanese or a Pro Electron part number. Thus, "C1234" would likely be a 2SC1234 device, but "C547" is usually short for "BC547". The leads coming out of the case are spaced 0.05" (1.27 mm) apart. It is often convenient to bend them outward to a 0.10" (2.54 mm) spacing to make more room for wiring. Units with their leads pre-bent may be ordered to fit specific board layouts, depending on the application. Otherwise, the leads may be bent manually; however, care must be taken as they can break easily, as with any other device that is manually configured. The physical dimensions of the TO-92 housing may vary slightly depending on the manufact
https://en.wikipedia.org/wiki/Metagenics
The word metagenics uses the prefix meta and the suffix gen. Literally, it means "the creation of something which creates". In the context of biotechnology, metagenics is the practice of engineering organisms to create a specific enzyme, protein, or other biochemicals from simpler starting materials. The genetic engineering of E. coli with the specific task of producing human insulin from starting amino acids is an example. E. coli has also been engineered to digest plant biomass and use it to produce hydrocarbons in order to synthesize biofuels. The applications of metagenics on E. coli also include higher alcohols, fatty-acid based chemicals and terpenes. Biofuels The depletion of petroleum sources and increase in greenhouse gas emissions in the twentieth and twenty-first centuries have been the driving factors behind the development of biofuels from microorganisms. E. coli is currently regarded as the best option for biofuel production because of the amount of knowledge available about its genome. The process converts biomass into fuels, and has proven successful on an industrial scale, with the United States having produced 6.4 billion gallons of bioethanol in 2007. Bioethanol is currently the front-runner for alternative fuel production and uses S. cerevisiae and Zymomonas mobilis to create ethanol through fermentation. However, maximum productivity is limited due to the fact that these organisms cannot use pentose sugars, leading to consideration of E. coli and Clostridia. E. coli is capable of producing ethanol under anaerobic conditions through metabolizing glucose into two moles of formate, two moles of acetate, and one mole of ethanol. While bioethanol has proved to be a successful alternative fuel source on an industrial scale, it also has its shortcomings, namely, its low energy density, high vapor pressure, and hygroscopicity. Current alternatives to bioethanol include biobutanol, biodiesel, propanol, and synthetic hydrocarbons. The most common form of biodi
https://en.wikipedia.org/wiki/Intrinsically%20photosensitive%20retinal%20ganglion%20cell
Intrinsically photosensitive retinal ganglion cells (ipRGCs), also called photosensitive retinal ganglion cells (pRGC), or melanopsin-containing retinal ganglion cells (mRGCs), are a type of neuron in the retina of the mammalian eye. The presence of (something like) ipRGCs was first suspected in 1927 when rodless, coneless mice still responded to a light stimulus through pupil constriction. This implied that rods and cones are not the only light-sensitive neurons in the retina. Yet research on these cells did not advance until the 1980s. Recent research has shown that these retinal ganglion cells, unlike other retinal ganglion cells, are intrinsically photosensitive due to the presence of melanopsin, a light-sensitive protein. Therefore, they constitute a third class of photoreceptors, in addition to rod and cone cells. Overview Compared to the rods and cones, the ipRGCs respond more sluggishly and signal the presence of light over the long term. They represent a very small subset (~1%) of the retinal ganglion cells. Their functional roles are non-image-forming and fundamentally different from those of pattern vision; they provide a stable representation of ambient light intensity. They have at least three primary functions: They play a major role in synchronizing circadian rhythms to the 24-hour light/dark cycle, providing primarily length-of-day and length-of-night information. They send light information via the retinohypothalamic tract (RHT) directly to the circadian pacemaker of the brain, the suprachiasmatic nucleus of the hypothalamus. The physiological properties of these ganglion cells match known properties of the daily light entrainment (synchronization) mechanism regulating circadian rhythms. In addition, ipRGCs could also influence peripheral tissues such as hair follicle regeneration through the SCN-sympathetic nerve circuit. Photosensitive ganglion cells innervate other brain targets, such as the center of pupillary control, the olivary pretectal
https://en.wikipedia.org/wiki/Mirimanoff%27s%20congruence
In number theory, a branch of mathematics, a Mirimanoff congruence is one of a collection of expressions in modular arithmetic which, if they hold, entail the truth of Fermat's Last Theorem. Since the theorem has now been proven, these are now of mainly historical significance, though the Mirimanoff polynomials are interesting in their own right. The theorem is due to Dmitry Mirimanoff. Definition The nth Mirimanoff polynomial for the prime p is φ_n(t) = 1^(n-1) t + 2^(n-1) t^2 + ... + (p-1)^(n-1) t^(p-1). In terms of these polynomials, if t is one of the six values {-X/Y, -Y/X, -X/Z, -Z/X, -Y/Z, -Z/Y} where X^p + Y^p + Z^p = 0 is a solution to Fermat's Last Theorem, then φ_(p-1)(t) ≡ 0 (mod p) φ_(p-2)(t)φ_2(t) ≡ 0 (mod p) φ_(p-3)(t)φ_3(t) ≡ 0 (mod p) ... φ_((p+1)/2)(t)φ_((p-1)/2)(t) ≡ 0 (mod p) Other congruences Mirimanoff also proved the following: If an odd prime p does not divide one of the numerators of the Bernoulli numbers B_(p-3), B_(p-5), B_(p-7) or B_(p-9), then the first case of Fermat's Last Theorem, where p does not divide X, Y or Z in the equation X^p + Y^p + Z^p = 0, holds. If the first case of Fermat's Last Theorem fails for the prime p, then 3^(p-1) ≡ 1 (mod p^2). A prime number with this property is sometimes called a Mirimanoff prime, in analogy to a Wieferich prime which is a prime such that 2^(p-1) ≡ 1 (mod p^2). The existence of primes satisfying such congruences was recognized long before their implications for the first case of Fermat's Last Theorem became apparent; but while the discovery of the first Wieferich prime came after these theoretical developments and was prompted by them, the first instance of a Mirimanoff prime is so small that it was already known before Mirimanoff formulated the connection to FLT in 1910, which fact may explain the reluctance of some writers to use the name. As early as his 1895 paper (p. 298), Mirimanoff alludes to a rather complicated test for the primes now known by his name, deriving from a formula published by Sylvester in 1861, which is of little computational value but great theoretical interest. This test
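As an illustration of the Mirimanoff-prime condition quoted above, 3^(p-1) ≡ 1 (mod p^2), here is a minimal Python sketch that searches a small range for such primes; the search bound and the trial-division primality test are arbitrary choices for illustration and are not part of the article.

def is_prime(n: int) -> bool:
    """Trial-division primality test; adequate for small search ranges."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def is_mirimanoff_prime(p: int) -> bool:
    """A prime p with 3**(p-1) congruent to 1 modulo p**2 (base-3 analogue of a Wieferich prime)."""
    return is_prime(p) and pow(3, p - 1, p * p) == 1

found = [p for p in range(2, 100_000) if is_mirimanoff_prime(p)]
print(found)   # the smallest example reported is 11

The smallest prime this search reports is 11, consistent with the remark above that the first instance was known very early.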
https://en.wikipedia.org/wiki/Location%20information%20server
The location information server, or LIS is a network node originally defined in the National Emergency Number Association i2 network architecture that addresses the intermediate solution for providing e911 service for users of VoIP telephony. The LIS is the node that determines the location of the VoIP terminal. Beyond the NENA architecture and VoIP, the LIS is capable of providing location information to any IP device within its served access network. The role of the LIS Distributed systems for locating people and equipment will be at the heart of tomorrow's active offices. Computer and communications systems continue to proliferate in the office and home. Systems are varied and complex, involving wireless networks and mobile computers. However, systems are underused because the choices of control mechanisms and application interfaces are too diverse. It is therefore pertinent to consider which mechanisms might allow the user to manipulate systems in simple and ubiquitous ways, and how computers can be made more aware of the facilities in their surroundings. Knowledge of the location of people and equipment within an organization is such a mechanism. Annotating a resource database with location information allows location-based heuristics for control and interaction to be constructed. This approach is particularly attractive because location techniques can be devised that are physically unobtrusive and do not rely on explicit user action. The article describes the technology of a system for locating people and equipment, and the design of a distributed system service supporting access to that information. The application interfaces made possible by or that benefit from this facility are presented Location determination The method used to determine the location of a device in an access network varies between the different types of networks. For a wired network, such as Ethernet or DSL a wiremap method is common. In wiremap location determination, the locat
https://en.wikipedia.org/wiki/Inter-working%20function
The inter-working function (IWF) is a method for interfacing a wireless telecommunication network with the public switched telephone network (PSTN). The IWF converts the data transmitted over the air interface into a format suitable for the PSTN. IWF contains both the hardware and software elements that provide the rate adaptation and protocol conversion between the PSTN and the wireless network. Some systems require more IWF capability than others, depending on the network which is being connected. The IWF also incorporates a "modem bank", which may be used when, for example, the GSM data terminal equipment (DTE) exchanges data with a land DTE connected via an analogue modem. The IWF provides the function to enable the GSM system to interface with the various forms of public and private data networks currently available. The basic features of the IWF are: Data rate adaptation Protocol conversion References Wireless networking Telephony
https://en.wikipedia.org/wiki/Traversal%20Using%20Relays%20around%20NAT
Traversal Using Relays around NAT (TURN) is a protocol that assists in traversal of network address translators (NAT) or firewalls for multimedia applications. It may be used with the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP). It is most useful for clients on networks masqueraded by symmetric NAT devices. TURN does not aid in running servers on well known ports in the private network through a NAT; it supports the connection of a user behind a NAT to only a single peer, as in telephony, for example. TURN is specified by . The TURN URI scheme is documented in . Introduction Network address translation (NAT), a mechanism that serves as a measure to mitigate the issue of IPv4 address exhaustion during the transition to IPv6, is accompanied by various limitations. The most troublesome among these limitations is the fact that NAT breaks many existing IP applications, and makes it more difficult to deploy new ones. Guidelines have been developed that describe how to build "NAT friendly" protocols, but many protocols simply cannot be constructed according to those guidelines. Examples of such protocols include multimedia applications and file sharing. Session Traversal Utilities for NAT (STUN) provides one way for an application to traverse a NAT. STUN allows a client to obtain a transport address (an IP address and port) which may be useful for receiving packets from a peer. However, addresses obtained by STUN may not be usable by all peers. Those addresses work depending on the topological conditions of the network. Therefore, STUN by itself cannot provide a complete solution for NAT traversal. A complete solution requires a means by which a client can obtain a transport address from which it can receive media from any peer which can send packets to the public Internet. This can only be accomplished by relaying data through a server that resides on the public Internet. Traversal Using Relays around NAT (TURN) is a protocol that allows a cl
https://en.wikipedia.org/wiki/Formal%20specification
In computer science, formal specifications are mathematically based techniques whose purpose is to help with the implementation of systems and software. They are used to describe a system, to analyze its behavior, and to aid in its design by verifying key properties of interest through rigorous and effective reasoning tools. These specifications are formal in the sense that they have a syntax, their semantics fall within one domain, and they are able to be used to infer useful information. Motivation With each passing decade, computer systems have become increasingly more powerful and, as a result, they have become more impactful to society. Because of this, better techniques are needed to assist in the design and implementation of reliable software. Established engineering disciplines use mathematical analysis as the foundation of creating and validating product design. Formal specifications are one way to achieve this kind of reliability in software engineering, although they have not been adopted as widely as once predicted. Other methods such as testing are more commonly used to enhance code quality. Uses Given such a specification, it is possible to use formal verification techniques to demonstrate that a system design is correct with respect to its specification. This allows incorrect system designs to be revised before any major investments have been made into an actual implementation. Another approach is to use provably correct refinement steps to transform a specification into a design, which is ultimately transformed into an implementation that is correct by construction. It is important to note that a formal specification is not an implementation, but rather it may be used to develop an implementation. Formal specifications describe what a system should do, not how the system should do it. A good specification must have some of the following attributes: adequate, internally consistent, unambiguous, complete, satisfied, minimal. A good specification will have: Constructability, manageability and evol
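To make concrete the point that a specification describes what a system should do rather than how, here is a small Python sketch, not taken from the article: a declarative specification of sorting stated as a predicate over input and output, checked against one possible implementation. The function and variable names are illustrative assumptions only.

from collections import Counter
from typing import List

def sorting_spec(inp: List[int], out: List[int]) -> bool:
    """States *what* a correct result looks like: ordered, and the same multiset of elements."""
    ordered = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
    same_elements = Counter(inp) == Counter(out)
    return ordered and same_elements

def my_sort(xs: List[int]) -> List[int]:
    """One possible implementation; the specification does not care which algorithm is used."""
    return sorted(xs)

data = [3, 1, 2, 1]
result = my_sort(data)
assert sorting_spec(data, result), "implementation violates its specification"
print(result)   # [1, 1, 2, 3]

Any implementation satisfying the predicate is acceptable; the specification itself commits to no algorithm.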
https://en.wikipedia.org/wiki/Transport%20Research%20Laboratory
TRL Limited, trading as TRL (formerly Transport Research Laboratory) is an independent private company offering a transport consultancy and research service to the public and private sector. Originally established in 1933 by the UK Government as the Road Research Laboratory (RRL), it was privatised in 1996. Its motto or tagline is 'The Future of Transport'. History TRL was originally established in 1933 by the UK Government as the Road Research Laboratory (RRL) under the Department of Scientific and Industrial Research (DSIR), and later became the Transport and Road Research Laboratory (TRRL) in 1972. During the Second World War, the Laboratory contributed to the war effort. Among its contributions, under William Glanville, were research that aided the development of plastic armour, the bouncing bomb and the Disney bomb. During governmental reorganisation in the 1970s, the TRRL moved from the Department of Trade and Industry (DTI) to the Department of the Environment (DoE). At the TRRL, Frank Blackmore developed the mini-roundabout and its associated 'priority rule', which was adopted in 1975. With the encouragement of the UK Department of Transport, TRRL was instrumental in promoting cooperation with other European laboratories. In 1989, TRRL's initiative to create a Forum of European National Highway Research Laboratories led to its hosting of the inaugural meeting. It became an executive agency of the UK Department for Transport (DfT) in 1992, and changed its name to the Transport Research Laboratory (TRL). It was privatised in 1996, though earlier plans in 1994 for a proposed privatisation were criticised at the time, notably by former Transport Minister Barbara Castle. Operations TRL is based in Crowthorne, Berkshire, with additional offices in Edinburgh and Birmingham. TRL's key areas of work include road, network and vehicle safety; traffic management; planning and control; investigations and risk management; transport infrastructure; and environment
https://en.wikipedia.org/wiki/Zero%20state%20response
In electrical circuit theory, the zero state response (ZSR) is the behaviour or response of a circuit with an initial state of zero. The ZSR results only from the external inputs or driving functions of the circuit and not from the initial state. The total response of the circuit is the superposition of the ZSR and the ZIR, or zero input response. The ZIR results only from the initial state of the circuit and not from any external drive. The ZIR is also called the natural response, and the resonant frequencies of the ZIR are called the natural frequencies. Given a description of a system in the s-domain, the zero-input part of the response takes the form Init(s)/a(s), where a(s) and Init(s) are system-specific; the zero-state part is driven entirely by the input. Zero state response and zero input response in integrator and differentiator circuits One example of zero state response being used is in integrator and differentiator circuits. By examining a simple integrator circuit it can be demonstrated that when a function is put into a linear time-invariant (LTI) system, the output can be characterized by a superposition or sum of the zero input response and the zero state response. A system can be represented schematically with the input on the left and the output on the right. The output can be separated into a zero input solution and a zero state solution. The contributions of the initial state and of the input to the output are additive, and each contribution vanishes when its own source (the initial state or the input, respectively) vanishes. This behavior constitutes a linear system. A linear system has an output that is a sum of distinct zero-input and zero-state components, each varying linearly with the initial state of the system and the input of the system, respectively. The zero input response and zero state response are independent of each other and therefore each component can be computed independently of the other. Zero state response in integrator and differentiator circuits The zero state response represents the system output when there is no influence from internal voltages
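The additive split into zero-input and zero-state parts can be demonstrated numerically. The Python sketch below uses an assumed first-order LTI system dy/dt = -a*y + u(t) with a step input (an illustration, not a circuit from the article) and integrates it three times: full response, input removed (ZIR), and initial state zeroed (ZSR). The total response matches ZIR + ZSR up to floating-point error.

import numpy as np

a = 2.0                      # assumed system constant, for illustration only
dt = 1e-4
t = np.arange(0.0, 3.0, dt)
u = np.ones_like(t)          # step input
y0 = 1.5                     # nonzero initial state

def simulate(y_init, u_signal):
    """Forward-Euler integration of dy/dt = -a*y + u."""
    y = np.empty_like(u_signal)
    y[0] = y_init
    for k in range(len(u_signal) - 1):
        y[k + 1] = y[k] + dt * (-a * y[k] + u_signal[k])
    return y

y_total = simulate(y0, u)                    # full response
y_zir   = simulate(y0, np.zeros_like(u))     # zero-input response: drive removed
y_zsr   = simulate(0.0, u)                   # zero-state response: initial state zeroed

# Linearity: the total response is the superposition of ZIR and ZSR.
print(np.max(np.abs(y_total - (y_zir + y_zsr))))   # ~0, up to floating-point error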
https://en.wikipedia.org/wiki/Polymatroid
In mathematics, a polymatroid is a polytope associated with a submodular function. The notion was introduced by Jack Edmonds in 1970. It is also described as the multiset analogue of the matroid. Definition Let E be a finite set and f : 2^E → R_+ a non-decreasing submodular function, that is, for each A ⊆ B ⊆ E we have f(A) ≤ f(B), and for each A, B ⊆ E we have f(A ∪ B) + f(A ∩ B) ≤ f(A) + f(B). We define the polymatroid associated to f to be the following polytope: P_f = { x ∈ R_+^E : Σ_{e∈A} x_e ≤ f(A) for every A ⊆ E }. When we allow the entries of x to be negative we denote this polytope by EP_f, and call it the extended polymatroid associated to f. An equivalent definition Let E be a finite set. If u ∈ R_+^E then we denote by |u| the sum of the entries of u, and write v ≤ u whenever v_e ≤ u_e for every e ∈ E (notice that this gives a partial order to R_+^E). A polymatroid on the ground set E is a nonempty compact subset P in R_+^E, the set of independent vectors, such that: We have that if v ∈ P, then u ∈ P for every u ≤ v; If u, v ∈ P with |v| > |u|, then there is a vector w ∈ P such that u < w ≤ u ∨ v (the componentwise maximum). This definition is equivalent to the one described before, where f is the function defined by f(A) = max { Σ_{e∈A} v_e : v ∈ P } for every A ⊆ E. Relation to matroids To every matroid M on the ground set E we can associate the set P_M = { χ_I : I ∈ I(M) }, where I(M) is the set of independent sets of M and we denote by χ_I the characteristic vector of I: for every e ∈ E, χ_I(e) = 1 if e ∈ I and 0 otherwise. By taking the convex hull of P_M we get a polymatroid. It is associated to the rank function of M. The conditions of the second definition reflect the axioms for the independent sets of a matroid. Relation to generalized permutahedra Because generalized permutahedra can be constructed from submodular functions, and every generalized permutahedron has an associated submodular function, we have that there should be a correspondence between generalized permutahedra and polymatroids. In fact every polymatroid is a generalized permutahedron that has been translated to have a vertex in the origin. This result suggests that the combinatorial information of polymatroids is shared with generalized permutahedra. Properties P_f is nonempty if and only if f ≥ 0, and EP_f is nonempty if and only if f(∅) ≥ 0. Given any extended
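A brute-force membership test for P_f follows directly from the definition: check x ≥ 0 and that the sum of x over every subset A is at most f(A). The Python sketch below uses an assumed example f(A) = min(|A|, 2), the rank function of the uniform matroid U_{2,3}, which is non-decreasing and submodular; it is only an illustration and is exponential in the size of the ground set.

from itertools import combinations

def in_polymatroid(x, f, ground):
    """Check membership in P_f = {x >= 0 : sum_{e in A} x_e <= f(A) for all A subset of E}.
    Brute force over all subsets, so only practical for small ground sets."""
    if any(x[e] < 0 for e in ground):
        return False
    for r in range(1, len(ground) + 1):
        for A in combinations(ground, r):
            if sum(x[e] for e in A) > f(A) + 1e-12:
                return False
    return True

ground = ("a", "b", "c")
f = lambda A: min(len(A), 2)   # assumed example: rank function of U_{2,3}

print(in_polymatroid({"a": 0.5, "b": 0.5, "c": 0.5}, f, ground))   # True
print(in_polymatroid({"a": 1.0, "b": 1.0, "c": 1.0}, f, ground))   # False: sum over {a,b,c} is 3 > 2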
https://en.wikipedia.org/wiki/Cycloconverter
A cycloconverter (CCV) or a cycloinverter converts a constant amplitude, constant frequency AC waveform to another AC waveform of a lower frequency by synthesizing the output waveform from segments of the AC supply without an intermediate DC link. There are two main types of CCVs, circulating current type or blocking mode type, most commercial high power products being of the blocking mode type. Characteristics Whereas phase-controlled semiconductor controlled rectifier devices (SCR) can be used throughout the range of CCVs, low cost, low-power TRIAC-based CCVs are inherently reserved for resistive load applications. The amplitude and frequency of a converter's output voltage are both variable. The output to input frequency ratio of a three-phase CCV must be less than about one-third for circulating current mode CCVs or one-half for blocking mode CCVs. Output waveform quality improves as the pulse number of the phase-shifted switching-device bridges at the CCV's input increases. In general, CCVs can be built with 1-phase/1-phase, 3-phase/1-phase and 3-phase/3-phase input/output configurations, most applications however being 3-phase/3-phase. Applications The competitive power rating span of standardized CCVs ranges from a few megawatts up to many tens of megawatts. CCVs are used for driving mine hoists, rolling mill main motors, ball mills for ore processing, cement kilns, ship propulsion systems, slip power recovery wound-rotor induction motors (i.e., Scherbius drives) and aircraft 400 Hz power generation. The variable-frequency output of a cycloconverter can be reduced essentially to zero. This means that very large motors can be started on full load at very slow revolutions, and brought gradually up to full speed. This is invaluable with, for example, ball mills, allowing starting with a full load rather than the alternative of having to start the mill with an empty barrel then progressively load it to full capacity. A fully loaded "hard start" for su
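The idea of synthesizing a lower-frequency output from segments of the AC supply can be caricatured numerically. The Python sketch below simply picks, at each instant, whichever of three idealized supply phases is closest to a low-frequency reference; this is a loose illustration only, with assumed frequencies, and it does not model SCR bridges, firing-angle control, or circulating/blocking modes.

import numpy as np

f_in, f_out = 50.0, 12.5            # assumed values; output well below one-third of input
t = np.arange(0.0, 0.16, 1e-5)

# Idealized three-phase supply (amplitude 1), phases 120 degrees apart.
phases = [np.sin(2 * np.pi * f_in * t + k * 2 * np.pi / 3) for k in range(3)]
reference = 0.9 * np.sin(2 * np.pi * f_out * t)   # desired low-frequency output

# Crude "segment selection": at each instant take the supply phase closest to the reference.
stack = np.vstack(phases)
output = stack[np.argmin(np.abs(stack - reference), axis=0), np.arange(len(t))]

print(float(np.mean((output - reference) ** 2)))   # rough tracking error of the synthesized waveform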
https://en.wikipedia.org/wiki/Steinhaus%E2%80%93Johnson%E2%80%93Trotter%20algorithm
The Steinhaus–Johnson–Trotter algorithm or Johnson–Trotter algorithm, also called plain changes, is an algorithm named after Hugo Steinhaus, Selmer M. Johnson and Hale F. Trotter that generates all of the permutations of n elements. Each permutation in the sequence that it generates differs from the previous permutation by swapping two adjacent elements of the sequence. Equivalently, this algorithm finds a Hamiltonian cycle in the permutohedron. This method was known already to 17th-century English change ringers, and has been called "perhaps the most prominent permutation enumeration algorithm". A version of the algorithm can be implemented in such a way that the average time per permutation is constant. As well as being simple and computationally efficient, this algorithm has the advantage that subsequent computations on the permutations that it generates may be sped up because of the similarity between consecutive permutations that it generates. Algorithm The sequence of permutations generated by the Steinhaus–Johnson–Trotter algorithm has a natural recursive structure, that can be generated by a recursive algorithm. However the actual Steinhaus–Johnson–Trotter algorithm does not use recursion, instead computing the same sequence of permutations by a simple iterative method. A later improvement allows it to run in constant average time per permutation. Recursive structure The sequence of permutations for a given number n can be formed from the sequence of permutations for n − 1 by placing the number n into each possible position in each of the shorter permutations. The Steinhaus–Johnson–Trotter algorithm follows this structure: the sequence of permutations it generates consists of (n − 1)! blocks of n permutations, so that within each block the permutations agree on the ordering of the numbers from 1 to n − 1 and differ only in the position of n. The blocks themselves are ordered recursively, according to the Steinhaus–Johnson–Trotter algorithm for one less element. Within each block, the
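The iterative method is commonly stated with a direction attached to each element: an element is "mobile" when it points at a smaller neighbour, the largest mobile element is swapped in its direction, and all larger elements then reverse direction. The Python sketch below is one rendering of that directed-integer formulation, written for illustration rather than taken from any reference implementation.

def steinhaus_johnson_trotter(n):
    """Generate all permutations of 1..n so that consecutive permutations differ
    by one adjacent swap (plain changes), using directed integers."""
    perm = list(range(1, n + 1))
    direction = [-1] * n          # -1 = looking left, +1 = looking right
    yield tuple(perm)
    while True:
        # Find the largest "mobile" element: one that looks at a smaller neighbour.
        mobile_idx = -1
        for i, v in enumerate(perm):
            j = i + direction[i]
            if 0 <= j < n and perm[j] < v and (mobile_idx == -1 or v > perm[mobile_idx]):
                mobile_idx = i
        if mobile_idx == -1:
            return                # no mobile element: every permutation has been produced
        j = mobile_idx + direction[mobile_idx]
        perm[mobile_idx], perm[j] = perm[j], perm[mobile_idx]
        direction[mobile_idx], direction[j] = direction[j], direction[mobile_idx]
        # Reverse the direction of every element larger than the one just moved.
        moved = perm[j]
        for i, v in enumerate(perm):
            if v > moved:
                direction[i] *= -1
        yield tuple(perm)

print(list(steinhaus_johnson_trotter(3)))
# [(1, 2, 3), (1, 3, 2), (3, 1, 2), (3, 2, 1), (2, 3, 1), (2, 1, 3)]

For n = 3 this yields the six permutations in plain-changes order, each differing from its predecessor by a single adjacent swap.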
https://en.wikipedia.org/wiki/Hdparm
hdparm is a command line program for Linux to set and view ATA hard disk drive hardware parameters and test performance. It can set parameters such as drive caches, sleep mode, power management, acoustic management, and DMA settings. GParted and Parted Magic both include hdparm. Changing hardware parameters from suboptimal conservative defaults to their optimal settings can improve performance greatly. For example, turning on DMA can, in some instances, double or triple data throughput. There is, however, no reliable method for determining the optimal settings for a given controller-drive combination, except careful trial and error. Depending on the given parameters, hdparm can cause computer crashes or render the data on the disk inaccessible. Usage examples hdparm has to be run with special privileges, otherwise it will either not be found or the requested actions will not be executed properly. Display information of the hard drive: sudo hdparm -I /dev/sda Turn on DMA for the first hard drive: sudo hdparm -d1 /dev/sda Test device read performance speed (-t for timing buffered disk reads) of the first hard drive: sudo hdparm -t /dev/sda Enable energy saving spindown after inactivity (24*5=120 seconds): sudo hdparm -S 24 /dev/sda To retain hdparm settings after a software reset, run: sudo hdparm -K 1 /dev/sda Enable read-ahead: sudo hdparm -A 1 /dev/sda Change its acoustic management, at the cost of read/write performance (Some drives, such as newer WD drives and all SSDs, ignore this setting.): sudo hdparm -M 128 /dev/sda If the disk synchronisation intervals are too short, then even small amounts of data will be written to disk which can have severe consequences for its lifespan. The better way would be to collect small data into bigger chunks and wait until the chunk is big enough to be written to disk. Current web browsers like Chrome write regularly small chunks when browsing in order not to lose any important data when the application c
https://en.wikipedia.org/wiki/Cliff%20Jones%20%28computer%20scientist%29
Clifford "Cliff" B. Jones (born 1 June 1944) is a British computer scientist, specializing in research into formal methods. He undertook a late DPhil at the Oxford University Computing Laboratory (now the Oxford University Department of Computer Science) under Tony Hoare, awarded in 1981. Jones' thesis proposed an extension to Hoare logic for handling concurrent programs, rely/guarantee. Prior to his DPhil, Jones worked for IBM, between the Hursley and Vienna Laboratories. In Vienna, Jones worked with Peter Lucas, Dines Bjørner and others on the Vienna Development Method (VDM), originally as a method for specifying the formal semantics of programming languages, and subsequently for specifying and verifying programs. Cliff Jones was a professor at the Victoria University of Manchester in the 1980s and early 1990s, worked in industry at Harlequin for a period, and is now a Professor of Computing Science at Newcastle University. He has been editor-in-chief of the Formal Aspects of Computing journal. As well as formal methods, Jones also has interests in interdisciplinary aspects of computer science and the history of computer science. Books Jones has authored and edited many books, including: Understanding Programming Languages, Jones, C.B. Springer, Cham. Print / online (2020). Reflections on the Work of C.A.R. Hoare, Roscoe, A.W., Jones, C.B. and Wood, K. (eds.). Springer. (2010). VDM: Une methode rigoureuse pour le development du logiciel, Jones, C.B. Masson, Paris. (1993). MURAL: A Formal Development Support System, Jones, C.B., Jones, K.D., Lindsay, P.A. and Moore, R. (eds.). Springer-Verlag. (1991). Systematic Software Development using VDM (2nd Edition), Jones, C.B. Prentice Hall International Series in Computer Science, Prentice Hall. , 1990 Case Studies in Systematic Software Development, Jones, C.B. and Shaw, R.C.F. (eds.). Prentice Hall International Series in Computer Science, Prentice Hall. (1989). Essays in Computing Science, Hoare,
https://en.wikipedia.org/wiki/Opaque%20pointer
In computer programming, an opaque pointer is a special case of an opaque data type, a data type declared to be a pointer to a record or data structure of some unspecified type. Opaque pointers are present in several programming languages including Ada, C, C++, D and Modula-2. If the language is strongly typed, programs and procedures that have no other information about an opaque pointer type T can still declare variables, arrays, and record fields of type T, assign values of that type, and compare those values for equality. However, they will not be able to de-reference such a pointer, and can only change the object's content by calling some procedure that has the missing information. Opaque pointers are a way to hide the implementation details of an interface from ordinary clients, so that the implementation may be changed without the need to recompile the modules using it. This benefits the programmer as well since a simple interface can be created, and most details can be hidden in another file. This is important for providing binary code compatibility through different versions of a shared library, for example. This technique is described in Design Patterns as the Bridge pattern. It is sometimes referred to as "handle classes", the "Pimpl idiom" (for "pointer to implementation idiom"), "Compiler firewall idiom", "d-pointer" or "Cheshire Cat", especially among the C++ community. Examples Ada package Library_Interface is type Handle is limited private; -- Operations... private type Hidden_Implementation; -- Defined in the package body type Handle is access Hidden_Implementation; end Library_Interface; The type Handle is an opaque pointer to the real implementation, that is not defined in the specification. Note that the type is not only private (to forbid the clients from accessing the type directly, and only through the operations), but also limited (to avoid the copy of the data structure, and thus preventing dangling references). p
https://en.wikipedia.org/wiki/Titan%20Rain
Titan Rain was a series of coordinated attacks on computer systems in the United States since 2003; they were known to have been ongoing for at least three years. The attacks originated in Guangdong, China. The activity is believed to be associated with a state-sponsored advanced persistent threat. It was given the designation Titan Rain by the federal government of the United States. Titan Rain hackers gained access to many United States defense contractor computer networks, which were targeted for their sensitive information, including those at Lockheed Martin, Sandia National Laboratories, Redstone Arsenal, and NASA. Attackers The attacks are reported to be the result of actions by People's Liberation Army Unit 61398. These hackers attacked both the US government (Defense Intelligence Agency) and the UK government (Ministry of Defence). In 2006, an "organised Chinese hacking group" shut down a part of the UK House of Commons computer system. The Chinese government has denied responsibility. Consequences The U.S. government has blamed the Chinese government for the 2004 attacks. Alan Paller, SANS Institute research director, stated that the attacks came from individuals with "intense discipline" and that "no other organization could do this if they were not a military". Such sophistication has pointed toward the People's Liberation Army as the attackers. Titan Rain reportedly attacked multiple organizations, such as NASA and the FBI. Although no classified information was reported stolen, the hackers were able to steal unclassified information (e.g., information from a home computer) that could reveal strengths and weaknesses of the United States. Titan Rain has also caused distrust between other countries (such as the United Kingdom and Russia) and China. The United Kingdom has stated officially that Chinese hackers attacked its governmental offices. Titan Rain has caused the rest of the world to be more cautious of attacks not just from China but from o
https://en.wikipedia.org/wiki/Universal%20Data%20Element%20Framework
The Universal Data Element Framework (UDEF) was a controlled vocabulary developed by The Open Group. It provided a framework for categorizing, naming, and indexing data. It assigned to every item of data a structured alphanumeric tag plus a controlled vocabulary name that describes the meaning of the data. This allowed relating data elements to similar elements defined by other organizations. UDEF defined a Dewey-decimal like code for each concept. For example, an "employee number" is often used in human resource management. It has a UDEF tag a.5_12.35.8 and a controlled vocabulary description "Employee.PERSON_Employer.Assigned.IDENTIFIER". UDEF has been superseded by the Open Data Element Framework (O-DEF). Examples In an application used by a hospital, the last name and first name of several people could include the following example concepts: Patient Person Family Name – find the word “Patient” under the UDEF object “Person” and find the word “Family” under the UDEF property “Name” Patient Person Given Name – find the word “Patient” under the UDEF object “Person” and find the word “Given” under the UDEF property “Name” Doctor Person Family Name – find the word “Doctor” under the UDEF object “Person” and find the word “Family” under the UDEF property “Name” Doctor Person Given Name – find the word “Doctor” under the UDEF object “Person” and find the word “Given” under the UDEF property “Name” For the examples above, the following UDEF IDs are available: “Patient Person Family Name” the UDEF ID is “au.5_11.10” “Patient Person Given Name” the UDEF ID is “au.5_12.10” “Doctor Person Family Name” the UDEF ID is “aq.5_11.10” “Doctor Person Given Name” the UDEF ID is “aq.5_12.10” See also Data integration ISO/IEC 11179 National Information Exchange Model Metadata Semantic web Data element Representation term Controlled vocabulary References External links UDEF Project of The Open Group UDEF Frequently Asked Questions Data management Interoperability
https://en.wikipedia.org/wiki/Dines%20Bj%C3%B8rner
Professor Dines Bjørner (born 4 October 1937, in Odense) is a Danish computer scientist. He specializes in research into domain engineering, requirements engineering and formal methods. He worked with Cliff Jones and others on the Vienna Development Method (VDM) at IBM Laboratory Vienna (and elsewhere). Later he was involved with producing the RAISE (Rigorous Approach to Industrial Software Engineering) formal method with tool support. Bjørner was a professor at the Technical University of Denmark (DTU) from 1965–1969 and 1976–2007, before he retired in March 2007. He was responsible for establishing the United Nations University International Institute for Software Technology (UNU-IIST), Macau, in 1992 and was its first director. His magnum opus on software engineering (three volumes) appeared in 2005/6. To support VDM, Bjørner co-founded VDM-Europe, which subsequently became Formal Methods Europe, an organization that supports conferences and related activities. In 2003, he instigated the associated ForTIA Formal Techniques Industry Association. Bjørner became a knight of the Order of the Dannebrog in 1985. He received a Dr.h.c. from the Masaryk University, Brno, Czech Republic in 2004. In 2021, he obtained a Dr. techn. from the Technical University of Denmark, Kongens Lyngby, Denmark. He is a Fellow of the IEEE (2004) and ACM (2005). He has also been a member of the Academia Europaea since 1989. In 2007, a Symposium was held in Macau in honour of Dines Bjørner and Zhou Chaochen. In 2021, Bjørner was elected to a Formal Methods Europe (FME) Fellowship. Bjørner is married to Kari Bjørner, with two children and five grandchildren. Selected books Domain Science and Engineering: A Foundation for Software Development, Bjørner, D. Monographs in Theoretical Computer Science, An EATCS Series, Springer Nature. Hardcover ; softcover ; eBook (2021). Software Engineering 1: Abstraction and Modelling, Bjørner, D. Texts in Theoretical Computer Science, An EATCS Seri
https://en.wikipedia.org/wiki/F5%2C%20Inc.
F5, Inc. is a publicly held American technology company specializing in application security, multi-cloud management, online fraud prevention, application delivery networking (ADN), application availability & performance, network security, and access & authorization. F5 is headquartered in Seattle, Washington in F5 Tower, with an additional 75 offices in 43 countries focusing on account management, global services support, product development, manufacturing, software engineering, and administrative jobs. Notable office locations include Spokane, Washington; New York, New York; Boulder, Colorado; London, England; San Jose, California; and San Francisco, California. F5 originally offered application delivery controller (ADC) technology, but has expanded into application-layer, automation, multi-cloud, and security services. As ransomware, data leaks, DDoS, and other attacks on businesses of all sizes have risen, companies such as F5 have continued to reinvent themselves. While the majority of F5's revenue continues to be attributed to its hardware products such as the BIG-IP iSeries systems, the company has begun to offer additional modules on their proprietary operating system, TMOS (Traffic Management Operating System). These modules are listed below and include, but are not limited to, Local Traffic Manager (LTM), Advanced Web Application Firewall (AWAF), DNS (previously named GTM), and Access Policy Manager (APM). These offer organizations running the BIG-IP the ability to deploy load balancing, Layer 7 application firewalls, single sign-on (for Azure AD, Active Directory, LDAP, and Okta), as well as enterprise-level VPNs. While the BIG-IP was traditionally a hardware product, F5 now offers it as a virtual machine, which they have branded as the BIG-IP Virtual Edition. The BIG-IP Virtual Edition is cloud agnostic and can be deployed on-premises or in a public and/or hybrid cloud environment. F5's customers include Bank of America, Microsoft, Oracle, Alaska Airlin
https://en.wikipedia.org/wiki/Pharmacode
Pharmacode, also known as Pharmaceutical Binary Code, is a barcode standard, used in the pharmaceutical industry as a packing control system. It is designed to be readable despite printing errors. It can be printed in multiple colors as a check to ensure that the remainder of the packaging (which the pharmaceutical company must print to protect itself from legal liability) is correctly printed. This barcode is also known as Laetuscode. For best practice (better security), the code should always contain at least three bars and should always be a combination of both thick and thin bars (all thick bars or all thin bars do not represent a secure code). Encoding Pharmacode can represent only a single integer from 3 to 131070. Unlike other commonly used one-dimensional barcode schemes, pharmacode does not store the data in a form corresponding to the human-readable digits; the number is encoded in binary, rather than decimal. Pharmacode is read from right to left (an omnidirectional scanner can also read it from left to right): with n as the bar position, starting at 0 on the right, each narrow bar adds 2^n to the value and each wide bar adds 2×2^n. The minimum barcode is 2 bars and the maximum 16, so the smallest number that could be encoded is 3 (2 narrow bars) and the biggest is 131070 (16 wide bars). It represents colors which are on the label. References External links Pharmacode Specification Barcodes Unique identifiers Supply chain management
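The decoding rule above (a narrow bar in position n contributes 2^n, a wide bar 2×2^n, positions counted from the right) is easy to check in a few lines of Python; the function below is an illustrative sketch, with "narrow"/"wide" as assumed labels for the two bar widths.

def pharmacode_value(bars):
    """Decode a Pharmacode bar pattern into its integer value.
    `bars` lists the bars from right to left; each entry is 'narrow' or 'wide'.
    A narrow bar in position n contributes 2**n, a wide bar 2 * 2**n."""
    if not 2 <= len(bars) <= 16:
        raise ValueError("Pharmacode uses between 2 and 16 bars")
    value = 0
    for n, bar in enumerate(bars):
        value += 2 ** n if bar == "narrow" else 2 * 2 ** n
    return value

print(pharmacode_value(["narrow", "narrow"]))   # 3, the smallest encodable value
print(pharmacode_value(["wide"] * 16))          # 131070, the largest encodable value

The two boundary cases reproduce the limits quoted above: two narrow bars give 3 and sixteen wide bars give 131070.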
https://en.wikipedia.org/wiki/%CE%92-Hydroxy%20%CE%B2-methylbutyric%20acid
β-Hydroxy β-methylbutyric acid (HMB), otherwise known as its conjugate base, , is a naturally produced substance in humans that is used as a dietary supplement and as an ingredient in certain medical foods that are intended to promote wound healing and provide nutritional support for people with muscle wasting due to cancer or HIV/AIDS. In healthy adults, supplementation with HMB has been shown to increase exercise-induced gains in muscle size, muscle strength, and lean body mass, reduce skeletal muscle damage from exercise, improve aerobic exercise performance, and expedite recovery from exercise. Medical reviews and meta-analyses indicate that HMB supplementation also helps to preserve or increase lean body mass and muscle strength in individuals experiencing age-related muscle loss. HMB produces these effects in part by stimulating the production of proteins and inhibiting the breakdown of proteins in muscle tissue. No adverse effects from long-term use as a dietary supplement in adults have been found. HMB is sold as a dietary supplement at a cost of about per month when taking 3 grams per day. HMB is also contained in several nutritional products, including certain formulations of Ensure and Juven. HMB is also present in insignificant quantities in certain foods, such as alfalfa, asparagus, avocados, cauliflower, grapefruit, and catfish. The effects of HMB on human skeletal muscle were first discovered by Steven L. Nissen at Iowa State University in the . HMB has not been banned by the National Collegiate Athletic Association, World Anti-Doping Agency, or any other prominent national or international athletic organization. In 2006, only about 2% of college student athletes in the United States used HMB as a dietary supplement. As of 2017, HMB has found widespread use as an ergogenic supplement among young athletes. Uses Available forms HMB is sold as an over-the-counter dietary supplement in the free acid form, β-hydroxy β-methylbutyric acid (HMB-FA), a
https://en.wikipedia.org/wiki/Application%20security
Application security (short AppSec) includes all tasks that introduce a secure software development life cycle to development teams. Its final goal is to improve security practices and, through that, to find, fix and preferably prevent security issues within applications. It encompasses the whole application life cycle from requirements analysis, design, implementation, verification as well as maintenance. Approaches Different approaches will find different subsets of the security vulnerabilities lurking in an application and are most effective at different times in the software lifecycle. They each represent different tradeoffs of time, effort, cost and vulnerabilities found. Design review. Before code is written the application's architecture and design can be reviewed for security problems. A common technique in this phase is the creation of a threat model. Whitebox security review, or code review. This is a security engineer deeply understanding the application through manually reviewing the source code and noticing security flaws. Through comprehension of the application, vulnerabilities unique to the application can be found. Blackbox security audit. This is only through the use of an application testing it for security vulnerabilities, no source code is required. Automated Tooling. Many security tools can be automated through inclusion into the development or testing environment. Examples of those are automated DAST/SAST tools that are integrated into code editor or CI/CD platforms. Coordinated vulnerability platforms. These are hacker-powered application security solutions offered by many websites and software developers by which individuals can receive recognition and compensation for reporting bugs. Web application security Web application security is a branch of information security that deals specifically with the security of websites, web applications, and web services. At a high level, web application security draws on the principles of appl
https://en.wikipedia.org/wiki/Z-variant
In Unicode, two glyphs are said to be Z-variants (often spelled zVariants) if they share the same etymology but have slightly different appearances and different Unicode code points. For example, the Unicode characters U+8AAA 說 and U+8AAC 説 are Z-variants. The notion of Z-variance is only applicable to the "CJKV scripts"—Chinese, Japanese, Korean and Vietnamese—and is a subtopic of Han unification. Differences on the Z-axis The Unicode philosophy of code point allocation for CJK languages is organized along three "axes." The X-axis represents differences in semantics; for example, the Latin capital A (U+0041 A) and the Greek capital alpha (U+0391 Α) are represented by two distinct code points in Unicode, and might be termed "X-variants" (though this term is not common). The Y-axis represents significant differences in appearance though not in semantics; for example, the traditional Chinese character māo "cat" (U+8C93 貓) and the simplified Chinese character (U+732B 猫) are Y-variants. The Z-axis represents minor typographical differences. For example, the Chinese characters (U+838A 莊) and (U+8358 荘) are Z-variants, as are (U+8AAA 說) and (U+8AAC 説). The glossary at Unicode.org defines "Z-variant" as "Two CJK unified ideographs with identical semantics and unifiable shapes," where "unifiable" is taken in the sense of Han unification. Thus, were Han unification perfectly successful, Z-variants would not exist. They exist in Unicode because it was deemed useful to be able to "round-trip" documents between Unicode and other CJK encodings such as Big5 and CCCII. For example, the character 莊 has CCCII encoding 21552D, while its Z-variant 荘 has CCCII encoding 2D552D. Therefore, these two variants were given distinct Unicode code points, so that converting a CCCII document to Unicode and back would be a lossless operation. Confusion There is some confusion over the exact definition of "Z-variant." For example, in an Internet Draft (of ) dated 2002, one finds "no" (U+4E0D
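A small Python sketch, offered only as an illustration, prints the Z-variant pair quoted above; note that Unicode's generated character names do not themselves record the Z-variant relationship, which is tracked separately (for example in Unihan variant data).

import unicodedata

# The Z-variant pair discussed above: same meaning, minor typographic difference,
# distinct code points.
pair = ("\u8AAA", "\u8AAC")
for ch in pair:
    print(f"U+{ord(ch):04X}  {ch}  {unicodedata.name(ch, '(no name)')}")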
https://en.wikipedia.org/wiki/Joe%20Stoy
Joseph E. Stoy is a British computer scientist. He initially studied physics at Oxford University. Early in his career, in the 1970s, he worked on denotational semantics with Christopher Strachey in the Programming Research Group at the Oxford University Computing Laboratory (now the Oxford University Department of Computer Science). He was a Fellow of Balliol College, Oxford. He has also spent time at MIT in the United States. In 2003, he co-founded Bluespec, Inc. His book Denotational Semantics: The Scott-Strachey Approach to Programming Language Semantics (MIT Press, 1977) is now a classic text. Stoy married Gabrielle Stoy, a mathematician and Fellow of Lady Margaret Hall, Oxford. References External links Program Verification and Semantics: The Early Work Strachey and the Oxford Programming Research Group: a talk by Joe Stoy on Christopher Strachey and the Oxford Programming Research Group. Year of birth missing (living people) Living people Alumni of the University of Oxford English computer scientists Members of the Department of Computer Science, University of Oxford Fellows of Balliol College, Oxford Massachusetts Institute of Technology faculty Formal methods people Programming language researchers Computer science writers British expatriates in the United States
https://en.wikipedia.org/wiki/KSMO-TV
KSMO-TV (channel 62) is a television station in Kansas City, Missouri, United States, affiliated with MyNetworkTV. It is owned by Gray Television alongside CBS affiliate KCTV (channel 5). Both stations share studios on Shawnee Mission Parkway in Fairway, Kansas, while KSMO-TV's transmitter is located in Independence, Missouri. Channel 62 in Kansas City began broadcasting as KEKR-TV in 1983, changing its call letters to KZKC in 1985. Originally owned by Media Central of Chattanooga, Tennessee, it suffered for most of its first decade on air from a management style more suited to stations in smaller markets, inferior programming, and a poor reputation. In 1988, the station was fined for airing an indecent film in prime time, attracting national attention. Financial issues also strapped KZKC, particularly after Media Central entered bankruptcy reorganization in 1987. KZKC was sold out of bankruptcy to First American National Bank of Nashville, Tennessee, in early 1990; the bank quickly sold the station to ABRY Communications. ABRY instituted a top-to-bottom overhaul of programming and facilities, changing the call letters to KSMO-TV in April 1991. The relaunched channel 62 cemented itself as the primary sports and children's station in Kansas City; from 1990 to 1995, viewership tripled and advertising revenue quadrupled. ABRY affiliated the station with UPN upon its January 1995 debut. The station also was the broadcast home of Kansas City Royals baseball for four years, further increasing its visibility. Sinclair Broadcast Group exercised an option to buy KSMO-TV in December 1995. The station dropped UPN in January 1998 after a corporate dispute between Sinclair and the network; two months later, the station became the new Kansas City affiliate of The WB. With the company focusing on duopolies elsewhere and unable to buy a second station in Kansas City, Sinclair sold KSMO-TV to the Meredith Corporation, then-owner of KCTV, in 2005 after Meredith assumed operating c
https://en.wikipedia.org/wiki/KMCI-TV
KMCI-TV (channel 38) is an independent television station licensed to Lawrence, Kansas, United States, serving the Kansas City metropolitan area. It is owned by the E. W. Scripps Company alongside NBC affiliate KSHB-TV (channel 41). Both stations share studios on Oak Street in Kansas City, Missouri, while KMCI-TV's transmitter is located at the Blue River Greenway in the city's Hillcrest section. Despite Lawrence being KMCI-TV's city of license, the station maintains no physical presence there. History The station first signed on the air on February 1, 1988. Founded by Miller Broadcasting, it originally served as an affiliate of the Home Shopping Network (HSN). In March 1996, KSHB owner Scripps Howard Broadcasting reached a deal to manage KMCI under a local marketing agreement. That August, KMCI then dropped much of its home shopping programming and rebranded as "38 Family Greats", with a family-oriented general entertainment format from 6:00 a.m. to midnight, with HSN programming being relegated to the overnight hours. The new KMCI lineup included an inventory of programs that KSHB owned but had not had time to air after it switched to NBC in 1994. Exercising an option from the 1996 pact with Miller, Scripps bought KMCI outright for $14.6 million in 2000, forming a legal duopoly with KSHB. In 2002, KMCI dropped the "Family Greats" branding and simply branded by its channel number. In July 2003, coinciding with the move of its transmitter site from Lawrence toward Kansas City, the station officially became known as "38 the Spot". Programming Syndicated programs broadcast on KMCI as of September 2020 include Mike & Molly, Last Man Standing, Family Guy, Divorce Court and 2 Broke Girls, among others. KMCI features hosts that promote the station's programming, as well as local events during commercial breaks. Taunia Hottman was the first spokesperson for KMCI as "38 the Spot". Meredith Hoenes (who became a traffic reporter for KSHB-TV around this time) replaced Hott
https://en.wikipedia.org/wiki/Web%20Standards%20Project
The Web Standards Project (WaSP) was a group of professional web developers dedicated to disseminating and encouraging the use of the web standards recommended by the World Wide Web Consortium, along with other groups and standards bodies. Founded in 1998, The Web Standards Project campaigned for standards that reduced the cost and complexity of development while increasing the accessibility and long-term viability of any document published on the Web. WaSP worked with browser companies, authoring tool makers, and peers to encourage them to use these standards, since they "are carefully designed to deliver the greatest benefits to the greatest number of web users". The group disbanded in 2013. Organization The Web Standards Project began as a grassroots coalition "fighting for standards in our [web] browsers" founded by George Olsen, Glenn Davis, and Jeffrey Zeldman in August 1998. By 2001, the group had achieved its primary goal of persuading Microsoft, Netscape, Opera, and other browser makers to accurately and completely support HTML 4.01/XHTML 1.0, CSS1, and ECMAScript. Had browser makers not been persuaded to do so, the Web would likely have fractured into pockets of incompatible content, with various websites available only to people who possessed the right browser. In addition to streamlining web development and significantly lowering its cost, support for common web standards enabled the development of the semantic web. By marking up content in semantic (X)HTML, front-end developers make a site's content more available to search engines, more accessible to people with disabilities, and more available to the world beyond the desktop (e.g. mobile). The project relaunched in June 2002 with new members, a redesigned website, new site features, and a redefined mission focused on developer education and standards compliance in authoring tools as well as browsers. Project leaders were: George Olsen (1998–1999) Jeffrey Zeldman (1999–2002) Steven Champeon (20
https://en.wikipedia.org/wiki/Soundstream
Soundstream Inc. was the first United States audiophile digital audio recording company, providing commercial services for recording and computer-based editing. Company Soundstream was founded in 1975 in Salt Lake City, Utah by Dr. Thomas G. Stockham, Jr. The company provided worldwide on-location recording services to Telarc, Delos, RCA, Philips, Vanguard, Varèse Sarabande, Angel, Warner Brothers, CBS, Decca, Chalfont, and other labels. They manufactured a total of 18 digital recorders, of which seven were sold and the rest leased out. Although most recordings were of classical music, the range included country, rock, jazz, pop, and avant-garde. The first US live digital recording was made in 1976 by Soundstream's prototype 37 kHz, 16-bit, two channel recorder. New World Records recorded the Santa Fe Opera's performance of Virgil Thomson's The Mother of Us All, and provided Soundstream with a stereo feed from their multitrack console. Soundstream demonstrated this recording at the Fall 1976 AES Convention; however the resulting record was pressed not from the digital master but from the analog tape that New World recorded themselves concurrently. Critiques of the recording, most notably from Telarc's Jack Renner and Robert Woods, led directly to the improved four-channel, 50 kHz sample rate recorder that was used for all of Soundstream's future commercial releases. Also in 1976, Soundstream restored acoustic (pre-electronic) recordings of Enrico Caruso, by digitizing the recordings on a computer, and processing them using a technique called "blind deconvolution". These were released by RCA Records as "Caruso – A Legendary Performer". In subsequent years Soundstream restored most of the RCA Caruso catalog, as well as some RCA recordings by Irish tenor John McCormack. Soundstream's first commercially released recording, Diahann Carroll With the Duke Ellington Orchestra Under The Direction Of Mercer Ellington – A Tribute To Ethel Waters (on the Orinda label) a
https://en.wikipedia.org/wiki/Flicker-free
Flicker-free is a term given to video displays, primarily cathode ray tubes, operating at a high refresh rate to reduce or eliminate the perception of screen flicker. For televisions, this involves operating at a 100 Hz or 120 Hz field rate to eliminate flicker, compared to standard televisions that operate at 50 Hz (PAL, SÉCAM systems) or 60 Hz (NTSC), most simply done by displaying each field twice, rather than once. For computer displays, this is usually a refresh rate of 70–90 Hz, sometimes 100 Hz or higher. This should not be confused with motion interpolation, though they may be combined – see implementation, below. Televisions operating at these frequencies are often labelled as being 100 or 120 Hz without using the words flicker-free in the description. Prevalence The term is primarily used for CRTs, especially televisions in 50 Hz countries (PAL or SECAM) and computer monitors from the 1990s and early 2000s – the 50 Hz rate of PAL/SECAM video (compared with 60 Hz NTSC video) and the relatively large computer monitors close to the viewer's peripheral vision make flicker most noticeable on these devices. Contrary to popular belief, modern LCD monitors are not flicker free, since most of them use pulse-width modulation (PWM) for brightness control. As the brightness setting is lowered, the flicker becomes more noticeable, since the period when the backlight is active in each PWM duty cycle shortens. The problem is much more pronounced on modern LED-backlit monitors, because LED backlights react faster to changes in current. Implementation The goal is to display images sufficiently frequently to exceed the human flicker fusion threshold, and hence create the impression of a constant (non-flickering) source. In computer displays this consists of changing the frame rate of the produced signal in the video card (and in sync with this, the displayed image on the display). This is limited by the clock speed of the video adapter and frame rate required
https://en.wikipedia.org/wiki/Current%E2%80%93voltage%20characteristic
A current–voltage characteristic or I–V curve (current–voltage curve) is a relationship, typically represented as a chart or graph, between the electric current through a circuit, device, or material, and the corresponding voltage, or potential difference, across it. In electronics In electronics, the relationship between the direct current (DC) through an electronic device and the DC voltage across its terminals is called a current–voltage characteristic of the device. Electronic engineers use these charts to determine basic parameters of a device and to model its behavior in an electrical circuit. These characteristics are also known as I–V curves, referring to the standard symbols for current and voltage. In electronic components with more than two terminals, such as vacuum tubes and transistors, the current–voltage relationship at one pair of terminals may depend on the current or voltage on a third terminal. This is usually displayed on a more complex current–voltage graph with multiple curves, each one representing the current–voltage relationship at a different value of current or voltage on the third terminal. For example the diagram at right shows a family of I–V curves for a MOSFET as a function of drain voltage with overvoltage (VGS − Vth) as a parameter. The simplest I–V curve is that of a resistor, which according to Ohm's law exhibits a linear relationship between the applied voltage and the resulting electric current; the current is proportional to the voltage, so the I–V curve is a straight line through the origin with positive slope. The reciprocal of the slope is equal to the resistance. The I–V curve of an electrical component can be measured with an instrument called a curve tracer. The transconductance and Early voltage of a transistor are examples of parameters traditionally measured from the device's I–V curve. Types of I–V curves The shape of an electrical component's characteristic curve reveals much about its operating properti
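Two of the curve shapes discussed here can be generated numerically. The Python sketch below sweeps a voltage range, as a curve tracer would, and evaluates a linear resistor (Ohm's law) alongside an ideal diode via the Shockley equation; the resistance, saturation current, ideality factor, and thermal voltage chosen are illustrative assumptions rather than values from the article.

import numpy as np

v = np.linspace(-0.2, 0.8, 200)              # voltage sweep, as a curve tracer would apply

# Resistor: Ohm's law gives a straight line through the origin with slope 1/R.
R = 100.0                                    # assumed resistance in ohms
i_resistor = v / R

# Diode: Shockley equation, I = Is * (exp(V / (n*Vt)) - 1); parameters are assumptions.
i_s, n, v_t = 1e-12, 1.0, 0.02585            # saturation current (A), ideality factor, thermal voltage (V)
i_diode = i_s * (np.exp(v / (n * v_t)) - 1.0)

for volts, i_r, i_d in zip(v[::50], i_resistor[::50], i_diode[::50]):
    print(f"V = {volts:+.2f} V   resistor I = {i_r:+.4e} A   diode I = {i_d:+.4e} A")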
https://en.wikipedia.org/wiki/Black%20level
Video black level is defined as the level of brightness at the darkest (black) part of a visual image or the level of brightness at which no light is emitted from a screen, resulting in a pure black screen. Video displays generally need to be calibrated so that the displayed black is true to the black information in the video signal. If the black level is not correctly adjusted, visual information in a video signal could be displayed as black, or black information could be displayed as above black information (gray). The voltage of the black level varies across different television standards. PAL sets the black level the same as the blanking level, while NTSC sets the black level approximately 54 mV above the blanking level. User misadjustment of black level on monitors is common. It results in darker colors having their hue changed, it affects contrast, and in many cases causes some of the image detail to be lost. Black level is set by displaying a testcard image and adjusting display controls. With CRT displays: "brightness" adjusts black level "contrast" adjusts white level CRTs tend to have some interdependence of controls, so a control sometimes needs adjustment more than once. In digital video black level usually means the range of RGB values in video signal, which can be either [0..255] (or "normal"; typical of a computer output) or [16..235] (or "low"; standard for video). See also Picture line-up generation equipment (PLUGE) Display technology Television technology
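For the two digital ranges mentioned above, a common 8-bit convention maps full-range ("normal", 0–255) codes onto the limited ("low", 16–235) range with a 219/255 scaling, so that full-range black 0 lands on the video black level 16 and white 255 lands on 235. The Python sketch below shows that mapping; it is a simplified illustration (luma only, no chroma handling) rather than a complete video-range conversion.

def full_to_limited(value: int) -> int:
    """Map an 8-bit full-range (0..255) luma code to limited video range (16..235)."""
    return round(16 + value * 219 / 255)

def limited_to_full(value: int) -> int:
    """Inverse mapping, clipped so below-black and above-white codes stay in 0..255."""
    return min(255, max(0, round((value - 16) * 255 / 219)))

print(full_to_limited(0), full_to_limited(255))    # 16 235
print(limited_to_full(16), limited_to_full(235))   # 0 255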
https://en.wikipedia.org/wiki/Chaos%20game
In mathematics, the term chaos game originally referred to a method of creating a fractal, using a polygon and an initial point selected at random inside it. The fractal is created by iteratively creating a sequence of points, starting with the initial random point, in which each point in the sequence is a given fraction of the distance between the previous point and one of the vertices of the polygon; the vertex is chosen at random in each iteration. Repeating this iterative process a large number of times, selecting the vertex at random on each iteration, and throwing out the first few points in the sequence, will often (but not always) produce a fractal shape. Using a regular triangle and the factor 1/2 will result in the Sierpinski triangle, while creating the proper arrangement with four points and a factor 1/2 will create a display of a "Sierpinski Tetrahedron", the three-dimensional analogue of the Sierpinski triangle. As the number of points is increased to a number N, the arrangement forms a corresponding (N-1)-dimensional Sierpinski Simplex. The term has been generalized to refer to a method of generating the attractor, or the fixed point, of any iterated function system (IFS). Starting with any point x0, successive iterations are formed as xk+1 = fr(xk), where fr is a member of the given IFS randomly selected for each iteration. The iterations converge to the fixed point of the IFS. Whenever x0 belongs to the attractor of the IFS, all iterations xk stay inside the attractor and, with probability 1, form a dense set in the latter. The "chaos game" method plots points in random order all over the attractor. This is in contrast to other methods of drawing fractals, which test each pixel on the screen to see whether it belongs to the fractal. The general shape of a fractal can be plotted quickly with the "chaos game" method, but it may be difficult to plot some areas of the fractal in detail. With the aid of the "chaos game" a new fractal can be made and w
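A direct Python sketch of the chaos game as described above: start from a random point, repeatedly move a fixed fraction of the way toward a randomly chosen vertex, and discard the first few iterates. The vertex coordinates, point count, and discard count below are arbitrary illustrative choices.

import random

def chaos_game(vertices, fraction=0.5, n_points=50_000, discard=20):
    """Run the chaos game: jump the given fraction of the distance from the current
    point toward a randomly chosen vertex, discarding the first few points."""
    x, y = random.random(), random.random()     # random starting point
    points = []
    for k in range(n_points + discard):
        vx, vy = random.choice(vertices)
        x, y = x + fraction * (vx - x), y + fraction * (vy - y)
        if k >= discard:
            points.append((x, y))
    return points

# A triangle with factor 1/2 gives the Sierpinski triangle.
triangle = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
pts = chaos_game(triangle)
print(len(pts), pts[:3])

Plotting the returned points for this triangle with fraction 1/2 reproduces the Sierpinski triangle; other vertex sets and fractions give other attractors, as the article notes.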
https://en.wikipedia.org/wiki/48-bit%20computing
In computer architecture, 48-bit integers can represent 281,474,976,710,656 (2^48, or 2.814749767×10^14) discrete values. This allows an unsigned binary integer range of 0 through 281,474,976,710,655 (2^48 − 1) or a signed two's complement range of −140,737,488,355,328 (−2^47) through 140,737,488,355,327 (2^47 − 1). A 48-bit memory address can directly address every byte of 256 terabytes of storage. 48-bit can also refer to any other data unit that consumes 48 bits (6 octets) in width. Examples include 48-bit CPU and ALU architectures that are based on registers, address buses, or data buses of that size.

Word size
Computers with 48-bit words include the AN/FSQ-32, CDC 1604/upper-3000 series, BESM-6, Ferranti Atlas, Philco TRANSAC S-2000 and Burroughs large systems. The Honeywell DATAmatic 1000, H-800, the MANIAC II, the MANIAC III, the Brookhaven National Laboratory Merlin, the Philco CXPQ, the Ferranti Orion, the Telefunken Rechner TR 440, the ICT 1301, and many other early transistor-based and vacuum-tube computers used 48-bit words.

Addressing
The IBM System/38, and the IBM AS/400 in its CISC variants, use 48-bit addresses. The address size used in logical block addressing was increased to 48 bits with the introduction of ATA-6. The ext4 file system physically limits the file block count to 48 bits. The minimal implementation of the x86-64 architecture provides 48-bit addressing encoded into 64 bits; future versions of the architecture can expand this without breaking properly written applications. The media access control address (MAC address) of a network interface controller uses a 48-bit address space.

Images
In digital images, 48 bits per pixel, or 16 bits per color channel (red, green and blue), is used for accurate processing. For the human eye, it is almost impossible to see any difference between such an image and a 24-bit image, but the existence of more shades of each of the three primary colors (65,536 as opposed to 256) means that more operations
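To make the ranges above concrete, here is a small Python sketch (the helper names are made up for illustration) that reproduces the unsigned and signed 48-bit limits and formats a 48-bit value as a MAC address:

```python
BITS = 48
MASK = (1 << BITS) - 1                       # 0xFFFFFFFFFFFF = 281,474,976,710,655

print(f"unsigned max: {MASK:,}")
print(f"signed range: {-(1 << (BITS - 1)):,} .. {(1 << (BITS - 1)) - 1:,}")

def to_signed48(value):
    """Interpret a 48-bit pattern as a two's-complement signed integer."""
    value &= MASK
    return value - (1 << BITS) if value & (1 << (BITS - 1)) else value

def format_mac(value):
    """Render a 48-bit integer as a colon-separated MAC address."""
    return ":".join(f"{(value >> shift) & 0xFF:02x}" for shift in range(40, -8, -8))

print(to_signed48(MASK))            # -1: all 48 bits set
print(format_mac(0x0123456789AB))   # 01:23:45:67:89:ab
```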
https://en.wikipedia.org/wiki/Integrable%20system
In mathematics, integrability is a property of certain dynamical systems. While there are several distinct formal definitions, informally speaking, an integrable system is a dynamical system with sufficiently many conserved quantities, or first integrals, that its motion is confined to a submanifold of much smaller dimensionality than that of its phase space. Three features are often referred to as characterizing integrable systems:
the existence of a maximal set of conserved quantities (the usual defining property of complete integrability)
the existence of algebraic invariants, having a basis in algebraic geometry (a property sometimes known as algebraic integrability)
the determination of solutions in an explicit functional form (not an intrinsic property, but something often referred to as solvability)
Integrable systems may be seen as very different in qualitative character from more generic dynamical systems, which are more typically chaotic systems. The latter generally have no conserved quantities and are asymptotically intractable, since an arbitrarily small perturbation in initial conditions may lead to arbitrarily large deviations in their trajectories over a sufficiently large time.

Many systems studied in physics are completely integrable, in particular in the Hamiltonian sense, the key example being multi-dimensional harmonic oscillators. Another standard example is planetary motion about either one fixed center (e.g., the sun) or two. Other elementary examples include the motion of a rigid body about its center of mass (the Euler top) and the motion of an axially symmetric rigid body about a point in its axis of symmetry (the Lagrange top).

In the late 1960s, it was realized that there are completely integrable systems in physics having an infinite number of degrees of freedom, such as some models of shallow water waves (Korteweg–de Vries equation), the Kerr effect in optical fibres, described by the nonlinear Schrödinger equati
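As a concrete illustration of a "maximal set of conserved quantities" (standard textbook material rather than something stated in the passage above), a Hamiltonian system with n degrees of freedom is completely integrable in the Liouville sense when it admits n functionally independent conserved quantities in involution; the multi-dimensional harmonic oscillator mentioned above satisfies this with the energies of the individual modes:

```latex
% Liouville (complete) integrability: n independent first integrals in involution
\{F_i, F_j\} = 0, \qquad i, j = 1, \dots, n, \qquad F_1 = H .

% Example: the n-dimensional harmonic oscillator
H = \sum_{i=1}^{n} \frac{p_i^2 + \omega_i^2 q_i^2}{2},
\qquad
F_i = \frac{p_i^2 + \omega_i^2 q_i^2}{2},
\qquad \{F_i, F_j\} = 0 .
```

Each F_i depends only on its own pair (q_i, p_i), so all Poisson brackets vanish and the motion is confined to an n-dimensional torus, a much smaller submanifold than the 2n-dimensional phase space.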
https://en.wikipedia.org/wiki/Polar%20mount
A polar mount is a movable mount for satellite dishes that allows the dish to be pointed at many geostationary satellites by slewing around one axis. It works by having its slewing axis parallel, or almost parallel, to the Earth's polar axis so that the attached dish can follow, approximately, the geostationary orbit, which lies in the plane of the Earth's equator.

Description
Polar mounts are popular with home television receive-only (TVRO) satellite systems, where they can be used to access the TV signals of many different geostationary satellites. They are also used in other types of installations such as TV, cable, and telecommunication Earth stations, although those applications usually use more sophisticated altazimuth or fixed-angle dedicated mounts. Polar mounts can use a simplified single-axis design because geostationary satellites are fixed in the sky relative to the observing dish, and their equatorial orbits put them all along a common arc that can be accessed by swinging the satellite dish along a single arc approximately 90 degrees from the mount's polar axis. This also allows them to use a single positioner to move the antenna, in the form of a "jackscrew" or horizon-to-horizon gear drive. Polar mounts work in a similar way to astronomical equatorial mounts in that they point at objects at fixed hour angles along the astronomical right ascension axis. Like equatorial mounts, polar mounts require polar alignment. They differ from equatorial mounts in that the objects (satellites) they point at are fixed in position and usually require no tracking, just accurate fixed aiming.

Adjustments
When observed from the equator, geostationary satellites follow exactly the imaginary line of the Earth's equatorial plane on the celestial sphere (i.e. they follow the celestial equator). But when observed from other latitudes, the fact that geostationary satellites are at a fixed altitude of 35,786 km (22,236 mi) above the Earth's equator, and vary in distance from
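One quantity that comes up when adjusting such a mount is the declination offset between the celestial equator and the geostationary arc as seen from a given latitude. The sketch below is a simplified geometric estimate with assumed round-number constants, not a full alignment procedure, and the function name is made up for illustration:

```python
import math

EARTH_RADIUS_KM = 6378.0       # Earth's equatorial radius
GEO_ORBIT_RADIUS_KM = 42164.0  # geostationary orbit radius measured from Earth's centre

def declination_offset_deg(latitude_deg):
    """Approximate angle below the celestial equator of a geostationary satellite
    located due south (northern hemisphere) of a dish at the given latitude."""
    lat = math.radians(latitude_deg)
    return math.degrees(math.atan2(EARTH_RADIUS_KM * math.sin(lat),
                                   GEO_ORBIT_RADIUS_KM - EARTH_RADIUS_KM * math.cos(lat)))

print(round(declination_offset_deg(45.0), 2))  # roughly 6.8 degrees at 45 degrees latitude
```

The offset is zero on the equator, consistent with the observation above that from there the satellites sit exactly on the celestial equator, and it grows toward higher latitudes.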
https://en.wikipedia.org/wiki/Acoustic%20emission
Acoustic emission (AE) is the phenomenon of radiation of acoustic (elastic) waves in solids that occurs when a material undergoes irreversible changes in its internal structure, for example as a result of crack formation or plastic deformation due to aging, temperature gradients, or external mechanical forces. In particular, AE occurs during the processes of mechanical loading of materials and structures accompanied by structural changes that generate local sources of elastic waves. This results in small surface displacements of a material produced by elastic or stress waves generated when the accumulated elastic energy in a material, or on its surface, is released rapidly.

The mechanism that emits the primary elastic AE pulse (an AE act or event) can have different physical origins. The figure shows the mechanism of an AE act (event) during the nucleation of a microcrack due to the breakthrough of a dislocation pile-up (a dislocation is a linear defect in the crystal lattice of a material) across the boundary in metals with a body-centered cubic (bcc) lattice under mechanical loading, as well as time diagrams of the stream of AE acts (events) (1) and the stream of recorded AE signals (2).

The AE method makes it possible to study the kinetics of processes at the earliest stages of microdeformation, dislocation nucleation and the accumulation of microcracks. Roughly speaking, each crack seems to "scream" about its growth, which makes it possible to detect the moment of crack initiation from the accompanying AE. In addition, for each crack that has already arisen there is a certain critical size, which depends on the properties of the material. Up to this size, the crack grows very slowly (sometimes for decades) through a huge number of small discrete jumps, each accompanied by AE radiation. After the crack reaches the critical size, catastrophic destruction occurs, because its further growth proceeds at a speed close to half the speed of sound in the material of the struct
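For context on what determines such a critical size (standard Griffith fracture-mechanics background, not something stated in the excerpt above, and valid only for an ideally brittle material in plane stress), the critical stress for an internal crack of half-length a, and equivalently the critical half-length for a given applied stress, can be written as:

```latex
% Griffith criterion (ideally brittle material, plane stress)
\sigma_c = \sqrt{\frac{2 E \gamma_s}{\pi a}}
\qquad\Longleftrightarrow\qquad
a_c = \frac{2 E \gamma_s}{\pi \sigma^2}
```

Here E is Young's modulus and \gamma_s the specific surface energy; cracks with a below a_c grow only through the slow, discrete jumps described above, while cracks exceeding a_c become unstable.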
https://en.wikipedia.org/wiki/Kahn%20process%20networks
A Kahn process network (KPN, or process network) is a distributed model of computation in which a group of deterministic sequential processes communicate through unbounded first in, first out channels. The model requires that reading from a channel is blocking while writing is non-blocking. Due to these key restrictions, the resulting process network exhibits deterministic behavior that does not depend on the timing of computation nor on communication delays.

Kahn process networks were originally developed for modeling parallel programs, but have proven convenient for modeling embedded systems, high-performance computing systems, signal processing systems, stream processing systems, dataflow programming languages, and other computational tasks. KPNs were introduced by Gilles Kahn in 1974.

Execution model
KPN is a common model for describing signal processing systems in which infinite streams of data are incrementally transformed by processes executing in sequence or in parallel. Although the processes are conceptually parallel, multitasking or parallelism is not required to execute this model. In a KPN, processes communicate via unbounded FIFO channels. Processes read and write atomic data elements, alternatively called tokens, from and to channels. Writing to a channel is non-blocking, i.e. it always succeeds and does not stall the process, while reading from a channel is blocking, i.e. a process that reads from an empty channel will stall and can only continue when the channel contains sufficient data items (tokens). Processes are not allowed to test an input channel for the existence of tokens without consuming them. A FIFO cannot be consumed by multiple processes, nor can multiple processes write to a single FIFO. Given a specific input (token) history for a process, the process must be deterministic, so that it always produces the same outputs (tokens). Timing or execution order of processes must not affect the result, and therefore testing input channels for tokens is forbidden.
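A minimal sketch of these rules in Python; the three-stage pipeline, the fixed token count, and the use of threads with unbounded queues are illustrative implementation choices, not part of Kahn's formalism.

```python
import threading
import queue

def producer(out_ch):
    # Writes are non-blocking: the FIFO channel is unbounded.
    for i in range(5):
        out_ch.put(i)

def doubler(in_ch, out_ch):
    # Reads are blocking: get() stalls until a token is available.
    for _ in range(5):
        out_ch.put(2 * in_ch.get())

def consumer(in_ch, results):
    for _ in range(5):
        results.append(in_ch.get())

a, b = queue.Queue(), queue.Queue()   # unbounded FIFO channels
results = []
threads = [threading.Thread(target=producer, args=(a,)),
           threading.Thread(target=doubler, args=(a, b)),
           threading.Thread(target=consumer, args=(b, results))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # always [0, 2, 4, 6, 8], regardless of thread scheduling
```

Because no process ever tests a channel for tokens and every read blocks until data arrives, the output is the same no matter how the operating system interleaves the threads, which is the determinism property described above.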
https://en.wikipedia.org/wiki/SPARQL
SPARQL (pronounced "sparkle", a recursive acronym for SPARQL Protocol and RDF Query Language) is an RDF query language—that is, a semantic query language for databases—able to retrieve and manipulate data stored in Resource Description Framework (RDF) format. It was made a standard by the RDF Data Access Working Group (DAWG) of the World Wide Web Consortium, and is recognized as one of the key technologies of the semantic web. On 15 January 2008, SPARQL 1.0 was acknowledged by W3C as an official recommendation, and SPARQL 1.1 in March, 2013. SPARQL allows for a query to consist of triple patterns, conjunctions, disjunctions, and optional patterns. Implementations for multiple programming languages exist. There exist tools that allow one to connect and semi-automatically construct a SPARQL query for a SPARQL endpoint, for example ViziQuer. In addition, tools exist to translate SPARQL queries to other query languages, for example to SQL and to XQuery. Advantages SPARQL allows users to write queries against what can loosely be called "key-value" data or, more specifically, data that follow the RDF specification of the W3C. Thus, the entire database is a set of "subject-predicate-object" triples. This is analogous to some NoSQL databases' usage of the term "document-key-value", such as MongoDB. In SQL relational database terms, RDF data can also be considered a table with three columns – the subject column, the predicate column, and the object column. The subject in RDF is analogous to an entity in a SQL database, where the data elements (or fields) for a given business object are placed in multiple columns, sometimes spread across more than one table, and identified by a unique key. In RDF, those fields are instead represented as separate predicate/object rows sharing the same subject, often the same unique key, with the predicate being analogous to the column name and the object the actual data. Unlike relational databases, the object column is heterogeneous: the
https://en.wikipedia.org/wiki/FAPSI
FAPSI () or Federal Agency of Government Communications and Information (FAGCI) () was a Russian government agency responsible for signals intelligence and the security of governmental communications. The present-day FAPSI successor agencies are the relevant departments of the Federal Security Service (FSB) and the Foreign Intelligence Service (SVR), as well as the Special Communications Service of Russia (Spetssvyaz), part of the Federal Protective Service of the Russian Federation (FSO RF).

History

Creation
FAPSI was created from the 8th Main Directorate (Government Communications) and the 16th Directorate (Electronic Intelligence) of the KGB. It was the equivalent of the US National Security Agency. On September 25, 1991, Soviet president Mikhail Gorbachev broke the KGB up into several independent departments. One of them became the Committee on Government Communications under the President of the Soviet Union. On December 24, 1991, after the disbanding of the Soviet Union, the organization became the Federal Agency of Government Communications and Information under the President of the Russian Federation.

Dissolution
On March 11, 2003, the agency was reorganized into the Service of Special Communications and Information (Spetssvyaz, Spetssviaz) () of the Federal Security Service of the Russian Federation (FSB RF). On August 7, 2004, Spetssviaz was incorporated as a structural subunit of the Federal Protective Service of the Russian Federation (FSO RF).

Structure
According to the press, the structure of FAPSI copied that of the US National Security Agency. It included:
Chief R&D Directorate (Главное научно-техническое управление)
Chief directorate of government communications
Chief directorate of security of communications
Chief directorate of information technology (Главное управление информационных систем)
Special troops of FAPSI
Academy of Cryptography
Military School of FAPSI in Voronezh, sometimes referred to as the world's largest school for hackers
https://en.wikipedia.org/wiki/List%20of%20software%20development%20philosophies
This is a list of approaches, styles, methodologies, and philosophies in software development and engineering. It also contains programming paradigms, software development methodologies, software development processes, and single practices, principles and laws. Some of the methods listed are more relevant to one field than another, such as automotive or aerospace development. The trend towards agile methods in software engineering is noticeable; however, the need for improved studies on the subject is also paramount. Note also that some of the listed methods may be newer or older, still in use or outdated, and that research on software design methods is long-standing and ongoing.

Software development methodologies, guidelines, strategies

Large-scale programming styles
Behavior-driven development
Design-driven development
Domain-driven design
Secure by design
Test-driven development
Acceptance test-driven development
Continuous test-driven development
Specification by example
Data-driven development
Data-oriented design

Specification-related paradigms
Iterative and incremental development
Waterfall model
Formal methods

Comprehensive systems
Agile software development
Lean software development
Lightweight methodology
Adaptive software development
Extreme programming
Feature-driven development
ICONIX
Kanban (development)
Unified Process
Rational Unified Process
OpenUP
Agile Unified Process

Rules of thumb, laws, guidelines and principles
300 Rules of Thumb and Nuggets of Wisdom (excerpt from Managing the Unmanageable – Rules, Tools, and Insights for Managing Software People and Teams by Mickey W. Mantle, Ron Lichty)
ACID
Big ball of mud
Brooks's law
C++ Core Guidelines (Stroustrup/Sutter) P1 – P13 Philosophy rules
CAP theorem
Code reuse
Command–query separation (CQS)
Conway's law
Cowboy coding
Do what I mean (DWIM)
Don't repeat yourself (DRY)
Egoless programming
Fail-fast
Gall's law
General Responsibility Assignment Software Patterns (GRASP)
If
https://en.wikipedia.org/wiki/List%20of%20MUD%20clients
A MUD client is a game client, a computer application used to connect to a MUD, a type of multiplayer online game. Generally, a MUD client is a very basic telnet client that lacks VT100 terminal emulation and the capability to perform telnet negotiations. On the other hand, MUD clients are enhanced with various features designed to make the MUD telnet interface more accessible to users and to enhance the gameplay of MUDs, with features such as syntax highlighting, keyboard macros, and connection assistance. Standard features seen in most MUD clients include ANSI color support, aliases, triggers and scripting. A client can often be extended almost indefinitely with its built-in scripting language. Most MUDs restrict the usage of scripts because they give an unfair advantage, and out of fear that the game will end up being played by fully automated clients instead of human beings. Prominent clients include TinyTalk, TinyFugue, TinTin++, and zMUD.

History
The first MUD client with a notable number of features was TinyTalk by Anton Rang, released in January 1990 for Unix-like systems. In May 1990, TinyWar 1.1.4 was released by Leo Plotkin; it was based on TinyTalk 1.0 and added support for event-driven programming. In September 1990, TinyFugue, based on TinyWar 1.2.3 and TT 1.1, was released by Greg Hudson and featured more advanced trigger support. Development of TinyFugue was taken over by Ken Keys in 1991. TinyFugue has continued to evolve and remains a popular client for Unix-like systems today. TinyFugue, or tf, was written primarily for Unix-like operating systems. It is one of the earliest MUD clients in existence and is primarily geared toward TinyMUD variants. TinyFugue is extensible through its own macro language, which also ties into its extensive trigger system. The trigger system allows the implementation of automatically run commands, as sketched below. Another early client was TINTIN by Peter Unold, released in April 1992. In October 1992 Peter Unold made his final release, TIN
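To illustrate the trigger mechanism mentioned in the history above, here is a minimal Python sketch; the regular expressions, response commands, and simulated server lines are made-up examples rather than the behavior of any particular client such as TinyFugue.

```python
import re

# Each trigger pairs a regex over incoming server text with a command template to send back.
TRIGGERS = [
    (re.compile(r"^(\w+) tells you, 'hello'"), "tell {0} Hi there!"),
    (re.compile(r"^You are hungry\."),         "eat bread"),
]

def react(line):
    """Return the commands fired by a single line of MUD output."""
    commands = []
    for pattern, template in TRIGGERS:
        match = pattern.search(line)
        if match:
            commands.append(template.format(*match.groups()))
    return commands

# Simulated server output instead of a live telnet session
for line in ["Rhuarc tells you, 'hello'", "You are hungry.", "The sun rises."]:
    for cmd in react(line):
        print(f"> {cmd}")
```

Aliases work the same way in the outgoing direction, expanding a short command typed by the player into one or more full commands before they are sent to the server.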
https://en.wikipedia.org/wiki/Firestarter%20%28firewall%29
Firestarter is a personal firewall tool that uses the Netfilter (iptables/ipchains) system built into the Linux kernel. It can control both inbound and outbound connections. Firestarter provides a graphical interface for configuring firewall rules and settings, and offers real-time monitoring of all network traffic on the system. Firestarter also provides facilities for port forwarding, internet connection sharing and DHCP service. Firestarter is free and open-source software and uses GUI widgets from GTK+. Note that it uses GTK2 and has not been upgraded to use GTK3, so the last Linux distributions it will run on are Ubuntu 18, Debian 9, etc.

See also
Uncomplicated Firewall
iptables
netfilter

References

External links
Firestarter manual on SourceForge
How to Set Up a Firewall in Linux

Firewall software
Software that uses GTK
Linux security software
Discontinued software