https://en.wikipedia.org/wiki/Municipal%20or%20urban%20engineering
Municipal or urban engineering applies the tools of science, art and engineering in an urban environment. Municipal engineering is concerned with municipal infrastructure. This involves specifying, designing, constructing, and maintaining streets, sidewalks, water supply networks, sewers, street lighting, municipal solid waste management and disposal, storage depots for various bulk materials used for maintenance and public works (salt, sand, etc.), public parks and cycling infrastructure. In the case of underground utility networks, it may also include the civil portion (conduits and access chambers) of the local distribution networks of electrical and telecommunications services. It can also include the optimizing of garbage collection and bus service networks. Some of these disciplines overlap with other civil engineering specialties, however municipal engineering focuses on the coordination of these infrastructure networks and services, as they are often built simultaneously (for a given street or development project), and managed by the same municipal authority. History Modern municipal engineering finds its origins in the 19th-century United Kingdom, following the Industrial Revolution and the growth of large industrial cities. The threat to urban populations from epidemics of waterborne diseases such as cholera and typhus led to the development of a profession devoted to "sanitary science" that later became "municipal engineering". A key figure of the so-called "public health movement" was Edwin Chadwick, author of the parliamentary report, published in 1842. Early British legislation included: Burgh Police Act 1833 - powers of paving, lighting, cleansing, watching, supplying with water and improving their communities. Municipal Corporations Act 1835 Public Health Act 1866 – formation of drainage boards Public Health Act 1875 known at the time as the Great Public Health Act This legislation provided local authorities with powers to undertake municipa
https://en.wikipedia.org/wiki/Schr%C3%B6dinger%E2%80%93Newton%20equation
The Schrödinger–Newton equation, sometimes referred to as the Newton–Schrödinger or Schrödinger–Poisson equation, is a nonlinear modification of the Schrödinger equation with a Newtonian gravitational potential, where the gravitational potential emerges from the treatment of the wave function as a mass density, including a term that represents interaction of a particle with its own gravitational field. The inclusion of a self-interaction term represents a fundamental alteration of quantum mechanics. It can be written either as a single integro-differential equation or as a coupled system of a Schrödinger and a Poisson equation. In the latter case it is also referred to in the plural form. The Schrödinger–Newton equation was first considered by Ruffini and Bonazzola in connection with self-gravitating boson stars. In this context of classical general relativity it appears as the non-relativistic limit of either the Klein–Gordon equation or the Dirac equation in a curved space-time together with the Einstein field equations. The equation also describes fuzzy dark matter and approximates classical cold dark matter described by the Vlasov–Poisson equation in the limit that the particle mass is large. Later on it was proposed as a model to explain the quantum wave function collapse by Lajos Diósi and Roger Penrose, from whom the name "Schrödinger–Newton equation" originates. In this context, matter has quantum properties, while gravity remains classical even at the fundamental level. The Schrödinger–Newton equation was therefore also suggested as a way to test the necessity of quantum gravity. In a third context, the Schrödinger–Newton equation appears as a Hartree approximation for the mutual gravitational interaction in a system of a large number of particles. In this context, a corresponding equation for the electromagnetic Coulomb interaction was suggested by Philippe Choquard at the 1976 Symposium on Coulomb Systems in Lausanne to describe one-component plasmas.
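For reference, the standard form of the equation from the general literature (it is described but not written out in this excerpt) is the coupled Schrödinger–Poisson system

i\hbar \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\psi + m\Phi\psi, \qquad \nabla^2\Phi = 4\pi G m |\psi|^2,

where \Phi is the Newtonian potential sourced by the mass density m|\psi|^2. Eliminating \Phi gives the equivalent single integro-differential form

i\hbar \frac{\partial \psi(\mathbf{r},t)}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\psi(\mathbf{r},t) - Gm^2 \left( \int \frac{|\psi(\mathbf{r}',t)|^2}{|\mathbf{r}-\mathbf{r}'|}\, d^3r' \right) \psi(\mathbf{r},t).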
https://en.wikipedia.org/wiki/Common-mode%20rejection%20ratio
In electronics, the common mode rejection ratio (CMRR) of a differential amplifier (or other device) is a metric used to quantify the ability of the device to reject common-mode signals, i.e. those that appear simultaneously and in-phase on both inputs. An ideal differential amplifier would have infinite CMRR; however, this is not achievable in practice. A high CMRR is required when a differential signal must be amplified in the presence of a possibly large common-mode input, such as strong electromagnetic interference (EMI). An example is audio transmission over a balanced line in sound reinforcement or recording. Theory Ideally, a differential amplifier takes the voltages V+ and V− on its two inputs and produces an output voltage Vo = Ad(V+ − V−), where Ad is the differential gain. However, the output of a real differential amplifier is better described as Vo = Ad(V+ − V−) + (1/2)Acm(V+ + V−), where Acm is the "common-mode gain", which is typically much smaller than the differential gain. The CMRR is defined as the ratio of the powers of the differential gain over the common-mode gain, measured in positive decibels (thus using the 20 log rule): CMRR = 20 log10(|Ad / Acm|) dB. As the differential gain should exceed the common-mode gain, this will be a positive number, and the higher the better. The CMRR is a very important specification, as it indicates how much of the common-mode signal will appear in the measurement. The value of the CMRR often depends on signal frequency as well, and must be specified as a function thereof. It is often important in reducing noise on transmission lines. For example, when measuring the resistance of a thermocouple in a noisy environment, the noise from the environment appears as an offset on both input leads, making it a common-mode voltage signal. The CMRR of the measurement instrument determines the attenuation applied to the offset or noise. Amplifier design CMRR is an important feature of operational amplifiers, difference amplifiers and instrumentation amplifiers, and can be found in the datasheet. The CMRR often v
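As a small illustration of the definition above, the following Python sketch (not from the article; the gain values are made-up examples) converts a differential gain and a common-mode gain into a CMRR figure in decibels using the 20 log rule:

import math

def cmrr_db(differential_gain: float, common_mode_gain: float) -> float:
    """CMRR in decibels, using the 20 log rule for a ratio of gains."""
    return 20 * math.log10(abs(differential_gain / common_mode_gain))

# Hypothetical amplifier: differential gain 100 000, common-mode gain 0.1.
# Prints 120.0, i.e. the common-mode gain is a million times smaller than the differential gain.
print(cmrr_db(1e5, 0.1))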
https://en.wikipedia.org/wiki/List%20of%20communication%20satellite%20companies
This is a list of all companies that currently operate at least one commercial communication satellite or currently have one on order. Global Top 20 The World Teleport Association publishes lists of companies based on revenues from all customized communications sources and includes operators of teleports and satellite fleets. In order from largest to smallest, the Global Top 20 of 2021 were: SES (Luxembourg) Intelsat S.A. (Luxembourg) EchoStar Satellite Services (USA) Hughes Network Systems (USA) Eutelsat (France) Arqiva (UK) Telesat (Canada) Speedcast (USA) Telespazio S.p.A. (Italy) Encompass Digital Media (USA) SingTel Satellite (Singapore) Hispasat (Spain) Globecast (France) Liquid Intelligent Technologies (South Africa) Russian Satellite Communications Company (Russia) Telstra (Australia) MEASAT Global (Malaysia) Thaicom (Thailand) du (UAE) Gazprom Space Systems (Russia)
https://en.wikipedia.org/wiki/Software%20rot
Software rot (bit rot, code rot, software erosion, software decay, or software entropy) is either a slow deterioration of software quality over time or its diminishing responsiveness that will eventually lead to software becoming faulty, unusable, or in need of upgrade. This is not a physical phenomenon; the software does not actually decay, but rather suffers from a lack of being responsive and updated with respect to the changing environment in which it resides. The Jargon File, a compendium of hacker lore, defines "bit rot" as a jocular explanation for the degradation of a software program over time even if "nothing has changed"; the idea behind this is almost as if the bits that make up the program were subject to radioactive decay. Causes Several factors are responsible for software rot, including changes to the environment in which the software operates, degradation of compatibility between parts of the software itself, and the appearance of bugs in unused or rarely used code. Environment change When changes occur in the program's environment, particularly changes which the designer of the program did not anticipate, the software may no longer operate as originally intended. For example, many early computer game designers used the CPU clock speed as a timer in their games. However, newer CPU clocks were faster, so the gameplay speed increased accordingly, making the games less usable over time. Onceability There are changes in the environment not related to the program's designer, but its users. Initially, a user could bring the system into working order, and have it working flawlessly for a certain amount of time. But, when the system stops working correctly, or the users want to access the configuration controls, they cannot repeat that initial step because of the different context and the unavailable information (password lost, missing instructions, or simply a hard-to-manage user interface that was first configured by trial and error). Information Arc
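A minimal sketch of the CPU-clock timing problem described above (hypothetical code, not taken from any particular game): the busy loop below was "calibrated" for one machine, so on faster hardware the pause shrinks and the program effectively speeds up, while a wall-clock delay does not.

import time

def delay_by_busy_loop(iterations: int = 10_000_000) -> None:
    """Pause by burning CPU cycles; the real-time length depends entirely on host speed."""
    for _ in range(iterations):
        pass

def delay_by_wall_clock(seconds: float = 0.016) -> None:
    """Pause tied to real time; behaves the same on faster hardware."""
    time.sleep(seconds)

start = time.perf_counter()
delay_by_busy_loop()
print(f"busy loop took {time.perf_counter() - start:.3f}s on this machine")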
https://en.wikipedia.org/wiki/R/K%20selection%20theory
In ecology, r/K selection theory relates to the selection of combinations of traits in an organism that trade off between quantity and quality of offspring. The focus on either an increased quantity of offspring at the expense of individual parental investment of r-strategists, or on a reduced quantity of offspring with a corresponding increased parental investment of K-strategists, varies widely, seemingly to promote success in particular environments. The concepts of quantity or quality offspring are sometimes referred to as "cheap" or "expensive", a comment on the expendable nature of the offspring and parental commitment made. The stability of the environment can predict if many expendable offspring are made or if fewer offspring of higher quality would lead to higher reproductive success. An unstable environment would encourage the parent to make many offspring, because the likelihood of all (or the majority) of them surviving to adulthood is slim. In contrast, more stable environments allow parents to confidently invest in one offspring because they are more likely to survive to adulthood. The terminology of r/K-selection was coined by the ecologists Robert MacArthur and E. O. Wilson in 1967 based on their work on island biogeography; although the concept of the evolution of life history strategies has a longer history (see e.g. plant strategies). The theory was popular in the 1970s and 1980s, when it was used as a heuristic device, but lost importance in the early 1990s, when it was criticized by several empirical studies. A life-history paradigm has replaced the r/K selection paradigm, but continues to incorporate its important themes as a subset of life history theory. Some scientists now prefer to use the terms fast versus slow life history as a replacement for, respectively, r versus K reproductive strategy. Overview In r/K selection theory, selective pressures are hypothesised to drive evolution in one of two generalized directions: r- or K-selection
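For context, the symbols r and K come from the logistic (Verhulst) model of population growth, which is standard background rather than part of this excerpt: dN/dt = rN(1 − N/K), where N is the population size, r the maximal intrinsic rate of increase, and K the carrying capacity of the environment. r-selection favours traits that raise r, whereas K-selection favours competitive ability in populations living near K.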
https://en.wikipedia.org/wiki/Extended%20Data%20Services
Extended Data Services (now XDS, previously EDS), is an American standard classified under Electronic Industries Alliance standard CEA-608-E for the delivery of any ancillary data (metadata) to be sent with an analog television program, or any other NTSC video signal. XDS is used by TV stations, TV networks, and TV program syndication distributors in the US for several purposes. Here are some of the most common uses of XDS: The "autoclock" system delivers time data via an XDS "Time-of-Day Packet" for automatically setting the clock of newer TVs & VCRs sold in the US. Most PBS stations provide this service. Rudimentary program information which can be displayed on-screen, such as the name and remaining time of the program, Station identification, V-chip content ratings data. XDS is also used by the American TV network ABC for their Network Alert System (NAS). NAS is a one-way communication system used by ABC to inform and alert their local affiliate stations across the US of information regarding ABC's network programming (such as program timings & changes, news special report information, etc.), using a special decoder manufactured for ABC by EEG Enterprises , a manufacturer of related equipment for the TV broadcast industry such as closed captioning and general-purpose XDS encoders. The CBS Television Network uses a similar method to transmit three separate internal messaging services to stations: one for programming departments, one for master control operations, and one for newsrooms. Many standard definition receivers produced by Dish Network encode XDS data into their output signal. Data encoded includes time of day, program name, program description, program time remaining, channel identification, and content rating. This data is obtained from the satellite service's EPG and replaces any data which may have been present when the signal was uplinked. XDS uses the same line in the vertical blanking interval as closed captioning (NTSC line 21), a
https://en.wikipedia.org/wiki/Finite-difference%20time-domain%20method
Finite-difference time-domain (FDTD) or Yee's method (named after the Chinese American applied mathematician Kane S. Yee, born 1934) is a numerical analysis technique used for modeling computational electrodynamics (finding approximate solutions to the associated system of differential equations). Since it is a time-domain method, FDTD solutions can cover a wide frequency range with a single simulation run, and treat nonlinear material properties in a natural way. The FDTD method belongs in the general class of grid-based differential numerical modeling methods (finite difference methods). The time-dependent Maxwell's equations (in partial differential form) are discretized using central-difference approximations to the space and time partial derivatives. The resulting finite-difference equations are solved in either software or hardware in a leapfrog manner: the electric field vector components in a volume of space are solved at a given instant in time; then the magnetic field vector components in the same spatial volume are solved at the next instant in time; and the process is repeated over and over again until the desired transient or steady-state electromagnetic field behavior is fully evolved. History Finite difference schemes for time-dependent partial differential equations (PDEs) have been employed for many years in computational fluid dynamics problems, including the idea of using centered finite difference operators on staggered grids in space and time to achieve second-order accuracy. The novelty of Kane Yee's FDTD scheme, presented in his seminal 1966 paper, was to apply centered finite difference operators on staggered grids in space and time for each electric and magnetic vector field component in Maxwell's curl equations. The descriptor "Finite-difference time-domain" and its corresponding "FDTD" acronym were originated by Allen Taflove in 1980. Since about 1990, FDTD techniques have emerged as primary means to computationally model many scienti
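A minimal one-dimensional sketch of the leapfrog update described above (illustrative only: normalized units, a Courant number of one, and an arbitrary grid size and source, none of which come from the article):

import numpy as np

# 1D free-space Yee grid: Ez and Hy are staggered in space and time.
nx, nsteps = 200, 500
imp0 = 377.0                      # characteristic impedance of free space, scales the E and H updates
ez = np.zeros(nx)
hy = np.zeros(nx - 1)

for t in range(nsteps):
    # Update the magnetic field at time step t + 1/2 from the spatial difference of E.
    hy += (ez[1:] - ez[:-1]) / imp0
    # Update the electric field at time step t + 1 from the spatial difference of H (interior nodes only).
    ez[1:-1] += (hy[1:] - hy[:-1]) * imp0
    # Hard source: a Gaussian pulse injected at one grid point.
    ez[50] += np.exp(-((t - 30) ** 2) / 100.0)

print("peak |Ez| after propagation:", np.abs(ez).max())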
https://en.wikipedia.org/wiki/Artificial%20brain
An artificial brain (or artificial mind) is software and hardware with cognitive abilities similar to those of the animal or human brain. Research investigating "artificial brains" and brain emulation plays three important roles in science: An ongoing attempt by neuroscientists to understand how the human brain works, known as cognitive neuroscience. A thought experiment in the philosophy of artificial intelligence, demonstrating that it is possible, at least in theory, to create a machine that has all the capabilities of a human being. A long-term project to create machines exhibiting behavior comparable to those of animals with complex central nervous system such as mammals and most particularly humans. The ultimate goal of creating a machine exhibiting human-like behavior or intelligence is sometimes called strong AI. An example of the first objective is the project reported by Aston University in Birmingham, England where researchers are using biological cells to create "neurospheres" (small clusters of neurons) in order to develop new treatments for diseases including Alzheimer's, motor neurone and Parkinson's disease. The second objective is a reply to arguments such as John Searle's Chinese room argument, Hubert Dreyfus's critique of AI or Roger Penrose's argument in The Emperor's New Mind. These critics argued that there are aspects of human consciousness or expertise that can not be simulated by machines. One reply to their arguments is that the biological processes inside the brain can be simulated to any degree of accuracy. This reply was made as early as 1950, by Alan Turing in his classic paper "Computing Machinery and Intelligence". The third objective is generally called artificial general intelligence by researchers. However, Ray Kurzweil prefers the term "strong AI". In his book The Singularity is Near, he focuses on whole brain emulation using conventional computing machines as an approach to implementing artificial brains, and claims (on groun
https://en.wikipedia.org/wiki/Literal%20%28computer%20programming%29
In computer science, a literal is a textual representation (notation) of a value as it is written in source code. Almost all programming languages have notations for atomic values such as integers, floating-point numbers, and strings, and usually for booleans and characters; some also have notations for elements of enumerated types and compound values such as arrays, records, and objects. An anonymous function is a literal for the function type. In contrast to literals, variables or constants are symbols that can take on one of a class of fixed values, the constant being constrained not to change. Literals are often used to initialize variables; for example, in the following, 1 is an integer literal and the three letter string in "cat" is a string literal: int a = 1; string s = "cat"; In lexical analysis, literals of a given type are generally a token type, with a grammar rule, like "a string of digits" for an integer literal. Some literals are specific keywords, like true for the boolean literal "true". In some object-oriented languages (like ECMAScript), objects can also be represented by literals. Methods of this object can be specified in the object literal using function literals. The brace notation below, which is also used for array literals, is typical for object literals: {"cat", "dog"} {name: "cat", length: 57} Literals of objects In ECMAScript (as well as its implementations JavaScript or ActionScript), an object with methods can be written using the object literal like this: var newobj = { var1: true, var2: "very interesting", method1: function () { alert(this.var1) }, method2: function () { alert(this.var2) } }; newobj.method1(); newobj.method2(); These object literals are similar to anonymous classes in other languages like Java. The JSON data interchange format is based on a subset of the JavaScript object literal syntax, with some additional restrictions (among them requiring all keys to be quoted, and disallowing functi
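For comparison with the C-style and ECMAScript examples above, here is a small Python sketch (our own example, not from the article; the variable names are arbitrary) showing literals for atomic and compound values:

count = 1                             # integer literal
ratio = 3.14                          # floating-point literal
name = "cat"                          # string literal
flag = True                           # boolean literal
pets = ["cat", "dog"]                 # list (compound) literal
pet = {"name": "cat", "length": 57}   # dict literal, analogous to the object literal above
double = lambda x: 2 * x              # anonymous-function (lambda) literal

print(pet["name"], double(count))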
https://en.wikipedia.org/wiki/Polarimetry
Polarimetry is the measurement and interpretation of the polarization of transverse waves, most notably electromagnetic waves, such as radio or light waves. Typically polarimetry is done on electromagnetic waves that have traveled through or have been reflected, refracted or diffracted by some material in order to characterize that object. Plane polarized light: According to the wave theory of light, an ordinary ray of light is considered to be vibrating in all planes of right angles to the direction of its propagation. If this ordinary ray of light is passed through a nicol prism, the emergent ray has its vibration only in one plane. Applications Polarimetry of thin films and surfaces is commonly known as ellipsometry. Polarimetry is used in remote sensing applications, such as planetary science, astronomy, and weather radar. Polarimetry can also be included in computational analysis of waves. For example, radars often consider wave polarization in post-processing to improve the characterization of the targets. In this case, polarimetry can be used to estimate the fine texture of a material, help resolve the orientation of small structures in the target, and, when circularly-polarized antennas are used, resolve the number of bounces of the received signal (the chirality of circularly polarized waves alternates with each reflection). Imaging In 2003, a visible-near IR (VNIR) Spectropolarimetric Imager with an acousto-optic tunable filter (AOTF) was reported. These hyperspectral and spectropolarimetric imager functioned in radiation regions spanning from ultraviolet (UV) to long-wave infrared (LWIR). In AOTFs a piezoelectric transducer converts a radio frequency (RF) signal into an ultrasonic wave. This wave then travels through a crystal attached to the transducer and upon entering an acoustic absorber is diffracted. The wavelength of the resulting light beams can be modified by altering the initial RF signal. VNIR and LWIR hyperspectral imaging consistently
https://en.wikipedia.org/wiki/Nicol%20prism
A Nicol prism is a type of polarizer. It is an optical device made from calcite crystal used to convert ordinary light into plane polarized light. It is made in such a way that it eliminates one of the rays by total internal reflection, i.e. the ordinary ray is eliminated and only the extraordinary ray is transmitted through the prism. It was the first type of polarizing prism, invented in 1828 by William Nicol (1770–1851) of Edinburgh. Mechanism The Nicol prism consists of a rhombohedral crystal of Iceland spar (a variety of calcite) that has been cut at an angle of 68° with respect to the crystal axis, cut again diagonally, and then rejoined, using a layer of transparent Canada balsam as a glue. An unpolarized light ray enters through the side face of the crystal and is split into two orthogonally polarized, differently directed rays by the birefringence property of calcite. The ordinary ray, or o-ray, experiences a refractive index of no = 1.658 in the calcite and undergoes total internal reflection at the calcite–glue interface because its angle of incidence at the glue layer (refractive index n = 1.550) exceeds the critical angle for the interface. It passes out the top side of the upper half of the prism with some refraction. The extraordinary ray, or e-ray, experiences a lower refractive index (ne = 1.486) in the calcite crystal and is not totally reflected at the interface because it strikes the interface at a sub-critical angle. The e-ray merely undergoes a slight refraction, or bending, as it passes through the interface into the lower half of the prism. It finally leaves the prism as a ray of plane-polarized light, undergoing another refraction as it exits the opposite side of the prism. The two exiting rays have polarizations orthogonal (at right angles) to each other, but the lower, or e-ray, is the more commonly used for further experimentation because it is again traveling in the original horizontal direction, assuming that the calcite prism
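A short worked check of the total-internal-reflection claim, using the indices quoted above (the arithmetic is ours, not the article's): the critical angle at the calcite–balsam interface for the o-ray is θc = arcsin(n_glue / no) = arcsin(1.550 / 1.658) ≈ 69.2°, so an o-ray striking the balsam layer at more than about 69° is totally reflected. For the e-ray no critical angle exists at all, because it travels from the lower-index calcite (ne = 1.486) into the higher-index balsam (n = 1.550), and total internal reflection can only occur when light passes from a higher-index medium toward a lower-index one.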
https://en.wikipedia.org/wiki/Composability
Composability is a system design principle that deals with the inter-relationships of components. A highly composable system provides components that can be selected and assembled in various combinations to satisfy specific user requirements. In information systems, the essential features that make a component composable are that it be: self-contained (modular): it can be deployed independently – note that it may cooperate with other components, but dependent components are replaceable stateless: it treats each request as an independent transaction, unrelated to any previous request. Stateless is just one technique; managed state and transactional systems can also be composable, but with greater difficulty. It is widely believed that composable systems are more trustworthy than non-composable systems because it is easier to evaluate their individual parts. Simulation theory In simulation theory, current literature distinguishes between Composability of Models and Interoperability of Simulation. Modeling is understood as the purposeful abstraction of reality, resulting in the formal specification of a conceptualization and underlying assumptions and constraints. Modeling and simulation (M&S) is, in particular, interested in models that are used to support the implementation of an executable version on a computer. The execution of a model over time is understood as the simulation. While modeling targets the conceptualization, simulation challenges mainly focus on implementation, in other words, modeling resides on the abstraction level, whereas simulation resides on the implementation level. Following the ideas derived from the Levels of Conceptual Interoperability model (LCIM), Composability addresses the model challenges on higher levels, interoperability deals with simulation implementation issues, and integratability with network questions. Tolk proposes the following definitions: Interoperability allows exchanging information between the systems and using
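A small illustrative sketch of the stateless property listed above (our own example; the names are invented): the stateless component treats each call as an independent transaction, while the stateful counterexample lets history leak into its answers, which makes free recombination harder.

def stateless_tax(price: float, rate: float) -> float:
    """Stateless: each call is an independent transaction whose result depends only on its inputs."""
    return price * (1.0 + rate)

class RunningTotal:
    """Stateful counterexample: replies depend on earlier requests."""
    def __init__(self) -> None:
        self.total = 0.0

    def add(self, price: float) -> float:
        self.total += price
        return self.total

print(stateless_tax(100.0, 0.2))                 # always 120.0, wherever this component is deployed
counter = RunningTotal()
print(counter.add(100.0), counter.add(100.0))    # 100.0 then 200.0: earlier requests change the answer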
https://en.wikipedia.org/wiki/Smart%20cow%20problem
The smart cow problem is the concept that, when a group of individuals is faced with a technically difficult task, only one of their members has to solve it. When the problem has been solved once, an easily repeatable method may be developed, allowing the less technically proficient members of the group to accomplish the task. The term smart cow problem is thought to be derived from the expression: "It only takes one smart cow to open the latch of the gate, and then all the other cows follow." This concept has been applied to digital rights management (DRM), where, due to the rapid spread of information on the Internet, it only takes one individual's defeat of a DRM scheme to render the method obsolete. See also Jon Lech Johansen (aka "DVD Jon", among the first hackers to crack DVD encryption) Script kiddie (an unskilled hacker who relies on tools created by others) References Digital rights management Hacker culture Technology neologisms
https://en.wikipedia.org/wiki/Nixtamalization
Nixtamalization is a process for the preparation of maize, or other grain, in which the grain is soaked and cooked in an alkaline solution, usually limewater (but sometimes aqueous alkali metal carbonates), washed, and then hulled. The term can also refer to the removal via an alkali process of the pericarp from other grains such as sorghum. Nixtamalized corn has several benefits over unprocessed grain: It is more easily ground, its nutritional value is increased, flavor and aroma are improved, and mycotoxins are reduced by up to 97%–100% (for aflatoxins). Lime and ash are highly alkaline: the alkalinity helps the dissolution of hemicellulose, the major glue-like component of the maize cell walls, and loosens the hulls from the kernels and softens the maize. Corn's hemicellulose-bound niacin is converted to free niacin (a form of vitamin B3), making it available for absorption into the body, thus helping to prevent pellagra. Some of the corn oil is broken down into emulsifying agents (monoglycerides and diglycerides), while bonding of the maize proteins to each other is also facilitated. The divalent calcium in lime acts as a cross-linking agent for protein and polysaccharide acidic side chains. While cornmeal made from untreated ground maize is unable by itself to form a dough on addition of water, the chemical changes in masa allow dough formation. These benefits make nixtamalization a crucial preliminary step for further processing of maize into food products, and the process is employed using both traditional and industrial methods, in the production of tortillas and tortilla chips (but not corn chips), tamales, hominy, and many other items. Etymology In the Aztec language Nahuatl, the word for the product of this procedure is nextamalli or nixtamalli, which in turn has yielded Mexican Spanish nixtamal. The Nahuatl word is a compound of nextli "lime ashes" and tamalli "unformed/cooked corn dough, tamal". The term nixtamalization can also be used to describe the removal of the pericar
https://en.wikipedia.org/wiki/NatureServe%20conservation%20status
The NatureServe conservation status system, maintained and presented by NatureServe in cooperation with the Natural Heritage Network, was developed in the United States in the 1980s by The Nature Conservancy (TNC) as a means for ranking or categorizing the relative imperilment of species of plants, animals, or other organisms, as well as natural ecological communities, on the global, national or subnational levels. These designations are also referred to as NatureServe ranks, NatureServe statuses, or Natural Heritage ranks. While the Nature Conservancy is no longer substantially involved in the maintenance of these ranks, the name TNC ranks is still sometimes encountered for them. NatureServe ranks indicate the imperilment of species or ecological communities as natural occurrences, ignoring individuals or populations in captivity or cultivation, and also ignoring non-native occurrences established through human intervention beyond the species' natural range, as for example with many invasive species). NatureServe ranks have been designated primarily for species and ecological communities in the United States and Canada, but the methodology is global, and has been used in some areas of Latin America and the Caribbean. The NatureServe Explorer website presents a centralized set of global, national, and subnational NatureServe ranks developed by NatureServe or provided by cooperating U.S. Natural Heritage Programs and Canadian and other international Conservation Data Centers. Introduction Most NatureServe ranks show the conservation status of a plant or animal species or a natural ecological community using a one-to-five numerical scale (from most vulnerable to most secure), applied either globally (world-wide or range-wide) or to the entity's status within a particular nation or a specified subnational unit within a nation. Letter-based notations are used for various special cases to which the numerical scale does not apply, as explained below. Ranks at variou
https://en.wikipedia.org/wiki/Level%20%28video%20games%29
In video games, a level (also referred to as a map, stage, or round in some older games) is any space available to the player during the course of completion of an objective. Video game levels generally have progressively increasing difficulty to appeal to players with different skill levels. Each level may present new concepts and challenges to keep a player's interest high. In games with linear progression, levels are areas of a larger world, such as Green Hill Zone. Games may also feature interconnected levels, representing locations. Although the challenge in a game is often to defeat some sort of character, levels are sometimes designed with a movement challenge, such as a jumping puzzle, a form of obstacle course. Players must judge the distance between platforms or ledges and safely jump between them to reach the next area. These puzzles can slow the momentum down for players of fast action games; the first Half-Life's penultimate chapter, "Interloper", featured multiple moving platforms high in the air with enemies firing at the player from all sides. Level design Level design or environment design, is a discipline of game development involving the making of video game levels—locales, stages or missions. This is commonly done using a level editor, a game development software designed for building levels; however, some games feature built-in level editing tools. History In the early days of video games (1970s–2000s), a single programmer would develop the maps and layouts for a game, and a discipline or profession dedicated solely to level design did not exist. Early games often featured a level system of ascending difficulty as opposed to progression of storyline. An example of the former approach is the arcade shoot 'em up game Space Invaders (1978), where each level looks the same, repeating endlessly until the player loses all their lives. An example of the latter approach is the arcade platform game Donkey Kong (1981), which uses multiple distinct lev
https://en.wikipedia.org/wiki/Object%20composition
In computer science, object composition and object aggregation are closely related ways to combine objects or data types into more complex ones. In conversation the distinction between composition and aggregation is often ignored. Common kinds of compositions are objects used in object-oriented programming, tagged unions, sets, sequences, and various graph structures. Object compositions relate to, but are not the same as, data structures. Object composition refers to the logical or conceptual structure of the information, not the implementation or physical data structure used to represent it. For example, a sequence differs from a set because (among other things) the order of the composed items matters for the former but not the latter. Data structures such as arrays, linked lists, hash tables, and many others can be used to implement either of them. Perhaps confusingly, some of the same terms are used for both data structures and composites. For example, "binary tree" can refer to either: as a data structure it is a means of accessing a linear sequence of items, and the actual positions of items in the tree are irrelevant (the tree can be internally rearranged however one likes, without changing its meaning). However, as an object composition, the positions are relevant, and changing them would change the meaning (as for example in cladograms). Programming technique Object-oriented programming is based on objects to encapsulate data and behavior. It uses two main techniques for assembling and composing functionality into more complex ones, sub-typing and object composition. Object composition is about combining objects within compound objects, and at the same time, ensuring the encapsulation of each object by using their well-defined interface without visibility of their internals. In this regard, object composition differs from data structures, which do not enforce encapsulation. Object composition may also be about a group of multiple related objects, s
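A brief sketch of the technique described above (our own example; the class names Engine, Radio and Car are invented): a compound object combines its parts and uses them only through their interfaces, preserving each part's encapsulation.

class Engine:
    def __init__(self, horsepower: int):
        self._horsepower = horsepower          # internal detail, reachable only via start()
    def start(self) -> str:
        return f"engine ({self._horsepower} hp) started"

class Radio:
    def play(self, station: str) -> str:
        return f"playing {station}"

class Car:
    """Compound object: composed of an Engine and a Radio, used only via their interfaces."""
    def __init__(self):
        self._engine = Engine(120)
        self._radio = Radio()
    def drive(self) -> str:
        return self._engine.start() + "; " + self._radio.play("news")

print(Car().drive())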
https://en.wikipedia.org/wiki/Function%20composition%20%28computer%20science%29
In computer science, function composition is an act or mechanism to combine simple functions to build more complicated ones. Like the usual composition of functions in mathematics, the result of each function is passed as the argument of the next, and the result of the last one is the result of the whole. Programmers frequently apply functions to results of other functions, and almost all programming languages allow it. In some cases, the composition of functions is interesting as a function in its own right, to be used later. Such a function can always be defined but languages with first-class functions make it easier. The ability to easily compose functions encourages factoring (breaking apart) functions for maintainability and code reuse. More generally, big systems might be built by composing whole programs. Narrowly speaking, function composition applies to functions that operate on a finite amount of data, each step sequentially processing it before handing it to the next. Functions that operate on potentially infinite data (a stream or other codata) are known as filters, and are instead connected in a pipeline, which is analogous to function composition and can execute concurrently. Composing function calls For example, suppose we have two functions and , as in and . Composing them means we first compute , and then use to compute . Here is the example in the C language: float x, y, z; // ... y = g(x); z = f(y); The steps can be combined if we don't give a name to the intermediate result: z = f(g(x)); Despite differences in length, these two implementations compute the same result. The second implementation requires only one line of code and is colloquially referred to as a "highly composed" form. Readability and hence maintainability is one advantage of highly composed forms, since they require fewer lines of code, minimizing a program's "surface area". DeMarco and Lister empirically verify an inverse relationship between surface area and maintainab
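Here is the same idea expressed as a function in its own right, in a hedged Python sketch rather than the article's C notation (Python is chosen only for brevity; f and g are stand-ins for any two compatible functions):

def compose(f, g):
    """Return the composition of f and g, i.e. a function mapping x to f(g(x))."""
    return lambda x: f(g(x))

def g(x: float) -> float:
    return x + 1.0

def f(y: float) -> float:
    return 2.0 * y

h = compose(f, g)        # the composed function can be stored, passed around, or composed again
print(h(3.0))            # 8.0, the same result as f(g(3.0))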
https://en.wikipedia.org/wiki/Stevenson%20screen
A Stevenson screen or instrument shelter is a shelter or an enclosure that shields meteorological instruments from precipitation and direct heat radiation from outside sources, while still allowing air to circulate freely around them. It forms part of a standard weather station and holds instruments that may include thermometers (ordinary, maximum/minimum), a hygrometer, a psychrometer, a dewcell, a barometer, and a thermograph. Stevenson screens may also be known as a cotton region shelter, an instrument shelter, a thermometer shelter, a thermoscreen, or a thermometer screen. Its purpose is to provide a standardised environment in which to measure temperature, humidity, dewpoint, and atmospheric pressure. It is white in color to reflect direct solar radiation. History It was designed by Thomas Stevenson (1818–1887), a Scottish civil engineer who designed many lighthouses, and was the father of author Robert Louis Stevenson. The development of his small thermometer screen with double-louvered walls on all sides and no floor was reported in 1864. After comparisons with other screens in the United Kingdom, Stevenson's original design was modified. The modifications by Edward Mawley of the Royal Meteorological Society in 1884 included a double roof, a floor with slanted boards, and a modification of the double louvers. This design was adopted by the British Meteorological Office and eventually other national services, such as that of Canada. The national services developed their own variations, such as the single-louvered Cotton Region design in the United States. Composition The traditional Stevenson screen is a box shape, constructed of wood, in a double-louvered design. However, it is possible to construct a screen using other materials and shapes, such as a pyramid. The World Meteorological Organization (WMO) agreed standard for the height of the thermometers is between 1.25 and 2 m above the ground. Size The interior size of the screen will depend on the number of instruments that a
https://en.wikipedia.org/wiki/Global%20Telecommunications%20System
The Global Telecommunication System (GTS) is a secured communication network enabling real-time exchange of meteorological data from weather stations, satellites and numerical weather prediction centres, providing critical meteorological forecasting, warnings, and alerts. It was established by the World Meteorological Organization in 1951 under the World Weather Watch programme for the free and open exchange of meteorological information. The GTS consists of an integrated network of point-to-point circuits, and multi-point circuits which interconnect meteorological telecommunication centres. The circuits of the GTS are composed of a combination of terrestrial and satellite telecommunication links. They comprise point-to-point circuits, point-to-multi-point circuits for data distribution, multi-point-to-point circuits for data collection, as well as two-way multi-point circuits. Meteorological Telecommunication Centres are responsible for receiving data and relaying it selectively on GTS circuits. The GTS is organized on a three level basis: The Main Telecommunication Network (MTN) The Regional Meteorological Telecommunication Networks (RMTNs) The National Meteorological Telecommunication Networks (NMTNs) Satellite-based data collection and/or data distribution systems are integrated in the GTS as an essential element of the global, regional and national levels of the GTS. Data collection systems operated via geostationary or near-polar orbiting meteorological/environmental satellites, including the Argos System, are widely used for the collection of observational data from data collection platforms. Marine data are also collected through the International Maritime Mobile Service and Inmarsat satellites. References Further reading WMO (2013) Manual on the Global Telecommunications System WMO publication 386 External links WMO's Global Telecommunication System WMO's GTS Community Site 1951 establishments Telecommunications-related introductions in the 195
https://en.wikipedia.org/wiki/Sethusamudram%20Shipping%20Canal%20Project
Sethusamudram Shipping Canal Project is a proposed project to create a shipping route in the shallow straits between India and Sri Lanka. This would provide a continuously navigable sea route around the Indian Peninsula. The channel would be dredged in the Sethusamudram sea between Tamil Nadu and Sri Lanka, passing through the limestone shoals of Rama Sethu. The project involves digging a long deepwater channel linking the shallow Palk Strait with the Gulf of Mannar. Conceived in 1860 by Alfred Dundas Taylor, it received the approval of the Indian government in 2005. The proposed route through the shoals of Ram Setu is opposed by some groups on religious, environmental and economic grounds. Five alternative routes were considered that avoid damage to the shoals. The most recent plan is to dig the channel roughly in the middle of the straits to provide the shortest course and the course requiring least maintenance. This plan avoids the demolition of Ram Setu. History Because of its shallow waters, Sethusamudram, the sea separating Sri Lanka from India, presents a hindrance to navigation through the Palk Strait. Though trade across the India-Sri Lanka divide has been active since at least the first millennium BCE, it has been limited to small boats and dinghies. Larger oceangoing vessels coming from the West have had to navigate around Sri Lanka to reach India's eastern coast. Eminent British geographer Major James Rennell surveyed the region in the late 18th century; he suggested that a "navigable passage could be maintained by dredging of the Ramisseram [sic]". Little notice was given to his proposal, perhaps because it came from "so young and unknown an officer", and the idea was only revived 60 years later. Efforts were made in 1838 to dredge the canal, but the passage did not remain navigable for any vessels except those with a shallow draft. The project was conceived in 1860 by Commander A. D. Taylor of the Indian Marines and has been reviewed many times without a
https://en.wikipedia.org/wiki/WHUT-TV
WHUT-TV (channel 32) is the secondary PBS member television station in Washington, D.C. The station is owned by Howard University, a historically black college, and is sister to commercial urban contemporary radio station WHUR-FM (96.3). WHUT-TV's studios are located on the Howard University campus, and its transmitter is located in the Tenleytown neighborhood in the northwest quadrant of Washington. WHUT airs a variety of standard PBS programming, as well as programs produced by Howard University, and international programs focusing on regions such as the Caribbean and Africa. History On June 25, 1974, Howard University was granted a construction permit to build a new television station on channel 32 in Washington, D.C. It was more than six years before the station signed on November 17, 1980. WHMM-TV (whose call letters stood for Howard University Mass Media) turned Howard, owner of the only radio station owned by an HBCU at the time, into the owner of the first Black-owned public television station. At the outset, the station suffered from some problems with its antenna and the need to train staff on the job. It also faced issues carving out an identity for itself and its mission, with standard PBS fare airing during much of the day; in 1983, its budget was one-third that of WETA-TV. However, within its first decade, it produced 1,000 Howard graduates trained in television production. The long-running Evening Exchange public affairs program, which debuted with the station, became a station staple; it was hosted by Kojo Nnamdi between 1985 and 2011. Budget cuts at Howard in the late 1980s and 1990s prompted staff cuts in operations. Even as the station tried to significantly step up fundraising, its treatment as another academic department, requiring a different style of management, often hurt WHMM-TV. Staff levels were cut from 90 in 1988 to 65 five years later, when a blue-ribbon panel was convened by PBS to discuss the station's problems; that year, it had
https://en.wikipedia.org/wiki/Libquantum
Libquantum is a C library for simulating quantum mechanics, originally focused on virtual quantum computers. It is licensed under the GNU GPL. It was part of the SPEC CPU2006 benchmark suite. The latest version is stated to be v1.1.1 (Jan 2013) on the mailing list, but on the website there is only v0.9.1 from 2007. An author of libquantum, Hendrik Weimer, has published, with colleagues, a paper in Nature on using Rydberg atoms for universal quantum simulation, building on his own work.
https://en.wikipedia.org/wiki/Conjugacy%20class%20sum
In abstract algebra, a conjugacy class sum, or simply class sum, is a function defined for each conjugacy class of a finite group G as the sum of the elements in that conjugacy class. The class sums of a group form a basis for the center of the associated group algebra. Definition Let G be a finite group, and let C1,...,Ck be the distinct conjugacy classes of G. For 1 ≤ i ≤ k, define the class sum Ci⁺ = Σ_{g ∈ Ci} g, regarded as the element of the group algebra whose value is 1 on the elements of Ci and 0 elsewhere. The functions C1⁺,...,Ck⁺ are the class sums of G. In the group algebra Let CG be the complex group algebra over G. Then the center of CG, denoted Z(CG), is defined by Z(CG) = {f ∈ CG : fh = hf for all h ∈ CG}. This is equal to the set of all class functions (functions which are constant on conjugacy classes). To see this, note that f is central if and only if f(yx) = f(xy) for all x, y in G. Replacing y by yx⁻¹, this condition becomes f(y) = f(xyx⁻¹) for all x, y in G, i.e. f is constant on conjugacy classes. The class sums are a basis for the set of all class functions, and thus they are a basis for the center of the algebra. In particular, this shows that the dimension of Z(CG) is equal to the number of class sums of G. References Goodman, Roe; and Wallach, Nolan (2009). Symmetry, Representations, and Invariants. Springer. See chapter 4, especially 4.3. James, Gordon; and Liebeck, Martin (2001). Representations and Characters of Groups (2nd ed.). Cambridge University Press. See chapter 12.
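A small computational illustration (our own Python sketch, not from the article): it builds the class sums of the symmetric group S3 as elements of the group algebra and checks that each one commutes with every group element, i.e. lies in the center.

from itertools import permutations
from collections import defaultdict

def mult(p, q):
    """Compose permutations given as tuples: (p*q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(q)))

def inv(p):
    out = [0] * len(p)
    for i, v in enumerate(p):
        out[v] = i
    return tuple(out)

G = list(permutations(range(3)))                     # the symmetric group S3

def conj_class(x):
    return frozenset(mult(mult(g, x), inv(g)) for g in G)

classes = {conj_class(x) for x in G}                 # S3 has 3 conjugacy classes

def algebra_mult(a, b):
    """Multiply two group-algebra elements stored as dicts {group element: coefficient}."""
    out = defaultdict(int)
    for g, cg in a.items():
        for h, ch in b.items():
            out[mult(g, h)] += cg * ch
    return dict(out)

class_sums = [{g: 1 for g in c} for c in classes]    # coefficient 1 on each element of the class

for s in class_sums:
    assert all(algebra_mult(s, {g: 1}) == algebra_mult({g: 1}, s) for g in G)
print(len(class_sums), "class sums, all central")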
https://en.wikipedia.org/wiki/Shadow%20price
A shadow price is the monetary value assigned to an abstract or intangible commodity which is not traded in the marketplace. This often takes the form of an externality. Shadow prices are also known as the recalculation of known market prices in order to account for the presence of distortionary market instruments (e.g. quotas, tariffs, taxes or subsidies). Shadow prices are the real economic prices given to goods and services after they have been appropriately adjusted by removing distortionary market instruments and incorporating the societal impact of the respective good or service. A shadow price is often calculated based on a group of assumptions and estimates because it lacks reliable data, so it is subjective and somewhat inaccurate. The need for shadow prices arises as a result of “externalities” and the presence of distortionary market instruments. An externality is defined as a cost or benefit incurred by a third party as a result of the production or consumption of a good or service, where the external effect is not accounted for in the final cost-benefit analysis of its production. These inaccuracies and skewed results produce an imperfect market mechanism which inefficiently allocates resources. Market distortions happen when the market is not behaving as it would under perfect competition due to interventions by governments, companies, and other economic agents. Specifically, the presence of a monopoly or monopsony, in which firms do not behave as they would under perfect competition, government intervention through taxes and subsidies, public goods, information asymmetry, and restrictions on labour markets are distortionary effects on the market. Shadow prices are often utilised in cost-benefit analyses by economic and financial analysts when evaluating the merits of public policy & government projects, when externalities or distortionary market instruments are present. The utilisation of shadow prices in these types of public policy decisions is extremely impo
https://en.wikipedia.org/wiki/Olga%20Ladyzhenskaya
Olga Aleksandrovna Ladyzhenskaya (; 7 March 1922 – 12 January 2004) was a Russian mathematician who worked on partial differential equations, fluid dynamics, and the finite difference method for the Navier–Stokes equations. She received the Lomonosov Gold Medal in 2002. She is the author of more than two hundred scientific works, among which are six monographs. Biography Ladyzhenskaya was born and grew up in the small town of Kologriv, the daughter of a mathematics teacher who is credited with her early inspiration and love of mathematics. The artist Gennady Ladyzhensky was her grandfather's brother, also born in this town. In 1937 her father, Aleksandr Ivanovich Ladýzhenski, was arrested by the NKVD and executed as an "enemy of the people". Ladyzhenskaya completed high school in 1939, unlike her older sisters who weren't permitted to do the same. She was not admitted to the Leningrad State University due to her father's status and attended a pedagogical institute. After the German invasion of June 1941, she taught school in Kologriv. She was eventually admitted to Moscow State University in 1943 and graduated in 1947. She began teaching in the Physics department of the university in 1950 and defended her PhD there, in 1951, under Sergei Sobolev and Vladimir Smirnov. She received a second doctorate from the Moscow State University in 1953. In 1954, she joined the mathematical physics laboratory of the Steklov Institute and became its head in 1961. Ladyzhenskaya had a love of arts and storytelling, counting writer Aleksandr Solzhenitsyn and poet Anna Akhmatova among her friends. Like Solzhenitsyn she was religious. She was once a member of the city council, and engaged in philanthropic activities, repeatedly risking her personal safety and career to aid people opposed to the Soviet regime. Ladyzhenskaya suffered from various eye problems in her later years and relied on special pencils to do her work. Two days before a trip to Florida, she died in her sleep
https://en.wikipedia.org/wiki/Holarctic%20realm
The Holarctic realm is a biogeographic realm that comprises the majority of habitats found throughout the continents in the Northern Hemisphere. It corresponds to the floristic Boreal Kingdom. It includes both the Nearctic zoogeographical region (which covers most of North America), and Alfred Wallace's Palearctic zoogeographical region (which covers North Africa, and all of Eurasia except for Southeast Asia, the Indian subcontinent, the southern Arabian Peninsula). These regions are further subdivided into a variety of ecoregions. Many ecosystems and the animal and plant communities that depend on them extend across a number of continents and cover large portions of the Holarctic realm. This continuity is the result of those regions’ shared glacial history. Major ecosystems Within the Holarctic realm, there are a variety of ecosystems. The type of ecosystem found in a given area depends on its latitude and the local geography. In the far north, a band of Arctic tundra encircles the shore of the Arctic Ocean. The ground beneath this land is permafrost (frozen year-round). In these difficult growing conditions, few plants can survive. South of the tundra, the boreal forest stretches across North America and Eurasia. This land is characterized by coniferous trees. Further south, the ecosystems become more diverse. Some areas are temperate grassland, while others are temperate forests dominated by deciduous trees. Many of the southernmost parts of the Holarctic are deserts, which are dominated by plants and animals adapted to the dry conditions. Animal species with a Holarctic distribution A variety of animal species are distributed across continents, throughout much of the Holarctic realm. These include the brown bear, grey wolf, red fox, wolverine, moose, caribou, golden eagle and common raven. The brown bear (Ursus arctos) is found in mountainous and semi-open areas distributed throughout the Holarctic. It once occupied much larger areas, but has been driv
https://en.wikipedia.org/wiki/Code-talker%20paradox
A code-talker paradox is a situation in which a language prevents communication. As an issue in linguistics, the paradox raises questions about the fundamental nature of languages. As such, the paradox is a problem in philosophy of language. The term code-talker paradox was coined in 2001 by Mark Baker to describe the Navajo code talking used during World War II. Code talkers are able to create a language mutually intelligible to each other but completely unintelligible to everyone who does not know the code. This causes a conflict of interests without actually causing any conflict at all. In the case of Navajo code-talkers, cryptanalysts were unable to decode messages in Navajo, even when using the most sophisticated methods available. At the same time, the code talkers were able to encrypt and decrypt messages quickly and easily by translating them into and from Navajo. Thus the code talker paradox refers to how human languages can be so similar and different at once: so similar that one can learn them both and gain the ability to translate from one to the other, yet so different that if someone knows one language but does not know another, it is not always possible to derive the meaning of a text by analyzing it or infer it from the other language. See also Drift (linguistics) List of paradoxes Plato's Problem The Analysis of Verbal Behavior Philip Johnston References Baker, Mark C. The Atoms of Language: The Mind's Hidden Rules of Grammar. Basic Books, 2001. Philosophy of language History of cryptography Paradoxes
https://en.wikipedia.org/wiki/Off-the-record%20messaging
Off-the-Record Messaging (OTR) is a cryptographic protocol that provides encryption for instant messaging conversations. OTR uses a combination of AES symmetric-key algorithm with 128 bits key length, the Diffie–Hellman key exchange with 1536 bits group size, and the SHA-1 hash function. In addition to authentication and encryption, OTR provides forward secrecy and malleable encryption. The primary motivation behind the protocol was providing deniable authentication for the conversation participants while keeping conversations confidential, like a private conversation in real life, or off the record in journalism sourcing. This is in contrast with cryptography tools that produce output which can be later used as a verifiable record of the communication event and the identities of the participants. The initial introductory paper was named "Off-the-Record Communication, or, Why Not To Use PGP". The OTR protocol was designed by cryptographers Ian Goldberg and Nikita Borisov and released on 26 October 2004. They provide a client library to facilitate support for instant messaging client developers who want to implement the protocol. A Pidgin and Kopete plugin exists that allows OTR to be used over any IM protocol supported by Pidgin or Kopete, offering an auto-detection feature that starts the OTR session with the buddies that have it enabled, without interfering with regular, unencrypted conversations. Version 4 of the protocol has been in development since 2017 by a team led by Sofía Celi, and reviewed by Nik Unger and Ian Goldberg. This version aims to provide online and offline deniability, to update the cryptographic primitives, and to support out-of-order delivery and asynchronous communication. History OTR was presented in 2004 by Nikita Borisov, Ian Avrum Goldberg, and Eric A. Brewer as an improvement over the OpenPGP and the S/MIME system at the "Workshop on Privacy in the Electronic Society" (WPES). The first version 0.8.0 of the reference implementation w
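To illustrate just the key-agreement ingredient mentioned above, here is a toy Diffie–Hellman sketch in Python (tiny made-up parameters chosen for readability; real OTR uses a 1536-bit group, and this is not the OTR wire protocol):

import secrets

# Toy public parameters: a small prime modulus p and generator g (illustration only).
p, g = 2147483647, 5

alice_secret = secrets.randbelow(p - 2) + 1
bob_secret = secrets.randbelow(p - 2) + 1

alice_public = pow(g, alice_secret, p)     # sent to Bob
bob_public = pow(g, bob_secret, p)         # sent to Alice

# Each side combines its own secret with the other's public value.
alice_shared = pow(bob_public, alice_secret, p)
bob_shared = pow(alice_public, bob_secret, p)
assert alice_shared == bob_shared          # both now hold the same shared secret

print("shared secret established:", alice_shared == bob_shared)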
https://en.wikipedia.org/wiki/%40stake
@stake, Inc. was a computer security professional services company in Cambridge, Massachusetts, United States. It was founded in 1999 by Battery Ventures (Tom Crotty, Sunil Dhaliwal, and Scott Tobin) and Ted Julian. Its initial core team of technologists included Dan Geer (Chief Technical Officer) and the east coast security team from Cambridge Technology Partners (including Dave Goldsmith). History In January 2000, @stake acquired L0pht Heavy Industries (who were known for their many hacker employees), bringing on Mudge as its Vice President of Research and Development. Its domain name was atstake.com. In July 2000, @stake acquired Cerberus Information Security Limited of London, England, from David and Mark Litchfield and Robert Stein-Rostaing, to be their launchpad into Europe, the Middle East and Africa. @stake was subsequently acquired by Symantec in 2004. In addition to Dan Geer and Mudge, @stake employed many famous security experts including Dildog, Window Snyder, Dave Aitel, Katie Moussouris, David Litchfield, Mark Kriegsman, Mike Schiffman, the grugq, Chris Wysopal, Alex Stamos, Cris Thomas, and Joe Grand. In September 2000, an @stake recruiter contacted Mark Abene to recruit him for a security consultant position. The recruiter was apparently unaware of his past felony conviction since @stake had a policy of not hiring convicted hackers. Mark was informed by a company representative that @stake could not hire him, saying: "We ran a background check." This caused some debate regarding the role of convicted hackers working in the security business. @stake was primarily a consulting company, but also offered information security training through the @stake academy, and created a number of software security tools: LC 3, LC 4 and LC 5 were versions of a password auditing and recovery tool also known as L0phtCrack WebProxy was a security testing tool for Web applications SmartRisk Analyzer was an application security analysis tool The @stake Sleu
https://en.wikipedia.org/wiki/Dielectric%20heating
Dielectric heating, also known as electronic heating, radio frequency heating, and high-frequency heating, is the process in which a radio frequency (RF) alternating electric field, or radio wave or microwave electromagnetic radiation heats a dielectric material. At higher frequencies, this heating is caused by molecular dipole rotation within the dielectric. Mechanism Molecular rotation occurs in materials containing polar molecules having an electrical dipole moment, with the consequence that they will align themselves in an electromagnetic field. If the field is oscillating, as it is in an electromagnetic wave or in a rapidly oscillating electric field, these molecules rotate continuously by aligning with it. This is called dipole rotation, or dipolar polarisation. As the field alternates, the molecules reverse direction. Rotating molecules push, pull, and collide with other molecules (through electrical forces), distributing the energy to adjacent molecules and atoms in the material. The process of energy transfer from the source to the sample is a form of radiative heating. Temperature is related to the average kinetic energy (energy of motion) of the atoms or molecules in a material, so agitating the molecules in this way increases the temperature of the material. Thus, dipole rotation is a mechanism by which energy in the form of electromagnetic radiation can raise the temperature of an object. There are also many other mechanisms by which this conversion occurs. Dipole rotation is the mechanism normally referred to as dielectric heating, and is most widely observable in the microwave oven where it operates most effectively on liquid water, and also, but much less so, on fats and sugars. This is because fats and sugar molecules are far less polar than water molecules, and thus less affected by the forces generated by the alternating electromagnetic fields. Outside of cooking, the effect can be used generally to heat solids, liquids, or gases, provided th
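A rough quantitative picture of the mechanism can be given with the standard engineering expression for the power absorbed per unit volume of a lossy dielectric, P = 2πf·ε0·ε″r·E²rms. The formula and the illustrative numbers below (a 2.45 GHz field, a loss factor of roughly 10 for liquid water, a 1 kV/m RMS field strength) are not taken from the text above; they are assumptions used only to show the orders of magnitude involved.

```python
# Back-of-the-envelope dielectric heating estimate:
# P = 2*pi*f * eps0 * eps_r'' * E_rms^2  (watts per cubic metre).
# The frequency, loss factor and field strength below are assumed illustrative values.
import math

EPS0 = 8.854e-12  # permittivity of free space, F/m

def dielectric_power_density(freq_hz: float, loss_factor: float, e_rms_v_per_m: float) -> float:
    """Volumetric power absorbed by a lossy dielectric, in W/m^3."""
    return 2 * math.pi * freq_hz * EPS0 * loss_factor * e_rms_v_per_m ** 2

# Roughly 1.4 MW/m^3 for a water-like material in a strong 2.45 GHz field.
print(dielectric_power_density(freq_hz=2.45e9, loss_factor=10.0, e_rms_v_per_m=1e3))
```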
https://en.wikipedia.org/wiki/Backup%20validation
Backup validation is the process whereby owners of computer data may examine how their data was backed up in order to understand what their risk of data loss might be. It also speaks to optimization of such processes, charging for them as well as estimating future requirements, sometimes called capacity planning. History Over the past several decades (leading up to 2005), organizations (banks, governments, schools, manufacturers and others) have come to rely more on "Open Systems" and less on "Closed Systems". For example, 25 years ago, a large bank might have most if not all of its critical data housed in an IBM mainframe computer (a "Closed System"), but today, that same bank might store a substantially greater portion of its critical data in spreadsheets, databases, or even word processing documents (i.e., "Open Systems"). The problem with Open Systems is, primarily, their unpredictable nature. The very nature of an Open System is that it is exposed to potentially thousands if not millions of variables ranging from network overloads to computer virus attacks to simple software incompatibility. Any one of these factors, or several in combination, may result in lost data, compromised backup attempts, or both. These types of problems do not generally occur on Closed Systems, or at least not in such unpredictable ways. In the "old days", backups were a nicely contained affair. Today, because of the ubiquity of, and dependence upon, Open Systems, an entire industry has developed around data protection. Three key elements of such data protection are Validation, Optimization and Chargeback. Validation Validation is the process of finding out whether a backup attempt succeeded or not, or, whether the data is backed up enough to consider it "protected". This process usually involves the examination of log files, the "smoking gun" often left behind after a backup attempt takes place, as well as media databases, data traffic and even magnetic tapes.
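As a concrete illustration of the log-examination step described above, the short sketch below classifies a backup attempt by scanning its log file for success or failure markers. The log format, file location and marker strings are hypothetical; real backup products each have their own logs, status codes and media databases.

```python
# Hypothetical log-scanning sketch for backup validation; the marker strings and
# log format are invented for illustration, not taken from any specific product.
from pathlib import Path

FAILURE_MARKERS = ("error", "aborted", "media full")
SUCCESS_MARKERS = ("backup completed", "status=ok")

def validate_backup_log(log_path: Path) -> str:
    """Classify one backup attempt as 'failed', 'succeeded', or 'unknown'."""
    text = log_path.read_text(errors="ignore").lower()
    if any(marker in text for marker in FAILURE_MARKERS):
        return "failed"
    if any(marker in text for marker in SUCCESS_MARKERS):
        return "succeeded"
    return "unknown"  # no "smoking gun" either way: flag for manual review
```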
https://en.wikipedia.org/wiki/Circular%20shift
In combinatorial mathematics, a circular shift is the operation of rearranging the entries in a tuple, either by moving the final entry to the first position, while shifting all other entries to the next position, or by performing the inverse operation. A circular shift is a special kind of cyclic permutation, which in turn is a special kind of permutation. Formally, a circular shift is a permutation σ of the n entries in the tuple such that either σ(i) ≡ (i + 1) modulo n, for all entries i = 1, ..., n, or σ(i) ≡ (i − 1) modulo n, for all entries i = 1, ..., n. The results of repeatedly applying circular shifts to a given tuple are also called the circular shifts of the tuple. For example, repeatedly applying circular shifts to the four-tuple (a, b, c, d) successively gives (d, a, b, c), (c, d, a, b), (b, c, d, a), (a, b, c, d) (the original four-tuple), and then the sequence repeats; this four-tuple therefore has four distinct circular shifts. However, not all n-tuples have n distinct circular shifts. For instance, the 4-tuple (a, b, a, b) only has 2 distinct circular shifts. The number of distinct circular shifts of an n-tuple is n/d, where d is a divisor of n indicating the maximal number of repeats over all subpatterns. In computer programming, a bitwise rotation, also known as a circular shift, is a bitwise operation that shifts all bits of its operand. Unlike an arithmetic shift, a circular shift does not preserve a number's sign bit or distinguish a floating-point number's exponent from its significand. Unlike a logical shift, the vacant bit positions are not filled in with zeros but are filled in with the bits that are shifted out of the sequence. Implementing circular shifts Circular shifts are used often in cryptography in order to permute bit sequences. Unfortunately, many programming languages, including C, do not have operators or standard functions for circular shifting, even though virtually all processors have bitwise operation instructions for it (e.g. Intel x86 has ROL a
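As a minimal sketch of the operation (and of the workaround needed in languages without a rotate operator), the following Python functions rotate the bits of a fixed-width word; the 8-bit word width is just an illustrative choice.

```python
# Circular (bitwise) shift of a fixed-width word; Python, like C, has no built-in
# rotate operator, so the rotation is composed from shifts, OR and a width mask.
def rotate_left(value: int, shift: int, width: int = 8) -> int:
    mask = (1 << width) - 1
    shift %= width
    return ((value << shift) | (value >> (width - shift))) & mask

def rotate_right(value: int, shift: int, width: int = 8) -> int:
    return rotate_left(value, width - (shift % width), width)

assert rotate_left(0b10110010, 1) == 0b01100101   # bits wrap around, none are lost
assert rotate_right(rotate_left(0xA7, 3), 3) == 0xA7
```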
https://en.wikipedia.org/wiki/Local%20Area%20Transport
Local Area Transport (LAT) is a non-routable (data link layer) networking technology developed by Digital Equipment Corporation to provide connection between the DECserver terminal servers and Digital's VAX and Alpha and MIPS host computers via Ethernet, giving communication between those hosts and serial devices such as video terminals and printers. The protocol itself was designed in such a manner as to maximize packet efficiency over Ethernet by bundling multiple characters from multiple ports into a single packet for Ethernet transport. One LAT strength was efficiently handling time-sensitive data transmission. Over time, other host implementations of the LAT protocol appeared allowing communications to a wide range of Unix and other non-Digital operating systems using the LAT protocol. History In 1984, the first implementation of the LAT protocol connected a terminal server to a VMS VAX-Cluster in Spit Brook Road, Nashua, NH. By "virtualizing" the terminal port at the host end, a very large number of plug-and-play VT100-class terminals could connect to each host computer system. Additionally, a single physical terminal could connect via multiple sessions to multiple hosts simultaneously. Future generations of terminal servers included both LAT and TELNET protocols, one of the earliest protocols created to run on a burgeoning TCP/IP based Internet. Additionally, the ability to create reverse direction pathways from users to non-traditional RS232 devices (i.e. UNIX Host TTYS1 operator ports) created an entirely new market for Terminal Servers, now known as console servers in the mid to late 1990s, year 2000 and beyond through today. LAT and VMS drove the initial surge of adoption of thick Ethernet by the computer industry. By 1986, terminal server networks accounted for 10% of Digital's $10 billion revenue. These early Ethernet LANs scaled using Ethernet bridges (another DEC invention) as well as DECnet routers. Subsequently, Cisco routers, which implemented T
https://en.wikipedia.org/wiki/Height%20above%20ground%20level
In aviation, atmospheric sciences and broadcasting, a height above ground level (AGL or HAGL) is a height measured with respect to the underlying ground surface. This is as opposed to height above mean sea level (AMSL or HAMSL), height above ellipsoid (HAE, as reported by a GPS receiver), or height above average terrain (AAT or HAAT, in broadcast engineering). In other words, these expressions (AGL, AMSL, HAE, AAT) indicate where the "zero level" or "reference altitude" – the vertical datum – is located. Aviation A pilot flying an aircraft under instrument flight rules (typically under poor visibility conditions) must rely on the aircraft's altimeter to decide when to deploy the undercarriage and prepare for landing. Therefore, the pilot needs reliable information on the height of the plane with respect to the landing area (usually an airport). The altimeter, which is usually a barometer calibrated in units of distance instead of atmospheric pressure, can therefore be set in such a way as to indicate the height of the aircraft above ground. This is done by communicating with the control tower of the airport (to get the current surface pressure) and setting the altimeter so as to read zero on the ground of that airport. Confusion between AGL and AMSL, or improper calibration of the altimeter, may result in controlled flight into terrain, a crash of a fully functioning aircraft under pilot control. While the use of a barometric altimeter setting that provides a zero reading on the ground of the airport is a reference available to pilots, in commercial aviation it is a country-specific procedure that is not often used (it is used, e.g., in Russia, and a few other countries). Most countries (Far East, North and South America, all of Europe, Africa, Australia) use the airport's AMSL (above mean sea level) elevation as a reference. During approaches to landing, there are several other references that are used, including AFE (above field elevation) which is height refer
https://en.wikipedia.org/wiki/List%20of%20materials%20analysis%20methods
This is a list of analysis methods used in materials science. Analysis methods are listed by their acronym, if one exists. Symbols μSR – see muon spin spectroscopy χ – see magnetic susceptibility A AAS – Atomic absorption spectroscopy AED – Auger electron diffraction AES – Auger electron spectroscopy AFM – Atomic force microscopy AFS – Atomic fluorescence spectroscopy Analytical ultracentrifugation APFIM – Atom probe field ion microscopy APS – Appearance potential spectroscopy ARPES – Angle resolved photoemission spectroscopy ARUPS – Angle resolved ultraviolet photoemission spectroscopy ATR – Attenuated total reflectance B BET – BET surface area measurement (BET from Brunauer, Emmett, Teller) BiFC – Bimolecular fluorescence complementation BKD – Backscatter Kikuchi diffraction, see EBSD BRET – Bioluminescence resonance energy transfer BSED – Back scattered electron diffraction, see EBSD C CAICISS – Coaxial impact collision ion scattering spectroscopy CARS – Coherent anti-Stokes Raman spectroscopy CBED – Convergent beam electron diffraction CCM – Charge collection microscopy CDI – Coherent diffraction imaging CE – Capillary electrophoresis CET – Cryo-electron tomography CL – Cathodoluminescence CLSM – Confocal laser scanning microscopy COSY – Correlation spectroscopy Cryo-EM – Cryo-electron microscopy Cryo-SEM – Cryo-scanning electron microscopy CV – Cyclic voltammetry D DE(T)A – Dielectric thermal analysis dHvA – De Haas–van Alphen effect DIC – Differential interference contrast microscopy Dielectric spectroscopy DLS – Dynamic light scattering DLTS – Deep-level transient spectroscopy DMA – Dynamic mechanical analysis DPI – Dual polarisation interferometry DRS – Diffuse reflection spectroscopy DSC – Differential scanning calorimetry DTA – Differential thermal analysis DVS – Dynamic vapour sorption E EBIC – Electron beam induced current (see IBIC: ion beam induced charge) EBS – Elastic (non-Rutherford) backscatterin
https://en.wikipedia.org/wiki/Scenario%20%28computing%29
In computing, a scenario is a narrative of foreseeable interactions of user roles (known in the Unified Modeling Language as 'actors') and the technical system, which usually includes computer hardware and software. A scenario has a goal, which is usually functional. A scenario describes one way that a system is used, or is envisaged to be used, in the context of an activity in a defined time-frame. The time-frame for a scenario could be (for example) a single transaction; a business operation; a day or other period; or the whole operational life of a system. Similarly the scope of a scenario could be (for example) a single system or a piece of equipment; an equipped team or a department; or an entire organization. Scenarios are frequently used as part of the system development process. They are typically produced by usability or marketing specialists, often working in concert with end users and developers. Scenarios are written in plain language, with minimal technical details, so that stakeholders (designers, usability specialists, programmers, engineers, managers, marketing specialists, etc.) can have a common ground to focus their discussions. Increasingly, scenarios are used directly to define the wanted behaviour of software: replacing or supplementing traditional functional requirements. Scenarios are often defined in use cases, which document alternative and overlapping ways of reaching a goal. Types of scenario in system development Many types of scenario are in use in system development. Alexander and Maiden list the following types: Story: "a narrated description of a causally connected sequence of events, or of actions taken". Brief User stories are written in the Agile style of software development. Situation, Alternative World: "a projected future situation or snapshot". This meaning is common in planning, but less usual in software development. Simulation: use of models to explore and animate 'Stories' or 'Situations', to
https://en.wikipedia.org/wiki/Continuum%20limit
In mathematical physics and mathematics, the continuum limit or scaling limit of a lattice model refers to its behaviour in the limit as the lattice spacing goes to zero. It is often useful to use lattice models to approximate real-world processes, such as Brownian motion. Indeed, according to Donsker's theorem, the discrete random walk would, in the scaling limit, approach the true Brownian motion. Terminology The term continuum limit mostly finds use in the physical sciences, often in reference to models of aspects of quantum physics, while the term scaling limit is more common in mathematical use. Application in quantum field theory A lattice model that approximates a continuum quantum field theory in the limit as the lattice spacing goes to zero may correspond to finding a second order phase transition of the model. This is the scaling limit of the model. See also Universality classes References H. E. Stanley, Introduction to Phase Transitions and Critical Phenomena H. Kleinert, Gauge Fields in Condensed Matter, Vol. I, " SUPERFLOW AND VORTEX LINES", pp. 1–742, Vol. II, "STRESSES AND DEFECTS", pp. 743–1456, World Scientific (Singapore, 1989); Paperback (also available online: Vol. I and Vol. II) H. Kleinert and V. Schulte-Frohlinde, Critical Properties of φ4-Theories, World Scientific (Singapore, 2001); Paperback (also available online) Lattice models Lattice field theory Renormalization group Critical phenomena Articles containing video clips
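As a small, self-contained illustration of the scaling limit mentioned above (the parameters below are illustrative choices, not taken from the text), the following sketch builds a ±1 random walk and rescales it by 1/n in time and 1/√n in space; by Donsker's theorem, the law of the rescaled path approaches that of Brownian motion on [0, 1] as the lattice spacing goes to zero.

```python
# Rescaled simple random walk: W_n(k/n) = S_k / sqrt(n). As n grows (lattice
# spacing 1/n -> 0), the distribution of the path converges to Brownian motion.
import math
import random

def rescaled_random_walk(n_steps: int, seed: int = 0) -> list[float]:
    rng = random.Random(seed)
    position = 0
    path = [0.0]
    for _ in range(n_steps):
        position += rng.choice((-1, 1))          # one lattice step of +/-1
        path.append(position / math.sqrt(n_steps))
    return path

coarse = rescaled_random_walk(100)        # visibly jagged lattice path
fine = rescaled_random_walk(100_000)      # already close to a Brownian path
```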
https://en.wikipedia.org/wiki/Low%20Pin%20Count
The Low Pin Count (LPC) bus is a computer bus used on IBM-compatible personal computers to connect low-bandwidth devices to the CPU, such as the BIOS ROM (BIOS ROM was moved to the Serial Peripheral Interface (SPI) bus in 2006), "legacy" I/O devices (integrated into Super I/O, Embedded Controller, CPLD, and/or IPMI chip), and Trusted Platform Module (TPM). "Legacy" I/O devices usually include serial and parallel ports, PS/2 keyboard, PS/2 mouse, and floppy disk controller. Most PC motherboards with an LPC bus have either a Platform Controller Hub (PCH) or a southbridge chip, which acts as the host and controls the LPC bus. All other devices connected to the physical wires of the LPC bus are peripherals. Overview The LPC bus was introduced by Intel in 1998 as a software-compatible substitute for the Industry Standard Architecture (ISA) bus. It resembles ISA to software, although physically it is quite different. The ISA bus has a 16-bit data bus and a 24-bit address bus that can be used for both 16-bit I/O port addresses and 24-bit memory addresses; both run at speeds up to 8.33 MHz. The LPC bus uses a heavily multiplexed four-bit-wide bus operating at four times the clock speed (33.3 MHz) to transfer addresses and data with similar performance. LPC's main advantage is that the basic bus requires only seven signals, greatly reducing the number of pins required on peripheral chips. An integrated circuit using LPC will need 30 to 72 fewer pins than its ISA equivalent. It is also easier to route on modern motherboards, which are often quite crowded. The clock rate was chosen to match that of PCI in order to further ease integration. Also, LPC is intended to be a motherboard-only bus. There is no standardized connector in common use, though Intel defines one for use for debug modules, and few LPC peripheral daughterboards are available, including Trusted Platform Modules (TPMs) with a TPM daughterboard whose pinout is proprietary to the motherboard vendor as wel
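The throughput equivalence claimed above can be checked with a quick calculation: a 4-bit bus clocked at 33.3 MHz moves raw bits at about the same rate as a 16-bit bus at 8.33 MHz. The sketch below ignores protocol overhead (address/data multiplexing, turnaround cycles), so it is only a first-order comparison.

```python
# First-order comparison of raw bit rates for ISA and LPC, using the bus widths and
# clock rates quoted above; multiplexing and protocol overhead are ignored.
def raw_bitrate_mbit_per_s(width_bits: int, clock_mhz: float) -> float:
    return width_bits * clock_mhz

isa_data = raw_bitrate_mbit_per_s(16, 8.33)   # ~133 Mbit/s
lpc = raw_bitrate_mbit_per_s(4, 33.3)         # ~133 Mbit/s
print(isa_data, lpc)
```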
https://en.wikipedia.org/wiki/Arbitrated%20loop
The arbitrated loop, also known as FC-AL, is a Fibre Channel topology in which devices are connected in a one-way loop fashion in a ring topology. Historically it was a lower-cost alternative to a fabric topology. It allowed connection of many servers and computer storage devices without using Fibre Channel switches, which were very costly at the time. The cost of the switches dropped considerably, so by 2007, FC-AL had become rare in server-to-storage communication. It is, however, still common within storage systems. It is a serial architecture that can be used as the transport layer in a SCSI network, with up to 127 devices. The loop may connect into a fibre channel fabric via one of its ports. The bandwidth on the loop is shared among all ports. Only two ports may communicate at a time on the loop. One port wins arbitration and may open one other port in either half or full duplex mode. A loop with two ports is valid and has the same physical topology as point-to-point, but still acts as a loop protocol-wise. Fibre Channel ports capable of arbitrated loop communication are NL_port (node loop port) and FL_port (fabric loop port), collectively referred to as the L_ports. The ports may attach to each other via a hub, with cables running from the hub to the ports. The physical connectors on the hub are not ports in terms of the protocol. A hub does not contain ports. An arbitrated loop with no fabric port (with only NL_ports) is a private loop. An arbitrated loop connected to a fabric (through an FL_port) is a public loop. An NL_Port must provide fabric logon (FLOGI) and name registration facilities to initiate communication with other nodes through the fabric (to be an initiator). Arbitrated loop can be physically cabled in a ring fashion or using a hub. The physical ring ceases to work if one of the devices in the chain fails. The hub on the other hand, while maintaining a logical ring, allows a star topology on the cable level. Each receive port on the hub is simply p
https://en.wikipedia.org/wiki/Switched%20fabric
Switched fabric or switching fabric is a network topology in which network nodes interconnect via one or more network switches (particularly crossbar switches). Because a switched fabric network spreads network traffic across multiple physical links, it yields higher total throughput than broadcast networks, such as the early 10BASE5 version of Ethernet and most wireless networks such as Wi-Fi. The generation of high-speed serial data interconnects that appeared in 2001–2004, which provided point-to-point connectivity between processor and peripheral devices, is sometimes referred to as fabrics; however, these interconnects lack features such as a message-passing protocol. For example, HyperTransport, the computer processor interconnect technology, continues to maintain a processor bus focus even after adopting a higher speed physical layer. Similarly, PCI Express is just a serial version of PCI; it adheres to PCI's host/peripheral load/store direct memory access (DMA)-based architecture on top of a serial physical and link layer. Fibre Channel In the Fibre Channel Switched Fabric (FC-SW-6) topology, devices are connected to each other through one or more Fibre Channel switches. While this topology has the best scalability of the three FC topologies (the other two are Arbitrated Loop and point-to-point), it is the only one requiring switches, which are costly hardware devices. Visibility among devices (called nodes) in a fabric is typically controlled with Fibre Channel zoning. Multiple switches in a fabric usually form a mesh network, with devices being on the "edges" ("leaves") of the mesh. Most Fibre Channel network designs employ two separate fabrics for redundancy. The two fabrics share the edge nodes (devices), but are otherwise unconnected. One of the advantages of such a setup is the capability of failover, meaning that in case one link breaks or a fabric goes out of order, datagrams can be sent via the second fabric. The fabric topology allows the connection of up to the
https://en.wikipedia.org/wiki/Timeline%20of%20solar%20cells
In the 19th century, it was observed that the sunlight striking certain materials generates detectable electric current – the photoelectric effect. This discovery laid the foundation for solar cells. Solar cells have gone on to be used in many applications. They have historically been used in situations where electrical power from the grid was unavailable. One of their earliest prominent applications was powering satellites: a satellite orbiting the Earth can generate electricity from the sunlight falling on its cells, and solar cells are still commonly used in satellites today. 1800s 1839 - Edmond Becquerel observes the photovoltaic effect via an electrode in a conductive solution exposed to light. 1873 - Willoughby Smith finds that selenium shows photoconductivity. 1874 - James Clerk Maxwell writes to fellow mathematician Peter Tait of his observation that light affects the conductivity of selenium. 1877 - William Grylls Adams and Richard Evans Day observed the photovoltaic effect in solidified selenium, and published a paper on the selenium cell. 'The action of light on selenium,' in "Proceedings of the Royal Society, A25, 113. 1883 - Charles Fritts develops a solar cell using selenium on a thin layer of gold to form a device giving less than 1% efficiency. 1887 - Heinrich Hertz investigates ultraviolet light photoconductivity and discovers the photoelectric effect 1887 - James Moser reports dye sensitized photoelectrochemical cell. 1888 - Edward Weston receives patent US389124, "Solar cell," and US389125, "Solar cell." 1888–91 - Aleksandr Stoletov creates the first solar cell based on the outer photoelectric effect 1894 - Melvin Severy receives patent US527377, "Solar cell," and US527379, "Solar cell." 1897 - Harry Reagan receives patent US588177, "Solar cell." 1899 - Weston Bowser receives patent US598177, "Solar storage." 1900–1929 1901 - Philipp von Lenard
https://en.wikipedia.org/wiki/Structure%20of%20Management%20Information
In computing, the Structure of Management Information (SMI), an adapted subset of ASN.1, is a technical language used in definitions of the Simple Network Management Protocol (SNMP) and its extensions to define sets ("modules") of related managed objects in a Management Information Base (MIB). SMI subdivides into three parts: module definitions, object definitions, and notification definitions. Module definitions are used when describing information modules. An ASN.1 macro, MODULE-IDENTITY, is used to concisely convey the semantics of an information module. Object definitions describe managed objects. An ASN.1 macro, OBJECT-TYPE, is used to concisely convey the syntax and semantics of a managed object. Notification definitions (also known as "traps") are used when describing unsolicited transmissions of management information. An ASN.1 macro, NOTIFICATION-TYPE, concisely conveys the syntax and semantics of a notification. Implementations libsmi, a C library for accessing MIB information References External links Standard 58, Conformance Statements for SMIv2 Standard 58, Textual Conventions for SMIv2 Standard 58, Structure of Management Information Version 2 (SMIv2) Network management Data modeling ASN.1
https://en.wikipedia.org/wiki/Volume%20element
In mathematics, a volume element provides a means for integrating a function with respect to volume in various coordinate systems such as spherical coordinates and cylindrical coordinates. Thus a volume element is an expression of the form dV = ρ(u1, ..., un) du1 ⋯ dun, where the ui are the coordinates, so that the volume of any set B can be computed by Volume(B) = ∫B ρ(u1, ..., un) du1 ⋯ dun. For example, in spherical coordinates (with r the radius and θ the polar angle), ρ = r² sin θ, and so dV = r² sin θ dr dθ dφ. The notion of a volume element is not limited to three dimensions: in two dimensions it is often known as the area element, and in this setting it is useful for doing surface integrals. Under changes of coordinates, the volume element changes by the absolute value of the Jacobian determinant of the coordinate transformation (by the change of variables formula). This fact allows volume elements to be defined as a kind of measure on a manifold. On an orientable differentiable manifold, a volume element typically arises from a volume form: a top degree differential form. On a non-orientable manifold, the volume element is typically the absolute value of a (locally defined) volume form: it defines a 1-density. Volume element in Euclidean space In Euclidean space, the volume element is given by the product of the differentials of the Cartesian coordinates: dV = dx dy dz. In different coordinate systems of the form x = x(u1, u2, u3), y = y(u1, u2, u3), z = z(u1, u2, u3), the volume element changes by the Jacobian (determinant) of the coordinate change: dV = |∂(x, y, z)/∂(u1, u2, u3)| du1 du2 du3. For example, in spherical coordinates (with θ the polar angle, so that x = r sin θ cos φ, y = r sin θ sin φ, z = r cos θ) the Jacobian determinant is r² sin θ, so that dV = r² sin θ dr dθ dφ. This can be seen as a special case of the fact that differential forms transform through a pullback F as F*(u dy1 ∧ ⋯ ∧ dyn) = (u ∘ F) det(∂Fj/∂xi) dx1 ∧ ⋯ ∧ dxn. Volume element of a linear subspace Consider the linear subspace of the n-dimensional Euclidean space Rn that is spanned by a collection of linearly independent vectors X1, ..., Xk. To find the volume element of the subspace, it is useful to know the fact from linear algebra that the volume of the parallelepiped spanned by the Xi is the square root of the determinant of the Gramian matrix of the Xi: √det(Xi · Xj), i, j = 1, ..., k. Any point p in the subsp
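As a worked instance of the Jacobian rule above (using the same convention, with θ the polar angle), the spherical-coordinate volume element follows from a direct determinant computation:

```latex
% Spherical coordinates x = r sin(theta) cos(phi), y = r sin(theta) sin(phi),
% z = r cos(theta), with theta the polar angle.
\[
\frac{\partial(x,y,z)}{\partial(r,\theta,\varphi)}
= \det\begin{pmatrix}
\sin\theta\cos\varphi & r\cos\theta\cos\varphi & -r\sin\theta\sin\varphi \\
\sin\theta\sin\varphi & r\cos\theta\sin\varphi & \phantom{-}r\sin\theta\cos\varphi \\
\cos\theta & -r\sin\theta & 0
\end{pmatrix}
= r^{2}\sin\theta,
\qquad
dV = r^{2}\sin\theta\, dr\, d\theta\, d\varphi .
\]
```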
https://en.wikipedia.org/wiki/Contraflexure
In solid mechanics, a point along a beam under a lateral load is known as a point of contraflexure if the bending moment about the point equals zero. In a bending moment diagram, it is the point at which the bending moment curve intersects with the zero line (i.e. where the bending moment reverses direction along the beam). Knowing the location of the point of contraflexure is especially useful when designing reinforced concrete or structural steel beams, and also when designing bridges. Flexural reinforcement may be reduced at this point. However, omitting reinforcement at the point of contraflexure entirely is inadvisable, as the actual location can rarely be determined with confidence. Additionally, an adequate quantity of reinforcement should extend beyond the point of contraflexure to develop bond strength and to facilitate shear force transfer. See also Deformation Engineering mechanics Flexural rigidity Flexural stress Fluid mechanics Inflection point Strength of materials References Solid mechanics
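As a minimal worked example (an assumed textbook case, not one discussed in the text above), the sketch below locates the point of contraflexure for a propped cantilever of span L under a uniformly distributed load w, fixed at x = 0 and simply supported at x = L; with reaction 5wL/8 at the fixed end and fixed-end moment wL²/8, the bending moment is M(x) = −wL²/8 + (5wL/8)x − wx²/2, which changes sign at x = L/4.

```python
# Locate the point of contraflexure for a propped cantilever with a uniformly
# distributed load (assumed example). sympy solves M(x) = 0 symbolically.
import sympy as sp

x, w, L = sp.symbols("x w L", positive=True)
M = -w * L**2 / 8 + sp.Rational(5, 8) * w * L * x - w * x**2 / 2

roots = sp.solve(sp.Eq(M, 0), x)
print(roots)  # x = L/4 (the point of contraflexure) and x = L (the simple support)
```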
https://en.wikipedia.org/wiki/Leonard%20Mlodinow
Leonard Mlodinow (; November 26, 1954) is an American theoretical physicist and mathematician, screenwriter and author. In physics, he is known for his work on the large N expansion, a method of approximating the spectrum of atoms based on the consideration of an infinite-dimensional version of the problem, and for his work on the quantum theory of light inside dielectrics. He has also written books for the general public, five of which have been New York Times best-sellers, including The Drunkard's Walk: How Randomness Rules Our Lives, which was chosen as a New York Times notable book, and short-listed for the Royal Society Science Book Prize; The Grand Design, co-authored with Stephen Hawking, which argues that invoking God is not necessary to explain the origins of the universe; War of the Worldviews, co-authored with Deepak Chopra; and Subliminal: How Your Unconscious Mind Rules Your Behavior, which won the 2013 PEN/E. O. Wilson Literary Science Writing Award. He also makes public lectures and media appearances on programs including Morning Joe and Through the Wormhole, and debated Deepak Chopra on ABC's Nightline. Biography Mlodinow was born in Chicago, Illinois, of parents who were both Holocaust survivors. His father, who spent more than a year in the Buchenwald concentration camp, had been a leader in the Jewish resistance in his hometown of Częstochowa, in Nazi Germany-occupied Poland. As a child, Mlodinow was interested in both mathematics and chemistry, and while in high school was tutored in organic chemistry by a professor from the University of Illinois. As recounted in his book Feynman's Rainbow, his interest turned to physics during a semester he took off from college to spend on a kibbutz in Israel, during which he had little to do at night besides reading The Feynman Lectures on Physics, which was one of the few English books he found in the kibbutz library. Mlodinow completed his doctorate at the University of California, Berkeley. It was in th
https://en.wikipedia.org/wiki/Mcrypt
mcrypt is a replacement for the popular Unix crypt command. crypt was a file encryption tool that used an algorithm very close to the World War II Enigma cipher. Mcrypt provides the same functionality but uses several modern algorithms such as AES. Libmcrypt, Mcrypt's companion, is a library of code that contains the actual encryption functions and provides an easy method for use. The last update to libmcrypt was in 2007, despite years of unmerged patches. Maintained alternatives include ccrypt, libressl, and others. Examples of mcrypt usage in a Linux command-line environment: mcrypt --list # See available encryption algorithms. mcrypt -a blowfish myfilename # Encrypts myfilename to myfilename.nc # using the Blowfish encryption algorithm. # You are prompted two times for a passphrase. mcrypt -d mytextfile.txt.nc # Decrypts mytextfile.txt.nc to mytextfile.txt. mcrypt -V -d -a enigma -o scrypt --bare # Can en/decrypt files crypted with SunOS crypt. mcrypt --help It implements numerous cryptographic algorithms, mostly block ciphers and stream ciphers, some of which fall under export restrictions in the United States. Algorithms include DES, Blowfish, ARCFOUR, Enigma, GOST, LOKI97, RC2, Serpent, Threeway, Twofish, WAKE, and XTEA. See also bcrypt crypt (Unix) ccrypt scrypt References External links The original mcrypt homepage MCrypt homepage MCrypt development site Cryptographic software
https://en.wikipedia.org/wiki/Grundig
Grundig ( , ) is a Turkish consumer electronics manufacturer owned by the Arçelik A.Ş., the white goods (major appliance) manufacturer of Turkish conglomerate Koç Holding. The company made domestic appliances and personal-care products. Originally a German consumer electronic company, Grundig GmbH was founded in 1945 by Max Grundig and eventually headquartered in Nuremberg. It grew to become one of the leading radio, TV, recorder and other electronics goods manufacturers of Europe in the following decades of the 20th century. In the 1970s, Philips began acquiring Grundig AG's shares, leading to complete control in 1993. In 1998, Philips divested Grundig. In 2007, Koç Holding bought Grundig and put the brand under its home-appliances subsidiary Arcelik A.Ş. Koç is a publicly listed conglomerate with more than 80,000 employees. History Grundig began in 1945 with the establishment of a store named Fürth, Grundig & Wurzer (RVF), which sold radios and was headquartered in Fürth, northern Bavaria. After the Second World War, Max Grundig recognized the need for radios in Germany, and in 1947 produced a kit, while a factory and administration centre were built at Fürth. In 1951, the first television sets were manufactured at the new facility. At the time Grundig was the largest radio manufacturer in Europe. Divisions were established in Nuremberg, Frankfurt and Karlsruhe. In 2013, Grundig launched its home appliances (white goods) product range, becoming one of the mainstream manufacturers in Europe. Parent Arcelik A.Ş., has more than 27,000 employees worldwide. Grundig has manufacturing plants in several European cities that deliver their products to more than 65 countries around the world. 1940s Grundig started as a typical German company in 1945. Its early notability was due to Grundig radio. Max Grundig, a radio dealer, built a machine called "Heinzelmann", which was a radio that came without thermionic valves and as a do-it-yourself kit to circumvent post war rule
https://en.wikipedia.org/wiki/Material%20efficiency
Material efficiency is a description or metric (Mp, the ratio of material used to material supplied) which refers to decreasing the amount of a particular material needed to produce a specific product. Making a usable item out of thinner stock than a prior version increases the material efficiency of the manufacturing process. Material efficiency goes hand in hand with Green building and Energy conservation, as well as other ways of incorporating Renewable resources in the building process from start to finish. The motivations for material efficiency include reducing energy demand, reducing Greenhouse gas emissions, and reducing other environmental impacts such as land use, water scarcity, air pollution, water pollution, and waste management. With a growing population and increasing wealth, demand for material extraction and processing will likely double in the next 40 years. The environmental impacts of the required processing will become critical in the transition to a sustainable future. Material efficiency aims to reduce the impacts associated with material consumption. Some technical strategies include increasing the life of existing products, using them more fully, re-using components to avoid waste, or reducing the amount of material through a lightweight product design. For example, making a usable item out of thinner stock than a prior version increases the material efficiency of the manufacturing process. Increasing material efficiency is a crucial opportunity to achieve the 1.5 °C goal of the Paris Agreement. Manufacturing Material efficiency in manufacturing refers to reducing the amount of raw materials used for manufacturing a product, generating less waste per product, and improving waste management. Commonly used building materials such as steel, reinforced concrete, and aluminum release greenhouse gases during production. In 2015, materials manufacturing for building construction was responsible for 11% of global energy-related emissions. The largest ma
https://en.wikipedia.org/wiki/Stephen%20L.%20Adler
Stephen Louis Adler (born November 30, 1939) is an American physicist specializing in elementary particles and field theory. He is currently professor emeritus in the school of natural sciences at the Institute for Advanced Study in Princeton, New Jersey. Biography Adler was born in New York City. He received an A.B. degree at Harvard University in 1961, where he was a Putnam Fellow in 1959, and a Ph.D. from Princeton University in 1964. Adler completed his doctoral dissertation, titled High energy neutrino reactions and conservations hypotheses, under the supervision of Sam Treiman. He is the son of Irving Adler, an American author, teacher, mathematician, scientist and political activist, and Ruth Adler and older brother of Peggy Adler. Adler became a member of the Institute for Advanced Study in 1966, becoming a full professor of theoretical physics in 1969, and was named "New Jersey Albert Einstein Professor" at the institute in 1979. He was elected a member of the American Academy of Arts and Sciences in 1974, and a member of the National Academy of Sciences in 1975. He has won the J. J. Sakurai Prize from the American Physical Society in 1988, and the Dirac Medal of the International Centre for Theoretical Physics in 1998, among other awards. Adler's seminal papers on high energy neutrino processes, current algebra, soft pion theorems, sum rules, and perturbation theory anomalies helped lay the foundations for the current standard model of elementary particle physics. In 2012, Adler contributed to a family venture when he wrote the foreword for his then 99-year-old father's 87th book, Solving the Riddle of Phyllotaxis: Why the Fibonacci Numbers and the Golden Ratio Occur on Plants. The book's diagrams are by his sister Peggy. Trace dynamics In his book Quantum Theory as an Emergent Phenomenon, published 2004, Adler presented his trace dynamics, a framework in which quantum field theory emerges from a matrix theory. In this matrix theory, particles are
https://en.wikipedia.org/wiki/The%20Pirate%20Bay
The Pirate Bay (sometimes abbreviated as TPB) is an online index of digital content of entertainment media and software. Founded in 2003 by Swedish think tank Piratbyrån, The Pirate Bay allows visitors to search, download, and contribute magnet links and torrent files, which facilitate peer-to-peer file sharing among users of the BitTorrent protocol. The Pirate Bay has sparked controversies and discussion about legal aspects of file sharing, copyright, and civil liberties and has become a platform for political initiatives against established intellectual property laws as well as a central figure in an anti-copyright movement. The website has faced several shutdowns and domain seizures, switching to a series of new web addresses to continue operating. In April 2009, the website's founders (Peter Sunde, Fredrik Neij, and Gottfrid Svartholm) were found guilty in the Pirate Bay trial in Sweden for assisting in copyright infringement and were sentenced to serve one year in prison and pay a fine. In some countries, Internet service providers (ISPs) have been ordered to block access to the website. Subsequently, proxy websites have been providing access to it. Founders Svartholm, Neij, and Sunde were all released by 2015 after serving shortened sentences. History The Pirate Bay was established in September 2003 by the Swedish anti-copyright organisation Piratbyrån; it has been run as a separate organisation since October 2004. The Pirate Bay was first run by Gottfrid Svartholm and Fredrik Neij, who are known by their nicknames "anakata" and "TiAMO", respectively. They have both been accused of "assisting in making copyrighted content available" by the Motion Picture Association of America. On 31 May 2006, the website's servers in Stockholm were raided and taken away by Swedish police, leading to three days of downtime. The Pirate Bay claims to be a non-profit entity based in the Seychelles; however, this is disputed. The Pirate Bay has been involved in a number o
https://en.wikipedia.org/wiki/Fire%20ecology
Fire ecology is a scientific discipline concerned with the effects of fire on natural ecosystems. Many ecosystems, particularly prairie, savanna, chaparral and coniferous forests, have evolved with fire as an essential contributor to habitat vitality and renewal. Many plant species in fire-affected environments use fire to germinate, establish, or to reproduce. Wildfire suppression not only endangers these species, but also the animals that depend upon them. Wildfire suppression campaigns in the United States have historically molded public opinion to believe that wildfires are harmful to nature. Ecological research has shown, however, that fire is an integral component in the function and biodiversity of many natural habitats, and that the organisms within these communities have adapted to withstand, and even to exploit, natural wildfire. More generally, fire is now regarded as a 'natural disturbance', similar to flooding, windstorms, and landslides, that has driven the evolution of species and controls the characteristics of ecosystems. Fire suppression, in combination with other human-caused environmental changes, may have resulted in unforeseen consequences for natural ecosystems. Some large wildfires in the United States have been blamed on years of fire suppression and the continuing expansion of people into fire-adapted ecosystems as well as climate change. Land managers are faced with tough questions regarding how to restore a natural fire regime, but allowing wildfires to burn is likely the least expensive and most effective method in many situations. History Fire has played a major role in shaping the world's vegetation. The biological process of photosynthesis began to concentrate the atmospheric oxygen needed for combustion during the Devonian approximately 350 million years ago. Then, approximately 125 million years ago, fire began to influence the habitat of land plants. In the 20th century ecologist Charles Cooper made a plea for fire as an eco
https://en.wikipedia.org/wiki/Screen%20burn-in
Screen burn-in, image burn-in, ghost image, or shadow image, is a permanent discoloration of areas on an electronic display such as a cathode ray tube (CRT) in an old computer monitor or television set. It is caused by cumulative non-uniform use of the screen. Newer liquid-crystal displays (LCDs) may suffer from a phenomenon called image persistence instead, which is not permanent. One way to combat screen burn-in was the use of screensavers, which would move an image around to ensure that no one area of the screen remained illuminated for too long. Causes With phosphor-based electronic displays (for example CRT-type computer monitors, oscilloscope screens or plasma displays), non-uniform use of specific areas, such as prolonged display of non-moving images (text or graphics), repetitive contents in gaming graphics, or certain broadcasts with tickers and flags, can create a permanent ghost-like image of these objects or otherwise degrade image quality. This is because the phosphor compounds which emit light to produce images lose their luminance with use. This wear results in uneven light output over time, and in severe cases can create a ghost image of previous content. Even if ghost images are not recognizable, the effects of screen burn are an immediate and continual degradation of image quality. The length of time required for noticeable screen burn to develop varies due to many factors, ranging from the quality of the phosphors employed, to the degree of non-uniformity of sub-pixel use. It can take as little as a few weeks for noticeable ghosting to set in, especially if the screen displays a certain image (example: a menu bar at the top or bottom of the screen) constantly and displays it continually over time. In the rare case when horizontal or vertical deflection circuits fail, all output energy is concentrated to a vertical or horizontal line on the display which causes almost instant screen burn. CRT Phosphor burn-in is particularly prevalent with mo
https://en.wikipedia.org/wiki/Deutsches%20Forschungsnetz
Deutsches Forschungsnetz ("German Research Network"), usually abbreviated to DFN, is the German national research and education network (NREN) used for academic and research purposes. It is managed by the scientific community organized in the voluntary Association to Promote a German Education and Research Network (Verein zur Förderung eines Deutschen Forschungsnetzes e.V.) which was founded in 1984 by universities, non-university research institutions and research-oriented companies to stimulate computerized communication in Germany. DFN's "super core" backbone X-WiN network points of presence are - for example - based in Erlangen, Frankfurt, Hannover and Potsdam with more than 70 locations and can route up to 1TBit/s with over 10000 km of dedicated fibre connections. Many connections to other networks such as GÉANT2 or DECIX are 100G-based and are implemented at the super core. Today connections up to 200GBit are possible. Networks run by DFN e.V. WiN is short for Wissenschaftsnetz ("science network"). WiN (1989–1998) ERWIN (1990-1992) B-WiN (1996-2001) G-WiN (Gigabit-Wissenschaftsnetz) (2000-2005) X-WiN (since 2006) References External links Education in Germany Internet in Germany Internet mirror services National research and education networks Research institutes in Germany
https://en.wikipedia.org/wiki/GoBack
Norton GoBack (previously WildFile GoBack, Adaptec GoBack, and Roxio GoBack) is a disk utility for Microsoft Windows that can record up to 8 GB of disk changes. When the filesystem is idle for a few seconds, it marks these as "safe points". The product allows the disk drive to be restored to any point within the available history. It also allows older versions of files to be restored, and previous versions of the whole disk to be browsed. Depending on disk activity, the typical history might cover a few hours to a few days. Operation GoBack replaces the master boot record, and also replaces the partition table with a single partition. This allows a hard drive to be changed back, even in the event that the operating system is unable to boot, while also protecting the filesystem from alteration so that the revert information remains correct. GoBack is compatible with hardware RAID drives. Incompatible products Due to the changes made to the partition table, this can cause problems when dual booting other operating systems on the same hard disk. It is possible to retain dual-boot compatibility, but can involve saving the partition table before enabling GoBack, and after enabling GoBack, re-writing the partition table back to the disk (after booting from a different device, such as a Live CD). It may also be necessary to disable GoBack prior to using certain low level disk utilities, such as formatting software. Such low level utilities which do not first check for the presence of GoBack can (in combination with GoBack) cause data to become corrupted. Another example of an incompatible program is the Windows version of True Image For Windows. GoBack is also incompatible with products such as Drive Vaccine PC Restore and RollBack Rx - as these products require access to the master boot record. Usage precaution Several users at CNET Reviews have reported data loss after installing this product. CNET gave Norton GoBack 4.0 an editors' rating of 3.5 out of five star
https://en.wikipedia.org/wiki/Topological%20skeleton
In shape analysis, the skeleton (or topological skeleton) of a shape is a thin version of that shape that is equidistant to its boundaries. The skeleton usually emphasizes geometrical and topological properties of the shape, such as its connectivity, topology, length, direction, and width. Together with the distance of its points to the shape boundary, the skeleton can also serve as a representation of the shape (they contain all the information necessary to reconstruct the shape). Skeletons have several different mathematical definitions in the technical literature, and there are many different algorithms for computing them. Several variants of the skeleton can also be found, including straight skeletons, morphological skeletons, etc. In the technical literature, the concepts of skeleton and medial axis are used interchangeably by some authors, while some other authors regard them as related, but not the same. Similarly, the concepts of skeletonization and thinning are also regarded as identical by some, and not by others. Skeletons are widely used in computer vision, image analysis, pattern recognition and digital image processing for purposes such as optical character recognition, fingerprint recognition, visual inspection or compression. Within the life sciences, skeletons have found extensive use in characterizing protein folding and plant morphology on various biological scales. Mathematical definitions Skeletons have several different mathematical definitions in the technical literature; most of them lead to similar results in continuous spaces, but usually yield different results in discrete spaces. Quench points of the fire propagation model In his seminal paper, Harry Blum of the Air Force Cambridge Research Laboratories at Hanscom Air Force Base, in Bedford, Massachusetts, defined a medial axis for computing a skeleton of a shape, using an intuitive model of fire propagation on a grass field, where the field has the form of the given shape. If one "sets
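A minimal sketch of computing a discrete skeleton in practice is shown below, using scikit-image's skeletonize (one of many thinning-style algorithms; the choice of library and the toy rectangular shape are assumptions of this illustration, not something prescribed by the text).

```python
# Skeletonize a small binary shape with scikit-image; the input shape is a toy example.
import numpy as np
from skimage.morphology import skeletonize

shape = np.zeros((40, 80), dtype=bool)
shape[10:30, 10:70] = True        # a solid rectangle
shape[18:22, 35:45] = False       # punch a hole, changing the topology

skeleton = skeletonize(shape)     # one-pixel-wide medial curves of the shape
print(int(skeleton.sum()), "skeleton pixels")
```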
https://en.wikipedia.org/wiki/AND%20gate
The AND gate is a basic digital logic gate that implements logical conjunction (∧) from mathematical logic. The AND gate behaves according to its truth table: a HIGH output (1) results only if all the inputs to the AND gate are HIGH (1); if not all inputs to the AND gate are HIGH, a LOW output results. The function can be extended to any number of inputs. Symbols There are three symbols for AND gates: the American (ANSI or 'military') symbol and the IEC ('European' or 'rectangular') symbol, as well as the deprecated DIN symbol. Additional inputs can be added as needed. For more information see the Logic gate symbols article. It can also be denoted by the symbol "^" or "&". The AND gate with inputs A and B and output C implements the logical expression C = A · B. This expression may also be denoted as C = A ∧ B or C = A & B. Implementations An AND gate can be designed using only N-channel (pictured) or P-channel MOSFETs, but is usually implemented with both (CMOS). The digital inputs a and b cause the output F to have the same result as the AND function. AND gates may be made from discrete components and are readily available as integrated circuits in several different logic families. Analytical representation f(a, b) = a·b is the analytical representation of the AND gate. Alternatives If no specific AND gates are available, one can be made from NAND or NOR gates, because NAND and NOR gates are "universal gates", meaning that they can be used to make all the others. See also OR gate NOT gate NAND gate NOR gate XOR gate XNOR gate IMPLY gate Boolean algebra Logic gate References Logic gates
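A tiny sketch of the analytical representation above: restricting the inputs to 0 and 1, ordinary multiplication reproduces the AND truth table.

```python
# f(a, b) = a * b acts as an AND gate on inputs restricted to {0, 1}.
def and_gate(a: int, b: int) -> int:
    return a * b

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", and_gate(a, b))  # output is 1 only when both inputs are 1
```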
https://en.wikipedia.org/wiki/OR%20gate
The OR gate is a digital logic gate that implements logical disjunction. The OR gate outputs "true" if any of its inputs are "true"; otherwise it outputs "false". The input and output states are normally represented by different voltage levels. Description Any OR gate can be constructed with two or more inputs. It outputs a 1 if any of these inputs are 1, or outputs a 0 only if all inputs are 0. The inputs and outputs are binary digits ("bits") which have two possible logical states. In addition to 1 and 0, these states may be called true and false, high and low, active and inactive, or other such pairs of symbols. Thus it performs a logical disjunction (∨) from mathematical logic. The gate can be represented with the plus sign (+) because it can be used for logical addition. Equivalently, an OR gate finds the maximum between two binary digits, just as the AND gate finds the minimum. Together with the AND gate and the NOT gate, the OR gate is one of three basic logic gates from which any Boolean circuit may be constructed. All other logic gates may be made from these three gates; any function in binary mathematics may be implemented with them. It is sometimes called the inclusive OR gate to distinguish it from XOR, the exclusive OR gate. The behavior of OR is the same as XOR except in the case of a 1 for both inputs. In situations where this never arises (for example, in a full-adder) the two types of gates are interchangeable. This substitution is convenient when a circuit is being implemented using simple integrated circuit chips which contain only one gate type per chip. Symbols There are two logic gate symbols currently representing the OR gate: the American (ANSI or 'military') symbol and the IEC ('European' or 'rectangular') symbol. The DIN symbol is deprecated. The "≥1" on the IEC symbol indicates that the output is activated by at least one active input. Hardware description and pinout OR gates are basic logic gates, and are available in TTL and CMO
https://en.wikipedia.org/wiki/Per%20Martin-L%C3%B6f
Per Erik Rutger Martin-Löf (; ; born 8 May 1942) is a Swedish logician, philosopher, and mathematical statistician. He is internationally renowned for his work on the foundations of probability, statistics, mathematical logic, and computer science. Since the late 1970s, Martin-Löf's publications have been mainly in logic. In philosophical logic, Martin-Löf has wrestled with the philosophy of logical consequence and judgment, partly inspired by the work of Brentano, Frege, and Husserl. In mathematical logic, Martin-Löf has been active in developing intuitionistic type theory as a constructive foundation of mathematics; Martin-Löf's work on type theory has influenced computer science. Until his retirement in 2009, Per Martin-Löf held a joint chair for Mathematics and Philosophy at Stockholm University. His brother Anders Martin-Löf is now emeritus professor of mathematical statistics at Stockholm University; the two brothers have collaborated in research in probability and statistics. The research of Anders and Per Martin-Löf has influenced statistical theory, especially concerning exponential families, the expectation-maximization method for missing data, and model selection. Per Martin-Löf received his PhD in 1970 from Stockholm University, under Andrey Kolmogorov. Martin-Löf is an enthusiastic bird-watcher; his first scientific publication was on the mortality rates of ringed birds. Randomness and Kolmogorov complexity In 1964 and 1965, Martin-Löf studied in Moscow under the supervision of Andrei N. Kolmogorov. He wrote a 1966 article The definition of random sequences that gave the first suitable definition of a random sequence. Earlier researchers such as Richard von Mises had attempted to formalize the notion of a test for randomness in order to define a random sequence as one that passed all tests for randomness; however, the precise notion of a randomness test was left vague. Martin-Löf's key insight was to use the theory of computation to define forma
https://en.wikipedia.org/wiki/Brouwer%E2%80%93Heyting%E2%80%93Kolmogorov%20interpretation
In mathematical logic, the Brouwer–Heyting–Kolmogorov interpretation, or BHK interpretation, of intuitionistic logic was proposed by L. E. J. Brouwer and Arend Heyting, and independently by Andrey Kolmogorov. It is also sometimes called the realizability interpretation, because of the connection with the realizability theory of Stephen Kleene. It is the standard explanation of intuitionistic logic. The interpretation The interpretation states what is intended to be a proof of a given formula. This is specified by induction on the structure of that formula: A proof of P ∧ Q is a pair <a, b> where a is a proof of P and b is a proof of Q. A proof of P ∨ Q is either <0, a> where a is a proof of P or <1, b> where b is a proof of Q. A proof of P → Q is a function f that converts a proof of P into a proof of Q. A proof of ∃x ∈ S: φ(x) is a pair <a, b> where a is an element of S and b is a proof of φ(a). A proof of ∀x ∈ S: φ(x) is a function f that converts an element a of S into a proof of φ(a). The formula ¬P is defined as P → ⊥, so a proof of it is a function f that converts a proof of P into a proof of ⊥. There is no proof of ⊥, the absurdity or bottom type (nontermination in some programming languages). The interpretation of a primitive proposition is supposed to be known from context. In the context of arithmetic, a proof of the formula s = t is a computation reducing the two terms to the same numeral. Kolmogorov followed the same lines but phrased his interpretation in terms of problems and solutions. To assert a formula is to claim to know a solution to the problem represented by that formula. For instance P → Q is the problem of reducing Q to P; to solve it requires a method to solve problem Q given a solution to problem P. Examples The identity function is a proof of the formula P → P, no matter what P is. The law of non-contradiction ¬(P ∧ ¬P) expands to (P ∧ (P → ⊥)) → ⊥: A proof of (P ∧ (P → ⊥)) → ⊥ is a function f that converts a proof of (P ∧ (P → ⊥)) into a proof of ⊥. A proof of (P ∧ (P → ⊥)) is a pair of proofs <a, b>, where a is a proof of P, and b is a proof of P → ⊥. A proof of P → ⊥ is a function that converts a proof of P into a proof of ⊥.
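Read through the Curry–Howard lens, the clauses above can be written as programs. The sketch below (illustrative Python with type hints, not part of the article; Python's type system is far weaker than a real proof checker, so this only conveys the shape of the correspondence) treats a proof of a conjunction as a pair, a proof of an implication as a function, the identity function as a proof of P → P, and the non-contradiction argument as function application:

```python
from typing import Callable, NoReturn, Tuple, TypeVar

P = TypeVar("P")
Q = TypeVar("Q")

# A proof of P -> P is the identity function, whatever P is.
def identity(p: P) -> P:
    return p

# A proof of P /\ Q is a pair; a projection recovers a proof of the first conjunct.
def fst(proof: Tuple[P, Q]) -> P:
    return proof[0]

# not P is read as P -> bottom; NoReturn stands in for the absurdity here.
# A proof of not(P /\ not P): given a pair <a, b> with a a proof of P and
# b a proof of not P, applying b to a yields (the impossible) proof of bottom.
def non_contradiction(pair: Tuple[P, Callable[[P], NoReturn]]) -> NoReturn:
    a, b = pair
    return b(a)
```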
https://en.wikipedia.org/wiki/Teletext
Teletext, or broadcast teletext, is a standard for displaying text and rudimentary graphics on suitably equipped television sets. Teletext sends data in the broadcast signal, hidden in the invisible vertical blanking interval area at the top and bottom of the screen. The teletext decoder in the television buffers this information as a series of "pages", each given a number. The user can display chosen pages using their remote control. In broad terms, it can be considered as Videotex, a system for the delivery of information to a user in a computer-like format, typically displayed on a television or a dumb terminal, but that designation is usually reserved for systems that provide bi-directional communication, such as Prestel or Minitel. Teletext was created in the United Kingdom in the early 1970s by John Adams, Philips' lead designer for video display units. Public teletext information services were introduced by major broadcasters in the UK, starting with the BBC's Ceefax service in 1974. It offered a range of text-based information, typically including news, weather and TV schedules. Also, paged subtitle (or closed captioning) information was transmitted using the same system. Similar systems were subsequently introduced by other television broadcasters in the UK and mainland Europe in the following years. Meanwhile, the UK's General Post Office introduced the Prestel system using the same display standards but run over telephone lines using bi-directional modems rather than the send-only system used with televisions. Teletext formed the basis for the World System Teletext standard (CCIR Teletext System B), an extended version of the original system. This standard saw widespread use across Europe starting in the 1980s, with almost all television sets including a decoder. Other standards were developed around the world, notably NABTS (CCIR Teletext System C) in the United States, Antiope (CCIR Teletext System A) in France and JTES (CCIR Teletext System D) in Ja
https://en.wikipedia.org/wiki/Naming%20convention%20%28programming%29
In computer programming, a naming convention is a set of rules for choosing the character sequence to be used for identifiers which denote variables, types, functions, and other entities in source code and documentation. Reasons for using a naming convention (as opposed to allowing programmers to choose any character sequence) include the following: To reduce the effort needed to read and understand source code; To enable code reviews to focus on issues more important than syntax and naming standards. To enable code quality review tools to focus their reporting mainly on significant issues other than syntax and style preferences. The choice of naming conventions can be a controversial issue, with partisans of each holding theirs to be the best and others to be inferior. Colloquially, this is said to be a matter of dogma. Many companies have also established their own set of conventions. Potential benefits Benefits of a naming convention can include the following: to provide additional information (i.e., metadata) about the use to which an identifier is put; to help formalize expectations and promote consistency within a development team; to enable the use of automated refactoring or search and replace tools with minimal potential for error; to enhance clarity in cases of potential ambiguity; to enhance the aesthetic and professional appearance of work product (for example, by disallowing overly long names, comical or "cute" names, or abbreviations); to help avoid "naming collisions" that might occur when the work product of different organizations is combined (see also: namespaces); to provide meaningful data to be used in project handovers which require submission of program source code and all relevant documentation; to provide better understanding in case of code reuse after a long interval of time. Challenges The choice of naming conventions (and the extent to which they are enforced) is often a contentious issue, with partisans holding their v
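A small illustration (Python; not from the article, and the convention names used are only examples) of the "automated refactoring and search-and-replace tools" benefit listed above: when identifiers follow a regular convention, mechanical renaming between styles becomes simple and safe.

```python
import re

def camel_to_snake(name: str) -> str:
    # Insert an underscore before each interior capital, then lower-case the result:
    # "parseHttpResponse" -> "parse_http_response".
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

def snake_to_camel(name: str) -> str:
    # Reverse transformation: keep the first word, capitalize the rest, drop underscores.
    head, *rest = name.split("_")
    return head + "".join(word.capitalize() for word in rest)

assert camel_to_snake("parseHttpResponse") == "parse_http_response"
assert snake_to_camel("parse_http_response") == "parseHttpResponse"
```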
https://en.wikipedia.org/wiki/Authentication%20server
An authentication server provides a network service that applications use to authenticate the credentials, usually account names and passwords, of their users. When a client submits a valid set of credentials, it receives a cryptographic ticket that it can subsequently use to access various services. Authentication is used as the basis for authorization, which is the determination whether a privilege may be granted to a particular user or process, privacy, which keeps information from becoming known to non-participants, and non-repudiation, which is the inability to deny having done something that was authorized to be done based on the authentication. Major authentication algorithms include passwords, Kerberos, and public key encryption. See also TACACS+ RADIUS Multi-factor authentication Universal 2nd Factor References External links "Server authentication". www.ibm.com. Retrieved 2023-09-05. Computer network security Servers (computing)
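A toy sketch (Python; purely illustrative, not Kerberos or any real protocol, and all names such as SERVER_KEY and USER_DB are invented for the example) of the ticket idea described above: the server checks the credentials once, then hands back a signed ticket that services can verify without ever seeing the password.

```python
import hashlib
import hmac
import json
import time
from typing import Optional

SERVER_KEY = b"shared-secret-between-auth-server-and-services"      # illustrative only
USER_DB = {"alice": hashlib.sha256(b"correct horse").hexdigest()}    # toy credential store

def issue_ticket(user: str, password: str) -> Optional[str]:
    # Authenticate the submitted credentials against the toy user database.
    if USER_DB.get(user) != hashlib.sha256(password.encode()).hexdigest():
        return None
    payload = json.dumps({"user": user, "expires": time.time() + 3600})
    tag = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + tag

def verify_ticket(ticket: str) -> bool:
    # A service checks the HMAC tag and the expiry instead of re-checking the password.
    payload, _, tag = ticket.rpartition(".")
    expected = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected) and json.loads(payload)["expires"] > time.time()

ticket = issue_ticket("alice", "correct horse")
assert ticket is not None and verify_ticket(ticket)
```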
https://en.wikipedia.org/wiki/Distributed%20File%20System%20%28Microsoft%29
Distributed File System (DFS) is a set of client and server services that allow an organization using Microsoft Windows servers to organize many distributed SMB file shares into a distributed file system. DFS has two components to its service: Location transparency (via the namespace component) and Redundancy (via the file replication component). Together, these components enable data availability in the case of failure or heavy load by allowing shares in multiple different locations to be logically grouped under one folder, the "DFS root". Microsoft's DFS is referred to interchangeably as 'DFS' and 'Dfs' by Microsoft and is unrelated to the DCE Distributed File System, which held the 'DFS' trademark but was discontinued in 2005. It is also called "MS-DFS" or "MSDFS" in some contexts, e.g. in the Samba user space project. Overview There is no requirement to use the two components of DFS together; it is perfectly possible to use the logical namespace component without using DFS file replication, and it is perfectly possible to use file replication between servers without combining them into one namespace. A DFS root can only exist on a server version of Windows (from Windows NT 4.0 and up) and OpenSolaris (in kernel space) or a computer running Samba (in user space.) The Enterprise and Datacenter Editions of Windows Server can host multiple DFS roots on the same server. OpenSolaris intends on supporting multiple DFS roots in "a future project based on Active Directory (AD) domain-based DFS namespaces". There are two ways of implementing DFS on a server: Standalone DFS namespace - allows for a DFS root that exists only on the local computer, and thus does not use Active Directory. A Standalone DFS can only be accessed on the computer on which it is created. It does not offer any fault tolerance and cannot be linked to any other DFS. This is the only option available on Windows NT 4.0 Server systems. Standalone DFS roots are rarely encountered because of their
https://en.wikipedia.org/wiki/Vespertine%20%28biology%29
Vespertine is a term used in the life sciences to indicate something of, relating to, or occurring in the evening. In botany, a vespertine flower is one that opens or blooms in the evening. In zoology, the term is used for creatures that become active at dusk, such as bats and owls. Strictly speaking, however, the term means that activity ceases during the hours of full darkness and does not resume until the next evening. Activity that continues throughout the night should be described as nocturnal. Vespertine behaviour is a special case of crepuscular behaviour; like crepuscular activity, vespertine activity is limited to twilight rather than full darkness. Unlike vespertine activity, crepuscular activity may resume in dim twilight before dawn. A related term is matutinal, referring to activity limited to the dawn twilight. The word vespertine is derived from the Latin word vespertinus, an adjective meaning "evening". See also Crypsis Matutinal References Ethology Botany
https://en.wikipedia.org/wiki/Ben%20Green%20%28mathematician%29
Ben Joseph Green FRS (born 27 February 1977) is a British mathematician, specialising in combinatorics and number theory. He is the Waynflete Professor of Pure Mathematics at the University of Oxford. Early life and education Ben Green was born on 27 February 1977 in Bristol, England. He studied at local schools in Bristol, Bishop Road Primary School and Fairfield Grammar School, competing in the International Mathematical Olympiad in 1994 and 1995. He entered Trinity College, Cambridge in 1995 and completed his BA in mathematics in 1998, winning the Senior Wrangler title. He stayed on for Part III and earned his doctorate under the supervision of Timothy Gowers, with a thesis entitled Topics in arithmetic combinatorics (2003). During his PhD he spent a year as a visiting student at Princeton University. He was a research Fellow at Trinity College, Cambridge between 2001 and 2005, before becoming a Professor of Mathematics at the University of Bristol from January 2005 to September 2006 and then the first Herchel Smith Professor of Pure Mathematics at the University of Cambridge from September 2006 to August 2013. He became the Waynflete Professor of Pure Mathematics at the University of Oxford on 1 August 2013. He was also a Research Fellow of the Clay Mathematics Institute and held various positions at institutes such as Princeton University, University of British Columbia, and Massachusetts Institute of Technology. Mathematics The majority of Green's research is in the fields of analytic number theory and additive combinatorics, but he also has results in harmonic analysis and in group theory. His best known theorem, proved jointly with his frequent collaborator Terence Tao, states that there exist arbitrarily long arithmetic progressions in the prime numbers: this is now known as the Green–Tao theorem. Amongst Green's early results in additive combinatorics are an improvement of a result of Jean Bourgain on the size of arithmetic progressions in sumsets, a
https://en.wikipedia.org/wiki/Reference%20monitor
In operating systems architecture a reference monitor concept defines a set of design requirements on a reference validation mechanism, which enforces an access control policy over subjects' (e.g., processes and users) ability to perform operations (e.g., read and write) on objects (e.g., files and sockets) on a system. The properties of a reference monitor are captured by the acronym NEAT, which means: The reference validation mechanism must be Non-bypassable, so that an attacker cannot bypass the mechanism and violate the security policy. The reference validation mechanism must be Evaluable, i.e., amenable to analysis and tests, the completeness of which can be assured (verifiable). Without this property, the mechanism might be flawed in such a way that the security policy is not enforced. The reference validation mechanism must be Always invoked. Without this property, it is possible for the mechanism to not perform when intended, allowing an attacker to violate the security policy. The reference validation mechanism must be Tamper-proof. Without this property, an attacker can undermine the mechanism itself and hence violate the security policy. For example, Windows 3.x and 9x operating systems were not built with a reference monitor, whereas the Windows NT line, which also includes Windows 2000 and Windows XP, was designed to contain a reference monitor, although it is not clear that its properties (tamperproof, etc.) have ever been independently verified, or what level of computer security it was intended to provide. The claim is that a reference validation mechanism that satisfies the reference monitor concept will correctly enforce a system's access control policy, as it must be invoked to mediate all security-sensitive operations, must not be tampered with, and has undergone complete analysis and testing to verify correctness. The abstract model of a reference monitor has been widely applied to any type of system that needs to enforce access control a
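A minimal sketch (Python; purely illustrative, not any real operating system component) of a reference validation mechanism: every access request is routed through a single mediation point that checks the policy, approximating "always invoked" and "non-bypassable" in code. Tamper-proofing and evaluability are properties of the surrounding system rather than of a snippet like this.

```python
# Policy: which operations each subject may perform on each object (illustrative data).
POLICY = {
    ("alice", "read", "/var/log/syslog"): True,
    ("alice", "write", "/var/log/syslog"): False,
}

def reference_monitor(subject: str, operation: str, obj: str) -> bool:
    # The single mediation point: default-deny unless the policy explicitly allows.
    return POLICY.get((subject, operation, obj), False)

def access(subject: str, operation: str, obj: str) -> str:
    # All security-sensitive operations must go through the monitor; nothing bypasses it.
    if not reference_monitor(subject, operation, obj):
        raise PermissionError(f"{subject} may not {operation} {obj}")
    return f"{operation} on {obj} permitted for {subject}"

print(access("alice", "read", "/var/log/syslog"))
```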
https://en.wikipedia.org/wiki/Conservation%20status
The conservation status of a group of organisms (for instance, a species) indicates whether the group still exists and how likely the group is to become extinct in the near future. Many factors are taken into account when assessing conservation status: not simply the number of individuals remaining, but the overall increase or decrease in the population over time, breeding success rates, and known threats. Various systems of conservation status are in use at international, multi-country, national and local levels, as well as for consumer use such as sustainable seafood advisory lists and certification. The two international systems are by the International Union for Conservation of Nature (IUCN) and The Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). International systems IUCN Red List of Threatened Species The IUCN Red List of Threatened Species by the International Union for Conservation of Nature is the best known worldwide conservation status listing and ranking system. Species are classified by the IUCN Red List into nine groups set through criteria such as rate of decline, population size, area of geographic distribution, and degree of population and distribution fragmentation. Also included are species that have gone extinct since 1500 CE. When discussing the IUCN Red List, the official term "threatened" is a grouping of three categories: critically endangered, endangered, and vulnerable. Extinct (EX) – No known living individuals Extinct in the wild (EW) – Known only to survive in captivity, or as a naturalized population outside its historic range Critically Endangered (CR) – Highest risk of extinction in the wild Endangered (EN) – Higher risk of extinction in the wild Vulnerable (VU) – High risk of extinction in the wild Near Threatened (NT) – Likely to become endangered in the near future Conservation Dependent (CD) – Low risk; is conserved to prevent being near threatened, certain events may lead it t
https://en.wikipedia.org/wiki/Flavour%20%28particle%20physics%29
In particle physics, flavour or flavor refers to the species of an elementary particle. The Standard Model counts six flavours of quarks and six flavours of leptons. They are conventionally parameterized with flavour quantum numbers that are assigned to all subatomic particles. They can also be described by some of the family symmetries proposed for the quark-lepton generations. Quantum numbers In classical mechanics, a force acting on a point-like particle can only alter the particle's dynamical state, i.e., its momentum, angular momentum, etc. Quantum field theory, however, allows interactions that can alter other facets of a particle's nature described by non dynamical, discrete quantum numbers. In particular, the action of the weak force is such that it allows the conversion of quantum numbers describing mass and electric charge of both quarks and leptons from one discrete type to another. This is known as a flavour change, or flavour transmutation. Due to their quantum description, flavour states may also undergo quantum superposition. In atomic physics the principal quantum number of an electron specifies the electron shell in which it resides, which determines the energy level of the whole atom. Analogously, the five flavour quantum numbers (isospin, strangeness, charm, bottomness or topness) can characterize the quantum state of quarks, by the degree to which it exhibits six distinct flavours (u, d, s, c, b, t). Composite particles can be created from multiple quarks, forming hadrons, such as mesons and baryons, each possessing unique aggregate characteristics, such as different masses, electric charges, and decay modes. A hadron's overall flavour quantum numbers depend on the numbers of constituent quarks of each particular flavour. Conservation laws All of the various charges discussed above are conserved by the fact that the corresponding charge operators can be understood as generators of symmetries that commute with the Hamiltonian. Thus, the eige
https://en.wikipedia.org/wiki/Haploinsufficiency
Haploinsufficiency in genetics describes a model of dominant gene action in diploid organisms, in which a single copy of the wild-type allele at a locus in heterozygous combination with a variant allele is insufficient to produce the wild-type phenotype. Haploinsufficiency may arise from a de novo or inherited loss-of-function mutation in the variant allele, such that it yields little or no gene product (often a protein). Although the other, standard allele still produces the standard amount of product, the total product is insufficient to produce the standard phenotype. This heterozygous genotype may result in a non- or sub-standard, deleterious, and (or) disease phenotype. Haploinsufficiency is the standard explanation for dominant deleterious alleles. In the alternative case of haplosufficiency, the loss-of-function allele behaves as above, but the single standard allele in the heterozygous genotype produces sufficient gene product to produce the same, standard phenotype as seen in the homozygote. Haplosufficiency accounts for the typical dominance of the "standard" allele over variant alleles, where the phenotypic identity of genotypes heterozygous and homozygous for the allele defines it as dominant, versus a variant phenotype produced only by the genotype homozygous for the alternative allele, which defines it as recessive. Mechanism The alteration in the gene dosage, which is caused by the loss of a functional allele, is also called allelic insufficiency. Haploinsufficiency in humans About 3,000 human genes cannot tolerate loss of one of the two alleles. An example of this is seen in the case of Williams syndrome, a neurodevelopmental disorder caused by the haploinsufficiency of genes at 7q11.23. The haploinsufficiency is caused by the copy-number variation (CNV) of 28 genes led by the deletion of ~1.6 Mb. These dosage-sensitive genes are vital for human language and constructive cognition. Another example is the haploinsufficiency of telomerase rever
https://en.wikipedia.org/wiki/PC/SC
PC/SC (short for "Personal Computer/Smart Card") is a specification for smart-card integration into computing environments. Microsoft has implemented PC/SC in Microsoft Windows 200x/XP and makes it available under Microsoft Windows NT/9x. A free implementation of PC/SC, PC/SC Lite, is available for Linux and other Unixes; a forked version comes bundled with Mac OS X. Work group Core members Gemalto Infineon Microsoft Toshiba Associate members Advanced Card Systems Alcor Micro Athena Smartcard Solutions Bloombase C3PO S.L. Cherry Electrical Products Cross S&T Inc.Dai Nippon Printing Co., Ltd. Feitian Technologies Kobil Systems GmbH Silitek Nidec Sankyo Corporation O2Micro, Inc. OMNIKEY (HID Global) Precise Biometrics Realtek Semiconductor Corp. Research In Motion Sagem Orga SCM Microsystems Siemens Teridian Semiconductor Corp. See also CT-API, an alternative API External links PC/SC Workgroup Free Implementation (PCSCLite) pcsc-tools free commandline tools for PC/SC Winscard Smart Card API functions in Microsoft Windows XP/2000 SMACADU open source smart card analyzing tools PC/SC Reader List PC/SC-standard readers de:PC/SC#Der PC/SC-Standard
https://en.wikipedia.org/wiki/Irreversible%20circuit
An irreversible circuit is a circuit whose inputs cannot be reconstructed from its outputs. Such a circuit, of necessity, consumes energy. See also Reversible computing Integrated circuits
https://en.wikipedia.org/wiki/GRIB
GRIB (GRIdded Binary or General Regularly-distributed Information in Binary form) is a concise data format commonly used in meteorology to store historical and forecast weather data. It is standardized by the World Meteorological Organization's Commission for Basic Systems, known under number GRIB FM 92-IX, described in WMO Manual on Codes No.306. Currently there are three versions of GRIB. Version 0 was used to a limited extent by projects such as TOGA, and is no longer in operational use. The first edition (current sub-version is 2) is used operationally worldwide by most meteorological centers, for Numerical Weather Prediction output (NWP). A newer generation has been introduced, known as GRIB second edition, and data is slowly changing over to this format. Some of the second-generation GRIB is used for derived products distributed in the Eumetcast of Meteosat Second Generation. Another example is the NAM (North American Mesoscale) model. Format GRIB files are a collection of self-contained records of 2D data, and the individual records stand alone as meaningful data, with no references to other records or to an overall schema. So collections of GRIB records can be appended to each other or the records separated. Each GRIB record has two components - the part that describes the record (the header), and the actual binary data itself. The data in GRIB-1 are typically converted to integers using scale and offset, and then bit-packed. GRIB-2 also has the possibility of compression. GRIB History GRIB superseded the Aeronautical Data Format (ADF). The World Meteorological Organization (WMO) Commission for Basic Systems (CBS) met in 1985 to create the GRIB (GRIdded Binary) format. The Working Group on Data Management (WGDM) in February 1994, after major changes, approved revision 1 of the GRIB format. GRIB Edition 2 format was approved in 2003 at Geneva. Problems with GRIB There is no way in GRIB to describe a collection of GRIB records Each record is inde
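Because each GRIB record is self-contained and files are simply concatenations of records, files can be combined or pulled apart with ordinary byte operations. A naive Python sketch (illustrative only: it splits on the ASCII "GRIB" indicator that opens each record, which can misfire if that byte sequence happens to occur inside packed binary data, so real tools parse the declared record lengths instead):

```python
def split_grib_records(path: str) -> list[bytes]:
    # Read the whole file and split on the 4-byte "GRIB" indicator that begins each record.
    data = open(path, "rb").read()
    starts = []
    i = data.find(b"GRIB")
    while i != -1:
        starts.append(i)
        i = data.find(b"GRIB", i + 4)
    return [data[s:e] for s, e in zip(starts, starts[1:] + [len(data)])]

def concatenate(paths: list[str], out_path: str) -> None:
    # Appending GRIB files record-for-record is just byte concatenation.
    with open(out_path, "wb") as out:
        for p in paths:
            out.write(open(p, "rb").read())
```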
https://en.wikipedia.org/wiki/Impredicativity
In mathematics, logic and philosophy of mathematics, something that is impredicative is a self-referencing definition. Roughly speaking, a definition is impredicative if it invokes (mentions or quantifies over) the set being defined, or (more commonly) another set that contains the thing being defined. There is no generally accepted precise definition of what it means to be predicative or impredicative. Authors have given different but related definitions. The opposite of impredicativity is predicativity, which essentially entails building stratified (or ramified) theories where quantification over lower levels results in variables of some new type, distinguished from the lower types that the variable ranges over. A prototypical example is intuitionistic type theory, which retains ramification so as to discard impredicativity. Russell's paradox is a famous example of an impredicative construction—namely the set of all sets that do not contain themselves. The paradox is that such a set cannot exist: If it existed, the question could be asked whether it contains itself or not — if it does then by definition it should not, and if it does not then by definition it should. The greatest lower bound of a set X, glb(X), also has an impredicative definition: y = glb(X) if and only if for all elements x of X, y is less than or equal to x, and any z less than or equal to all elements of X is less than or equal to y. This definition quantifies over the set (potentially infinite, depending on the order in question) whose members are the lower bounds of X, one of which is the glb itself. Hence predicativism would reject this definition. History The terms "predicative" and "impredicative" were introduced by , though the meaning has changed a little since then. Solomon Feferman provides a historical review of predicativity, connecting it to current outstanding research problems. The vicious circle principle was suggested by Henri Poincaré (1905-6, 1908) and Bertrand Russell in the wake of
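For reference, the impredicative definition of the greatest lower bound can be written out as follows (standard order-theoretic notation; an illustration rather than a quotation from the article):

```latex
y = \operatorname{glb}(X)
\iff
\bigl(\forall x \in X:\; y \le x\bigr)
\;\wedge\;
\bigl(\forall z:\;(\forall x \in X:\; z \le x)\Rightarrow z \le y\bigr).
```

The second conjunct quantifies over every lower bound z of X, a collection that includes glb(X) itself, which is exactly the self-reference the article points out.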
https://en.wikipedia.org/wiki/Outer%20London%20Defence%20Ring
The Outer London Defence Ring was a defensive ring built around London during the early part of the Second World War. It was intended as a defence against a German invasion, and was part of a national network of similar "Stop Lines". In June 1940 under the direction of General Edmund Ironside, concentric rings of anti-tank defences and pillboxes were constructed in and around London. They comprised: The London Inner Keep, London Stop Line Inner (Line C), London Stop Line Central (Line B) and London Stop Line Outer (Line A). The Outer London Ring was the strongest and best developed of these, mainly because it could be constructed in open countryside. Work on all the lines was halted weeks later by Ironside's successor, General Alan Brooke, who favoured mobile warfare above static defence. The ring used a mixture of natural rivers and artificial ditches up to wide and deep, encircling London completely. North of London the ring followed a path similar to the route now taken by the M25 motorway, from Watford, following the River Colne, through Potters Bar, Cuffley, Nazeing, then running south through Epping Forest, Loughton and Chigwell. Many pillboxes and anti-tank traps are still visible at points along the ring, but in the majority of places the ditch is no longer visible, covered by the M25 or London suburbs. Gallery See also Fortifications of London British anti-invasion preparations of World War II GHQ Line Taunton Stop Line Coquet Stop Line London Defence Positions - a Victorian approach to the same problem. Traffic and Environmental Zone, the "ring of steel" erected in the 1990s as a defence against terrorism. External links Essex County Council - Unlocking Essex's Past: Outer London Defence Ring Essex County Council - Unlocking Essex's Past: Anti-Tank Ditch and Earthwork in Epping Forest UK Pillboxes and Invasion Defence Remains. Remains of defences in Hertfordshire References British World War II defensive lines Fortifications of London Defence o
https://en.wikipedia.org/wiki/Crookes%20tube
A Crookes tube (also Crookes–Hittorf tube) is an early experimental electrical discharge tube, with partial vacuum, invented by English physicist William Crookes and others around 1869-1875, in which cathode rays, streams of electrons, were discovered. Developed from the earlier Geissler tube, the Crookes tube consists of a partially evacuated glass bulb of various shapes, with two metal electrodes, the cathode and the anode, one at either end. When a high voltage is applied between the electrodes, cathode rays (electrons) are projected in straight lines from the cathode. It was used by Crookes, Johann Hittorf, Julius Plücker, Eugen Goldstein, Heinrich Hertz, Philipp Lenard, Kristian Birkeland and others to discover the properties of cathode rays, culminating in J.J. Thomson's 1897 identification of cathode rays as negatively charged particles, which were later named electrons. Crookes tubes are now used only for demonstrating cathode rays. Wilhelm Röntgen discovered X-rays using the Crookes tube in 1895. The term Crookes tube is also used for the first generation, cold cathode X-ray tubes, which evolved from the experimental Crookes tubes and were used until about 1920. Operation Crookes tubes are cold cathode tubes, meaning that they do not have a heated filament in them that releases electrons as the later electronic vacuum tubes usually do. Instead, electrons are generated by the ionization of the residual air by a high DC voltage (from a few kilovolts to about 100 kilovolts) applied between the cathode and anode electrodes in the tube, usually by an induction coil (a "Ruhmkorff coil"). The Crookes tubes require a small amount of air in them to function, from about 10−6 to 5×10−8 atmosphere (7×10−4 - 4×10−5 torr or 0.1-0.006 pascal). When high voltage is applied to the tube, the electric field accelerates the small number of electrically charged ions and free electrons always present in the gas, created by natural processes like photoionization and radioac
https://en.wikipedia.org/wiki/Critical%20dimension
In the renormalization group analysis of phase transitions in physics, a critical dimension is the dimensionality of space at which the character of the phase transition changes. Below the lower critical dimension there is no phase transition. Above the upper critical dimension the critical exponents of the theory become the same as those in mean field theory. An elegant criterion to obtain the critical dimension within mean field theory is due to V. Ginzburg. Since the renormalization group sets up a relation between a phase transition and a quantum field theory, this has implications for the latter and for our larger understanding of renormalization in general. Above the upper critical dimension, the quantum field theory which belongs to the model of the phase transition is a free field theory. Below the lower critical dimension, there is no field theory corresponding to the model. In the context of string theory the meaning is more restricted: the critical dimension is the dimension at which string theory is consistent assuming a constant dilaton background without additional confounding perturbations from background radiation effects. The precise number may be determined by the required cancellation of conformal anomaly on the worldsheet; it is 26 for the bosonic string theory and 10 for superstring theory. Upper critical dimension in field theory Determining the upper critical dimension of a field theory is a matter of linear algebra. It is worthwhile to formalize the procedure because it yields the lowest-order approximation for scaling and essential input for the renormalization group. It also reveals conditions to have a critical model in the first place. A Lagrangian may be written as a sum of terms, each consisting of an integral over a monomial of the coordinates and fields. Examples are the standard φ⁴-model and the isotropic Lifshitz tricritical point. This simple structure may be compatible with a scale
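As a worked illustration of the power counting just described (standard textbook material, not quoted from the article): for a scalar field with a quartic interaction, requiring the action to be dimensionless fixes the dimensions of the field and of the coupling,

```latex
S=\int d^{d}x\,\Bigl[\tfrac{1}{2}\,(\partial\phi)^{2}+u\,\phi^{4}\Bigr],
\qquad [\phi]=\frac{d-2}{2},
\qquad [u]=d-4\cdot\frac{d-2}{2}=4-d,
```

so the coupling u is relevant below four dimensions, irrelevant above, and marginal exactly at the upper critical dimension d = 4 of this model.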
https://en.wikipedia.org/wiki/Atari%20ST%20BASIC
Atari ST BASIC (or ST Basic) was the first dialect of BASIC that was produced for the Atari ST line of computers. This BASIC interpreter was bundled with all new STs in the early years of the ST's lifespan, and quickly became the standard BASIC for that platform. However, many users disliked it, and improved dialects of BASIC quickly came out to replace it. Development Atari Corporation commissioned MetaComCo to write a version of BASIC that would take advantage of the GEM environment on the Atari ST. This was based on a version already written for Digital Research called DR-Basic, which was bundled with DR's CP/M-86 operating system. The result was called ST BASIC. At the time the ST was launched, ST BASIC was bundled with all new STs. A further port of the same language called ABasiC ended up being supplied for a time with the Amiga, but Commodore quickly replaced it with the Microsoft-developed AmigaBASIC. Interface The user interface consists of four windows: EDIT, for entering source code LIST, where the source code can be browsed COMMAND, where instructions are entered and immediately executed OUTPUT The windows can only be selected with the mouse. Bugs ST BASIC has many bugs. Compute! in September 1987 reported on one flaw that it described as "among the worst BASIC bugs of all time". Typing x = 18.9 results in function not yet done System error #%N, please restart Similar commands, such as x = 39.8 or x = 4.725, crash the computer; the magazine described the results of the last command as "as bad a crash as you can get on the ST without seeing the machine rip free from its cables, drag itself to the edge of the desk, and leap into the trash bin". After citing other flaws (such as ? 257 * 257 and ? 257 ^ 2 not being equivalent) the magazine recommended "avoid[ing] ST BASIC for serious programming". Regarding reports that MetaComCo was "one bug away" from releasing a long-delayed update to the language, it jokingly wondered "whether Atari has only one m
https://en.wikipedia.org/wiki/Name%20Service%20Switch
The Name Service Switch (NSS) connects the computer with a variety of sources of common configuration databases and name resolution mechanisms. These sources include local operating system files (such as /etc/passwd, /etc/group, and /etc/hosts), the Domain Name System (DNS), the Network Information Service (NIS, NIS+), and LDAP. This operating system mechanism, used in billions of computers, including all Unix-like operating systems, is indispensable to functioning as part of the networked organization and the Internet. Among other things, it is invoked every time a computer user clicks on or types a website address in the web browser or responds to the password challenge to be authorized to access the computer and the Internet. nsswitch.conf A system administrator usually configures the operating system's name services using the file /etc/nsswitch.conf. This file lists databases (such as passwd, shadow and group), and one or more sources for obtaining that information. Examples for sources are files for local files, ldap for the Lightweight Directory Access Protocol, nis for the Network Information Service, nisplus for NIS+, dns for the Domain Name System (DNS), and wins for Windows Internet Name Service. The nsswitch.conf file has line entries for each service consisting of a database name in the first field, terminated by a colon, and a list of possible source databases in the second field. A typical file might look like: passwd: files ldap shadow: files group: files ldap hosts: dns nis files ethers: files nis netmasks: files nis networks: files nis protocols: files nis rpc: files nis services: files nis automount: files aliases: files The order of the source databases determines the order the NSS will attempt to look up those sources to resolve queries for the specified service. A bracketed list of criteria may be specified following each source name to govern the conditions under which the NSS will proceed to querying the next source based on the preceding
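A small sketch (Python; illustrative, not the C implementation used by libc) of how the line format described above is read: each non-comment line names a database, a colon, and an ordered list of sources, with any bracketed criteria such as [NOTFOUND=return] simply appearing as further tokens.

```python
def parse_nsswitch(text: str) -> dict[str, list[str]]:
    # Map each database (e.g. "hosts") to its ordered list of sources (e.g. ["dns", "nis", "files"]).
    table: dict[str, list[str]] = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and blank lines
        if not line or ":" not in line:
            continue
        database, sources = line.split(":", 1)
        table[database.strip()] = sources.split()
    return table

sample = "passwd: files ldap\nhosts: dns nis files\n"
assert parse_nsswitch(sample)["hosts"] == ["dns", "nis", "files"]
```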
https://en.wikipedia.org/wiki/Windows%20Fundamentals%20for%20Legacy%20PCs
Windows Fundamentals for Legacy PCs ("WinFLP") is a thin client release of the Windows NT operating system developed by Microsoft and optimized for older, less powerful hardware. It was released on July 8, 2006, nearly two years after its Windows XP SP2 counterpart was released in August 2004, and is not marketed as a full-fledged general purpose operating system, although it is functionally able to perform most of the tasks generally associated with one. It includes only certain functionality for local workloads such as security, management, document viewing related tasks and the .NET Framework. It is designed to work as a client–server solution with RDP clients or other third party clients such as Citrix ICA. Windows Fundamentals for Legacy PCs reached end of support on April 8, 2014 along with most other Windows XP editions. History Windows Fundamentals for Legacy PCs was originally announced with the code name "Eiger" on 12 May 2005. ("Mönch" was announced as a potential follow-up project at about the same time.) The name "Windows Fundamentals for Legacy PCs" appeared in a press release in September 2005, when it was introduced as "formerly code-named “Eiger”" and described as "an exclusive benefit to SA [Microsoft Software Assurance] customers". A Gartner evaluation from April 2006 stated that: The RTM version of Windows Fundamentals for Legacy PCs, which was released on July 8, 2006, was built from the Windows XP Service Pack 2 codebase. The release was announced to the press on July 12, 2006. Because Windows Fundamentals for Legacy PCs comes from a codebase of 32-bit Windows XP, its service packs are also developed separately. For the same reason, Service Pack 3 for Windows Fundamentals for Legacy PCs, released on October 7, 2008, is the same as Service Pack 3 for 32-bit (x86) editions of Windows XP. In fact, due to the earlier release date of the 32-bit version, many of the key features introduced by Service Pack 2 for 32-bit (x86) editions of Windows XP
https://en.wikipedia.org/wiki/Brinkmann%20coordinates
Brinkmann coordinates are a particular coordinate system for a spacetime belonging to the family of pp-wave metrics. They are named for Hans Brinkmann. In terms of these coordinates, the metric tensor can be written as ds² = H(u,x,y) du² + 2 du dv + dx² + dy², where ∂_v, the coordinate vector field dual to the covector field dv, is a null vector field. Indeed, geometrically speaking, it is a null geodesic congruence with vanishing optical scalars. Physically speaking, it serves as the wave vector defining the direction of propagation for the pp-wave. The coordinate vector field ∂_u can be spacelike, null, or timelike at a given event in the spacetime, depending upon the sign of H(u,x,y) at that event. The coordinate vector fields ∂_x, ∂_y are both spacelike vector fields. Each surface of constant u can be thought of as a wavefront. In discussions of exact solutions to the Einstein field equation, many authors fail to specify the intended range of the coordinate variables u, v, x, y. Here the range should be chosen to allow for the possibility that the pp-wave develops a null curvature singularity. References Coordinate charts in general relativity
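For clarity, the causal character of the coordinate vector fields can be read off directly from the metric components given above (a standard computation, not quoted from the article):

```latex
g(\partial_v,\partial_v)=g_{vv}=0,\qquad
g(\partial_u,\partial_u)=g_{uu}=H(u,x,y),\qquad
g(\partial_x,\partial_x)=g(\partial_y,\partial_y)=1 .
```

Hence ∂_v is null, ∂_x and ∂_y are spacelike, and ∂_u is spacelike, null, or timelike according to the sign of H at the event in question.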
https://en.wikipedia.org/wiki/Russian%20tortoise
The Russian tortoise (Testudo horsfieldii), also commonly known as the Afghan tortoise, the Central Asian tortoise, Horsfield's tortoise, four-clawed tortoise, and the (Russian) steppe tortoise, as well as the "Four-Toed Tortoise" is a threatened species of tortoise in the family Testudinidae. The species is endemic to Central Asia from the Caspian Sea south through Iran, Pakistan and Afghanistan, and east across Kazakhstan to Xinjiang, China. Human activities in its native habitat contribute to its threatened status. Etymology Both the specific name, horsfieldii, and the common name "Horsfield's tortoise" are in honor of the American naturalist Thomas Horsfield. He worked in Java (1796) and for the East India Company and later became a friend of Sir Thomas Raffles. Systematics This species is traditionally placed in Testudo. Due to distinctly different morphological characteristics, the monotypic genus Agrionemys was proposed for it in 1966, and was accepted for several decades, although not unanimously. DNA sequence analysis generally concurred, but not too robustly so. However, in 2021, it was again reclassified in Testudo by the Turtle Taxonomy Working Group and the Reptile Database, with Agrionemys being relegated to a distinct subgenus that T. horsfieldii belonged to. The Turtle Taxonomy Working Group lists five separate subspecies of Russian tortoise, but they are not widely accepted by taxonomists: T. h. bogdanovi Chkhikvadze, 2008 – southern Kyrgyzstan, Tajikistan, Uzbekistan, Turkmenistan T. h. horsfieldii (Gray, 1844) – Afghanistan/Pakistan and southern Central Asia T. h. kazachstanica Chkhikvadze, 1988 – Kazakhstan/Karakalpakhstan T. h. kuznetzovi Chkhikvadze, Ataev, Shammakov & Zatoka, 2009 – northern Turkmenistan, southern Uzbekistan T. h. rustamovi Chkhikvadze, Amiranschwili & Atajew, 1990 – southwestern Turkmenistan Description The Russian tortoise is a small tortoise species, with a size range of . Females grow slightly larger () to accommod
https://en.wikipedia.org/wiki/Exile%20%281988%20video%20game%20series%29
is an action role-playing video game series developed by Telenet Japan. The first two games in the series, XZR and XZR II were both released in Japan in 1988, with versions available for the NEC PC-8801, NEC PC-9801, MSX2 and the X1 turbo (for the first game only). In 1991, a remake of XZR II simply titled Exile was released for the PC Engine and Mega Drive. These versions were both released in North America the following year, with Working Designs handling the localization for the TurboGrafx-CD version, while Renovation Products published the Genesis version. A sequel exclusive to the Super CD-ROM2 format, titled Exile: Wicked Phenomenon, was released in 1992, which was also localized by Working Designs for the North American market. The Exile series centers on Sadler, a Syrian Assassin, who is the main character of each game. The original computer versions were notorious for featuring various references to religious historical figures, modern political leaders, iconography, drugs, and time-traveling assassins, although some of these aspects were considerably toned down or omitted in the later console games, with the English versions rewriting all the historical religious organizations into fictional groups. Games XZR: Hakai no Gūzō , the first game in the series, was originally released for the NEC PC-8801 in July 1988. It was subsequently released for the MSX2 and NEC PC-9801 on August and for the X1 turbo on September. The gameplay included action-platform elements, switching between an overhead perspective and side-scrolling sections. The plot of the original XZR has been compared to the later Assassin's Creed video game series. The soundtrack for the PC-8801 version was composed by Yujiroh, Shinobu Ogawa and Tenpei Sato. The game centers on Sadler, a Syrian Assassin (a Shia Islamic sect) who is on a journey to kill the Caliph. The intro sequence briefly covers the history of the Middle East from 622 CE, the first year of the Islamic calendar, including a
https://en.wikipedia.org/wiki/Epigram%20%28programming%20language%29
Epigram is a functional programming language with dependent types, and the integrated development environment (IDE) usually packaged with the language. Epigram's type system is strong enough to express program specifications. The goal is to support a smooth transition from ordinary programming to integrated programs and proofs whose correctness can be checked and certified by the compiler. Epigram exploits the Curry–Howard correspondence, also termed the propositions as types principle, and is based on intuitionistic type theory. The Epigram prototype was implemented by Conor McBride based on joint work with James McKinna. Its development is continued by the Epigram group in Nottingham, Durham, St Andrews, and Royal Holloway, University of London in the United Kingdom (UK). The current experimental implementation of the Epigram system is freely available together with a user manual, a tutorial and some background material. The system has been used under Linux, Windows, and macOS. It is currently unmaintained, and version 2, which was intended to implement Observational Type Theory, was never officially released but exists in GitHub. Syntax Epigram uses a two-dimensional, natural deduction style syntax, with versions in LaTeX and ASCII. Here are some examples from The Epigram Tutorial: Examples The natural numbers The following declaration defines the natural numbers: The declaration says that Nat is a type with kind * (i.e., it is a simple type) and two constructors: zero and suc. The constructor suc takes a single Nat argument and returns a Nat. This is equivalent to the Haskell declaration "data Nat = Zero | Suc Nat". In LaTeX, the code is displayed as: The horizontal-line notation can be read as "assuming (what is on the top) is true, we can infer that (what is on the bottom) is true." For example, "assuming n is of type Nat, then suc n is of type Nat." If nothing is on the top, then the bottom statement is always true: "zero is of type Nat (in all case
https://en.wikipedia.org/wiki/Swivel
A swivel is a connection that allows the connected object, such as a gun, chair, swivel caster, or an anchor rode to rotate horizontally or vertically. Swivel designs A common design for a swivel is a cylindrical rod that can turn freely within a support structure. The rod is usually prevented from slipping out by a nut, washer or thickening of the rod. The device can be attached to the ends of the rod or the center. Another common design is a sphere that is able to rotate within a support structure. The device is attached to the sphere. A third design is a hollow cylindrical rod that has a rod that is slightly smaller than its inside diameter inside of it. They are prevented from coming apart by flanges. The device may be attached to either end. A swivel joint for a pipe is often a threaded connection in between which at least one of the pipes is curved, often at an angle of 45 or 90 degrees. The connection is tightened enough to be water- or air-tight and then tightened further so that it is in the correct position. Anchor rode swivel Swivels are also used in the nautical sector as an element of the anchor rode and in a boat mooring systems. With yachts, the swivel is most commonly used between the anchor and chain. There is a school of thought that anchor swivels should not be connected to the anchor itself, but should be somewhere in the chain rode. The anchor swivel is expected to fulfill two purposes: If the boat swings in a circle the chain may become twisted and the swivel may alleviate this problem. If the anchor comes up turned around, some swivels may right it. Concerns The biggest concern about anchor swivels is that they might introduce a weak link to the rode. With most swivels the shaft is nice and tidily embedded in the other half of the swivel as in the example of the stainless steel anchor swivel shown here. When used in marine applications, and worse in tropical climates, this is a cause for corrosion, even in stainless steel. The chr
https://en.wikipedia.org/wiki/Scramdisk
Scramdisk is a free on-the-fly encryption program for Windows 95, Windows 98, and Windows Me. A non-free version was also available for Windows NT. The original Scramdisk is no longer maintained; its author, Shaun Hollingworth, joined Paul Le Roux (the author of E4M) to produce Scramdisk's commercial successor, DriveCrypt. The author of Scramdisk provided a driver for Windows 9x, and the author of E4M provided a driver for Windows NT, enabling cross-platform versions of both programs. There is a new project called Scramdisk 4 Linux which provides access to Scramdisk and TrueCrypt containers. Older versions of TrueCrypt included support for Scramdisk. Licensing Although Scramdisk's source code is still available, it's stated that it was only released and licensed for private study and not for further development. However, because it contains an implementation of the MISTY1 Encryption Algorithm (by Hironobu Suzuki, a.k.a. H2NP) licensed under the GNU GPL Version 2, it is in violation of the GPL. See also Disk encryption Disk encryption software Comparison of disk encryption software References External links Scramdisk @ SamSimpson.com - Original Scramdisk web site from the Internet Archive Official WWW site- though Scramdisk no longer available Scramdisk 4 Linux Cryptographic software Disk encryption Windows security software
https://en.wikipedia.org/wiki/Network%20tap
A network tap is a system that monitors events on a local network. A tap is typically a dedicated hardware device, which provides a way to access the data flowing across a computer network. The network tap has (at least) three ports: an A port, a B port, and a monitor port. A tap inserted between A and B passes all traffic (send and receive data streams) through unimpeded in real time, but also copies that same data to its monitor port, enabling a third party to listen. Network taps are commonly used for network intrusion detection systems, VoIP recording, network probes, RMON probes, packet sniffers, and other monitoring and collection devices and software that require access to a network segment. Taps are used in security applications because they are non-obtrusive, are not detectable on the network (having no physical or logical address), can deal with full-duplex and non-shared networks, and will usually pass through or bypass traffic even if the tap stops working or loses power. Terminology The term network tap is analogous to phone tap or vampire tap. Some vendors define TAP as an acronym for test access point or terminal access point; however, those are backronyms. The monitored traffic is sometimes referred to as the pass-through traffic, while the ports that are used for monitoring are the monitor ports. There may also be an aggregation port for full-duplex traffic, wherein the A traffic is aggregated with the B traffic, resulting in one stream of data for monitoring the full-duplex communication. The packets must be aligned into a single stream using a time-of-arrival algorithm. Vendors will tend to use terms in their marketing such as breakout, passive, aggregating, regeneration, bypass, active, inline power, and others; unfortunately, vendors do not use such terms consistently. Before buying any product it is important to understand the available features, and check with vendors or read the product literature closely to figure out how marketing ter
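A toy illustration (Python; how the time-of-arrival merge for an aggregation port can be pictured, not how any particular tap implements it): two already time-ordered directions of a full-duplex link are interleaved into one monitoring stream by timestamp.

```python
import heapq

# Two captured directions of a full-duplex link, each already ordered by arrival time
# (timestamps and packet labels are invented for the example).
a_to_b = [(0.001, "A->B SYN"), (0.010, "A->B data"), (0.022, "A->B data")]
b_to_a = [(0.004, "B->A SYN/ACK"), (0.015, "B->A ACK")]

# Aggregation-port behaviour: interleave both directions into one stream by time of arrival.
for timestamp, packet in heapq.merge(a_to_b, b_to_a):
    print(f"{timestamp:.3f}s {packet}")
```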
https://en.wikipedia.org/wiki/Original%20antigenic%20sin
Original antigenic sin, also known as antigenic imprinting, the Hoskins effect, or immunological imprinting, is the propensity of the immune system to preferentially use immunological memory based on a previous infection when a second slightly different version of that foreign pathogen (e.g. a virus or bacterium) is encountered. This leaves the immune system "trapped" by the first response it has made to each antigen, and unable to mount potentially more effective responses during subsequent infections. Antibodies or T-cells induced during infections with the first variant of the pathogen are subject to repertoire freeze, a form of original antigenic sin. The phenomenon has been described in relation to influenza virus, SARS-CoV-2, dengue fever, human immunodeficiency virus (HIV) and to several other viruses. History This phenomenon was first described in 1960 by Thomas Francis Jr. in the article "On the Doctrine of Original Antigenic Sin". It is named by analogy to the Christian theological concept of original sin. According to Francis as cited by Richard Krause: The antibody of childhood is largely a response to dominant antigen of the virus causing the first type A influenza infection of the lifetime. [...] The imprint established by the original virus infection governs the antibody response thereafter. This we have called the Doctrine of the Original Antigenic Sin. In B cells During a primary infection, long-lived memory B cells are generated, which remain in the body and protect from subsequent infections. These memory B cells respond to specific epitopes on the surface of viral proteins to produce antigen-specific antibodies and can respond to infection much faster than naive B cells can to novel antigens. This effect lessens time needed to clear subsequent infections. Between primary and secondary infections or following vaccination, a virus may undergo antigenic drift, in which the viral surface proteins (the epitopes) change through natural mutatio
https://en.wikipedia.org/wiki/List%20of%20Mozilla%20products
The following is a list of Mozilla Foundation / Mozilla Corp. products. All products, unless specified, are cross-platform by design. Client applications Firefox Browser - An open-source web browser. Firefox Focus - A privacy-focused mobile web browser. Firefox Reality - A web browser optimized for virtual reality. Firefox for Android (also Firefox Daylight) - A web browser for mobile phones and smaller non-PC devices. Firefox Monitor - An online service for alerting the user when their email addresses and passwords have been leaked in data breaches. Firefox Relay - A privacy focused email masking service which allows for the creation of disposable email aliases Mozilla Thunderbird - An email and news client. Mozilla VPN - A virtual private network client. SeaMonkey (formerly Mozilla Application Suite) - An Internet suite. ChatZilla - The IRC component, also available as a Firefox extension. Mozilla Calendar - Originally planned to be a calendar component for the suite; became the base of Mozilla Sunbird. Mozilla Composer - The HTML editor component. Mozilla Mail & Newsgroups - The email and news component. Components DOM Inspector - An inspector for DOM. Gecko - The layout engine. Necko - The network library. Rhino - The JavaScript engine written in Java programming language. Servo - A layout engine. SpiderMonkey - The JavaScript engine written in C programming language. Venkman - A JavaScript debugger. Development tools Bugzilla - A bugtracker. Rust (programming language) Skywriter - An extensible and interoperable web-based framework for code editing. Treeherder - A detective tool that allows developers to manage software builds and to correlate build failures on various platforms and configurations with particular code changes (Predecessors: TBPL and Tinderbox). API/Libraries Netscape Portable Runtime (NSPR) - A platform abstraction layer that makes operating systems appear the same. Network Security Services (NSS) - A set of libr
https://en.wikipedia.org/wiki/Unicore
Unicore is the name of a computer instruction set architecture designed by the Microprocessor Research and Development Center (MPRC) of Peking University in the PRC. The computer built on this architecture is called the Unity-863. The CPU is integrated into a fully functional SoC to make a PC-like system. The processor is very similar to the ARM architecture, but uses a different instruction set. It is supported by the Linux kernel as of version 2.6.39. Support will be removed in Linux kernel version 5.9 as nobody seems to maintain it and the code is falling behind the rest of the kernel code and compiler requirements. Instruction set The instructions are almost identical to the standard ARM formats, except that conditional execution has been removed, and the bits reassigned to expand all the register specifiers to 5 bits. Likewise, the immediate format is 9 bits rotated by a 5-bit amount (rather than 8 bit rotated by 4), the load/store offset sizes are 14 bits for byte/word and 10 bits for signed byte or half-word. Conditional moves are provided by encoding the condition in the (unused by ARM) second source register field Rn for MOV and MVN instructions. The meaning of various flag bits (such as S=1 enables setting the condition codes) is identical to the ARM instruction set. The load/store multiple instruction can only access half of the register set, depending on the H bit. If H=0, the 16 bits indicate R0–R15; if H=1, R16–R31. References Instruction processing Instruction set architectures Science and technology in China
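A hedged sketch (Python) of the immediate-operand scheme described above. The ARM half is the well-known 8-bit value rotated right by twice a 4-bit field; for Unicore the text only says "9 bits rotated by a 5-bit amount", so the sketch assumes the raw 5-bit rotation count is used directly, which may not match the real encoding exactly.

```python
def ror32(value: int, amount: int) -> int:
    # 32-bit rotate right.
    amount %= 32
    return ((value >> amount) | (value << (32 - amount))) & 0xFFFFFFFF

def arm_immediate(imm8: int, rot4: int) -> int:
    # Classic ARM data-processing immediate: 8-bit value rotated right by 2 * rot4.
    return ror32(imm8 & 0xFF, 2 * (rot4 & 0xF))

def unicore_immediate(imm9: int, rot5: int) -> int:
    # Assumed Unicore scheme per the description: 9-bit value rotated right by a 5-bit amount.
    return ror32(imm9 & 0x1FF, rot5 & 0x1F)

assert arm_immediate(0xFF, 1) == 0xC000003F  # 0xFF rotated right by 2 bits
```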
https://en.wikipedia.org/wiki/Explicit%20symmetry%20breaking
In theoretical physics, explicit symmetry breaking is the breaking of a symmetry of a theory by terms in its defining equations of motion (most typically, to the Lagrangian or the Hamiltonian) that do not respect the symmetry. Usually this term is used in situations where these symmetry-breaking terms are small, so that the symmetry is approximately respected by the theory. An example is the spectral line splitting in the Zeeman effect, due to a magnetic interaction perturbation in the Hamiltonian of the atoms involved. Explicit symmetry breaking differs from spontaneous symmetry breaking. In the latter, the defining equations respect the symmetry but the ground state (vacuum) of the theory breaks it. Explicit symmetry breaking is also associated with electromagnetic radiation. A system of accelerated charges results in electromagnetic radiation when the geometric symmetry of the electric field in free space is explicitly broken by the associated electrodynamic structure under time varying excitation of the given system. This is quite evident in an antenna where the electric lines of field curl around or have rotational geometry around the radiating terminals in contrast to linear geometric orientation within a pair of transmission lines which does not radiate even under time varying excitation. Perturbation theory in quantum mechanics A common setting for explicit symmetry breaking is perturbation theory in quantum mechanics. The symmetry is evident in a base Hamiltonian H₀. This is often an integrable Hamiltonian, admitting symmetries which in some sense make the Hamiltonian integrable. The base Hamiltonian might be chosen to provide a starting point close to the system being modelled. Mathematically, the symmetries can be described by a smooth symmetry group G. Under the action of this group, H₀ is invariant. The explicit symmetry breaking then comes from a second term in the Hamiltonian, V, which is not invariant under the action of G. This is sometimes interpre
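Schematically (standard notation, an illustration rather than a quotation from the article), with U(g) the operators representing the symmetry group on the Hilbert space:

```latex
H = H_0 + V,\qquad
[H_0,\,U(g)] = 0 \;\;\text{for all } g\in G,\qquad
[V,\,U(g)] \neq 0 \;\;\text{for some } g\in G .
```

In the Zeeman example mentioned above, H₀ is the rotationally invariant atomic Hamiltonian, V is the magnetic interaction with the external field, and the surviving symmetry is the group of rotations about the field direction.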
https://en.wikipedia.org/wiki/Maynard%20operation%20sequence%20technique
Maynard operation sequence technique (MOST) is a predetermined motion time system that is used primarily in industrial settings to set the standard time in which a worker should perform a task. To calculate this, a task is broken down into individual motion elements, and each is assigned a numerical time value in units known as time measurement units, or TMUs, where 100,000 TMUs is equivalent to one hour. All the motion element times are then added together and any allowances applied; the result is the standard time (a short worked example appears below). MOST is more common in Asia, whereas the original and more sophisticated Methods-Time Measurement technique, better known as MTM, is a global standard. The most commonly used form of MOST is BasicMOST, which was released in Sweden in 1972 and in the United States in 1974. Two other variations were released in 1980, called MiniMOST and MaxiMOST. The difference between the three is their level of focus—the motions recorded in BasicMOST are on the level of tens of TMUs, while MiniMOST uses individual TMUs and MaxiMOST uses hundreds of TMUs. This allows for a variety of applications—MiniMOST is commonly used for short (less than about a minute), repetitive cycles, and MaxiMOST for longer (more than several minutes), non-repetitive operations. BasicMOST sits between them and can be used accurately for operations ranging from less than a minute to about ten minutes. Another variation of MOST is known as AdminMOST. Originally developed and released under the name ClericalMOST in the 1970s, it was later updated to include modern administrative tasks and renamed. It is on the same level of focus as BasicMOST. AutoMOST, a knowledge-based system employing decision trees, could be used until Windows dropped support for 16-bit programs. Developers created logic trees that operators without industrial-engineering training could then use to generate standard times; the user answered a series of logic questions to route t
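The TMU arithmetic above is simple enough to show in a few lines; the element values and the 10% allowance in this sketch are made-up illustration figures, not MOST data.

```python
TMU_PER_HOUR = 100_000  # by definition, so 1 TMU = 3600 / 100_000 = 0.036 s

def standard_time_seconds(element_tmus, allowance_fraction=0.0):
    """Sum motion-element TMUs, apply an allowance, return seconds."""
    total_tmu = sum(element_tmus) * (1 + allowance_fraction)
    return total_tmu * 3600 / TMU_PER_HOUR

# Hypothetical task: three elements of 30, 50 and 20 TMUs plus a 10% allowance.
print(standard_time_seconds([30, 50, 20], allowance_fraction=0.10))  # 3.96 s
```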
https://en.wikipedia.org/wiki/Microconnect%20distributed%20antenna
Microconnect distributed antennas (MDA) are small-cell local area (100 metre radius) transmitter-receivers usually fitted to lampposts and other street furniture in order to provide Wireless LAN, GSM and GPRS connectivity. They are therefore less obtrusive than the usual masts and antennas used for these purposes and meet with less public opposition. Service provided Microconnect distributed antennas serve heavily populated urban areas, providing mobile and radio connectivity. MDA is also suited to busy cities and historical areas where mobile coverage is otherwise impaired. A network of many small, low-power antennas performs as well as, or better than, a traditional macrocellular site over the same area. The centrally located radio base station connects to the antennas by optical fibre. Each antenna point contains a 63–65 GHz wireless unit alongside a large memory store providing proxy and cache services, and users can obtain a 64 kbit/s uplink and 384 kbit/s downlink service. Multiple operators can share this infrastructure, so different service providers can use the same equipment to serve their customers. Four-part MDA system The four-part MDA system consists of the DAS (Distributed Antenna System) master unit, the access-network optical fibre, the remote Radio over Fibre (RoF) units (remote antenna points), and the supervisory and management facilities. The system is compatible with the GSM (2G and 2.5G) and 3G network requirements of mobile users. MDA offers a comparatively low-cost way to give more people access to mobile and broadband connections, with a low environmental impact that avoids cluttering the historical parts of an urban area. As communities become increasingly dependent on technology, solutions like the MDA system help preserve such settings. See also Distributed antenna system In-Building Cellular Enhancement System References External
https://en.wikipedia.org/wiki/Ocean%20Way%20Recording
Ocean Way Recording was a series of recording studios established by recording engineer and producer Allen Sides with locations in Los Angeles, California, Nashville, Tennessee, and Saint Barthélemy. Ocean Way Recording no longer operates recording facilities, but Ocean Way Nashville continues to operate under the ownership of Belmont University. History Background In 1972, Ocean Way founder Allen Sides opened a studio he had built in a 3 1/2-car garage on Ocean Way in Santa Monica, California for the purpose of demonstrating tri-amplified loudspeakers of his own design. In 1977, Sides, who had worked as a runner at United Western Recorders in the late 1960s, purchased enough equipment from UREI, the company of United Western founder Bill Putnam, to completely fill the garage space for just $6,000, attracting Putnam's attention. Sides and Putnam became friends and business partners, and Putnam offered Sides exclusive rights to sell UREI and United Western Studios' surplus equipment, providing Sides and his studio with a wide variety of studio equipment. Ocean Way Hollywood In 1976, after the lease on Sides' garage studio was abruptly canceled, Putnam provided space at United Western Recorders, leasing United Studio B to Sides. Six years later, Sides' lease at United Western expanded to include United Studio A. When Putnam sold his companies to Harman in 1984, Harman sold the Western building and its contents to Sides, who later also acquired the United building. Sides renamed the United Western Studios complex Ocean Way Recording after the location of his former garage studio. Ocean Way Studios operated the two-building complex from 1985 until 1999, when the former Western Studios building at 6000 Sunset Boulevard was partitioned, sold, and renamed Cello Studios. In 2006, the studio again changed ownership, and has since been in operation as EastWest Studios. Ocean Way Studios continued operations in the building at 6050 Sunset Boulevard until 2013, when it was sold to Hudson Pacific Prope
https://en.wikipedia.org/wiki/Q-matrix
In mathematics, a Q-matrix is a square matrix whose associated linear complementarity problem LCP(M,q) has a solution for every vector q. Properties M is a Q-matrix if there exists d > 0 such that LCP(M,0) and LCP(M,d) each have a unique solution. Any P-matrix is a Q-matrix. Conversely, if a matrix is a Z-matrix and a Q-matrix, then it is also a P-matrix. See also P-matrix Z-matrix References Matrix theory Matrices
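For reference, the linear complementarity problem LCP(M, q) referred to above is the standard one: given an n-by-n matrix M and a vector q, find a vector z satisfying the conditions below, so that M is a Q-matrix exactly when such a z exists for every q.

```latex
\mathrm{LCP}(M,q):\quad \text{find } z \in \mathbb{R}^{n} \ \text{such that}\quad
z \ge 0, \qquad w = Mz + q \ge 0, \qquad z^{\mathsf{T}} w = 0 .
```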
https://en.wikipedia.org/wiki/Functional%20integration%20%28neurobiology%29
Functional integration is the study of how brain regions work together to process information and effect responses. Though functional integration frequently relies on anatomic knowledge of the connections between brain areas, the emphasis is on how large clusters of neurons – numbering in the thousands or millions – fire together under various stimuli. The large datasets required for such a whole-scale picture of brain function have motivated the development of several novel and general methods for the statistical analysis of interdependence, such as dynamic causal modelling and statistical parametric mapping. These datasets are typically gathered in human subjects by non-invasive methods such as EEG/MEG, fMRI, or PET. The results can be of clinical value by helping to identify the regions responsible for psychiatric disorders, as well as to assess how different activities or lifestyles affect the functioning of the brain. Imaging techniques A study's choice of imaging modality depends on the desired spatial and temporal resolution. fMRI and PET offer relatively high spatial resolution, with voxel dimensions on the order of a few millimeters, but their relatively low sampling rate hinders the observation of rapid and transient interactions between distant regions of the brain. These temporal limitations are overcome by MEG, but at the cost of only detecting signals from much larger clusters of neurons. fMRI Functional magnetic resonance imaging (fMRI) is a form of MRI that exploits the difference in magnetic properties between oxy- and deoxyhemoglobin to assess blood flow to different parts of the brain. Typical sampling intervals for fMRI images are in the tenths of seconds. MEG Magnetoencephalography (MEG) is an imaging modality that uses very sensitive magnetometers to measure the magnetic fields resulting from ionic currents flowing through neurons in the brain. High-quality MEG machines allow for sub-millisecond sampling intervals.
https://en.wikipedia.org/wiki/Proxy%20re-encryption
Proxy re-encryption (PRE) schemes are cryptosystems which allow third parties (proxies) to alter a ciphertext which has been encrypted for one party, so that it may be decrypted by another. Examples of use Proxy re-encryption is generally used when one party, say Bob, wants to reveal the contents of messages sent to him and encrypted with his public key to a third party, Charlie, without revealing his private key to Charlie. Bob does not want the proxy to be able to read the contents of his messages. Bob could designate a proxy to re-encrypt one of his messages that is to be sent to Charlie: Bob generates a re-encryption key and gives it to the proxy, and when Bob sends Charlie a message that was encrypted under Bob's key, the proxy uses this key to transform the ciphertext so that Charlie can decrypt it with his own private key. This method allows for a number of applications such as e-mail forwarding, law-enforcement monitoring, and content distribution. A weaker re-encryption scheme is one in which the proxy possesses both parties' keys simultaneously: one key decrypts the ciphertext, while the other re-encrypts the recovered plaintext. Since the goal of many proxy re-encryption schemes is to avoid revealing either of the keys or the underlying plaintext to the proxy, this method is not ideal. Defining functions Proxy re-encryption schemes are similar to traditional symmetric or asymmetric encryption schemes, with the addition of two functions: Delegation – allows a message recipient (keyholder) to generate a re-encryption key based on his secret key and the key of the delegated user. This re-encryption key is used by the proxy as input to the re-encryption function, which is executed by the proxy to translate ciphertexts to the delegated user's key. Asymmetric proxy re-encryption schemes come in bi-directional and uni-directional varieties. In a bi-directional scheme, the re-encryption scheme is reversible—that is, the re-encryption key can be used to translate messages from Bob to Charlie, as well as from Char
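To make the delegation and re-encryption steps concrete, here is a toy, deliberately insecure sketch in the spirit of the classic ElGamal-based bidirectional scheme of Blaze, Bleumer and Strauss: Bob's re-encryption key is sk_Charlie / sk_Bob, and the proxy uses it to turn a ciphertext under Bob's key into one under Charlie's without ever seeing the plaintext. The tiny group parameters and all names are illustrative assumptions, not taken from the text, and this must not be used for real cryptography.

```python
import random

# Toy parameters (illustrative only): safe prime p = 2q + 1 and a generator g
# of the prime-order-q subgroup. Real deployments use ~2048-bit groups or curves.
p, q, g = 467, 233, 4

def keygen():
    sk = random.randrange(1, q)              # secret exponent
    return sk, pow(g, sk, p)                 # (sk, pk = g^sk mod p)

def encrypt(pk, m):
    """BBS-style ciphertext under pk = g^sk: (m * g^k mod p, pk^k mod p)."""
    k = random.randrange(1, q)
    return (m * pow(g, k, p)) % p, pow(pk, k, p)

def decrypt(sk, ct):
    c1, c2 = ct
    shared = pow(c2, pow(sk, -1, q), p)      # recover g^k (needs Python 3.8+ for pow(x, -1, m))
    return (c1 * pow(shared, -1, p)) % p

def rekey(sk_from, sk_to):
    """Bidirectional re-encryption key: sk_to / sk_from (mod q)."""
    return (sk_to * pow(sk_from, -1, q)) % q

def reencrypt(rk, ct):
    c1, c2 = ct
    return c1, pow(c2, rk, p)                # g^(k*sk_from) -> g^(k*sk_to); plaintext never seen

sk_bob, pk_bob = keygen()
sk_charlie, pk_charlie = keygen()
ct_for_bob = encrypt(pk_bob, 42)             # message must be smaller than p in this toy
ct_for_charlie = reencrypt(rekey(sk_bob, sk_charlie), ct_for_bob)
assert decrypt(sk_charlie, ct_for_charlie) == 42
```

The bidirectional property mentioned above is visible here: inverting the same re-encryption key modulo q translates ciphertexts in the opposite direction, which is why such schemes require mutual trust between the delegating parties.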
https://en.wikipedia.org/wiki/Standard%20cell
In semiconductor design, standard-cell methodology is a method of designing application-specific integrated circuits (ASICs) with mostly digital-logic features. Standard-cell methodology is an example of design abstraction, whereby a low-level very-large-scale integration (VLSI) layout is encapsulated into an abstract logic representation (such as a NAND gate). Cell-based methodology – the general class to which standard cells belong – makes it possible for one designer to focus on the high-level (logical function) aspect of digital design, while another designer focuses on the implementation (physical) aspect. Along with semiconductor manufacturing advances, standard-cell methodology has helped designers scale ASICs from comparatively simple single-function ICs (of several thousand gates) to complex multi-million-gate system-on-a-chip (SoC) devices. Construction of a standard cell A standard cell is a group of transistor and interconnect structures that provides a boolean logic function (e.g., AND, OR, XOR, XNOR, inverters) or a storage function (flipflop or latch). The simplest cells are direct representations of the elemental NAND, NOR, and XOR boolean functions, although cells of much greater complexity are commonly used (such as a 2-bit full adder or a muxed D-input flipflop). The cell's boolean logic function is called its logical view: functional behavior is captured in the form of a truth table or Boolean algebra equation (for combinational logic), or a state transition table (for sequential logic). Usually, the initial design of a standard cell is developed at the transistor level, in the form of a transistor netlist or schematic view. The netlist is a nodal description of transistors, of their connections to each other, and of their terminals (ports) to the external environment. A schematic view may be generated with a number of different Computer Aided Design (CAD) or Electronic Design Automation (EDA) programs that provide a Graphical User Interface (
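As a small illustration of the "logical view" described above, the functional behavior of a 2-input NAND cell can be captured as a truth table; the cell and pin names here are generic placeholders, not taken from any particular library.

```python
# Logical view of a generic 2-input NAND standard cell (names are illustrative).
NAND2_LOGICAL_VIEW = {
    "cell": "NAND2_X1",              # hypothetical library cell name
    "inputs": ("A", "B"),
    "output": "Y",
    "truth_table": {                 # (A, B) -> Y
        (0, 0): 1,
        (0, 1): 1,
        (1, 0): 1,
        (1, 1): 0,
    },
}

def evaluate(view, a, b):
    """Evaluate the cell's combinational function directly from its truth table."""
    return view["truth_table"][(a, b)]

assert evaluate(NAND2_LOGICAL_VIEW, 1, 1) == 0
assert all(evaluate(NAND2_LOGICAL_VIEW, a, b) == 1 - (a & b) for a in (0, 1) for b in (0, 1))
```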
https://en.wikipedia.org/wiki/VideoNow
The VideoNow is a portable video player produced by Hasbro and released by its subsidiary Tiger Electronics in 2003; it was considered the most popular product in Tiger's line of Now consumer products. The systems use discs called PVDs (Personal Video Discs), which can store about 30 minutes (half an hour) of video, the length of an average TV show with commercials (a typical TV episode is about 20–23 minutes without them). Each PVD therefore contains only one episode, with trailers at the end to fill the leftover time on most PVDs, including Nickelodeon PVDs. Video data is stored on the left audio channel and the soundtrack on the right channel, making stereo sound impossible on the system, which plays only in black and white. The video plays at 15 frames per second. Most of the shows were from Nickelodeon, such as SpongeBob SquarePants and The Fairly OddParents; shows from Cartoon Network, such as Ed, Edd n Eddy and Dexter's Laboratory, were released later, while Disney mostly released episodes of America’s Funniest Home Videos and one Hannah Montana music video. A small number of movies were also released on the system, but due to the limited space on a PVD, a film had to be spread across at least three discs, depending on its length. Hasbro also produced editing software for creating custom VideoNow Color PVDs called the VideoNow Media Wizard in 2005, which came with blank PVD media. A number of unofficial solutions are available for creating the oddly formatted VideoNow files, including a plug-in for the popular video processing program VirtualDub. The files can then be burned to a CD-R using standard CD burning software, and the disc cut down to the required size. As the VideoNow Color does not accept standard 8 cm mini-CDs, some creative users have resorted to cutting down standard 12 cm CD-R discs, though not without problems. Hasbro made recordable PVDs available without the Media Wizard from their online store. However
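A quick back-of-the-envelope calculation shows how the audio-channel encoding constrains the format. The CD-standard 44.1 kHz sample rate is an assumption here (the text does not state the PVD sampling rate); the 15 fps figure and the roughly 30-minute capacity come from the description above.

```python
SAMPLE_RATE_HZ = 44_100        # assumed CD-standard audio sample rate (not stated in the text)
FPS = 15                       # frame rate stated for the format
MINUTES_PER_PVD = 30           # approximate stated capacity

samples_per_frame = SAMPLE_RATE_HZ // FPS     # 2940 left-channel samples available per frame
frames_per_disc = MINUTES_PER_PVD * 60 * FPS  # 27000 frames in roughly 30 minutes
print(samples_per_frame, frames_per_disc)
```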