https://en.wikipedia.org/wiki/System%20usability%20scale
In systems engineering, the system usability scale (SUS) is a simple, ten-item attitude Likert scale giving a global view of subjective assessments of usability. It was developed by John Brooke at Digital Equipment Corporation in the UK in 1986 as a tool to be used in usability engineering of electronic office systems. The usability of a system, as defined by the ISO standard ISO 9241 Part 11, can be measured only by taking into account the context of use of the system—i.e., who is using the system, what they are using it for, and the environment in which they are using it. Furthermore, measurements of usability have several different aspects: effectiveness (can users successfully achieve their objectives?), efficiency (how much effort and resource is expended in achieving those objectives?), and satisfaction (was the experience satisfactory?). Measures of effectiveness and efficiency are also context specific. Effectiveness in using a system for controlling a continuous industrial process would generally be measured in very different terms to, say, effectiveness in using a text editor. Thus, it can be difficult, if not impossible, to answer the question "is system A more usable than system B", because the measures of effectiveness and efficiency may be very different. However, it can be argued that given a sufficiently high-level definition of subjective assessments of usability, comparisons can be made between systems. SUS has generally been seen as providing this type of high-level subjective view of usability and is thus often used in carrying out comparisons of usability between systems. Because it yields a single score on a scale of 0–100, it can be used to compare even systems that are outwardly dissimilar. This one-dimensional aspect of the SUS is both a benefit and a drawback, because the questionnaire is necessarily quite general. Recently, Lewis and Sauro suggested a two-factor orthogonal structure, which practitioners may use to score the SUS on independent usability and learnability dimensions.
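For reference, the single 0–100 score mentioned above is conventionally computed from the ten 1–5 responses using Brooke's standard scoring rule; the Python sketch below is illustrative and is not part of the source text.

```python
def sus_score(responses):
    """Compute the 0-100 SUS score from ten 1-5 Likert responses (item 1 first).

    Standard scoring: odd-numbered items contribute (response - 1), even-numbered
    items contribute (5 - response); the sum of contributions is multiplied by 2.5.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten responses in the range 1-5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd-numbered item
        for i, r in enumerate(responses)
    ]
    return 2.5 * sum(contributions)

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # best case -> 100.0
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # -> 75.0
```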
https://en.wikipedia.org/wiki/DermAtlas
DermAtlas is an open-access website devoted to dermatology that is hosted by Johns Hopkins University's Bernard A. Cohen and Christoph U. Lehmann. Its goal is to build a large-high-quality dermatologic atlas, a database of images of skin conditions, and it encourages its users to submit their dermatology images and links for inclusion. It is edited in a collaborative fashion by physicians around the globe and includes an online Dermatology Quiz, that allows anyone to test their dermatology knowledge. The database currently includes over 10,500 images and consists of both clinical images and histological images. Great emphasis is placed on dermatological conditions in pediatric patients. References External links Interactive Dermatology Atlas Online databases Johns Hopkins University Johns Hopkins Hospital American medical websites
https://en.wikipedia.org/wiki/Sauerbrey%20equation
The Sauerbrey equation was developed by the German Günter Sauerbrey in 1959, while working on his doctoral thesis at the Technical University of Berlin, Germany. It is a method for correlating changes in the oscillation frequency of a piezoelectric crystal with the mass deposited on it. He simultaneously developed a method for measuring the characteristic frequency and its changes by using the crystal as the frequency determining component of an oscillator circuit. His method continues to be used as the primary tool in quartz crystal microbalance (QCM) experiments for conversion of frequency to mass and is valid in nearly all applications. The equation is derived by treating the deposited mass as though it were an extension of the thickness of the underlying quartz. Because of this, the mass to frequency correlation (as determined by Sauerbrey’s equation) is largely independent of electrode geometry. This has the benefit of allowing mass determination without calibration, making the set-up desirable from a cost and time investment standpoint. The Sauerbrey equation is defined as: Δf = −(2 f₀² / (A √(ρq·μq))) · Δm, where: f₀ – resonant frequency of the fundamental mode (Hz); Δf – normalized frequency change (Hz); Δm – mass change (g); A – piezoelectrically active crystal area (area between electrodes, cm²); ρq – density of quartz (ρq = 2.648 g/cm³); μq – shear modulus of quartz for AT-cut crystal (μq = 2.947×10¹¹ g·cm⁻¹·s⁻²). The normalized frequency is the nominal frequency shift of that mode divided by its mode number (most software outputs normalized frequency shift by default). Because the film is treated as an extension of thickness, Sauerbrey’s equation only applies to systems in which the following three conditions are met: the deposited mass must be rigid, the deposited mass must be distributed evenly, and the fractional frequency change Δf/f < 0.05. If the change in frequency is greater than 5%, that is, Δf/f > 0.05, the Z-match method must be used to determine the change in mass. The formula for the Z-match method is: Equa
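Rearranging the equation above for the mass change gives Δm = −Δf · A · √(ρq·μq) / (2·f₀²). A minimal Python sketch of this conversion follows, using the quartz constants quoted above; the example crystal parameters (5 MHz fundamental, 1 cm² active area, −10 Hz shift) are assumptions for illustration only.

```python
import math

RHO_Q = 2.648      # density of quartz, g/cm^3 (value quoted in the excerpt)
MU_Q = 2.947e11    # shear modulus of AT-cut quartz, g*cm^-1*s^-2 (value quoted in the excerpt)

def sauerbrey_mass_change(f0_hz, delta_f_hz, area_cm2):
    """Mass change (g) from the normalized frequency shift: dm = -df * A * sqrt(rho*mu) / (2*f0^2)."""
    return -delta_f_hz * area_cm2 * math.sqrt(RHO_Q * MU_Q) / (2.0 * f0_hz ** 2)

# Example (assumed parameters): a 5 MHz crystal, 1 cm^2 active area, -10 Hz shift.
dm = sauerbrey_mass_change(5e6, -10.0, 1.0)
print(f"{dm:.3e} g  (~{dm * 1e9:.0f} ng)")   # about 1.77e-7 g, i.e. ~177 ng
```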
https://en.wikipedia.org/wiki/Prefactoring
Prefactoring is the application of experience to the creation of new software systems. Its relationship to its namesake refactoring is that lessons learned from refactoring are part of that experience. Experience is captured in guidelines that can be applied to a development process. The guidelines have come from a number of sources, including Jerry Weinberg, Norm Kerth, and Scott Ambler. These guidelines include: "When you're abstract, be abstract all the way" "Splitters can be lumped more easily than lumpers can be split" "Use the client’s language" References Further reading (this book won the Jolt award in 2006) External links What Is Prefactoring? Code refactoring
https://en.wikipedia.org/wiki/Post%E2%80%93Turing%20machine
A Post–Turing machine is a "program formulation" of a type of Turing machine, comprising a variant of Emil Post's Turing-equivalent model of computation. Post's model and Turing's model, though very similar to one another, were developed independently. Turing's paper was received for publication in May 1936, followed by Post's in October. A Post–Turing machine uses a binary alphabet, an infinite sequence of binary storage locations, and a primitive programming language with instructions for bi-directional movement among the storage locations and alteration of their contents one at a time. The names "Post–Turing program" and "Post–Turing machine" were used by Martin Davis in 1973–1974 (Davis 1973, p. 69ff). Later in 1980, Davis used the name "Turing–Post program" (Davis, in Steen p. 241). 1936: Post model In his 1936 paper "Finite Combinatory Processes—Formulation 1", Emil Post described a model which he conjectured to be "logically equivalent to recursiveness". Post's model of a computation differs from the Turing-machine model in a further "atomization" of the acts a human "computer" would perform during a computation. Post's model employs a "symbol space" consisting of a "two-way infinite sequence of spaces or boxes", each box capable of being in either of two possible conditions, namely "marked" (as by a single vertical stroke) and "unmarked" (empty). Initially, finitely many of the boxes are marked, the rest being unmarked. A "worker" is then to move among the boxes, being in and operating in only one box at a time, according to a fixed finite "set of directions" (instructions), which are numbered in order (1,2,3,...,n). Beginning at a box "singled out as the starting point", the worker is to follow the set of instructions one at a time, beginning with instruction 1. There are five different primitive operations that the worker can perform: (a) Marking the box it is in, if it is empty (b) Erasing the mark in the box it is in, if it is marked (c) Mov
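The primitive operations listed above (the excerpt is cut off; in Post's standard formulation the remaining ones are moving one box to the right, moving one box to the left, and determining whether the current box is marked, with a direction also able to order the worker to stop) are simple enough to interpret directly. The following is a minimal, illustrative Python sketch; the instruction encoding, the set-based tape representation, and the example program are assumptions of this example, not Post's notation.

```python
# Minimal interpreter for a Post-style "worker" program (illustrative sketch).
# Assumed instruction encoding:
#   ("MARK",), ("ERASE",), ("RIGHT",), ("LEFT",),
#   ("BRANCH", i, j)  -> go to instruction i if the current box is marked, else j,
#   ("STOP",)
# Instructions are numbered from 1, as in Post's description.

def run(program, marked=frozenset(), start_box=0, max_steps=10_000):
    tape = set(marked)   # positions of marked boxes on the two-way infinite tape
    box = start_box      # the box the worker currently occupies
    pc = 1               # current instruction number
    for _ in range(max_steps):
        op = program[pc - 1]
        kind = op[0]
        if kind == "MARK":
            tape.add(box)
        elif kind == "ERASE":
            tape.discard(box)
        elif kind == "RIGHT":
            box += 1
        elif kind == "LEFT":
            box -= 1
        elif kind == "BRANCH":
            pc = op[1] if box in tape else op[2]
            continue
        elif kind == "STOP":
            return tape
        pc += 1
    raise RuntimeError("step limit exceeded")

# Example program: move right until an empty box is found, then mark it
# (appends one stroke to a block of marks).
program = [
    ("BRANCH", 2, 4),   # 1: current box marked? yes -> 2, no -> 4
    ("RIGHT",),         # 2: move one box to the right
    ("BRANCH", 2, 4),   # 3: re-test the new box: marked -> keep moving, empty -> mark
    ("MARK",),          # 4: mark the first empty box found
    ("STOP",),          # 5: halt
]
print(run(program, marked={0, 1, 2}))   # -> {0, 1, 2, 3}
```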
https://en.wikipedia.org/wiki/Drive%20Image%20%28software%29
Drive Image (PQDI) is a software disk cloning package for Intel-based computers. The software was developed and distributed by the former PowerQuest Corporation. Drive Image version 7 became the basis for Norton Ghost 9.0, which was released to retail markets in August 2004. Ghost was a competing product, developed by Binary Research, before Symantec bought the company in 1998. This also explains the different file extensions used for Ghost image files: formerly it was .gho, now in versions 9.0 and above it is .v2i. Product history Drive Image version 7 was the last version published under the PowerQuest corporate banner. It was also the first version to include a native Windows interface for cloning an active system partition; prior versions required a reboot into a DOS-like environment in order to clone the active partition. In order to clone active partitions without requiring a reboot, Drive Image 7 employed a volume snapshot device driver which was licensed from StorageCraft Technology Corporation. Drive Image 2002 (version 6) is the last release that allows the creation of a rescue set on floppy disk, which can be used to create and restore an image. See also List of disk cloning software References External links Symantec Corporation website Drive Image v7 review by PCWorld Storage software Gen Digital software
https://en.wikipedia.org/wiki/Design%20closure
Design Closure is a part of the digital electronic design automation workflow by which an integrated circuit (i.e. VLSI) design is modified from its initial description to meet a growing list of design constraints and objectives. Every step in the IC design (such as static timing analysis, placement, routing, and so on) is already complex and often forms its own field of study. This article, however, looks at the overall design closure process, which takes a chip from its initial design state to the final form in which all of its design constraints are met. Introduction Every chip starts off as someone’s idea of a good thing: "If we can make a part that performs function X, we will all be rich!" Once the concept is established, someone from marketing says "To make this chip profitably, it must cost $C and run at frequency F." Someone from manufacturing says "To meet this chip’s targets, it must have a yield of Y%." Someone from packaging says “It must fit in the P package and dissipate no more than W watts.” Eventually, the team generates an extensive list of all the constraints and objectives they must meet to manufacture a product that can be sold profitably. The management then forms a design team, which consists of chip architects, logic designers, functional verification engineers, physical designers, and timing engineers, and assigns them to create a chip to the specifications. Constraints vs Objectives The distinction between constraints and objectives is straightforward: a constraint is a design target that must be met for the design to be successful. For example, a chip may be required to run at a specific frequency so it can interface with other components in a system. In contrast, an objective is a design target where more (or less) is better. For example, yield is generally an objective, which is maximized to lower manufacturing cost. For the purposes of design closure, the distinction between constraints and objectives is not important; this artic
https://en.wikipedia.org/wiki/Legacy%20mode
In computing, legacy mode is a state in which a computer system, component, or software application behaves in a way that is different from its standard operation in order to support older software, data, or expected behavior. It differs from backward compatibility in that an item in legacy mode will often sacrifice newer features or performance, or be unable to access data or run programs it normally could, in order to provide continued access to older data or functionality. Sometimes it can allow newer technologies that replaced the old to emulate them when running older operating systems. Examples x86-64 processors can run in one of two states: long mode, which provides larger physical address spaces and the ability to run 64-bit applications that can use larger virtual address spaces and more registers, and legacy mode. Legacy mode allows these processors to act as if they were 16- or 32-bit x86 processors, with all of the abilities and limitations of those processors, in order to run legacy 16-bit and 32-bit operating systems, and to run programs requiring virtual 8086 mode to run in Windows. 32-bit x86 processors themselves have two legacy modes: real mode and virtual 8086 mode. Real mode causes the processor to mostly act as if it were an original 8086, while virtual 8086 mode allows the creation of a virtual machine to allow the running of programs that require real mode in order to run under a protected mode environment. Protected mode is the non-legacy mode of 32-bit x86 processors and the 80286. Most PC graphic cards have a VGA and an SVGA mode that allows them to be used on systems that have not loaded the device driver necessary to take advantage of their more advanced features. Operating systems often have a special mode allowing them to emulate an older release in order to support software applications dependent on the specific interfaces and behavior of that release. Windows XP can be configured to emulate Windows 2000 and Windows 98; Mac OS X c
https://en.wikipedia.org/wiki/Protein-fragment%20complementation%20assay
Within the field of molecular biology, a protein-fragment complementation assay, or PCA, is a method for the identification and quantification of protein–protein interactions. In the PCA, the proteins of interest ("bait" and "prey") are each covalently linked to fragments of a third protein (e.g. DHFR, which acts as a "reporter"). Interaction between the bait and the prey proteins brings the fragments of the reporter protein in close proximity to allow them to form a functional reporter protein whose activity can be measured. This principle can be applied to many different reporter proteins and is also the basis for the yeast two-hybrid system, an archetypical PCA assay. Split protein assays Any protein that can be split into two parts and reconstituted non-covalently to form a functional protein may be used in a PCA. The two fragments however have low affinity for each other and must be brought together by other interacting proteins fused to them (often called "bait" and "prey" since the bait protein can be used to identify a prey protein, see figure). The protein that produces a detectable readout is called "reporter". Usually enzymes which confer resistance to nutrient deprivation or antibiotics, such as dihydrofolate reductase or beta-lactamase respectively, or proteins that give colorimetric or fluorescent signals are used as reporters. When fluorescent proteins are reconstituted the PCA is called Bimolecular fluorescence complementation assay. The following proteins have been used in split protein PCAs: Beta-lactamase Dihydrofolate reductase (DHFR) Focal adhesion kinase (FAK) Gal4, a yeast transcription factor (as in the classical yeast two-hybrid system) GFP (split-GFP), e.g. EGFP (enhanced green fluorescent protein) Horseradish peroxidase Infrared fluorescent protein IFP1.4, an engineered chromophore-binding domain (CBD) of a bacteriophytochrome from Deinococcus radiodurans LacZ (beta-galactosidase) Luciferase, including ReBiL (recombinase enhanc
https://en.wikipedia.org/wiki/Flat%20Display%20Mounting%20Interface
The Flat Display Mounting Interface (FDMI), also known as VESA Mounting Interface Standard (MIS) or colloquially as VESA mount, is a family of standards defined by the Video Electronics Standards Association for mounting flat panel monitors, televisions, and other displays to stands or wall mounts. It is implemented on most modern flat-panel monitors and televisions. As well as being used for mounting monitors, the standards can be used to attach a small PC to the monitor mount. The first standard in this family was introduced in 1997 and was originally called Flat Panel Monitor Physical Mounting Interface (FPMPMI); it corresponds to part D of the current standard. Variants Most sizes of VESA mount have four screw-holes arranged in a square on the mount, with matching tapped holes on the device. The horizontal and vertical distances between the screw centres are labelled 'A' and 'B' respectively. The original layout was a 100 mm square. A 75 mm square layout was defined for smaller displays. Later, variants were added for screens with much smaller diagonals. The FDMI was extended in 2006 with additional screw patterns that are more appropriate for larger TV screens. Thus the standard now specifies seven sizes, each with more than one variant. These are referenced as parts B to F of the standard or with official abbreviations, usually prefixed by the word "VESA". Unofficially, the variants are sometimes referenced as just "VESA" followed by the pattern size in mm, which is slightly ambiguous for the names "VESA 50" (four possibilities), "VESA 75" (two possibilities) and "VESA 200" (three possibilities). However, if "VESA 100" is accepted as meaning the original variant ("VESA MIS-D, 100"), then all but "VESA MIS-E" and "VESA MIS-F, 200" have at least one unique dimension that can be used in this way, as can be seen from the tables below. Notes If a screen is heavier or larger than specified in table 1, it should use a larger variant from the table, for instance, a 30-in L
https://en.wikipedia.org/wiki/List%20of%20first-order%20theories
In first-order logic, a first-order theory is given by a set of axioms in some language. This entry lists some of the more common examples used in model theory and some of their properties. Preliminaries For every natural mathematical structure there is a signature σ listing the constants, functions, and relations of the theory together with their arities, so that the object is naturally a σ-structure. Given a signature σ there is a unique first-order language Lσ that can be used to capture the first-order expressible facts about the σ-structure. There are two common ways to specify theories: List or describe a set of sentences in the language Lσ, called the axioms of the theory. Give a set of σ-structures, and define a theory to be the set of sentences in Lσ holding in all these models. For example, the "theory of finite fields" consists of all sentences in the language of fields that are true in all finite fields. An Lσ theory may: be consistent: no proof of contradiction exists; be satisfiable: there exists a σ-structure for which the sentences of the theory are all true (by the completeness theorem, satisfiability is equivalent to consistency); be complete: for any statement, either it or its negation is provable; have quantifier elimination; eliminate imaginaries; be finitely axiomatizable; be decidable: There is an algorithm to decide which statements are provable; be recursively axiomatizable; be model complete or sub-model complete; be κ-categorical: All models of cardinality κ are isomorphic; be stable or unstable; be ω-stable (same as totally transcendental for countable theories); be superstable have an atomic model; have a prime model; have a saturated model. Pure identity theories The signature of the pure identity theory is empty, with no functions, constants, or relations. Pure identity theory has no (non-logical) axioms. It is decidable. One of the few interesting properties that can be stated in the language of pure identity theory
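The excerpt breaks off while introducing the properties expressible in pure identity theory; the standard illustration (not taken from the excerpt) is a sentence asserting the existence of at least n distinct elements, which uses nothing but quantifiers and equality:

```latex
% "There are at least n distinct elements", stated with equality alone:
\lambda_{\ge n} \;:=\; \exists x_1 \cdots \exists x_n \bigwedge_{1 \le i < j \le n} x_i \neq x_j
```

Up to logical equivalence, every sentence of pure identity theory is a Boolean combination of such cardinality statements, which is why its models are classified by cardinality alone.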
https://en.wikipedia.org/wiki/MI1
MI1 or British Military Intelligence, Section 1 was a department of the British Directorate of Military Intelligence, part of the War Office. It was set up during World War I. It contained "C&C", which was responsible for code breaking. Its subsections in World War I were: MI1a: Distribution of reports, intelligence records. MI1b: Interception and cryptanalysis. MI1c: The Secret Service/SIS. MI1d: Communications security. MI1e: Wireless telegraphy. MI1f: Personnel and finance. MI1g: Security, deception and counter intelligence. In 1919 MI1b and the Royal Navy's (NID25) "Room 40" were closed down and merged into the inter-service Government Code and Cypher School (GC&CS), which subsequently developed into the Government Communications Headquarters (GCHQ) at Cheltenham. From 1915, MI1(b) was headed by Malcolm Vivian Hay. Oliver Strachey was in MI1 during World War I. He transferred to GC&CS and served there during World War II. John Tiltman was seconded to MI1 shortly before it merged with Room 40. Notes References What happened to MI1 - MI4? Updated and extended version of Action This Day: From Breaking of the Enigma Code to the Birth of the Modern Computer Bantam Press 2001 Gannon, Paul, Inside Room 40: The Codebreakers of World War I, Ian Allan Publishing, 2011, Cryptography organizations Defunct United Kingdom intelligence agencies 1910s establishments in the United Kingdom Military units and formations disestablished in 1919 United Kingdom in World War I Military communications of the United Kingdom
https://en.wikipedia.org/wiki/E%C3%B6tv%C3%B6s%20effect
The Eötvös effect is the change in measured Earth's gravity caused by the change in centrifugal acceleration resulting from eastbound or westbound velocity. When moving eastbound, the object's angular velocity is increased (in addition to Earth's rotation), and thus the centrifugal force also increases, causing a perceived reduction in gravitational force. Discovery In the early 1900s, a German team from the Geodetic Institute of Potsdam carried out gravity measurements on moving ships in the Atlantic, Indian, and Pacific oceans. While studying their results, the Hungarian nobleman and physicist Baron Roland von Eötvös (Loránd Eötvös) noticed that the readings were lower when the boat moved eastwards, higher when it moved westward. He identified this as primarily a consequence of Earth's rotation. In 1908, new measurements were made in the Black Sea on two ships, one moving eastward and one westward. The results substantiated Eötvös' claim. Formulation Geodesists use the following formula to correct for velocity relative to Earth during a gravimetric run: a_r = 2Ωu cos(ϕ) + (u² + v²)/R. Here, a_r is the relative acceleration, Ω is the rotation rate of the Earth, u is the velocity in longitudinal direction (east-west), ϕ is the latitude where the measurements are taken, v is the velocity in latitudinal direction (north-south), and R is the radius of the Earth. The first term in the formula, 2Ωu cos(ϕ), corresponds to the Eötvös effect. The second term is a refinement that under normal circumstances is much smaller than the Eötvös effect. Physical explanation The most common design for a gravimeter for field work is a spring-based design; a spring that suspends an internal weight. The suspending force provided by the spring counteracts the gravitational force. A well-manufactured spring has the property that the amount of force that the spring exerts is proportional to the extension of the spring from its equilibrium position (Hooke's law). The stronger the effective gravity at a particular location, the mor
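Putting the correction formula above into code is straightforward; the sketch below is illustrative, and the Earth-radius and rotation-rate figures are standard values assumed here rather than taken from the excerpt.

```python
import math

OMEGA = 7.2921e-5      # Earth's rotation rate, rad/s (assumed standard value)
R_EARTH = 6_371_000.0  # mean Earth radius, m (assumed standard value)

def eotvos_correction(u_east, v_north, lat_deg):
    """Relative acceleration (m/s^2) per a_r = 2*Omega*u*cos(phi) + (u^2 + v^2)/R."""
    phi = math.radians(lat_deg)
    return 2.0 * OMEGA * u_east * math.cos(phi) + (u_east**2 + v_north**2) / R_EARTH

# A ship steaming east at 10 m/s (about 19 knots) at 45 degrees north:
print(eotvos_correction(10.0, 0.0, 45.0))   # ~1.05e-3 m/s^2, i.e. roughly 105 mGal
```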
https://en.wikipedia.org/wiki/Engineering%20Research%20Center%20for%20Wireless%20Integrated%20Microsystems
The NSF Engineering Research Center for Wireless Integrated Microsystems (ERC WIMS) was formed in 2000 in Michigan through the collaboration of the University of Michigan (UM), Michigan State University (MSU), and Michigan Technological University. The center is funded by the National Science Foundation. Additional contributions came from the state of Michigan, the three partnering core universities, other federal agencies, and a consortium of about twenty companies. Purpose The center researches innovations for wireless integrated microsystems. The ERC WIMS works on merging micropower circuits, wireless interfaces, biomedical and environmental sensors and subsystems, and advanced packaging to create microsystems that will have a pervasive impact on society during the next two decades. The partnership combined UM's programs in sensors and microsystems with MSU's leadership in materials, especially in diamond and in carbon nanotubes, and Michigan Tech's expertise in packaging, micromilling, and hot embossing. See also External links NSF Engineering Research Center for Wireless Integrated Microsystems at the University of Michigan Engineering research institutes Science and technology in Michigan Michigan State University Michigan Technological University University of Michigan Economy of Metro Detroit Wireless network organizations Research institutes established in 2000 2000 establishments in Michigan
https://en.wikipedia.org/wiki/Bifidobacterium%20animalis
Bifidobacterium animalis is a gram-positive, anaerobic, rod-shaped bacterium of the Bifidobacterium genus which can be found in the large intestines of most mammals, including humans. Bifidobacterium animalis and Bifidobacterium lactis were previously described as two distinct species. Presently, both are considered B. animalis with the subspecies Bifidobacterium animalis subsp. animalis and Bifidobacterium animalis subsp. lactis. Both old names B. animalis and B. lactis are still used on product labels, as this species is frequently used as a probiotic. In most cases, which subspecies is used in the product is not clear. Trade names Several companies have attempted to trademark particular strains, and as a marketing technique, have invented scientific-sounding names for the strains. Danone (Dannon in the United States) markets the subspecies strain as Bifidus Digestivum (UK), Bifidus Regularis (US and Mexico), Bifidobacterium Lactis or B.L. Regularis (Canada), DanRegularis (Brazil), Bifidus Actiregularis (Argentina, Austria, Belgium, Bulgaria, Chile, Czech Republic, France, Germany, Greece, Hungary, Israel, Italy, Kazakhstan, Netherlands, Portugal, Romania, Russia, South Africa, Spain and the UK), and Bifidus Essensis in the Middle East (and formerly in Hungary, Bulgaria, Romania and The Netherlands) through Activia from Safi Danone KSA. Chr. Hansen A/S from Denmark has a similar claim on a strain of Bifidobacterium animalis subsp. lactis, marketed under the trademark BB-12. Lidl lists "Bifidobacterium BB-12" in its "Proviact" yogurt. Bifidobacterium lactis Bl-04 and Bi-07 are strains from DuPont's Danisco FloraFIT range. They are used in many dietary probiotic supplements. Theralac contains the strains Bifidobacterium lactis BI-07 and Bifidobacterium lactis BL-34 (also called BI-04) in its probiotic capsule. Bifidobacterium lactis HN019 is a strain from Fonterra licensed to DuPont, which markets it as HOWARU Bifido. It is sold in a variety of commercial
https://en.wikipedia.org/wiki/Wythoff%20symbol
In geometry, the Wythoff symbol is a notation representing a Wythoff construction of a uniform polyhedron or plane tiling within a Schwarz triangle. It was first used by Coxeter, Longuet-Higgins and Miller in their enumeration of the uniform polyhedra. Later the Coxeter diagram was developed to mark uniform polytopes and honeycombs in n-dimensional space within a fundamental simplex. A Wythoff symbol consists of three numbers and a vertical bar. It represents one uniform polyhedron or tiling, although the same tiling/polyhedron can have different Wythoff symbols from different symmetry generators. For example, the regular cube can be represented by 3 | 2 4 with Oh symmetry, and 2 4 | 2 as a square prism with 2 colors and D4h symmetry, as well as 2 2 2 | with 3 colors and D2h symmetry. With a slight extension, Wythoff's symbol can be applied to all uniform polyhedra. However, the construction methods do not lead to all uniform tilings in Euclidean or hyperbolic space. Description The Wythoff construction begins by choosing a generator point on a fundamental triangle. This point must be chosen at equal distance from all edges that it does not lie on, and a perpendicular line is then dropped from it to each such edge. The three numbers in Wythoff's symbol, p, q, and r, represent the corners of the Schwarz triangle used in the construction, which are π/p, π/q, and π/r radians respectively. The triangle is also represented with the same numbers, written (p q r). The vertical bar in the symbol specifies a categorical position of the generator point within the fundamental triangle according to the following: p | q r indicates that the generator lies on the corner p, p q | r indicates that the generator lies on the edge between p and q, p q r | indicates that the generator lies in the interior of the triangle. In this notation the mirrors are labeled by the reflection-order of the opposite vertex. The p, q, r values are listed before the bar if the corresponding mirror is active. A special use is
https://en.wikipedia.org/wiki/Insertion%20sequence
An insertion element (also known as an IS, an insertion sequence element, or an IS element) is a short DNA sequence that acts as a simple transposable element. Insertion sequences have two major characteristics: they are small relative to other transposable elements (generally around 700 to 2500 bp in length) and only code for proteins implicated in the transposition activity (they are thus different from other transposons, which also carry accessory genes such as antibiotic resistance genes). These proteins are usually the transposase, which catalyses the enzymatic reaction allowing the IS to move, and also one regulatory protein which either stimulates or inhibits the transposition activity. The coding region in an insertion sequence is usually flanked by inverted repeats. For example, the well-known IS911 (1250 bp) is flanked by two 36 bp inverted repeats at its extremities, and the coding region contains two partially overlapping genes, orfA and orfAB, encoding a regulatory protein (OrfA) and the transposase (OrfAB). A particular insertion sequence may be named according to the form ISn, where n is a number (e.g. IS1, IS2, IS3, IS10, IS50, IS911, IS26 etc.); this is not the only naming scheme used, however. Although insertion sequences are usually discussed in the context of prokaryotic genomes, certain eukaryotic DNA sequences belonging to the family of Tc1/mariner transposable elements may be considered to be insertion sequences. In addition to occurring autonomously, insertion sequences may also occur as parts of composite transposons; in a composite transposon, two insertion sequences flank one or more accessory genes, such as an antibiotic resistance gene (e.g. Tn10, Tn5). Nevertheless, there exists another class of transposons, called unit transposons, which do not carry insertion sequences at their extremities (e.g. Tn7). A complex transposon does not rely on flanking insertion sequences for resolvase. The resolvase is part of the tns genome and cuts at flanking inverted repeats.
https://en.wikipedia.org/wiki/KAME%20project
The KAME project, a sub-project of the WIDE Project, was a joint effort of six organizations in Japan which aimed to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD Unix computer operating-system. The project began in 1998 and on November 7, 2005 it was announced that the project would be finished at the end of March 2006. The name KAME is a short version of Karigome, the location of the project's offices beside Keio University SFC. KAME Project's code is based on "WIDE Hydrangea" IPv6/IPsec stack by WIDE Project. The following organizations participated in the project: ALAXALA Networks Corporation Fujitsu, Ltd. Hitachi, Ltd. Internet Initiative Japan Inc. Keio University NEC Corporation University of Tokyo Toshiba Corporation Yokogawa Electric Corporation FreeBSD, NetBSD and DragonFly BSD integrated IPsec and IPv6 code from the KAME project; OpenBSD integrated just IPv6 code rather than both (having developed their own IPsec stack). Linux also integrated code from the project in its native IPsec implementation. The KAME project collaborated with the TAHI Project (which develops and provides verification-technology for IPv6), the USAGI Project and the WIDE Project. Racoon racoon, KAME's user-space daemon, handles Internet Key Exchange (IKE). In Linux systems it forms part of the ipsec-tools package. References External links Internet protocols BSD software Free software projects Cryptographic software Key management Virtual private networks IPv6
https://en.wikipedia.org/wiki/Michigan%20Life%20Sciences%20Corridor
The Michigan Life Sciences Corridor (MLSC) is a $1 billion biotechnology initiative in the U.S. state of Michigan. The MLSC invests in biotech research at four Michigan institutions: the University of Michigan in Ann Arbor; Michigan State University in East Lansing; Wayne State University in Detroit; and the Van Andel Institute in Grand Rapids. The Michigan Economic Development Corporation administers the program. It began in 1999 with money from the state's settlement with the tobacco industry. When the program's funds distributions are completed in 2019, the goal is that the investments in high tech research will have notably expanded the state's economic base. History In 1998, the State of Michigan, along with 45 other states, reached the $8.5 billion Tobacco Master Settlement Agreement, a settlement with the U.S. tobacco industry. Former Governor John Engler created the Michigan Life Sciences Corridor in 1999 when he signed Public Act 120 of 1999. The bill appropriated money from the state's settlement with the tobacco industry to fund biotech research at four of Michigan's largest research institutions. Under the management of the Michigan Economic Development Corporation, the MLSC allocated $1 billion over the course of 20 years, including $50 million in 1999 to fund research on aging. The following year, the MLSC awarded $100 million to 63 Michigan universities. In 2002, Governor Jennifer Granholm incorporated the MLSC into the Michigan Technology Tri-Corridor, adding funding for homeland security and alternative fuel research. In 2009, the University of Michigan added a 30-building, North Campus Research Complex by acquiring the former Pfizer pharmaceutical corporation facility. A BioEnterprise Midwest Healthcare Venture report found that Michigan attracted $451.8 million in new biotechnology venture capital investments from 2005 to 2009. See also University Research Corridor References External links Michigan Economic Development Corporation
https://en.wikipedia.org/wiki/Joint%20capsule
In anatomy, a joint capsule or articular capsule is an envelope surrounding a synovial joint. Each joint capsule has two parts: an outer fibrous layer or membrane, and an inner synovial layer or membrane. Membranes Each capsule consists of two layers or membranes: an outer layer (fibrous membrane, fibrous stratum) composed of avascular white fibrous tissue, and an inner layer (synovial membrane, synovial stratum), which is a secreting layer. On the inside of the capsule, articular cartilage covers the end surfaces of the bones that articulate within that joint. The outer layer is highly innervated by the same nerves which perforate through the adjacent muscles associated with the joint. Fibrous membrane The fibrous membrane of the joint capsule is attached to the whole circumference of the articular end of each bone entering into the joint, and thus entirely surrounds the articulation. It is made up of dense connective tissue. Clinical significance Frozen shoulder (adhesive capsulitis) is a disorder in which the shoulder capsule becomes inflamed. Plica syndrome is a disorder in which the synovial plica becomes inflamed and causes abnormal biomechanics in the knee. Gallery See also Articular capsule of the humerus Articular capsule of the knee joint Atlanto-axial joint Capsule of atlantooccipital articulation Capsule of hip joint Capsule of temporomandibular joint References External links Anatomy
https://en.wikipedia.org/wiki/Meat%20extract
Meat extract is highly concentrated meat stock, usually made from beef or chicken. It is used to add meat flavor in cooking, and to make broth for soups and other liquid-based foods. Meat extract was invented by Baron Justus von Liebig, a German 19th-century organic chemist. Liebig specialised in chemistry and the classification of food and wrote a paper on how the nutritional value of meat is lost by boiling. Liebig's view was that meat juices, as well as the fibres, contained much important nutritional value and that these were lost by boiling or cooking in unenclosed vessels. Fuelled by a desire to help feed the undernourished, in 1840 he developed a concentrated beef extract, Extractum carnis Liebig, to provide a nutritious meat substitute for those unable to afford the real thing. However, it took 30 kg of meat to produce 1 kg of extract, making the extract too expensive. Commercialization Liebig's Extract of Meat Company Liebig went on to co-found the Liebig's Extract of Meat Company (later Oxo) in London, whose factory, opened in 1865 in Fray Bentos, a port in Uruguay, took advantage of meat from cattle being raised for their hides, at one third the price of British meat. The company had earlier operated as Giebert et Compagnie (from April 1863). Bovril In the 1870s, John Lawson Johnston invented 'Johnston's Fluid Beef', later renamed Bovril. Unlike Liebig's meat extract, Bovril also contained flavourings. It was manufactured in Argentina and Uruguay, which could provide cheap cattle. Effects Liebig and Bovril were important contributors to the beef industry in South America. Bonox Bonox, created by Fred Walker and Company and brought to market in 1919, is manufactured in Australia. When it was introduced, it was often offered as an alternative hot drink, it being common to offer "Coffee, tea or Bonox". Today, meat extracts have largely been supplanted by bouillon cubes and yeast extract. Some brands of meat extract, such as Oxo and Bovril, now contain yeast extract.
https://en.wikipedia.org/wiki/Pseudo%20algebraically%20closed%20field
In mathematics, a field is pseudo algebraically closed if it satisfies certain properties which hold for algebraically closed fields. The concept was introduced by James Ax in 1967. Formulation A field K is pseudo algebraically closed (usually abbreviated PAC) if one of the following equivalent conditions holds: Each absolutely irreducible variety defined over K has a K-rational point. For each absolutely irreducible polynomial f ∈ K[T_x, T_y] with ∂f/∂T_y ≠ 0 and for each nonzero g ∈ K[T_x] there exists (a, b) ∈ K² such that f(a, b) = 0 and g(a) ≠ 0. Each absolutely irreducible polynomial f ∈ K[T_x, T_y] has infinitely many K-rational points. If R is a finitely generated integral domain over K with quotient field which is regular over K, then there exists a homomorphism h: R → K such that h(a) = a for each a in K. Examples Algebraically closed fields and separably closed fields are always PAC. Pseudo-finite fields and hyper-finite fields are PAC. A non-principal ultraproduct of distinct finite fields is (pseudo-finite and hence) PAC. Ax deduces this from the Riemann hypothesis for curves over finite fields. Infinite algebraic extensions of finite fields are PAC. The PAC Nullstellensatz. The absolute Galois group of a field is profinite, hence compact, and hence equipped with a normalized Haar measure. Let K be a countable Hilbertian field and let e be a positive integer. Then for almost all e-tuples (σ1, ..., σe) of elements of the absolute Galois group of K, the fixed field of the subgroup generated by the automorphisms σ1, ..., σe is PAC. Here the phrase "almost all" means "all but a set of measure zero". (This result is a consequence of Hilbert's irreducibility theorem.) Let K be the maximal totally real Galois extension of the rational numbers and i the square root of −1. Then K(i) is PAC. Properties The Brauer group of a PAC field is trivial, as any Severi–Brauer variety has a rational point. The absolute Galois group of a PAC field is a projective profinite group; equivalently, it has cohomological dimension at most 1. A PAC field of characteristic zero is C1.
https://en.wikipedia.org/wiki/Self-discharger
A self-discharger (or self-unloader) is a ship that is able to discharge its cargo using its own gear. The most common discharge method for bulk cargo is to use an excavator that is fitted on a traverse running over the vessel's entire hatch, and that is able to move sideways as well. Lake freighters on the Great Lakes use conveyor-based unloading gear to empty funnel-shaped holds from the bottom, lifting the bulk cargo onto a boom. See also Boland and Cornelius Company Adam E. Cornelius References Water transport
https://en.wikipedia.org/wiki/Richard%20Bird%20%28computer%20scientist%29
Richard Simpson Bird (4 February 1943 – 4 April 2022) was an English computer scientist. Posts He was a Supernumerary Fellow of Computation at Lincoln College, University of Oxford, in Oxford, England, and former director of the Oxford University Computing Laboratory (now the Department of Computer Science, University of Oxford). Formerly, Bird was at the University of Reading. Research interests Bird's research interests lay in algorithm design and functional programming, and he was known as a regular contributor to the Journal of Functional Programming, and as author of several books promoting use of the programming language Haskell, including Introduction to Functional Programming using Haskell, Thinking Functionally with Haskell, Algorithm Design with Haskell co-authored with Jeremy Gibbons, and other books on related topics. His name is associated with the Bird–Meertens formalism, a calculus for deriving programs from specifications in a functional programming style. Other organisational affiliations He was a member of the International Federation for Information Processing (IFIP) IFIP Working Group 2.1 on Algorithmic Languages and Calculi, which specified, supports, and maintains the programming languages ALGOL 60 and ALGOL 68. References External links , laboratory 1943 births 2022 deaths English computer scientists English non-fiction writers Computer science writers Members of the Department of Computer Science, University of Oxford Fellows of Lincoln College, Oxford Academics of the University of Reading Programming language researchers Formal methods people English male non-fiction writers People educated at St Olave's Grammar School
https://en.wikipedia.org/wiki/U.S.%20critical%20infrastructure%20protection
In the U.S., critical infrastructure protection (CIP) is a concept that relates to the preparedness and response to serious incidents that involve the critical infrastructure of a region or the nation. The American Presidential directive PDD-63 of May 1998 set up a national program of "Critical Infrastructure Protection". In 2014 the NIST Cybersecurity Framework was published after further presidential directives. History The U.S. CIP is a national program to ensure the security of vulnerable and interconnected infrastructures of the United States. In May 1998, President Bill Clinton issued presidential directive PDD-63 on the subject of critical infrastructure protection. This recognized certain parts of the national infrastructure as critical to the national and economic security of the United States and the well-being of its citizenry, and required steps to be taken to protect it. This was updated on December 17, 2003, by President Bush through Homeland Security Presidential Directive HSPD-7 for Critical Infrastructure Identification, Prioritization, and Protection. The directive describes the United States as having some critical infrastructure that is "so vital to the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety." Overview The systems and networks that make up the infrastructure of society are often taken for granted, yet a disruption to just one of those systems can have dire consequences across other sectors. Take, for example, a computer virus that disrupts the distribution of natural gas across a region. This could lead to a consequential reduction in electrical power generation, which in turn leads to the forced shutdown of computerized controls and communications. Road traffic, air traffic, and rail transportation might then become affected. Emergency services might also be hampered. An entire region can become
https://en.wikipedia.org/wiki/Apple%E2%80%93Intel%20architecture
The Apple–Intel architecture, or Mactel, is an unofficial name used for Macintosh personal computers developed and manufactured by Apple Inc. that use Intel x86 processors, rather than the PowerPC and Motorola 68000 ("68k") series processors used in their predecessors or the ARM-based Apple silicon SoCs used in their successors. As Apple changed the architecture of its products, they changed the firmware from the Open Firmware used on PowerPC-based Macs to the Intel-designed Extensible Firmware Interface (EFI). With the change in processor architecture to x86, Macs gained the ability to boot into x86-native operating systems (such as Microsoft Windows), while Intel VT-x brought near-native virtualization with macOS as the host OS. Technologies Background Apple uses a subset of the standard PC architecture, which provides support for Mac OS X and support for other operating systems. Hardware and firmware components that must be supported to run an operating system on Apple-Intel hardware include the Extensible Firmware Interface. The EFI and GUID Partition Table With the change in architecture, a change in firmware became necessary. Extensible Firmware Interface (EFI) is the firmware-based replacement for the PC BIOS from Intel. Designed by Intel, it was chosen by Apple to replace Open Firmware, used on PowerPC architectures. Since many operating systems, such as Windows XP and many versions of Windows Vista, are incompatible with EFI, Apple released a firmware upgrade with a Compatibility Support Module that provides a subset of traditional BIOS support with its Boot Camp product. GUID Partition Table (GPT) is a standard for the layout of the partition table on a physical hard disk. It is a part of the Extensible Firmware Interface (EFI) standard proposed by Intel as a substitute for the earlier PC BIOS. The GPT replaces the Master Boot Record (MBR) used with BIOS. Booting To Mac operating systems Intel Macs can boot in two ways: directly via EFI, or in a "le
https://en.wikipedia.org/wiki/Settling%20time
In control theory the settling time of a dynamical system such as an amplifier or other output device is the time elapsed from the application of an ideal instantaneous step input to the time at which the amplifier output has entered and remained within a specified error band. Settling time includes a propagation delay, plus the time required for the output to slew to the vicinity of the final value, recover from the overload condition associated with slew, and finally settle to within the specified error. Systems with energy storage cannot respond instantaneously and will exhibit transient responses when they are subjected to inputs or disturbances. Definition Tay, Mareels and Moore (1998) defined settling time as "the time required for the response curve to reach and stay within a range of certain percentage (usually 5% or 2%) of the final value." Mathematical detail Settling time depends on the system response and natural frequency. For a second-order, underdamped system responding to a step input, the settling time can be approximated, when the damping ratio ζ ≪ 1, by T_s ≈ −ln(Δ)/(ζω_n), where Δ is the allowed tolerance fraction and ω_n is the natural frequency. A general form, obtained from the decay envelope of the response, is T_s = −ln(Δ√(1−ζ²))/(ζω_n). Thus, if the damping ratio ζ ≪ 1, the settling time to within 2% (Δ = 0.02) is approximately T_s ≈ 4/(ζω_n). See also Rise time Time constant References External links Second-Order System Example Op Amp Settling Time Graphical tutorial of Settling time and Risetime MATLAB function for computing settling time, rise time, and other step response characteristics Settling Time Calculator Transient response characteristics Systems theory
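The approximations above can be checked against a simulated step response. The sketch below is illustrative: it uses the standard closed-form underdamped second-order step response (not given in the excerpt), measures the last time the output leaves the 2% band, and compares that with the 4/(ζω_n) rule of thumb.

```python
import numpy as np

def settling_time(t, y, y_final, tol=0.02):
    """Empirical settling time: first sample after the response last leaves the +/-tol band."""
    outside = np.abs(y - y_final) > tol * np.abs(y_final)
    if not outside.any():
        return t[0]
    return t[np.where(outside)[0].max() + 1]

# Underdamped second-order step response (assumed example parameters):
# y(t) = 1 - e^{-zeta*wn*t}/sqrt(1-zeta^2) * sin(wd*t + phi), phi = arccos(zeta)
zeta, wn = 0.2, 10.0
wd = wn * np.sqrt(1 - zeta**2)
phi = np.arccos(zeta)
t = np.linspace(0, 5, 50_001)
y = 1 - np.exp(-zeta * wn * t) / np.sqrt(1 - zeta**2) * np.sin(wd * t + phi)

print(settling_time(t, y, 1.0))   # measured from the simulated response
print(4 / (zeta * wn))            # rule-of-thumb estimate 4/(zeta*wn) = 2.0 s
```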
https://en.wikipedia.org/wiki/Rotary%20phase%20converter
A rotary phase converter, abbreviated RPC, is an electrical machine that converts power from one polyphase system to another, converting through rotary motion. Typically, single-phase electric power is used to produce three-phase electric power locally to run three-phase loads in premises where only single-phase is available. Operation A basic three-phase induction motor will have three windings, each end connected to terminals typically numbered (arbitrarily) as L1, L2, and L3 and sometimes T1, T2, T3. A three-phase induction motor can be run at two-thirds of its rated horsepower on single-phase power applied to a single winding, once spun up by some means. A three-phase motor running on a single phase cannot start itself because it lacks the other phases to create a rotation on its own, much like a crank that is at dead center. A three-phase induction motor that is spinning under single-phase power applied to terminals L1 and L2 will generate an electric potential (voltage) across terminal L3 in respect with L1 and L2. However, L1 to L3 and L2 to L3 will be 120 degrees out of phase with the input voltage, thus creating three-phase power. However, without current injection, special idler windings, or other means of regulation, the voltage will sag when a load is applied. Power-factor correction is a very important consideration when building or choosing an RPC. This is desirable because an RPC that has power-factor correction will consume less current from the single-phase service supplying power to the phase converter and its loads. A major concern with three phase power is that each phase be at similar voltages. A discrepancy between phases is known as phase imbalance. As a general guideline, unbalanced three-phase power that exceeds 4% in voltage variation can damage the equipment that it is meant to operate. History At the beginning of the 20th century, there were two main principles of electric railway traction current systems: DC system 16⅔ Hz
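The 4% guideline above refers to percent voltage imbalance. One common way to compute it, sketched below, is the maximum deviation from the average line-to-line voltage divided by that average; this NEMA-style definition and the example voltages are assumptions for illustration, not values from the excerpt.

```python
def voltage_imbalance_percent(v_ab, v_bc, v_ca):
    """Percent voltage imbalance: max deviation from the average line voltage / average * 100."""
    avg = (v_ab + v_bc + v_ca) / 3.0
    max_dev = max(abs(v - avg) for v in (v_ab, v_bc, v_ca))
    return 100.0 * max_dev / avg

# Example: a converter output measured at 240 V, 235 V and 228 V line-to-line
print(round(voltage_imbalance_percent(240.0, 235.0, 228.0), 2))  # ~2.7 %, within the 4% guideline
```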
https://en.wikipedia.org/wiki/X%20hyperactivation
X hyperactivation refers to the process in Drosophila by which genes on the X chromosome in male flies become twice as active as genes on the X chromosome in female flies. Because male flies have a single X chromosome and female flies have two X chromosomes, the higher level of activation in males ensures that X chromosome genes are overall expressed at the same level in males and females. X hyperactivation is one mechanism of dosage compensation, whereby organisms that use genetic sex determination systems balance the gene dosage from the sex chromosomes between males and females. X hyperactivation is regulated by the alternative splicing of a gene called sex-lethal. The gene was named sex-lethal due to its mutant phenotype, which has little to no effect on male flies but results in the death of females due to X hyperactivation of the two X chromosomes. In female Drosophila, the sex-lethal protein causes the female-specific splicing of the sex-lethal gene to produce more of the sex-lethal protein. This produces a positive feedback loop, as the sex-lethal protein splices the sex-lethal gene to produce more of the sex-lethal protein. In male Drosophila, there isn’t enough sex-lethal to activate the female-specific splicing of the sex-lethal gene, and it goes through the "default" splicing. This means that the section of the gene that is spliced out in females remains in males. This portion contains an early stop codon, resulting in no protein being made from it. In females, the sex-lethal protein inhibits the male-specific lethal (msl) gene complex that would normally activate X-linked genes that result in an increase in the male transcription rate. The msl gene complex was named for its loss-of-function mutant, in which the failure to properly increase the male transcription rate results in the death of males. In males, the absence of the necessary amount of sex-lethal allows for the increase in the male transcription rate due to the msl gene complex no longer being inhibited.
https://en.wikipedia.org/wiki/X%3AA%20ratio
The X:A ratio is the ratio between the number of X chromosomes and the number of sets of autosomes in an organism. This ratio is used primarily for determining the sex of some species, such as Drosophila flies and the C. elegans nematode. The first use of this ratio for sex determination is ascribed to Victor M. Nigon. Generally, a 1:1 ratio results in a female and a 1:2 ratio results in a male. When calculating the ratio, Y chromosomes are ignored. For example, for a diploid Drosophila that has XX, the ratio is 1:1 (2 Xs to 2 sets of autosomes, since it is a diploid). For a diploid Drosophila that has XY, the ratio is 1:2 (1 X to 2 sets of autosomes, since it is diploid). In Drosophila, the X:A ratio determines the dose of X-encoded factors that enhance the synthesis of the Sxl protein, which in turn activates the female-specific pathway. See also Notes References Genetics X
https://en.wikipedia.org/wiki/Inductive%20data%20type
Inductive data type may refer to: Algebraic data type, a datatype each of whose values is data from other datatypes wrapped in one of the constructors of the datatype Inductive family, a family of inductive data types indexed by another type or value Recursive data type, a data type for values that may contain other values of the same type See also Inductive type Induction (disambiguation) Type theory Dependently typed programming
https://en.wikipedia.org/wiki/Homonym%20%28biology%29
In biology, a homonym is a name for a taxon that is identical in spelling to another such name, that belongs to a different taxon. The rule in the International Code of Zoological Nomenclature is that the first such name to be published is the senior homonym and is to be used (it is "valid"); any others are junior homonyms and must be replaced with new names. It is, however, possible that if a senior homonym is archaic, and not in "prevailing usage," it may be declared a nomen oblitum and rendered unavailable, while the junior homonym is preserved as a nomen protectum. For example: Cuvier proposed the genus Echidna in 1797 for the spiny anteater. However, Forster had already published the name Echidna in 1777 for a genus of moray eels. Forster's use thus has priority, with Cuvier's being a junior homonym. Illiger published the replacement name Tachyglossus in 1811. Similarly, the International Code of Nomenclature for algae, fungi, and plants (ICN) specifies that the first published of two or more homonyms is to be used: a later homonym is "illegitimate" and is not to be used unless conserved (or sanctioned, in the case of fungi). Example: the later homonym Myroxylon L.f. (1782), in the family Leguminosae, is conserved against the earlier homonym Myroxylon J.R.Forst. & G.Forst. (1775) (now called Xylosma, in the family Salicaceae). Parahomonyms Under the botanical code, names that are similar enough that they are likely to be confused are also considered to be homonymous (article 53.3). For example, Astrostemma Benth. (1880) is an illegitimate homonym of Asterostemma Decne. (1838). The zoological code has a set of spelling variations (article 58) that are considered to be identical. Hemihomonyms Both codes only consider taxa that are in their respective scope (animals for the ICZN; primarily plants for the ICN). Therefore, if an animal taxon has the same name as a plant taxon, both names are valid. Such names are called hemihomonyms. For example, the name E
https://en.wikipedia.org/wiki/Bead%20theory
The bead theory is a disproved hypothesis that genes are arranged on the chromosome like beads on a necklace. This theory was first proposed by Thomas Hunt Morgan after discovering genes through his work with breeding red- and white-eyed fruit flies. According to this theory, the existence of a gene as a unit of inheritance is recognized through its mutant alleles. A mutant allele affects a single phenotypic character, maps to one chromosome locus, gives a mutant phenotype when paired and shows a Mendelian ratio when intercrossed. Several tenets of the bead theory are worth emphasizing: 1. The gene is viewed as a fundamental unit of structure, indivisible by crossing over. Crossing over takes place between genes (the beads in this model) but never within them. 2. The gene is viewed as the fundamental unit of change or mutation. It changes in toto from one allelic form into another; there are no smaller components within it that can change. 3. The gene is viewed as the fundamental unit of function (although the precise function of the gene is not specified in this model). Parts of a gene, if they exist, cannot function. Guido Pontecorvo continued to work under the basis of this theory until Seymour Benzer showed in the 1950s that the bead theory was not correct. He demonstrated that a gene can be defined as a unit of function. A gene can be subdivided into a linear array of sites that are mutable and that can be recombined. The smallest units of mutation and recombination are now known to be correlated with single nucleotide pairs. References Griffiths AJF, Miller JH, Suzuki DT, et al. An Introduction to Genetic Analysis. 7th edition. New York: W.H. Freeman; 2000. Obsolete biology theories Genetics
https://en.wikipedia.org/wiki/Introitus
An introitus is an entrance into a canal or hollow organ. The vaginal introitus is the opening that leads to the vaginal canal. References External links Anatomy
https://en.wikipedia.org/wiki/Momentum%20map
In mathematics, specifically in symplectic geometry, the momentum map (or, by false etymology, moment map) is a tool associated with a Hamiltonian action of a Lie group on a symplectic manifold, used to construct conserved quantities for the action. The momentum map generalizes the classical notions of linear and angular momentum. It is an essential ingredient in various constructions of symplectic manifolds, including symplectic (Marsden–Weinstein) quotients, discussed below, and symplectic cuts and sums. Formal definition Let M be a manifold with symplectic form ω. Suppose that a Lie group G acts on M via symplectomorphisms (that is, the action of each g in G preserves ω). Let 𝔤 be the Lie algebra of G, 𝔤* its dual, and ⟨·, ·⟩ the pairing between the two. Any ξ in 𝔤 induces a vector field ρ(ξ) on M describing the infinitesimal action of ξ. To be precise, at a point x in M the vector ρ(ξ)_x is (d/dt)|_{t=0} exp(tξ)·x, where exp is the exponential map and · denotes the G-action on M. Let ι_{ρ(ξ)}ω denote the contraction of this vector field with ω. Because G acts by symplectomorphisms, it follows that ι_{ρ(ξ)}ω is closed (for all ξ in 𝔤). Suppose that ι_{ρ(ξ)}ω is not just closed but also exact, so that ι_{ρ(ξ)}ω = dH_ξ for some function H_ξ. If this holds, then one may choose the H_ξ to make the map ξ ↦ H_ξ linear. A momentum map for the G-action on (M, ω) is a map μ : M → 𝔤* such that dμ^ξ = ι_{ρ(ξ)}ω for all ξ in 𝔤. Here μ^ξ is the function from M to R defined by μ^ξ(x) = ⟨μ(x), ξ⟩. The momentum map is uniquely defined up to an additive constant of integration (on each connected component). A G-action on a symplectic manifold is called Hamiltonian if it is symplectic and if there exists a momentum map. A momentum map is often also required to be G-equivariant, where G acts on 𝔤* via the coadjoint action, and sometimes this requirement is included in the definition of a Hamiltonian group action. If the group is compact or semisimple, then the constant of integration can always be chosen to make the momentum map coadjoint equivariant. However, in general the coadjoint action must be modified to make the map
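As a worked illustration of the definition (a standard textbook computation rather than text from the article, and with sign conventions that vary between authors), consider the circle group acting on the plane by rotations:

% S^1 = SO(2) acting on (R^2, \omega = dx \wedge dy) by rotations; take the generator \xi = 1.
\rho(\xi) = x\,\partial_y - y\,\partial_x,
\qquad
\iota_{\rho(\xi)}\omega = -\,(x\,dx + y\,dy) = -\,d\!\left(\tfrac{x^2+y^2}{2}\right),
% so a momentum map is
\mu(x,y) = -\tfrac{x^2+y^2}{2} \;\in\; \mathbb{R} \cong \mathfrak{so}(2)^*,
% which, up to sign and normalisation, is the angular momentum of a unit-mass particle at (x,y).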
https://en.wikipedia.org/wiki/DNS%20hosting%20service
A DNS hosting service is a service that runs Domain Name System (DNS) servers. Most, but not all, domain name registrars include DNS hosting service with registration. Free DNS hosting services also exist. Many third-party DNS hosting services provide dynamic DNS. DNS hosting service is optimal when the provider has multiple servers in various geographic locations that provide resilience and minimize latency for clients around the world. By operating DNS nodes closer to end users, DNS queries travel a much shorter distance, resulting in faster Web address resolution speed. DNS can also be self-hosted by running on generic Internet hosting services. Free DNS A number of sites offer free DNS hosting, either for second level domains registered with registrars which do not offer free (or sufficiently flexible) DNS service, or as third level domains (selection.somedomain.com). These services generally also offer Dynamic DNS. Free DNS typically includes facilities to manage A, MX, CNAME, TXT and NS records of the domain zone. In many cases the free services can be upgraded with various premium services. Free DNS service providers can also make money through sponsorship. The majority of modern free DNS services are sponsored by large providers of telecommunication services. See also Domain Name System Fast-flux DNS Remote backup service List of DNS record types List of managed DNS providers References Internet hosting
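A minimal sketch of querying the record types mentioned above, assuming the third-party dnspython package (2.x API) and the reserved example domain example.com; the management interface a free DNS host actually exposes is, of course, provider-specific:

# Sketch: look up a few common record classes for a domain.
# Requires the third-party dnspython package: pip install dnspython
import dns.resolver

def show_records(domain: str) -> None:
    for rtype in ("A", "MX", "TXT", "NS"):
        try:
            answers = dns.resolver.resolve(domain, rtype)
        except Exception as exc:  # NXDOMAIN, NoAnswer, timeouts, ...
            print(f"{rtype:4s} lookup failed: {exc}")
            continue
        for rdata in answers:
            print(f"{rtype:4s} {rdata.to_text()}")

if __name__ == "__main__":
    show_records("example.com")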
https://en.wikipedia.org/wiki/Internet%20hosting%20service
An Internet hosting service is a service that runs servers connected to the Internet, allowing organizations and individuals to serve content or host services connected to the Internet. A common kind of hosting is web hosting. Most hosting providers offer a combination of services: e-mail hosting, website hosting, and database hosting, for example. DNS hosting service, another type of service usually provided by hosting providers, is often bundled with domain name registration. Dedicated server hosts provide a server, usually housed in a datacenter and connected to the Internet, where clients can run anything they want (including web servers and other servers). The hosting provider ensures that the servers have Internet connections with good upstream bandwidth and reliable power sources. Another popular kind of hosting service is shared hosting. This is a type of web hosting service where the hosting provider provisions hosting services for multiple clients on one physical server and shares the resources between the clients. Virtualization is key to making this work effectively. Types of hosting service Full-featured hosting services Full-featured hosting services include: Complex managed hosting, which applies to both physical dedicated servers and virtual servers, with many companies choosing a hybrid (a combination of physical and virtual) hosting solution. There are many similarities between standard and complex managed hosting but the key difference is the level of administrative and engineering support that the customer pays for, owing to both the increased size and complexity of the infrastructure deployment. The provider steps in to take over most of the management, including security, memory, storage, and IT support. The service is primarily proactive. Dedicated hosting service, also called managed hosting service, where the hosting service provider owns and manages the machine, leasing full control to the client. Management of the server can includ
https://en.wikipedia.org/wiki/Email%20hosting%20service
An email hosting service is an Internet hosting service that operates email servers. Features Email hosting services usually offer premium email as opposed to advertisement-supported free email or free webmail. Email hosting services thus differ from typical end-user email providers such as webmail sites. They cater mostly to demanding email users and small and medium-sized (SME) businesses, while larger enterprises usually run their own email hosting services on their own equipment using software such as Microsoft Exchange Server, IceWarp or Postfix. Hosting providers can manage a user's own domain name, including any email authentication scheme that the domain owner wishes to enforce to convey the meaning that using a specific domain name identifies and qualifies email senders. Types There are various types of email hosting services. These vary according to the storage space available, the location of the mailboxes, and functionality. Various hosting providers offer this service through two models: traditional email hosting or per-mailbox hosting. Traditional email hosting charges a set amount for a certain number of mailboxes, whereas the per-mailbox model charges for each mailbox needed. These include: Free email services use a public domain such as Gmail or Yahoo; these are more suitable for individual and personal use. Shared hosting email services are large mailboxes that are hosted on a server. People on a shared hosting email service share IP addresses as they are hosted on the same server. Cloud email services are suitable for small companies and SMEs. These mailboxes are hosted externally utilizing a cloud service provider. Examples include G Suite from Google and hosted Microsoft Exchange email from Microsoft. Enterprise email solutions are suitable for SMEs and large corporations that host several mailboxes. In some cases these are located on dedicated servers on the premises; however, they can also be located on a cloud-based server that can scale horizontally
https://en.wikipedia.org/wiki/Comparison%20of%20file%20comparison%20tools
This article compares computer software tools that are used to compare files of various types. The file types addressed by individual file comparison apps vary, but may include text, symbols, images, audio, or video. This category of software tool is often called "file comparison" or "diff tool"; the two terms are effectively equivalent, although "diff" is more commonly associated with the Unix diff utility. A typical rudimentary case is the comparison of one file against another. However, it may also include comparisons between two populations of files, such as when comparing directories or folders as part of file management. For instance, this might be done to detect problems with corrupted backup versions of a collection of files, or to validate that a package of files complies with standards before publishing. Note that comparisons must be made among the same file type: a text file cannot be compared to a picture containing text unless an optical character recognition (OCR) process is done first to extract the text. Likewise, text cannot be compared to spoken words unless the spoken words are first transcribed into text. Additionally, text in one language cannot be compared to text in another unless one is translated into the language of the other. A further consideration is that the two files being compared must be substantially similar rather than radically different. Even different revisions of the same document, if there are many changes due to additions, removals, or moving of content, may produce comparisons that are very difficult to interpret. This suggests saving versions of a critical document frequently, to better facilitate file comparison. A "diff" file comparison tool is a time- and labor-saving utility, because it aids in accomplishing tedious comparisons. Thus, it is a vital part of demanding comparison processes employed by individuals, academics, legal arena, forensics field, and ot
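A minimal sketch of the rudimentary one-file-against-another case, using Python's standard difflib module, which emits the familiar unified-diff format; the file names and contents here are illustrative only:

import difflib

# Two small "files" represented as lists of lines (with trailing newlines).
old = ["alpha\n", "bravo\n", "charlie\n"]
new = ["alpha\n", "bravo fixed\n", "charlie\n", "delta\n"]

# unified_diff yields diff lines in the unified format used by many diff tools.
for line in difflib.unified_diff(old, new, fromfile="old.txt", tofile="new.txt"):
    print(line, end="")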
https://en.wikipedia.org/wiki/Plantlet
A plantlet is a young or small plant, produced on the leaf margins or the aerial stems of another plant. Many plants such as spider plants naturally create stolons with plantlets on the ends as a form of asexual reproduction. Vegetative propagules or clippings of mature plants may form plantlets. An example is mother of thousands. Many plants reproduce by throwing out long shoots or runners that can grow into new plants. Mother of thousands appears to have lost the ability to reproduce sexually and make seeds, but transferred at least part of the embryo-making process to the leaves to make plantlets. See also Apomixis Plant propagation Plant reproduction References Plants
https://en.wikipedia.org/wiki/Alpha%20diversity
In ecology, alpha diversity (α-diversity) is the mean species diversity in a site at a local scale. The term was introduced by R. H. Whittaker together with the terms beta diversity (β-diversity) and gamma diversity (γ-diversity). Whittaker's idea was that the total species diversity in a landscape (gamma diversity) is determined by two different things, the mean species diversity in sites at a more local scale (alpha diversity) and the differentiation among those sites (beta diversity). Scale considerations Both the area or landscape of interest and the sites within it may be of very different sizes in different situations, and no consensus has been reached on what spatial scales are appropriate to quantify alpha diversity. It has therefore been proposed that the definition of alpha diversity does not need to be tied to a specific spatial scale: alpha diversity can be measured for an existing dataset that consists of subunits at any scale. The subunits can be, for example, sampling units that were already used in the field when carrying out the inventory, or grid cells that are delimited just for the purpose of analysis. If results are extrapolated beyond the actual observations, it needs to be taken into account that the species diversity in the subunits generally gives an underestimation of the species diversity in larger areas. Different concepts Ecologists have used several slightly different definitions of alpha diversity. Whittaker himself used the term both for the species diversity in a single subunit and for the mean species diversity in a collection of subunits. It has been argued that defining alpha diversity as a mean across all relevant subunits is preferable, because it agrees better with Whittaker's idea that total species diversity consists of alpha and beta components. Definitions of alpha diversity can also differ in what they assume species diversity to be. Often researchers use the values given by one or more diversity indices, such as specie
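A minimal sketch of one common way to operationalise this, assuming the Shannon index as the per-subunit measure and hypothetical abundance counts; species richness or another index could be substituted without changing the overall scheme (mean over subunits):

import math

def shannon_index(abundances):
    """Shannon diversity H' = -sum(p_i * ln p_i) for one subunit."""
    total = sum(abundances)
    return -sum((n / total) * math.log(n / total) for n in abundances if n > 0)

# Hypothetical species-abundance counts for three sampling subunits.
subunits = [
    [10, 5, 2],
    [8, 8, 8, 1],
    [20, 1],
]

per_subunit = [shannon_index(s) for s in subunits]
alpha = sum(per_subunit) / len(per_subunit)   # alpha diversity as the mean across subunits
print(per_subunit, alpha)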
https://en.wikipedia.org/wiki/Heteronuclear%20single%20quantum%20coherence%20spectroscopy
The heteronuclear single quantum coherence or heteronuclear single quantum correlation experiment, normally abbreviated as HSQC, is used frequently in NMR spectroscopy of organic molecules and is of particular significance in the field of protein NMR. The experiment was first described by Geoffrey Bodenhausen and D. J. Ruben in 1980. The resulting spectrum is two-dimensional (2D) with one axis for proton (1H) and the other for a heteronucleus (an atomic nucleus other than a proton), which is usually 13C or 15N. The spectrum contains a peak for each unique proton attached to the heteronucleus being considered. The 2D HSQC can also be combined with other experiments in higher-dimensional NMR experiments, such as NOESY-HSQC or TOCSY-HSQC. General scheme The HSQC experiment is a highly sensitive 2D-NMR experiment and was first described in a 1H—15N system, but is also applicable to other nuclei such as 1H—13C and 1H—31P. The basic scheme of this experiment involves the transfer of magnetization on the proton to the second nucleus, which may be 15N, 13C or 31P, via an INEPT (Insensitive nuclei enhanced by polarization transfer) step. After a time delay (t1), the magnetization is transferred back to the proton via a retro-INEPT step and the signal is then recorded. In HSQC, a series of experiments is recorded where the time delay t1 is incremented. The 1H signal is detected in the directly measured dimension in each experiment, while the chemical shift of 15N or 13C is recorded in the indirect dimension which is formed from the series of experiments. HSQC in protein NMR 1H—15N HSQC The 15N HSQC experiment is one of the most frequently recorded experiments in protein NMR. The HSQC experiment can be performed using the natural abundance of the 15N isotope, but normally for protein NMR, isotopically labeled proteins are used. Such labelled proteins are usually produced by expressing the protein in cells grown in 15N-labelled media. Each residue of the protein, wi
https://en.wikipedia.org/wiki/Comparison%20of%20widget%20engines
This is a comparison of widget engines. This article is not about widget toolkits that are used in computer programming to build graphical user interfaces. General Operating system support Technical Languages Which programming languages the engines support. Most engines rely upon interpreted languages. Formats and Development Development Tools As widgets are in most cases combinations of HTML or XHTML, CSS, and JavaScript, standard AJAX tools, such as Eclipse ATF, can be used for development. Specialized tools may give access to additional capabilities supplied by frameworks such as Dojo or Openrico. References Widget engines Widget Engines
https://en.wikipedia.org/wiki/Homopolar%20generator
A homopolar generator is a DC electrical generator comprising an electrically conductive disc or cylinder rotating in a plane perpendicular to a uniform static magnetic field. A potential difference is created between the center of the disc and the rim (or ends of the cylinder) with an electrical polarity that depends on the direction of rotation and the orientation of the field. It is also known as a unipolar generator, acyclic generator, disk dynamo, or Faraday disc. The voltage is typically low, on the order of a few volts in the case of small demonstration models, but large research generators can produce hundreds of volts, and some systems have multiple generators in series to produce an even larger voltage. They are unusual in that they can source tremendous electric current, some more than a million amperes, because the homopolar generator can be made to have very low internal resistance. Also, the homopolar generator is unique in that no other rotary electric machine can produce DC without using rectifiers or commutators. The Faraday disc The first homopolar generator was developed by Michael Faraday during his experiments in 1831. It is frequently called the Faraday disc or Faraday wheel in his honor. It was the beginning of modern dynamos — that is, electrical generators which operate using a magnetic field. It was very inefficient and was not used as a practical power source, but it showed the possibility of generating electric power using magnetism, and led the way for commutated direct current dynamos and then alternating current alternators. The Faraday disc was primarily inefficient due to counterflows of current. While current flow was induced directly underneath the magnet, the current would circulate backwards in regions outside the influence of the magnetic field. This counterflow limits the power output to the pickup wires, and induces waste heating of the copper disc. Later homopolar generators would solve this problem by using an array of
https://en.wikipedia.org/wiki/Microsoft%20Advertising
Microsoft Advertising (formerly Bing Ads) is an online advertising platform developed by Microsoft, where advertisers bid to display brief ads, service offers, product listings and videos to web users, it provides pay per click advertising on search engines Bing, Yahoo! and DuckDuckGo, as well as on other websites, mobile apps, and videos. In 2021, Microsoft Advertising surpassed US$10 billion in annual revenue. History Microsoft was the last of the "big three" search engines (which also includes Google and Yahoo!) to develop its own system for delivering pay per click (PPC) ads. Until the beginning of 2006, all of the ads displayed on the MSN Search engine were supplied by Overture (and later Yahoo!). MSN collected a portion of the ad revenue in return for displaying Yahoo!'s ads on its search engine. As search marketing grew, Microsoft began developing its own system, MSN adCenter, for selling PPC advertisements directly to advertisers. As the system was phased in, MSN Search (now Bing) showed Yahoo! and adCenter advertising in its search results. Microsoft effort to create AdCenter was led by Tarek Najm, then general manager of the MSN division of Microsoft. In June 2006, the contract between Yahoo! and Microsoft had expired and Microsoft was displaying only ads from adCenter until 2010. In November 2006 Microsoft acquired Deep Metrix, a company situated in Gatineau, Canada, that created web-analytics software. Microsoft has built a new product adCenter Analytics based on the acquired technology. In October, 2007 the beta version of Microsoft Project Gatineau was released to a limited number of participants. In May 2007, Microsoft agreed to purchase the digital marketing solutions parent company, aQuantive, for roughly $6 billion. Microsoft later resold Atlas, a key piece of the aQuantive acquisition, to Facebook in 2013. Microsoft acquired ScreenTonic on May 3, 2007, AdECN on July 26, 2007, and YaData on February 27, 2008, and merged their technologies
https://en.wikipedia.org/wiki/3-D%20Secure
3-D Secure is a protocol designed to be an additional security layer for online credit and debit card transactions. The name refers to the "three domains" which interact using the protocol: the merchant/acquirer domain, the issuer domain, and the interoperability domain. It was originally developed in the autumn of 1999 by Celo Communications AB (which was acquired by Gemplus Associates and integrated into Gemplus, Gemalto and now Thales Group) for Visa Inc. in a project named "p42" ("p" from Pole vault as the project was a big challenge and "42" as the answer from the book The Hitchhiker's Guide to the Galaxy). A new updated version was developed by Gemplus between 2000 and 2001. In 2001 the protocol was further developed by Arcot Systems (now CA Technologies) and Visa Inc. with the intention of improving the security of Internet payments, and it was offered to customers under the Verified by Visa brand (later rebranded as Visa Secure). Services based on the protocol have also been adopted by Mastercard as SecureCode (later rebranded as Identity Check), by Discover as ProtectBuy, by JCB International as J/Secure, and by American Express as American Express SafeKey. Later revisions of the protocol have been produced by EMVCo under the name EMV 3-D Secure. Version 2 of the protocol was published in 2016 with the aim of complying with new EU authentication requirements and resolving some of the shortcomings of the original protocol. Analysis of the first version of the protocol by academia has shown it to have many security issues that affect the consumer, including a greater surface area for phishing and a shift of liability in the case of fraudulent payments. Description and basic aspects The basic concept of the protocol is to tie the financial authorization process to online authentication. This additional security authentication is based on a three-domain model (hence the "3-D" in the name). The three domains are: Acquirer domain (the bank and the merchant to which the money is being paid), Issuer domain (
https://en.wikipedia.org/wiki/OMI%20cryptograph
The OMI cryptograph was a rotor cipher machine produced and sold by Italian firm Ottico Meccanica Italiana (OMI) in Rome. The machine had seven rotors, including a reflecting rotor. The rotors stepped regularly. Each rotor could be assembled from two sections with different wiring: one section consisted of a "frame" containing ratchet notches, as well as some wiring, while the other section consisted of a "slug" with a separate wiring. The slug section fitted into the frame section, and different slugs and frames could be interchanged with each other. As a consequence, there were many permutations for the rotor selection. The machine was offered for sale during the 1960s. References Cipher A. Deavours and Louis Kruh, "Machine Cryptography and Modern Cryptanalysis", Artech House, 1985, pp. 146–147 F. L. Bauer, Decrypted Secrets, 2nd edition, Springer-Verlag, 2000, , pp. 112,136. Cryptographic hardware Rotor machines
https://en.wikipedia.org/wiki/Proof%20of%20impossibility
In mathematics, a proof of impossibility is a proof that demonstrates that a particular problem cannot be solved as described in the claim, or that a particular set of problems cannot be solved in general. Such a case is also known as a negative proof, proof of an impossibility theorem, or negative result. Proofs of impossibility often are the resolutions to decades or centuries of work attempting to find a solution, eventually proving that there is no solution. Proving that something is impossible is usually much harder than the opposite task, as it is often necessary to develop a proof that works in general, rather than to just show a particular example. Impossibility theorems are usually expressible as negative existential propositions or universal propositions in logic. The irrationality of the square root of 2 is one of the oldest proofs of impossibility. It shows that it is impossible to express the square root of 2 as a ratio of two integers. Another consequential proof of impossibility was Ferdinand von Lindemann's proof in 1882, which showed that the problem of squaring the circle cannot be solved because the number π is transcendental (i.e., non-algebraic), and that only a subset of the algebraic numbers can be constructed by compass and straightedge. Two other classical problems—trisecting the general angle and doubling the cube—were also proved impossible in the 19th century, and all of these problems gave rise to research into more complicated mathematical structures. A problem that arose in the 16th century was creating a general formula using radicals to express the solution of any polynomial equation of fixed degree k, where k ≥ 5. In the 1820s, the Abel–Ruffini theorem (also known as Abel's impossibility theorem) showed this to be impossible, using concepts such as solvable groups from Galois theory—a new sub-field of abstract algebra. Some of the most important proofs of impossibility found in the 20th century were those related to undecidability
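As a sketch of the oldest impossibility argument mentioned above (the standard proof by contradiction, paraphrased here rather than quoted from any particular source):

% Suppose, for contradiction, that \sqrt{2} were rational:
\sqrt{2} = \tfrac{p}{q}, \quad p, q \in \mathbb{Z},\ q \neq 0,\ \gcd(p, q) = 1.
% Squaring gives
p^2 = 2q^2 \;\Rightarrow\; p \text{ is even, say } p = 2r
\;\Rightarrow\; 4r^2 = 2q^2 \;\Rightarrow\; q^2 = 2r^2 \;\Rightarrow\; q \text{ is even},
% contradicting \gcd(p, q) = 1. Hence no such fraction p/q exists.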
https://en.wikipedia.org/wiki/Braking%20action
Braking action in aviation is a description of how easily an aircraft can stop after landing on a runway. Either pilots or airport management can report the braking action according to the U.S. Federal Aviation Administration. When reporting braking action, any of the following terms may be used: Good; Medium; Poor; Nil - bad or no braking action. If an air traffic controller receives a braking action report worse than good, a Pilot Report (PIREP) must be completed and an advisory must be included in the Automatic Terminal Information Service ("Braking Action Advisories are in effect"). As of October 2019, the FAA has used mu values to describe braking conditions Europe In Europe this differs from the above reference. Braking action reports in Europe are an indication/declaration of reduced friction on a runway due to runway contamination (see Landing performance, under the Runway Surface section) which may impact an aircraft's crosswind limits. European reports have nothing to do with stopping distances on a runway, though they should alert pilots that stopping distances will also be affected. Landing distances are empirically dealt with by landing performance data on dry/wet/contaminated runways for each aircraft type. Crosswind limits Whenever braking actions are issued, they are informing pilots that the aircraft maximum crosswind limits may have to be reduced on that runway because of reduced surface friction (grip). This should alert pilots that they may experience lateral/directional control issues during the landing roll-out. In a crosswind landing, the pilot tacks into wind to make allowances for the sideways force that is being applied to the aircraft (also known as using a crab angle). This sideways force occurs as the wind strikes the aircraft's vertical fin causing the aircraft to weathercock or weathervane. This manifests itself as an angular displacement of the fuselage relative to the runway centreline. This angular displacement is known as dr
https://en.wikipedia.org/wiki/Software%20analyst
In a software development team, a software analyst is the person who monitors the software development process, performs configuration management, identifies safety, performance, and compliance issues, and prepares software requirements and specification (Software Requirements Specification) documents. The software analyst is the seam between the software users and the software developers. They convey the demands of software users to the developers. See also Systems analyst Application analyst References People in information technology Software requirements Computer occupations
https://en.wikipedia.org/wiki/Continuous%20predicate
Continuous predicate is a term coined by Charles Sanders Peirce (1839–1914) to describe a special type of relational predicate that results as the limit of a recursive process of hypostatic abstraction. Here is one of Peirce's definitive discussions of the concept: When we have analyzed a proposition so as to throw into the subject everything that can be removed from the predicate, all that it remains for the predicate to represent is the form of connection between the different subjects as expressed in the propositional form. What I mean by "everything that can be removed from the predicate" is best explained by giving an example of something not so removable. But first take something removable. "Cain kills Abel." Here the predicate appears as "— kills —." But we can remove killing from the predicate and make the latter "— stands in the relation — to —." Suppose we attempt to remove more from the predicate and put the last into the form "— exercises the function of relate of the relation — to —" and then putting "the function of relate to the relation" into another subject leave as predicate "— exercises — in respect to — to —." But this "exercises" expresses "exercises the function". Nay more, it expresses "exercises the function of relate", so that we find that though we may put this into a separate subject, it continues in the predicate just the same. Stating this in another form, to say that "A is in the relation R to B" is to say that A is in a certain relation to R. Let us separate this out thus: "A is in the relation R¹ (where R¹ is the relation of a relate to the relation of which it is the relate) to R to B". But A is here said to be in a certain relation to the relation R¹. So that we can express the same fact by saying, "A is in the relation R¹ to the relation R¹ to the relation R to B", and so on ad infinitum. A predicate which can thus be analyzed into parts all homogeneous with the whole I call a continuous predicate. It is very impor
https://en.wikipedia.org/wiki/NETCONF
The Network Configuration Protocol (NETCONF) is a network management protocol developed and standardized by the IETF. It was developed in the NETCONF working group and published in December 2006 as RFC 4741 and later revised in June 2011 and published as RFC 6241. The NETCONF protocol specification is an Internet Standards Track document. NETCONF provides mechanisms to install, manipulate, and delete the configuration of network devices. Its operations are realized on top of a simple Remote Procedure Call (RPC) layer. The NETCONF protocol uses an Extensible Markup Language (XML) based data encoding for the configuration data as well as the protocol messages. The protocol messages are exchanged on top of a secure transport protocol. The NETCONF protocol can be conceptually partitioned into four layers: The Content layer consists of configuration data and notification data. The Operations layer defines a set of base protocol operations to retrieve and edit the configuration data. The Messages layer provides a mechanism for encoding remote procedure calls (RPCs) and notifications. The Secure Transport layer provides a secure and reliable transport of messages between a client and a server. The NETCONF protocol has been implemented in network devices such as routers and switches by some major equipment vendors. One particular strength of NETCONF is its support for robust configuration change using transactions involving a number of devices. History The IETF developed the Simple Network Management Protocol (SNMP) in the late 1980s and it proved to be a very popular network management protocol. In the early part of the 21st century it became apparent that in spite of what was originally intended, SNMP was not being used to configure network equipment, but was mainly being used for network monitoring. In June 2002, the Internet Architecture Board and key members of the IETF's network management community got together with network operators to discuss the situati
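A minimal sketch of what a protocol message looks like, using the base:1.0 namespace of RFC 6241; this only constructs the XML for a <get-config> request on the running datastore and omits the secure transport (normally SSH) that a real client would use, as well as the server's <rpc-reply>:

# Illustrative only: builds the XML for a NETCONF <get-config> RPC.
# The message-id value is arbitrary; a real client increments it per request.
GET_CONFIG_RPC = """<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <get-config>
    <source>
      <running/>
    </source>
  </get-config>
</rpc>
"""

print(GET_CONFIG_RPC)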
https://en.wikipedia.org/wiki/Render%20output%20unit
In computer graphics, the render output unit (ROP) or raster operations pipeline is a hardware component in modern graphics processing units (GPUs) and one of the final steps in the rendering process of modern graphics cards. The pixel pipelines take pixel (each pixel is a dimensionless point) and texel information and process it, via specific matrix and vector operations, into a final pixel or depth value; this process is called rasterization. Thus, ROPs control antialiasing, when more than one sample is merged into one pixel. The ROPs perform the transactions between the relevant buffers in the local memory – this includes writing or reading values, as well as blending them together. Dedicated antialiasing hardware used to perform hardware-based antialiasing methods like MSAA is contained in ROPs. All data rendered has to travel through the ROP in order to be written to the framebuffer, from there it can be transmitted to the display. Therefore, the ROP is where the GPU's output is assembled into a bitmapped image ready for display. Historically the number of ROPs, texture mapping units (TMUs), and shader processing units/stream processors have been equal. However, from 2004, several GPUs have decoupled these areas to allow optimum transistor allocation for application workload and available memory performance. As the trend continues, it is expected that graphics processors will continue to decouple the various parts of their architectures to enhance their adaptability to future graphics applications. This design also allows chip makers to build a modular line-up, where the top-end GPUs are essentially using the same logic as the low-end products. See also Graphics pipeline Rendering (computer graphics) Execution unit References 3D rendering
https://en.wikipedia.org/wiki/Betty%20Holberton
Frances Elizabeth Holberton (March 7, 1917 – December 8, 2001) was an American computer scientist who was one of the six original programmers of the first general-purpose electronic digital computer, ENIAC. The other five ENIAC programmers were Jean Bartik, Ruth Teitelbaum, Kathleen Antonelli, Marlyn Meltzer, and Frances Spence. Holberton invented breakpoints in computer debugging. Early life and education Holberton was born Frances Elizabeth Snyder in Philadelphia, Pennsylvania in 1917. Her father was John Amos Snyder (1884–1963), her mother was Frances J. Morrow (1892–1981), and she was the third child in a family of eight children. Holberton studied journalism, because its curriculum let her travel far afield. Journalism was also one of the few fields open to women as a career in the 1940s. On her first day of classes at the University of Pennsylvania, her math professor asked her if she wouldn't be better off at home raising children. Career During World War 2 while the U.S. Army needed to compute ballistics trajectories, many women were hired for this task. Holberton was hired by the Moore School of Engineering to work as a "computer" and chosen to be one of the six women to program the ENIAC. The ENIAC stood for Electronic Numerical Integrator And Computer. Classified as "subprofessionals", Holberton, along with Kay McNulty, Marlyn Wescoff, Ruth Lichterman, Betty Jean Jennings, and Fran Bilas, programmed the ENIAC to perform calculations for ballistics trajectories electronically for the Army's Ballistic Research Laboratory. In the beginning, because the ENIAC was classified, the women were only allowed to work with blueprints and wiring diagrams in order to program it. During her time working on ENIAC she had many productive ideas that came to her overnight, leading other programmers to jokingly state that she "solved more problems in her sleep than other people did awake." The ENIAC was unveiled on February 15, 1946, at the University of Pennsylvania.
https://en.wikipedia.org/wiki/Steven%20Block
Steven M. Block (born 1952) is an American biophysicist and Professor at Stanford University with a joint appointment in the departments of Biology and Applied Physics. In addition, he is a member of the scientific advisory group JASON, a senior fellow of Stanford's Freeman Spogli Institute for International Studies, and an amateur bluegrass musician. Block received his B.A. and M.A. from Oxford University. He has been elected to the U.S. National Academy of Sciences (2007) and the American Academy of Arts and Sciences (2000), and is a winner of the Max Delbruck Prize of the American Physical Society (2008), as well as the Single Molecule Biophysics Prize of the Biophysical Society (2007). He served as President of the Biophysical Society during 2005-6. His graduate work was completed in the laboratory of Howard Berg at the University of Colorado and Caltech. He received his Ph.D. in 1983 and went on to do postdoctoral research at Stanford. Since that time, Block has held positions at the Rowland Institute for Science, Harvard University, and Princeton University before returning to Stanford in 1999. As a graduate student, Block picked apart the adaptation kinetics involved in bacterial chemotaxis. As an independent scientist, Block has pioneered the use of optical tweezers, a technique developed by Arthur Ashkin, to study biological enzymes and polymers at the single-molecule level. Work in his lab has led to the direct observation of the 8 nm steps taken by kinesin and the sub-nanometer stepping motions of RNA polymerase on a DNA template. While consulting for the United States government through JASON, Block has researched the many threats associated with bioterrorism and headed influential studies on how advances in genetic engineering have impacted biological warfare. Selected publications References External links Steven Block Profile Block Lab Website 1952 births Living people University of Colorado alumni Harvard University faculty Princeton Universi
https://en.wikipedia.org/wiki/Cantellated%20tesseract
In four-dimensional geometry, a cantellated tesseract is a convex uniform 4-polytope, being a cantellation (a 2nd order truncation) of the regular tesseract. There are four degrees of cantellation of the tesseract, including permutations with truncations. Two are also derived from the 24-cell family. Cantellated tesseract The cantellated tesseract, bicantellated 16-cell, or small rhombated tesseract is a convex uniform 4-polytope or 4-dimensional polytope bounded by 56 cells: 8 small rhombicuboctahedra, 16 octahedra, and 32 triangular prisms. Construction In the process of cantellation, a polytope's 2-faces are effectively shrunk. The rhombicuboctahedron can be called a cantellated cube, since if its six faces are shrunk in their respective planes, each vertex will separate into the three vertices of the rhombicuboctahedron's triangles, and each edge will separate into two of the opposite edges of the rhombicuboctahedron's twelve non-axial squares. When the same process is applied to the tesseract, each of the eight cubes becomes a rhombicuboctahedron in the described way. In addition however, since each cube's edge was previously shared with two other cubes, the separating edges form the three parallel edges of a triangular prism—32 triangular prisms, since there were 32 edges. Further, since each vertex was previously shared with three other cubes, the vertex would split into 12 rather than three new vertices. However, since some of the shrunken faces continue to be shared, certain pairs of these 12 potential vertices are identical to each other, and therefore only 6 new vertices are created from each original vertex (hence the cantellated tesseract's 96 vertices compared to the tesseract's 16). These six new vertices form the vertices of an octahedron—16 octahedra, since the tesseract had 16 vertices. Cartesian coordinates The Cartesian coordinates of the vertices of a cantellated tesseract with edge length 2 are given by all permutations of: Structur
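The coordinate list itself did not survive extraction above; as a hedged supplement, the presentation conventionally given for this polytope (stated here from outside the article, so treat it as an assumption to verify) is:

% Vertices: all coordinate permutations, with all sign choices, of
\left(\pm 1,\ \pm 1,\ \pm\bigl(1+\sqrt{2}\bigr),\ \pm\bigl(1+\sqrt{2}\bigr)\right)
% Choosing which two coordinates carry 1+\sqrt{2} (6 ways) times 2^4 sign patterns
% gives 6 \times 16 = 96 vertices, matching the count stated above, with edge length 2.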
https://en.wikipedia.org/wiki/Active%20Phased%20Array%20Radar
(Image: APAR mounted on top of the German Navy Sachsen-class frigate Hamburg's superstructure.) Active Phased Array Radar (APAR) is a shipborne active electronically scanned array multifunction 3D radar (MFR) developed and manufactured by Thales Nederland. The radar receiver modules are developed and built in the US by the Sanmina Corporation. Characteristics APAR has four fixed (i.e., non-rotating) sensor arrays (faces), fixed on a pyramidal structure. Each face consists of 3424 transmit/receive (TR) modules operating at X band frequencies. The radar provides the following capabilities: air target tracking of over 200 targets out to 150 km surface target tracking of over 150 targets out to 32 km horizon search out to 75 km "limited" volume search out to 150 km (in order to back up the volume search capabilities of the SMART-L) cued search (a mode in which the search is cued using data originating from another sensor) surface naval gunfire support missile guidance using the Interrupted Continuous Wave Illumination (ICWI) technique, thus allowing guidance of 32 semi-active radar homing missiles in flight simultaneously, including 16 in the terminal guidance phase "innovative" Electronic Counter-Countermeasures (ECCM) Note: all ranges listed above are instrumented ranges. Mountings APAR is installed on four Royal Netherlands Navy (RNLN) LCF De Zeven Provinciën class frigates, three German Navy F124 Sachsen class frigates, and three Royal Danish Navy Ivar Huitfeldt class frigates. The Netherlands and Germany (along with Canada) were the original sponsors for the development of APAR, whereas Denmark selected APAR for their frigates as part of a larger decision to select a Thales Nederland anti-air warfare system (designed around the APAR and SMART-L radars, the Raytheon ESSM and SM-2 missile systems, and the Lockheed Martin Mk-41 vertical launch system) over the competing Sea Viper anti-air warfare system (designed around the S1850M an
https://en.wikipedia.org/wiki/List%20of%20streaming%20media%20systems
This is a list of streaming media systems. A more detailed comparison of streaming media systems is also available. Servers Ampache – GPL/LGPL Audio streaming Ant Media Server – Real-Time media streaming atmosph3re – responsive web-based streaming audio server for personal music collection Darwin Streaming Server – Apple Public Source License datarhei Restreamer— Apache licensed media server for RTMP, HLS, and SRT with flexible FFmpeg API and graphical user interface dyne:bolic – Linux live CD ready for radio streaming emby – a media server/client that runs on Linux/Mac/Windows/freeBSD/docker & NAS devices with clients on Android TV/fireTV/Apple TV/Roku/Windows/PlayStation/Xbox/iOS & HTML5 Capable devices FFserver included in FFmpeg (discontinued) Firefly Media Server – GPL Flash Media Server FreeJ – video streamer for Icecast – GPL Helix Universal Server – delivers MPEG-DASH, RTSP, HTTP Live Streaming (HLS), RTMP; developed by RealNetworks, discontinued since October 2014 HelixCommunity – RealNetworks Open Source development community Jellyfin – GPL-licensed fully open-source fork of Emby Icecast – GPL streaming media server IIS Media Services – Extensions for the Windows IIS web server that deliver intelligent progressive downloads, Smooth Streaming, and HTTP Live Streaming Kaltura – full-featured Affero GPL video platform running on your own servers or cloud LIVE555 – a set of open source (LGPL) C++ libraries for multimedia streaming; its RTSP/RTP/RTCP client implementation is used by VLC media player and MPlayer Logitech Media Server – open source music streaming server, backboned by a music database (formerly SlimServer, SqueezeCenter and Squeezebox Server) Nimble Streamer – freeware server for live and VOD streaming (transcoding function is not free) nginx with Nginx-rtmp-module (BSD 2-clause) OpenBroadcaster – LPFM IPTV broadcast automation tools with AGPL Linux Python play out based on Gstreamer Open Broadcaster Software – open source streaming and record
https://en.wikipedia.org/wiki/Schoof%E2%80%93Elkies%E2%80%93Atkin%20algorithm
The Schoof–Elkies–Atkin algorithm (SEA) is an algorithm used for finding the order of, or calculating the number of points on, an elliptic curve over a finite field. Its primary application is in elliptic curve cryptography. The algorithm is an extension of Schoof's algorithm by Noam Elkies and A. O. L. Atkin to significantly improve its efficiency (under heuristic assumptions). Details The Elkies-Atkin extension to Schoof's algorithm works by restricting the set of primes considered to primes of a certain kind. These came to be called Elkies primes and Atkin primes respectively. A prime ℓ is called an Elkies prime if the characteristic equation of the Frobenius endomorphism, φ² − tφ + q = 0, splits over 𝔽_ℓ, while an Atkin prime is a prime that is not an Elkies prime. Atkin showed how to combine information obtained from the Atkin primes with the information obtained from Elkies primes to produce an efficient algorithm, which came to be known as the Schoof–Elkies–Atkin algorithm. The first problem to address is to determine whether a given prime is Elkies or Atkin. In order to do so, we make use of modular polynomials Φ_ℓ(X, Y) that parametrize pairs of ℓ-isogenous elliptic curves in terms of their j-invariants (in practice alternative modular polynomials may also be used but for the same purpose). If the instantiated polynomial Φ_ℓ(X, j(E)) has a root in 𝔽_q, then ℓ is an Elkies prime, and we may compute a polynomial f_ℓ(X) whose roots correspond to points in the kernel of the ℓ-isogeny from E to the ℓ-isogenous curve. The polynomial f_ℓ is a divisor of the corresponding division polynomial used in Schoof's algorithm, and it has significantly lower degree, (ℓ − 1)/2 versus (ℓ² − 1)/2. For Elkies primes, this allows one to compute the number of points on E modulo ℓ more efficiently than in Schoof's algorithm. In the case of an Atkin prime, we can gain some information from the factorization pattern of Φ_ℓ(X, j(E)) in 𝔽_q[X], which constrains the possibilities for the number of points modulo ℓ, but the asymptotic complexity of the algorithm depends entirely on the Elkies primes. Provided there are sufficien
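A small illustrative sketch of the Elkies/Atkin distinction, assuming hypothetical values of q and the Frobenius trace t; in the real algorithm t is precisely the unknown being computed, so the classification is actually carried out through the modular polynomial rather than through this discriminant test:

def is_elkies(ell: int, t: int, q: int) -> bool:
    """Illustration only: an odd prime ell (not dividing q) is an Elkies prime
    iff x^2 - t*x + q factors over F_ell, i.e. iff t^2 - 4q is a square mod ell
    (Euler's criterion). In SEA itself t is unknown, so the test is done via
    the modular polynomial Phi_ell instead."""
    d = (t * t - 4 * q) % ell
    if d == 0:
        return True  # repeated root; usually handled as a special case
    return pow(d, (ell - 1) // 2, ell) == 1

# Hypothetical field size and trace (must satisfy the Hasse bound |t| <= 2*sqrt(q)).
q, t = 1009, 30
for ell in (3, 5, 7, 11, 13):
    print(ell, "Elkies" if is_elkies(ell, t, q) else "Atkin")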
https://en.wikipedia.org/wiki/AppArmor
AppArmor ("Application Armor") is a Linux kernel security module that allows the system administrator to restrict programs' capabilities with per-program profiles. Profiles can allow capabilities like network access, raw socket access, and the permission to read, write, or execute files on matching paths. AppArmor supplements the traditional Unix discretionary access control (DAC) model by providing mandatory access control (MAC). It has been partially included in the mainline Linux kernel since version 2.6.36 and its development has been supported by Canonical since 2009. Details In addition to manually creating profiles, AppArmor includes a learning mode, in which profile violations are logged, but not prevented. This log can then be used for generating an AppArmor profile, based on the program's typical behavior. AppArmor is implemented using the Linux Security Modules (LSM) kernel interface. AppArmor is offered in part as an alternative to SELinux, which critics consider difficult for administrators to set up and maintain. Unlike SELinux, which is based on applying labels to files, AppArmor works with file paths. Proponents of AppArmor claim that it is less complex and easier for the average user to learn than SELinux. They also claim that AppArmor requires fewer modifications to work with existing systems. For example, SELinux requires a filesystem that supports "security labels", and thus cannot provide access control for files mounted via NFS. AppArmor is filesystem-agnostic. Other systems AppArmor represents one of several possible approaches to the problem of restricting the actions that installed software may take. The SELinux system generally takes an approach similar to AppArmor. One important difference: SELinux identifies file system objects by inode number instead of path. Under AppArmor an inaccessible file may become accessible if a hard link to it is created. This difference may be less important than it once was, as Ubuntu 10.10 and later mit
https://en.wikipedia.org/wiki/Biophysical%20Society
The Biophysical Society is an international scientific society whose purpose is to lead the development and dissemination of knowledge in biophysics. Founded in 1958, the Society currently consists of over 7,500 members in academia, government, and industry. Although the Society is based in the United States, it is an international organization. Overseas members currently comprise over one third of the total. Origins The Biophysical Society was founded in response to the growth of the field of biophysics after World War Two, as well as concerns that the American Physiological Society had become too large to serve the community of biophysicists. Discussions between prominent biophysicists in 1955 and 1956 led to the planning of the society's first meeting in Columbus, Ohio in 1957, with about 500 attendees. Among the scientists involved in the early effort were Ernest C. Pollard, Samuel Talbot, Otto Schmitt, Kenneth Stewart Cole, W. A. Selle, Max Lauffer, Ralph Stacy, Herman P. Schwan, and Robley C. Williams. This meeting was described by Cole as "a biophysics meeting with the ulterior motive of finding out if there was such a thing as biophysics and, if so, what sort of thing this biophysics might be." Organization The Biophysical Society is governed by four officers: the President, President-elect, Past-President Secretary, and Treasurer, as well as by a Council of twelve members in addition to the officers. These offices are elected by the membership of the society. The Council appoints an executive officer to oversee the functions and staff of the society. The society has a number of committees that help to implement its mission. The committees are: Awards, Early Careers, Education, Finance, Member Services, Membership, Committee for Inclusion and Diversity, Nominating, Professional Opportunities for Women, Program, Public Affairs, Publications, and Thematic Meetings. The Biophysical Society also supports subgroups focusing on smaller areas within biophy
https://en.wikipedia.org/wiki/Rotational%E2%80%93vibrational%20coupling
In physics, rotational–vibrational coupling occurs when the rotation frequency of a system is close to or identical to a natural internal vibration frequency. The animation on the right shows ideal motion, with the force exerted by the spring increasing linearly with the distance from the center of rotation and no friction. In rotational-vibrational coupling, angular velocity oscillates. By pulling the circling masses closer together, the spring transfers its stored strain energy into the kinetic energy of the circling masses, increasing their angular velocity. The spring cannot bring the circling masses together, since the spring's pull weakens as the circling masses approach. At some point, the increasing angular velocity of the circling masses overcomes the pull of the spring, causing the circling masses to increasingly distance themselves. This increasingly strains the spring, strengthening its pull and causing the circling masses to transfer their kinetic energy into the spring's strain energy, thereby decreasing the circling masses' angular velocity. At some point, the pull of the spring overcomes the angular velocity of the circling masses, restarting the cycle. Helicopter designs must incorporate damping devices, because at specific angular velocities the rotorblade vibrations can reinforce themselves by rotational-vibrational coupling and build up catastrophically. Without damping, these vibrations would cause the rotorblades to break loose. Energy conversions The animation on the right provides a clearer view of the oscillation of the angular velocity. There is a close analogy with harmonic oscillation. When a harmonic oscillation is at its midpoint, all the energy of the system is kinetic energy. When the harmonic oscillation is at the points furthest away from the midpoint, all the energy of the system is potential energy. The energy of the system is oscillating back and forth between kinetic energy and potential ener
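A minimal numerical sketch of the energy exchange described above, assuming two unit masses joined by an ideal linear spring and spinning without friction; the parameter values are arbitrary, the integrator is a simple fixed-step scheme, and this is in no way a model of a helicopter rotor:

# Two unit masses joined by a linear spring, rotating about their common centre.
# Angular momentum per mass l = m * r^2 * omega is conserved, so as the spring
# pulls the masses inward omega rises, and as they swing back out omega falls:
# the angular velocity oscillates while energy moves between kinetic and strain energy.
m, k, L0 = 1.0, 4.0, 2.0        # mass, spring constant, natural spring length (hypothetical)
r, vr = 1.4, 0.0                # initial radius of each mass and radial velocity
omega0 = 1.0
l = m * r * r * omega0          # conserved angular momentum per mass

dt, steps = 0.001, 5000
for i in range(steps):
    omega = l / (m * r * r)
    # radial acceleration: centrifugal term minus spring pull (spring length is 2r)
    ar = r * omega * omega - (k / m) * (2 * r - L0)
    vr += ar * dt               # semi-implicit Euler step
    r += vr * dt
    if i % 1000 == 0:
        print(f"t={i*dt:5.2f}  r={r:6.3f}  omega={omega:6.3f}")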
https://en.wikipedia.org/wiki/-ase
The suffix -ase is used in biochemistry to form names of enzymes. The most common way to name enzymes is to add this suffix onto the end of the substrate, e.g. an enzyme that breaks down peroxides may be called peroxidase; the enzyme that produces telomeres is called telomerase. Sometimes enzymes are named for the function they perform, rather than substrate, e.g. the enzyme that polymerizes (assembles) DNA into strands is called polymerase; see also reverse transcriptase. Etymology The -ase suffix is a libfix derived from "diastase", the first recognized enzyme. Its usage in subsequently discovered enzymes was proposed by Émile Duclaux, with the intention of honoring the first scientists to isolate diastase. See also Amylase DNA polymerase References ase Biological nomenclature ase ase
https://en.wikipedia.org/wiki/Active%20networking
Active networking is a communication pattern that allows packets flowing through a telecommunications network to dynamically modify the operation of the network. Active network architecture is composed of execution environments (similar to a unix shell that can execute active packets), a node operating system capable of supporting one or more execution environments. It also consists of active hardware, capable of routing or switching as well as executing code within active packets. This differs from the traditional network architecture which seeks robustness and stability by attempting to remove complexity and the ability to change its fundamental operation from underlying network components. Network processors are one means of implementing active networking concepts. Active networks have also been implemented as overlay networks. What does it offer? Active networking allows the possibility of highly tailored and rapid "real-time" changes to the underlying network operation. This enables such ideas as sending code along with packets of information allowing the data to change its form (code) to match the channel characteristics. The smallest program that can generate a sequence of data can be found in the definition of Kolmogorov complexity. The use of real-time genetic algorithms within the network to compose network services is also enabled by active networking. How it relates to other networking paradigms Active networking relates to other networking paradigms primarily based upon how computing and communication are partitioned in the architecture. Active networking and software-defined networking Active networking is an approach to network architecture with in-network programmability. The name derives from a comparison with network approaches advocating minimization of in-network processing, based on design advice such as the "end-to-end argument". Two major approaches were conceived: programmable network elements ("switches") and capsules, a programmabi
https://en.wikipedia.org/wiki/Typographical%20Number%20Theory
Typographical Number Theory (TNT) is a formal axiomatic system describing the natural numbers that appears in Douglas Hofstadter's book Gödel, Escher, Bach. It is an implementation of Peano arithmetic that Hofstadter uses to help explain Gödel's incompleteness theorems. Like any system implementing the Peano axioms, TNT is capable of referring to itself (it is self-referential). Numerals TNT does not use a distinct symbol for each natural number. Instead it makes use of a simple, uniform way of giving a compound symbol to each natural number: {| | zero | align=right | 0 |- | one | align=right | S0 |- | two | align=right | SS0 |- | three | align=right | SSS0 |- | four | align=right | SSSS0 |- | five | align=right | SSSSS0 |} The symbol S can be interpreted as "the successor of", or "the number after". Since this is, however, a number theory, such interpretations are useful, but not strict. It cannot be said that because four is the successor of three that four is SSSS0, but rather that since three is the successor of two, which is the successor of one, which is the successor of zero, which has been described as 0, four can be "proved" to be SSSS0. TNT is designed such that everything must be proven before it can be said to be true. Variables In order to refer to unspecified terms, TNT makes use of five variables. These are a, b, c, d, e. More variables can be constructed by adding the prime symbol after them; for example, a, b, c, a, a‴ are all variables. In the more rigid version of TNT, known as "austere" TNT, only a, a, a‴ etc. are used. Operators Addition and multiplication of numerals In Typographical Number Theory, the usual symbols of "+" for additions, and "·" for multiplications are used. Thus to write "b plus c" is to write (b + c) and "a times d" is written as (a·d) The parentheses are required. Any laxness would violate TNT's formation system (although it is trivially proved this formalism is unnecessary for operations which are both c
https://en.wikipedia.org/wiki/Annuities%20in%20the%20European%20Union
Under European Union law, an annuity is a financial contract which provides an income stream in return for an initial payment with specific parameters. It is the opposite of a settlement funding. A Swiss annuity is not considered a European annuity for tax reasons. Immediate annuity An immediate annuity is an annuity for which the time between the contract date and the date of the first payment is not longer than the time interval between payments. A common use for an immediate annuity is to provide a pension to a retired person or persons. It is a financial contract which makes a series of payments with certain characteristics: the periodical payments may be level or fluctuating; they may be made annually or at more frequent intervals, in advance or in arrears; and the duration may be fixed (annuity certain), during the lifetime of one or more persons (possibly reduced after the death of one person), during the lifetime but not longer than a maximum number of years, or during the lifetime but not shorter than a minimum number of years. Annuity certain An annuity certain pays the annuitant for a designated number of years. This option is not suitable for retirement income, as the person may outlive the number of years the annuity will pay. Life annuity A life annuity or lifetime immediate annuity is most often used to provide an income in old age (i.e., a pension). This type of annuity may be purchased from an insurance (in Ireland and the UK, life assurance) company. This annuity can be compared to a loan which is made by the purchaser to the issuing company, which then pays back the original capital with interest to the annuitant on whose life the annuity is based. The assumed period of the loan is based on the life expectancy of the annuitant, but life annuities are payable until the death of the last surviving annuitant. In order to guarantee that the income continues for life, the investment relies on cross-subsidy. Because an annuity population can be expected to have a distribution of lifespans
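A minimal sketch of the arithmetic for the simplest case, the level-payment annuity certain, using the standard amortisation formula; a life annuity would additionally weight each payment by the probability that the annuitant is still alive, which is omitted here:

def annuity_certain_payment(pv: float, i: float, n: int) -> float:
    """Level payment in arrears for an annuity certain bought for premium pv:
    payment = pv * i / (1 - (1 + i)**-n), with i the effective rate per period."""
    if i == 0:
        return pv / n
    return pv * i / (1 - (1 + i) ** (-n))

# Hypothetical example: 100,000 purchase price, 4% annual effective rate, 20 annual payments.
print(round(annuity_certain_payment(100_000, 0.04, 20), 2))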
https://en.wikipedia.org/wiki/CoDeeN
CoDeeN is a proxy server system created at Princeton University in 2003 and deployed for general use on PlanetLab. It operates as follows: users point their web browsers' proxy settings at a nearby high-bandwidth proxy that participates in the system. Requests to that proxy are then forwarded to an appropriate member of the system that is in charge of the file (and should be caching it) and that has sent recent updates showing that it is still alive. The file is forwarded to the proxy and thence to the client. What this means for normal users is that if they use the system and a server is slow, but the content is cached on the system, then (after the first upload) requests for that file will be fast. It also means that the request will not be satisfied by the original server, which amounts to free bandwidth for that server. For rare files this system could be slightly slower than downloading the file directly. The system's speed is also constrained by the number of participating proxies. For the case of large files requested by many peers, it uses a kind of 'multi-cast stream' from one peer to the others, which then distribute the content to their respective proxies. CoBlitz, a CDN technology firm (2006–2009), was an offshoot of this approach, in that files are not saved in the web cache of a single member of the proxy system, but are instead saved piece-wise across several members and 'gathered up' when they are requested. This allows for more sharing of disk space among proxies, and for higher fault tolerance. To access this system, URLs were prefixed with http://coblitz.codeen.org/. Verivue Inc. acquired CoBlitz in October 2010. References External links CoDeeN Servers (computing) Distributed data storage
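The article does not spell out how CoDeeN decides which member is "in charge of" a file, so the following is only an illustrative sketch of one standard way a set of cooperating proxies can agree on an owner for each URL without central coordination (rendezvous hashing); it is not a description of CoDeeN's actual policy.

```python
import hashlib

def responsible_proxy(url: str, proxies: list[str]) -> str:
    """Every node computes the same score for each (proxy, url) pair,
    so all nodes independently agree on which proxy owns the URL."""
    def score(proxy: str) -> int:
        return int(hashlib.sha1(f"{proxy}|{url}".encode()).hexdigest(), 16)
    return max(proxies, key=score)

proxies = ["proxy-a.example.org", "proxy-b.example.org", "proxy-c.example.org"]
print(responsible_proxy("http://example.com/big-file.iso", proxies))
```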
https://en.wikipedia.org/wiki/DRMAA
Distributed Resource Management Application API (DRMAA) is a high-level Open Grid Forum (OGF) API specification for the submission and control of jobs to a distributed resource management (DRM) system, such as a cluster or grid computing infrastructure. The scope of the API covers all the high level functionality required for applications to submit, control, and monitor jobs on execution resources in the DRM system. In 2007, DRMAA was one of the first two (the other one was GridRPC) specifications that reached the full recommendation status in the OGF. In 2012 the second version of the DRMAA standard (DRMAA2) was published in an abstract interface definition language (IDL) defining the semantic of the functions in GFD 194. DRMAA2 specifies more than twice as many calls as DRMAA. It covers cluster monitoring, has a notion of queues and machines, and introduces a multi job-session concept for single applications for a better job workflow management. Later in 2012 the C API was specified as a first language binding in GF 198. Development model The development of this API was done through the Global Grid Forum, in the model of IETF standard development, and it was originally co-authored by: Roger Brobst from Cadence Design Systems Waiman Chan from IBM Fritz Ferstl from Sun Microsystems, now Univa Jeff Gardiner from John P. Robarts Research Institute Andreas Haas from Sun Microsystems (Co-chair) Bill Nitzberg from Altair Engineering Hrabri Rajic from Intel (Maintainer & Co-chair) John Tollefsrud from Sun Microsystems Founding (chair) This specification was first proposed at Global Grid Forum 3 (GGF3) in Frascati, Italy, but gained most of its momentum at Global Grid Forum 4 in Toronto, Ontario. The development of the specification was first proposed with the objective to facilitate direct interfacing of applications to existing DRM systems by application's builders, portal builders, and Independent Software Vendors (ISVs). Because the API was co-authored by partic
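To make the job-submission model concrete, here is a minimal sketch using the third-party drmaa Python binding (DRMAA version 1); it assumes a DRM system such as Grid Engine, together with its DRMAA library, is installed and configured on the submitting host.

```python
import drmaa  # third-party binding that wraps the site's libdrmaa

s = drmaa.Session()
s.initialize()
try:
    jt = s.createJobTemplate()
    jt.remoteCommand = "/bin/sleep"   # program to run on an execution host
    jt.args = ["10"]
    job_id = s.runJob(jt)             # submit the job to the DRM system
    info = s.wait(job_id, drmaa.Session.TIMEOUT_WAIT_FOREVER)
    print(job_id, "finished with exit status", info.exitStatus)
    s.deleteJobTemplate(jt)
finally:
    s.exit()
```

DRMAA2 adds monitoring, machine and queue concepts, and multiple job sessions, but its language bindings expose a richer interface than the one shown here.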
https://en.wikipedia.org/wiki/CDDLM
CDDLM (Configuration Description, Deployment, and Lifecycle Management Specification) is a Global Grid Forum standard for the management, deployment and configuration of Grid Service lifecycles or inter-organization resources. Structure The specification is based on component documents; Document that describes functional requirements, use cases, and high-level architectures, and otherwise serves as a Foundation Document Document outlining the development of a non-XML based Configuration, Description and Deployment Language Document outlining the development of an XML based Configuration, Description and Deployment Language Document outlining the development of a Configuration, Description and Deployment Component Model Development Model The development of this API was done through the Global Grid Forum as an open standard, in the model of IETF standard development, and it was originally edited by D. Bell, T. Kojo, P. Goldsack, S. Loughran, D. Milojicic, S. Schaefer, J. Tatemura, and P. Toft. Significance System administration in a distributed environment with diverse hardware, software, patch level, and imposed user requirements makes the ability to deploy, manage, and describe services and software configuration difficult. Within a grid, this difficulty is complicated further by the need to have similar service end points, possibly on heterogeneous architectures. Grid service requests may require configuration changes. This standard provided a framework which described a language and methods that have the ability to describe system configuration, and move system, services, and software towards desired configuration endpoints. Furthermore, it served as the first real attempt to address system administration issues within a grid. CDDLM is to grids, as CFEngine for servers. References Global Grid Forum CDDLM document Global Grid Forum Document Series External links Global Grid Forum homepage System Administration and CDDLM Distributed Resource Mana
https://en.wikipedia.org/wiki/PlanetLab
PlanetLab was a group of computers available as a testbed for computer networking and distributed systems research. It was established in 2002 by Prof. Larry L. Peterson and Prof. David Culler, and as of June 2010, it was composed of 1090 nodes at 507 sites worldwide. Each research project had a "slice", or virtual machine access to a subset of the nodes. Accounts were limited to persons affiliated with corporations and universities that hosted PlanetLab nodes. However, a number of free, public services have been deployed on PlanetLab, including CoDeeN, the Coral Content Distribution Network, and Open DHT. Open DHT was taken down on 1 July 2009. PlanetLab was officially shut down in May 2020 but continues in Europe. References External links PlanetLab PlanetLab Europe Software testing
https://en.wikipedia.org/wiki/Hidden%20subgroup%20problem
The hidden subgroup problem (HSP) is a topic of research in mathematics and theoretical computer science. The framework captures problems such as factoring, discrete logarithm, graph isomorphism, and the shortest vector problem. This makes it especially important in the theory of quantum computing because Shor's algorithm for factoring in quantum computing is an instance of the hidden subgroup problem for finite abelian groups, while the other problems correspond to finite groups that are not abelian. Problem statement Given a group , a subgroup , and a set , we say a function hides the subgroup if for all if and only if . Equivalently, is constant on the cosets of H, while it is different between the different cosets of H. Hidden subgroup problem: Let be a group, a finite set, and a function that hides a subgroup . The function is given via an oracle, which uses bits. Using information gained from evaluations of via its oracle, determine a generating set for . A special case is when is a group and is a group homomorphism in which case corresponds to the kernel of . Motivation The hidden subgroup problem is especially important in the theory of quantum computing for the following reasons. Shor's algorithm for factoring and for finding discrete logarithms (as well as several of its extensions) relies on the ability of quantum computers to solve the HSP for finite abelian groups. The existence of efficient quantum algorithms for HSPs for certain non-abelian groups would imply efficient quantum algorithms for two major problems: the graph isomorphism problem and certain shortest vector problems (SVPs) in lattices. More precisely, an efficient quantum algorithm for the HSP for the symmetric group would give a quantum algorithm for the graph isomorphism. An efficient quantum algorithm for the HSP for the dihedral group would give a quantum algorithm for the unique SVP. Algorithms There is an efficient quantum algorithm for solving HSP over finite
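A standard concrete instance (added for illustration) is Simon's problem, which is the hidden subgroup problem over the abelian group \(G = (\mathbb{Z}/2\mathbb{Z})^n\): one is given \(f : \{0,1\}^n \to X\) with the promise that there is a nonzero string \(s\) such that

\[ f(x) = f(y) \iff y = x \ \text{or}\ y = x \oplus s . \]

Here \(f\) hides the subgroup \(H = \{0, s\}\), its cosets are the pairs \(\{x, x \oplus s\}\), and solving the HSP amounts to recovering the generator \(s\).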
https://en.wikipedia.org/wiki/Optical%20disc%20recording%20technologies
Optical disc authoring requires a number of different optical disc recorder technologies working in tandem, from the optical disc media to the firmware to the control electronics of the optical disc drive. Types of recordable optical disc There are numerous formats of recordable optical direct to disk on the market, all of which are based on using a laser to change the reflectivity of the digital recording medium in order to duplicate the effects of the pits and lands created when a commercial optical disc is pressed. Emerging technologies such as holographic data storage and 3D optical data storage aim to use entirely different data storage methods, but these products are in development and are not yet widely available. The earliest form is magneto-optical, which uses a magnetic field in combination with a laser to write to the medium. Though not widely used in consumer equipment, the original NeXT cube used MO media as its standard storage device, and consumer MO technology is available in the form of Sony's MiniDisc. This form of medium is rewriteable. The most common form of recordable optical media is write-once organic dye technology, popularized in the form of the CD-R and still used for higher-capacity media such as DVD-R. This uses the laser alone to scorch a transparent organic dye (usually cyanine, phthalocyanine, or azo compound-based) to create "pits" (i.e. dark spots) over a reflective spiral groove. Most such media are designated with an R (recordable) suffix. Such discs are often quite colorful, generally coming in shades of blue or pale yellow or green. Rewritable, non-magnetic optical media are possible using phase change alloys, which are converted between crystalline and amorphous states (with different reflectivity) using the heat from the drive laser. Such media must be played in specially tuned drives, since the phase-change material has less of a contrast in reflectivity than dye-based media; while most modern drives support such media,
https://en.wikipedia.org/wiki/Variable%20%28mathematics%29
In mathematics, a variable (from Latin variabilis, "changeable") is a symbol that represents a mathematical object. A variable may represent a number, a vector, a matrix, a function, the argument of a function, a set, or an element of a set. Algebraic computations with variables as if they were explicit numbers solve a range of problems in a single computation. For example, the quadratic formula solves any quadratic equation by substituting the numeric values of the coefficients of that equation for the variables that represent them in the quadratic formula. In mathematical logic, a variable is either a symbol representing an unspecified term of the theory (a meta-variable), or a basic object of the theory that is manipulated without referring to its possible intuitive interpretation. History In ancient works such as Euclid's Elements, single letters refer to geometric points and shapes. In the 7th century, Brahmagupta used different colours to represent the unknowns in algebraic equations in the Brāhmasphuṭasiddhānta. One section of this book is called "Equations of Several Colours". At the end of the 16th century, François Viète introduced the idea of representing known and unknown numbers by letters, nowadays called variables, and the idea of computing with them as if they were numbers—in order to obtain the result by a simple replacement. Viète's convention was to use consonants for known values, and vowels for unknowns. In 1637, René Descartes "invented the convention of representing unknowns in equations by x, y, and z, and knowns by a, b, and c". Contrarily to Viète's convention, Descartes' is still commonly in use. The history of the letter x in math was discussed in a 1887 Scientific American article. Starting in the 1660s, Isaac Newton and Gottfried Wilhelm Leibniz independently developed the infinitesimal calculus, which essentially consists of studying how an infinitesimal variation of a variable quantity induces a corresponding variation of anothe
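As a worked example of that remark (added; not part of the article), the single computation is the once-and-for-all derivation of

\[ x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \qquad \text{for } ax^2 + bx + c = 0,\ a \neq 0, \]

after which a particular equation such as \(2x^2 - 3x + 1 = 0\) is solved by substituting \(a = 2\), \(b = -3\), \(c = 1\), giving \(x = 1\) or \(x = \tfrac{1}{2}\).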
https://en.wikipedia.org/wiki/Optical%20disc%20recording%20modes
In optical disc authoring, there are multiple modes for recording, including Disc-At-Once, Track-At-Once, and Session-At-Once. CD Disc-At-Once Disc-At-Once (DAO) for CD-R media is a mode that masters the disc contents in one pass, rather than a track at a time as in Track At Once. DAO mode, unlike TAO mode, allows any amount of audio data (or no data at all) to be written in the "pre-gaps" between tracks. One use of this technique, for example, is to burn track introductions to be played before each track starts. A CD player will generally display a negative time offset counting up to the next track when such pre-gap introductions play. Pre-gap audio before the first track of the CD makes it possible to burn an unnumbered, "hidden" audio track. This track can only be accessed by "rewinding" from the start of the first track, backwards into the pre-gap audio. DAO recording is also the only way to write data to the unused R-W sub-channels. This allows for extended graphic and text features on an audio CD such as CD+G and CD-Text. It is also the only way to write audio files that link together seamlessly with no gaps, a technique often used in progressive rock, trance and other music genres. CD Track-At-Once Track-At-Once (TAO) is a recording mode where the recording laser stops after each track is finished and two run-out blocks are written. One link block and four run-in blocks are written when the next track is recorded. TAO discs can have both data and audio at the same time. There are 2 TAO writing modes Mode 1 Mode 2 XA DVD-R Disc At Once Disc-At-Once (DAO) recording for DVD-R media is a mode in which all data is written sequentially to the disc in one uninterrupted recording session. The on-disk contents result in a lead-in area, followed by the data, and closed by a lead-out area. The data is addressable in sectors of 2048 bytes each, with the first sector address being zero. There are no run-out blocks as in CD-R disc-at-once. Session At Once
https://en.wikipedia.org/wiki/Formal%20calculation
In mathematical logic, a formal calculation, or formal operation, is a calculation that is systematic but without a rigorous justification. It involves manipulating symbols in an expression using a generic substitution without proving that the necessary conditions hold. Essentially, it involves the form of an expression without considering its underlying meaning. This reasoning can either serve as positive evidence that some statement is true when it is difficult or unnecessary to provide proof, or as an inspiration for the creation of new (completely rigorous) definitions. However, this interpretation of the term formal is not universally accepted, and some consider it to mean quite the opposite: a completely rigorous argument, as in formal mathematical logic. Examples Formal calculations can lead to results that are wrong in one context, but correct in another context. The equation \(\sum_{n=0}^{\infty} q^n = \frac{1}{1-q}\) holds if q has an absolute value less than 1. Ignoring this restriction, and substituting q = 2, leads to \(\sum_{n=0}^{\infty} 2^n = 1 + 2 + 4 + 8 + \cdots = -1\). Substituting q = 2 into the proof of the first equation yields a formal calculation that produces the last equation. But it is wrong over the real numbers, since the series does not converge. However, in other contexts (e.g. working with 2-adic numbers, or with integers modulo a power of 2), the series does converge. The formal calculation implies that the last equation must be valid in those contexts. Another example is obtained by substituting q = -1. The resulting series 1 - 1 + 1 - 1 + ... is divergent (over the real and the p-adic numbers), but a value can be assigned to it with an alternative method of summation, such as Cesàro summation. The resulting value, 1/2, is the same as that obtained by the formal computation. Formal power series Formal power series is a concept that adopts the form of power series from real analysis. The word "formal" indicates that the series need not converge. In mathematics, and especially in algebra, a formal series is an infinite sum that is consider
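To make the 2-adic remark concrete (an added note): in \(\mathbb{Z}_2\) the partial sums satisfy

\[ 1 + 2 + 4 + \cdots + 2^{k-1} = 2^k - 1 \longrightarrow -1 \quad (k \to \infty), \]

because \(|2^k|_2 = 2^{-k} \to 0\); the value \(-1\) produced by the formal calculation is therefore the genuine limit of the series in the 2-adic metric.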
https://en.wikipedia.org/wiki/GPnotebook
GPnotebook is a British medical database for general practitioners (GPs). It is an online encyclopaedia of medicine that provides an immediate reference resource for clinicians worldwide. The database consists of over 30,000 index terms and over two million words of information. GPnotebook is provided online by Oxbridge Solutions Limited. GPnotebook website is primarily designed with the needs of general practitioners (GPs) in mind, and written by a variety of specialists, ranging from paediatrics to accident and emergency. The original idea for the database began in the canteen of John Radcliffe Hospital in 1990 while James McMorran, a first-year Oxford University clinical student, was writing up his medical notes. Instead of writing notes in longhand, he wrote his notes in ‘mind maps’ of packets of information linking different concepts and conditions in a two-dimensional representation of clinical knowledge. James discussed with Stewart McMorran (then a medical student at Cambridge University and a talented computer programmer) this way of representing medical knowledge and between them they created the authoring software to produce linking ‘packets’ of information in a database. This first authoring software and database was the origin of what today is GPnotebook. It was, in effect, a medical ‘Wiki’ over 16 years before the first ‘Wiki’! Initially, James used the authoring software alone to capture his own clinical learning. There was interest from other medical students at Oxford and in the end a team of six authors (mainly Oxford medical students) became the founding (and continuing) principal authors of GPnotebook. Among them was Damian Crowther who, in time, took over the role of technical lead for the project. James takes the role of editorial lead for the website. Damian developed the software for the web version of the database which was released on the worldwide web in 2001 as GPnotebook. GPnotebook is used within consultation by general practitioner
https://en.wikipedia.org/wiki/Structural%20steel
Structural steel is a category of steel used for making construction materials in a variety of shapes. Many structural steel shapes take the form of an elongated beam having a profile of a specific cross section. Structural steel shapes, sizes, chemical composition, mechanical properties such as strengths, storage practices, etc., are regulated by standards in most industrialized countries. Most structural steel shapes, such as I-beams, have high second moments of area, which means they are very stiff with respect to their cross-sectional area and thus can support a high load without excessive sagging. Common structural shapes The shapes available are described in many published standards worldwide, and a number of specialist and proprietary cross sections are also available. I-beam (I-shaped cross-section – in Britain these include Universal Beams (UB) and Universal Columns (UC); in Europe it includes the IPE, HE, HL, HD and other sections; in the US it includes Wide Flange (WF or W-Shape) and sections) Z-Shape (half a flange in opposite directions) HSS-Shape (Hollow structural section also known as SHS (structural hollow section) and including square, rectangular, circular (pipe) and elliptical cross sections) Angle (L-shaped cross-section) Structural channel, or C-beam, or C cross-section Tee (T-shaped cross-section) Rail profile (asymmetrical I-beam) Railway rail Vignoles rail Flanged rail Grooved rail Bar, a long piece with a rectangular cross section, but not so wide as to be called a sheet. Rod, a round or square section long compared to its width; see also rebar and dowel. Plate, metal sheets thicker than 6 mm or 1⁄4 in. Open web steel joist While many sections are made by hot or cold rolling, others are made by welding together flat or bent plates (for example, the largest circular hollow sections are made from flat plate bent into a circle and seam-welded). The terms angle iron, channel iron, and sheet iron have been in common use since before wrought iron wa
https://en.wikipedia.org/wiki/Fault%20coverage
Fault coverage refers to the percentage of some type of fault that can be detected during the test of any engineered system. High fault coverage is particularly valuable during manufacturing test, and techniques such as Design For Test (DFT) and automatic test pattern generation are used to increase it. In electronics for example, stuck-at fault coverage is measured by sticking each pin of the hardware model at logic '0' and logic '1', respectively, and running the test vectors. If at least one of the outputs differs from what is to be expected, the fault is said to be detected. Conceptually, the total number of simulation runs is twice the number of pins (since each pin is stuck in one of two ways, and both faults should be detected). However, there are many optimizations that can reduce the needed computation. In particular, often many non-interacting faults can be simulated in one run, and each simulation can be terminated as soon as a fault is detected. A fault coverage test passes when at least a specified percentage of all possible faults can be detected. If it does not pass, at least three options are possible. First, the designer can augment or otherwise improve the vector set, perhaps by using a more effective automatic test pattern generation tool. Second, the circuit may be re-defined for better fault detectibility (improved controllability and observability). Third, the designer may simply accept the lower coverage. Test coverage (computing) The term test coverage used in the context of programming / software engineering, refers to measuring how much a software program has been exercised by tests. Coverage is a means of determining the rigour with which the question underlying the test has been answered. There are many kinds of test coverage: code coverage feature coverage, scenario coverage, screen item coverage, requirements coverage, model coverage. Each of these coverage types assumes that some kind of baseline exists which defin
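As a toy illustration of the stuck-at measurement just described (an added sketch; the circuit and vector set are hypothetical), the following Python code injects every single stuck-at-0/1 fault on the nets of a small AND–OR circuit and reports the fraction detected by a given vector set.

```python
from itertools import product

def circuit(a, b, c, fault=None):
    """out = (a AND b) OR c, with an optional single stuck-at fault.
    A fault is a (net, value) pair forcing that net to 0 or 1."""
    def val(net, x):
        return fault[1] if fault and fault[0] == net else x
    a, b, c = val("a", a), val("b", b), val("c", c)
    n = val("n", a & b)          # internal net between the AND and the OR
    return val("out", n | c)

nets = ["a", "b", "c", "n", "out"]
faults = [(net, v) for net in nets for v in (0, 1)]
vectors = [(0, 0, 0), (1, 1, 0), (0, 1, 1), (1, 0, 0)]   # a small test set

# A fault is detected if at least one vector produces a different output.
detected = sum(
    any(circuit(*v) != circuit(*v, fault=f) for v in vectors)
    for f in faults
)
print(f"stuck-at coverage: {detected}/{len(faults)} "
      f"= {100 * detected / len(faults):.0f}%")
```

If the reported percentage is below the target, the vector set would be augmented (for example by an ATPG tool), mirroring the first remediation option described above.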
https://en.wikipedia.org/wiki/501%20%28number%29
501 (five hundred [and] one) is the natural number following 500 and preceding 502. 501 is the sum of the first eighteen primes. There are 501 degree-8 polynomials with integer coefficients, all of whose roots are in the unit disk. There are 501 ways of partitioning the digits from 0 to 9 into two sets, each of which contains at least two digits, and 501 ways of partitioning a set of five elements into any number of ordered sequences. 501 is also a figurate number based on the 5-orthoplex or 5-dimensional cross polytope. In the gematria of Eleazar of Worms, the Hebrew words "temunah" (image) and "parsuf 'adam" (human face) both had the numerological value of 501. Eleazar used this equivalence to argue that, in several Biblical passages, God appeared to His prophets in the form of a human face. Other uses 501 is commonly used to refer to people deported from Australia under section 501 of the 1958 Migration Act. References Integers
https://en.wikipedia.org/wiki/Endocast
An endocast is the internal cast of a hollow object, often referring to the cranial vault in the study of brain development in humans and other organisms. Endocasts can be artificially made for examining the properties of a hollow, inaccessible space, or they may occur naturally through fossilization. Cranial endocasts Artificial casts Endocasts of the inside of the neurocranium (braincase) are often made in paleoanthropology to study brain structures and hemispheric specialization in extinct human ancestors. While an endocast can not directly reveal brain structure, it can allow scientists to gauge the size of areas of the brain situated close to the surface, notably Wernicke's and Broca's areas, responsible for interpreting and producing speech. Traditionally, the casting material is some form of rubber or rubber-like material. The openings to the brain cavity, except for the foramen magnum, are closed, and the liquid rubber is slushed around in the empty cranial vault and then left to set. The resulting hollow sphere can then be drained of air like a balloon and pulled out through the foramen magnum. Rubber endocasts like these were the standard practice until the end of the 20th century and are still used in some fields. However, scientists are increasingly utilizing computerized tomography scanning technology to create digital endocasts in order to avoid risking damage to valuable specimens. Natural endocasts Natural cranial endocasts are also known. The famous Taung Child, the first Australopithecus found, consists of a natural endocast connected to the facial portion of the skull. It was the shape of the brain that allowed Raymond Dart to conclude that the fossil was that of a human relative rather than an extinct ape. Mammal endocasts are particularly useful, as they resemble the fresh brain with the dura mater in place. Such "fossil brains" are known from several hundred different mammal species. More than a hundred natural casts of the cranial vaul
https://en.wikipedia.org/wiki/Cubic%20honeycomb
The cubic honeycomb or cubic cellulation is the only proper regular space-filling tessellation (or honeycomb) in Euclidean 3-space made up of cubic cells. It has 4 cubes around every edge, and 8 cubes around each vertex. Its vertex figure is a regular octahedron. It is a self-dual tessellation with Schläfli symbol {4,3,4}. John Horton Conway called this honeycomb a cubille. Related honeycombs It is part of a multidimensional family of hypercube honeycombs, with Schläfli symbols of the form {4,3,...,3,4}, starting with the square tiling, {4,4} in the plane. It is one of 28 uniform honeycombs using convex uniform polyhedral cells. Isometries of simple cubic lattices Simple cubic lattices can be distorted into lower symmetries, represented by lower crystal systems: Uniform colorings There is a large number of uniform colorings, derived from different symmetries. These include: Projections The cubic honeycomb can be orthogonally projected into the euclidean plane with various symmetry arrangements. The highest (hexagonal) symmetry form projects into a triangular tiling. A square symmetry projection forms a square tiling. Related polytopes and honeycombs It is related to the regular 4-polytope tesseract, Schläfli symbol {4,3,3}, which exists in 4-space, and only has 3 cubes around each edge. It's also related to the order-5 cubic honeycomb, Schläfli symbol {4,3,5}, of hyperbolic space with 5 cubes around each edge. It is in a sequence of polychora and honeycombs with octahedral vertex figures. It in a sequence of regular polytopes and honeycombs with cubic cells. Related polytopes The cubic honeycomb has a lower symmetry as a runcinated cubic honeycomb, with two sizes of cubes. A double symmetry construction can be constructed by placing a small cube into each large cube, resulting in a nonuniform honeycomb with cubes, square prisms, and rectangular trapezoprisms (a cube with D2d symmetry). Its vertex figure is a triangular pyramid with its lateral faces aug
https://en.wikipedia.org/wiki/Absolute%20OpenBSD
Absolute OpenBSD: Unix for the Practical Paranoid is a comprehensive guide to the OpenBSD operating system by Michael W. Lucas, author of Absolute FreeBSD and Cisco Routers for the Desperate. The book assumes basic knowledge of the design, commands, and user permissions of Unix-like operating systems. The book contains troubleshooting tips, background information on the system and its commands, and examples to assist with learning. 1st edition The first edition was released in June 2003. Some of the information in the book became outdated when OpenBSD 3.4 was released only a few months later. 2nd edition The second edition was released in April 2013. Peter N. M. Hansteen, author of The Book of PF, was the technical reviewer. External links References OpenBSD 2003 non-fiction books No Starch Press books Books about free software Books on operating systems
https://en.wikipedia.org/wiki/Program%20comprehension
Program comprehension (also program understanding or [source] code comprehension) is a domain of computer science concerned with the ways software engineers maintain existing source code. The cognitive and other processes involved are identified and studied. The results are used to develop tools and training. Software maintenance tasks have five categories: adaptive maintenance, corrective maintenance, perfective maintenance, code reuse, and code leverage. Theories of program comprehension Titles of works on program comprehension include Using a behavioral theory of program comprehension in software engineering The concept assignment problem in program understanding, and Program Comprehension During Software Maintenance and Evolution. Computer scientists pioneering program comprehension include Ruven Brooks, Ted J. Biggerstaff, and Anneliese von Mayrhauser. See also Program analysis (computer science) Program slicing Computer programming
https://en.wikipedia.org/wiki/Apple%20Open%20Directory
Apple Open Directory is the LDAP directory service model implementation from Apple Inc. A directory service is software which stores and organizes information about a computer network's users and network resources and which allows network administrators to manage users' access to the resources. In the context of macOS Server, Open Directory describes a shared LDAPv3 directory domain and a corresponding authentication model composed of Apple Password Server and Kerberos 5 tied together using a modular Directory Services system. Apple Open Directory is a fork of OpenLDAP. The term Open Directory can also be used to describe the entire directory services framework used by macOS and macOS Server. In this context, it describes the role of a macOS or macOS Server system when it is connected to an existing directory domain, in which context it is sometimes referred to as Directory Services. Apple, Inc. also publishes an API called the OpenDirectory framework, permitting macOS applications to interrogate and edit the Open Directory data. With the release of Mac OS X Leopard (10.5), Apple chose to move away from using the NetInfo directory service (originally found in NeXTSTEP and OPENSTEP), which had been used by default for all local accounts and groups in every release of Mac OS X from 10.0 to 10.4. Mac OS X 10.5 now uses Directory Services and its plugins for all directory information. Local accounts are now registered in the Local Plugin, which uses XML property list (plist) files stored in /var/db/dslocal/nodes/Default/ as its backing storage. Implementation in macOS Server macOS Server can host an Open Directory domain when configured as an Open Directory Master. In addition to its local directory, this OpenLDAP-based LDAPv3 domain is designed to store centralized management data, user, group, and computer accounts, which other systems can access. The directory domain is paired with the Open Directory Password Server and, optionally, a Kerberos realm. Either p
https://en.wikipedia.org/wiki/Pupillary%20distance
Pupillary distance (PD), more correctly known as interpupillary distance (IPD) is the distance in millimeters between the centers of each pupil. Interpupillary Distance Classifications Distance PD is the separation between the visual axes of the eyes in their primary position, as the subject fixates on an infinitely distant object. Near PD is the separation between the visual axes of the eyes, at the plane of the spectacle lenses, as the subject fixates on a near object at the intended working distance. Intermediate PD is at a specified plane in between distance and near. Monocular PD refers to the distance between either the right or left visual axis to the bridge of the nose, which may be slightly different for each eye due to anatomical variations but always sums up to the binocular PD. For people who need to wear prescription glasses, consideration of monocular PD measurement by an optician helps to ensure that the lenses will be located in the optimum position. Whilst PD is an optometric term used to specify prescription eyewear, IPD is more critical for the design of binocular viewing systems, where both eye pupils need to be positioned within the exit pupils of the viewing system. These viewing systems include binocular microscopes, night vision devices or goggles (NVGs), and head-mounted displays (HMDs). IPD data are used in the design of such systems to specify the range of lateral adjustment of the exit optics or eyepieces. IPD is also used to describe the distance between the exit pupils or optical axes of a binocular optical system. The distinction with IPD is the importance of anthropometric databases and the design of binocular viewing devices with an IPD adjustment that will fit a targeted population of users. Because instruments such as binoculars and microscopes can be used by different people, the distance between the eye pieces is usually made adjustable to account for IPD. In some applications, when IPD is not correctly set, it can lead to an
https://en.wikipedia.org/wiki/Dilution%20%28equation%29
Dilution is the process of decreasing the concentration of a solute in a solution, usually simply by mixing with more solvent, such as adding more water to the solution. To dilute a solution means to add more solvent without the addition of more solute. The resulting solution is thoroughly mixed so as to ensure that all parts of the solution are identical. The same direct relationship applies to gases and vapors diluted in air, for example, although thorough mixing of gases and vapors may not be as easily accomplished. For example, if there are 10 grams of salt (the solute) dissolved in 1 litre of water (the solvent), this solution has a certain salt concentration (molarity). If one adds 1 litre of water to this solution, the salt concentration is reduced. The diluted solution still contains 10 grams of salt (0.171 moles of NaCl). Mathematically this relationship can be shown by the equation \( c_1 V_1 = c_2 V_2 \), where c1 = initial concentration or molarity, V1 = initial volume, c2 = final concentration or molarity, V2 = final volume. Basic room purge equation The basic room purge equation is used in industrial hygiene. It determines the time required to reduce a known vapor concentration existing in a closed space to a lower vapor concentration. The equation can only be applied when the purged volume of vapor or gas is replaced with "clean" air or gas. For example, the equation can be used to calculate the time required at a certain ventilation rate to reduce a high carbon monoxide concentration in a room. The equation is \( D_t = \frac{V}{Q}\,\ln\!\left(\frac{C_{\text{initial}}}{C_{\text{final}}}\right) \), where Dt = time required; the unit of time used is the same as is used for Q, V = air or gas volume of the closed space or room in cubic feet, cubic metres or litres, Q = ventilation rate into or out of the room in cubic feet per minute, cubic metres per hour or litres per second, Cinitial = initial concentration of a vapor inside the room measured in ppm, Cfinal = final reduced concentration of the vapor inside the room in ppm. Diluti
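A short worked example (added for illustration; the numbers are hypothetical) applying the two relationships above in Python:

```python
import math

def diluted_concentration(c1, v1, v2_added):
    """Dilution equation c1*V1 = c2*V2, with V2 = V1 + added solvent volume."""
    v2 = v1 + v2_added
    return c1 * v1 / v2

def purge_time(volume, vent_rate, c_initial, c_final):
    """Basic room purge equation: Dt = (V/Q) * ln(C_initial / C_final)."""
    return (volume / vent_rate) * math.log(c_initial / c_final)

# Salt example from the text: 0.171 mol in 1 L, then 1 L of water added.
print(diluted_concentration(c1=0.171, v1=1.0, v2_added=1.0))   # ~0.0855 mol/L

# Hypothetical CO purge: 100 m3 room, 20 m3/min ventilation, 500 ppm -> 35 ppm.
print(purge_time(volume=100.0, vent_rate=20.0,
                 c_initial=500.0, c_final=35.0))               # time in minutes
```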
https://en.wikipedia.org/wiki/Outcome%20%28game%20theory%29
In game theory, the outcome of a game is the ultimate result of a strategic interaction with one or more people, dependent on the choices made by all participants in a certain exchange. It represents the final payoff resulting from a set of actions that individuals can take within the context of the game. Outcomes are pivotal in determining the payoffs and expected utility for the parties involved. Game theorists commonly study how the outcome of a game is determined and what factors affect it. In game theory, a strategy is a set of actions that a player can take in response to the actions of others. Each player's strategy is based on their expectation of what the other players are likely to do, often explained in terms of probability. Outcomes depend on the combination of strategies chosen by the players involved and can be represented in a number of ways; one common way is a payoff matrix showing the individual payoffs for each player under each combination of strategies, as seen in the payoff matrix example below. Outcomes can be expressed in terms of monetary value or utility to a specific person. Additionally, a game tree can be used to deduce the actions leading to an outcome by displaying possible sequences of actions and the outcomes associated with them. A commonly used concept in relation to outcomes is the Nash equilibrium: a combination of strategies in which no player can improve their payoff or outcome by unilaterally changing their strategy, given the strategies of the other players. In other words, in a Nash equilibrium each player is doing the best they can, given what the others are doing, to obtain the best outcome for themselves. It is important to note that not all games have a unique Nash equilibrium, and even when one exists it may not be the most desirable outcome. Additionally, the desired outcome is greatly affected by the strategies individuals choose and by their beliefs about what the other players will do under the
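As an added illustration (not from the article), the following Python sketch encodes a 2×2 payoff matrix for the Prisoner's Dilemma and enumerates its pure-strategy Nash equilibria by checking whether either player could gain from a unilateral deviation.

```python
# Payoffs are (row player, column player); strategies: 0 = cooperate, 1 = defect.
payoffs = {
    (0, 0): (3, 3), (0, 1): (0, 5),
    (1, 0): (5, 0), (1, 1): (1, 1),
}

def pure_nash_equilibria(payoffs):
    """Return strategy profiles where neither player gains by deviating alone."""
    eqs = []
    for (r, c), (u_row, u_col) in payoffs.items():
        row_best = all(u_row >= payoffs[(r2, c)][0] for r2 in (0, 1))
        col_best = all(u_col >= payoffs[(r, c2)][1] for c2 in (0, 1))
        if row_best and col_best:
            eqs.append((r, c))
    return eqs

print(pure_nash_equilibria(payoffs))   # [(1, 1)]: mutual defection,
                                       # even though (0, 0) pays both players more
```

The printed outcome shows the point made in the text: the equilibrium outcome need not be the most desirable one for the players.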
https://en.wikipedia.org/wiki/Ethanol%20precipitation
Ethanol precipitation is a method used to purify and/or concentrate RNA, DNA, and polysaccharides such as pectin and xyloglucan from aqueous solutions by adding ethanol as an antisolvent. DNA precipitation Theory DNA is polar due to its highly charged phosphate backbone. Its polarity makes it water-soluble (water is polar) according to the principle "like dissolves like". Because of the high polarity of water, illustrated by its high dielectric constant of 80.1 (at 20 °C), electrostatic forces between charged particles are considerably lower in aqueous solution than they are in a vacuum or in air. This relation is reflected in Coulomb's law, which can be used to calculate the force F acting on two charges q1 and q2 separated by a distance r, with the dielectric constant \(\varepsilon_r\) (also called relative static permittivity) of the medium appearing in the denominator of the equation (\(\varepsilon_0\) is the electric constant): \[ F = \frac{1}{4\pi\varepsilon_0\varepsilon_r}\,\frac{q_1 q_2}{r^2}. \] At an atomic level, the reduction in the force acting on a charge results from water molecules forming a hydration shell around it. This fact makes water a very good solvent for charged compounds like salts. The electric force which normally holds salt crystals together by way of ionic bonds is weakened in the presence of water, allowing ions to separate from the crystal and spread through the solution. The same mechanism operates in the case of negatively charged phosphate groups on a DNA backbone: even though positive ions are present in solution, the relatively weak net electrostatic force prevents them from forming stable ionic bonds with phosphates and precipitating out of solution. Ethanol is much less polar than water, with a dielectric constant of 24.3 (at 25 °C). This means that adding ethanol to the solution disrupts the screening of charges by water. If enough ethanol is added, the electrical attraction between phosphate groups and any positive ions present in solution becomes strong enough to form stable ionic bonds and precipitate the DNA. This usually happens when ethanol compo
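A numerical illustration (added; the 0.3 nm separation is a hypothetical figure): using the dielectric constants quoted above, the same pair of charges at the same separation attracts roughly 80.1/24.3 ≈ 3.3 times more strongly in ethanol than in water, which is why adding ethanol tips the balance toward ion pairing and precipitation.

```python
import math

E0 = 8.8541878128e-12     # vacuum permittivity, F/m
Q = 1.602176634e-19       # elementary charge, C
R = 3.0e-10               # hypothetical 0.3 nm ion-ion separation, m

def coulomb_force(eps_r, q1=Q, q2=Q, r=R):
    """Coulomb's law in a medium of relative permittivity eps_r."""
    return q1 * q2 / (4 * math.pi * E0 * eps_r * r**2)

f_water, f_ethanol = coulomb_force(80.1), coulomb_force(24.3)
print(f"water: {f_water:.2e} N, ethanol: {f_ethanol:.2e} N, "
      f"ratio ~{f_ethanol / f_water:.1f}x")
```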
https://en.wikipedia.org/wiki/Quantum%20neural%20network
Quantum neural networks are computational neural network models which are based on the principles of quantum mechanics. The first ideas on quantum neural computation were published independently in 1995 by Subhash Kak and Ron Chrisley, engaging with the theory of quantum mind, which posits that quantum effects play a role in cognitive function. However, typical research in quantum neural networks involves combining classical artificial neural network models (which are widely used in machine learning for the important task of pattern recognition) with the advantages of quantum information in order to develop more efficient algorithms. One important motivation for these investigations is the difficulty to train classical neural networks, especially in big data applications. The hope is that features of quantum computing such as quantum parallelism or the effects of interference and entanglement can be used as resources. Since the technological implementation of a quantum computer is still in a premature stage, such quantum neural network models are mostly theoretical proposals that await their full implementation in physical experiments. Most Quantum neural networks are developed as feed-forward networks. Similar to their classical counterparts, this structure intakes input from one layer of qubits, and passes that input onto another layer of qubits. This layer of qubits evaluates this information and passes on the output to the next layer. Eventually the path leads to the final layer of qubits. The layers do not have to be of the same width, meaning they don't have to have the same number of qubits as the layer before or after it. This structure is trained on which path to take similar to classical artificial neural networks. This is discussed in a lower section. Quantum neural networks refer to three different categories: Quantum computer with classical data, classical computer with quantum data, and quantum computer with quantum data. Examples Quantum neural ne
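The feed-forward picture above can be illustrated with a tiny classical simulation (an added sketch, not a model proposed in the article): each "layer" applies parametrized single-qubit rotations followed by an entangling gate, and the network's output is read out as an expectation value.

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the Y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def layer(state, params):
    """One two-qubit 'layer': parametrized rotations, then an entangling CNOT."""
    return CNOT @ (np.kron(ry(params[0]), ry(params[1])) @ state)

state = np.zeros(4)
state[0] = 1.0                                  # input state |00>
for params in [(0.3, 1.1), (0.7, -0.4)]:        # two "trainable" layers
    state = layer(state, params)

Z0 = np.kron(np.diag([1.0, -1.0]), np.eye(2))   # Pauli-Z on the first qubit
output = float(state @ Z0 @ state)              # network output in [-1, 1]
print(output)
```

In an actual quantum neural network the parameters would be adjusted by a training loop, with expectation values estimated from repeated measurements rather than read off the explicit state vector used in this simulation.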
https://en.wikipedia.org/wiki/Nameplate
A nameplate identifies and displays a person or product's name. Nameplates are usually shaped as rectangles but are also seen in other shapes, sometimes taking on the shape of someone's written name. Nameplates primarily serve an informative function (as in an office environment, where nameplates mounted on doors or walls identify employees' spaces) or a commercial role (as in a retail environment, where nameplates are mounted on products to identify the brand). Whereas name tags tend to be worn on uniforms or clothing, nameplates tend to be mounted onto an object (e.g. cars, amplification devices) or physical space (e.g. doors, walls, or desktops). Nameplates are also distinct from name plaques. Plaques have larger dimensions and aim to communicate more information than a name and title. Office nameplates Office nameplates generally are made out of plastic, wood, metals (stainless steel, brass, aluminium, zinc, copper) and usually contain one or two lines of text. The standard format for an office nameplate is to display a person's name on the first line and a person's job title on the second line. It is common for organizations to request nameplates that exclude the job title. The primary reasons for excluding job titles are to extend the longevity of a nameplate and to promote a culture of meritocracy, where the strength of one's thoughts are not connected to one's job title. Nameplates without job titles have longer lives because someone can reuse the same nameplate after changing job titles. It is rare for an office nameplate to contain three or more lines of text. Although office nameplates range in size, the most popular nameplate size is . Office nameplates typically are made out of plastic. This is because plastic is an inexpensive material relative to wood and metal. More expensive nameplates can be manufactured out of bronze. To promote consistency, organizations tend to use the same style nameplate for all employees. This helps to achiev
https://en.wikipedia.org/wiki/Rose%20symbolism
Various folk cultures and traditions assign symbolic meaning to the rose, though these are seldom understood in-depth. Examples of deeper meanings lie within the language of flowers, and how a rose may have a different meaning in arrangements. Examples of common meanings of different coloured roses are: true love (red), mystery (blue), innocence or purity (white), death (black), friendship (yellow), and passion (orange). In religion Greco-Roman religion In ancient Greece, the rose was closely associated with the goddess Aphrodite. In the Iliad, Aphrodite protects the body of Hector using the "immortal oil of the rose" and the archaic Greek lyric poet Ibycus praises a beautiful youth saying that Aphrodite nursed him "among rose blossoms". The second-century AD Greek travel writer Pausanias associates the rose with the story of Adonis Book Eleven of the ancient Roman novel The Golden Ass by Apuleius contains a scene in which the goddess Isis, who is identified with Venus, instructs the main character, Lucius, who has been transformed into a donkey, to eat rose petals from a crown of roses worn by a priest as part of a religious procession in order to regain his humanity. Judaism In the Song of Songs 2:1-2, the Jewish people are compared with a rose, remaining beautiful amongst thorns, although some translations instead refer to a "lily among thorns." The Zohar uses a "thirteen-petalled rose" as a symbol for the thirteen attributes of Divine Mercy named in Exodus 34:6-7. The rose and rosettes were also used to symbolize royalty and Israel, and were used in wreaths for the bridegroom at weddings in Biblical times. Christianity Following the Christianization of the Roman Empire, the rose became identified with the Virgin Mary. The rose symbol eventually led to the creation of the rosary and other devotional prayers in Christianity. Ever since the 1400s, the Franciscans have had a Crown Rosary of the Seven Joys of the Blessed Virgin Mary. In the 1400s and 1500s, t
https://en.wikipedia.org/wiki/Shadowrun%20%282007%20video%20game%29
Shadowrun is a first-person shooter video game, developed by FASA Studio for Xbox 360 and Windows Vista. The game features a buying system which is inspired by the game Counter-Strike. The game is also inspired by the role-playing game of the same name. Gameplay Shadowrun's multiplayer consists wholly of a first person/third person deathmatch. Players choose various races with unique abilities. Additionally, a currency system dictates in-match upgrades, with each race given a different amount of starting capital. The four playable races are Human, Elf, Dwarf, and Troll. Magic is a key component to this game. Players can heal, damage, teleport, and summon to gain advantages over others. Additionally, gadgets, or "tech", are obtainable through currency. Currency also allows players to purchase new weapons. Shadowrun features no campaign mode. If a user is without online services, they can set up bot matches and hone their skills. Plot According to the ancient Mayan calendar, magic is cyclical, leaving the world and returning every 5000 years. Magic enters the world, grows, peaks, and eventually retreats. When magic was last at its peak, a powerful Ziggurat was constructed near what would be modern day Santos, Brazil. The purpose of this construct is shrouded in the mists of history. Even the Chancela family, who secretly maintained the ziggurat for thousands of years, did not know its purpose, nor did they know the purpose of the strange artifact somehow connected to the ziggurat. In the millennia since its construction, the ziggurat was eventually buried, hidden in the side of a mountain. Then, on December 24, 2012, magic began returning to the world, leaving change and confusion in its wake. The years after magic's return wrought change on a global scale. RNA Global, a powerful multinational corporation, sent a research team to Santos, Brazil. Their job was to explore and research the strange energies coming from a mountainside along one edge of Santos. Armed
https://en.wikipedia.org/wiki/Failure%20mode%2C%20effects%2C%20and%20criticality%20analysis
Failure mode effects and criticality analysis (FMECA) is an extension of failure mode and effects analysis (FMEA). FMEA is a bottom-up, inductive analytical method which may be performed at either the functional or piece-part level. FMECA extends FMEA by including a criticality analysis, which is used to chart the probability of failure modes against the severity of their consequences. The result highlights failure modes with relatively high probability and severity of consequences, allowing remedial effort to be directed where it will produce the greatest value. FMECA tends to be preferred over FMEA in space and North Atlantic Treaty Organization (NATO) military applications, while various forms of FMEA predominate in other industries. History FMECA was originally developed in the 1940s by the U.S military, which published MIL–P–1629 in 1949. By the early 1960s, contractors for the U.S. National Aeronautics and Space Administration (NASA) were using variations of FMECA under a variety of names. In 1966 NASA released its FMECA procedure for use on the Apollo program. FMECA was subsequently used on other NASA programs including Viking, Voyager, Magellan, and Galileo. Possibly because MIL–P–1629 was replaced by MIL–STD–1629 (SHIPS) in 1974, development of FMECA is sometimes incorrectly attributed to NASA. At the same time as the space program developments, use of FMEA and FMECA was already spreading to civil aviation. In 1967 the Society for Automotive Engineers released the first civil publication to address FMECA. The civil aviation industry now tends to use a combination of FMEA and Fault Tree Analysis in accordance with SAE ARP4761 instead of FMECA, though some helicopter manufacturers continue to use FMECA for civil rotorcraft. Ford Motor Company began using FMEA in the 1970s after problems experienced with its Pinto model, and by the 1980s FMEA was gaining broad use in the automotive industry. In Europe, the International Electrotechnical Commission publi
https://en.wikipedia.org/wiki/Features%20new%20to%20Windows%20Vista
Compared with previous versions of Microsoft Windows, features new to Windows Vista are very numerous, covering most aspects of the operating system, including additional management features, new aspects of security and safety, new I/O technologies, new networking features, and new technical features. Windows Vista also removed some others. Windows Shell and user interface Windows Aero Windows Vista introduces a redesigned user interface and visual style named Windows Aero (a backronym for Authentic, Energetic, Reflective, and Open) that is intended to be aesthetically pleasing and cleaner than previous versions of Windows, with features such as glass translucencies, light effects, live thumbnails, and window animations enabled by the new Desktop Window Manager. Windows Aero also encompasses a new default typeface (Segoe UI)—set at a larger size than the default font of previous versions of Windows—new mouse cursors and new sounds, new dialog box, pop-up notification, and wizard interfaces, and revisions to the tone and phrasing of messages throughout the operating system. Windows Aero is available in the Home Premium, Business, Enterprise, and Ultimate Windows Vista editions. All editions of Windows Vista include a new "Windows Vista Basic" theme with updated visuals; it is equivalent to Luna of Windows XP in that it does not rely on a compositing window manager. Glass translucencies, light effects, live thumbnails, or window animations of Windows Aero are not available. Windows Vista Home Basic additionally includes a unique "Windows Vista Standard" theme, which has the same hardware requirements of Windows Aero, but it does not include glass translucency or live thumbnail features or effects. Start menu The Start menu has undergone a significant revision in Windows Vista, and it is updated in accordance with Windows Aero design principles, featuring glass translucencies and subtle light effects while Windows Aero is enabled. The current user's profile p
https://en.wikipedia.org/wiki/Turing%27s%20proof
Turing's proof is a proof by Alan Turing, first published in January 1937 with the title "On Computable Numbers, with an Application to the Entscheidungsproblem". It was the second proof (after Church's theorem) of the negation of Hilbert's Entscheidungsproblem; that is, the conjecture that some purely mathematical yes–no questions can never be answered by computation; more technically, that some decision problems are "undecidable" in the sense that there is no single algorithm that infallibly gives a correct "yes" or "no" answer to each instance of the problem. In Turing's own words: "what I shall prove is quite different from the well-known results of Gödel ... I shall now show that there is no general method which tells whether a given formula U is provable in K [Principia Mathematica]". Turing followed this proof with two others. The second and third both rely on the first. All rely on his development of typewriter-like "computing machines" that obey a simple set of rules and his subsequent development of a "universal computing machine". Summary of the proofs In his proof that the Entscheidungsproblem can have no solution, Turing proceeded from two proofs that were to lead to his final proof. His first theorem is most relevant to the halting problem, the second is more relevant to Rice's theorem. First proof: that no "computing machine" exists that can decide whether or not an arbitrary "computing machine" (as represented by an integer 1, 2, 3, . . .) is "circle-free" (i.e. goes on printing its number in binary ad infinitum): "...we have no general process for doing this in a finite number of steps" (p. 132, ibid.). Turing's proof, although it seems to use the "diagonal process", in fact shows that his machine (called H) cannot calculate its own number, let alone the entire diagonal number (Cantor's diagonal argument): "The fallacy in the argument lies in the assumption that B [the diagonal number] is computable" The proof does not require much mathematics. Second proof: This one is perhaps more f
https://en.wikipedia.org/wiki/Lost%20Planet%3A%20Extreme%20Condition
Lost Planet: Extreme Condition is a third-person shooter video game developed and published by Capcom for Xbox 360, Microsoft Windows and PlayStation 3. The game was released in Japan in December 2006 and worldwide in January 2007. Originally intended to be an Xbox 360 exclusive, it was later ported and released for Microsoft Windows in June 2007 and PlayStation 3 in February 2008. Gameplay The game is played through a third person over-the-shoulder view. Players are allowed to switch between first-person and third-person at any moment. Players either travel on foot or ride various types of mechanized suits called Vital Suits (VSs). VSs carry heavy weapons such as chain guns and rocket launchers. They can pick up weapons lying on the ground and fire multiple weapons at once. On foot, players are able to use a grappling hook to pull themselves up to normally hard-to-reach places, or to hook onto a VS and hijack it. Driving VSs and using certain weapons requires thermal energy. Also, the planet's cold temperature causes the characters' thermal energy level to continually decrease. Players can replenish their thermal energy level by defeating enemies or activating data posts. Data posts also allow players to use their navigational radars to see incoming enemies. Each of the 11 levels is accompanied by a boss, which can be either a VS or a large Akrid. Multiplayer Online multiplayer versus also requires players to monitor their thermal energy level, but here, reaching zero does not cause death. Instead, the characters cannot use VSs or fire the weapons which require thermal energy. Online multiplayer versus consists of four modes, called Elimination, Team Elimination, Post Grab, and Fugitive. Players score points by killing other players and activating posts, and they lose points for being killed or committing suicide. Post grab is a mode where players on opposite teams compete to capture as many posts as possible before the set time runs out. Team Elimination is a 1
https://en.wikipedia.org/wiki/Comparison%20of%20operating%20system%20kernels
A kernel is a component of a computer operating system. A comparison of system kernels can provide insight into the design and architectural choices made by the developers of particular operating systems. Comparison criteria The following tables compare general and technical information for a number of widely used and currently available operating system kernels. Please see the individual products' articles for further information. Even though there are a large number and variety of available Linux distributions, all of these kernels are grouped under a single entry in these tables, due to the differences among them being of the patch level. See comparison of Linux distributions for a detailed comparison. Linux distributions that have highly modified kernels — for example, real-time computing kernels — should be listed separately. There are also a wide variety of minor BSD operating systems, many of which can be found at comparison of BSD operating systems. The tables specifically do not include subjective viewpoints on the merits of each kernel or operating system. Feature overview The major contemporary general-purpose kernels are shown in comparison. Only an overview of the technical features is detailed. Transport protocol support In-kernel security In-kernel virtualization In-kernel server support Binary format support A comparison of OS support for different binary formats (executables): File system support Physical file systems: Networked file system support Supported CPU instruction sets and microarchitectures Supported GPU processors Supported kernel execution environment This table indicates, for each kernel, what operating systems' executable images and device drivers can be run by that kernel. Supported cipher algorithms This may be usable on some situations like file system encrypting. Supported compression algorithms This may be usable on some situations like compression file system. Supported message digest algorithms Support
https://en.wikipedia.org/wiki/Linux%20kernel%20oops
In computing, an oops is a serious but non-fatal error in the Linux kernel. An oops may precede a kernel panic, but it may also allow continued operation with compromised reliability. The term does not stand for anything, other than that it is a simple mistake. Functioning When the kernel detects a problem, it kills any offending processes and prints an oops message, which Linux kernel engineers can use in debugging the condition that created the oops and fixing the underlying programming error. After a system has experienced an oops, some internal resources may no longer be operational. Thus, even if the system appears to work correctly, undesirable side effects may have resulted from the active task being killed. A kernel oops often leads to a kernel panic when the system attempts to use resources that have been lost. Some kernels are configured to panic when many oopses (10,000 by default) have occurred. This oops limit exists because, for example, an attacker could otherwise repeatedly trigger an oops and an associated resource leak until a counter overflows, allowing further exploitation. The official Linux kernel documentation regarding oops messages resides in a file in the kernel source tree. Some logger configurations may affect the ability to collect oops messages. The kerneloops software can collect and submit kernel oopses to a repository such as the www.kerneloops.org website, which provides statistics and public access to reported oopses. To a person not familiar with the technical details of computers and operating systems, an oops message might look confusing. Unlike other operating systems such as Windows or macOS, Linux chooses to present details explaining the crash of the kernel rather than display a simplified, user-friendly message, such as the BSoD on Windows. A simplified crash screen has been proposed a few times; however, none is currently in development. See also kdump (Linux) Linux kernel's crash dump mechanism, which internally u
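As a concrete illustration of the behaviour described above, the following sketch reads the sysctls that control how a Linux system reacts to an oops: kernel.panic_on_oops (escalate every oops to a panic immediately) and kernel.oops_limit (panic after a given number of oopses). It assumes a Linux system with the usual /proc/sys layout; the read_sysctl helper is illustrative, and kernel.oops_limit only exists on relatively recent kernels, so its absence is handled.

```python
# Minimal sketch: read the sysctls that govern how a Linux system reacts to
# an oops. Assumes a Linux system; kernel.oops_limit was added much later
# than kernel.panic_on_oops, so older kernels will not expose it.

from pathlib import Path

def read_sysctl(name):
    """Return the value of a kernel sysctl via /proc/sys, or None if absent."""
    path = Path("/proc/sys") / name.replace(".", "/")
    try:
        return path.read_text().strip()
    except FileNotFoundError:
        return None

if __name__ == "__main__":
    panic_on_oops = read_sysctl("kernel.panic_on_oops")
    oops_limit = read_sysctl("kernel.oops_limit")
    # A value of 1 means every oops is immediately escalated to a panic.
    print("kernel.panic_on_oops =", panic_on_oops)
    # Number of oopses after which the kernel panics, if supported.
    print("kernel.oops_limit    =", oops_limit or "not supported by this kernel")
```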
https://en.wikipedia.org/wiki/Frederic%20Charles%20Dreyer
Admiral Sir Frederic Charles Dreyer, (8 January 1878 – 11 December 1956) was an officer of the Royal Navy. A gunnery expert, he developed a fire control system for British warships, and served as flag captain to Admiral Sir John Jellicoe at the Battle of Jutland. He retired with the rank of admiral in 1943, having served through two world wars and having already retired once. Background and early life Frederic Dreyer was born on 8 January 1878 in the Irish town of Parsonstown (now Birr) in King's County (now County Offaly), the second son of the Danish-born astronomer John Louis Emil Dreyer who was director of the Armagh Observatory. Educated at The Royal School, Armagh, in 1891 Dreyer joined the Royal Navy and entered the Royal Naval College, Dartmouth. Royal Navy career Early years At Dartmouth Dreyer performed well in his examinations and was placed fifth in his term. He then served as a midshipman in HMS Anson (1893–1896) and HMS Barfleur (1896–1897). In nearly all his subsequent examinations for promotions he obtained Class 1 certificates—for sub-lieutenant, lieutenant (July 1898, while aboard HMS Repulse) and then gunnery lieutenant. In 1900 he authored a book called How to Get a First Class in Seamanship. He came first in his class of three in the advanced course for gunnery and torpedo lieutenants at the Royal Naval College, Greenwich in 1901, after which he was posted to the staff of the gunnery school at Sheerness. He served as gunnery officer to the cruiser HMS Scylla for annual manoeuvres during summer 1902, then was lent to the protected cruiser HMS Hawke for a trooping trip to the Mediterranean (August–September 1902). He was appointed to the battleship HMS Hood in the Mediterranean from September 1902, but the ship's rudder had been damaged and the ship proceeded home to be repaired and paid off at Plymouth. Dreyer was reappointed to the Hawke on 13 January 1903 for another trooping voyage to Malta, and when she was paid off in March, he was appo
https://en.wikipedia.org/wiki/Close-ratio%20transmission
A close-ratio transmission is a motor vehicle transmission in which the differences between successive gear ratios are smaller than average. Such transmissions are most often used on sports cars in order to keep the engine in the power band. There is no industry standard as to what constitutes a close-ratio transmission; a transmission that one manufacturer terms close-ratio may not necessarily be considered close-ratio by another manufacturer. Generally speaking, the more gears a transmission has, the closer together they are. A continuously variable transmission has a near-infinite "number" of gear ratios, which implies infinitely close spacing between gears. However, because it has no discrete gear ratios, it would not be considered a close-ratio transmission. Comparison with ordinary transmission This table compares the ratios of three Porsche 911 vehicles from 1967 to 1971, the first being the standard 901/75 transmission, the second being the 901/76 transmission denoted "For hill climbs", and the third being the 901/79 transmission denoted "Nürburgring ratios". Mathematically, this closeness can be represented by the average spacing between gears, i.e. the geometric mean of the ratios between successive gears. For the standard (series) transmission above, each successive gear's ratio is on average 75% of that of the preceding gear (e.g. (0.82 / 2.64)^(1/4) ≈ 0.747). In the Hill Climb transmission each successive gear ratio is on average 81% of the preceding one, and in the Nürburgring transmission 77%. Thus, each of the Hill Climb transmission's gears is numerically "closer" to the preceding gear than in the standard or Nürburgring transmission, making it a close-ratio transmission. There is no specific figure that is used to denote whether the steps between gears constitute a normal or close-ratio transmission. Often, manufacturers use this term when offering a standard manual transmission and an optional, sportier transmission, one with closer ratios than the other, such a
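The spacing figure quoted above is straightforward to reproduce. The sketch below computes the geometric-mean step between successive gears from a gearbox's bottom ratio, top ratio and gear count; the average_step helper is illustrative, the 2.64 and 0.82 values are the ones used in the article's own calculation, and the gearbox is assumed to be a five-speed (four steps), as the exponent 1/4 implies. The 81% and 77% figures for the other two transmissions are taken from the prose rather than recomputed, since their full ratio sets are not reproduced here.

```python
# Worked example of the "average spacing" calculation described above: the
# geometric-mean step between gears is (top_gear / bottom_gear) ** (1 / steps),
# where steps is the number of gear changes (gears minus one).

def average_step(bottom_gear: float, top_gear: float, num_gears: int) -> float:
    """Geometric mean of the ratio between each gear and the one before it."""
    steps = num_gears - 1
    return (top_gear / bottom_gear) ** (1 / steps)

if __name__ == "__main__":
    # Standard transmission, assuming a five-speed with 2.64 in first
    # and 0.82 in top gear, as the article's own figures imply.
    step = average_step(2.64, 0.82, 5)
    print(f"standard: each gear is on average {step:.1%} of the previous one")  # ~74.7%
    # A larger percentage (e.g. the hill-climb box's 81%) means smaller steps
    # between gears, i.e. a "closer" ratio set.
```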