source
stringlengths
31
203
text
stringlengths
28
2k
https://en.wikipedia.org/wiki/Subthreshold%20conduction
Subthreshold conduction or subthreshold leakage or subthreshold drain current is the current between the source and drain of a MOSFET when the transistor is in subthreshold region, or weak-inversion region, that is, for gate-to-source voltages below the threshold voltage. The amount of subthreshold conduction in a transistor is set by its threshold voltage, which is the minimum gate voltage required to switch the device between on and off states. However, as the drain current in a MOS device varies exponentially with gate voltage, the conduction does not immediately become zero when the threshold voltage is reached. Rather it continues showing an exponential behavior with respect to the subthreshold gate voltage. When plotted against the applied gate voltage, this subthreshold drain current exhibits a log-linear slope, which is defined as the subthreshold slope. Subthreshold slope is used as a figure of merit for the switching efficiency of a transistor. In digital circuits, subthreshold conduction is generally viewed as a parasitic leakage in a state that would ideally have no conduction. In micropower analog circuits, on the other hand, weak inversion is an efficient operating region, and subthreshold is a useful transistor mode around which circuit functions are designed. Historically, in CMOS circuits, the threshold voltage has been insignificant compared to the full range of gate voltage between the ground and supply voltages, which allowed for gate voltages significantly below the threshold in the off state. As gate voltages scaled down with transistor size, the room for gate voltage swing below the threshold voltage drastically reduced, and the subthreshold conduction became a significant part of the off-state leakage of a transistor. For a technology generation with threshold voltage of 0.2 V, subthreshold conduction, along with other leakage modes, can account for 50% of total power consumption. Sub-threshold electronics Some devices exploit sub-thresh
https://en.wikipedia.org/wiki/Mathlete
A mathlete is a person who competes in mathematics competitions at any level or any age. More specifically, a Mathlete is a student who participates in any of the MATHCOUNTS programs, as Mathlete is a registered trademark of the MATHCOUNTS Foundation in the United States. The term is a portmanteau of the words mathematics and athlete. Top Mathletes from MATHCOUNTS often go on to compete in the AIME, USAMO, and ARML competitions in the United States. Those in other countries generally participate in national olympiads to qualify for the International Mathematical Olympiad. Participants in World Math Day also are commonly referred to as mathletes. Mathletic competitions The Putnam Exam: The William Lowell Putnam Competition is the preeminent undergraduate level mathletic competition in the United States. Administered by the Mathematical Association of America, students compete as individuals and as teams (as chosen by their Institution) for scholarships and team prize money. The exam is administered on the first saturday in December. Mathletic off-season training The academic off-season (traditionally referred to as "summer") can be especially difficult on mathletes, though various training regimens have been proposed to keep mathletic ability at its peak. Publications such as the MAA's The American Mathematical Monthly and the AMS's Notices of the American Mathematical Society are widely read to maintain and hone mathematical ability. Some coaches suggest seeking research internships or grants, many of which are funded by the National Science Foundation. At higher levels, mathletes can obtain funding from host institutions to work on summer research projects. For example, the University of Delaware offers the Groups Exploring the Mathematical Sciences project (GEMS project) to first year graduate students. The students act as the principal investigator and work with an undergraduate research assistant and a faculty adviser who will oversee their summer research
https://en.wikipedia.org/wiki/Evolutionary%20radiation
An evolutionary radiation is an increase in taxonomic diversity that is caused by elevated rates of speciation, that may or may not be associated with an increase in morphological disparity. A significantly large and diverse radiation within a relatively short geologic time scale (e.g. a period or epoch) is often referred to as an explosion. Radiations may affect one clade or many, and be rapid or gradual; where they are rapid, and driven by a single lineage's adaptation to their environment, they are termed adaptive radiations. Examples Perhaps the most familiar example of an evolutionary radiation is that of placental mammals immediately after the extinction of the non-avian dinosaurs at the end of the Cretaceous, about 66 million years ago. At that time, the placental mammals were mostly small, insect-eating animals similar in size and shape to modern shrews. By the Eocene (58–37 million years ago), they had evolved into such diverse forms as bats, whales, and horses. Other familiar radiations include the Avalon Explosion, the Cambrian Explosion, the Great Ordovician Biodiversification Event, the Carboniferous-Earliest Permian Biodiversification Event, the Mesozoic–Cenozoic Radiation, the radiation of land plants after their colonisation of land, the Cretaceous radiation of angiosperms, and the diversification of insects, a radiation that has continued almost unabated since the Devonian, . Types Adaptive radiations involve an increase in a clade's speciation rate coupled with divergence of morphological features that are directly related to ecological habits; these radiations involve speciation not driven by geographic factors and occurring in sympatry; they also may be associated with the acquisition of a key trait. Nonadaptive radiations arguably encompass every type of evolutionary radiation that is not an adaptive radiation, although when a more precise mechanism is known to drive diversity, it can be useful to refer to the pattern as, e.g., a geographic r
https://en.wikipedia.org/wiki/Design%20rule%20checking
In electronic design automation, a design rule is a geometric constraint imposed on circuit board, semiconductor device, and integrated circuit (IC) designers to ensure their designs function properly, reliably, and can be produced with acceptable yield. Design rules for production are developed by process engineers based on the capability of their processes to realize design intent. Electronic design automation is used extensively to ensure that designers do not violate design rules; a process called design rule checking (DRC). DRC is a major step during physical verification signoff on the design, which also involves LVS (layout versus schematic) checks, XOR checks, ERC (electrical rule check), and antenna checks. The importance of design rules and DRC is greatest for ICs, which have micro- or nano-scale geometries; for advanced processes, some fabs also insist upon the use of more restricted rules to improve yield. Design rules Design rules are a series of parameters provided by semiconductor manufacturers that enable the designer to verify the correctness of a mask set. Design rules are specific to a particular semiconductor manufacturing process. A design rule set specifies certain geometric and connectivity restrictions to ensure sufficient margins to account for variability in semiconductor manufacturing processes, so as to ensure that most of the parts work correctly. The most basic design rules are shown in the diagram on the right. The first are single layer rules. A width rule specifies the minimum width of any shape in the design. A spacing rule specifies the minimum distance between two adjacent objects. These rules will exist for each layer of semiconductor manufacturing process, with the lowest layers having the smallest rules (typically 100 nm as of 2007) and the highest metal layers having larger rules (perhaps 400 nm as of 2007). A two layer rule specifies a relationship that must exist between two layers. For example, an enclosure rule might s
https://en.wikipedia.org/wiki/Radiation%20reduced%20hybrid
Radiation reduced hybrid is procedure in discovering the location of genetic markers relative to one another. The relative location of these markers can be combined into a physical map or a genetic map. The radiation hybrid technique begins as another way to amplify and purify DNA, the first step in any sequencing project; radiation is used to break the DNA into pieces and these pieces are incorporated into the hybrid cell; the hybrid can be grown in large quantities. One can then check for the presence of various genetic markers using PCR and linkage analysis to resolve the distance between the markers. That is, if two genetic markers are near each other, they are less likely to be separated by the DNA-breaking radiation. The technique is similar to traditional linkage analysis, where we depend on genetic recombination to calculate the distance between two genetic markers. Procedure The procedure utilizes two cell lines, neither of which can survive in toxic media, but contain genes that can resist the toxin when contained by the same cell. The cell line under study is irradiated, causing breaks in the DNA. These cells are fused with the other cell line, producing a hybrid. If the hybrid incorporates genes from both cells, it will be able to survive in the toxic media. The cells that survive can be grown in large quantities, thus amplifying the DNA that was incorporated from the irradiated cell line. One can prepare a sample of DNA from the hybrid cell line and use PCR to amplify two specific genetic markers. By running the PCR products on a gel, one can determine if both markers are in the cell line. If both markers are usually in the hybrid, we can conclude that they are near each other. References Genetics
https://en.wikipedia.org/wiki/Differential%20signalling
Differential signalling is a method for electrically transmitting information using two complementary signals. The technique sends the same electrical signal as a differential pair of signals, each in its own conductor. The pair of conductors can be wires in a twisted-pair or ribbon cable or traces on a printed circuit board. Electrically, the two conductors carry voltage signals which are equal in magnitude, but of opposite polarity. The receiving circuit responds to the difference between the two signals, which results in a signal with a magnitude twice as large. The symmetrical signals of differential signalling may be referred to as balanced, but this term is more appropriately applied to balanced circuits and balanced lines which reject common-mode interference when fed into a differential receiver. Differential signalling does not make a line balanced, nor does noise rejection in balanced circuits require differential signalling. Differential signalling is to be contrasted to single-ended signalling which drives only one conductor with signal, while the other is connected to a fixed reference voltage. Advantages Contrary to popular belief, differential signalling does not affect noise cancellation. Balanced lines with differential receivers will reject noise regardless of whether the signal is differential or single-ended, but since balanced line noise rejection requires a differential receiver anyway, differential signalling is often used on balanced lines. Some of the benefits of differential signalling include: Doubled signal voltage between the differential pair (compared to a single-ended signal of the same nominal level), giving 6 dB extra headroom. Common-mode noise between the two amps (e.g. from imperfect power supply rejection) is easily rejected by a differential receiver. Longer cable runs are possible due to this increased noise immunity and 6 dB extra headroom. At higher frequencies, the output impedance of the output amplifier can chan
https://en.wikipedia.org/wiki/String%20interning
In computer science, string interning is a method of storing only one copy of each distinct string value, which must be immutable. Interning strings makes some string processing tasks more time- or space-efficient at the cost of requiring more time when the string is created or interned. The distinct values are stored in a string intern pool. The single copy of each string is called its intern and is typically looked up by a method of the string class, for example String.intern() in Java. All compile-time constant strings in Java are automatically interned using this method. String interning is supported by some modern object-oriented programming languages, including Java, Python, PHP (since 5.4), Lua and .NET languages. Lisp, Scheme, Julia, Ruby and Smalltalk are among the languages with a symbol type that are basically interned strings. The library of the Standard ML of New Jersey contains an atom type that does the same thing. Objective-C's selectors, which are mainly used as method names, are interned strings. Objects other than strings can be interned. For example, in Java, when primitive values are boxed into a wrapper object, certain values (any boolean, any byte, any char from 0 to 127, and any short or int between −128 and 127) are interned, and any two boxing conversions of one of these values are guaranteed to result in the same object. History Lisp introduced the notion of interned strings for its symbols. Historically, the data structure used as a string intern pool was called an oblist (when it was implemented as a linked list) or an obarray (when it was implemented as an array). Modern Lisp dialects typically distinguish symbols from strings; interning a given string returns an existing symbol or creates a new one, whose name is that string. Symbols often have additional properties that strings do not (such as storage for associated values, or namespacing): the distinction is also useful to prevent accidentally comparing an interned string with
https://en.wikipedia.org/wiki/Wafer%20fabrication
Wafer fabrication is a procedure composed of many repeated sequential processes to produce complete electrical or photonic circuits on semiconductor wafers in semiconductor device fabrication process. Examples include production of radio frequency (RF) amplifiers, LEDs, optical computer components, and microprocessors for computers. Wafer fabrication is used to build components with the necessary electrical structures. The main process begins with electrical engineers designing the circuit and defining its functions, and specifying the signals, inputs/outputs and voltages needed. These electrical circuit specifications are entered into electrical circuit design software, such as SPICE, and then imported into circuit layout programs, which are similar to ones used for computer aided design. This is necessary for the layers to be defined for photomask production. The resolution of the circuits increases rapidly with each step in design, as the scale of the circuits at the start of the design process is already being measured in fractions of micrometers. Each step thus increases circuit density for a given area. The silicon wafers start out blank and pure. The circuits are built in layers in clean rooms. First, photoresist patterns are photo-masked in micrometer detail onto the wafers' surface. The wafers are then exposed to short-wave ultraviolet light and the unexposed areas are thus etched away and cleaned. Hot chemical vapors are deposited on to the desired zones and baked in high heat, which permeate the vapors into the desired zones. In some cases, ions, such as O2+ or O+, are implanted in precise patterns and at a specific depth by using RF-driven ion sources. These steps are often repeated many hundreds of times, depending on the complexity of the desired circuit and its connections. New processes to accomplish each of these steps with better resolution and in improved ways emerge every year, with the result of constantly changing technology in the wafer fa
https://en.wikipedia.org/wiki/Holos%20%28software%29
Holos was an influential OLAP (Online Analytical Processing) product of the 1990s. Developed by Holistic Systems in 1987, the product remained in use until around 2004. The core of the Holos Server was a business intelligence (BI) virtual machine. The Holos Language was a very broad language in that it covered a wide range of statements and concepts, including the reporting system, business rules, OLAP data, SQL data (using the Embedded SQL syntax within the hosting HL), device properties, analysis, forecasting, and data mining. Holos Server provided an array of different, but compatible, storage mechanisms for its multi-cube architecture: memory, disk, SQL. It was therefore the first product to provide "hybrid OLAP" (HOLAP). The Holos Client was both a design and delivery vehicle, and this made it quite large. Around about 2000, the Holos Language was made object-oriented (HL++) with a view to allowing the replacement of the Holos Client with a custom Java or VB product. However, the company were never sold on this, and so the project was abandoned. Before its demise, the Holos Server product ran under Windows NT (Intel and Alpha), VMS (VAX and Alpha), plus about 10 flavors of UNIX, and accessed over half-a-dozen different SQL databases. It was also ported to several different locales, including Japanese. Company Holistic Systems was purchased by the hardware company Seagate Technology in 1996. Along with other companies such as Crystal Services, it was used to create a new subsidiary company called Seagate Software. Only Holistic and Crystal remained, and Seagate Software was renamed to Crystal Decisions. Holistic and Crystal had very different sales models. The average sale for the Holos Product in the United States was in excess of $250,000 and was sold primarily to Fortune 500 companies by a direct sales force. The main Holos development team finally started to leave around 2000, and Crystal Decisions was finally taken over by Business Objects in 2004. Foll
https://en.wikipedia.org/wiki/Conflict%20escalation
Conflict escalation is the process by which conflicts grow in severity or scale over time. That may refer to conflicts between individuals or groups in interpersonal relationships, or it may refer to the escalation of hostilities in a political or military context. In systems theory, the process of conflict escalation is modeled by positive feedback. While the word escalation was used as early as in 1938, it was popularized during the Cold War by two important books: On Escalation (Herman Kahn, 1965) and Escalation and the Nuclear Option (Bernard Brodie, 1966). In those contexts, it especially referred to war between two major states with weapons of mass destruction during the Cold War. Conflict escalation has a tactical role in military conflict and is often formalized with explicit rules of engagement. Highly-successful military tactics exploit a particular form of conflict escalation such as by controlling an opponent's reaction time, which allows the tactician to pursue or trap his opponent. Both Napoleon Bonaparte and Heinz Guderian advocated that approach. Sun Tzu elaborated it in a more abstract form and maintained that military strategy was about minimizing escalation and diplomacy was about eliminating it. Continuum of force The United States Marine Corps' "Continuum of Force" (found in MCRP 3-02B) documents the stages of conflict escalation in combat for a typical subject: Level 1: Compliant (cooperative) The subject responds to and obeys verbal commands. He refrains from close combat. Level 2: Resistant (passive) The subject resists verbal commands but complies to commands immediately upon contact controls. He refrains from close combat. Level 3: Resistant (active) Initially, the subject physically resists commands, but he can be made to comply by compliance techniques, including come-along holds, soft-handed stunning blows, and techniques inducing pain by joint manipulation and pressure points. Level 4: Assaultive (bodily harm) The unarmed subje
https://en.wikipedia.org/wiki/Forward%20genetics
Forward genetics is a molecular genetics approach of determining the genetic basis responsible for a phenotype. Forward genetics provides an unbiased approach because it relies heavily on identifying the genes or genetic factors that cause a particular phenotype or trait of interest. This was initially done by using naturally occurring mutations or inducing mutants with radiation, chemicals, or insertional mutagenesis (e.g. transposable elements). Subsequent breeding takes place, mutant individuals are isolated, and then the gene is mapped. Forward genetics can be thought of as a counter to reverse genetics, which determines the function of a gene by analyzing the phenotypic effects of altered DNA sequences. Mutant phenotypes are often observed long before having any idea which gene is responsible, which can lead to genes being named after their mutant phenotype (e.g. Drosophila rosy gene which is named after the eye colour in mutants). Techniques used in Forward Genetics Forward genetics provides researchers with the ability to identify genetic changes caused by mutations that are responsible for individual phenotypes in organisms. There are three major steps involved with the process of forward genetics which includes: making random mutations, selecting the phenotype or trait of interest, and identifying the gene and its function. Forward genetics involves the use of several mutagenesis processes to induce DNA mutations at random which may include: Chemical mutagenesis Chemical mutagenesis is an easy tool that is used to generate a broad spectrum of mutant alleles. Chemicals like ethyl methanesulfonate (EMS) cause random point mutations particularly in G/C to A/T transitions due to guanine alkylation. These point mutations are typically loss-of-function or null alleles because they generate stop codons in the DNA sequence. These types of mutagens can be useful because they are easily applied to any organism but they were traditionally very difficult to map,
https://en.wikipedia.org/wiki/Spherical%20space%20form%20conjecture
In geometric topology, the spherical space form conjecture (now a theorem) states that a finite group acting on the 3-sphere is conjugate to a group of isometries of the 3-sphere. History The conjecture was posed by Heinz Hopf in 1926 after determining the fundamental groups of three-dimensional spherical space forms as a generalization of the Poincaré conjecture to the non-simply connected case. Status The conjecture is implied by Thurston's geometrization conjecture, which was proven by Grigori Perelman in 2003. The conjecture was independently proven for groups whose actions have fixed points—this special case is known as the Smith conjecture. It is also proven for various groups acting without fixed points, such as cyclic groups whose orders are a power of two (George Livesay, Robert Myers) and cyclic groups of order 3 (J. Hyam Rubinstein). See also Killing–Hopf theorem References Conjectures that have been proved Geometric topology
https://en.wikipedia.org/wiki/Vis%20viva
Vis viva (from the Latin for "living force") is a historical term used to describe a quantity similar to kinetic energy in an early formulation of the principle of conservation of energy. Overview Proposed by Gottfried Leibniz over the period 1676–1689, the theory was controversial as it seemed to oppose the theory of conservation of quantity of motion advocated by René Descartes. Descartes' quantity of motion was different from momentum, but Newton defined the quantity of motion as the conjunction of the quantity of matter and velocity in Definition II of his Principia. In Definition III, he defined the force that resists a change in motion as the vis inertia of Descartes. His Third Law of Motion is also equivalent to the principle of conservation of momentum. Leibniz accepted the principle of conservation of momentum, but rejected the Cartesian version of it. The difference between these ideas was whether the quantity of motion was simply related to a body's resistance to a change in velocity (vis inertia) or whether a body's amount of force due to its motion (vis viva) was related to the square of its velocity. The theory was eventually absorbed into the modern theory of energy, though the term still survives in the context of celestial mechanics through the vis viva equation. The English equivalent "living force" was also used, for example by George William Hill. The term is due to German Gottfried Wilhelm Leibniz, who was the first to attempt a mathematical formulation from 1676 to 1689. Leibniz noticed that in many mechanical systems (of several masses, mi each with velocity vi) the quantity was conserved. He called this quantity the vis viva or "living force" of the system. The principle represented an accurate statement of the conservation of kinetic energy in elastic collisions that was independent of the conservation of momentum. However, many physicists at the time were unaware of this fact and, instead, were influenced by the prestige of Sir Isaac N
https://en.wikipedia.org/wiki/Coupling%20%28computer%20programming%29
In software engineering, coupling is the degree of interdependence between software modules; a measure of how closely connected two routines or modules are; the strength of the relationships between modules. Coupling is usually contrasted with cohesion. Low coupling often correlates with high cohesion, and vice versa. Low coupling is often thought to be a sign of a well-structured computer system and a good design, and when combined with high cohesion, supports the general goals of high readability and maintainability. History The software quality metrics of coupling and cohesion were invented by Larry Constantine in the late 1960s as part of a structured design, based on characteristics of “good” programming practices that reduced maintenance and modification costs. Structured design, including cohesion and coupling, were published in the article Stevens, Myers & Constantine (1974) and the book Yourdon & Constantine (1979), and the latter subsequently became standard terms. Types of coupling Coupling can be "low" (also "loose" and "weak") or "high" (also "tight" and "strong"). Some types of coupling, in order of highest to lowest coupling, are as follows: Procedural programming A module here refers to a subroutine of any kind, i.e. a set of one or more statements having a name and preferably its own set of variable names. Content coupling (high) Content coupling is said to occur when one module uses the code of another module, for instance a branch. This violates information hiding – a basic software design concept. Common coupling Common coupling is said to occur when several modules have access to the same global data. But it can lead to uncontrolled error propagation and unforeseen side-effects when changes are made. External coupling External coupling occurs when two modules share an externally imposed data format, communication protocol, or device interface. This is basically related to the communication to external tools and devices. Control coupling
https://en.wikipedia.org/wiki/RCA%20TK-40/41
The RCA TK-40 is considered to be the first practical color television camera, initially used for special broadcasts in late 1953, and with the follow-on TK-40A actually becoming the first to be produced in quantity in March 1954. The TK-40 was produced by RCA Broadcast to showcase the new compatible color system for NTSC—eventually named NTSC-M or simply M—which the company is credited with inventing (though several other companies including Philco were involved in development). Color had been attempted many times before, often in a semi-mechanical fashion, but this was the first series of practical, fully electronic cameras to go into widespread production. The camera was quickly followed with the TK-41, a line that shared a very similar shape, but featured streamlined and enhanced electronic subsystems. Earlier TK-40s are distinguished by the lack of venting slots on the sides (the cameras were prone to overheating, necessitating the addition of these openings). The last variation of the TK-41 was the TK-41C, released circa 1960. The cameras are considered to have been of very good quality, better than the very different TK-42 which succeeded the TK-40/41, and probably better than anything produced by RCA for several years after the production line shut down (NBC didn't fully replace their TK-41s in Rockefeller Center or their Burbank, California broadcast facility until the release of the TK-44A around 1968). Prior development in the late 1940s and early 1950s had included the TK-X (for "experimental"). An image (beam) splitter was used in the TK-40/41 to direct the incoming light into three image orthicon tubes for recording moving pictures in the red, green, and blue component colors. The early cameras required a very large amount of lighting, which caused television studios to become very warm due to the use of multi-kilowatt lamps (a problem that still exists somewhat today, but is less pronounced). The RCA TK-40 and TK-41 color cameras required more
https://en.wikipedia.org/wiki/Fulkerson%20Prize
The Fulkerson Prize for outstanding papers in the area of discrete mathematics is sponsored jointly by the Mathematical Optimization Society (MOS) and the American Mathematical Society (AMS). Up to three awards of $1,500 each are presented at each (triennial) International Symposium of the MOS. Originally, the prizes were paid out of a memorial fund administered by the AMS that was established by friends of the late Delbert Ray Fulkerson to encourage mathematical excellence in the fields of research exemplified by his work. The prizes are now funded by an endowment administered by MPS. Winners Source: Mathematical Optimization Society 1979: Richard M. Karp for classifying many important NP-complete problems. Kenneth Appel and Wolfgang Haken for the four color theorem. Paul Seymour for generalizing the max-flow min-cut theorem to matroids. 1982: D.B. Judin, Arkadi Nemirovski, Leonid Khachiyan, Martin Grötschel, László Lovász and Alexander Schrijver for the ellipsoid method in linear programming and combinatorial optimization. G. P. Egorychev and D. I. Falikman for proving van der Waerden's conjecture that the matrix with all entries equal has the smallest permanent of any doubly stochastic matrix. 1985: Jozsef Beck for tight bounds on the discrepancy of arithmetic progressions. H. W. Lenstra Jr. for using the geometry of numbers to solve integer programs with few variables in time polynomial in the number of constraints. Eugene M. Luks for a polynomial time graph isomorphism algorithm for graphs of bounded maximum degree. 1988: Éva Tardos for finding minimum cost circulations in strongly polynomial time. Narendra Karmarkar for Karmarkar's algorithm for linear programming. 1991: Martin E. Dyer, Alan M. Frieze and Ravindran Kannan for random-walk-based approximation algorithms for the volume of convex bodies. Alfred Lehman for 0,1-matrix analogues of the theory of perfect graphs. Nikolai E. Mnev for Mnev's universality theorem, that every semi
https://en.wikipedia.org/wiki/User%20identifier
Unix-like operating systems identify a user by a value called a user identifier, often abbreviated to user ID or UID. The UID, along with the group identifier (GID) and other access control criteria, is used to determine which system resources a user can access. The password file maps textual user names to UIDs. UIDs are stored in the inodes of the Unix file system, running processes, tar archives, and the now-obsolete Network Information Service. In POSIX-compliant environments, the shell command id gives the current user's UID, as well as more information such as the user name, primary user group and group identifier (GID). Process attributes The POSIX standard introduced three different UID fields into the process descriptor table, to allow privileged processes to take on different roles dynamically: Effective user ID The effective UID (euid) of a process is used for most access checks. It is also used as the owner for files created by that process. The effective GID (egid) of a process also affects access control and may also affect file creation, depending on the semantics of the specific kernel implementation in use and possibly the mount options used. According to BSD Unix semantics, the group ownership given to a newly created file is unconditionally inherited from the group ownership of the directory in which it is created. According to AT&T UNIX System V semantics (also adopted by Linux variants), a newly created file is normally given the group ownership specified by the egid of the process that creates the file. Most filesystems implement a method to select whether BSD or AT&T semantics should be used regarding group ownership of a newly created file; BSD semantics are selected for specific directories when the S_ISGID (s-gid) permission is set. File system user ID Linux also has a file system user ID (fsuid) which is used explicitly for access control to the file system. It matches the euid unless explicitly set otherwise. It may be root's user ID o
https://en.wikipedia.org/wiki/Here%20document
In computing, a here document (here-document, here-text, heredoc, hereis, here-string or here-script) is a file literal or input stream literal: it is a section of a source code file that is treated as if it were a separate file. The term is also used for a form of multiline string literals that use similar syntax, preserving line breaks and other whitespace (including indentation) in the text. Here documents originate in the Unix shell, and are found in the Bourne shell (sh), C shell (csh), tcsh (tcsh), KornShell (ksh), Bourne Again Shell (bash), and Z shell (zsh), among others. Here document-style string literals are found in various high-level languages, notably the Perl programming language (syntax inspired by Unix shell) and languages influenced by Perl, such as PHP and Ruby. JavaScript also supports this functionality via template literals, a feature added in its 6th revision (ES6). Other high-level languages such as Python, Julia and Tcl have other facilities for multiline strings. Here documents can be treated either as files or strings. Some shells treat them as a format string literal, allowing variable substitution and command substitution inside the literal. Overview The most common syntax for here documents, originating in Unix shells, is << followed by a delimiting identifier (often the word EOF or END), followed, starting on the next line, by the text to be quoted, and then closed by the same delimiting identifier on its own line. This syntax is because here documents are formally stream literals, and the content of the here document is often redirected to stdin (standard input) of the preceding command or curent shell script/executable. The here document syntax analogous to the shell syntax for input redirection, which is < “take input from the following file”. Other languages often use substantially similar syntax, but details of syntax and actual functionality can vary significantly. When used simply for string literals, the << does not indi
https://en.wikipedia.org/wiki/Chapman%20code
Chapman codes are a set of 3-letter codes used in genealogy to identify the administrative divisions in the United Kingdom, Ireland, the Isle of Man and the Channel Islands. Use They were created by the historian, Dr. Colin R Chapman, in the late 1970s, and as intended, provide a widely used shorthand in genealogy which follows the common practice of describing areas in terms of the counties existing in the 19th and 20th centuries. Other uses Chapman codes have no mapping, postal or administrative use. They can however be useful for disambiguation by postal services where a full county name or traditional abbreviation is not supplied after a place name which has more than one occurrence, a particular problem where these are post towns such as Richmond. Country codes CHI Channel Islands ENG England IOM Isle of Man IRL Ireland NIR Northern Ireland SCT Scotland WLS Wales ALL All countries Channel Islands ALD Alderney GSY Guernsey JSY Jersey SRK Sark England Historic counties BDF Bedfordshire BRK Berkshire BKM Buckinghamshire CAM Cambridgeshire CHS Cheshire CON Cornwall CUL Cumberland DBY Derbyshire DEV Devonshire DOR Dorset DUR Durham ESS Essex GLS Gloucestershire HAM Hampshire HEF Herefordshire HRT Hertfordshire HUN Huntingdonshire KEN Kent LAN Lancashire LEI Leicestershire LIN Lincolnshire MDX Middlesex NFK Norfolk NTH Northamptonshire NBL Northumberland NTT Nottinghamshire OXF Oxfordshire RUT Rutland SAL Shropshire (Salop) SOM Somerset STS Staffordshire SFK Suffolk SRY Surrey SSX Sussex WAR Warwickshire WES Westmorland WIL Wiltshire WOR Worcestershire YKS Yorkshire ERY East Riding NRY North Riding WRY West Riding Administrative areas AVN Avon CLV Cleveland CMA Cumbria SXE East Sussex GTM Greater Manchester HWR Hereford and Worcester HUM Humberside IOW Isle of Wight LND London MSY Merseyside WMD West Midlands NYK North Yorkshire SYK South Yorkshire TWR Tyne and Wear SXW West Sussex WYK West Yorkshire Scotland Historic counties ABD Aberdeenshire ANS Ang
https://en.wikipedia.org/wiki/Pseudorange
The pseudorange (from pseudo- and range) is the pseudo distance between a satellite and a navigation satellite receiver (see GNSS positioning calculation), for instance Global Positioning System (GPS) receivers. To determine its position, a satellite navigation receiver will determine the ranges to (at least) four satellites as well as their positions at time of transmitting. Knowing the satellites' orbital parameters, these positions can be calculated for any point in time. The pseudoranges of each satellite are obtained by multiplying the speed of light by the time the signal has taken from the satellite to the receiver. As there are accuracy errors in the time measured, the term pseudo-ranges is used rather than ranges for such distances. Pseudorange and time error estimation Typically a quartz oscillator is used in the receiver to do the timing. The accuracy of quartz clocks in general is worse (i.e. more) than one part in a million; thus, if the clock hasn't been corrected for a week, the deviation may be so great as to result in a reported location not on the Earth, but outside the Moon's orbit. Even if the clock is corrected, a second later the clock may no longer be usable for positional calculation, because after a second the error will be hundreds of meters for a typical quartz clock. But in a GPS receiver the clock's time is used to measure the ranges to different satellites at almost the same time, meaning all the measured ranges have the same error. Ranges with the same error are called pseudoranges. By finding the pseudo-range of an additional fourth satellite for precise position calculation, the time error can also be estimated. Therefore, by having the pseudoranges and the locations of four satellites, the actual receiver's position along the x, y, z axes and the time error can be computed accurately. The reason we speak of pseudo-ranges rather than ranges, is precisely this "contamination" with unknown receiver clock offset. GPS positioning
https://en.wikipedia.org/wiki/Rockbox
Rockbox is a free and open-source software replacement for the OEM firmware in various forms of digital audio players (DAPs) with an original kernel. It offers an alternative to the player's operating system, in many cases without removing the original firmware, which provides a plug-in architecture for adding various enhancements and functions. Enhancements include personal digital assistant (PDA) functions, applications, utilities, and games. Rockbox can also retrofit video playback functions on players first released in mid-2000. Rockbox includes a voice-driven user-interface suitable for operation by visually impaired users. Rockbox runs on a wide variety of devices with very different hardware abilities: from early Archos players with 1-bit character cell-based displays, to modern players with high resolution color displays, digital optical audio hardware and advanced recording abilities. History The Rockbox project began in late 2001 and was first implemented on the early Archos series of hard-disk based MP3 players/recorders (including the flash-only model Ondio), because of owner frustration with severe limitations in the manufacturer-supplied user interface and device operations. These devices have relatively weak main central processing units (CPU), and instead offload music playback to dedicated hardware MP3 decoding chips (MAS). Rockbox was unable to significantly alter playback abilities. Instead, it offered a greatly improved user interface and added plug-in functions absent in the factory firmware. Rockbox can be permanently flashed into flash memory on the Archos devices, making it a firmware replacement. Versions of Rockbox have since been produced for more sophisticated devices. These perform audio decoding in software, allowing Rockbox to potentially support many more music formats than the original firmware, and adding the extensibility and increased functions already present in the Archos ports. Rockbox is run from the hard drive or flash me
https://en.wikipedia.org/wiki/Comparison%20of%20file%20transfer%20protocols
This article lists communication protocols that are designed for file transfer over a telecommunications network. Protocols for shared file systems—such as 9P and the Network File System—are beyond the scope of this article, as are file synchronization protocols. Protocols for packet-switched networks A packet-switched network transmits data that is divided into units called packets. A packet comprises a header (which describes the packet) and a payload (the data). The Internet is a packet-switched network, and most of the protocols in this list are designed for its protocol stack, the IP protocol suite. They use one of two transport layer protocols: the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP). In the tables below, the "Transport" column indicates which protocol(s) the transfer protocol uses at the transport layer. Some protocols designed to transmit data over UDP also use a TCP port for oversight. The "Server port" column indicates the port from which the server transmits data. In the case of FTP, this port differs from the listening port. Some protocols—including FTP, FTP Secure, FASP, and Tsunami—listen on a "control port" or "command port", at which they receive commands from the client. Similarly, the encryption scheme indicated in the "Encryption" column applies to transmitted data only, and not to the authentication system. Overview Features The "Managed" column indicates whether the protocol is designed for managed file transfer (MFT). MFT protocols prioritise secure transmission in industrial applications that require such features as auditable transaction records, monitoring, and end-to-end data security. Such protocols may be preferred for electronic data interchange. Ports In the table below, the data port is the network port or range of ports through which the protocol transmits file data. The control port is the port used for the dialogue of commands and status updates between client and server. The column "Assi
https://en.wikipedia.org/wiki/Heterochrony
In evolutionary developmental biology, heterochrony is any genetically controlled difference in the timing, rate, or duration of a developmental process in an organism compared to its ancestors or other organisms. This leads to changes in the size, shape, characteristics and even presence of certain organs and features. It is contrasted with heterotopy, a change in spatial positioning of some process in the embryo, which can also create morphological innovation. Heterochrony can be divided into intraspecific heterochrony, variation within a species, and interspecific heterochrony, phylogenetic variation, i.e. variation of a descendant species with respect to an ancestral species. These changes all affect the start, end, rate or time span of a particular developmental process. The concept of heterochrony was introduced by Ernst Haeckel in 1875 and given its modern sense by Gavin de Beer in 1930. History The concept of heterochrony was introduced by the German zoologist Ernst Haeckel in 1875, where he used it to define deviations from recapitulation theory, which held that "ontogeny recapitulates phylogeny". As Stephen Jay Gould pointed out, Haeckel's term is now used in a sense contrary to his coinage; Haeckel had assumed that embryonic development (ontogeny) of "higher" animals recapitulated their ancestral development (phylogeny), as when mammal embryos have structures on the neck that resemble fish gills at one stage. This, in his view, necessarily compressed the earlier developmental stages, representing the ancestors, into a shorter time, meaning accelerated development. The ideal for Haeckel would be when the development of every part of an organism was thus accelerated, but he recognised that some organs could develop with displacements in position (heterotopy, another concept he originated) or time (heterochrony), as exceptions to his rule. He thus intended the term to mean a change in the timing of the embryonic development of one organ with respect to t
https://en.wikipedia.org/wiki/Vitascope
Vitascope was an early film projector first demonstrated in 1895 by Charles Francis Jenkins and Thomas Armat. They had made modifications to Jenkins' patented Phantoscope, which cast images via film and electric light onto a wall or screen. The Vitascope is a large electrically-powered projector that uses light to cast images. The images being cast are originally taken by a kinetoscope mechanism onto gelatin film. Using an intermittent mechanism, the film negatives produced up to fifty frames per second. The shutter opens and closes to reveal new images. This device can produce up to 3,000 negatives per minute. With the original Phantoscope and before he partnered with Armat, Jenkins displayed the earliest documented projection of a filmed motion picture in June 1894 in Richmond, Indiana. Armat independently sold the Phantoscope to The Kinetoscope Company. The company realized that their Kinetoscope would soon be a thing of the past with the rapidly advancing proliferation of early cinematic engineering. By 1897, just two years after the Vitascope was first demonstrated, the technology was being nationally adopted. Hawaii and Texas were among the first to incorporate the Vitascope into their picture shows. Vitascope was also used briefly as a trademark by Warner Brothers in 1930 for a widescreen process used for films such as Song of the Flame. Warner was trying to compete with other widescreen processes such as Magnascope, Widevision, Natural Vision (no relation to the later 3-D film process), and Fox Grandeur. History Thomas Edison was slow to develop a projection system at this time, since his company's single-user Kinetoscopes were very profitable. However, films projected for large audiences could generate more profits since fewer machines were needed in proportion to the number of viewers. Thus, others sought to develop their own projection systems. One inventor who led the way was Charles Francis Jenkins who created the Phantoscope. Jenkins was behind th
https://en.wikipedia.org/wiki/NTLMSSP
NTLMSSP (NT LAN Manager (NTLM) Security Support Provider) is a binary messaging protocol used by the Microsoft Security Support Provider Interface (SSPI) to facilitate NTLM challenge-response authentication and to negotiate integrity and confidentiality options. NTLMSSP is used wherever SSPI authentication is used including Server Message Block / CIFS extended security authentication, HTTP Negotiate authentication (e.g. IIS with IWA turned on) and MSRPC services. The NTLMSSP and NTLM challenge-response protocol have been documented in Microsoft's Open Protocol Specification. References Microsoft Windows security technology Computer access control protocols
https://en.wikipedia.org/wiki/The%20Schoolmaster%27s%20Assistant%2C%20Being%20a%20Compendium%20of%20Arithmetic%20Both%20Practical%20and%20Theoretical
The Schoolmaster's Assistant, Being a Compendium of Arithmetic both Practical and Theoretical was an early and popular English arithmetic textbook, written by Thomas Dilworth and first published in England in 1743. An American edition was published in 1769; by 1786 it had reached 23 editions, and through 1800 it was the most popular mathematics text in America. Sections Although different editions of the book varied in content according to the whims of their publishers, most editions of the book reached from the introductory topics to the advanced in five sections: Section I, Whole Numbers included the basis of the four operations and proceeded to topics on interest, rebates, partnership, weights and measures, the double rule of three, alligation, mediation and permutations. Section II dealt with common fractions. Section III dealt with decimal fraction operations and included roots up to the fourth power, and work on annuities and pensions. Section IV was a collection of 104 word problems to be solved. As was common in many older texts, the questions were sometimes stated in rhyme. Lessons for students were for memorization and recitation. Section V was on duodecimals, working with fractions in which the only denominators were twelfths. These types of problems continue in textbooks and appear in the 1870 edition of White's Complete Arithmetic in the appendix. The definition states: A Duodecimal is a denominate number in which twelve units of any denomination make a unit of the next higher denomination. Duodecimals are used by artificers in measuring surfaces and solids. References 1743 non-fiction books Mathematics textbooks
https://en.wikipedia.org/wiki/Built-in%20self-test
A built-in self-test (BIST) or built-in test (BIT) is a mechanism that permits a machine to test itself. Engineers design BISTs to meet requirements such as: high reliability lower repair cycle times or constraints such as: limited technician accessibility cost of testing during manufacture The main purpose of BIST is to reduce the complexity, and thereby decrease the cost and reduce reliance upon external (pattern-programmed) test equipment. BIST reduces cost in two ways: reduces test-cycle duration reduces the complexity of the test/probe setup, by reducing the number of I/O signals that must be driven/examined under tester control. Both lead to a reduction in hourly charges for automated test equipment (ATE) service. Applications BIST is commonly placed in weapons, avionics, medical devices, automotive electronics, complex machinery of all types, unattended machinery of all types, and integrated circuits. Automotive Automotive tests itself to enhance safety and reliability. For example, most vehicles with antilock brakes test them once per safety interval. If the antilock brake system has a broken wire or other fault, the brake system reverts to operating as a normal brake system. Most automotive engine controllers incorporate a "limp mode" for each sensor, so that the engine will continue to operate if the sensor or its wiring fails. Another, more trivial example of a limp mode is that some cars test door switches, and automatically turn lights on using seat-belt occupancy sensors if the door switches fail. Aviation Almost all avionics now incorporate BIST. In avionics, the purpose is to isolate failing line-replaceable units, which are then removed and repaired elsewhere, usually in depots or at the manufacturer. Commercial aircraft only make money when they fly, so they use BIST to minimize the time on the ground needed for repair and to increase the level of safety of the system which contains BIST. Similar arguments apply to military ai
https://en.wikipedia.org/wiki/Phage%20display
Phage display is a laboratory technique for the study of protein–protein, protein–peptide, and protein–DNA interactions that uses bacteriophages (viruses that infect bacteria) to connect proteins with the genetic information that encodes them. In this technique, a gene encoding a protein of interest is inserted into a phage coat protein gene, causing the phage to "display" the protein on its outside while containing the gene for the protein on its inside, resulting in a connection between genotype and phenotype. The proteins that the phages are displaying can then be screened against other proteins, peptides or DNA sequences, in order to detect interaction between the displayed protein and those of other molecules. In this way, large libraries of proteins can be screened and amplified in a process called in vitro selection, which is analogous to natural selection. The most common bacteriophages used in phage display are M13 and fd filamentous phage, though T4, T7, and λ phage have also been used. History Phage display was first described by George P. Smith in 1985, when he demonstrated the display of peptides on filamentous phage (long, thin viruses that infect bacteria) by fusing the virus's capsid protein to one peptide out of a collection of peptide sequences. This displayed the different peptides on the outer surfaces of the collection of viral clones, where the screening step of the process isolated the peptides with the highest binding affinity. In 1988, Stephen Parmley and George Smith described biopanning for affinity selection and demonstrated that recursive rounds of selection could enrich for clones present at 1 in a billion or less. In 1990, Jamie Scott and George Smith described creation of large random peptide libraries displayed on filamentous phage. Phage display technology was further developed and improved by groups at the Laboratory of Molecular Biology with Greg Winter and John McCafferty, The Scripps Research Institute with Richard Lerner and
https://en.wikipedia.org/wiki/Hypsometer
A hypsometer is an instrument for measuring height or elevation. Two different principles may be used: trigonometry and atmospheric pressure. Etymology The English word hypsometer originates from the Ancient Greek words ὕψος (húpsos, "height") and μέτρον (métron, "measure"). Scale hypsometer A simple scale hypsometer allows the height of a building or tree to be measured by sighting across a ruler to the base and top of the object being measured, when the distance from the object to the observer is known. Modern hypsometers use a combination of laser rangefinder and clinometer to measure distances to the top and bottom of objects, and the angle between the lines from the observer to each to calculate height. An example of such a scale hypsometer is illustrated here, and can be seen to consist of a sighting tube, a fixed horizontal scale, and an adjustable vertical scale with attached plumb line. The principle of operation of such a scale hypsometer is based on the idea of similar triangles in geometry. First the adjustable vertical scale is set at a suitable height. Then as in step 1 in the illustration, a sighting is taken on the top of the object whose height is to be determined, and the reading on the horizontal scale, h', recorded. Calculation from this value will eventually give the height h, from the eye-line of the observer to the top of the object whose height is to be determined. Similarly as in step 2 of the illustration, a sighting is taken on the base of the object whose height is to be determined, and the reading on the horizontal scale, d', recorded. Calculation from this value will eventually give the distance from the base of the object to the eye-line of the observer. Finally the distance x from the observer to the object needs to be measured. Looking at the geometry involved in step 1 results in sketch a: two right angled triangles, shown here with the identical small angles in yellow. Next in sketch b we see that the two triangles ha
https://en.wikipedia.org/wiki/Unipolar%20encoding
Unipolar encoding is a line code. A positive voltage represents a binary 1, and zero volts indicates a binary 0. It is the simplest line code, directly encoding the bitstream, and is analogous to on-off keying in modulation. Its drawbacks are that it is not self-clocking and it has a significant DC component, which can be halved by using return-to-zero, where the signal returns to zero in the middle of the bit period. With a 50% duty cycle each rectangular pulse is only at a positive voltage for half of the bit period. This is ideal if one symbol is sent much more often than the other and power considerations are necessary, and also makes the signal self-clocking. NRZ (Non-Return-to-Zero) - Traditionally, a unipolar scheme was designed as a non-return-to-zero (NRZ) scheme, in which the positive voltage defines bit 1 and the zero voltage defines bit 0. It is called NRZ because the signal does not return to zero at the middle of the bit, as instead happens in other line coding schemes, such as Manchester code. Compared with its polar counterpart, polar NRZ, this scheme applies a DC bias to the line and unnecessarily wastes power – The normalized power (power required to send 1 bit per unit line resistance) is double that for polar NRZ. For this reason, unipolar encoding is not normally used in data communications today. An Optical Orthogonal Code (OOC) is a family of (0,l) sequences with good auto- and cross-correlation properties for unipolar environments. They are used in optical communications to enable CDMA in optical fiber transmission. See also Bipolar encoding Bipolar violation On-off keying References Encodings Line codes
https://en.wikipedia.org/wiki/Potential%20theory
In mathematics and mathematical physics, potential theory is the study of harmonic functions. The term "potential theory" was coined in 19th-century physics when it was realized that two fundamental forces of nature known at the time, namely gravity and the electrostatic force, could be modeled using functions called the gravitational potential and electrostatic potential, both of which satisfy Poisson's equation—or in the vacuum, Laplace's equation. There is considerable overlap between potential theory and the theory of Poisson's equation to the extent that it is impossible to draw a distinction between these two fields. The difference is more one of emphasis than subject matter and rests on the following distinction: potential theory focuses on the properties of the functions as opposed to the properties of the equation. For example, a result about the singularities of harmonic functions would be said to belong to potential theory whilst a result on how the solution depends on the boundary data would be said to belong to the theory of the Laplace equation. This is not a hard and fast distinction, and in practice there is considerable overlap between the two fields, with methods and results from one being used in the other. Modern potential theory is also intimately connected with probability and the theory of Markov chains. In the continuous case, this is closely related to analytic theory. In the finite state space case, this connection can be introduced by introducing an electrical network on the state space, with resistance between points inversely proportional to transition probabilities and densities proportional to potentials. Even in the finite case, the analogue I-K of the Laplacian in potential theory has its own maximum principle, uniqueness principle, balance principle, and others. Symmetry A useful starting point and organizing principle in the study of harmonic functions is a consideration of the symmetries of the Laplace equation. Although it is
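The equations referred to above, written out in standard notation (the excerpt itself does not display them):

```latex
% Poisson's equation for the Newtonian gravitational potential \varphi with
% mass density \rho, and for the electrostatic potential V (SI units) with
% charge density \rho_q:
\nabla^{2}\varphi = 4\pi G \rho , \qquad \nabla^{2} V = -\frac{\rho_q}{\varepsilon_0} .
% In empty space both reduce to Laplace's equation, whose solutions are the
% harmonic functions studied in potential theory:
\Delta u = \sum_{i=1}^{n} \frac{\partial^{2} u}{\partial x_i^{2}} = 0 .
```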
https://en.wikipedia.org/wiki/Bipolar%20encoding
In telecommunication, bipolar encoding is a type of return-to-zero (RZ) line code, where two nonzero values are used, so that the three values are +, −, and zero. Such a signal is called a duobinary signal. Standard bipolar encodings are designed to be DC-balanced, spending equal amounts of time in the + and − states. The reason why bipolar encoding is classified as a return to zero (RZ) is that when a bipolar encoded channel is idle the line is held at a constant "zero" level, and when it is transmitting bits the line is either in a +V or -V state corresponding to the binary bit being transmitted. Thus, the line always returns to the "zero" level to denote optionally a separation of bits or to denote idleness of the line. Alternate mark inversion One kind of bipolar encoding is a paired disparity code, of which the simplest example is alternate mark inversion. In this code, a binary 0 is encoded as zero volts, as in unipolar encoding, whereas a binary 1 is encoded alternately as a positive voltage or a negative voltage. The name arose because, in the context of a T-carrier, a binary '1' is referred to as a "mark", while a binary '0' is called a "space". Voltage build-up The use of a bipolar code prevents a significant build-up of DC, as the positive and negative pulses average to zero volts. Little or no DC-component is considered an advantage because the cable may then be used for longer distances and to carry power for intermediate equipment such as line repeaters. The DC-component can be easily and cheaply removed before the signal reaches the decoding circuitry. Synchronization and zeroes Bipolar encoding is preferable to non-return-to-zero whenever signal transitions are required to maintain synchronization between the transmitter and receiver. Other systems must synchronize using some form of out-of-band communication, or add frame synchronization sequences that don't carry data to the signal. These alternative approaches require either an additi
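A minimal Python sketch of the alternate mark inversion rule described above (the function name is illustrative):

```python
def ami_encode(bits, v=1.0):
    """Alternate mark inversion: binary 0 -> 0 V; each binary 1 ("mark")
    alternates between +v and -v, so the DC component averages to zero."""
    out, polarity = [], +1
    for b in bits:
        if b == 0:
            out.append(0.0)
        else:
            out.append(polarity * v)
            polarity = -polarity
    return out

print(ami_encode([1, 0, 1, 1, 0, 1]))  # [1.0, 0.0, -1.0, 1.0, 0.0, -1.0]
```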
https://en.wikipedia.org/wiki/Sulfate-reducing%20microorganism
Sulfate-reducing microorganisms (SRM) or sulfate-reducing prokaryotes (SRP) are a group composed of sulfate-reducing bacteria (SRB) and sulfate-reducing archaea (SRA), both of which can perform anaerobic respiration utilizing sulfate () as terminal electron acceptor, reducing it to hydrogen sulfide (H2S). Therefore, these sulfidogenic microorganisms "breathe" sulfate rather than molecular oxygen (O2), which is the terminal electron acceptor reduced to water (H2O) in aerobic respiration. Most sulfate-reducing microorganisms can also reduce some other oxidized inorganic sulfur compounds, such as sulfite (), dithionite (), thiosulfate (), trithionate (), tetrathionate (), elemental sulfur (S8), and polysulfides (). Other than sulfate reduction, some sulfate-reducing microorganisms are also capable of other reactions like disproportionation of sulfur compounds. Depending on the context, "sulfate-reducing microorganisms" can be used in a broader sense (including all species that can reduce any of these sulfur compounds) or in a narrower sense (including only species that reduce sulfate, and excluding strict thiosulfate and sulfur reducers, for example). Sulfate-reducing microorganisms can be traced back to 3.5 billion years ago and are considered to be among the oldest forms of microbes, having contributed to the sulfur cycle soon after life emerged on Earth. Many organisms reduce small amounts of sulfates in order to synthesize sulfur-containing cell components; this is known as assimilatory sulfate reduction. By contrast, the sulfate-reducing microorganisms considered here reduce sulfate in large amounts to obtain energy and expel the resulting sulfide as waste; this is known as dissimilatory sulfate reduction. They use sulfate as the terminal electron acceptor of their electron transport chain. Most of them are anaerobes; however, there are examples of sulfate-reducing microorganisms that are tolerant of oxygen, and some of them can even perform aerobic respiration
https://en.wikipedia.org/wiki/Mode%20%28statistics%29
In statistics, the mode is the value that appears most often in a set of data values. If is a discrete random variable, the mode is the value at which the probability mass function takes its maximum value (i.e, ). In other words, it is the value that is most likely to be sampled. Like the statistical mean and median, the mode is a way of expressing, in a (usually) single number, important information about a random variable or a population. The numerical value of the mode is the same as that of the mean and median in a normal distribution, and it may be very different in highly skewed distributions. The mode is not necessarily unique to a given discrete distribution, since the probability mass function may take the same maximum value at several points , , etc. The most extreme case occurs in uniform distributions, where all values occur equally frequently. A mode of a continuous probability distribution is often considered to be any value at which its probability density function has a locally maximum value. When the probability density function of a continuous distribution has multiple local maxima it is common to refer to all of the local maxima as modes of the distribution, so any peak is a mode. Such a continuous distribution is called multimodal (as opposed to unimodal). In symmetric unimodal distributions, such as the normal distribution, the mean (if defined), median and mode all coincide. For samples, if it is known that they are drawn from a symmetric unimodal distribution, the sample mean can be used as an estimate of the population mode. Mode of a sample The mode of a sample is the element that occurs most often in the collection. For example, the mode of the sample [1, 3, 6, 6, 6, 6, 7, 7, 12, 12, 17] is 6. Given the list of data [1, 1, 2, 4, 4] its mode is not unique. A dataset, in such a case, is said to be bimodal, while a set with more than two modes may be described as multimodal. For a sample from a continuous distribution, such as [0.9
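A short Python sketch computing the mode(s) of a sample, reproducing the two examples given above:

```python
from collections import Counter

def modes(sample):
    """Return every mode of a sample (more than one if it is multimodal)."""
    counts = Counter(sample)
    top = max(counts.values())
    return [value for value, c in counts.items() if c == top]

print(modes([1, 3, 6, 6, 6, 6, 7, 7, 12, 12, 17]))  # [6]
print(modes([1, 1, 2, 4, 4]))                       # [1, 4]  (bimodal)
```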
https://en.wikipedia.org/wiki/Eumel
EUMEL (pronounced oimel for Extendable Multi User Microprocessor ELAN System and also known as L2 for Liedtke 2) is an operating system (OS) which began as a runtime system (environment) for the programming language ELAN. It was created in 1979 by Jochen Liedtke at the Bielefeld University. EUMEL initially ran on the 8-bit Zilog Z80 processor. It later was ported to many different computer architectures. More than 2000 Eumel systems shipped, mostly to schools and also to legal practices as a text processing platform. EUMEL is based on a virtual machine using a bitcode and achieves remarkable performance and function. Z80-based EUMEL systems provide full multi-user multi-tasking operation with virtual memory management and complete isolation of one process against all others. These systems usually execute ELAN programs faster than equivalent programs written in languages such as COBOL, BASIC, or Pascal, and compiled into Z80 machine code on other operating systems. One of the main features of EUMEL is that it is persistent, using a fixpoint/restart logic. This means that if the OS crashes, or the power fails, a user loses only a few minutes of work: on restart they continue working from the prior fixpoint with all program state intact fully. This is also termed orthogonal persistence. EUMEL was followed by the L3 microkernel, and later the L4 microkernel family. References Discontinued operating systems Microkernels
https://en.wikipedia.org/wiki/Paul%20Benacerraf
Paul Joseph Salomon Benacerraf (; born 26 March 1931) is a French-born American philosopher working in the field of the philosophy of mathematics who taught at Princeton University his entire career, from 1960 until his retirement in 2007. He was appointed Stuart Professor of Philosophy in 1974, and retired as the James S. McDonnell Distinguished University Professor of Philosophy. Life and career Benacerraf was born in Paris to a Moroccan-Venezuelan father and an Algerian mother. In 1939 the family moved to Caracas and then to New York City. When the family returned to Caracas, Benacerraf remained in the United States, boarding at the Peddie School in Hightstown, New Jersey. He attended Princeton University for both his undergraduate and graduate studies. He was elected a fellow of the American Academy of Arts and Sciences in 1998. His brother was the Venezuelan Nobel Prize-winning immunologist Baruj Benacerraf. Philosophical work Benacerraf is perhaps best known for his two papers "What Numbers Could Not Be" (1965) and "Mathematical Truth" (1973), and for his anthology on the philosophy of mathematics, co-edited with Hilary Putnam. In "What Numbers Could Not Be" (1965), Benacerraf argues against a Platonist view of mathematics, and for structuralism, on the ground that what is important about numbers is the abstract structures they represent rather than the objects that number words ostensibly refer to. In particular, this argument is based on the point that Ernst Zermelo and John von Neumann give distinct, and completely adequate, identifications of natural numbers with sets (see Zermelo ordinals and von Neumann ordinals). This argument is called Benacerraf's identification problem. In "Mathematical Truth" (1973), he argues that no interpretation of mathematics offers a satisfactory package of epistemology and semantics; it is possible to explain mathematical truth in a way that is consistent with our syntactico-semantical treatment of truth in non-math
https://en.wikipedia.org/wiki/Experimental%20pathology
Experimental pathology, also known as investigative pathology, is the scientific study of disease processes through the microscopic or molecular examination of organs, tissues, cells, or body fluids from diseased organisms. It is closely related, both historically and in modern academic settings, to the medical field of pathology. External link American Society for Investigative Pathology Pathology
https://en.wikipedia.org/wiki/Schwarz%20lemma
In mathematics, the Schwarz lemma, named after Hermann Amandus Schwarz, is a result in complex analysis about holomorphic functions from the open unit disk to itself. The lemma is less celebrated than deeper theorems, such as the Riemann mapping theorem, which it helps to prove. It is, however, one of the simplest results capturing the rigidity of holomorphic functions. Statement Let be the open unit disk in the complex plane centered at the origin, and let be a holomorphic map such that and on . Then for all , and . Moreover, if for some non-zero or , then for some with . Proof The proof is a straightforward application of the maximum modulus principle on the function which is holomorphic on the whole of , including at the origin (because is differentiable at the origin and fixes zero). Now if denotes the closed disk of radius centered at the origin, then the maximum modulus principle implies that, for , given any , there exists on the boundary of such that As we get . Moreover, suppose that for some non-zero , or . Then, at some point of . So by the maximum modulus principle, is equal to a constant such that . Therefore, , as desired. Schwarz–Pick theorem A variant of the Schwarz lemma, known as the Schwarz–Pick theorem (after Georg Pick), characterizes the analytic automorphisms of the unit disc, i.e. bijective holomorphic mappings of the unit disc to itself: Let be holomorphic. Then, for all , and, for all , The expression is the distance of the points , in the Poincaré metric, i.e. the metric in the Poincaré disc model for hyperbolic geometry in dimension two. The Schwarz–Pick theorem then essentially states that a holomorphic map of the unit disk into itself decreases the distance of points in the Poincaré metric. If equality holds throughout in one of the two inequalities above (which is equivalent to saying that the holomorphic map preserves the distance in the Poincaré metric), then must be an analytic automorphism of th
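The standard statement of the lemma, supplied here in LaTeX because the symbols did not survive in the excerpt above:

```latex
% Schwarz lemma. Let D = \{ z \in \mathbb{C} : |z| < 1 \} be the open unit disk.
\text{If } f : D \to \mathbb{C} \text{ is holomorphic, } f(0) = 0
\text{ and } |f(z)| \le 1 \text{ on } D, \text{ then}
\quad |f(z)| \le |z| \ \text{ for all } z \in D, \qquad |f'(0)| \le 1 .
% Moreover, if |f(z_0)| = |z_0| for some z_0 \neq 0, or |f'(0)| = 1, then
f(z) = a z \quad \text{for some } a \in \mathbb{C} \text{ with } |a| = 1 .
```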
https://en.wikipedia.org/wiki/Choice%20function
A choice function (selector, selection) is a mathematical function f that is defined on some collection X of nonempty sets and assigns some element of each set S in that collection to S by f(S); f(S) maps S to some element of S. In other words, f is a choice function for X if and only if it belongs to the direct product of X. An example Let X = { {1,4,7}, {9}, {2,7} }. Then the function that assigns 7 to the set {1,4,7}, 9 to {9}, and 2 to {2,7} is a choice function on X. History and importance Ernst Zermelo (1904) introduced choice functions as well as the axiom of choice (AC) and proved the well-ordering theorem, which states that every set can be well-ordered. AC states that every set of nonempty sets has a choice function. A weaker form of AC, the axiom of countable choice (ACω) states that every countable set of nonempty sets has a choice function. However, in the absence of either AC or ACω, some sets can still be shown to have a choice function. If is a finite set of nonempty sets, then one can construct a choice function for by picking one element from each member of This requires only finitely many choices, so neither AC or ACω is needed. If every member of is a nonempty set, and the union is well-ordered, then one may choose the least element of each member of . In this case, it was possible to simultaneously well-order every member of by making just one choice of a well-order of the union, so neither AC nor ACω was needed. (This example shows that the well-ordering theorem implies AC. The converse is also true, but less trivial.) Choice function of a multivalued map Given two sets X and Y, let F be a multivalued map from X to Y (equivalently, is a function from X to the power set of Y). A function is said to be a selection of F, if: The existence of more regular choice functions, namely continuous or measurable selections is important in the theory of differential inclusions, optimal control, and mathematical economics. See Selection the
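A small Python sketch of a choice function on a finite collection of nonempty sets (no axiom of choice is needed in the finite case, as noted above); the particular element picked from each set is arbitrary:

```python
def choice_function(collection):
    """Map each nonempty set in the collection to one of its own elements."""
    return {frozenset(s): next(iter(s)) for s in collection}

X = [{1, 4, 7}, {9}, {2, 7}]
f = choice_function(X)
print(f[frozenset({1, 4, 7})] in {1, 4, 7})  # True: f chooses from the set itself
print(f[frozenset({9})])                     # 9, the only possible choice
```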
https://en.wikipedia.org/wiki/Thermodesulfobacteriota
The Thermodesulfobacteriota are a phylum of thermophilic sulfate-reducing bacteria. A pathogenic intracellular thermodesulfobacteriote has recently been identified. Phylogeny The phylogeny is based on phylogenomic analysis: See also List of bacterial orders List of bacteria genera References Bergey's volume 1 Bacteria phyla
https://en.wikipedia.org/wiki/Joseph%20Kruskal
Joseph Bernard Kruskal, Jr. (; January 29, 1928 – September 19, 2010) was an American mathematician, statistician, computer scientist and psychometrician. Personal life Kruskal was born to a Jewish family in New York City to a successful fur wholesaler, Joseph B. Kruskal, Sr. His mother, Lillian Rose Vorhaus Kruskal Oppenheimer, became a noted promoter of origami during the early era of television. Kruskal had two notable brothers, Martin David Kruskal, co-inventor of solitons, and William Kruskal, who developed the Kruskal–Wallis one-way analysis of variance. One of Joseph Kruskal's nephews is notable computer scientist and professor Clyde Kruskal. Education and career He was a student at the University of Chicago earning a bachelor of science in mathematics in the year of 1948, and a master of science in mathematics in the following year 1949. After his time at the University of Chicago Kruskal attended Princeton University, where he completed his Ph.D. in 1954, nominally under Albert W. Tucker and Roger Lyndon, but de facto under Paul Erdős with whom he had two very short conversations. Kruskal worked on well-quasi-orderings and multidimensional scaling. He was a Fellow of the American Statistical Association, former president of the Psychometric Society, and former president of the Classification Society of North America. He also initiated and was first president of the Fair Housing Council of South Orange and Maplewood in 1963, and actively supported civil rights in several other organizations such as CORE. He worked at Bell Labs from 1959 to 1993. Research In statistics, Kruskal's most influential work is his seminal contribution to the formulation of multidimensional scaling. In computer science, his best known work is Kruskal's algorithm for computing the minimal spanning tree (MST) of a weighted graph. The algorithm first orders the edges by weight and then proceeds through the ordered list adding an edge to the partial MST provided that adding the
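A compact Python sketch of Kruskal's algorithm as described above (sort the edges by weight, then add each edge unless it would close a cycle); the union-find helper and the example graph are illustrative:

```python
def kruskal_mst(n, edges):
    """Minimal spanning tree by Kruskal's algorithm.
    edges: list of (weight, u, v) with vertices labelled 0..n-1."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):          # order edges by weight
        ru, rv = find(u), find(v)
        if ru != rv:                       # different components: no cycle
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal_mst(4, edges))  # [(1, 0, 1), (2, 1, 3), (3, 1, 2)]
```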
https://en.wikipedia.org/wiki/Biogeology
Biogeology is the study of the interactions between the Earth's biosphere and the lithosphere. Biogeology examines biotic, hydrologic, and terrestrial systems in relation to each other, to help understand the Earth's climate, oceans, and other effects on geologic systems. For example, bacteria are responsible for the formation of some minerals such as pyrite, and can concentrate economically important metals such as tin and uranium. Bacteria are also responsible for the chemical composition of the atmosphere, which affects weathering rates of rocks. Prior to the late Devonian period, there was little plant life beyond lichens, and bryophytes. At this time large vascular plants evolved, growing up to in height. These large plants changed the atmosphere, and altered the composition of the soil by increasing the amount of organic carbon. This helped prevent the soil being washed away through erosion. See also Pedology Geobiology References External links Biogeology department at the Institute of Paleobiology. American Geophysical Union (AGU) and Journal of Geophysical Research (JGR) - Biogeosciences European Geosciences Union (EGU) and Biogeosciences (BG) Earth system sciences Geobiology
https://en.wikipedia.org/wiki/Galling
Galling is a form of wear caused by adhesion between sliding surfaces. When a material galls, some of it is pulled with the contacting surface, especially if there is a large amount of force compressing the surfaces together. Galling is caused by a combination of friction and adhesion between the surfaces, followed by slipping and tearing of crystal structure beneath the surface. This will generally leave some material stuck or even friction welded to the adjacent surface, whereas the galled material may appear gouged with balled-up or torn lumps of material stuck to its surface. Galling is most commonly found in metal surfaces that are in sliding contact with each other. It is especially common where there is inadequate lubrication between the surfaces. However, certain metals will generally be more prone to galling, due to the atomic structure of their crystals. For example, aluminium is a metal that will gall very easily, whereas annealed (softened) steel is slightly more resistant to galling. Steel that is fully hardened is very resistant to galling. Galling is a common problem in most applications where metals slide in contact with other metals. This can happen regardless of whether the metals are the same or different. Alloys such as brass and bronze are often chosen for bearings, bushings, and other sliding applications because of their resistance to galling, as well as other forms of mechanical abrasion. Introduction Galling is adhesive wear that is caused by the microscopic transfer of material between metallic surfaces during transverse motion (sliding). It occurs frequently whenever metal surfaces are in contact, sliding against each other, especially with poor lubrication. It often occurs in high-load, low-speed applications, although it also can occur in high-speed applications with very little load. Galling is a common problem in sheet metal forming, bearings and pistons in engines, hydraulic cylinders, air motors, and many other industrial operatio
https://en.wikipedia.org/wiki/Test%20script
A test script in software testing is a set of instructions that will be performed on the system under test to test that the system functions as expected. Types of test scripts There are various means for executing test scripts. Manual testing. These are more commonly called test cases. Automated testing. Short program written in a programming language used to test part of the functionality of a software system. Test scripts written as a short program can either be written using a special automated functional GUI test tool (such as HP QuickTest Professional, Borland SilkTest, IBM TPNS and Rational Robot) or in a well-known programming language (such as C++, C#, Tcl, Expect, Java, PHP, Perl, Powershell, Python, or Ruby). As documented in IEEE, ISO and IEC. Extensively parameterized short programs a.k.a. Data-driven testing Reusable steps created in a table a.k.a. keyword-driven or table-driven testing. These last two types are also done in manual testing. Usage and functionality Automated tests may be executed continuously without the need for human intervention; they are easily repeatable and often faster. Automated tests are useful in situations where the test is to be executed several times, for example as part of regression testing. Automated tests can be disadvantageous when poorly written, leading to incorrect testing or broken tests being carried out. Automated tests can, like any piece of software, be poorly written or simply break during playback. They also can only examine what they have been programmed to examine. Since most systems are designed with human interaction in mind, it is good practice that a human tests the system at some point. A trained manual tester can notice that the system under test is misbehaving without being prompted or directed; automated tests can only examine what they have been programmed to examine. When used in regression testing, manual testers can find new bugs while ensuring that old bugs do not reappear while an
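A minimal example of an automated test script written as a short program; the function under test (`add`) is hypothetical and stands in for whatever part of the system is being exercised:

```python
import unittest

# Hypothetical unit under test, standing in for real application code.
def add(a, b):
    return a + b

class AddFunctionTests(unittest.TestCase):
    """Each test method states one expectation about the system under test
    and fails loudly if it is not met."""

    def test_adds_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()  # suitable for repeated, unattended (e.g. regression) runs
```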
https://en.wikipedia.org/wiki/Harnack%27s%20principle
In the mathematical field of partial differential equations, Harnack's principle or Harnack's theorem is a corollary of Harnack's inequality which deals with the convergence of sequences of harmonic functions. Given a sequence of harmonic functions on an open connected subset of the Euclidean space , which are pointwise monotonically nondecreasing in the sense that for every point of , then the limit automatically exists in the extended real number line for every . Harnack's theorem says that the limit either is infinite at every point of or it is finite at every point of . In the latter case, the convergence is uniform on compact sets and the limit is a harmonic function on . The theorem is a corollary of Harnack's inequality. If is a Cauchy sequence for any particular value of , then the Harnack inequality applied to the harmonic function implies, for an arbitrary compact set containing , that is arbitrarily small for sufficiently large and . This is exactly the definition of uniform convergence on compact sets. In words, the Harnack inequality is a tool which directly propagates the Cauchy property of a sequence of harmonic functions at a single point to the Cauchy property at all points. Having established uniform convergence on compact sets, the harmonicity of the limit is an immediate corollary of the fact that the mean value property (automatically preserved by uniform convergence) fully characterizes harmonic functions among continuous functions. The proof of uniform convergence on compact sets holds equally well for any linear second-order elliptic partial differential equation, provided that it is linear so that solves the same equation. The only difference is that the more general Harnack inequality holding for solutions of second-order elliptic PDE must be used, rather than that only for harmonic functions. Having established uniform convergence on compact sets, the mean value property is not available in this more general setting, and so
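The standard statement of Harnack's principle, written out since the symbols are not displayed in the excerpt above:

```latex
% Harnack's principle. Let u_1 \le u_2 \le u_3 \le \dots be harmonic functions
% on an open connected set \Omega \subseteq \mathbb{R}^n, and set
u(x) \;=\; \lim_{k \to \infty} u_k(x) \in (-\infty, +\infty] .
% Then exactly one of the following holds:
\text{either } u \equiv +\infty \text{ on } \Omega,
\quad \text{or} \quad
u \text{ is finite on all of } \Omega,\ u_k \to u
\text{ uniformly on compact subsets, and } u \text{ is harmonic on } \Omega .
```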
https://en.wikipedia.org/wiki/Isopsephy
Isopsephy (; isos meaning "equal" and psephos meaning "pebble") or isopsephism is the practice of adding up the number values of the letters in a word to form a single number. The total number is then used as a metaphorical bridge to other words evaluating the equal number, which satisfies isos or "equal" in the term. The early Greeks used pebbles arranged in patterns to learn arithmetic and geometry, which corresponds to psephos or "pebble" and "counting" in the term. Isopsephy is related to gematria: the same practice using the Hebrew alphabet. It is also related to the ancient number systems of many other peoples (for the Arabic alphabet version, see Abjad numerals). A gematria of Latin script languages was also popular in Europe from the Middle Ages to the Renaissance, and its legacy remains in code-breaking, numerology, and Masonic symbolism today (see arithmancy). History Until Arabic numerals were adopted and adapted from Indian numerals in the 8th and 9th centuries AD, and promoted in Europe by Fibonacci of Pisa with his 1202 book Liber Abaci, numerals were predominantly alphabetical. For instance in Ancient Greece, Greek numerals used the alphabet. It is just a short step from using letters of the alphabet in everyday arithmetic and mathematics to seeing numbers in words, and to writing with an awareness of the numerical dimension of words. An early reference to isopsephy, albeit of more-than-usual sophistication (employing multiplication rather than addition), is from the mathematician Apollonius of Perga, writing in the 3rd century BC. He asks: "Given the verse: (), what does the product of all its elements equal?" More conventional are the instances of isopsephy found in graffiti at Pompeii, dating from around 79 AD. One reads "" Another says, " " Suetonius, writing in 121 AD, reports a political slogan that someone wrote on a wall in Rome: which appears to be another example. In Greek, Νερων, Nero, has the numerical value 50+5+100+800+50=1005
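A short Python sketch of the letter-summing practice, using the conventional Greek numeral values (the archaic letters digamma/stigma = 6, koppa = 90 and sampi = 900 are omitted for brevity); it reproduces the Nero example quoted above:

```python
GREEK_VALUES = {
    "α": 1, "β": 2, "γ": 3, "δ": 4, "ε": 5, "ζ": 7, "η": 8, "θ": 9,
    "ι": 10, "κ": 20, "λ": 30, "μ": 40, "ν": 50, "ξ": 60, "ο": 70, "π": 80,
    "ρ": 100, "σ": 200, "ς": 200, "τ": 300, "υ": 400, "φ": 500,
    "χ": 600, "ψ": 700, "ω": 800,
}

def isopsephy(word):
    """Sum the numeral values of an unaccented, lower-case Greek word."""
    return sum(GREEK_VALUES.get(ch, 0) for ch in word.lower())

print(isopsephy("νερων"))  # 1005 = 50 + 5 + 100 + 800 + 50
```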
https://en.wikipedia.org/wiki/Phase%20vocoder
A phase vocoder is a type of vocoder-purposed algorithm which can interpolate information present in the frequency and time domains of audio signals by using phase information extracted from a frequency transform. The computer algorithm allows frequency-domain modifications to a digital sound file (typically time expansion/compression and pitch shifting). At the heart of the phase vocoder is the short-time Fourier transform (STFT), typically coded using fast Fourier transforms. The STFT converts a time domain representation of sound into a time-frequency representation (the "analysis" phase), allowing modifications to the amplitudes or phases of specific frequency components of the sound, before resynthesis of the time-frequency domain representation into the time domain by the inverse STFT. The time evolution of the resynthesized sound can be changed by means of modifying the time position of the STFT frames prior to the resynthesis operation allowing for time-scale modification of the original sound file. Phase coherence problem The main problem that has to be solved for all cases of manipulation of the STFT is the fact that individual signal components (sinusoids, impulses) will be spread over multiple frames and multiple STFT frequency locations (bins). This is because the STFT analysis is done using overlapping analysis windows. The windowing results in spectral leakage such that the information of individual sinusoidal components is spread over adjacent STFT bins. To avoid border effects of tapering of the analysis windows, STFT analysis windows overlap in time. This time overlap results in the fact that adjacent STFT analyses are strongly correlated (a sinusoid present in analysis frame at time "t" will be present in the subsequent frames as well). The problem of signal transformation with the phase vocoder is related to the problem that all modifications that are done in the STFT representation need to preserve the appropriate correlation between adja
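A minimal Python sketch of the analysis–modification–resynthesis skeleton described above, using SciPy's STFT/ISTFT; the "modification" shown is only a trivial gain change, not the phase-adjusting time-scale or pitch modification a real phase vocoder performs, and the sample rate and window sizes are illustrative:

```python
import numpy as np
from scipy.signal import stft, istft

fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440.0 * t)          # one second of a 440 Hz tone

# Analysis: overlapping windowed FFTs give a time-frequency representation.
f, frames, Z = stft(x, fs=fs, nperseg=1024, noverlap=768)

# A trivial frequency-domain modification (halve all magnitudes, keep phases).
# A phase vocoder would instead reposition frames and adjust their phases here.
Z_mod = 0.5 * Z

# Resynthesis: inverse STFT with overlap-add back to the time domain.
_, y = istft(Z_mod, fs=fs, nperseg=1024, noverlap=768)
print(x.shape, y.shape)
```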
https://en.wikipedia.org/wiki/Strategy%20%28game%20theory%29
In game theory, a player's strategy is any of the options which they choose in a setting where the optimal outcome depends not only on their own actions but on the actions of others. The discipline mainly concerns the action of a player in a game affecting the behavior or actions of other players. Some examples of "games" include chess, bridge, poker, monopoly, diplomacy or battleship. A player's strategy will determine the action which the player will take at any stage of the game. In studying game theory, economists enlist a more rational lens in analyzing decisions rather than the psychological or sociological perspectives taken when analyzing relationships between decisions of two or more parties in different disciplines. The strategy concept is sometimes (wrongly) confused with that of a move. A move is an action taken by a player at some point during the play of a game (e.g., in chess, moving white's Bishop a2 to b3). A strategy on the other hand is a complete algorithm for playing the game, telling a player what to do for every possible situation throughout the game. It is helpful to think about a "strategy" as a list of directions, and a "move" as a single turn on the list of directions itself. This strategy is based on the payoff or outcome of each action. The goal of each agent is to consider their payoff based on a competitors action. For example, competitor A can assume competitor B enters the market. From there, Competitor A compares the payoffs they receive by entering and not entering. The next step is to assume Competitor B doesn't enter and then consider which payoff is better based on if Competitor A chooses to enter or not enter. This technique can identify dominant strategies where a player can identify an action that they can take no matter what the competitor does to try to maximize the payoff. This also helps players to identify Nash equilibrium which are discussed in more detail below. A strategy profile (sometimes called a strategy c
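A small Python sketch of the market-entry reasoning described above; the payoff numbers are invented purely for illustration:

```python
# Payoffs to Competitor A, indexed by (A's action, B's action).
payoff_A = {
    ("enter",    "enter"):    2,
    ("enter",    "stay out"): 5,
    ("stay out", "enter"):    0,
    ("stay out", "stay out"): 0,
}

actions = ["enter", "stay out"]

def best_response(b_action):
    """A's payoff-maximising action against a fixed action of B."""
    return max(actions, key=lambda a: payoff_A[(a, b_action)])

# If A's best response is the same whatever B does, that action is dominant.
responses = {b: best_response(b) for b in actions}
print(responses)                          # {'enter': 'enter', 'stay out': 'enter'}
print(len(set(responses.values())) == 1)  # True -> "enter" dominates for these payoffs
```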
https://en.wikipedia.org/wiki/Autoregressive%20model
In statistics, econometrics, and signal processing, an autoregressive (AR) model is a representation of a type of random process; as such, it is used to describe certain time-varying processes in nature, economics, behavior, etc. The autoregressive model specifies that the output variable depends linearly on its own previous values and on a stochastic term (an imperfectly predictable term); thus the model is in the form of a stochastic difference equation (or recurrence relation which should not be confused with differential equation). Together with the moving-average (MA) model, it is a special case and key component of the more general autoregressive–moving-average (ARMA) and autoregressive integrated moving average (ARIMA) models of time series, which have a more complicated stochastic structure; it is also a special case of the vector autoregressive model (VAR), which consists of a system of more than one interlocking stochastic difference equation in more than one evolving random variable. Contrary to the moving-average (MA) model, the autoregressive model is not always stationary as it may contain a unit root. Definition The notation indicates an autoregressive model of order p. The AR(p) model is defined as where are the parameters of the model, and is white noise. This can be equivalently written using the backshift operator B as so that, moving the summation term to the left side and using polynomial notation, we have An autoregressive model can thus be viewed as the output of an all-pole infinite impulse response filter whose input is white noise. Some parameter constraints are necessary for the model to remain weak-sense stationary. For example, processes in the AR(1) model with are not stationary. More generally, for an AR(p) model to be weak-sense stationary, the roots of the polynomial must lie outside the unit circle, i.e., each (complex) root must satisfy (see pages 89,92 ). Intertemporal effect of shocks In an AR process, a one-time
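A minimal Python sketch (NumPy) that simulates an AR(2) process and checks the stationarity condition stated above, namely that the roots of the characteristic polynomial lie outside the unit circle; the chosen coefficients are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar(phi, n, sigma=1.0):
    """Simulate x_t = phi_1 x_{t-1} + ... + phi_p x_{t-p} + eps_t."""
    p = len(phi)
    x = np.zeros(n + p)
    eps = rng.normal(0.0, sigma, size=n + p)
    for t in range(p, n + p):
        past = x[t - p:t][::-1]            # x_{t-1}, ..., x_{t-p}
        x[t] = np.dot(phi, past) + eps[t]
    return x[p:]

phi = [0.5, 0.3]                           # AR(2) coefficients
x = simulate_ar(phi, 1000)

# Roots of 1 - phi_1 z - phi_2 z^2 must lie outside the unit circle.
roots = np.roots([-phi[1], -phi[0], 1.0])
print(np.all(np.abs(roots) > 1.0))         # True for phi = (0.5, 0.3)
```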
https://en.wikipedia.org/wiki/Spectral%20mask
In telecommunications, a spectral mask, also known as a channel mask or transmission mask, is a mathematically-defined set of lines applied to the levels of radio (or optical) transmissions. The spectral mask is generally intended to reduce adjacent-channel interference by limiting excessive radiation at frequencies beyond the necessary bandwidth. Attenuation of these spurious emissions is usually done with a band-pass filter, tuned to allow through the correct center frequency of the carrier wave, as well as all necessary sidebands. The spectral mask is usually one of the things defined in a bandplan for each particular band. It is essential in assuring that a transmission stays within its channel. An FM radio station, for example, must attenuate everything beyond ±75kHz from the center frequency by a few decibels, and anything beyond ±100 kHz (the channel boundary) by much more. Emissions on further adjacent channels must be reduced to almost zero. FM broadcast subcarriers are normally required to stay under 75 kHz (and up to 100 kHz if reduced) to comply with the mask. The introduction of in-band on-channel (IBOC) digital radio in the United States has been slowed by issues concerning the subcarriers it uses – and the corresponding increase in the amount of energy in the sidebands – overstepping the bounds of the spectral mask set forth for FM by the NRSC and enforced by the FCC. Other types of modulation have different spectral masks for the same purpose. Many digital modulation methods such as COFDM use the electromagnetic spectrum very efficiently, allowing for a very tight spectral mask. This allows placement of broadcast stations or other transmissions on channels right next to each other without interference, allowing for an increase in a band's total capacity. Conversely, it is allowing the U.S. to eliminate TV channels 52 to 69, freeing up 108 MHz (from approximately 700 to 800 MHz) for emergency services and to be auctioned off to the high
https://en.wikipedia.org/wiki/Pentyl%20pentanoate
Pentyl pentanoate (C4H9COOC5H11) is an ester used in dilute solution to replicate the scent or flavour of apple, and sometimes pineapple. It is referred to as pentyl valerate or amyl pentanoate using classical nomenclature. It can be used in a variety of applications, such as the production of flavoured products like sweets. References Flavors Valerate esters
https://en.wikipedia.org/wiki/ADO.NET
ADO.NET is a data access technology from the Microsoft .NET Framework that provides communication between relational and non-relational systems through a common set of components. ADO.NET is a set of computer software components that programmers can use to access data and data services from a database. It is a part of the base class library that is included with the Microsoft .NET Framework. It is commonly used by programmers to access and modify data stored in relational database systems, though it can also access data in non-relational data sources. ADO.NET is sometimes considered an evolution of ActiveX Data Objects (ADO) technology, but was changed so extensively that it can be considered an entirely new product. Architecture ADO.NET is conceptually divided into consumers and data providers. The consumers are the applications that need access to the data, and the providers are the software components that implement the interface and thereby provide the data to the consumer. Functionality exists in Visual Studio IDE to create specialized subclasses of the DataSet classes for a particular database schema, allowing convenient access to each field in the schema through strongly typed properties. This helps catch more programming errors at compile-time and enhances the IDE's Intellisense feature. A provider is a software component that interacts with a data source. ADO.NET data providers are analogous to ODBC drivers, JDBC drivers, and OLE DB providers. ADO.NET providers can be created to access such simple data stores as a text file and spreadsheet, through to such complex databases as Oracle Database, Microsoft SQL Server, MySQL, PostgreSQL, SQLite, IBM Db2, Sybase ASE, and many others. They can also provide access to hierarchical data stores such as email systems. Because different data store technologies can have different capabilities, every ADO.NET provider cannot implement every possible interface available in the ADO.NET standard. Microsoft describes
https://en.wikipedia.org/wiki/Maximum%20principle
In the mathematical fields of partial differential equations and geometric analysis, the maximum principle is any of a collection of results and techniques of fundamental importance in the study of elliptic and parabolic differential equations. In the simplest case, consider a function of two variables such that The weak maximum principle, in this setting, says that for any open precompact subset of the domain of , the maximum of on the closure of is achieved on the boundary of . The strong maximum principle says that, unless is a constant function, the maximum cannot also be achieved anywhere on itself. Such statements give a striking qualitative picture of solutions of the given differential equation. Such a qualitative picture can be extended to many kinds of differential equations. In many situations, one can also use such maximum principles to draw precise quantitative conclusions about solutions of differential equations, such as control over the size of their gradient. There is no single or most general maximum principle which applies to all situations at once. In the field of convex optimization, there is an analogous statement which asserts that the maximum of a convex function on a compact convex set is attained on the boundary. Intuition A partial formulation of the strong maximum principle Here we consider the simplest case, although the same thinking can be extended to more general scenarios. Let be an open subset of Euclidean space and let be a function on such that where for each and between 1 and , is a function on with . Fix some choice of in . According to the spectral theorem of linear algebra, all eigenvalues of the matrix are real, and there is an orthonormal basis of consisting of eigenvectors. Denote the eigenvalues by and the corresponding eigenvectors by , for from 1 to . Then the differential equation, at the point , can be rephrased as The essence of the maximum principle is the simple observation that if each e
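The two-variable setting referred to above, with the standard formulas written out (the excerpt itself does not display them):

```latex
% Simplest case: u = u(x, y) harmonic on its domain,
\frac{\partial^{2} u}{\partial x^{2}} + \frac{\partial^{2} u}{\partial y^{2}} = 0 .
% Weak maximum principle: for every open precompact subset U of the domain,
\max_{\overline{U}} u \;=\; \max_{\partial U} u .
% Strong maximum principle: if the maximum is attained at an interior point
% of U, then u is constant.
```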
https://en.wikipedia.org/wiki/Voigt%20notation
In mathematics, Voigt notation or Voigt form in multilinear algebra is a way to represent a symmetric tensor by reducing its order. There are a few variants and associated names for this idea: Mandel notation, Mandel–Voigt notation and Nye notation are others found. Kelvin notation is a revival by Helbig of old ideas of Lord Kelvin. The differences here lie in certain weights attached to the selected entries of the tensor. Nomenclature may vary according to what is traditional in the field of application. For example, a 2×2 symmetric tensor X has only three distinct elements, the two on the diagonal and the other being off-diagonal. Thus it can be expressed as the vector . As another example: The stress tensor (in matrix notation) is given as In Voigt notation it is simplified to a 6-dimensional vector: The strain tensor, similar in nature to the stress tensor—both are symmetric second-order tensors --, is given in matrix form as Its representation in Voigt notation is where , , and are engineering shear strains. The benefit of using different representations for stress and strain is that the scalar invariance is preserved. Likewise, a three-dimensional symmetric fourth-order tensor can be reduced to a 6×6 matrix. Mnemonic rule A simple mnemonic rule for memorizing Voigt notation is as follows: Write down the second order tensor in matrix form (in the example, the stress tensor) Strike out the diagonal Continue on the third column Go back to the first element along the first row. Voigt indexes are numbered consecutively from the starting point to the end (in the example, the numbers in blue). Mandel notation For a symmetric tensor of second rank only six components are distinct, the three on the diagonal and the others being off-diagonal. Thus it can be expressed, in Mandel notation, as the vector The main advantage of Mandel notation is to allow the use of the same conventional operations used with vectors, for example: A symmetric tensor o
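A short Python sketch of the stress/strain flattening described above, using the conventional Voigt ordering 11, 22, 33, 23, 13, 12; the helper names are illustrative:

```python
import numpy as np

# Voigt index pairs in the conventional order 11, 22, 33, 23, 13, 12.
VOIGT_PAIRS = [(0, 0), (1, 1), (2, 2), (1, 2), (0, 2), (0, 1)]

def stress_to_voigt(sigma):
    """Flatten a symmetric 3x3 stress tensor to its 6-component Voigt vector."""
    return np.array([sigma[i, j] for i, j in VOIGT_PAIRS])

def strain_to_voigt(eps):
    """Same mapping for strain, but with the shear components doubled
    (engineering shear strains), so that the dot product of the stress and
    strain vectors equals the tensor double contraction."""
    v = np.array([eps[i, j] for i, j in VOIGT_PAIRS])
    v[3:] *= 2.0
    return v

sigma = np.array([[10., 4., 3.],
                  [ 4., 8., 2.],
                  [ 3., 2., 6.]])
print(stress_to_voigt(sigma))   # [10.  8.  6.  2.  3.  4.]
```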
https://en.wikipedia.org/wiki/Terminal%20%28electronics%29
A terminal is the point at which a conductor from a component, device or network comes to an end. Terminal may also refer to an electrical connector at this endpoint, acting as the reusable interface to a conductor and creating a point where external circuits can be connected. A terminal may simply be the end of a wire or it may be fitted with a connector or fastener. In network analysis, terminal means a point at which connections can be made to a network in theory and does not necessarily refer to any physical object. In this context, especially in older documents, it is sometimes called a pole. On circuit diagrams, terminals for external connections are denoted by empty circles. They are distinguished from nodes or junctions which are entirely internal to the circuit, and are denoted by solid circles. All electrochemical cells have two terminals (electrodes) which are referred to as the anode and cathode or positive (+) and negative (-). On many dry batteries, the positive terminal (cathode) is a protruding metal cap and the negative terminal (anode) is a flat metal disc . In a galvanic cell such as a common AA battery, electrons flow from the negative terminal to the positive terminal, while the conventional current is opposite to this. Types of terminals Connectors Line splices Terminal strip, also known as a tag board or tag strip Solder cups or buckets Wire wrap connections (wire to board) Crimp terminals (ring, spade, fork, bullet, blade) Turret terminals for surface-mount circuits Crocodile clips Screw terminals and terminal blocks Wire nuts, a type of twist-on wire connector Leads on electronic components Battery terminals, often using screws or springs Electrical polarity See also Electrical connector - many terminals fall under this category Electrical termination - a method of signal conditioning References Electronic engineering Electrical components
https://en.wikipedia.org/wiki/Cereal%20germ
The germ of a cereal grain is the part that develops into a plant; it is the seed embryo. Along with bran, germ is often a by-product of the milling that produces refined grain products. Cereal grains and their components, such as wheat germ oil, rice bran oil, and maize bran, may be used as a source from which vegetable oil is extracted, or used directly as a food ingredient. The germ is retained as an integral part of whole-grain foods. Non-whole grain methods of milling are intended to isolate the endosperm, which is ground into flour, with removal of both the husk (bran) and the germ. Removal of bran is aimed at producing a flour with a white rather than a brown color, and eliminating fiber, which reduces nutrition. The germ is rich in polyunsaturated fats (which have a tendency to oxidize and become rancid on storage) and so germ removal improves the storage qualities of flour. Wheat germ Wheat germ or wheatgerm is a concentrated source of several essential nutrients, including vitamin E, folate (folic acid), phosphorus, thiamin, zinc, and magnesium, as well as essential fatty acids and fatty alcohols. It is a good source of fiber. White bread is made using flour that has had the germ and bran removed. Wheat germ can be added to protein shakes, casseroles, muffins, pancakes, cereals, yogurt, smoothies, cookies, and other goods. Wheat germ can become rancid if not properly stored in a refrigerator or freezer and away from sunlight. Some manufacturers prevent rancidity by storing wheat germ in vacuum-sealed glass containers, or by placing an oxygen-absorbing sachet inside air-tight packaging. Other uses In molecular biology, wheat germ extract is used to carry out cell-free in vitro translation experiments since the plant embryo contains all the macromolecular components necessary for translating mRNA into proteins but relatively low levels of its own mRNA. Wheat germ is also useful in biochemistry since it contains lectins that bind strongly to certain gly
https://en.wikipedia.org/wiki/Velocity-addition%20formula
In relativistic physics, a velocity-addition formula is an equation that specifies how to combine the velocities of objects in a way that is consistent with the requirement that no object's speed can exceed the speed of light. Such formulas apply to successive Lorentz transformations, so they also relate different frames. Accompanying velocity addition is a kinematic effect known as Thomas precession, whereby successive non-collinear Lorentz boosts become equivalent to the composition of a rotation of the coordinate system and a boost. Standard applications of velocity-addition formulas include the Doppler shift, Doppler navigation, the aberration of light, and the dragging of light in moving water observed in the 1851 Fizeau experiment. The notation employs as velocity of a body within a Lorentz frame , and as velocity of a second frame , as measured in , and as the transformed velocity of the body within the second frame. History The speed of light in a fluid is slower than the speed of light in vacuum, and it changes if the fluid is moving along with the light. In 1851, Fizeau measured the speed of light in a fluid moving parallel to the light using an interferometer. Fizeau's results were not in accord with the then-prevalent theories. Fizeau experimentally correctly determined the zeroth term of an expansion of the relativistically correct addition law in terms of as is described below. Fizeau's result led physicists to accept the empirical validity of the rather unsatisfactory theory by Fresnel that a fluid moving with respect to the stationary aether partially drags light with it, i.e. the speed is instead of , where is the speed of light in the aether, is the refractive index of the fluid, and is the speed of the fluid with respect to the aether. The aberration of light, of which the easiest explanation is the relativistic velocity addition formula, together with Fizeau's result, triggered the development of theories like Lorentz aether theory o
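The collinear special case of the formula, written out with explicitly labelled symbols since the excerpt's own notation is not displayed:

```latex
% A body moves with speed u in frame S', and S' moves with speed v relative to
% frame S, both along the same line. The body's speed w measured in S is
w \;=\; \frac{u + v}{1 + \dfrac{u\,v}{c^{2}}} ,
% which satisfies |w| < c whenever |u| < c and |v| < c, and reduces to the
% Galilean sum u + v when u v / c^{2} is negligible.
```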
https://en.wikipedia.org/wiki/South-pointing%20chariot
The south-pointing chariot (or carriage) was an ancient Chinese two-wheeled vehicle that carried a movable pointer to indicate the south, no matter how the chariot turned. Usually, the pointer took the form of a doll or figure with an outstretched arm. The chariot was supposedly used as a compass for navigation and may also have had other purposes. The ancient Chinese invented a mobile-like armored cart in the 5th century BC called the Dongwu Che (). It was used for the purpose of protecting warriors on the battlefield. The Chinese war wagon was designed as a kind of mobile protective cart with a shed-like roof. It would serve to be rolled up to city fortifications to provide protection for sappers digging underneath to weaken a wall's foundation. The early Chinese war wagon became the basis of technologies for the making of ancient Chinese south-pointing chariots. There are legends of earlier south-pointing chariots, but the first reliably documented one was created by the Chinese mechanical engineer Ma Jun ( – 265) of Cao Wei during the Three Kingdoms. No ancient chariots still exist, but many extant ancient Chinese texts mention them, saying they were used intermittently until about 1300. Some include information about their inner components and workings. There were probably several types of south-pointing chariot which worked differently. In most or all of them, the rotating road wheels mechanically operated a geared mechanism to keep the pointer aimed correctly. The mechanism had no magnets and did not automatically detect which direction was south. The pointer was aimed southward by hand at the start of a journey. Subsequently, whenever the chariot turned, the mechanism rotated the pointer relative to the body of the chariot to counteract the turn and keep the pointer aiming in a constant direction, to the south. Thus the mechanism did a kind of directional dead reckoning, which is inherently prone to cumulative errors and uncertainties. Some chariots' mech
https://en.wikipedia.org/wiki/Anti-neutrophil%20cytoplasmic%20antibody
Anti-neutrophil cytoplasmic antibodies (ANCAs) are a group of autoantibodies, mainly of the IgG type, against antigens in the cytoplasm of neutrophils (the most common type of white blood cell) and monocytes. They are detected as a blood test in a number of autoimmune disorders, but are particularly associated with systemic vasculitis, so called ANCA-associated vasculitides (AAV). ANCA IF patterns Immunofluorescence (IF) on ethanol-fixed neutrophils is used to detect ANCA, although formalin-fixed neutrophils may be used to help differentiate ANCA patterns. ANCA can be divided into four patterns when visualised by IF; cytoplasmic ANCA (c-ANCA), C-ANCA (atypical), perinuclear ANCA (p-ANCA) and atypical ANCA (a-ANCA), also known as x-ANCA. c-ANCA shows cytoplasmic granular fluorescence with central interlobular accentuation. C-ANCA (atypical) shows cytoplasmic staining that is usually uniform and has no interlobular accentuation. p-ANCA has three subtypes, classical p-ANCA, p-ANCA without nuclear extension and granulocyte specific-antinuclear antibody (GS-ANA). Classical p-ANCA shows perinuclear staining with nuclear extension, p-ANCA without nuclear extension has perinuclear staining without nuclear extension and GS-ANA shows nuclear staining on granulocytes only. a-ANCA often shows combinations of both cytoplasmic and perinuclear staining. ANCA antigens The c-ANCA antigen is specifically proteinase 3 (PR3). p-ANCA antigens include myeloperoxidase (MPO) and bacterial permeability increasing factor Bactericidal/permeability-increasing protein (BPI). Other antigens exist for c-ANCA (atypical), however many are as yet unknown. Classical p-ANCA occurs with antibodies directed to MPO. p-ANCA without nuclear extension occurs with antibodies to BPI, cathepsin G, elastase, lactoferrin and lysozyme. GS-ANA are antibodies directed to granulocyte specific nuclear antigens. Atypical ANCA are thought to be antigens similar to that of the p-ANCAs, however may occur due to differ
https://en.wikipedia.org/wiki/Degree%20of%20curvature
Degree of curve or degree of curvature is a measure of curvature of a circular arc used in civil engineering for its easy use in layout surveying. Definition The degree of curvature is defined as the central angle to the ends of an agreed length of either an arc or a chord; various lengths are commonly used in different areas of practice. This angle is also the change in forward direction as that portion of the curve is traveled. In an n-degree curve, the forward bearing changes by n degrees over the standard length of arc or chord. Usage Curvature is usually measured in radius of curvature. A small circle can be easily laid out by just using radius of curvature, but degree of curvature is more convenient for calculating and laying out the curve if the radius is large as a kilometer or a mile, as it needed for large scale works like roads and railroads. By using degrees of curvature, curve setting can be easily done with the help of a transit or theodolite and a chain, tape, or rope of a prescribed length. Length selection The usual distance used to compute degree of curvature in North American road work is of arc. Conversely, North American railroad work traditionally used 100 feet of chord, which is used in other places for road work. Other lengths may be used—such as where SI is favoured or a shorter length for sharper curves. Where degree of curvature is based on 100 units of arc length, the conversion between degree of curvature and radius is , where is degree and is radius. Since rail routes have very large radii, they are laid out in chords, as the difference to the arc is inconsequential; this made work easier before electronic calculators became available. The is called a station, used to define length along a road or other alignment, annotated as stations plus feet 1+00, 2+00, etc. Metric work may use similar notation, such as kilometers plus meters 1+000. Formulas for radius of curvature Degree of curvature can be converted to radius of curv
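A short Python sketch of the standard conversions between degree of curve and radius for both the arc and chord definitions; the 100-unit lengths follow the North American practice mentioned above, and the function names are illustrative:

```python
import math

def radius_from_degree_arc(d_deg, arc_len=100.0):
    """Arc definition: d degrees of central angle subtend arc_len of arc,
    so R = arc_len * 360 / (2 * pi * d); with 100 ft of arc, R ~ 5729.58 / d."""
    return arc_len * 360.0 / (2.0 * math.pi * d_deg)

def radius_from_degree_chord(d_deg, chord_len=100.0):
    """Chord definition (traditional railroad practice): a chord of chord_len
    subtends d degrees, so R = (chord_len / 2) / sin(d / 2)."""
    return (chord_len / 2.0) / math.sin(math.radians(d_deg) / 2.0)

print(round(radius_from_degree_arc(1.0), 2))    # ~5729.58 ft for a 1-degree curve
print(round(radius_from_degree_chord(1.0), 2))  # ~5729.65 ft, nearly identical
```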
https://en.wikipedia.org/wiki/134%20%28number%29
134 (one hundred [and] thirty-four) is the natural number following 133 and preceding 135. In mathematics 134 is a nontotient since there is no integer with exactly 134 coprimes below it. And it is a noncototient since there is no integer with 134 integers with common factors below it. 134 is . In Roman numerals, 134 is a Friedman number since CXXXIV = XV * (XC/X) - I. In the military was a Mission Buenaventura-class fleet oiler during World War II was a United States Navy during World War II was a United States Navy between World War I and World War II was the lead ship of the United States Navy heavy cruisers during World War II was a United States Navy General G. O. Squier-class transport ship during World War II was a United States Navy converted steel-hulled trawler, during World War II was a United States Navy which saw battle during the Battle of Midway was a United States Navy during World War II , was a United States S-class submarine which was later transferred to the Royal Navy was a United States Navy Crater-class cargo ship during World War II 134 (Bedford) Squadron in the United Kingdom Air Training Corps The 134th (48th Highlanders) Battalion, CEF was a Toronto, Ontario unit of the Canadian Expeditionary Force during World War I The 134th Pennsylvania Volunteer Infantry was an infantry regiment in the Union Army during the American Civil War In sports Former running back George Reed for the Saskatchewan Roughriders held the career record of 134 rushing touchdowns In transportation London Buses route 134 is a Transport for London contracted bus route in London In other fields 134 is also: The year AD 134 or 134 BC 134 AH is a year in the Islamic calendar that corresponds to 751 – 752 CE 134 Sophrosyne is a large main belt asteroid with a dark surface and most likely a primitive carbonaceous composition Caesium-134 has a half-life of 2.0652 years. It is produced both directly (at a very small yield) as a fission
https://en.wikipedia.org/wiki/Biometric%20passport
A biometric passport (also known as an electronic passport, e-passport or a digital passport) is a traditional passport that has an embedded electronic microprocessor chip, which contains biometric information that can be used to authenticate the identity of the passport holder. It uses contactless smart card technology, including a microprocessor chip (computer chip) and antenna (for both power to the chip and communication) embedded in the front or back cover, or centre page, of the passport. The passport's critical information is printed on the data page of the passport, repeated on the machine readable lines and stored in the chip. Public key infrastructure (PKI) is used to authenticate the data stored electronically in the passport chip, supposedly making it expensive and difficult to forge when all security mechanisms are fully and correctly implemented. Many countries are moving towards issuing biometric passports to their citizens. Malaysia was the first country to issue biometric passports in 1998. In December 2008, 60 countries were issuing such passports, which increased to over 150 by mid-2019. The currently standardised biometrics used for this type of identification system are facial recognition, fingerprint recognition, and iris recognition. These were adopted after assessment of several different kinds of biometrics including retinal scan. Document and chip characteristics are documented in the International Civil Aviation Organization's (ICAO) Doc 9303 (ICAO 9303). The ICAO defines the biometric file formats and communication protocols to be used in passports. Only the digital image (usually in JPEG or JPEG 2000 format) of each biometric feature is actually stored in the chip. The comparison of biometric features is performed outside the passport chip by electronic border control systems (e-borders). To store biometric data on the contactless chip, it includes a minimum of 32 kilobytes of EEPROM storage memory, and runs on an interface in accordan
https://en.wikipedia.org/wiki/PowerVR
PowerVR is a division of Imagination Technologies (formerly VideoLogic) that develops hardware and software for 2D and 3D rendering, and for video encoding, decoding, associated image processing and DirectX, OpenGL ES, OpenVG, and OpenCL acceleration. PowerVR also develops AI accelerators called Neural Network Accelerator (NNA). The PowerVR product line was originally introduced to compete in the desktop PC market for 3D hardware accelerators with a product with a better price–performance ratio than existing products like those from 3dfx Interactive. Rapid changes in that market, notably with the introduction of OpenGL and Direct3D, led to rapid consolidation. PowerVR introduced new versions with low-power electronics that were aimed at the laptop computer market. Over time, this developed into a series of designs that could be incorporated into system-on-a-chip architectures suitable for handheld device use. PowerVR accelerators are not manufactured by PowerVR, but instead their IP blocks of integrated circuit designs and patents are licensed to other companies, such as Texas Instruments, Intel, NEC, BlackBerry, Renesas, Samsung, Sony, STMicroelectronics, Freescale, Apple, NXP Semiconductors (formerly Philips Semiconductors), and many others. Technology The PowerVR chipset uses a method of 3D rendering known as tile-based deferred rendering (often abbreviated as TBDR) which is tile-based rendering combined with PowerVR's proprietary method of Hidden Surface Removal (HSR) and Hierarchical Scheduling Technology (HST). As the polygon generating program feeds triangles to the PowerVR (driver), it stores them in memory in a triangle strip or an indexed format. Unlike other architectures, polygon rendering is (usually) not performed until all polygon information has been collated for the current frame. Furthermore, the expensive operations of texturing and shading of pixels (or fragments) is delayed, whenever possible, until the visible surface at a pixel is determin
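As a rough illustration of the tile-based deferred rendering idea described above (bin geometry per tile, then resolve visibility before any texturing or shading), here is a deliberately toy sketch; the rectangular "primitives", tile size, and data layout are invented for the example and do not reflect PowerVR's actual hardware pipeline.

```python
# Toy tile-based deferred rendering: primitives are binned per tile, and each
# pixel "shades" only the nearest covering primitive, so hidden surfaces are
# never shaded at all.
TILE = 4
WIDTH = HEIGHT = 8

# Each primitive: (x0, y0, x1, y1, depth, colour) -- flat axis-aligned quads.
prims = [
    (0, 0, 8, 8, 0.9, "blue"),   # far background quad
    (2, 2, 6, 6, 0.3, "red"),    # nearer quad that wins where it overlaps
]

def covers(p, x, y):
    x0, y0, x1, y1, _, _ = p
    return x0 <= x < x1 and y0 <= y < y1

# 1. Binning pass: record which primitives touch each tile.
tiles = [(tx, ty) for tx in range(0, WIDTH, TILE) for ty in range(0, HEIGHT, TILE)]
bins = {t: [p for p in prims
            if not (p[2] <= t[0] or p[0] >= t[0] + TILE or
                    p[3] <= t[1] or p[1] >= t[1] + TILE)]
        for t in tiles}

# 2. Per-tile pass: hidden-surface removal first, shading only the survivor.
framebuffer = {}
for (tx, ty), plist in bins.items():
    for x in range(tx, tx + TILE):
        for y in range(ty, ty + TILE):
            hits = [p for p in plist if covers(p, x, y)]
            if hits:
                nearest = min(hits, key=lambda p: p[4])
                framebuffer[(x, y)] = nearest[5]

print(framebuffer[(3, 3)], framebuffer[(0, 0)])  # -> red blue
```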
https://en.wikipedia.org/wiki/Facial%20Action%20Coding%20System
The Facial Action Coding System (FACS) is a system to taxonomize human facial movements by their appearance on the face, based on a system originally developed by a Swedish anatomist named Carl-Herman Hjortsjö. It was later adopted by Paul Ekman and Wallace V. Friesen, and published in 1978. Ekman, Friesen, and Joseph C. Hager published a significant update to FACS in 2002. Movements of individual facial muscles are encoded by the FACS from slightly different, momentary changes in facial appearance. It has proven useful to psychologists and to animators. Background In 2009, a study examined spontaneous facial expressions in sighted and blind judo athletes. It found that many facial expressions are innate and not visually learned. Method Using the FACS, human coders can manually code nearly any anatomically possible facial expression, deconstructing it into the specific "action units" (AU) and their temporal segments that produced the expression. As AUs are independent of any interpretation, they can be used for any higher order decision making process including recognition of basic emotions, or pre-programmed commands for an ambient intelligent environment. The FACS manual is over 500 pages in length and provides the AUs, as well as Ekman's interpretation of their meanings. The FACS defines AUs as contractions or relaxations of one or more muscles. It also defines a number of "action descriptors", which differ from AUs in that the authors of the FACS have not specified the muscular basis for the action and have not distinguished specific behaviors as precisely as they have for the AUs. For example, the FACS can be used to distinguish two types of smiles as follows: the insincere and voluntary Pan-Am smile (contraction of zygomatic major alone), and the sincere and involuntary Duchenne smile (contraction of zygomatic major and the inferior part of orbicularis oculi). The FACS is designed to be self-instructional. People can learn the technique from a
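To make the action-unit idea concrete, here is a minimal sketch that distinguishes the two smiles mentioned above by their AU sets; the AU numbers used (6 for the cheek raiser, 12 for the lip corner puller) follow the commonly cited FACS numbering, and the lookup itself is an illustration, not an excerpt from the FACS manual.

```python
# AU 12: lip corner puller (zygomatic major); AU 6: cheek raiser (orbicularis oculi).
SMILES = {
    "Pan-Am smile": frozenset({12}),
    "Duchenne smile": frozenset({6, 12}),
}

def classify_smile(observed_aus):
    """Return the smile label whose action-unit set matches exactly, if any."""
    observed = frozenset(observed_aus)
    for label, aus in SMILES.items():
        if observed == aus:
            return label
    return "no match"

print(classify_smile({6, 12}))  # -> Duchenne smile
print(classify_smile({12}))     # -> Pan-Am smile
```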
https://en.wikipedia.org/wiki/WCWN
WCWN (channel 45) is a television station licensed to Schenectady, New York, United States, serving the Capital District as an affiliate of The CW. It is owned by Sinclair Broadcast Group alongside CBS affiliate WRGB (channel 6, also licensed to Schenectady). Both stations share studios on Balltown Road in Niskayuna, New York (with a Schenectady postal address), while WCWN's transmitter is located on the Helderberg Escarpment west of New Salem. WCWN brands as CW 15 after the cable channel position on Charter Spectrum and Verizon Fios. History WCWN originated as an independent station on December 3, 1984, under the call letters WUSV, after its owner, Union Street Video. Chattanooga, Tennessee-based Media Central, a company that helped sign on a handful of independent UHF stations during this time, helped Union Street Video build the station, and also provided services to WUSV. However, it made little headway against the market's original independent, WXXA-TV (channel 23). As with most independent stations in mid-sized markets during that period, it had difficulties from the outset in terms of getting programming with only afternoon cartoons and some reruns getting respectable ratings. Most of the stronger programming went to WXXA, perhaps due in part to one of its founding partners, longtime movie director Arthur Penn, having ties to Hollywood. Also hurting WUSV was the presence of New York City's three independent stations–WNEW-TV (now WNYW), WOR-TV (now WWOR-TV) and WPIX–and Boston's WSBK-TV on cable, which had been the case for more than a decade. When WXXA joined Fox as a charter affiliate in 1986 (displacing WNYW on cable systems), WUSV had some difficulty filling the void. The Capital District was just barely large enough at the time to support what were essentially two independent stations (WXXA, like other early Fox affiliates, was still programmed as an independent at the time), and there simply wasn't enough programming to go around. Due to continuing fin
https://en.wikipedia.org/wiki/Non-cooperative%20game%20theory
A non-cooperative game is a form of game studied in game theory. Non-cooperative games are used in situations where there is competition between the players of the game. In this model, there are no external rules that enforce the cooperation of the players, so it is typically used to model a competitive environment. This is stated in various accounts, the most prominent being John Nash's paper. That said, there are many arguments to be made regarding this point: decades of research have shown that non-cooperative game models can be used to model cooperation as well, and, conversely, that cooperative game models can be used to model competition. One example is the use of non-cooperative models in determining the stability and sustainability of cartels and coalitions. Non-zero-sum games and zero-sum games are both types of non-cooperative games. Non-cooperative game theory in academic literature In reference to John Nash's 1951 article in the journal Annals of Mathematics, the Nash equilibria of non-cooperative games are often referred to as "non-cooperative equilibria". According to Tamer Başar in Lecture Notes on Non-Cooperative Game Theory, a non-cooperative game requires specifying: the number of players; the possible actions available to each player, and any constraints that may be imposed on them; the objective function of each player, which he or she attempts to optimise; any time ordering of the execution of the actions if the players are allowed to act more than once; any information acquisition that takes place, and how the information available to a player at each point in time depends on the past actions of other players; whether there is a player (nature) whose action is the outcome of a probabilistic event with a fixed (known) distribution; and the utility of each player for every action profile. Assumptions Perfect recall: each player remembers their decisions and known inform
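As a small concrete illustration of the framework described above (players, actions, a utility for every action profile, and the resulting non-cooperative equilibrium), the sketch below checks each action profile of a Prisoner's Dilemma for the Nash property; the payoff numbers are the textbook example, not values taken from this article.

```python
# Two-player non-cooperative game: an action profile is a Nash equilibrium if
# no player can improve their own payoff by deviating unilaterally.
payoffs = {  # (row action, col action) -> (row payoff, col payoff)
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
actions = ["C", "D"]

def is_nash(a_row, a_col):
    row_ok = all(payoffs[(a_row, a_col)][0] >= payoffs[(d, a_col)][0] for d in actions)
    col_ok = all(payoffs[(a_row, a_col)][1] >= payoffs[(a_row, d)][1] for d in actions)
    return row_ok and col_ok

print([profile for profile in payoffs if is_nash(*profile)])  # -> [('D', 'D')]
```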
https://en.wikipedia.org/wiki/Norton%20Internet%20Security
Norton Internet Security, developed by Symantec Corporation, is a discontinued computer program that provides malware protection and removal during a subscription period. It uses signatures and heuristics to identify viruses. Other features include a personal firewall, email spam filtering, and phishing protection. With the release of the 2015 line in summer 2014, Symantec officially retired Norton Internet Security after 14 years as the chief Norton product. It was superseded by Norton Security, a rechristened adaptation of the Norton 360 security suite. Symantec distributed the product as a download, a boxed CD, and as OEM software. Some retailers distributed it on a flash drive. Norton Internet Security held a 61% market share in the United States retail security suite category in the first half of 2007. History In August 1990, Symantec acquired Peter Norton Computing from Peter Norton. Norton and his company developed various applications for DOS, including an antivirus. Symantec continued the development of the acquired technologies, marketed under the name of "Norton", with the tagline "from Symantec". Norton's crossed-arm pose, a registered U.S. trademark, was featured on Norton product packaging. However, his pose later moved to the spine of the packaging, and then disappeared. Users of the 2006 and later versions could upgrade to the replacement software without buying a new subscription. The upgraded product retains the earlier product's subscription data. Releases were named by year but have internal version numbers as well. The internal version number was advanced to 15.x in the 2008 edition to match the Norton AntiVirus release of the same year. As of the 2013 (20.x) release the product dropped the year from its name, although it still was referenced in some venues. 2000 (1.0, 2.0) Norton Internet Security 2000, released January 10, 2000, was Symantec's first foray beyond virus protection and content filters. Its release followed an alliance betwee
https://en.wikipedia.org/wiki/Symmetric%20polynomial
In mathematics, a symmetric polynomial is a polynomial in variables, such that if any of the variables are interchanged, one obtains the same polynomial. Formally, is a symmetric polynomial if for any permutation of the subscripts one has . Symmetric polynomials arise naturally in the study of the relation between the roots of a polynomial in one variable and its coefficients, since the coefficients can be given by polynomial expressions in the roots, and all roots play a similar role in this setting. From this point of view the elementary symmetric polynomials are the most fundamental symmetric polynomials. Indeed, a theorem called the fundamental theorem of symmetric polynomials states that any symmetric polynomial can be expressed in terms of elementary symmetric polynomials. This implies that every symmetric polynomial expression in the roots of a monic polynomial can alternatively be given as a polynomial expression in the coefficients of the polynomial. Symmetric polynomials also form an interesting structure by themselves, independently of any relation to the roots of a polynomial. In this context other collections of specific symmetric polynomials, such as complete homogeneous, power sum, and Schur polynomials play important roles alongside the elementary ones. The resulting structures, and in particular the ring of symmetric functions, are of great importance in combinatorics and in representation theory. Examples The following polynomials in two variables X1 and X2 are symmetric: as is the following polynomial in three variables X1, X2, X3: There are many ways to make specific symmetric polynomials in any number of variables (see the various types below). An example of a somewhat different flavor is where first a polynomial is constructed that changes sign under every exchange of variables, and taking the square renders it completely symmetric (if the variables represent the roots of a monic polynomial, this polynomial gives its discriminant).
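The displayed formulas were stripped from this extract; as a hedged restatement of standard material (not recovered verbatim from the article), the defining condition and two small examples can be written in LaTeX as follows.

```latex
% P is symmetric when it is unchanged by every permutation \sigma of the indices:
P(X_{\sigma(1)}, \dots, X_{\sigma(n)}) = P(X_1, \dots, X_n)

% A symmetric polynomial in two variables:
X_1^3 + X_2^3 - 7

% The "different flavor" construction: square an alternating polynomial; for two
% variables this gives the discriminant of a monic quadratic with roots X_1, X_2:
(X_1 - X_2)^2 = X_1^2 - 2\,X_1 X_2 + X_2^2
```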
https://en.wikipedia.org/wiki/Dunnage
Dunnage is inexpensive or waste material used to load and secure cargo during transportation; more loosely, it refers to miscellaneous baggage brought along during travel. The term can also refer to low-priority cargo used to fill out transport capacity which would otherwise ship underweight. In the context of shipping manufactured goods, dunnage refers to the packing material used as protective fill inside the carton, box or other type of container, used to prevent the merchandise from being damaged during shipment. These materials include bubble wrap; wadded, crumpled or shredded paper; styrofoam; inflated air packs; and other materials. International laws When unloading a ship, sometimes there is a problem as to what to do with the dunnage. Sometimes the dunnage cannot be landed because of customs duties on imported timber, or quarantine rules to avoid foreign insect pests getting ashore, and as a result often the unwanted dunnage is later furtively jettisoned overside and adds to the area's driftwood problem. According to U.S. and International Law (MARPOL 73/78) it is illegal for ships to dump dunnage within of the shore. Currently, the International Plant Protection Convention (IPPC), an international regulatory agency, mandates its 134 signatory countries to comply with the ISPM 15, which requires all dunnage to be heat-treated or fumigated with pesticides and marked with an accredited seal. There are several instances in which foreign insects have entered by land and caused devastation to the ecosystem, even ruining crops. Construction In construction, dunnage is often scrap wood or disposable manufactured material whose purpose is to be placed on the ground to raise construction materials to allow access for forklifts and slings for hoisting, and to protect them from the elements. Dunnage can also refer to a structural platform for mechanical equipment. Typically, these are open steel structures located on the roof of a building, consisting of steel b
https://en.wikipedia.org/wiki/Elementary%20symmetric%20polynomial
In mathematics, specifically in commutative algebra, the elementary symmetric polynomials are one type of basic building block for symmetric polynomials, in the sense that any symmetric polynomial can be expressed as a polynomial in elementary symmetric polynomials. That is, any symmetric polynomial is given by an expression involving only additions and multiplication of constants and elementary symmetric polynomials. There is one elementary symmetric polynomial of degree in variables for each positive integer , and it is formed by adding together all distinct products of distinct variables. Definition The elementary symmetric polynomials in variables , written for , are defined by and so forth, ending with In general, for we define so that if . (Sometimes, is included among the elementary symmetric polynomials, but excluding it allows generally simpler formulation of results and properties.) Thus, for each positive integer less than or equal to there exists exactly one elementary symmetric polynomial of degree in variables. To form the one that has degree , we take the sum of all products of -subsets of the variables. (By contrast, if one performs the same operation using multisets of variables, that is, taking variables with repetition, one arrives at the complete homogeneous symmetric polynomials.) Given an integer partition (that is, a finite non-increasing sequence of positive integers) , one defines the symmetric polynomial , also called an elementary symmetric polynomial, by . Sometimes the notation is used instead of . Examples The following lists the elementary symmetric polynomials for the first four positive values of . For : For : For : For : Properties The elementary symmetric polynomials appear when we expand a linear factorization of a monic polynomial: we have the identity That is, when we substitute numerical values for the variables , we obtain the monic univariate polynomial (with variable ) whose roots are
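Since the definitions and examples in this extract lost their displayed formulas, here is a hedged restatement in LaTeX of the standard elementary symmetric polynomials in three variables and the factorization identity mentioned under Properties (standard material, not a verbatim recovery of the article's displays).

```latex
e_1(X_1, X_2, X_3) = X_1 + X_2 + X_3
e_2(X_1, X_2, X_3) = X_1 X_2 + X_1 X_3 + X_2 X_3
e_3(X_1, X_2, X_3) = X_1 X_2 X_3

% Expanding a monic polynomial from its linear factors produces the e_k as
% signed coefficients:
\prod_{j=1}^{n} (\lambda - X_j)
    = \lambda^{n} - e_1 \lambda^{n-1} + e_2 \lambda^{n-2} - \cdots + (-1)^{n} e_n
```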
https://en.wikipedia.org/wiki/Adiabatic%20invariant
A property of a physical system, such as the entropy of a gas, that stays approximately constant when changes occur slowly is called an adiabatic invariant. By this it is meant that if a system is varied between two end points, as the time for the variation between the end points is increased to infinity, the variation of an adiabatic invariant between the two end points goes to zero. In thermodynamics, an adiabatic process is a change that occurs without heat flow; it may be slow or fast. A reversible adiabatic process is an adiabatic process that occurs slowly compared to the time to reach equilibrium. In a reversible adiabatic process, the system is in equilibrium at all stages and the entropy is constant. In the first half of the 20th century, scientists working in quantum physics used the term "adiabatic" for reversible adiabatic processes and later for any gradually changing conditions which allow the system to adapt its configuration. The quantum mechanical definition is closer to the thermodynamical concept of a quasistatic process and has no direct relation with adiabatic processes in thermodynamics. In mechanics, an adiabatic change is a slow deformation of the Hamiltonian, where the fractional rate of change of the energy is much slower than the orbital frequency. The areas enclosed by the different motions in phase space are the adiabatic invariants. In quantum mechanics, an adiabatic change is one that occurs at a rate much slower than the difference in frequency between energy eigenstates. In this case, the energy states of the system do not make transitions, so that the quantum number is an adiabatic invariant. The old quantum theory was formulated by equating the quantum number of a system with its classical adiabatic invariant. This determined the form of the Bohr–Sommerfeld quantization rule: the quantum number is the area in phase space of the classical orbit. Thermodynamics In thermodynamics, adiabatic changes are those that do not inc
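As a standard worked illustration of the mechanical statement above (the phase-space area as the adiabatic invariant), consider a one-dimensional harmonic oscillator; this is textbook material offered as a sketch, not text from the article.

```latex
% Action integral over one orbit (the classical adiabatic invariant):
J = \oint p \, \mathrm{d}q
% For a harmonic oscillator of energy E and angular frequency \omega the orbit
% is the ellipse p^2/2m + m\omega^2 q^2/2 = E, whose enclosed area is
J = \frac{2\pi E}{\omega},
% so E/\omega stays approximately constant under a slow change of \omega.
% The old quantum theory set J = nh, recovering E = n\hbar\omega.
```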
https://en.wikipedia.org/wiki/Triangulation%20%28topology%29
In mathematics, triangulation describes the replacement of topological spaces by piecewise linear spaces, i.e. the choice of a homeomorphism to a suitable simplicial complex. Spaces that are homeomorphic to a simplicial complex are called triangulable. Triangulation has various uses in different branches of mathematics, for instance in algebraic topology, in complex analysis or in modeling. Motivation On the one hand, it is sometimes useful to forget superfluous information about topological spaces: the replacement of the original spaces with simplicial complexes may help to recognize crucial properties and to gain a better understanding of the considered object. On the other hand, simplicial complexes are objects of a combinatorial character, and therefore one can assign to them quantities arising from their combinatorial pattern, for instance, the Euler characteristic. Triangulation now allows such quantities to be assigned to topological spaces. Investigations concerning the existence and uniqueness of triangulations established a new branch of topology, namely piecewise linear topology (PL topology for short). Its main purpose is the study of the topological properties of simplicial complexes and their generalization, cell complexes. Simplicial complexes Abstract simplicial complexes An abstract simplicial complex over a set is a system of non-empty subsets such that: for each ; if and . The elements of are called simplices, the elements of are called vertices. A simplex with vertices has dimension by definition. The dimension of an abstract simplicial complex is defined as . Abstract simplicial complexes can be thought of as geometrical objects too. This requires the notion of a geometric simplex. Geometric simplices Let be affinely independent points in , i.e. the vectors are linearly independent. The set is said to be the simplex spanned by . It has dimension by definition. The points are called the vertices of , the simplices spanned by subsets of the vertices are ca
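Because the set-builder formulas were stripped from this extract, a small concrete example may help; the following abstract simplicial complex (the boundary of a triangle) is a standard illustration chosen here, not one quoted from the article.

```latex
% An abstract simplicial complex over the vertex set V = \{1, 2, 3\}:
\mathcal{K} = \{\, \{1\}, \{2\}, \{3\}, \{1,2\}, \{1,3\}, \{2,3\} \,\}
% Every non-empty subset of each simplex is again in \mathcal{K}; its dimension
% is \max_{F \in \mathcal{K}} \bigl(|F| - 1\bigr) = 1, and its geometric
% realization, the boundary of a triangle, triangulates the circle S^1.
```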
https://en.wikipedia.org/wiki/Mondex
Mondex was a smart card electronic cash system, implemented as a stored-value card and owned by Mastercard. Pioneered by two bankers from NatWest in 1990, it was later spun off to a separate consortium, then sold to Mastercard. Mondex allowed users to use its electronic card as they would cash, enabling peer-to-peer offline transfers between cards, which did not need any authorization, via Mondex ATMs, computer card readers, personal 'wallets' and specialized telephones. This offline nature of the system and other unique features made Mondex stand out from leading competitors at the time, such as Visa Cash, which was a closed system and was much closer in concept to the transactional operation of a traditional payment card. Mondex also allowed for a full-card locking mechanism, usage with multiple currencies within a single card, and a certain degree of user anonymity. Mondex cards were at one point commonplace at many universities, where they were mostly trialed, and were also issued as dual-application cards, such as combined credit cards, ID cards, and loyalty membership cards. The system was introduced around the world in more than a dozen nations, with various differing implementations. Despite continuous investment from Mastercard, the Mondex scheme did not seem to catch on worldwide and the last place where it operated, Taiwan, had its cards disabled on 31 May 2008, being succeeded by a similar but more technologically advanced system, named Mastercard Cash, which utilized contactless operation, culminating in the TaiwanMoney Card. The Mondex scheme was far ahead of its time, a forerunner of the cashless society that is common today through mobile payment, digital wallets, and contactless payment. History The Mondex scheme was invented in 1990 by Tim Jones and Graham Higgins of NatWest in the United Kingdom. In March 1992 internal tests of the system, known at the time as 'Byte', started running at one of NatWest's major comp
https://en.wikipedia.org/wiki/Visa%20Cash
Visa Cash is a smart card electronic cash system, implemented as a stored-value card owned by Visa. Trialled in various locations worldwide (including Leeds, UK in 1997), the system works via a 'chip' embedded in a bank card, and looks similar to the so-called 'Chip and PIN' cards issued, among other countries, in Europe. Another early trial was performed in conjunction with the World Ski Championship in Trondheim, Norway in late February 1997. Several different cards were issued, with loaded face values of 200, 250, 400 and 600 NOK. The cards expired in August 1997. The card is 'loaded' with cash via specialized ATMs, and the cash can later be 'spent' by inserting the card into the retailer's card-reader and pressing a button to confirm the amount. Neither PIN entry nor a signature is required, which makes for a speedy transaction for the card's owner. Other competing cashless payment systems for micro-payments (small amounts) include Mondex. A more successful smart card electronic cash system is the Octopus card system in Hong Kong. In Singapore, the first-generation microchip Cashcard by Network for Electronic Transfers, launched in 1997, was powered by Visa Cash; it is still used by motorists to pay Electronic Road Pricing tolls and carpark fees via the IU reader in the car. See also Digital wallet Mondex Dexit (corporation) Octopus card OnePulse Oyster card Network for Electronic Transfers T-money (Visa Cash Korea) External links VisaCash.org, a website dedicated to saving the history of the Visa Cash card program. Payment systems Smart cards Visa Inc. Stored-value payment card
https://en.wikipedia.org/wiki/Wireless%20Leiden
Wireless Leiden is a wireless community network in Leiden, Netherlands. History The Wireless Leiden Foundation (founded in 2002) set up a Wi-Fi wireless network in Leiden, the Netherlands, only with the help of volunteers, with some financial support by sponsors. The network is maintained completely by volunteers. The network is accessible free for everybody who wants to use it. This is possible because there are no expenses of any importance, as the volunteers who build and maintain the network do not receive any payment for their contribution and the materials needed were donated. The software used in the network is completely open source. Internet provider Demon Internet, donates free Internet access to the foundation. The Internet connection is, however, limited to the downloading of web pages. The network can be used at homes, schools and public buildings (like libraries). In many places in Leiden it is advisable to use an external antenna. In the city center, however, on most places a laptop antenna is enough to access the network. Wireless Leiden has been called one of the most advanced community Wi-Fi networks in the world. Wireless Leiden was published as a case study of a Wi-Fi based community network in 2010. Stefan Verhaegh of the University of Twente analysed the "Wireless Leiden" case in his 2010 PhD thesis. In February 2022, it was announced that the wifi network would be phased out due to diminished usage and cost of maintaining the network. References External links Wireless Leiden Foundation Wireless Leiden Wiki eGovernment case description Wi-Fi providers Wireless network organizations Leiden
https://en.wikipedia.org/wiki/Spacecraft%20call%20signs
Spacecraft call signs are radio call signs used for communication in crewed spaceflight. These are not formalized or regulated to the same degree as other equivalent forms of transportation, like aircraft. The three nations currently launching crewed space missions use different methods to identify the ground and space radio stations; the United States uses either the names given to the space vehicles or else the project name and mission number. Russia traditionally assigns code names as call signs to individual cosmonauts, more in the manner of aviator call signs, rather than to the spacecraft. The only continuity in call signs for spacecraft has been the issuance of "ISS"-suffixed (or "-1SS", for its visual similarity) call signs by various countries in the Amateur Radio service as a citizen of their country has been assigned there. The first Amateur Radio call sign assigned to the International Space Station was NA1SS by the United States. OR4ISS (Belgium), GB1SS (UK), DP0ISS (Germany), and RS0ISS (Russia) are examples of others, but are not all-inclusive of others also issued. United States In America's first crewed space program Project Mercury, the astronauts named their individual spacecraft. These names each consisted of a significant word followed by the number 7 (representing the seven original astronauts) and were used as the call signs by the capsule communicators (CAPCOMs). In Project Gemini, the astronauts were not officially permitted to name their two-man spacecraft, which was identified by "Gemini" followed by the mission number (3 through 12). A notable exception was that Gus Grissom named his Gemini 3 spacecraft Molly Brown after the Titanic survivor, as a joke based on his experience with his Liberty Bell 7 capsule sinking. This name was used as a call sign by CAPCOM L. Gordon Cooper, without NASA's approval. Starting with the second Gemini flight, Gemini 4, NASA used the Lyndon B. Johnson Space Center (JSC) to house the flight control ce
https://en.wikipedia.org/wiki/Cray-3/SSS
The Cray-3/SSS (Super Scalable System) was a pioneering massively parallel supercomputer project that bonded a two-processor Cray-3 to a new SIMD processing unit based entirely in the computer's main memory. It was later considered as an add-on for the Cray T90 series in the form of the T94/SSS, but there is no evidence this was ever built. Design The SSS project started after a Supercomputing Research Center (SRC) engineer, Ken Iobst, noticed a novel way to implement a parallel computer. Previous massively SIMD designs, like the Connection Machines, consisted of a large number of individual processing elements, each consisting of a simple processor and some local memory. Results that needed to be passed from element to element were passed along networking links at relatively slow speeds. This was a serious bottleneck in most parallel designs, which limited their use to certain roles where these interdependencies could be reduced. Iobst's idea was to use the super-fast scatter/gather hardware from the Cray-3 to move the data around instead of using a separate network. This would offer at least an order of magnitude better performance than systems based on "commodity" hardware. Better yet, the machine would still include a complete Cray-3 CPU, allowing the machine as a whole to use either SIMD or vector instructions depending on the particulars of the problem. Now all that remained was the selection of a processor. Since the Cray-3 already had a vector processor for heavy computing, the SIMD processors themselves could be considerably simpler, handling only the most basic instructions. This is where the SSS concept was truly unique; since the problem with most SIMD machines was moving data around, Iobst suggested that the processors be built into the SRAM chips themselves. Memory is normally organized within the RAM chips in a row/column format, with a controller on the chip reading requested data from the chip in parallel across the rows, then assembling the result
https://en.wikipedia.org/wiki/Equilibrium%20point%20%28mathematics%29
In mathematics, specifically in differential equations, an equilibrium point is a constant solution to a differential equation. Formal definition The point is an equilibrium point for the differential equation if for all . Similarly, the point is an equilibrium point (or fixed point) for the difference equation if for . Equilibria can be classified by looking at the signs of the eigenvalues of the linearization of the equations about the equilibria. That is to say, by evaluating the Jacobian matrix at each of the equilibrium points of the system, and then finding the resulting eigenvalues, the equilibria can be categorized. Then the behavior of the system in the neighborhood of each equilibrium point can be qualitatively determined, (or even quantitatively determined, in some instances), by finding the eigenvector(s) associated with each eigenvalue. An equilibrium point is hyperbolic if none of the eigenvalues have zero real part. If all eigenvalues have negative real parts, the point is stable. If at least one has a positive real part, the point is unstable. If at least one eigenvalue has negative real part and at least one has positive real part, the equilibrium is a saddle point and it is unstable. If all the eigenvalues are real and have the same sign the point is called a node. See also Autonomous equation Critical point Steady state References Further reading Stability theory Dynamical systems
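The classification procedure described above (evaluate the Jacobian at the equilibrium and inspect the eigenvalues) can be sketched in a few lines; the damped-pendulum system and its damping coefficient below are chosen here for illustration and are not taken from the article.

```python
import numpy as np

# Classify the equilibrium (0, 0) of the damped pendulum x' = y,
# y' = -sin(x) - 0.5*y via the eigenvalues of its Jacobian.
def jacobian(x, y):
    # Partial derivatives of (y, -sin(x) - 0.5*y) with respect to (x, y).
    return np.array([[0.0, 1.0],
                     [-np.cos(x), -0.5]])

eigs = np.linalg.eigvals(jacobian(0.0, 0.0))
print(eigs)  # a complex-conjugate pair with real part -0.25

if all(abs(ev.real) > 1e-12 for ev in eigs):
    verdict = "stable" if all(ev.real < 0 for ev in eigs) else "unstable"
    print(f"hyperbolic equilibrium, {verdict}")  # -> hyperbolic equilibrium, stable
else:
    print("non-hyperbolic: the linearization is inconclusive")
```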
https://en.wikipedia.org/wiki/MIL-STD-1553
MIL-STD-1553 is a military standard published by the United States Department of Defense that defines the mechanical, electrical, and functional characteristics of a serial data bus. It was originally designed as an avionic data bus for use with military avionics, but has also become commonly used in spacecraft on-board data handling (OBDH) subsystems, both military and civil, including use on the James Webb space telescope. It features multiple (commonly dual) redundant balanced line physical layers, a (differential) network interface, time-division multiplexing, half-duplex command/response protocol, and can handle up to 31 Remote Terminals (devices); 32 is typically designated for broadcast messages. A version of MIL-STD-1553 using optical cabling in place of electrical is known as MIL-STD-1773. MIL-STD-1553 was first published as a U.S. Air Force standard in 1973, and first was used on the F-16 Falcon fighter aircraft. Other aircraft designs quickly followed, including the F/A-18 Hornet, AH-64 Apache, P-3C Orion, F-15 Eagle and F-20 Tigershark. It is now widely used by all branches of the U.S. military and by NASA. Outside of the US it has been adopted by NATO as STANAG 3838 AVS. STANAG 3838, in the form of UK MoD Def-Stan 00-18 Part 2, is used on the Panavia Tornado; BAE Systems Hawk (Mk 100 and later); and extensively, together with STANAG 3910 "EFABus", on the Eurofighter Typhoon. Saab JAS 39 Gripen uses MIL-STD-1553B. The Russian made MiG-35 also uses MIL-STD-1553. MIL-STD-1553 is being replaced on some newer U.S. designs by IEEE 1394 (commonly known as FireWire). Revisions MIL-STD-1553B, which superseded the earlier 1975 specification MIL-STD-1553A, was published in 1978. The basic difference between the 1553A and 1553B revisions is that in the latter, the options are defined rather than being left for the user to define as required. It was found that when the standard did not define an item, there was no coordination in its use. Hardware and software ha
https://en.wikipedia.org/wiki/Graphics%20pipeline
The computer graphics pipeline, also known as the rendering pipeline or graphics pipeline, is a framework within computer graphics that outlines the necessary procedures for transforming a three-dimensional (3D) scene into a two-dimensional (2D) representation on a screen. Once a 3D model is generated, whether it's for a video game or any other form of 3D computer animation, the graphics pipeline converts the model into a visually perceivable format on the computer display. Due to the dependence on specific software, hardware configurations, and desired display attributes, a universally applicable graphics pipeline does not exist. Nevertheless, graphics application programming interfaces (APIs), such as Direct3D and OpenGL, were developed to standardize common procedures and oversee the graphics pipeline of a given hardware accelerator. These APIs provide an abstraction layer over the underlying hardware, relieving programmers from the need to write code explicitly targeting various graphics hardware accelerators like AMD, Intel, Nvidia, and others. The model of the graphics pipeline is usually used in real-time rendering. Often, most of the pipeline steps are implemented in hardware, which allows for special optimizations. The term "pipeline" is used in a similar sense for the pipeline in processors: the individual steps of the pipeline run in parallel as long as any given step has what it needs. Concept The 3D pipeline usually refers to the most common form of computer 3D rendering called 3D polygon rendering, distinct from raytracing and raycasting. In raycasting, a ray originates at the point where the camera resides, and if that ray hits a surface, the color and lighting of the point on the surface where the ray hit is calculated. In 3D polygon rendering the reverse happens- the area that is in view of the camera is calculated and then rays are created from every part of every surface in view of the camera and traced back to the camera. Structure A graphic
https://en.wikipedia.org/wiki/XML%20database
An XML database is a data persistence software system that allows data to be specified, and sometimes stored, in XML format. This data can be queried, transformed, exported and returned to a calling system. XML databases are a flavor of document-oriented databases which are in turn a category of NoSQL database. Rationale for XML in databases There are a number of reasons to directly specify data in XML or other document formats such as JSON. For XML in particular, they include: An enterprise may have a lot of XML in an existing standard format Data may need to be exposed or ingested as XML, so using another format such as relational forces double-modeling of the data XML is very well suited to sparse data, deeply nested data and mixed content (such as text with embedded markup tags) XML is human readable whereas relational tables require expertise to access Metadata is often available as XML Semantic web data is available as RDF/XML Provides a solution for Object-relational impedance mismatch Steve O'Connell gives one reason for the use of XML in databases: the increasingly common use of XML for data transport, which has meant that "data is extracted from databases and put into XML documents and vice-versa". It may prove more efficient (in terms of conversion costs) and easier to store the data in XML format. In content-based applications, the ability of the native XML database also minimizes the need for extraction or entry of metadata to support searching and navigation. XML-enabled databases XML-enabled databases typically offer one or more of the following approaches to storing XML within the traditional relational structure: XML is stored into a CLOB (Character large object) XML is `shredded` into a series of Tables based on a Schema XML is stored into a native XML Type as defined by ISO Standard 9075-14 RDBMS that support the ISO XML Type are: IBM DB2 (pureXML) Microsoft SQL Server Oracle Database PostgreSQL Typically an XML-enabled database is
https://en.wikipedia.org/wiki/Transpose%20of%20a%20linear%20map
In linear algebra, the transpose of a linear map between two vector spaces, defined over the same field, is an induced map between the dual spaces of the two vector spaces. The transpose or algebraic adjoint of a linear map is often used to study the original linear map. This concept is generalised by adjoint functors. Definition Let denote the algebraic dual space of a vector space Let and be vector spaces over the same field If is a linear map, then its algebraic adjoint or dual, is the map defined by The resulting functional is called the pullback of by The continuous dual space of a topological vector space (TVS) is denoted by If and are TVSs then a linear map is weakly continuous if and only if in which case we let denote the restriction of to The map is called the transpose or algebraic adjoint of The following identity characterizes the transpose of : where is the natural pairing defined by Properties The assignment produces an injective linear map between the space of linear operators from to and the space of linear operators from to If then the space of linear maps is an algebra under composition of maps, and the assignment is then an antihomomorphism of algebras, meaning that In the language of category theory, taking the dual of vector spaces and the transpose of linear maps is therefore a contravariant functor from the category of vector spaces over to itself. One can identify with using the natural injection into the double dual. If and are linear maps then If is a (surjective) vector space isomorphism then so is the transpose If and are normed spaces then and if the linear operator is bounded then the operator norm of is equal to the norm of ; that is and moreover, Polars Suppose now that is a weakly continuous linear operator between topological vector spaces and with continuous dual spaces and respectively. Let denote the canonical dual system, defined by where and are s
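The defining formulas were stripped from this extract; as a hedged restatement of the standard definitions (with a star marking the algebraic dual, a notational choice made here rather than one quoted from the article), the transpose and its characterizing identity read:

```latex
% For a linear map u : X \to Y, its transpose (algebraic adjoint) acts on
% linear functionals by pullback:
{}^{t}u : Y^{*} \to X^{*}, \qquad {}^{t}u(f) = f \circ u
% Characterizing identity with the natural pairing \langle f, x \rangle = f(x):
\langle {}^{t}u(f), \, x \rangle = \langle f, \, u(x) \rangle
    \qquad \text{for all } f \in Y^{*},\; x \in X
% The assignment u \mapsto {}^{t}u reverses composition (an antihomomorphism):
{}^{t}(u \circ v) = {}^{t}v \circ {}^{t}u
```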
https://en.wikipedia.org/wiki/Complex%20conjugate%20of%20a%20vector%20space
In mathematics, the complex conjugate of a complex vector space is a complex vector space , which has the same elements and additive group structure as but whose scalar multiplication involves conjugation of the scalars. In other words, the scalar multiplication of satisfies where is the scalar multiplication of and is the scalar multiplication of The letter stands for a vector in is a complex number, and denotes the complex conjugate of More concretely, the complex conjugate vector space is the same underlying vector space (same set of points, same vector addition and real scalar multiplication) with the conjugate linear complex structure (different multiplication by ). Motivation If and are complex vector spaces, a function is antilinear if With the use of the conjugate vector space , an antilinear map can be regarded as an ordinary linear map of type The linearity is checked by noting: Conversely, any linear map defined on gives rise to an antilinear map on This is the same underlying principle as in defining opposite ring so that a right -module can be regarded as a left -module, or that of an opposite category so that a contravariant functor can be regarded as an ordinary functor of type Complex conjugation functor A linear map gives rise to a corresponding linear map which has the same action as Note that preserves scalar multiplication because Thus, complex conjugation and define a functor from the category of complex vector spaces to itself. If and are finite-dimensional and the map is described by the complex matrix with respect to the bases of and of then the map is described by the complex conjugate of with respect to the bases of and of Structure of the conjugate The vector spaces and have the same dimension over the complex numbers and are therefore isomorphic as complex vector spaces. However, there is no natural isomorphism from to The double conjugate is identical to Complex conjugate of a
https://en.wikipedia.org/wiki/Sequence%20homology
Sequence homology is the biological homology between DNA, RNA, or protein sequences, defined in terms of shared ancestry in the evolutionary history of life. Two segments of DNA can have shared ancestry because of three phenomena: either a speciation event (orthologs), or a duplication event (paralogs), or else a horizontal (or lateral) gene transfer event (xenologs). Homology among DNA, RNA, or proteins is typically inferred from their nucleotide or amino acid sequence similarity. Significant similarity is strong evidence that two sequences are related by evolutionary changes from a common ancestral sequence. Alignments of multiple sequences are used to indicate which regions of each sequence are homologous. Identity, similarity, and conservation The term "percent homology" is often used to mean "sequence similarity": the percentage of identical residues (percent identity), or the percentage of residues conserved with similar physicochemical properties (percent similarity, e.g. leucine and isoleucine), is used to "quantify the homology." Based on the definition of homology specified above, this terminology is incorrect, since sequence similarity is the observation and homology is the conclusion. Sequences are either homologous or not. It follows that the term "percent homology" is a misnomer. As with morphological and anatomical structures, sequence similarity might occur because of convergent evolution, or, as with shorter sequences, by chance, meaning that they are not homologous. Homologous sequence regions are also called conserved. This is not to be confused with conservation in amino acid sequences, where the amino acid at a specific position has been substituted with a different one that has functionally equivalent physicochemical properties. Partial homology can occur where a segment of the compared sequences has a shared origin, while the rest does not. Such partial homology may result from a gene fusion event. Orthology Homologous s
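Since percent identity is defined operationally as the fraction of matching residues in an alignment, a minimal sketch may help; the toy sequences below are invented for the example and carry no biological meaning.

```python
# Percent identity of two sequences that have already been aligned to the
# same length (gaps would normally appear as '-' characters).
def percent_identity(seq_a: str, seq_b: str) -> float:
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must already be aligned to the same length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

print(percent_identity("MKTAYIAKQR", "MKTAHIAKQR"))  # -> 90.0 (9 of 10 identical)
```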
https://en.wikipedia.org/wiki/Sintran
Sintran (a portmanteau of SINTEF and Fortran; stylized as SINTRAN) is a range of operating systems (OS) for Norsk Data's line of minicomputers. The original version of Sintran, was written in the programming language Fortran, released in 1968, and developed by the Department of Engineering Cybernetics at the Norwegian Institute of Technology together with the affiliated research institute, SINTEF. The different incarnations of the OS shared only name, and to a degree, purpose. Norsk Data took part in developing Sintran II, a multi-user software system that constituted the software platform for the Nord-1 range of terminal servers. By far the most common version of the OS was Sintran III, developed solely by Norsk Data and launched in 1974. This real-time multitasking system was used for Norsk Data's server machines (such as the Nord-10, -100) for the remainder of the company's lifetime, i.e. until 1992. See also Sintran III Fortran software Proprietary operating systems Norsk Data software
https://en.wikipedia.org/wiki/Dave%20Bayer
David Allen Bayer (born November 29, 1955) is an American mathematician known for his contributions in algebra and symbolic computation and for his consulting work in the movie industry. He is a professor of mathematics at Barnard College, Columbia University. Education and career Bayer was educated at Swarthmore College as an undergraduate, where he attended a course on combinatorial algorithms given by Herbert Wilf. During that semester, Bayer related several original ideas to Wilf on the subject. These contributions were later incorporated into the second edition of Wilf and Albert Nijenhuis' influential book Combinatorial Algorithms, with a detailed acknowledgement by its authors. Bayer subsequently earned his Ph.D. at Harvard University in 1982 under the direction of Heisuke Hironaka with a dissertation entitled The Division Algorithm and the Hilbert Scheme. He joined Columbia University thereafter. Bayer is the son of Joan and Bryce Bayer, the inventor of the Bayer filter. Contributions Bayer has worked in various areas of algebra and symbolic computation, including Hilbert functions, Betti numbers, and linear programming. He has written a number of highly cited papers in these areas with other notable mathematicians, including Bernd Sturmfels, Jeffrey Lagarias, Persi Diaconis, Irena Peeva, and David Eisenbud. Bayer is one of ten individuals cited in the white paper published by the pseudonymous Satoshi Nakamoto describing the technological underpinnings of Bitcoin. He is cited as a co-author, along with Stuart Haber and W. Scott Stornetta, of a paper to improve on a system for tamper-proofing timestamps by incorporating Merkle trees. Consulting Bayer was a mathematics consultant for the film A Beautiful Mind, the biopic of John Nash, and also had a cameo as one of the "Pen Ceremony" professors. References External links Bayer's homepage at Columbia University Dave and Beautiful Math at Swarthmore College Bulletin Living people 1955 births Mathematic
https://en.wikipedia.org/wiki/Polar%20vortex
A circumpolar vortex, or simply polar vortex, is a large region of cold, rotating air that encircles both of Earth's polar regions. Polar vortices also exist on other rotating, low-obliquity planetary bodies. The term polar vortex can be used to describe two distinct phenomena; the stratospheric polar vortex, and the tropospheric polar vortex. The stratospheric and tropospheric polar vortices both rotate in the direction of the Earth's spin, but they are distinct phenomena that have different sizes, structures, seasonal cycles, and impacts on weather. The stratospheric polar vortex is an area of high-speed, cyclonically rotating winds around 15 km to 50 km high, poleward of 50°, and is strongest in winter. It forms in Autumn when Arctic or Antarctic temperatures cool rapidly as the polar night begins. The increased temperature difference between the pole and the tropics causes strong winds, and the Coriolis effect causes the vortex to spin up. The stratospheric polar vortex breaks down in Spring as the polar night ends. A sudden stratospheric warming (SSW) is an event that occurs when the stratospheric vortex breaks down during winter, and can have significant impacts on surface weather. The tropospheric polar vortex is often defined as the area poleward of the tropospheric jet stream. The equatorward edge is around 40° to 50°, and it extends from the surface up to around 10 km to 15 km. Its yearly cycle differs from the stratospheric vortex because the tropospheric vortex exists all year, but is similar to the stratospheric vortex since it is also strongest in winter when the polar regions are coldest. The tropospheric polar vortex was first described as early as 1853. The stratospheric vortex's SSWs were discovered in 1952 with radiosonde observations at altitudes higher than 20 km. The tropospheric polar vortex was mentioned frequently in the news and weather media in the cold North American winter of 2013–2014, popularizing the term as an explanation of very
https://en.wikipedia.org/wiki/Executive%20One
Executive One is the call sign designated for any United States civil aircraft when the president of the United States is on board. Typically, the president flies in military aircraft that are under the command of the Presidential Airlift Group, which include Air Force One, Marine One, Army One, Navy One and Coast Guard One. On December 26, 1973, then-President Richard Nixon became the only sitting president to travel on a regularly scheduled commercial airline flight when he flew on United Airlines flight 55 from Washington Dulles International Airport to Los Angeles International Airport, "to set an example for the rest of the nation during the current energy crisis" and to "demonstrate his confidence in the airlines". Nixon, first lady Pat, daughter Tricia, and 22 staffers, security, and pool purchased 13 first-class tickets at $217.64, and 12 coach tickets at $167.64 aboard the DC-10 on what is traditionally not a very busy flight, and quietly boarded the plane without fanfare in order to maintain security for the flight prior to departure. A Nixon aide carried a suitcase-sized secure communication device on board the plane, so that the President could remain in contact with Washington in the event of an emergency. If the president's family members are aboard, but not the president himself, the flight can, at the discretion of the White House staff or Secret Service, use the callsign Executive One Foxtrot (EXEC1F). "Foxtrot" is the phonetic alphabet designation for the letter "F", with that being the first letter of "family". The military helicopter that normally has the call sign "Marine One" is assigned the "Executive One" call sign when it transports the outgoing president on their final flight from the Capitol, after the inauguration of their successor, as was done on January 20, 2009 for George W. Bush and on January 20, 2017 for Barack Obama. Executive Two Executive Two is the call sign designated for any United States civil aircraft when the vice pre
https://en.wikipedia.org/wiki/Scott%20Draves
Scott Draves is the inventor of fractal flames and the leader of the distributed computing project Electric Sheep. He also invented patch-based texture synthesis and published the first implementation of this class of algorithms. He is also a video artist and accomplished VJ. In summer 2010, Draves' work was exhibited at Google's New York City office, including his video piece "Generation 243" which was generated by the collaborative influences of 350,000 people and computers worldwide. Stephen Hawking's 2010 book The Grand Design used an image generated by Draves' "flame" algorithm on its cover. Known as "Spot," Draves currently resides in New York City. In July 2012 Draves won the ZKM App Art Award Special Prize for Cloud Art for the mobile Android version of Electric Sheep. Background Draves earned a Bachelor's in mathematics at Brown University, where he was a student of Andy van Dam before continuing on to earn a PhD in computer science at Carnegie Mellon University. At CMU he studied under Andy Witkin, Dana Scott, and Peter Lee. References External links Computer programmers American digital artists Fractal artists Psychedelic artists American video artists Brown University alumni Carnegie Mellon University alumni Living people 1968 births Mathematical artists
https://en.wikipedia.org/wiki/FreeBASIC
FreeBASIC is a free and open source multiplatform compiler and programming language based on BASIC licensed under the GNU GPL for Microsoft Windows, protected-mode MS-DOS (DOS extender), Linux, FreeBSD and Xbox. The Xbox version is no longer maintained. According to its official website, FreeBASIC provides syntax compatibility with programs originally written in Microsoft QuickBASIC (QB). Unlike QuickBASIC, however, FreeBASIC is a command line only compiler, unless users manually install an external integrated development environment (IDE) of their choice. IDEs specifically made for FreeBASIC include FBide and FbEdit, while more graphical options include WinFBE Suite and VisualFBEditor. Compiler features On its backend, FreeBASIC makes use of GNU Binutils in order to produce console and graphical user interface applications. FreeBASIC supports the linking and creation of C static and dynamic libraries and has limited support for C++ libraries. As a result, code compiled in FreeBASIC can be reused in most native development environments. C style preprocessing, including multiline macros, conditional compiling and file inclusion, is supported. The preprocessor also has access to symbol information and compiler settings, such as the language dialect. Syntax Initially, FreeBASIC emulated Microsoft QuickBASIC syntax as closely as possible. Beyond that, the language has continued its evolution. As a result, FreeBASIC combines several language dialects for maximum level of compatibility with QuickBASIC and full access to modern features. New features include support for concepts such as objects, operator overloading, function overloading, namespaces and others. Newline characters indicate the termination of programming statements. A programming statement can be distributed on multiple consecutive lines by using the underscore line continuation char (_), whereas multiple statements may be written on a single line by separating each statement with a colon (:).
https://en.wikipedia.org/wiki/Internationalized%20Resource%20Identifier
The Internationalized Resource Identifier (IRI) is an internet protocol standard which builds on the Uniform Resource Identifier (URI) protocol by greatly expanding the set of permitted characters. It was defined by the Internet Engineering Task Force (IETF) in 2005 in RFC 3987. While URIs are limited to a subset of the US-ASCII character set (characters outside that set must be mapped to octets according to some unspecified character encoding, then percent-encoded), IRIs may additionally contain most characters from the Universal Character Set (Unicode/ISO 10646), including Chinese, Japanese, Korean, and Cyrillic characters. Syntax IRIs extend URIs by using the Universal Character Set, where URIs were limited to ASCII, with far fewer characters. IRIs may be represented by a sequence of octets but by definition are defined as a sequence of characters, because IRIs may be spoken or written by hand. Compatibility IRIs are mapped to URIs to retain backwards-compatibility with systems that do not support the new format. For applications and protocols that do not allow direct consumption of IRIs, the IRI should first be converted to Unicode using canonical composition normalization (NFC), if not already in Unicode format. All non-ASCII code points in the IRI should next be encoded as UTF-8, and the resulting bytes percent-encoded, to produce a valid URI. Example: The IRI https://en.wiktionary.org/wiki/Ῥόδος becomes the URI https://en.wiktionary.org/wiki/%E1%BF%AC%CF%8C%CE%B4%CE%BF%CF%82 ASCII code points that are invalid URI characters may be encoded the same way, depending on implementation. This conversion is easily reversible; by definition, converting an IRI to an URI and back again will yield an IRI that is semantically equivalent to the original IRI, even though it may differ in exact representation. Some protocols may impose further transformations; e.g. Punycode for DNS labels. Advantages There are reasons to see URIs displayed in different language
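The conversion procedure described above (NFC normalization, UTF-8 encoding, then percent-encoding) can be sketched in a few lines of Python; the set of characters left unescaped below is an assumption made for the example, not a normative list from RFC 3987.

```python
import unicodedata
from urllib.parse import quote

def iri_to_uri(iri: str) -> str:
    """Convert an IRI to a URI: apply NFC normalization, encode as UTF-8,
    and percent-encode the resulting non-ASCII bytes."""
    normalized = unicodedata.normalize("NFC", iri)
    # Leave ASCII characters that commonly appear unescaped in URIs alone;
    # '%' is kept so already-encoded sequences are not double-encoded.
    return quote(normalized, safe=":/?#[]@!$&'()*+,;=%-._~")

print(iri_to_uri("https://en.wiktionary.org/wiki/Ῥόδος"))
# Expected to match the percent-encoded URI quoted in the example above.
```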
https://en.wikipedia.org/wiki/Rally-X
Rally-X is a maze chase arcade video game developed in Japan by Namco and released in 1980. In North America, it was distributed by Midway Manufacturing, and in Europe by Karateco. Players drive a blue Formula One race car through a multidirectional scrolling maze to collect yellow flags. Boulders block some paths and must be avoided. Red enemy cars pursue the player in an attempt to collide with them. Red cars can be temporarily stunned by laying down smoke screens at the cost of fuel. Rally-X is one of the first games with bonus stages and continuously playing background music. Rally-X was designed as a successor to Sega's Head On (1979), an earlier maze chase game with cars. It was a commercial success in Japan, where it was the sixth highest-grossing game of 1980, but Midway Manufacturing released the game in North America with largely underwhelming results. An often-repeated, though untrue, story about its demonstration at the 1980 Amusement & Music Operators Association trade show claims that the attending press believed Rally-X to be of superior quality to the other games presented, specifically Pac-Man. Though it was well received by attendees, Rally-X failed to attract much attention during its presentation. Reception for Rally-X, both at release and retrospectively, has highlighted its technological accomplishments and high difficulty. Some reviewers have found it to be influential and ahead of its time. Rally-X received several remakes and sequels, beginning with the slightly tweaked New Rally-X in 1981. It is also included in several Namco compilations. Gameplay Rally-X is a maze chase game where the player controls a blue Formula One racecar. The objective is to collect yellow flags that are scattered around an enclosed maze while avoiding collision with red-colored cars that pursue the player. Mazes scroll in the four cardinal directions and are clustered with dead ends, long corridors, and stationary boulders that are harmful to the player. Each l
https://en.wikipedia.org/wiki/Extensible%20programming
Extensible programming is a term used in computer science to describe a style of computer programming that focuses on mechanisms to extend the programming language, compiler and runtime environment. Extensible programming languages, supporting this style of programming, were an active area of work in the 1960s, but the movement was marginalized in the 1970s. Extensible programming has become a topic of renewed interest in the 21st century. Historical movement The first paper usually associated with the extensible programming language movement is M. Douglas McIlroy's 1960 paper on macros for higher-level programming languages. Another early description of the principle of extensibility occurs in Brooker and Morris's 1960 paper on the Compiler-Compiler. The peak of the movement was marked by two academic symposia, in 1969 and 1971. By 1975, a survey article on the movement by Thomas A. Standish was essentially a post mortem. The Forth programming language was an exception, but it went essentially unnoticed. Character of the historical movement As typically envisioned, an extensible programming language consisted of a base language providing elementary computing facilities, and a meta-language capable of modifying the base language. A program then consisted of meta-language modifications and code in the modified base language. The most prominent language-extension technique used in the movement was macro definition. Grammar modification was also closely associated with the movement, resulting in the eventual development of adaptive grammar formalisms. The Lisp language community remained separate from the extensible language community, apparently because, as one researcher observed, any programming language in which programs and data are essentially interchangeable can be regarded as an extendible [sic] language. ... this can be seen very easily from the fact that Lisp has been used as an extendible language for years. At the 1969 conference, Simula was
https://en.wikipedia.org/wiki/Zilog%20Z8
The Zilog Z8 is a microcontroller architecture, originally introduced in 1979, which today also includes the Z8 Encore!, eZ8 Encore!, eZ8 Encore! XP, and eZ8 Encore! MC families. Distinguishing features of the architecture are up to 4,096 fast on-chip registers, which may be used as accumulators, pointers, or as ordinary random-access memory (RAM). A 16-bit address space for between 1 kibibyte (KiB) and 64 KiB of either programmable read-only memory (PROM, OTP), read-only memory (ROM), or flash memory is used to store code and constants, and there is a second 16-bit address space which can be used for large applications. On-chip peripherals include analog-to-digital converter (A/D), Serial Peripheral Interface (SPI) and Inter-Integrated Circuit (I²C) channels, IrDA encoders/decoders, etc. There are versions with between 8 and 80 pins, housed in dual in-line package (PDIP), Quad Flat No-leads package (MicroLeadFrame, MLF), small-outline integrated circuit (SOIC), Shrink Small-Outline Package (SSOP), and low-profile Quad Flat Package (LQFP). The eZ8 Encore! series can be programmed and debugged through a single-pin serial communication interface. The basic architecture, a modified (non-strict) Harvard architecture, is technically very different from that of the Zilog Z80. Despite this, the instruction set and assembly language syntax are quite similar to those of other Zilog processors: load/store operations use the same LD mnemonic (no MOV or MOVEs), characteristic instructions such as DJNZ are the same, and so on. An integrated development environment (IDE) named Zilog Developer's Studio (ZDS) can be downloaded from Zilog's website, including an assembler. The edition of ZDS II targeting the Z8 Encore! and newer derivatives also includes a free compiler claiming ANSI C89 compliance. Primary competitors include the somewhat similar Microchip Technology PIC family and all Intel 8051 descendants. Also, more traditional von Neumann architecture based single-chip microcontrollers may be regarded as
https://en.wikipedia.org/wiki/Automotive%20navigation%20system
An automotive navigation system is part of the automobile controls or a third-party add-on used to find direction in an automobile. It typically uses a satellite navigation device to get its position data, which is then correlated to a position on a road. When directions are needed, routing can be calculated. On-the-fly traffic information (road closures, congestion) can be used to adjust the route. Dead reckoning using distance data from sensors attached to the drivetrain, an accelerometer, a gyroscope, and a magnetometer can be used for greater reliability, as GNSS signal loss and/or multipath can occur due to urban canyons or tunnels. Mathematically, automotive navigation is based on the shortest path problem within graph theory, which examines how to identify the path that best meets some criteria (shortest, cheapest, fastest, etc.) between two points in a large network (a minimal example is sketched after this entry). Automotive navigation systems are crucial for the development of self-driving cars. History Automotive navigation systems represent a convergence of a number of diverse technologies, many of which had been available for years but were too costly or inaccessible. Limitations such as batteries, displays, and processing power had to be overcome before the product became commercially viable. 1961: Hidetsugu Yagi designed a wireless-based navigation system. This design was still primitive and intended for military use. 1966: General Motors Research (GMR) was working on a non-satellite-based navigation and assistance system called DAIR (Driver Aid, Information & Routing). After initial tests, GM found that it was not a scalable or practical way to provide navigation assistance. Decades later, however, the concept would be reborn as OnStar (founded 1996). 1973: Japan's Ministry of International Trade and Industry (MITI) and Fuji Heavy Industries sponsored CATC (Comprehensive Automobile Traffic Control), a Japanese research project on automobile navigation systems. 1979: MITI established JSK (A
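To make the shortest path computation concrete, here is a minimal Python sketch using Dijkstra's algorithm on a toy road graph. The junction names and edge costs are invented for illustration; a production routing engine works on a real road network and typically uses more elaborate techniques (e.g. A* or precomputed hierarchies), with edge costs representing distance, expected travel time, or another criterion.

```python
import heapq

def shortest_path(graph, source, target):
    """Dijkstra's algorithm: graph maps a node to a list of (neighbour, cost) edges."""
    dist = {source: 0.0}
    prev = {}
    pq = [(0.0, source)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == target:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, already improved
        for neighbour, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                prev[neighbour] = node
                heapq.heappush(pq, (nd, neighbour))
    # Reconstruct the route by walking predecessors back from the target.
    path, node = [], target
    while node in prev:
        path.append(node)
        node = prev[node]
    path.append(source)
    return list(reversed(path)), dist.get(target, float("inf"))

# Toy road network (hypothetical junctions and costs).
roads = {
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("C", 1.0), ("D", 4.0)],
    "C": [("D", 1.0)],
}
print(shortest_path(roads, "A", "D"))  # -> (['A', 'B', 'C', 'D'], 4.0)
```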
https://en.wikipedia.org/wiki/Volodymyr%20Savchenko%20%28writer%29
Vladimir Ivanovich Savchenko was a Soviet Ukrainian science fiction writer and engineer. Born on February 15, 1933, in Poltava, he studied at the Moscow Power Engineering Institute and was an electronics engineer. Savchenko, who wrote in Russian, published his first short stories in the late 1950s, and his first novel (Black Stars) in 1960. His works were often self-published. Savchenko also authored several texts about physics and engineering, including the article "Sixteen New Formulas of Physics and Cosmology," which he considered to be his most important scientific text. Savchenko's works have been published in 29 countries and have been translated into many of the world's languages. He was found dead on January 19, 2005, in Kyiv, at the age of 71. Biography Savchenko was a graduate of the Moscow Power Engineering Institute. He worked at the V.M. Glushkov Institute of Cybernetics in Kyiv. His first publication, "Toward the Stars" (1955), identified the author as an advocate of science fiction interested in exploring the heuristic potential of the personality. In 1956, Savchenko's story "The Awakening of Professor Bern" was published. Of his publications in his native Ukrainian language, the best known is the story "The Ghost of Time" (1964). In the novel Black Stars, Savchenko investigated the boundaries of traditional science, putting forward original hypotheses. In particular, Savchenko's 1959 novel The Second Expedition to the Strange Planet (known in English as The Second Oddball Expedition) explored the political nuances revealed by contact with crystalline forms of life. Savchenko positioned himself as an adherent of the cybernetic view of society and the living organism, consistently developing different aspects of the process of self-discovery. After the 1967 publication of the novel Self-Discovery, in which Savchenko warned about the ethical problems involved in the creation of clones, Savchenko occupied the leading
https://en.wikipedia.org/wiki/Network%20emulation
Network emulation is a technique for testing the performance of real applications over a virtual network. This differs from network simulation, where virtual models of traffic, networks, channels, and protocols are applied. The aim is to assess performance, predict the impact of change, or otherwise optimize technology decision-making. Methods of emulation Network emulation is the act of testing the behavior of a network (5G, wireless, MANETs, etc.) in a lab. A personal computer or virtual machine runs software to perform the network emulation; a dedicated emulation device is sometimes used for link emulation. Networks introduce delay and errors, and drop packets. The primary goal of network emulation is to create an environment whereby users can connect the devices, applications, products, and/or services being tested to validate their performance, stability, or functionality against real-world network scenarios. Once tested in a controlled environment against actual network conditions, users can have confidence that the item being tested will perform as expected. Emulation, simulation, and traffic generation Emulation differs from simulation in that a network emulator appears to be a network; end systems such as computers can be attached to the emulator and will behave as if they are attached to a network. A network emulator mirrors the network which connects end systems, not the end systems themselves. Network simulators are typically programs that run on a single computer, take an abstract description of the network traffic such as a flow arrival process, and yield performance statistics such as throughput, delay, loss, etc. These products are typically found in the development and QA environments of service providers, network equipment manufacturers, and enterprises. Network emulation software Software developers typically want to analyze the response time and sensitivity to packet loss of client-server applications and emulate specific network eff
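To illustrate link emulation in the simplest terms (this sketch is not taken from any product mentioned above), the Python snippet below forwards UDP datagrams between two local endpoints while injecting a fixed delay and random packet loss. The addresses, delay, and loss rate are illustrative assumptions; real emulators schedule deliveries asynchronously and also model bandwidth limits, jitter, and reordering.

```python
import random
import socket
import time

# Illustrative parameters (assumptions, not from the article).
LISTEN_ADDR = ("127.0.0.1", 9000)   # traffic under test is sent here
DELIVER_ADDR = ("127.0.0.1", 9001)  # the "far end" of the emulated link
ONE_WAY_DELAY_S = 0.100             # 100 ms latency
LOSS_RATE = 0.05                    # 5% packet loss

def emulate_link() -> None:
    """Forward UDP datagrams while adding delay and random loss."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(LISTEN_ADDR)
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        data, _src = rx.recvfrom(65535)
        if random.random() < LOSS_RATE:
            continue                 # silently drop the packet
        time.sleep(ONE_WAY_DELAY_S)  # crude delay; serializes packets
        tx.sendto(data, DELIVER_ADDR)

if __name__ == "__main__":
    emulate_link()
```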
https://en.wikipedia.org/wiki/Temperature%20gradient
A temperature gradient is a physical quantity that describes in which direction and at what rate the temperature changes most rapidly around a particular location. The temperature gradient is a dimensional quantity expressed in units of degrees (on a particular temperature scale) per unit length. The SI unit is kelvin per meter (K/m). Temperature gradients in the atmosphere are important in the atmospheric sciences (meteorology, climatology and related fields). Mathematical description Assuming that the temperature T is an intensive quantity, i.e., a single-valued, continuous and differentiable function of three-dimensional space (often called a scalar field), i.e., that T = T(x, y, z), where x, y and z are the coordinates of the location of interest, then the temperature gradient is the vector quantity ∇T = (∂T/∂x, ∂T/∂y, ∂T/∂z). Physical processes Climatology On a global and annual basis, the dynamics of the atmosphere (and the oceans) can be understood as attempting to reduce the large difference of temperature between the poles and the equator by redistributing warm and cold air and water, a process known as Earth's heat engine. Meteorology Differences in air temperature between different locations are critical in weather forecasting and climate. The absorption of solar light at or near the planetary surface increases the temperature gradient and may result in convection (a major process of cloud formation, often associated with precipitation). Meteorological fronts are regions where the horizontal temperature gradient may reach relatively high values, as these are boundaries between air masses with rather distinct properties. The temperature gradient may also change substantially in time, for example as a result of diurnal or seasonal heating and cooling. For instance, during a temperature inversion the air at ground level may be colder than the air higher up in the atmosphere. As the day shifts over to night the temperature might drop rapidly while
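The display equations dropped from this extract are the standard definitions; restated in LaTeX, together with a worked one-dimensional example using illustrative numbers (a 5 K drop over 100 km, roughly the scale seen across a front):

```latex
% Gradient of the scalar temperature field T(x, y, z):
\nabla T = \left( \frac{\partial T}{\partial x},\ \frac{\partial T}{\partial y},\ \frac{\partial T}{\partial z} \right)

% Worked example (illustrative numbers): a 5 K temperature drop over a
% horizontal distance of 100 km gives a gradient magnitude of
\left| \frac{\Delta T}{\Delta x} \right|
  = \frac{5\ \mathrm{K}}{1 \times 10^{5}\ \mathrm{m}}
  = 5 \times 10^{-5}\ \mathrm{K/m}
```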
https://en.wikipedia.org/wiki/Network%20simulation
In computer network research, network simulation is a technique whereby a software program replicates the behavior of a real network. This is achieved by calculating the interactions between the different network entities such as routers, switches, nodes, access points, links, etc. Most simulators use discrete event simulation, in which the state variables of the modeled system change only at discrete points in time. The behavior of the network and the various applications and services it supports can then be observed in a test lab; various attributes of the environment can also be modified in a controlled manner to assess how the network and protocols would behave under different conditions. Network simulator A network simulator is a software program that can predict the performance of a computer network or a wireless communication network. Since communication networks have become too complex for traditional analytical methods to provide an accurate understanding of system behavior, network simulators are used. In simulators, the computer network is modeled with devices, links, applications, etc., and the network performance is reported. Simulators come with support for the most popular technologies and networks in use today, such as 5G, Internet of Things (IoT), wireless LANs, mobile ad hoc networks, wireless sensor networks, vehicular ad hoc networks, cognitive radio networks, LTE, etc. Simulations Most commercial simulators are GUI driven, while some network simulators are CLI driven. The network model/configuration describes the network (nodes, routers, switches, links) and the events (data transmissions, packet errors, etc.). Output results include network-level metrics, link metrics, device metrics, etc. Further drill-down, in the form of simulation trace files, is also available. Trace files log every packet and every event that occurred in the simulation and are used for analysis. Most network simulators use discrete event simulation, in which a lis
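As a minimal illustration of the discrete event simulation approach (not based on any particular simulator named here), the Python sketch below models packets arriving at a single link with a fixed transmission time and reports their average delay; the arrival rate and service time are invented for the example.

```python
import heapq
import random

random.seed(1)

# Illustrative parameters (assumptions, not taken from the article).
NUM_PACKETS = 10_000
ARRIVAL_RATE = 800.0   # mean packets per second (Poisson arrivals)
SERVICE_TIME = 0.001   # seconds to transmit one packet on the link

# Event list: a priority queue of (time, kind, packet_id) tuples.
events = []
t = 0.0
for pid in range(NUM_PACKETS):
    t += random.expovariate(ARRIVAL_RATE)
    heapq.heappush(events, (t, "arrival", pid))

link_free_at = 0.0   # state variable: when the link finishes its current packet
delays = []

while events:
    now, kind, pid = heapq.heappop(events)
    if kind == "arrival":
        start = max(now, link_free_at)       # wait if the link is busy
        link_free_at = start + SERVICE_TIME  # state changes only at event times
        heapq.heappush(events, (link_free_at, "departure", pid))
        delays.append(link_free_at - now)    # queueing + transmission delay
    # "departure" events need no handling in this tiny model.

print(f"average packet delay: {1000 * sum(delays) / len(delays):.3f} ms")
```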
https://en.wikipedia.org/wiki/Institute%20of%20Biosciences%20and%20Technology
The Texas A&M Institute of Biosciences and Technology (IBT), a component of Texas A&M Health and The Texas A&M University System, is located in the world's largest medical center, the Texas Medical Center, in Houston, Texas. The institute provides a bridge between Texas A&M University System scientists and other institutions' researchers working in the Texas Medical Center and the biomedical and biotechnology research community in Houston. It emphasizes collaboration between member scientists and others working in all the fields of the biosciences and biotechnology. IBT encourages its scientists to transfer discoveries made in their laboratories to the clinic and marketplace. History The IBT was first established as part of the Texas A&M University System with the broad goal of expanding biotechnology in the system. Plans for an institutional campus in Houston came to fruition in 1986 under Mr. David Eller (Chairman of the Texas A&M University System Board of Regents) and Dr. Richard Wainerdi (Chairman and CEO of the Texas Medical Center). Dr. Eugene G. Sander, head of the Department of Biochemistry and Biophysics in the Texas A&M College of Agriculture and Life Sciences (COALS), was named the first director of the institute. The Robert A. Welch Foundation endowed a $1 million Robert A. Welch Chair in 1990. In 1988 the institute administration was moved from the Texas A&M University System to Texas A&M University under the leadership of Dr. Charles Arntzen, Vice Chancellor of Agriculture and Dean of the College of Life Sciences and Agriculture (COALS). Arntzen secured a $12.5 million grant from the United States Department of Agriculture as well as private donations, including one from Texas oilman Albert B. Alkek, to build the Albert B. Alkek Institute of Biosciences and Technology Building in the Texas Medical Center at a cost of $21.5 million. The 11-story building stands on the former site of the historic Shamrock Hotel. Under Arntzen's leadership, Centers