https://en.wikipedia.org/wiki/FreeS/WAN
FreeS/WAN, for Free Secure Wide-Area Networking, was a free software project, which implemented a reference version of the IPsec network security layer for Linux. The project goal of ubiquitous opportunistic encryption of Internet traffic was not realized, although it did contribute to general Internet encryption. The project was founded by John Gilmore, and administered for most of its duration by Hugh Daniel. John Ioannidis and Angelos Keromytis started the codebase while outside the United States of America prior to autumn 1997. Technical lead for the project was Henry Spencer, and later Michael Richardson. The IKE keying daemon (pluto) was maintained by D. Hugh Redelmeier while the IPsec kernel module (KLIPS) was maintained by Richard Guy Briggs. Sandy Harris was the main documentation person for most of the project, later Claudia Schmeing. The final FreeS/WAN version 2.06 was released on 22 April 2004. The earlier version 2.04 was forked to form two projects, Openswan and strongSwan. Openswan has since (2012) been forked to Libreswan. External links Project website Documentation Free security software History of software IPsec
https://en.wikipedia.org/wiki/Log%E2%80%93log%20plot
In science and engineering, a log–log graph or log–log plot is a two-dimensional graph of numerical data that uses logarithmic scales on both the horizontal and vertical axes. Power functions – relationships of the form y = ax^k – appear as straight lines in a log–log graph, with the exponent k corresponding to the slope and the coefficient a corresponding to the intercept. Thus these graphs are very useful for recognizing these relationships and estimating parameters. Any base can be used for the logarithm, though most commonly base 10 (common logs) is used. Relation with monomials Given a monomial equation y = ax^k, taking the logarithm of the equation (with any base) yields log y = k log x + log a. Setting X = log x and Y = log y, which corresponds to using a log–log graph, yields the equation Y = mX + b, where m = k is the slope of the line (gradient) and b = log a is the intercept on the (log y)-axis, meaning where log x = 0, so, reversing the logs, a is the y value corresponding to x = 1. Equations The equation for a line on a log–log scale would be log10 F(x) = m log10 x + b, where m is the slope and b is the intercept point on the log plot. Slope of a log–log plot To find the slope of the plot, two points are selected on the x-axis, say x1 and x2. Using the above equation, log10 F1 = m log10 x1 + b and log10 F2 = m log10 x2 + b. The slope m is found taking the difference: m = (log10 F1 − log10 F2) / (log10 x1 − log10 x2) = log10(F1/F2) / log10(x1/x2), where F1 is shorthand for F(x1) and F2 is shorthand for F(x2). The figure at right illustrates the formula. Notice that the slope in the example of the figure is negative. The formula also provides a negative slope, as can be seen from the following property of the logarithm: log10(x1/x2) = −log10(x2/x1). Finding the function from the log–log plot The above procedure now is reversed to find the form of the function F(x) using its (assumed) known log–log plot. To find the function F, pick some fixed point (x0, F0), where F0 is shorthand for F(x0), somewhere on the straight line in the above graph, and further some other arbitrary point (x1, F1) on the same graph. Then from the slope formula above, m = log10(F1/F0) / log10(x1/x0), which leads to log10 F1 = m log10(x1/x0) + log10 F0. Notice that 10^(log10 F1) = F1. Therefore
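The two-point slope calculation described above translates directly into code. This is a minimal sketch, not taken from the article; the sample points are invented for illustration:

```python
import math

def power_law_from_two_points(x1, F1, x2, F2):
    """Recover the exponent m and coefficient a of F(x) = a * x**m
    from two points read off a straight line on a log-log plot."""
    m = (math.log10(F1) - math.log10(F2)) / (math.log10(x1) - math.log10(x2))
    a = F1 / x1 ** m          # since F1 = a * x1**m
    return m, a

# Two points taken from a hypothetical straight line on a log-log plot
m, a = power_law_from_two_points(x1=2.0, F1=12.0, x2=20.0, F2=1200.0)
print(f"F(x) = {a:.3g} * x**{m:.3g}")   # F(x) = 3 * x**2
```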
https://en.wikipedia.org/wiki/Rafter
A rafter is one of a series of sloped structural members such as steel beams that extend from the ridge or hip to the wall plate, downslope perimeter or eave, and that are designed to support the roof shingles, roof deck, roof covering and its associated loads. A pair of rafters is called a couple. In home construction, rafters are normally made of wood. Exposed rafters are a feature of some traditional roof styles. Applications In recent buildings there is a preference for trussed rafters on the grounds of cost, economy of materials, off-site manufacture, and ease of construction, as well as design considerations including span limitations and roof loads (weight from above). Types in traditional timber framing There are many names for rafters depending on their location, shape, or size (see below). The earliest surviving roofs in Europe are of common rafters on a tie beam; this assembly is known as a "closed couple". Later, principal rafters and common rafters were mixed, which is called a major/minor or primary/secondary roof system. Historically many rafters, including hip rafters, often tapered in height 1/5 to 1/6 of their width, with the larger end at the foot. Architect George Woodward discusses the purpose of this in 1860: "The same amount of strength can be had with a less amount of lumber. There is an additional labor in sawing such rafters, as well as a different calculation to be made in using up a log to the best advantage. It is necessary always to order this special bill of rafters direct from the mill, and the result will be that the extra cost will, nine times out of ten, overbalance the amount saved." John Muller also discussed a one-sixth taper for rafters. A piece added at the foot to create an overhang or change the roof pitch is called a sprocket, or coyau in French. The projecting piece on the gable of a building forming an overhang is called a lookout. A rafter can be reinforced with a strut, principal purlin, collar beam, or, rarely
https://en.wikipedia.org/wiki/Radioresistance
Radioresistance is the level of ionizing radiation that organisms are able to withstand. Ionizing-radiation-resistant organisms (IRRO) were defined as organisms for which the dose of acute ionizing radiation (IR) required to achieve 90% reduction (D10) is greater than 1,000 gray (Gy). Radioresistance is surprisingly high in many organisms, in contrast to previously held views. For example, the study of the environment, animals and plants around the Chernobyl disaster area has revealed an unexpected survival of many species, despite the high radiation levels. A Brazilian study of a hill in the state of Minas Gerais which has high natural radiation levels from uranium deposits has also shown many radioresistant insects, worms and plants. Certain extremophiles, such as the bacterium Deinococcus radiodurans and the tardigrades, can withstand large doses of ionizing radiation on the order of 5,000 Gy. Induced radioresistance In the graph on the left, a dose/survival curve for a hypothetical group of cells has been drawn with and without a rest time for the cells to recover. Other than the recovery time partway through the irradiation, the cells would have been treated identically. Radioresistance may be induced by exposure to small doses of ionizing radiation. Several studies have documented this effect in yeast, bacteria, protozoa, algae, plants, insects, as well as in in vitro mammalian and human cells and in animal models. Several cellular radioprotection mechanisms may be involved, such as alterations in the levels of some cytoplasmic and nuclear proteins and increased gene expression, DNA repair and other processes. Biophysical models have also provided a general basis for this phenomenon. Many organisms have been found to possess a self-repair mechanism that can be activated by exposure to radiation in some cases. Two examples of this self-repair process in humans are described below. Devair Alves Ferreira received a large dose (7.0 Gy) during the Goiânia accident, and
https://en.wikipedia.org/wiki/Safe%20area%20%28television%29
Safe area is a term used in television production to describe the areas of the television picture that can be seen on television screens. Older televisions can display less of the space outside of the safe area than ones made more recently. Flat panel screens, plasma displays and liquid crystal display (LCD) screens generally can show most of the picture outside the safe areas. The use of safe areas in television production ensures that the most important parts of the picture are seen by the majority of viewers. The size of the title-safe area is typically specified in pixels or percent. The NTSC and PAL analog television standards do not specify official overscan amounts, and producers of television programming use their own guidelines. Some video editing software packages for non-linear editing systems (NLE) solutions have a setting which shows the safe areas while editing. Title-safe area The title-safe area or graphics-safe area is, in television broadcasting, a rectangular area which is far enough in from the four edges, such that text or graphics show neatly: with a margin and without distortion. This is applied against a worst case of on-screen location and display type. Typically corners would require more space from the edges, but due to increased quality of the average display this is no longer the concern it used to be, even on CRTs. If the editor of the content does not take care to ensure that all titles are inside the title-safe area, some titles in the content could have their edges chopped off when viewed in some screens. Video editing programs that can output video for either television or the Web can take the title-safe area into account. In Apple's consumer-grade NLE software iMovie, the user is advised to uncheck the QT Margins checkbox for content meant for television, and to check it for content meant only for QuickTime on a computer. Final Cut Pro can show two overlay rectangles in both its Viewer and Canvas; the inner rectangle is
https://en.wikipedia.org/wiki/Neurally%20controlled%20animat
A neurally controlled animat is the conjunction of a cultured neuronal network and a virtual or physical robotic body, the Animat, "living" in a virtual computer-generated environment or in a physical arena and connected to the electrode array on which the network is grown. Patterns of neural activity are used to control the virtual body, and the computer is used as a sensory device to provide electrical feedback to the neural network about the Animat's movement in the virtual environment. The current aim of Animat research is to study neuronal activity and plasticity during learning and information processing, in order to find a mathematical model for the neural network and to determine how information is processed and encoded in the rat cortex. It also leads to interesting questions about theories of consciousness. References T. B. DeMarse, D. A. Wagenaar, A. W. Blau and S. M. Potter, ‘Neurally controlled computer-simulated animals: a new tool for studying learning and memory in vitro’, in Society for Neuroscience Annual Meeting (2000), SFN ID: 2961. T. B. DeMarse, D. A. Wagenaar, A. W. Blau and S. M. Potter (2001), ‘The neurally controlled Animat: biological brains acting with simulated bodies’, Autonomous Robots, no. 11, pp. 305–310. External links Neural circuits Neural engineering
https://en.wikipedia.org/wiki/Cancer%20immunotherapy
Cancer immunotherapy (sometimes called immuno-oncology) is the stimulation of the immune system to treat cancer, improving on the immune system's natural ability to fight the disease. It is an application of the fundamental research of cancer immunology and a growing subspecialty of oncology. Cancer immunotherapy exploits the fact that cancer cells often have tumor antigens, molecules on their surface that can bind to antibody proteins or T-cell receptors, triggering an immune system response. The tumor antigens are often proteins or other macromolecules (e.g., carbohydrates). Normal antibodies bind to external pathogens, but the modified immunotherapy antibodies bind to the tumor antigens marking and identifying the cancer cells for the immune system to inhibit or kill. Clinical success of cancer immunotherapy is highly variable between different forms of cancer; for instance, certain subtypes of gastric cancer react well to the approach whereas immunotherapy is not effective for other subtypes. In 2018, American immunologist James P. Allison and Japanese immunologist Tasuku Honjo received the Nobel Prize in Physiology or Medicine for their discovery of cancer therapy by inhibition of negative immune regulation. History "During the 17th and 18th centuries, various forms of immunotherapy in cancer became widespread... In the 18th and 19th centuries, septic dressings enclosing ulcerative tumours were used for the treatment of cancer. Surgical wounds were left open to facilitate the development of infection, and purulent sores were created deliberately... One of the most well-known effects of microorganisms on ... cancer was reported in 1891, when an American surgeon, William Coley, inoculated patients having inoperable tumours with [ Streptococcus pyogenes ]." "Coley [had] thoroughly reviewed the literature available at that time and found 38 reports of cancer patients with accidental or iatrogenic feverish erysipelas. In 12 patients, the sarcoma or carcinoma had
https://en.wikipedia.org/wiki/C.%20W.%20Woodworth%20Award
The C. W. Woodworth Award is an annual award presented by the Pacific Branch of the Entomological Society of America. This award, the PBESA's largest, is for achievement in entomology in the Pacific region of the United States over the previous ten years. The award is named in honor of Charles W. Woodworth and was established on June 25, 1968. It is principally sponsored by Woodworth's great-grandson, Brian Holden, and his wife, Joann Wilfert, with additional support by Dr. Craig W. and Kathryn Holden, and Dr. Jim and Betty Woodworth. Award recipients Source: Entomological Society of America A box containing the older records of the PBESA and which likely contains the names of the first few recipients of the award is located in the special collections section of the library at U.C. Davis. See also The John Henry Comstock Graduate Student Award List of biology awards References External links List of recipients from the PBESA PBESA agenda mentioning Dr. Nick Toscano as the 2007 winner Note about Dr. Jocelyn Millar winning the 2006 award Note about Dr. John Stark winning the 2005 award Biography of Dr. Robert S. Lane mentioning the 2001 award Article about Dr. Wyatt Cone winning the 1999 award (page 5) Note about Dr. Harry Kaya winning the 1998 award Note about Dr. Jackie Robertson winning the award Mention of the award in an article about a facility being named for Dr. Harry Laidlaw Dr. George P. Georghiou Obituary Note about Dr. William Wellington being the 11th winner of the award in 1979 Dr. Carl Barton Huffaker Obituary Dr. William Harry Lange Jr. Obituary First page of Carl Barton Huffaker's acceptance speech mentioning Ray F. Smith Entomological organizations Biology awards Awards established in 1969
https://en.wikipedia.org/wiki/Line%20Printer%20Daemon%20protocol
The Line Printer Daemon protocol/Line Printer Remote protocol (or LPD, LPR) is a network printing protocol for submitting print jobs to a remote printer. The original implementation of LPD was in the Berkeley printing system in the BSD UNIX operating system; the LPRng project also supports that protocol. The Common Unix Printing System (or CUPS), which is more common on modern Linux distributions and also found on Mac OS X, supports LPD as well as the Internet Printing Protocol (IPP). Commercial solutions are available that also use Berkeley printing protocol components, where more robust functionality and performance is necessary than is available from LPR/LPD (or CUPS) alone (such as might be required in large corporate environments). The LPD Protocol Specification is documented in RFC 1179. Usage A server for the LPD protocol listens for requests on TCP port 515. A request begins with a byte containing the request code, followed by the arguments to the request, and is terminated by an ASCII LF character. An LPD printer is identified by the IP address of the server machine and the queue name on that machine. Many different queue names may exist in one LPD server, with each queue having unique settings. Note that the LPD queue name is case sensitive. Some modern implementations of LPD on network printers might ignore the case or queue name altogether and send all jobs to the same printer. Others have the option to automatically create a new queue when a print job with a new queue name is received. This helps to simplify the setup of the LPD server. Some companies (e.g. D-Link in model DP-301P+) have a tradition of calling the queue name “lpt1” or “LPT1”. A printer that supports LPD/LPR is sometimes referred to as a "TCP/IP printer" (TCP/IP is used to establish connections between printers and clients on a network), although that term would be equally applicable to a printer that supports the Internet Printing Protocol. See also Lp (Unix) LPRng Le
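As a concrete illustration of the request format described above (a request-code byte, the operands, and a terminating ASCII LF, sent over TCP port 515), here is a minimal sketch of a queue-state query using the short-state command from RFC 1179. The host and queue names are placeholders, not values from the text:

```python
import socket

def lpd_queue_state(host, queue, port=515, timeout=5.0):
    """Send an RFC 1179 'send queue state (short)' request (code 0x03)
    and return the printer daemon's textual reply."""
    request = b"\x03" + queue.encode("ascii") + b"\n"   # code byte, queue name, LF
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(request)
        chunks = []
        while data := sock.recv(4096):
            chunks.append(data)
    return b"".join(chunks).decode("ascii", errors="replace")

# Hypothetical printer and queue name -- LPD queue names are case sensitive.
print(lpd_queue_state("printer.example.com", "raw"))
```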
https://en.wikipedia.org/wiki/Client%20confidentiality
Client confidentiality is the principle that an institution or individual should not reveal information about their clients to a third party without the consent of the client or a clear legal reason. This concept, sometimes referred to as social systems of confidentiality, is outlined in numerous laws throughout many countries. The access to a client's data as provided by the institution in question is usually limited to law enforcement agencies and requires some legal procedures to be accomplished prior to such action (e.g. a court order being issued). This applies to bank account information and medical records. In some cases the data is by definition inaccessible to third parties and should never be revealed; this can include confidential information gathered by attorneys, psychiatrists, psychologists, or priests. One well-known result that can seem hard to reconcile is that of a priest hearing a murder confession but being unable to reveal details to the authorities. However, had it not been for the assumed confidentiality, it is unlikely that the information would have been shared in the first place, and to breach this trust would then discourage others from confiding in priests in the future. So, even if justice was served in that particular case (assuming the confession led to a correct conviction), it would result in fewer people taking part in what is generally considered a beneficial process. This could also be said of a patient sharing information with a psychiatrist, or a client seeking legal advice from a lawyer. See also Privilege (evidence) Health Insurance Portability and Accountability Act References Information privacy
https://en.wikipedia.org/wiki/Gene-centered%20view%20of%20evolution
The gene-centered view of evolution, gene's eye view, gene selection theory, or selfish gene theory holds that adaptive evolution occurs through the differential survival of competing genes, increasing the allele frequency of those alleles whose phenotypic trait effects successfully promote their own propagation. The proponents of this viewpoint argue that, since heritable information is passed from generation to generation almost exclusively by DNA, natural selection and evolution are best considered from the perspective of genes. Proponents of the gene-centered viewpoint argue that it permits understanding of diverse phenomena such as altruism and intragenomic conflict that are otherwise difficult to explain from an organism-centered viewpoint. The gene-centered view of evolution is a synthesis of the theory of evolution by natural selection, the particulate inheritance theory, and the rejection of transmission of acquired characters. It states that those alleles whose phenotypic effects successfully promote their own propagation will be favorably selected relative to their competitor alleles within the population. This process produces adaptations for the benefit of alleles that promote the reproductive success of the organism, or of other organisms containing the same allele (kin altruism and green-beard effects), or even its own propagation relative to the other genes within the same organism (selfish genes and intragenomic conflict). Overview The gene-centered view of evolution is a model for the evolution of social characteristics such as selfishness and altruism, with gene defined as "not just one single physical bit of DNA [but] all replicas of a particular bit of DNA distributed throughout the world". Acquired characteristics The formulation of the central dogma of molecular biology was summarized by Maynard Smith: The rejection of the inheritance of acquired characters, combined with Ronald Fisher the statistician, giving the subject a mathematical
https://en.wikipedia.org/wiki/Mapping%20cylinder
In mathematics, specifically algebraic topology, the mapping cylinder of a continuous function f between topological spaces X and Y is the quotient Mf = ((X × [0,1]) ⊔ Y) / ∼, where ⊔ denotes the disjoint union and ∼ is the equivalence relation generated by (x, 0) ∼ f(x) for each x in X. That is, the mapping cylinder is obtained by gluing one end of X × [0,1] to Y via the map f. Notice that the "top" of the cylinder, X × {1}, is homeomorphic to X, while the "bottom" is the space Y. It is common to write Mf for the mapping cylinder of f, and to use the notation ⊔f or ∪f for the mapping cylinder construction. That is, one writes Mf = (X × [0,1]) ∪f Y with the subscripted cup symbol denoting the equivalence. The mapping cylinder is commonly used to construct the mapping cone Cf, obtained by collapsing one end of the cylinder to a point. Mapping cylinders are central to the definition of cofibrations. Basic properties The bottom Y is a deformation retract of Mf. The projection Mf → Y splits (via the inclusion Y → Mf), and the deformation retraction is given by rt([x, s]) = [x, s(1 − t)] and rt([y]) = [y] (where points in Y stay fixed, because [x, 0] = [f(x)] for all x). The map f : X → Y is a homotopy equivalence if and only if the "top" X × {1} is a strong deformation retract of Mf. An explicit formula for the strong deformation retraction can be worked out. Examples Mapping cylinder of a fiber bundle For a fiber bundle p : E → B with fiber F, the mapping cylinder Mp has the equivalence relation (e, 0) ∼ p(e) for e in E. Then, there is a canonical map Mp → B sending a point [e, t] to the point p(e), giving a fiber bundle whose fiber is the cone CF. To see this, notice the fiber over a point b in B is the quotient space (F × [0,1] ⊔ {b}) / ∼ where every point in F × {0} is equivalent to b, i.e. the cone on F. Interpretation The mapping cylinder may be viewed as a way to replace an arbitrary map by an equivalent cofibration, in the following sense: Given a map f : X → Y, the mapping cylinder is a space Mf, together with a cofibration j : X → Mf and a surjective homotopy equivalence Mf → Y (indeed, Y is a deformation retract of Mf), such that the composition X → Mf → Y equals f. Thus the space Y gets replaced with a homotopy equivalent space Mf, and the map f with a lifted map j. Equivalently, the diagram f : X → Y gets replaced with a diagram X → Mf together
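Collecting the defining formulas in one place, under the same gluing convention used above (the 0-end of the cylinder is attached to Y); this is a summary display, not a quotation from the article:

```latex
% Mapping cylinder of f : X -> Y, gluing the 0-end of the cylinder to Y
M_f = \bigl( (X \times [0,1]) \sqcup Y \bigr) \,/\, \bigl( (x,0) \sim f(x) \bigr)

% Mapping cone: collapse the remaining ("top") end X \times \{1\} of M_f to a point
C_f = M_f \,/\, \bigl( X \times \{1\} \bigr)
```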
https://en.wikipedia.org/wiki/Geometry%20of%20Complex%20Numbers
Geometry of Complex Numbers: Circle Geometry, Moebius Transformation, Non-Euclidean Geometry is an undergraduate textbook on geometry, whose topics include circles, the complex plane, inversive geometry, and non-Euclidean geometry. It was written by Hans Schwerdtfeger, and originally published in 1962 as Volume 13 of the Mathematical Expositions series of the University of Toronto Press. A corrected edition was published in 1979 in the Dover Books on Advanced Mathematics series of Dover Publications (). The Basic Library List Committee of the Mathematical Association of America has suggested its inclusion in undergraduate mathematics libraries. Topics The book is divided into three chapters, corresponding to the three parts of its subtitle: circle geometry, Möbius transformations, and non-Euclidean geometry. Each of these is further divided into sections (which in other books would be called chapters) and sub-sections. An underlying theme of the book is the representation of the Euclidean plane as the plane of complex numbers, and the use of complex numbers as coordinates to describe geometric objects and their transformations. The chapter on circles covers the analytic geometry of circles in the complex plane. It describes the representation of circles by Hermitian matrices, the inversion of circles, stereographic projection, pencils of circles (certain one-parameter families of circles) and their two-parameter analogue, bundles of circles, and the cross-ratio of four complex numbers. The chapter on Möbius transformations is the central part of the book, and defines these transformations as the fractional linear transformations of the complex plane (one of several standard ways of defining them). It includes material on the classification of these transformations, on the characteristic parallelograms of these transformations, on the subgroups of the group of transformations, on iterated transformations that either return to the identity (forming a periodic sequ
https://en.wikipedia.org/wiki/Prime%20model
In mathematics, and in particular model theory, a prime model is a model that is as simple as possible. Specifically, a model M is prime if it admits an elementary embedding into any model N to which it is elementarily equivalent (that is, into any model N satisfying the same complete theory as M). Cardinality In contrast with the notion of saturated model, prime models are restricted to very specific cardinalities by the Löwenheim–Skolem theorem. If L is a first-order language with cardinality κ and T is a complete theory over L, then this theorem guarantees a model for T of cardinality max(κ, ℵ0). Therefore no prime model of T can have larger cardinality, since at the very least it must be elementarily embedded in such a model. This still leaves much ambiguity in the actual cardinality. In the case of countable languages, all prime models are at most countably infinite. Relationship with saturated models There is a duality between the definitions of prime and saturated models. Half of this duality is discussed in the article on saturated models, while the other half is as follows. While a saturated model realizes as many types as possible, a prime model realizes as few as possible: it is an atomic model, realizing only the types that cannot be omitted and omitting the remainder. This may be interpreted in the sense that a prime model admits "no frills": any characteristic of a model that is optional is ignored in it. For example, the model (N, S) is a prime model of the theory of the natural numbers N with a successor operation S; a non-prime model might be N + Z, meaning that there is a copy of the full integers that lies disjoint from the original copy of the natural numbers within this model; in this add-on, arithmetic works as usual. These models are elementarily equivalent; their theory admits the following axiomatization (verbally): There is a unique element that is not the successor of any element; No two distinct elements have the same successor; No element satisfies Sn(x) = x for any n > 0.
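One way to render the three verbal axioms above in first-order form; the third line is an axiom schema with one axiom per n ≥ 1, S^n denotes n-fold application of S, and this particular formalization is an illustration rather than a quotation from the article:

```latex
% There is a unique element that is not the successor of any element
\exists x\,\bigl(\forall y\, S(y) \neq x \;\wedge\; \forall z\,(\forall y\, S(y) \neq z \rightarrow z = x)\bigr)

% No two distinct elements have the same successor
\forall x\,\forall y\,\bigl(S(x) = S(y) \rightarrow x = y\bigr)

% No cycles: one axiom for each n >= 1
\forall x\; S^{n}(x) \neq x
```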
https://en.wikipedia.org/wiki/Allometry
Allometry is the study of the relationship of body size to shape, anatomy, physiology and finally behaviour, first outlined by Otto Snell in 1892, by D'Arcy Thompson in 1917 in On Growth and Form and by Julian Huxley in 1932. Overview Allometry is a well-known study, particularly in statistical shape analysis for its theoretical developments, as well as in biology for practical applications to the differential growth rates of the parts of a living organism's body. One application is in the study of various insect species (e.g., Hercules beetles), where a small change in overall body size can lead to an enormous and disproportionate increase in the dimensions of appendages such as legs, antennae, or horns. The relationship between the two measured quantities is often expressed as a power law equation (allometric equation), y = k x^a, which expresses a remarkable scale symmetry; in logarithmic form this is log y = a log x + log k, where a is the scaling exponent of the law. Methods for estimating this exponent from data can use type-2 regressions, such as major axis regression or reduced major axis regression, as these account for the variation in both variables, contrary to least-squares regression, which does not account for error variance in the independent variable (e.g., log body mass). Other methods include measurement-error models and a particular kind of principal component analysis. The allometric equation can also be acquired as a solution of the differential equation dy/y = a dx/x. Allometry often studies shape differences in terms of ratios of the objects' dimensions. Two objects of different size, but common shape, have their dimensions in the same ratio. Take, for example, a biological object that grows as it matures. Its size changes with age, but the shapes are similar. Studies of ontogenetic allometry often use lizards or snakes as model organisms both because they lack parental care after birth or hatching and because they exhibit a large range of body sizes between the juvenile
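The log–log fitting described above can be sketched in a few lines. This is a minimal illustration rather than the article's own procedure: the variable names and example data are invented, and the reduced major axis (type-2) slope is computed with the standard standard-deviation-ratio formula under the assumption that both variables carry error:

```python
import numpy as np

def allometric_fit_rma(x, y):
    """Estimate the allometric equation y = k * x**a by reduced major axis
    (type-2) regression on log-transformed data."""
    lx, ly = np.log10(x), np.log10(y)
    r = np.corrcoef(lx, ly)[0, 1]
    a = np.sign(r) * ly.std(ddof=1) / lx.std(ddof=1)   # RMA slope = scaling exponent
    log_k = ly.mean() - a * lx.mean()                   # line passes through the means
    return a, 10 ** log_k

# Example: noisy data generated from y = 2 * x**0.75 (hypothetical values)
rng = np.random.default_rng(0)
x = np.linspace(1, 100, 50)
y = 2 * x**0.75 * rng.lognormal(sigma=0.05, size=x.size)
a, k = allometric_fit_rma(x, y)
print(f"exponent a ≈ {a:.3f}, coefficient k ≈ {k:.3f}")
```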
https://en.wikipedia.org/wiki/Multimap
In computer science, a multimap (sometimes also multihash, multidict or multidictionary) is a generalization of a map or associative array abstract data type in which more than one value may be associated with and returned for a given key. Both map and multimap are particular cases of containers (for example, see C++ Standard Template Library containers). Often the multimap is implemented as a map with lists or sets as the map values. Examples In a student enrollment system, where students may be enrolled in multiple classes simultaneously, there might be an association for each enrollment of a student in a course, where the key is the student ID and the value is the course ID. If a student is enrolled in three courses, there will be three associations containing the same key. The index of a book may report any number of references for a given index term, and thus may be coded as a multimap from index terms to any number of reference locations or pages. Querystrings may have multiple values associated with a single field. This is commonly generated when a web form allows multiple check boxes or selections to be chosen in response to a single form element. Language support C++ C++'s Standard Template Library provides the multimap container for the sorted multimap using a self-balancing binary search tree, and SGI's STL extension provides the hash_multimap container, which implements a multimap using a hash table. As of C++11, the Standard Template Library provides the unordered_multimap for the unordered multimap. Dart Quiver provides a Multimap for Dart. Java Apache Commons Collections provides a MultiMap interface for Java. It also provides a MultiValueMap implementing class that makes a MultiMap out of a Map object and a type of Collection. Google Guava provides a Multimap interface and implementations of it. Python Python provides a collections.defaultdict class that can be used to create a multimap. The user can instantiate the class as collections.de
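As a concrete sketch of the Python approach mentioned above, using the student-enrollment example from earlier in the text (the student and course IDs are made up):

```python
from collections import defaultdict

# A multimap from student ID to the set of course IDs the student is enrolled in.
enrollments = defaultdict(set)

enrollments["s001"].add("MATH101")
enrollments["s001"].add("CS200")
enrollments["s001"].add("BIO150")
enrollments["s002"].add("CS200")

print(enrollments["s001"])   # {'MATH101', 'CS200', 'BIO150'} (set order may vary)
print(enrollments["s999"])   # set() -- missing keys yield an empty collection
```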
https://en.wikipedia.org/wiki/MPLS%20VPN
MPLS VPN is a family of methods for using Multiprotocol Label Switching (MPLS) to create virtual private networks (VPNs). MPLS VPN is a flexible method to transport and route several types of network traffic using an MPLS backbone. There are three types of MPLS VPNs deployed in networks today: 1. Point-to-point (Pseudowire) 2. Layer 2 (VPLS) 3. Layer 3 (VPRN) Point-to-point (pseudowire) Point-to-point MPLS VPNs employ VLL (virtual leased lines) for providing Layer 2 point-to-point connectivity between two sites. Ethernet, TDM, and ATM frames can be encapsulated within these VLLs. Some examples of how point-to-point VPNs might be used by utilities include encapsulating TDM T1 circuits attached to Remote Terminal Units, and forwarding non-routed DNP3 traffic across the backbone network to the SCADA master controller. Layer 2 VPN (VPLS) Layer 2 MPLS VPNs, or VPLS (virtual private LAN service), offer a “switch in the cloud” style service. VPLS provides the ability to span VLANs between sites. L2 VPNs are typically used to route voice, video, and AMI traffic between substation and data center locations. Layer 3 VPN (VPRN) Layer 3, or VPRN (virtual private routed network), utilizes Layer 3 VRFs (virtual routing and forwarding instances) to segment routing tables for each customer utilizing the service. The customer peers with the service provider router and the two exchange routes, which are placed into a routing table specific to the customer. Multiprotocol BGP (MP-BGP) is required in the cloud to utilize the service, which increases the complexity of design and implementation. L3 VPNs are typically not deployed on utility networks due to their complexity; however, an L3 VPN could be used to route traffic between corporate or datacenter locations. See also Segment Routing Ethernet VPN External links RFC 4364, BGP/MPLS IP Virtual Private Networks (VPNs) Virtual Private Network (VPN): A Very Detailed Guide for Newbies MPLS networking Virtual private networks
https://en.wikipedia.org/wiki/Panela
Panela or rapadura is an unrefined whole cane sugar, typical of Central and Latin America. It is a solid form of sucrose derived from the boiling and evaporation of sugarcane juice. Panela is known by other names in Latin America, such as chancaca in Chile, Bolivia, and Peru, and piloncillo in Mexico (where panela refers to a type of cheese, queso panela). Just like brown sugar, two varieties of piloncillo are available; one is lighter (blanco) and one darker (oscuro). Unrefined, it is commonly used in Mexico, where it has been around for at least 500 years. It is made from crushed sugar cane: the juice is collected, boiled, and poured into molds, where it hardens into blocks. Elsewhere in the world, the word jaggery describes a similar foodstuff. Both are considered non-centrifugal cane sugars. Panela is sold in many forms, including liquid, granulated, and solid blocks, and is used in the canning of foods, as well as in confectionery, soft drinks, baking, and vinegar, beer, and winemaking. Regional names Chancaca in Bolivia, Chile and Peru (also the name of a sweet sauce made from this); Dulce de panela or dulce de atado in El Salvador; Đường phên in Vietnam; Gura in Afghanistan; Gurr in Pakistan; Jaggery, Bella (ಬೆಲ್ಲ), Gur, Sharkara, or Vellam in India; Nam oy in Laos; Panela in Colombia, Ecuador, and Venezuela; Panocha in the Mexican state of Sinaloa and the Philippines; Papelón in Venezuela; Uluru Dust in Australia; Piloncillo ("little pylon", so named for the cone shape) in Mexico and Spain; Rapadou in Haiti; Rapadura in Argentina, Brazil, Cuba, Guatemala, Honduras, Panama, Paraguay, and the Dominican Republic; Raspadura in Cuba, Ecuador, and Panama; Tapa de dulce or Dulce (de tapa) in Costa Rica and Nicaragua. Economics The main producer of panela is Colombia (about 1.4 million tons/year), where panela production is one of the most important economic activities, with the highest index of panela consumption per capita worldwide
https://en.wikipedia.org/wiki/Rework%20%28electronics%29
Rework (or re-work) is the term for the refinishing operation or repair of an electronic printed circuit board (PCB) assembly, usually involving desoldering and re-soldering of surface-mounted electronic components (SMD). Mass processing techniques are not applicable to single device repair or replacement, and specialized manual techniques by expert personnel using appropriate equipment are required to replace defective components; area array packages such as ball grid array (BGA) devices particularly require expertise and appropriate tools. A hot air gun or hot air station is used to heat devices and melt solder, and specialised tools are used to pick up and position often tiny components. A rework station is a place to do this work—the tools and supplies for this work, typically on a workbench. Other kinds of rework require other tools. Reasons for rework Rework is practiced in many kinds of manufacturing when defective products are found. For electronics, defects may include: Poor solder joints because of faulty assembly or thermal cycling. Solder bridges—unwanted drops of solder that connect points that should be isolated from each other. Faulty components. Engineering parts changes, upgrades, etc. Components broken due to natural wear, physical stress or excessive current. Components damaged due to liquid ingress, leading to corrosion, weak solder joints or physical damage. Process The rework may involve several components, which must be worked on one by one without damage to surrounding parts or the PCB itself. All parts not being worked on are protected from heat and damage. Thermal stress on the electronic assembly is kept as low as possible to prevent unnecessary contractions of the board which might cause immediate or future damage. In the 21st century, almost all soldering is carried out with lead-free solder, both on manufactured assemblies and in rework, to avoid the health and environmental hazards of lead. Where this precaution is not n
https://en.wikipedia.org/wiki/Mereotopology
In formal ontology, a branch of metaphysics, and in ontological computer science, mereotopology is a first-order theory, embodying mereological and topological concepts, of the relations among wholes, parts, parts of parts, and the boundaries between parts. History and motivation Mereotopology begins in philosophy with theories articulated by A. N. Whitehead in several books and articles he published between 1916 and 1929, drawing in part on the mereogeometry of De Laguna (1922). The first to have proposed the idea of a point-free definition of the concept of topological space in mathematics was Karl Menger in his book Dimensionstheorie (1928) -- see also his (1940). The early historical background of mereotopology is documented in Bélanger and Marquis (2013) and Whitehead's early work is discussed in Kneebone (1963: ch. 13.5) and Simons (1987: 2.9.1). The theory of Whitehead's 1929 Process and Reality augmented the part-whole relation with topological notions such as contiguity and connection. Despite Whitehead's acumen as a mathematician, his theories were insufficiently formal, even flawed. By showing how Whitehead's theories could be fully formalized and repaired, Clarke (1981, 1985) founded contemporary mereotopology. The theories of Clarke and Whitehead are discussed in Simons (1987: 2.10.2), and Lucas (2000: ch. 10). The entry Whitehead's point-free geometry includes two contemporary treatments of Whitehead's theories, due to Giangiacomo Gerla, each different from the theory set out in the next section. Although mereotopology is a mathematical theory, we owe its subsequent development to logicians and theoretical computer scientists. Lucas (2000: ch. 10) and Casati and Varzi (1999: ch. 4,5) are introductions to mereotopology that can be read by anyone having done a course in first-order logic. More advanced treatments of mereotopology include Cohn and Varzi (2003) and, for the mathematically sophisticated, Roeper (1997). For a mathematical treatment of poin
https://en.wikipedia.org/wiki/UniProt
UniProt is a freely accessible database of protein sequence and functional information, many entries being derived from genome sequencing projects. It contains a large amount of information about the biological function of proteins derived from the research literature. It is maintained by the UniProt consortium, which consists of several European bioinformatics organisations and a foundation from Washington, DC, United States. The UniProt consortium The UniProt consortium comprises the European Bioinformatics Institute (EBI), the Swiss Institute of Bioinformatics (SIB), and the Protein Information Resource (PIR). EBI, located at the Wellcome Trust Genome Campus in Hinxton, UK, hosts a large resource of bioinformatics databases and services. SIB, located in Geneva, Switzerland, maintains the ExPASy (Expert Protein Analysis System) servers that are a central resource for proteomics tools and databases. PIR, hosted by the National Biomedical Research Foundation (NBRF) at the Georgetown University Medical Center in Washington, DC, US, is heir to the oldest protein sequence database, Margaret Dayhoff's Atlas of Protein Sequence and Structure, first published in 1965. In 2002, EBI, SIB, and PIR joined forces as the UniProt consortium. The roots of the UniProt databases Each consortium member is heavily involved in protein database maintenance and annotation. Until recently, EBI and SIB together produced the Swiss-Prot and TrEMBL databases, while PIR produced the Protein Sequence Database (PIR-PSD). These databases coexisted with differing protein sequence coverage and annotation priorities. Swiss-Prot was created in 1986 by Amos Bairoch during his PhD and developed by the Swiss Institute of Bioinformatics and subsequently developed by Rolf Apweiler at the European Bioinformatics Institute. Swiss-Prot aimed to provide reliable protein sequences associated with a high level of annotation (such as the description of the function of a protein, its domain structure, post-
https://en.wikipedia.org/wiki/Immunogen
An immunogen is any substance that generates B-cell (humoral/antibody) and/or T-cell (cellular) adaptive immune responses upon exposure to a host organism. Immunogens that generate antibodies are called antigens ("antibody-generating"). Immunogens that generate antibodies are directly bound by host antibodies and lead to the selective expansion of antigen-specific B-cells. Immunogens that generate T-cells are indirectly bound by host T-cells after processing and presentation by host antigen-presenting cells. An immunogen can be defined as a complete antigen which is composed of the macromolecular carrier and epitopes (determinants) that can induce an immune response. An explicit example is a hapten. Haptens are low-molecular-weight compounds that may be bound by antibodies, but cannot elicit an immune response. Consequently, the haptens themselves are nonimmunogenic and they cannot evoke an immune response until they bind with a larger carrier immunogenic molecule. The hapten-carrier complex, unlike free hapten, can act as an immunogen and can induce an immune response. Until 1959, the terms immunogen and antigen were not distinguished. Used carrier proteins Keyhole limpet hemocyanin It is a copper-containing respiratory protein, isolated from keyhole limpets (Megathura crenulata). Because of its evolutionary distance from mammals, high molecular weight and complex structure, it is usually immunogenic in vertebrate animals. Concholepas concholepas hemocyanin (also blue carrier immunogenic protein) It is an alternative to KLH, isolated from Concholepas concholepas. It has similar immunogenic properties to KLH but better solubility and therefore better flexibility. Bovine serum albumin It is from the blood sera of cows and has similar immunogenic properties to KLH or CCH. The cationized form of BSA (cBSA) is a highly positively charged protein with significantly increased immunogenicity. This change allows a greater number of possible antigens to be conjugated to the carrier.
https://en.wikipedia.org/wiki/Electronic%20waste%20recycling
Electronic waste recycling, electronics recycling or e-waste recycling is the disassembly and separation of components and raw materials of waste electronics; when referring to specific types of e-waste, terms like computer recycling or mobile phone recycling may be used. Like other waste streams, re-use, donation and repair are common sustainable ways to dispose of IT waste. Since its inception in the early 1990s, more and more devices have been recycled worldwide due to increased awareness and investment. Electronic recycling occurs primarily in order to recover valuable rare earth metals and precious metals, which are in short supply, as well as plastics and metals. These are resold or used in new devices after purification, in effect creating a circular economy. Such processes involve specialised facilities and premises, but within the home or ordinary workplace, sound components of damaged or obsolete computers can often be reused, reducing replacement costs. Recycling is considered environmentally friendly because it prevents hazardous waste, including heavy metals and carcinogens, from entering the atmosphere, landfill or waterways. While electronics constitute only a small fraction of total waste generated, they are far more dangerous. There is stringent legislation designed to enforce and encourage the sustainable disposal of appliances, the most notable being the Waste Electrical and Electronic Equipment Directive of the European Union and the United States National Computer Recycling Act. In 2009, 38% of computers and a quarter of total electronic waste were recycled in the United States, up 5% and 3% respectively from three years earlier. Reasons for recycling Obsolete computers and old electronics are valuable sources of secondary raw materials if recycled; otherwise, these devices are a source of toxins and carcinogens. Rapid technology change, low initial cost, and planned obsolescence have resulted in a fast-growing surplus of computers and other electronic components
https://en.wikipedia.org/wiki/Relvar
In relational databases, relvar is a term introduced by C. J. Date and Hugh Darwen as an abbreviation for relation variable in their 1995 paper The Third Manifesto, to avoid the confusion sometimes arising from the use of the term relation, by the inventor of the relational model, E. F. Codd, for a variable to which a relation is assigned as well as for the relation itself. The term is used in Date's well-known database textbook An Introduction to Database Systems and in various other books authored or coauthored by him. Some database textbooks use the term relation for both the variable and the data it contains. Similarly, texts on SQL tend to use the term table for both purposes, though the qualified term base table is used in the standard for the variable. A closely related term often used in academic texts is relation schema, this being a set of attributes paired with a set of constraints, together defining a set of relations for the purpose of some discussion (typically, database normalization). Constraints that mention just one relvar are termed relvar constraints, so relation schema can be regarded as a single term encompassing a relvar and its relvar constraints. References C.J. Date. An Introduction to Database Systems, 8th Ed. (Addison-Wesley, 2004, ), pp. 65–6. C.J. Date and Hugh Darwen. Databases, Types, and The Relational Model: The Third Manifesto (Addison-Wesley, 2007, ), p.85 Relational model Data modeling Databases Variable (computer science)
https://en.wikipedia.org/wiki/Analogue%20electronics
Analogue electronics are electronic systems with a continuously variable signal, in contrast to digital electronics where signals usually take only two levels. The term "analogue" describes the proportional relationship between a signal and a voltage or current that represents the signal. The word analogue is derived from the Greek word ἀνάλογος (analogos), meaning "proportional". Analogue signals An analogue signal uses some attribute of the medium to convey the signal's information. For example, an aneroid barometer uses the angular position of a needle on top of a contracting and expanding box as the signal to convey the information of changes in atmospheric pressure. Electrical signals may represent information by changing their voltage, current, frequency, or total charge. Information is converted from some other physical form (such as sound, light, temperature, pressure, position) to an electrical signal by a transducer which converts one type of energy into another (e.g. a microphone). The signals take any value from a given range, and each unique signal value represents different information. Any change in the signal is meaningful, and each level of the signal represents a different level of the phenomenon that it represents. For example, suppose the signal is being used to represent temperature, with one volt representing one degree Celsius. In such a system, 10 volts would represent 10 degrees, and 10.1 volts would represent 10.1 degrees. Another method of conveying an analogue signal is to use modulation. In this, some base carrier signal has one of its properties altered: amplitude modulation (AM) involves altering the amplitude of a sinusoidal voltage waveform by the source information, while frequency modulation (FM) changes the frequency. Other techniques, such as phase modulation (changing the phase of the carrier signal), are also used. In an analogue sound recording, the variation in pressure of a sound striking a microphone creates a corresponding variation in the
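A small numerical sketch of the amplitude-modulation idea described above; it is illustrative only, and the carrier frequency, message frequency, and modulation index are arbitrary choices, not values from the text:

```python
import numpy as np

fs = 48_000                      # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)   # 10 ms of signal

fc, fm, m = 10_000, 440, 0.5     # carrier freq, message freq, modulation index
message = np.sin(2 * np.pi * fm * t)   # the source information
carrier = np.sin(2 * np.pi * fc * t)   # base carrier signal
am = (1 + m * message) * carrier       # amplitude modulation: envelope follows the message

print(am[:5])
```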
https://en.wikipedia.org/wiki/Abyssal%20zone
The abyssal zone or abyssopelagic zone is a layer of the pelagic zone of the ocean. The word abyss comes from the Greek word (), meaning "bottomless". At depths of , this zone remains in perpetual darkness. It covers 83% of the total area of the ocean and 60% of Earth's surface. The abyssal zone has temperatures around through the large majority of its mass. The water pressure can reach up to . Due to there being no light, there are no plants producing oxygen, which instead primarily comes from ice that had melted long ago from the polar regions. The water along the seafloor of this zone is actually devoid of oxygen, resulting in a death trap for organisms unable to quickly return to the oxygen-enriched water above or survive in the low-oxygen environment. This region also contains a much higher concentration of nutrient salts, like nitrogen, phosphorus, and silica, due to the large amount of dead organic material that drifts down from the above ocean zones and decomposes. The area below the abyssal zone is the sparsely inhabited hadal zone. The zone above is the bathyal zone. Trenches The deep trenches or fissures that plunge down thousands of meters below the ocean floor (for example, the mid-oceanic trenches such as the Mariana Trench in the Pacific) are almost unexplored. Previously, only the bathyscaphe Trieste, the remote control submarine Kaikō and the Nereus have been able to descend to these depths. However, as of March 25, 2012 one vehicle, the Deepsea Challenger was able to penetrate to a depth of 10,898.4 meters (35,756 ft). Ecosystem The relative sparsity of primary producers means that the majority of organisms living in the abyssal zone depend on the marine snow that falls from oceanic layers above. The biomass of the abyssal zone actually increases near the seafloor as most of the decomposing material and decomposers rest on the seabed. The composition of the abyssal plain depends on the depth of the sea floor. Above 4000 meters the seafloor
https://en.wikipedia.org/wiki/Euclid%20number
In mathematics, Euclid numbers are integers of the form En = pn# + 1, where pn# is the nth primorial, i.e. the product of the first n prime numbers. They are named after the ancient Greek mathematician Euclid, in connection with Euclid's theorem that there are infinitely many prime numbers. Examples For example, the first three primes are 2, 3, 5; their product is 30, and the corresponding Euclid number is 31. The first few Euclid numbers are 3, 7, 31, 211, 2311, 30031, 510511, 9699691, 223092871, 6469693231, 200560490131, ... . History It is sometimes falsely stated that Euclid's celebrated proof of the infinitude of prime numbers relied on these numbers. Euclid did not begin with the assumption that the set of all primes is finite. Rather, he said: consider any finite set of primes (he did not assume that it contained only the first n primes; any finite set of primes will do) and reasoned from there to the conclusion that at least one prime exists that is not in that set. Nevertheless, Euclid's argument, applied to the set of the first n primes, shows that the nth Euclid number has a prime factor that is not in this set. Properties Not all Euclid numbers are prime. E6 = 13# + 1 = 30031 = 59 × 509 is the first composite Euclid number. Every Euclid number is congruent to 3 modulo 4, since the primorial of which it is composed is twice the product of only odd primes and thus congruent to 2 modulo 4. This property implies that no Euclid number can be a square. For all n ≥ 3, the last digit of En is 1, since En − 1 is divisible by 2 and 5. In other words, since all primorials pn# with n ≥ 3 have 2 and 5 as prime factors, they are divisible by 10, thus all En with n ≥ 3 have a final digit of 1. Unsolved problems It is not known whether there is an infinite number of prime Euclid numbers (primorial primes). It is also unknown whether every Euclid number is a squarefree number. Generalization A Euclid number of the second kind (also called a Kummer number) is an integer of the form En = pn# − 1.
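A short sketch that generates the first few Euclid numbers directly from the definition pn# + 1; plain trial division is used for primality, which is adequate at this scale, and the helper names are ours rather than anything from the article:

```python
from itertools import count, islice

def primes():
    """Yield prime numbers by trial division (adequate for small n)."""
    for n in count(2):
        if all(n % p for p in range(2, int(n ** 0.5) + 1)):
            yield n

def euclid_numbers(k):
    """Yield the first k Euclid numbers E_n = p_n# + 1."""
    primorial = 1
    for p in islice(primes(), k):
        primorial *= p
        yield primorial + 1

print(list(euclid_numbers(8)))
# [3, 7, 31, 211, 2311, 30031, 510511, 9699691]
```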
https://en.wikipedia.org/wiki/Crystal%20Analysis
Crystal Analysis (a.k.a. Crystal Analysis Professional) is an On Line Analytical Processing (OLAP) application for analysing business data originally developed by Seagate Software. It was first released under the name Seagate Analysis as a free application written in Java released in 1999. After disappointing application performance, a decision was made to rewrite using ATL COM in C++. The initial rewrite only supported Microsoft Analysis Services, but support for other vendors soon followed, with Holos cubes in version 8.5, Essbase, IBM Db2 and SAP BW following in later releases. The web client was rewritten using an XSLT abstraction layer for the version 9.0 release, with better standards compliance to support Mozilla based browsers—this work also set the building blocks for support for Safari. Crystal Analysis relies on Crystal Enterprise for distribution of analytical applications created with it. Release timeline Seagate Analysis 1999, by Seagate Software Crystal Analysis Professional v8.0, 29 May 2001 by Crystal Decisions Crystal Analysis Professional v8.1, Q4 2001 by Crystal Decisions Crystal Analysis Professional v8.5 9 July 2002 , by Crystal Decisions Crystal Analysis Professional v9.0 9 April 2003 , by Crystal Decisions Crystal Analysis Professional v10.0 8 January 2004 , by Business Objects Crystal Analysis Professional v11.0 31 January 2005, by Business Objects Crystal Analysis Professional v11.0 Release 2 30 November 2005 , by Business Objects Future versions will be released under the name, BusinessObjects OLAP Intelligence. External links Product page at Business Objects Business intelligence software Online analytical processing
https://en.wikipedia.org/wiki/Fundamental%20polygon
In mathematics, a fundamental polygon can be defined for every compact Riemann surface of genus greater than 0. It encodes not only the topology of the surface through its fundamental group but also determines the Riemann surface up to conformal equivalence. By the uniformization theorem, every compact Riemann surface has simply connected universal covering surface given by exactly one of the following: the Riemann sphere, the complex plane, the unit disk D or equivalently the upper half-plane H. In the first case of genus zero, the surface is conformally equivalent to the Riemann sphere. In the second case of genus one, the surface is conformally equivalent to a torus C/Λ for some lattice Λ in C. The fundamental polygon of Λ, if assumed convex, may be taken to be either a period parallelogram or a centrally symmetric hexagon, a result first proved by Fedorov in 1891. In the last case of genus g > 1, the Riemann surface is conformally equivalent to H/Γ where Γ is a Fuchsian group of Möbius transformations. A fundamental domain for Γ is given by a convex polygon for the hyperbolic metric on H. These can be defined by Dirichlet polygons and have an even number of sides. The structure of the fundamental group Γ can be read off from such a polygon. Using the theory of quasiconformal mappings and the Beltrami equation, it can be shown there is a canonical convex Dirichlet polygon with 4g sides, first defined by Fricke, which corresponds to the standard presentation of Γ as the group with 2g generators a1, b1, a2, b2, ..., ag, bg and the single relation [a1,b1][a2,b2] ⋅⋅⋅ [ag,bg] = 1, where [a,b] = a b a−1b−1. Any Riemannian metric on an oriented closed 2-manifold M defines a complex structure on M, making M a compact Riemann surface. Through the use of fundamental polygons, it follows that two oriented closed 2-manifolds are classified by their genus, that is half the rank of the Abelian group Γ/[Γ,Γ], where Γ = 1(M). Moreover, it also follows from the theory of
https://en.wikipedia.org/wiki/Severi%E2%80%93Brauer%20variety
In mathematics, a Severi–Brauer variety over a field K is an algebraic variety V which becomes isomorphic to a projective space over an algebraic closure of K. The varieties are associated to central simple algebras in such a way that the algebra splits over K if and only if the variety has a rational point over K. studied these varieties, and they are also named after Richard Brauer because of their close relation to the Brauer group. In dimension one, the Severi–Brauer varieties are conics. The corresponding central simple algebras are the quaternion algebras. The algebra (a,b)K corresponds to the conic C(a,b) with equation and the algebra (a,b)K splits, that is, (a,b)K is isomorphic to a matrix algebra over K, if and only if C(a,b) has a point defined over K: this is in turn equivalent to C(a,b) being isomorphic to the projective line over K. Such varieties are of interest not only in diophantine geometry, but also in Galois cohomology. They represent (at least if K is a perfect field) Galois cohomology classes in H1(PGLn), where PGLn is the projective linear group, and n is the dimension of the variety V. There is a short exact sequence 1 → GL1 → GLn → PGLn → 1 of algebraic groups. This implies a connecting homomorphism H1(PGLn) → H2(GL1) at the level of cohomology. Here H2(GL1) is identified with the Brauer group of K, while the kernel is trivial because H1(GLn) = {1} by an extension of Hilbert's Theorem 90. Therefore, Severi–Brauer varieties can be faithfully represented by Brauer group elements, i.e. classes of central simple algebras. Lichtenbaum showed that if X is a Severi–Brauer variety over K then there is an exact sequence Here the map δ sends 1 to the Brauer class corresponding to X. As a consequence, we see that if the class of X has order d in the Brauer group then there is a divisor class of degree d on X. The associated linear system defines the d-dimensional embedding of X over a splitting field L. See also projective bundle
https://en.wikipedia.org/wiki/Image%20plane
In 3D computer graphics, the image plane is that plane in the world which is identified with the plane of the display monitor used to view the image that is being rendered. It is also referred to as screen space. If one makes the analogy of taking a photograph to rendering a 3D image, the surface of the film is the image plane. In this case, the viewing transformation is a projection that maps the world onto the image plane. A rectangular region of this plane, called the viewing window or viewport, maps to the monitor. This establishes the mapping between pixels on the monitor and points (or rather, rays) in the 3D world. The plane is not usually an actual geometric object in a 3D scene, but instead is usually a collection of target coordinates or dimensions that are used during the rasterization process so the final output can be displayed as intended on the physical screen. In optics, the image plane is the plane that contains the object's projected image, and lies beyond the back focal plane. See also Focal plane Picture plane Projection plane Real image References External links 3D computer graphics Planes (geometry)
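A minimal sketch of the mapping described above, assuming a simple pinhole-style camera with the image plane at z = f: a camera-space point is projected onto the plane, and a viewing-window region of that plane is then mapped to pixel coordinates. All names and numeric values are illustrative assumptions.

```python
def project_to_image_plane(point, focal_length=1.0):
    """Project a 3D camera-space point (x, y, z) onto the image plane z = focal_length."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point must be in front of the camera (z > 0)")
    return (focal_length * x / z, focal_length * y / z)

def viewport_to_pixels(u, v, window=(-1.0, 1.0, -1.0, 1.0), resolution=(640, 480)):
    """Map image-plane coordinates inside the viewing window to pixel coordinates."""
    left, right, bottom, top = window
    width, height = resolution
    px = (u - left) / (right - left) * (width - 1)
    py = (top - v) / (top - bottom) * (height - 1)   # flip y: screen origin is top-left
    return (px, py)

u, v = project_to_image_plane((0.5, 0.25, 2.0))
print(viewport_to_pixels(u, v))
```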
https://en.wikipedia.org/wiki/Unbeatable%20strategy
In biology, the idea of an unbeatable strategy was proposed by W.D. Hamilton in his 1967 paper on sex ratios in Science. In this paper Hamilton discusses sex ratios as strategies in a game, and cites Verner as using this language in his 1965 paper which "claims to show that, given factors causing fluctuations of the population's primary sex ratio, a 1:1 sex-ratio production proves the best overall genotypic strategy". "In the way in which the success of a chosen sex ratio depends on choices made by the co-parasitizing females, this problem resembles certain problems discussed in the "theory of games." In the foregoing analysis a game-like element, of a kind, was present and made necessary the use of the word unbeatable to describe the ratio finally established. This word was applied in just the same sense in which it could be applied to the "minimax" strategy of a zero-sum two-person game. Such a strategy should not, without qualification, be called optimum because it is not optimum against -although unbeaten by- any strategy differing from itself. This is exactly the case with the "unbeatable" sex ratios referred to." Hamilton (1967). "[...] But if, on the contrary, players of such a game were motivated to outscore, they would find that is beaten by a higher ratio, ; the value of which gives its player the greatest possible advantage over the player playing , is found to be given by the relationship and shows to be the unbeatable play." Hamilton (1967). The concept can be traced through R.A. Fisher (1930) to Darwin (1859); see Edwards (1998). Hamilton did not explicitly define the term "unbeatable strategy" or apply the concept beyond the evolution of sex-ratios, but the idea was very influential. George R. Price generalised the verbal argument, which was then formalised mathematically by John Maynard Smith, into the evolutionarily stable strategy (ESS). References External links http://www.iiasa.ac.at/Publications/Documents/IR-02-019.pdf Strateg
https://en.wikipedia.org/wiki/Genetic%20load
Genetic load is the difference between the fitness of an average genotype in a population and the fitness of some reference genotype, which may be either the best present in a population, or may be the theoretically optimal genotype. The average individual taken from a population with a low genetic load will generally, when grown in the same conditions, have more surviving offspring than the average individual from a population with a high genetic load. Genetic load can also be seen as reduced fitness at the population level compared to what the population would have if all individuals had the reference high-fitness genotype. High genetic load may put a population in danger of extinction. Fundamentals Consider n genotypes , which have the fitnesses and frequencies , respectively. Ignoring frequency-dependent selection, the genetic load may be calculated as: where is either some theoretical optimum, or the maximum fitness observed in the population. In calculating the genetic load, must be actually found in at least a single copy in the population, and is the average fitness calculated as the mean of all the fitnesses weighted by their corresponding frequencies: where the genotype is and has the fitness and frequency and respectively. One problem with calculating genetic load is that it is difficult to evaluate either the theoretically optimal genotype, or the maximally fit genotype actually present in the population. This is not a problem within mathematical models of genetic load, or for empirical studies that compare the relative value of genetic load in one setting to genetic load in another. Causes Deleterious mutation Deleterious mutation load is the main contributing factor to genetic load overall. The Haldane-Muller theorem of mutation–selection balance says that the load depends only on the deleterious mutation rate and not on the selection coefficient. Specifically, relative to an ideal genotype of fitness 1, the mean population fitness is
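The formulas referred to above can be written out as follows, using assumed symbols p_i for the genotype frequencies, w_i for their fitnesses, and w_max for the reference (theoretically optimal or best observed) fitness.

```latex
\bar{w} \;=\; \sum_{i=1}^{n} p_i\, w_i ,
\qquad
L \;=\; \frac{w_{\max} - \bar{w}}{w_{\max}} .
```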
https://en.wikipedia.org/wiki/Clamshell%20design
A clamshell design is a kind of form factor for electronic devices in the shape of a clamshell. Mobile phones, handheld game consoles, and especially laptops, are often designed like clamshells. Clamshell devices are usually made of two sections connected by a hinge, each section containing either a flat panel display or an alphanumeric keyboard/keypad, which can fold into contact together like a bivalve shell. A clamshell mobile phone is sometimes also called a flip phone, especially if the hinge is on the short edge. If the hinge is on a long edge (e.g., Nokia Communicators), the device is more likely to be called just a "clamshell" rather than a flip phone. Generally speaking, the interface components such as keys and display are kept inside the closed clamshell, protecting them from damage and unintentional use while also making the device shorter or narrower so it is easier to carry around. In many cases, opening the clamshell offers more surface area than when the device is closed, allowing interface components to be larger and easier to use than on devices which do not flip open. A disadvantage of the clamshell design is the connecting hinge, which is prone to fatigue or failure. Etymology The clamshell form factor is most closely associated with the cell phone market, as Motorola used to have a trademark on the term "flip phone", but the term "flip phone" has become genericized to be used more frequently than "clamshell" in colloquial speech. History A "flip phone" like communication device appears in chapter 3 of Armageddon 2419 A.D., a science fiction novella by Philip Francis Nowlan, which was first published in the August 1928 issue of the pulp magazine Amazing Stories: "Alan took a compact packet about six inches square from a holster attached to her belt and handed it to Wilma. So far as I could see, it had no special receiver for the ear. Wilma merely threw back a lid, as though she was opening a book, and began to talk. The voice that came bac
https://en.wikipedia.org/wiki/Backtracking%20line%20search
In (unconstrained) mathematical optimization, a backtracking line search is a line search method to determine the amount to move along a given search direction. Its use requires that the objective function is differentiable and that its gradient is known. The method involves starting with a relatively large estimate of the step size for movement along the line search direction, and iteratively shrinking the step size (i.e., "backtracking") until a decrease of the objective function is observed that adequately corresponds to the amount of decrease that is expected, based on the step size and the local gradient of the objective function. A common stopping criterion is the Armijo–Goldstein condition. Backtracking line search is typically used for gradient descent (GD), but it can also be used in other contexts. For example, it can be used with Newton's method if the Hessian matrix is positive definite. Motivation Given a starting position and a search direction , the task of a line search is to determine a step size that adequately reduces the objective function (assumed i.e. continuously differentiable), i.e., to find a value of that reduces relative to . However, it is usually undesirable to devote substantial resources to finding a value of to precisely minimize . This is because the computing resources needed to find a more precise minimum along one particular direction could instead be employed to identify a better search direction. Once an improved starting point has been identified by the line search, another subsequent line search will ordinarily be performed in a new direction. The goal, then, is just to identify a value of that provides a reasonable amount of improvement in the objective function, rather than to find the actual minimizing value of . The backtracking line search starts with a large estimate of and iteratively shrinks it. The shrinking continues until a value is found that is small enough to provide a decrease in the objective func
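A minimal sketch of the procedure described above, using the Armijo sufficient-decrease test as the stopping criterion; the parameter names and default values (alpha0, tau, c) are illustrative assumptions rather than values prescribed by the method.

```python
import numpy as np

def backtracking_line_search(f, grad_f, x, p, alpha0=1.0, tau=0.5, c=1e-4):
    """Shrink the step size alpha until the Armijo (sufficient decrease) condition holds:
       f(x + alpha * p) <= f(x) + c * alpha * grad_f(x) @ p
    """
    alpha = alpha0
    fx = f(x)
    slope = grad_f(x) @ p          # directional derivative; negative for a descent direction
    while f(x + alpha * p) > fx + c * alpha * slope:
        alpha *= tau               # backtrack
    return alpha

# Example: one gradient-descent step on f(x) = ||x||^2
f = lambda x: float(x @ x)
grad = lambda x: 2 * x
x = np.array([3.0, -2.0])
p = -grad(x)                       # steepest-descent direction
alpha = backtracking_line_search(f, grad, x, p)
print(alpha, x + alpha * p)
```

Shrinking by a fixed factor tau keeps each trial cheap; a more careful implementation would also guard against alpha underflowing to zero when no acceptable step exists numerically.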
https://en.wikipedia.org/wiki/Deep-level%20transient%20spectroscopy
Deep-level transient spectroscopy (DLTS) is an experimental tool for studying electrically active defects (known as charge carrier traps) in semiconductors. DLTS establishes fundamental defect parameters and measures their concentration in the material. Some of the parameters are considered as defect "finger prints" used for their identifications and analysis. DLTS investigates defects present in a space charge (depletion) region of a simple electronic device. The most commonly used are Schottky diodes or p-n junctions. In the measurement process the steady-state diode reverse polarization voltage is disturbed by a voltage pulse. This voltage pulse reduces the electric field in the space charge region and allows free carriers from the semiconductor bulk to penetrate this region and recharge the defects causing their non-equilibrium charge state. After the pulse, when the voltage returns to its steady-state value, the defects start to emit trapped carriers due to the thermal emission process. The technique observes the device space charge region capacitance where the defect charge state recovery causes the capacitance transient. The voltage pulse followed by the defect charge state recovery are cycled allowing an application of different signal processing methods for defect recharging process analysis. The DLTS technique has a higher sensitivity than almost any other semiconductor diagnostic technique. For example, in silicon it can detect impurities and defects at a concentration of one part in 1012 of the material host atoms. This feature together with a technical simplicity of its design made it very popular in research labs and semiconductor material production factories. The DLTS technique was pioneered by David Vern Lang at Bell Laboratories in 1974. A US Patent was awarded to Lang in 1975. DLTS methods Conventional DLTS In conventional DLTS the capacitance transients are investigated by using a lock-in amplifier or double box-car averaging technique whe
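An illustrative sketch, not taken from any particular instrument, of the conventional double box-car rate window described above: a single-exponential capacitance transient with a thermally activated emission rate produces a peak in the DLTS signal as the temperature is scanned. All numerical parameters are assumptions chosen for the example.

```python
import numpy as np

# Assumed simple model: C(t) = C_inf - dC * exp(-e_n * t), with the thermal emission
# rate following an Arrhenius-type law e_n(T) = A * T^2 * exp(-Ea / (k_B * T)).
k_B = 8.617e-5          # Boltzmann constant, eV/K
Ea  = 0.45              # assumed trap activation energy, eV
A   = 1e7               # assumed pre-exponential factor, 1/(s*K^2)
dC  = 0.05              # transient amplitude, arbitrary capacitance units
t1, t2 = 1e-3, 2e-3     # box-car gate times, s

def emission_rate(T):
    return A * T**2 * np.exp(-Ea / (k_B * T))

def dlts_signal(T):
    e_n = emission_rate(T)
    # Double box-car: S(T) = C(t1) - C(t2); the constant C_inf cancels out.
    return -dC * (np.exp(-e_n * t1) - np.exp(-e_n * t2))

temps = np.linspace(150, 300, 301)
signal = dlts_signal(temps)
T_peak = temps[np.argmax(np.abs(signal))]
print(f"DLTS peak near {T_peak:.0f} K for the rate window t1={t1} s, t2={t2} s")
```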
https://en.wikipedia.org/wiki/ALTQ
ALTQ (ALTernate Queueing) is the network scheduler for Berkeley Software Distribution. ALTQ provides queueing disciplines, and other components related to quality of service (QoS), required to realize resource sharing. It is most commonly implemented on BSD-based routers. ALTQ is included in the base distribution of FreeBSD, NetBSD, and DragonFly BSD, and was integrated into the pf packet filter of OpenBSD but later replaced by a new queueing subsystem (it was deprecated with the OpenBSD 5.5 release, and completely removed with 5.6 in 2014). With ALTQ, packets can be assigned to queues for the purpose of bandwidth control. The scheduler defines the algorithm used to decide which packets get delayed, dropped or sent out immediately. There are five schedulers currently supported in the FreeBSD implementation of ALTQ: CBQ — Class-based Queueing. Queues attached to an interface build a tree, thus each queue can have further child queues. Each queue can have a priority and a bandwidth assigned. Priority mainly controls the time packets take to get sent out, while bandwidth primarily affects throughput. CoDel — Controlled Delay. Attempts to combat bufferbloat. FAIRQ — Fair Queuing. Attempts to fairly distribute bandwidth among all connections. HFSC — Hierarchical Fair Service Curve. Queues attached to an interface build a tree, thus each queue can have further child queues. Each queue can have a priority and a bandwidth assigned. Priority mainly controls the time packets take to get sent out, while bandwidth primarily affects throughput. PRIQ — Priority Queueing. Queues are attached flat to the interface, thus queues cannot have further child queues. Each queue has a unique priority assigned, ranging from 0 to 15. Packets in the queue with the highest priority are processed first. See also Traffic shaping KAME project References External links ALTQ home Configuring ALTQ in OpenBSD 5.4 and earlier PF and ALTQ documentation by the FreeBSD project pfSense Documentation
https://en.wikipedia.org/wiki/Tetrasodium%20pyrophosphate
Tetrasodium pyrophosphate, also called sodium pyrophosphate, tetrasodium phosphate or TSPP, is an inorganic compound with the formula Na4P2O7. As a salt, it is a white, water-soluble solid. It is composed of pyrophosphate anion and sodium ions. Toxicity is approximately twice that of table salt when ingested orally. Also known is the decahydrate Na4P2O710(H2O). Use Tetrasodium pyrophosphate is used as a buffering agent, an emulsifier, a dispersing agent, and a thickening agent, and is often used as a food additive. Common foods containing tetrasodium pyrophosphate include chicken nuggets, marshmallows, pudding, crab meat, imitation crab, canned tuna, and soy-based meat alternatives and cat foods and cat treats where it is used as a palatability enhancer. In toothpaste and dental floss, tetrasodium pyrophosphate acts as a tartar control agent, serving to remove calcium and magnesium from saliva and thus preventing them from being deposited on teeth. Tetrasodium pyrophosphate is used in commercial dental rinses before brushing to aid in plaque reduction. Tetrasodium pyrophosphate is sometimes used in household detergents to prevent similar deposition on clothing, but due to its phosphate content it causes eutrophication of water, promoting algae growth. Production Tetrasodium pyrophosphate is produced by the reaction of furnace-grade phosphoric acid with sodium carbonate to form disodium phosphate, which is then heated to 450 °C to form tetrasodium pyrophosphate: 2 Na2HPO4 → Na4P2O7 + H2O References Sodium compounds Pyrophosphates Food additives Edible thickening agents
https://en.wikipedia.org/wiki/Wolfe%20conditions
In the unconstrained minimization problem, the Wolfe conditions are a set of inequalities for performing inexact line search, especially in quasi-Newton methods, first published by Philip Wolfe in 1969. In these methods the idea is to find for some smooth . Each step often involves approximately solving the subproblem where is the current best guess, is a search direction, and is the step length. The inexact line searches provide an efficient way of computing an acceptable step length that reduces the objective function 'sufficiently', rather than minimizing the objective function over exactly. A line search algorithm can use Wolfe conditions as a requirement for any guessed , before finding a new search direction . Armijo rule and curvature A step length is said to satisfy the Wolfe conditions, restricted to the direction , if the following two inequalities hold: with . (In examining condition (ii), recall that to ensure that is a descent direction, we have , as in the case of gradient descent, where , or Newton–Raphson, where with positive definite.) is usually chosen to be quite small while is much larger; Nocedal and Wright give example values of and for Newton or quasi-Newton methods and for the nonlinear conjugate gradient method. Inequality i) is known as the Armijo rule and ii) as the curvature condition; i) ensures that the step length decreases 'sufficiently', and ii) ensures that the slope has been reduced sufficiently. Conditions i) and ii) can be interpreted as respectively providing an upper and lower bound on the admissible step length values. Strong Wolfe condition on curvature Denote a univariate function restricted to the direction as . The Wolfe conditions can result in a value for the step length that is not close to a minimizer of . If we modify the curvature condition to the following, then i) and iii) together form the so-called strong Wolfe conditions, and force to lie close to a critical point of . Rationale The
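The two inequalities referred to above, together with the strong form of the curvature condition, in their usual statement (notation assumed: iterate x_k, descent direction p_k, step length alpha_k, and constants 0 < c_1 < c_2 < 1):

```latex
% (i) Armijo (sufficient decrease) rule:
f(\mathbf{x}_k + \alpha_k \mathbf{p}_k) \;\le\; f(\mathbf{x}_k) + c_1 \alpha_k\, \mathbf{p}_k^{\mathsf T} \nabla f(\mathbf{x}_k)
% (ii) curvature condition:
-\mathbf{p}_k^{\mathsf T} \nabla f(\mathbf{x}_k + \alpha_k \mathbf{p}_k) \;\le\; -c_2\, \mathbf{p}_k^{\mathsf T} \nabla f(\mathbf{x}_k)
% (iii) strong curvature condition (replacing (ii) in the strong Wolfe conditions):
\bigl|\mathbf{p}_k^{\mathsf T} \nabla f(\mathbf{x}_k + \alpha_k \mathbf{p}_k)\bigr| \;\le\; c_2 \bigl|\mathbf{p}_k^{\mathsf T} \nabla f(\mathbf{x}_k)\bigr|,
\qquad 0 < c_1 < c_2 < 1 .
% Commonly cited choices: c_1 = 10^{-4}, with c_2 = 0.9 for Newton or quasi-Newton
% methods and c_2 = 0.1 for the nonlinear conjugate gradient method.
```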
https://en.wikipedia.org/wiki/California%20Games
California Games is a 1987 sports video game originally released by Epyx for the Apple II and Commodore 64, and ported to other home computers and video game consoles. Branching from their Summer Games and Winter Games series, this game consists of a collection of outdoor sports purportedly popular in California. The game was successful and spawned a sequel, California Games II. Gameplay The events available vary slightly depending on the platform, but include all of the following: Half-pipe Footbag Surfing (starring Rippin' Rick) Roller skating BMX Flying disc Development Several members of the development team moved on to other projects. Chuck Sommerville, the designer of the half-pipe game in California Games, later developed the game Chip's Challenge, while Ken Nicholson, the designer of the footbag game, was the inventor of the technology used in Microsoft's DirectX. Kevin Norman, the designer of the BMX game, went on to found the educational science software company Norman & Globus, makers of the ElectroWiz series of products. The sound design for the original version of California Games was done by Chris Grigg, member of the band Negativland. Ports Originally written for the Apple II and Commodore 64, it was eventually ported to Amiga, Apple IIGS, Atari 2600, Atari ST, MS-DOS, Genesis, Amstrad CPC, ZX Spectrum, Nintendo Entertainment System, MSX and Master System. The Atari Lynx version was the pack-in game for the system when it was launched in June 1989. An Atari XE version was planned and contracted out by Atari Corp. to Epyx in 1988 but no code was delivered by the publication deadline. Reception California Games was a commercial blockbuster. With more than 300,000 copies sold in the first nine months, it was the most-successful Epyx game, outselling each of the four previous and two subsequent titles in the company's "Games" series. CEO David Shannon Morse said that it was the first Epyx game to appeal equally to boys and girls during playte
https://en.wikipedia.org/wiki/Dave%20Smith%20%28engineer%29
David Joseph Smith (April 2, 1950 – May 31, 2022) was an American engineer and founder of the synthesizer company Sequential. Smith created the first polyphonic synthesizer with fully programmable memory, the Prophet-5, which had a major impact on the music industry. He also led the development of MIDI, a standard interface protocol for synchronizing electronic instruments and audio equipment. In 2005, Smith was inducted into the Mix Foundation TECnology (Technical Excellence and Creativity) Hall of Fame for the MIDI specification. In 2013, he and the Japanese businessman Ikutaro Kakehashi received a Technical Grammy Award for their contributions to the development of MIDI. Career Smith was born on April 2, 1950, in San Francisco. He had degrees in both Computer Science and Electronic Engineering from UC Berkeley. Sequential Circuits and Prophet-5 He purchased a Minimoog in 1972 and later built his own analog sequencer, founding Sequential Circuits in 1974 and advertising his product for sale in Rolling Stone. By 1977 he was working at Sequential full-time, and later that year he designed the Prophet 5, the world's first microprocessor-based musical instrument and also the first programmable polyphonic synth, an innovation that marked a crucial step forward in synthesizer design and functionality. Sequential went on to become one of the most successful music synthesizer manufacturers of the time. MIDI In 1981 Smith set out to create a standard protocol for communication between electronic musical instruments from different manufacturers worldwide. He presented a paper outlining the idea of a Universal Synthesizer Interface (USI) to the Audio Engineering Society (AES) in 1981 after meetings with Tom Oberheim and Roland founder Ikutaro Kakehashi. After some enhancements and revisions, the new standard was introduced as "Musical Instrument Digital Interface" (MIDI) at the Winter NAMM Show in 1983, when a Sequential Circuits Prophet-600 was successfully connecte
https://en.wikipedia.org/wiki/Cyclic%20number
A cyclic number is an integer for which cyclic permutations of the digits are successive integer multiples of the number. The most widely known is the six-digit number 142857, whose first six integer multiples are 142857 × 1 = 142857 142857 × 2 = 285714 142857 × 3 = 428571 142857 × 4 = 571428 142857 × 5 = 714285 142857 × 6 = 857142 Details To qualify as a cyclic number, it is required that consecutive multiples be cyclic permutations. Thus, the number 076923 would not be considered a cyclic number, because even though all cyclic permutations are multiples, they are not consecutive integer multiples: 076923 × 1 = 076923 076923 × 3 = 230769 076923 × 4 = 307692 076923 × 9 = 692307 076923 × 10 = 769230 076923 × 12 = 923076 The following trivial cases are typically excluded: single digits, e.g.: 5 repeated digits, e.g.: 555 repeated cyclic numbers, e.g.: 142857142857 If leading zeros are not permitted on numerals, then 142857 is the only cyclic number in decimal, due to the necessary structure given in the next section. Allowing leading zeros, the sequence of cyclic numbers begins: (106 − 1) / 7 = 142857 (6 digits) (1016 − 1) / 17 = 0588235294117647 (16 digits) (1018 − 1) / 19 = 052631578947368421 (18 digits) (1022 − 1) / 23 = 0434782608695652173913 (22 digits) (1028 − 1) / 29 = 0344827586206896551724137931 (28 digits) (1046 − 1) / 47 = 0212765957446808510638297872340425531914893617 (46 digits) (1058 − 1) / 59 = 0169491525423728813559322033898305084745762711864406779661 (58 digits) (1060 − 1) / 61 = 016393442622950819672131147540983606557377049180327868852459 (60 digits) (1096 − 1) / 97 = 010309278350515463917525773195876288659793814432989690721649484536082474226804123711340206185567 (96 digits) Relation to repeating decimals Cyclic numbers are related to the recurring digital representations of unit fractions. A cyclic number of length L is the digital representation of 1/(L + 1). Conversely, if the digital period of 1/p (where p is prime) is p − 1, then
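A short illustrative sketch (not from the article) of the repeating-decimal connection: a prime p for which 1/p has decimal period p − 1 (a full reptend prime) yields the cyclic number (10^(p−1) − 1)/p, written with leading zeros allowed.

```python
def decimal_period(p):
    """Multiplicative order of 10 modulo p, i.e. the period length of 1/p (p prime, p not 2 or 5)."""
    r, k = 10 % p, 1
    while r != 1:
        r = (r * 10) % p
        k += 1
    return k

def cyclic_number(p):
    """Return the cyclic number derived from 1/p as a digit string, or None if p is not full reptend."""
    if decimal_period(p) != p - 1:
        return None
    return str((10 ** (p - 1) - 1) // p).zfill(p - 1)   # keep leading zeros

for p in [7, 17, 19, 23, 29]:
    print(p, cyclic_number(p))
```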
https://en.wikipedia.org/wiki/Invariable%20plane
The invariable plane of a planetary system, also called Laplace's invariable plane, is the plane passing through its barycenter (center of mass) perpendicular to its angular momentum vector. Solar System In the Solar System, about 98% of this effect is contributed by the orbital angular momenta of the four jovian planets (Jupiter, Saturn, Uranus, and Neptune). The invariable plane is within 0.5° of the orbital plane of Jupiter, and may be regarded as the weighted average of all planetary orbital and rotational planes. Terminology and definition This plane is sometimes called the "Laplacian" or "Laplace plane" or the "invariable plane of Laplace", though it should not be confused with the Laplace plane, which is the plane about which the individual orbital planes of planetary satellites precess. Both derive from the work of (and are at least sometimes named for) the French astronomer Pierre Simon Laplace. The two are equivalent only in the case where all perturbers and resonances are far from the precessing body. The invariable plane is derived from the sum of angular momenta, and is "invariable" over the entire system, while the Laplace plane for different orbiting objects within a system may be different. Laplace called the invariable plane the plane of maximum areas, where the "area" in this case is the product of the radius and its time rate of change , that is, its radial velocity, multiplied by the mass. Description The magnitude of the orbital angular momentum vector of a planet is where is the orbital radius of the planet (from the barycenter), is the mass of the planet, and is its orbital angular velocity. That of Jupiter contributes the bulk of the Solar System's angular momentum, 60.3%. Then comes Saturn at 24.5%, Neptune at 7.9%, and Uranus at 5.3%. The Sun forms a counterbalance to all of the planets, so it is near the barycenter when Jupiter is on one side and the other three jovian planets are diametrically opposite on the other side, but the S
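The elided expression for the magnitude, with the symbols as described in the text (m the planet's mass, r its orbital radius from the barycenter, ω its orbital angular velocity):

```latex
L \;=\; m\, r^{2}\, \omega .
```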
https://en.wikipedia.org/wiki/Timeboxing
In agile principles, timeboxing allocates a maximum unit of time to an activity, called a timebox, within which a planned activity takes place. It is used by agile principles-based project management approaches and for personal time management. In project management Timeboxing is used as a project planning technique. The schedule is divided into a number of separate time periods (timeboxes), with each part having its own deliverables, deadline and budget. Sometimes referred to as schedule as independent variable (SAIV). "Timeboxing works best in multistage projects or tasks that take little time and you can fit them in the same time slot. It is also worth implementing in case of duties that have foreseeable time-frames of completion." As an alternative to fixing scope In project management, there are generally considered to be three constraints: time (sometimes schedule), cost (sometimes budget), and scope. (Quality is often added as a fourth constraint---represented as the middle of a triangle.) The assumption is that a change in one constraint will affect the others. Without timeboxing, projects usually work to a fixed scope, in which case when it becomes clear that some deliverables cannot be completed within the planned timescales, either the deadline has to be extended (to allow more time to complete the fixed scope) or more people are involved (to complete the fixed scope in the same time). Often both happen, resulting in delayed delivery, increased costs, and often reduced quality (as per The Mythical Man-Month principle). With timeboxing, the deadline is fixed, meaning that the scope would have to be reduced. As this means organizations have to focus on completing the most important deliverables first, timeboxing often goes hand-in-hand with a scheme for prioritizing of deliverables (such as with the MoSCoW method). To manage risk Timeboxes are used as a form of risk management, to explicitly identify uncertain task/time relationships, i.e., work
https://en.wikipedia.org/wiki/Blobotics
Blobotics is a term describing research into chemical-based computer processors based on ions rather than electrons. Andrew Adamatzky, a computer scientist at the University of the West of England, Bristol, used the term in an article in New Scientist on March 28, 2005. The aim is to create 'liquid logic gates' which would be 'infinitely reconfigurable and self-healing'. The process relies on the Belousov–Zhabotinsky reaction, a repeating cycle of three separate sets of reactions. Such a processor could form the basis of a robot which, using artificial sensors, interacts with its surroundings in a way that mimics living creatures. The coining of the term was featured by ABC radio in Australia. References Motoike I., Adamatzky A. "Three-valued logic gates in reaction-diffusion excitable media." Chaos, Solitons & Fractals 24 (2005), 107–114. Adamatzky, A. "Collision-based computing in Belousov–Zhabotinsky medium." Chaos, Solitons & Fractals 21 (5) (2004), 1259–1264. Robotics Classes of computers
https://en.wikipedia.org/wiki/Institute%20for%20Systems%20Biology
Institute for Systems Biology (ISB) is a non-profit research institution located in Seattle, Washington, United States. ISB concentrates on systems biology, the study of relationships and interactions between various parts of biological systems, and advocates an interdisciplinary approach to biological research. Goals Systems biology is the study of biological systems in a holistic manner by integrating data at all levels of the biological information hierarchy, from global down to the individual organism, and below down to the molecular level. The vision of ISB is to integrate these concepts using a cross-disciplinary approach combining the efforts of biologists, chemists, computer scientists, engineers, mathematicians, physicists, and physicians. On its website, ISB has defined four areas of focus: P4 Medicine - This acronym refers to predictive, preventive, personalized and participatory medicine, which focuses on wellness rather than mere treatment of disease. Global Health - Use of the systems approach towards the study of infectious diseases, vaccine development, emergence of chronic diseases, and maternal and child health. Sustainable Environment - Applying systems biology for a better understanding of the role of microbes in the environment and their relation to human health. Education & Outreach - Knowledge transfer to society through a variety of educational programs and partnerships, including the spin out of new companies. Early history Leroy Hood co-founded the Institute with Alan Aderem and Ruedi Aebersold in 2000. However, the story of how ISB got started actually begins in 1990. Lee Hood was the director of a large molecular biotechnology lab at the California Institute of Technology in Pasadena, and was a key advisor in the Human Genome Project, having overseen development of machines that were instrumental to its later success. The University of Washington (UW), like many other universities, was eager to recruit Hood, but had neither the
https://en.wikipedia.org/wiki/Convex%20optimization
Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets (or, equivalently, maximizing concave functions over convex sets). Many classes of convex optimization problems admit polynomial-time algorithms, whereas mathematical optimization is in general NP-hard. Convex optimization has applications in a wide range of disciplines, such as automatic control systems, estimation and signal processing, communications and networks, electronic circuit design, data analysis and modeling, finance, statistics (optimal experimental design), and structural optimization, where the approximation concept has proven to be efficient. With recent advancements in computing and optimization algorithms, convex programming is nearly as straightforward as linear programming. Definition A convex optimization problem is an optimization problem in which the objective function is a convex function and the feasible set is a convex set. A function mapping some subset of into is convex if its domain is convex and for all and all in its domain, the following condition holds: . A set S is convex if for all members and all , we have that . Concretely, a convex optimization problem is the problem of finding some attaining , where the objective function is convex, as is the feasible set . If such a point exists, it is referred to as an optimal point or solution; the set of all optimal points is called the optimal set. If is unbounded below over or the infimum is not attained, then the optimization problem is said to be unbounded. Otherwise, if is the empty set, then the problem is said to be infeasible. Standard form A convex optimization problem is in standard form if it is written as where: is the optimization variable; The objective function is a convex function; The inequality constraint functions , , are convex functions; The equality constraint functions , , are affine transformations, that
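The standard form sketched above can be written out as follows (a conventional statement; the indexing of the constraints is an assumption):

```latex
\begin{aligned}
\min_{\mathbf{x}} \quad & f(\mathbf{x}) \\
\text{subject to} \quad & g_i(\mathbf{x}) \le 0, \quad i = 1,\dots,m \qquad (g_i \text{ convex}) \\
& h_j(\mathbf{x}) = \mathbf{a}_j^{\mathsf T}\mathbf{x} - b_j = 0, \quad j = 1,\dots,p \qquad (h_j \text{ affine}).
\end{aligned}
```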
https://en.wikipedia.org/wiki/Vibration%20theory%20of%20olfaction
The vibration theory of smell proposes that a molecule's smell character is due to its vibrational frequency in the infrared range. This controversial theory is an alternative to the more widely accepted docking theory of olfaction (formerly termed the shape theory of olfaction), which proposes that a molecule's smell character is due to a range of weak non-covalent interactions between its protein odorant receptor (found in the nasal epithelium), such as electrostatic and Van der Waals interactions as well as H-bonding, dipole attraction, pi-stacking, metal ion, Cation–pi interaction, and hydrophobic effects, in addition to the molecule's conformation. Introduction The current vibration theory has recently been called the "swipe card" model, in contrast with "lock and key" models based on shape theory. As proposed by Luca Turin, the odorant molecule must first fit in the receptor's binding site. Then it must have a vibrational energy mode compatible with the difference in energies between two energy levels on the receptor, so electrons can travel through the molecule via inelastic electron tunneling, triggering the signal transduction pathway. The vibration theory is discussed in a popular but controversial book by Chandler Burr. The odor character is encoded in the ratio of activities of receptors tuned to different vibration frequencies, in the same way that color is encoded in the ratio of activities of cone cell receptors tuned to different frequencies of light. An important difference, though, is that the odorant has to be able to become resident in the receptor for a response to be generated. The time an odorant resides in a receptor depends on how strongly it binds, which in turn determines the strength of the response; the odor intensity is thus governed by a similar mechanism to the "lock and key" model. For a pure vibrational theory, the differing odors of enantiomers, which possess identical vibrations, cannot be explained. However, once the link betwe
https://en.wikipedia.org/wiki/Meta-process%20modeling
Meta-process modeling is a type of metamodeling used in software engineering and systems engineering for the analysis and construction of models applicable and useful to some predefined problems. Meta-process modeling supports the effort of creating flexible process models. The purpose of process models is to document and communicate processes and to enhance the reuse of processes. Thus, processes can be better taught and executed. Results of using meta-process models are an increased productivity of process engineers and an improved quality of the models they produce. Overview Meta-process modeling focuses on and supports the process of constructing process models. Its main concern is to improve process models and to make them evolve, which in turn, will support the development of systems. This is important due to the fact that "processes change with time and so do the process models underlying them. Thus, new processes and models may have to be built and existing ones improved". "The focus has been to increase the level of formality of process models in order to make possible their enactment in process-centred software environments". A process meta-model is a meta model, "a description at the type level of a process model. A process model is, thus, an instantiation of a process meta-model. [..] A meta-model can be instantiated several times in order to define various process models. A process meta-model is at the meta-type level with respect to a process." There exist standards for several domains: Software engineering Software Process Engineering Metamodel (SPEM) which is defined as a profile (UML) by the Object Management Group. Topics in metadata modeling There are different techniques for constructing process models. "Construction techniques used in the information systems area have developed independently of those in software engineering. In information systems, construction techniques exploit the notion of a meta-model and the two principal techniqu
https://en.wikipedia.org/wiki/Process%20modeling
The term process model is used in various contexts. For example, in business process modeling the enterprise process model is often referred to as the business process model. Overview Process models are processes of the same nature that are classified together into a model. Thus, a process model is a description of a process at the type level. Since the process model is at the type level, a process is an instantiation of it. The same process model is used repeatedly for the development of many applications and thus, has many instantiations. One possible use of a process model is to prescribe how things must/should/could be done in contrast to the process itself which is really what happens. A process model is roughly an anticipation of what the process will look like. What the process shall be will be determined during actual system development. The goals of a process model are to be: Descriptive Track what actually happens during a process Take the point of view of an external observer who looks at the way a process has been performed and determines the improvements that must be made to make it perform more effectively or efficiently. Prescriptive Define the desired processes and how they should/could/might be performed. Establish rules, guidelines, and behavior patterns which, if followed, would lead to the desired process performance. They can range from strict enforcement to flexible guidance. Explanatory Provide explanations about the rationale of processes. Explore and evaluate the several possible courses of action based on rational arguments. Establish an explicit link between processes and the requirements that the model needs to fulfill. Pre-defines points at which data can be extracted for reporting purposes. Purpose From a theoretical point of view, the meta-process modeling explains the key concepts needed to describe what happens in the development process, on what, when it happens, and why. From an operational point of view, the meta-process mode
https://en.wikipedia.org/wiki/Metamodeling
A metamodel is a model of a model, and metamodeling is the process of generating such metamodels. Thus metamodeling or meta-modeling is the analysis, construction and development of the frames, rules, constraints, models and theories applicable and useful for modeling a predefined class of problems. As its name implies, this concept applies the notions of meta- and modeling in software engineering and systems engineering. Metamodels are of many types and have diverse applications. Overview A metamodel/ surrogate model is a model of the model, i.e. a simplified model of an actual model of a circuit, system, or software like entity. Metamodel can be a mathematical relation or algorithm representing input and output relations. A model is an abstraction of phenomena in the real world; a metamodel is yet another abstraction, highlighting properties of the model itself. A model conforms to its metamodel in the way that a computer program conforms to the grammar of the programming language in which it is written. Various types of metamodels include polynomial equations, neural network, Kriging, etc. "Metamodeling" is the construction of a collection of "concepts" (things, terms, etc.) within a certain domain. Metamodeling typically involves studying the output and input relationships and then fitting right metamodels to represent that behavior. Common uses for metamodels are: As a schema for semantic data that needs to be exchanged or stored As a language that supports a particular method or process As a language to express additional semantics of existing information As a mechanism to create tools that work with a broad class of models at run time As a schema for modeling and automatically exploring sentences of a language with applications to automated test synthesis As an approximation of a higher-fidelity model for use when reducing time, cost, or computational effort is necessary Because of the "meta" character of metamodeling, both the praxis and theory
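An illustrative sketch of the surrogate/metamodel idea mentioned above: sample an "expensive" model at a few design points, fit a cheap polynomial metamodel to the samples, and query the fit instead. The example function and the sample counts are assumptions chosen for the illustration.

```python
import numpy as np

def expensive_model(x):
    """Stand-in for a costly simulation with a single input and output."""
    return np.sin(3 * x) + 0.5 * x**2

# Sample the expensive model at a handful of design points ...
x_train = np.linspace(0.0, 2.0, 8)
y_train = expensive_model(x_train)

# ... and fit a cheap polynomial surrogate (metamodel) to those samples.
coeffs = np.polyfit(x_train, y_train, deg=4)
surrogate = np.poly1d(coeffs)

x_new = 1.3
print("expensive:", expensive_model(x_new), "surrogate:", surrogate(x_new))
```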
https://en.wikipedia.org/wiki/Abstract%20index%20notation
Abstract index notation (also referred to as slot-naming index notation) is a mathematical notation for tensors and spinors that uses indices to indicate their types, rather than their components in a particular basis. The indices are mere placeholders, not related to any basis and, in particular, are non-numerical. Thus it should not be confused with the Ricci calculus. The notation was introduced by Roger Penrose as a way to use the formal aspects of the Einstein summation convention to compensate for the difficulty in describing contractions and covariant differentiation in modern abstract tensor notation, while preserving the explicit covariance of the expressions involved. Let be a vector space, and its dual space. Consider, for example, an order-2 covariant tensor . Then can be identified with a bilinear form on . In other words, it is a function of two arguments in which can be represented as a pair of slots: Abstract index notation is merely a labelling of the slots with Latin letters, which have no significance apart from their designation as labels of the slots (i.e., they are non-numerical): A tensor contraction (or trace) between two tensors is represented by the repetition of an index label, where one label is contravariant (an upper index corresponding to the factor ) and one label is covariant (a lower index corresponding to the factor ). Thus, for instance, is the trace of a tensor over its last two slots. This manner of representing tensor contractions by repeated indices is formally similar to the Einstein summation convention. However, as the indices are non-numerical, it does not imply summation: rather it corresponds to the abstract basis-independent trace operation (or natural pairing) between tensor factors of type and those of type . Abstract indices and tensor spaces A general homogeneous tensor is an element of a tensor product of copies of and , such as Label each factor in this tensor product with a Latin letter
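A worked illustration of the slot labelling and the contraction described above; the particular tensors chosen here are assumptions made for concreteness.

```latex
% An order-2 covariant tensor h, i.e. a bilinear form with two slots, labelled abstractly:
h = h_{ab}, \qquad h(x,y) = h_{ab}\,x^{a}y^{b}.
% Contraction of an order-3 tensor t^{a}{}_{bc} over its first and last slots:
t^{a}{}_{ba},
% where the repeated label denotes the basis-independent natural pairing between
% the factors of type V and V*, not a numerical summation.
```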
https://en.wikipedia.org/wiki/Visual%20radio
Visual radio is a generic term for adding visuals to audio radio broadcasts. Visual Radio is also a trademark for a Nokia product which delivers interactive FM radio over a data connection. Visual Radio Visual Radio is a technology developed by Nokia. Visual Radio is built-in functionality available in an increasing number of phones that are already equipped with analog FM radio. Workings The audio is received via a regular analog FM radio embedded in the phone. A presentation of graphics and text, synchronized to the audio programming, is streamed to the phone over a data connection and the FM transmission chain is unaffected by the addition of Visual Radio. Limitations On phones with built-in Wi-Fi (tested on Nokia E51, E63, E66, E71, N78, N79, N81, N82 a, and N95 8GB), the Nokia application does not allow a Wi-Fi access point to be used for the data connection, only GPRS access points are allowed, allowing the possibility of revenue sharing between Nokia, the Radio stations and GPRS network operators. Platform components The platform is composed of three parts: A Visual Radio Tool that can be integrated with the radio station's legacy play-out system, so the interactive visual channel created by the radio station's content producer is synchronized with the audio programming. A Visual Radio server that handles the two-way traffic between the audience and radio stations; A Visual Radio client application on the mobile phone, that displays the interactive visual channel and takes care of user interaction. The Visual Radio concept was created by Nokia and the platform was originally offered to radio stations and operators globally by HP. Since October 2007, Nokia has been collaborating with RCS Inc., of New York, whose Selector music scheduling system is used by thousands of radio stations around the world. RCS produces the second-generation version of the Visual Radio platform and also markets a similar product for the Internet (and most other digita
https://en.wikipedia.org/wiki/NIS%2B
NIS+ is a directory service developed by Sun Microsystems to replace its older 'NIS' (Network Information Service). It is designed to eliminate the need for duplication across many computers of configuration data such as user accounts, host names and addresses, printer information and NFS disk mounts on individual systems, instead using a central repository on a master server, simplifying system administration. NIS+ client software has been ported to other Unix and Unix-like platforms. Prior to the release of Solaris 9 in 2002, Sun announced its intent to remove NIS+ from Solaris in a future release and now recommends that customers instead use an LDAP-based lookup scheme. NIS+ was present in Solaris 9 and 10 (although both releases include tools to migrate NIS+ data to an LDAP server) and it has been removed from Solaris 11. NIS vs. NIS+ NIS and NIS+ are similar only in purpose and name, otherwise, they are completely different implementations. They differ in the following ways: NIS+ is hierarchical. NIS+ is based around Secure RPC (servers must authenticate clients and vice versa). NIS+ may be replicated (replicas are read-only). NIS+ implements permissions on directories, tables, columns and rows. NIS+ also implements permissions on operations, such as being able to use to transfer changed data from a master to a replica. The problem of managing network information In the 1970s, when computers were expensive, and networks consisted of a small number of nodes, administering network information was manageable, and a centralized system was not needed. As computers became cheaper and networks grew larger, it became increasingly difficult to maintain separate copies of network configurations on individual systems. For example, when a new user was added to the network, the following files would need to be updated on every existing system: Likewise, would have needed updating every time a new group was added and would have needed updating every time
https://en.wikipedia.org/wiki/Lockstep%20%28computing%29
Lockstep systems are fault-tolerant computer systems that run the same set of operations at the same time in parallel. The redundancy (duplication) allows error detection and error correction: the output from lockstep operations can be compared to determine if there has been a fault if there are at least two systems (dual modular redundancy), and the error can be automatically corrected if there are at least three systems (triple modular redundancy), via majority vote. The term "lockstep" originates from army usage, where it refers to synchronized walking, in which marchers walk as closely together as physically practical. To run in lockstep, each system is set up to progress from one well-defined state to the next well-defined state. When a new set of inputs reaches the system, it processes them, generates new outputs and updates its state. This set of changes (new inputs, new outputs, new state) is considered to define that step, and must be treated as an atomic transaction; in other words, either all of it happens, or none of it happens, but not something in between. Sometimes a timeshift (delay) is set between systems, which increases the detection probability of errors induced by external influences (e.g. voltage spikes, ionizing radiation, or in situ reverse engineering). Lockstep memory Some vendors, including Intel, use the term lockstep memory to describe a multi-channel memory layout in which cache lines are distributed between two memory channels, so one half of the cache line is stored in a DIMM on the first channel, while the second half goes to a DIMM on the second channel. By combining the single error correction and double error detection (SECDED) capabilities of two ECC-enabled DIMMs in a lockstep layout, their single-device data correction (SDDC) nature can be extended into double-device data correction (DDDC), providing protection against the failure of any single memory chip. Downsides of the Intel's lockstep memory layout are the reduction
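A toy sketch, not from the article, of the voting idea behind triple modular redundancy in a lockstep arrangement: three replicas advance through the same well-defined state transitions on the same inputs, and a majority vote over their outputs masks a single faulty replica. The state machine, names, and injected fault are all illustrative.

```python
from collections import Counter

def step(state, inp):
    """One well-defined state transition: returns (new_state, output)."""
    new_state = state + inp
    return new_state, new_state % 7

def lockstep_run(inputs, n_replicas=3, faulty=None):
    states = [0] * n_replicas
    for inp in inputs:
        results = []
        for i in range(n_replicas):
            s, out = step(states[i], inp)
            if i == faulty:              # inject an output fault into one replica
                out += 1
            results.append((s, out))
        states = [s for s, _ in results]
        # Majority vote over the outputs masks a single faulty replica (TMR).
        voted, votes = Counter(out for _, out in results).most_common(1)[0]
        yield voted, votes

for voted, votes in lockstep_run([3, 1, 4, 1, 5], faulty=1):
    print(voted, f"({votes}/3 agree)")
```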
https://en.wikipedia.org/wiki/Wollaston%20prism
A Wollaston prism is an optical device, invented by William Hyde Wollaston, that manipulates polarized light. It separates light into two separate linearly polarized outgoing beams with orthogonal polarization. The two beams will be polarized according to the optical axis of the two right angle prisms. The Wollaston prism consists of two orthogonal prisms of birefringent material—typically a uniaxial material such as calcite. These prisms are cemented together on their base (traditionally with Canada balsam) to form two right triangle prisms with perpendicular optic axes. Outgoing light beams diverge from the prism as ordinary and extraordinary rays due to the differences in the indexes of refraction, with the angle of divergence determined by the prisms' wedge angle and the wavelength of the light. Commercial prisms are available with divergence angles from less than 1° to about 45°. See also Other types of polarizing prisms References Polarization (waves) Prisms (optics)
https://en.wikipedia.org/wiki/MIC-1
The MIC-1 is a processor architecture invented by Andrew S. Tanenbaum to use as a simple but complete example in his teaching book Structured Computer Organization. It consists of a very simple control unit that runs microcode from a 512-words store. The Micro-Assembly Language (MAL) is engineered to allow simple writing of an IJVM interpreter, and the source code for such an interpreter can be found in the book. Hardware Data path The data path is the core of the MIC-1. It contains 32-bit registers, buses, an ALU and a shifter. Buses There are 2 main buses of 32 lines (or 32 bits) each: B bus: connected to the output of the registers and to the input of the ALU. C bus: connected to the output of the shifter and to the input of the registers. Registers Registers are selected by 2 control lines: one to enable the B bus and the other to enable the C bus. The B bus can be enabled by just one register at a time, since the transfer of data from 2 registers at the same time, would make this data inconsistent. In contrast, the C bus can be enabled by more than 1 register at the same time; as a matter of fact, the current value present in the C bus can be written to more than 1 register without problems. The reading and writing operations are carried out in 1 clock cycle. The MBR register is a readonly register, and it contains 2 control lines. Since it is an 8-bit register, its output is connected to the least significant 8 bits of the B bus. It can be set to provide its output in 2 ways: 2's complement (MBR): all the remaining 24 bits of the B bus are set to 1, if it's a negative number, or they are set to 0, if it's a positive number (sign extension). Without complement (MBRU): the remaining 24 bits (of 32 total) are set to 0. ALU The ALU (or arithmetic logic unit) has the following input, output and control lines: 2 32-bit input lines: one for the B bus and one for the bus that is connected directly to the H register. 1 32-bit output line, whi
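A small sketch, assuming nothing beyond what is described above, of the two ways the 8-bit MBR value can be driven onto the 32-bit B bus: sign-extended (MBR, two's complement) or zero-extended (MBRU).

```python
def mbr_signed(byte):
    """MBR: drive the 8-bit value onto the 32-bit B bus with sign extension (two's complement)."""
    value = byte & 0xFF
    if value & 0x80:                 # negative: set the upper 24 bits to 1
        value |= 0xFFFFFF00
    return value

def mbr_unsigned(byte):
    """MBRU: drive the 8-bit value onto the 32-bit B bus with the upper 24 bits set to 0."""
    return byte & 0xFF

print(hex(mbr_signed(0x9C)), hex(mbr_unsigned(0x9C)))   # 0xffffff9c vs 0x9c
```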
https://en.wikipedia.org/wiki/K-d%20tree
In computer science, a k-d tree (short for k-dimensional tree) is a space-partitioning data structure for organizing points in a k-dimensional space. K-dimensional is that which concerns exactly k orthogonal axes or a space of any number of dimensions. k-d trees are a useful data structure for several applications, such as: Searches involving a multidimensional search key (e.g. range searches and nearest neighbor searches) & Creating point clouds. k-d trees are a special case of binary space partitioning trees. Description The k-d tree is a binary tree in which every node is a k-dimensional point. Every non-leaf node can be thought of as implicitly generating a splitting hyperplane that divides the space into two parts, known as half-spaces. Points to the left of this hyperplane are represented by the left subtree of that node and points to the right of the hyperplane are represented by the right subtree. The hyperplane direction is chosen in the following way: every node in the tree is associated with one of the k dimensions, with the hyperplane perpendicular to that dimension's axis. So, for example, if for a particular split the "x" axis is chosen, all points in the subtree with a smaller "x" value than the node will appear in the left subtree and all points with a larger "x" value will be in the right subtree. In such a case, the hyperplane would be set by the x value of the point, and its normal would be the unit x-axis. Operations on k-d trees Construction Since there are many possible ways to choose axis-aligned splitting planes, there are many different ways to construct k-d trees. The canonical method of k-d tree construction has the following constraints: As one moves down the tree, one cycles through the axes used to select the splitting planes. (For example, in a 3-dimensional tree, the root would have an x-aligned plane, the root's children would both have y-aligned planes, the root's grandchildren would all have z-aligned planes, the root's
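A compact sketch of the canonical construction just described: cycle the splitting axis with tree depth and place the median point at each node. The code is illustrative; the sample points and tie-breaking details are assumptions.

```python
from collections import namedtuple

Node = namedtuple("Node", ["point", "axis", "left", "right"])

def build_kdtree(points, depth=0):
    """Canonical k-d tree construction: cycle the splitting axis with depth, split at the median."""
    if not points:
        return None
    k = len(points[0])
    axis = depth % k                          # cycle through the k axes
    points = sorted(points, key=lambda p: p[axis])
    median = len(points) // 2                 # the median point becomes the splitting node
    return Node(
        point=points[median],
        axis=axis,
        left=build_kdtree(points[:median], depth + 1),
        right=build_kdtree(points[median + 1:], depth + 1),
    )

tree = build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
print(tree.point, tree.axis)                  # root splits on x at (7, 2)
```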
https://en.wikipedia.org/wiki/Campus%20network
A campus network, campus area network, corporate area network or CAN is a computer network made up of an interconnection of local area networks (LANs) within a limited geographical area. The networking equipment (switches, routers) and transmission media (optical fiber, copper plant, Cat5 cabling, etc.) are almost entirely owned by the campus tenant/owner: an enterprise, university, government, etc. A campus area network is larger than a local area network but smaller than a metropolitan area network (MAN) or wide area network (WAN). University campuses College or university campus area networks often interconnect a variety of buildings, including administrative buildings, academic buildings, university libraries, campus or student centers, residence halls, gymnasiums, and other outlying structures, like conference centers, technology centers, and training institutes. Early examples include the Stanford University Network at Stanford University, Project Athena at MIT, and the Andrew Project at Carnegie Mellon University. Corporate campuses Much like a university campus network, a corporate campus network serves to connect buildings. Examples of such are the networks at the Googleplex and Microsoft's campus. Campus networks are normally interconnected with high-speed Ethernet links operating over optical fiber, such as gigabit Ethernet and 10 Gigabit Ethernet. Area range A CAN typically spans 1 km to 5 km. If two buildings share the same administrative domain and are connected by a network, that network is considered a CAN. Because CANs mainly serve corporate and university campuses, their data links are usually high speed. References Metropolitan area networks Computer networks
https://en.wikipedia.org/wiki/Aeroplankton
Aeroplankton (or aerial plankton) are tiny lifeforms that float and drift in the air, carried by wind. Most of the living things that make up aeroplankton are very small to microscopic in size, and many can be difficult to identify because of their tiny size. Scientists collect them for study in traps and sweep nets from aircraft, kites or balloons. The study of the dispersion of these particles is called aerobiology. Aeroplankton is made up mostly of microorganisms, including viruses, about 1,000 different species of bacteria, around 40,000 varieties of fungi, and hundreds of species of protists, algae, mosses, and liverworts that live some part of their life cycle as aeroplankton, often as spores, pollen, and wind-scattered seeds. Additionally, microorganisms are swept into the air from terrestrial dust storms, and an even larger amount of airborne marine microorganisms are propelled high into the atmosphere in sea spray. Aeroplankton deposits hundreds of millions of airborne viruses and tens of millions of bacteria every day on every square meter around the planet. Small, drifting aeroplankton are found everywhere in the atmosphere, reaching concentration up to 106 microbial cells per cubic metre. Processes such as aerosolisation and wind transport determine how the microorganisms are distributed in the atmosphere. Air mass circulation globally disperses vast numbers of the floating aerial organisms, which travel across and between continents, creating biogeographic patterns by surviving and settling in remote environments. As well as the colonization of pristine environments, the globetrotting behaviour of these organisms has human health consequences. Airborne microorganisms are also involved in cloud formation and precipitation, and play important roles in the formation of the phyllosphere, a vast terrestrial habitat involved in nutrient cycling. Overview The atmosphere is the least understood biome on Earth despite its critical role as a microbial transpo
https://en.wikipedia.org/wiki/Field%20%28video%29
In video, a field is one of the many still images displayed sequentially to create the impression of motion on the screen. Two fields comprise one video frame. When the fields are displayed on a video monitor they are "interlaced" so that the content of one field will be used on all of the odd-numbered lines on the screen, and the other field will be displayed on the even lines. Converting fields to a still frame image requires a process called deinterlacing, in which the missing lines are duplicated or interpolated to recreate the information that would have been contained in the discarded field. Since each field contains only half of the information of a full frame, however, deinterlaced images do not have the resolution of a full frame. To increase the resolution of video images, new schemes have been created that capture full-frame images for each frame. Video composed of such frames is called progressive scan video. Video shot with a standard video camera format such as S-VHS or Mini-DV is often interlaced when created. In contrast, video shot with a film-based camera is almost always progressive. Free-to-air analog TV was mostly broadcast as interlaced material because the trade-off of spatial resolution for frame rate reduced flickering on cathode ray tube (CRT) televisions. High-definition digital television (see: HDTV) today can be broadcast terrestrially or distributed through cable systems in either interlaced (1080i) or progressive scan formats (720p or 1080p). Most prosumer camcorders can record in progressive scan formats. In video editing, it is important to know which of the two (odd or even) fields is "dominant". Selecting edit points on the wrong field can result in a "flash" at each edit point, and playing the video fields in reverse order creates a flickering image. See also Federal Standard 1037C: defines the field in interlaced video. Color framing External links All About Video Fields: technical information with emphasis on the programming implica
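The deinterlacing step described above can be illustrated with a short sketch. The example below is not tied to any particular video toolkit; it treats a frame as a 2-D array of luma samples (the `frame` array and function name are assumptions for the illustration) and rebuilds a full frame from the even field either by duplicating lines or by averaging the neighbouring lines.

```python
import numpy as np

def deinterlace_even_field(frame: np.ndarray, method: str = "interpolate") -> np.ndarray:
    """Rebuild a full frame from the even field (lines 0, 2, 4, ...) of `frame`.

    "duplicate" repeats each even line into the odd line below it;
    "interpolate" fills each odd line with the average of the lines above and below.
    """
    out = frame.astype(float).copy()
    rows = frame.shape[0]
    for r in range(1, rows, 2):              # odd lines belonged to the discarded field
        if method == "duplicate" or r + 1 >= rows:
            out[r] = out[r - 1]              # repeat the line above
        else:
            out[r] = (out[r - 1] + out[r + 1]) / 2.0   # average the neighbouring even lines
    return out

# Tiny demonstration with a synthetic 6x4 "frame" of luma values
if __name__ == "__main__":
    frame = np.arange(24, dtype=float).reshape(6, 4)
    print(deinterlace_even_field(frame, method="interpolate"))
```

As the article notes, neither method recovers the information that was in the discarded field; interpolation only smooths over its absence.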
https://en.wikipedia.org/wiki/Miniature%20book
A miniature book is a very small book. Standards for what may be termed a miniature rather than just a small book have changed through time. Today, most collectors consider a book to be miniature only if it is 3 inches or smaller in height, width, and thickness, particularly in the United States. Many collectors consider nineteenth-century and earlier books of 4 inches to fit in the category of miniatures. Books from 3–4 inches in all dimensions are termed macrominiature books. Books less than 1 inch in all dimensions are called microminiature books. Books less than 1/4 inch in all dimensions are known as ultra-microminiature books. History Miniature books stretch far back in history; many collections contain cuneiform tablets thousands of years old, and exquisite medieval Books of Hours. Printers began testing the limits of size not long after the technology of printing began, and around 200 miniature books were printed in the sixteenth century. Exquisite specimens from the 17th century abound. In the 19th century, technological innovations in printing enabled the creation of smaller and smaller type. Fine and popular editions alike grew in number throughout the 19th century in what was considered the golden age for miniature books. While some miniature books are objects of high craft, bound in fine Moroccan leather, with gilt decoration and excellent examples of woodcuts, etchings, and watermarks, others are cheap, disposable, sometimes highly functional items not expected to survive. Today, miniature books are produced both as fine works of craft and as commercial products found in chain bookstores. Miniature books were produced for personal convenience: they could easily be carried in the pocket of a waistcoat or a woman's reticule. Victorian women used miniature etiquette books to subtly ascertain information on polite behavior in society. Along with etiquette books, Victorian women who had copies of The Little Flirt learned to
https://en.wikipedia.org/wiki/Space%20environment
Space environment is a branch of astronautics, aerospace engineering and space physics that seeks to understand and address conditions existing in space that affect the design and operation of spacecraft. A related subject, space weather, deals with dynamic processes in the solar-terrestrial system that can give rise to effects on spacecraft, but that can also affect the atmosphere, ionosphere and geomagnetic field, giving rise to several other kinds of effects on human technologies. Effects on spacecraft can arise from radiation, space debris and meteoroid impact, upper atmospheric drag and spacecraft electrostatic charging. Radiation in space usually comes from three main sources: The Van Allen radiation belts Solar proton events and solar energetic particles; and Galactic cosmic rays. For long-duration missions, the high doses of radiation can damage electronic components and solar cells. A major concern is also radiation-induced "single-event effects" such as single event upset. Crewed missions usually avoid the radiation belts and the International Space Station is at an altitude well below the most severe regions of the radiation belts. During solar energetic events (solar flares and coronal mass ejections) particles can be accelerated to very high energies and can reach the Earth in times as short as 30 minutes (but usually take some hours). These particles are mainly protons and heavier ions that can cause radiation damage, disruption to logic circuits, and even hazards to astronauts. Crewed missions to return to the Moon or to travel to Mars will have to deal with the major problems presented by solar particle events to radiation safety, in addition to the important contribution to doses from the low-level background cosmic rays. In near-Earth orbits, the Earth's geomagnetic field screens spacecraft from a large part of these hazards - a process called geomagnetic shielding. Space debris and meteoroids can impact spacecraft at high speeds, causing mec
https://en.wikipedia.org/wiki/Seed%20dispersal
In spermatophyte plants, seed dispersal is the movement, spread or transport of seeds away from the parent plant. Plants have limited mobility and rely upon a variety of dispersal vectors to transport their seeds, including both abiotic vectors, such as the wind, and living (biotic) vectors such as birds. Seeds can be dispersed away from the parent plant individually or collectively, as well as dispersed in both space and time. The patterns of seed dispersal are determined in large part by the dispersal mechanism and this has important implications for the demographic and genetic structure of plant populations, as well as migration patterns and species interactions. There are five main modes of seed dispersal: gravity, wind, ballistic, water, and by animals. Some plants are serotinous and only disperse their seeds in response to an environmental stimulus. These modes are typically inferred based on adaptations, such as wings or fleshy fruit. However, this simplified view may ignore complexity in dispersal. Plants can disperse via modes without possessing the typical associated adaptations and plant traits may be multifunctional. Benefits Seed dispersal is likely to have several benefits for different plant species. Seed survival is often higher away from the parent plant. This higher survival may result from the actions of density-dependent seed and seedling predators and pathogens, which often target the high concentrations of seeds beneath adults. Competition with adult plants may also be lower when seeds are transported away from their parent. Seed dispersal also allows plants to reach specific habitats that are favorable for survival, a hypothesis known as directed dispersal. For example, Ocotea endresiana (Lauraceae) is a tree species from Latin America which is dispersed by several species of birds, including the three-wattled bellbird. Male bellbirds perch on dead trees in order to attract mates, and often defecate seeds beneath these perches where the see
https://en.wikipedia.org/wiki/Mantel%20test
The Mantel test, named after Nathan Mantel, is a statistical test of the correlation between two matrices. The matrices must be of the same dimension; in most applications, they are matrices of interrelations between the same vectors of objects. The test was first published by Nathan Mantel, a biostatistician at the National Institutes of Health, in 1967. Accounts of it can be found in advanced statistics books (e.g., Sokal & Rohlf 1995). Usage The test is commonly used in ecology, where the data are usually estimates of the "distance" between objects such as species of organisms. For example, one matrix might contain estimates of the genetic distances (i.e., the amount of difference between two different genomes) between all possible pairs of species in the study, obtained by the methods of molecular systematics; while the other might contain estimates of the geographical distance between the ranges of each species to every other species. In this case, the hypothesis being tested is whether the variation in genetics for these organisms is correlated to the variation in geographical distance. Method If there are n objects, and the matrix is symmetrical (so the distance from object a to object b is the same as the distance from b to a) such a matrix contains n(n − 1)/2 distances. Because distances are not independent of each other – since changing the "position" of one object would change n − 1 of these distances (the distance from that object to each of the others) – we cannot assess the relationship between the two matrices by simply evaluating the correlation coefficient between the two sets of distances and testing its statistical significance. The Mantel test deals with this problem. The procedure adopted is a kind of randomization or permutation test. The correlation between the two sets of distances is calculated, and this is both the measure of correlation reported and the test statistic on which the test is based. In principle, any correlation coefficient could be
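As a concrete illustration of the permutation procedure just described, the sketch below implements a basic Mantel test with the Pearson correlation using only NumPy. The matrices `dist_a` and `dist_b` are placeholders for two symmetric distance matrices of the same dimension; a real analysis would normally use an established statistical or ecological package rather than this minimal version.

```python
import numpy as np

def mantel_test(dist_a: np.ndarray, dist_b: np.ndarray, permutations: int = 9999, seed: int = 0):
    """Simple Mantel test: correlate the upper triangles of two symmetric
    distance matrices and assess significance by permuting the rows and
    columns of one matrix (i.e., relabelling the objects)."""
    n = dist_a.shape[0]
    iu = np.triu_indices(n, k=1)                 # the n(n-1)/2 pairwise distances
    x = dist_a[iu]
    observed = np.corrcoef(x, dist_b[iu])[0, 1]  # observed Pearson correlation

    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(permutations):
        perm = rng.permutation(n)
        permuted = dist_b[np.ix_(perm, perm)]    # permute objects, keep matrix structure
        r = np.corrcoef(x, permuted[iu])[0, 1]
        if abs(r) >= abs(observed):              # two-sided test
            count += 1
    p_value = (count + 1) / (permutations + 1)
    return observed, p_value
```

Permuting whole rows and columns together (rather than individual distances) is what preserves the dependence structure among distances that the article points out.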
https://en.wikipedia.org/wiki/Temperate%20forest
A temperate forest is a forest found between the tropical and boreal regions, located in the temperate zone. It is the second largest biome on our planet, covering 25% of the world's forest area, only behind the boreal forest, which covers about 33%. These forests cover both hemispheres at latitudes ranging from 25 to 50 degrees, wrapping the planet in a belt similar to that of the boreal forest. Due to its large size spanning several continents, there are several main types: deciduous, coniferous, mixed forest, and rainforest. Climate The climate of a temperate forest is highly variable depending on the location of the forest. For example, Los Angeles and Vancouver, Canada are both considered to be located in a temperate zone, however, Vancouver is located in a temperate rainforest, while Los Angeles is a relatively dry subtropical climate. Types of temperate forest Deciduous They are found in Europe, East Asia, North America, and in some parts of South America. Deciduous forests are composed mainly of broadleaf trees, such as maple and oak, that shed all their leaves during one season. They are typically found in three middle-latitude regions with temperate climates characterized by a winter season and year-round precipitation: eastern North America, western Eurasia and northeastern Asia. Coniferous Coniferous forests are composed of needle-leaved evergreen trees, such as pine or fir. Evergreen forests are typically found in regions with moderate climates. Boreal forests, however, are an exception as they are found in subarctic regions. Coniferous trees often have an advantage over broadleaf trees in harsher environments. Their leaves are typically hardier and longer lived but require more energy to grow. Mixed As the name implies, conifers and broadleaf trees grow in the same area. The main trees found in these forests in North America and Eurasia include fir, oak, ash, maple, birch, beech, poplar, elm and pine. Other plant species may include magnolia,
https://en.wikipedia.org/wiki/Mystery%20House
Mystery House is an adventure game released by On-Line Systems in 1980. It was designed, written and illustrated by Roberta Williams, and programmed by Ken Williams for the Apple II. Mystery House is the first graphical adventure game and the first game produced by On-Line Systems, the company which would evolve into Sierra On-Line. It is one of the earliest horror video games. Plot The game starts near an abandoned Victorian mansion. The player is soon locked inside the house with no other option than to explore. The mansion contains many interesting rooms and seven other people: Tom, a plumber; Sam, a mechanic; Sally, a seamstress; Dr. Green, a surgeon; Joe, a grave-digger; Bill, a butcher; Daisy, a cook. Initially, the player has to search the house in order to find a hidden cache of jewels. Soon, dead bodies (of the other people) begin appearing and it is obvious there is a murderer on the loose in the house. The player must discover who it is or become the next victim. Development and release At the end of the 1970s, Ken Williams sought to set up a company for enterprise software for the market-dominating Apple II computer. One day, he took a teletype terminal to his house to work on the development of an accounting program. Looking through a catalog, he found a game called Colossal Cave Adventure. He bought the game and introduced it to his wife, Roberta, and they both played through it. They began to search for something similar but found the market underdeveloped. Roberta decided that she could write her own, and conceived of the plot for Mystery House, taking inspiration from Agatha Christie's novel And Then There Were None. She was also inspired by the board game Clue, which helped to break her out from a linear structure to the game. Recognizing that though she knew some programming, she needed someone else to code the game, she convinced her husband to help her. Ken agreed and borrowed his brother's Apple II computer to write the game on. Ken sugges
https://en.wikipedia.org/wiki/Perceptual%20control%20theory
Perceptual control theory (PCT) is a model of behavior based on the properties of negative feedback control loops. A control loop maintains a sensed variable at or near a reference value by means of the effects of its outputs upon that variable, as mediated by physical properties of the environment. In engineering control theory, reference values are set by a user outside the system. An example is a thermostat. In a living organism, reference values for controlled perceptual variables are endogenously maintained. Biological homeostasis and reflexes are simple, low-level examples. The discovery of mathematical principles of control introduced a way to model a negative feedback loop closed through the environment (circular causation), which spawned perceptual control theory. It differs fundamentally from some models in behavioral and cognitive psychology that model stimuli as causes of behavior (linear causation). PCT research is published in experimental psychology, neuroscience, ethology, anthropology, linguistics, sociology, robotics, developmental psychology, organizational psychology and management, and a number of other fields. PCT has been applied to design and administration of educational systems, and has led to a psychotherapy called the method of levels. Principles and differences from other theories The perceptual control theory is deeply rooted in biological cybernetics, systems biology and control theory and the related concept of feedback loops. Unlike some models in behavioral and cognitive psychology it sets out from the concept of circular causality. It shares, therefore, its theoretical foundation with the concept of plant control, but it is distinct from it by emphasizing the control of the internal representation of the physical world. The plant control theory focuses on neuro-computational processes of movement generation, once a decision for generating the movement has been taken. PCT spotlights the embeddedness of agents in their environment
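The negative feedback loop at the core of PCT can be sketched in a few lines of code. The simulation below is a generic illustration rather than a model taken from the PCT literature, and all names and parameter values are invented for the example: a controller perceives a variable, compares the perception with an internally held reference value, and adjusts its output so as to reduce the error, while an independent disturbance pushes the controlled quantity around.

```python
# Minimal perceptual-control-style loop: the output acts on the environment so that
# the perceived value of the controlled variable stays near an internal reference.
def simulate(reference=20.0, gain=5.0, slowing=0.1, steps=200):
    output = 0.0                                    # the controller's action on the world
    history = []
    for t in range(steps):
        disturbance = 10.0 if t >= 100 else 0.0     # an external push on the variable
        controlled_quantity = output + disturbance  # environment combines action and disturbance
        perception = controlled_quantity            # assume a faithful perceptual signal
        error = reference - perception              # compare perception with the reference
        output += slowing * gain * error            # integrate the error over time
        history.append(perception)
    return history

trace = simulate()
print(f"before the disturbance the perception settles near {trace[99]:.2f}")
print(f"after the disturbance it is brought back to about {trace[-1]:.2f}")
```

The point of the sketch is the circular causation the article describes: the disturbance never appears in the controller's "knowledge", yet the output ends up opposing it because only the perception is controlled.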
https://en.wikipedia.org/wiki/Fodor%27s%20lemma
In mathematics, particularly in set theory, Fodor's lemma states the following: If κ is a regular, uncountable cardinal, S is a stationary subset of κ, and f : S → κ is regressive (that is, f(α) < α for any α ∈ S, α ≠ 0), then there is some γ and some stationary S₀ ⊆ S such that f(α) = γ for any α ∈ S₀. In modern parlance, the nonstationary ideal is normal. The lemma was first proved by the Hungarian set theorist, Géza Fodor in 1956. It is sometimes also called "The Pressing Down Lemma". Proof We can assume that 0 ∉ S (by removing 0, if necessary). If Fodor's lemma is false, for every γ < κ there is some club set C_γ such that C_γ ∩ f⁻¹(γ) = ∅. Let C = Δ_{γ<κ} C_γ. The club sets are closed under diagonal intersection, so C is also club and therefore there is some α ∈ S ∩ C. Then α ∈ C_γ for each γ < α, and so there can be no γ < α such that f(α) = γ, so f(α) ≥ α, a contradiction. Fodor's lemma also holds for Thomas Jech's notion of stationary sets as well as for the general notion of stationary set. Fodor's lemma for trees Another related statement, also known as Fodor's lemma (or Pressing-Down-lemma), is the following: For every non-special tree T and regressive mapping f : T → T (that is, f(t) < t, with respect to the order on T, for every t ∈ T), there is a non-special subtree S ⊆ T on which f is constant. References G. Fodor, Eine Bemerkung zur Theorie der regressiven Funktionen, Acta Sci. Math. Szeged, 17 (1956), 139–142. Karel Hrbacek & Thomas Jech, Introduction to Set Theory, 3rd edition, Chapter 11, Section 3. Mark Howard, Applications of Fodor's Lemma to Vaught's Conjecture. Ann. Pure and Appl. Logic 42(1): 1–19 (1989). Simon Thomas, The Automorphism Tower Problem. S. Todorcevic, Combinatorial dichotomies in set theory. Articles containing proofs Lemmas in set theory
https://en.wikipedia.org/wiki/Stationary%20set
In mathematics, specifically set theory and model theory, a stationary set is a set that is not too small in the sense that it intersects all club sets and is analogous to a set of non-zero measure in measure theory. There are at least three closely related notions of stationary set, depending on whether one is looking at subsets of an ordinal, or subsets of something of given cardinality, or a powerset. Classical notion If κ is a cardinal of uncountable cofinality, S ⊆ κ, and S intersects every club set in κ, then S is called a stationary set. If a set is not stationary, then it is called a thin set. This notion should not be confused with the notion of a thin set in number theory. If S is a stationary set and C is a club set, then their intersection S ∩ C is also stationary. This is because if D is any club set, then C ∩ D is a club set, thus S ∩ (C ∩ D) = (S ∩ C) ∩ D is nonempty. Therefore, S ∩ C must be stationary. See also: Fodor's lemma The restriction to uncountable cofinality is in order to avoid trivialities: Suppose κ has countable cofinality. Then S ⊆ κ is stationary in κ if and only if κ ∖ S is bounded in κ. In particular, if the cofinality of κ is ω, then any two stationary subsets of κ have stationary intersection. This is no longer the case if the cofinality of κ is uncountable. In fact, suppose κ is moreover regular and S ⊆ κ is stationary. Then S can be partitioned into κ many disjoint stationary sets. This result is due to Solovay. If κ is a successor cardinal, this result is due to Ulam and is easily shown by means of what is called an Ulam matrix. H. Friedman has shown that for every countable successor ordinal β, every stationary subset of ω₁ contains a closed subset of order type β. Jech's notion There is also a notion of stationary subset of [X]^λ, for λ a cardinal and X a set such that |X| ≥ λ, where [X]^λ is the set of subsets of X of cardinality λ: [X]^λ = {Y ⊆ X : |Y| = λ}. This notion is due to Thomas Jech. As before, S ⊆ [X]^λ is stationary if and only if it meets every club, where a club subset of [X]^λ is a set unbounded under ⊆ and closed under union of chains of length at most λ.
https://en.wikipedia.org/wiki/Diagonal%20intersection
Diagonal intersection is a term used in mathematics, especially in set theory. If δ is an ordinal number and ⟨X_α | α < δ⟩ is a sequence of subsets of δ, then the diagonal intersection, denoted by Δ_{α<δ} X_α, is defined to be {β < δ : β ∈ ⋂_{α<β} X_α}. That is, an ordinal β is in the diagonal intersection if and only if it is contained in the first β members of the sequence. This is the same as ⋂_{α<δ} ([0, α] ∪ X_α), where the closed interval from 0 to α is used to avoid restricting the range of the intersection. See also Club filter Club set Fodor's lemma References Thomas Jech, Set Theory, The Third Millennium Edition, Springer-Verlag Berlin Heidelberg New York, 2003, page 92. Akihiro Kanamori, The Higher Infinite, Second Edition, Springer-Verlag Berlin Heidelberg, 2009, page 2. Ordinal numbers Set theory
https://en.wikipedia.org/wiki/Club%20filter
In mathematics, particularly in set theory, if κ is a regular uncountable cardinal then club(κ), the filter of all sets containing a club subset of κ, is a κ-complete filter closed under diagonal intersection called the club filter. To see that this is a filter, note that κ ∈ club(κ) since κ is itself both closed and unbounded (see club set). If x ∈ club(κ), then any subset of κ containing x is also in club(κ), since x, and therefore anything containing it, contains a club set. It is a κ-complete filter because the intersection of fewer than κ club sets is a club set. To see this, suppose ⟨C_i⟩_{i<α} is a sequence of club sets where α < κ. Obviously C = ⋂_{i<α} C_i is closed, since any sequence which appears in C appears in every C_i and therefore its limit is also in every C_i. To show that it is unbounded, take some β < κ. Let ⟨β_{1,i}⟩ be an increasing sequence with β_{1,0} > β and β_{1,i} ∈ C_i for every i < α. Such a sequence can be constructed, since every C_i is unbounded. Since α < κ and κ is regular, the limit of this sequence is less than κ. We call it β_2, and define a new sequence ⟨β_{2,i}⟩ similar to the previous sequence. We can repeat this process, getting a sequence of sequences ⟨β_{j,i}⟩ where each element of a sequence is greater than every member of the previous sequences. Then for each i < α, ⟨β_{j,i} : j⟩ is an increasing sequence contained in C_i, and all these sequences have the same limit (the limit of ⟨β_{j,i}⟩). This limit is then contained in every C_i, and therefore C, and is greater than β. To see that club(κ) is closed under diagonal intersection, let ⟨C_i⟩, i < κ, be a sequence of club sets, and let C = Δ_{i<κ} C_i. To show C is closed, suppose S ⊆ C ∩ α and sup S = α < κ. Then for each γ ∈ S, γ ∈ C_β for all β < γ. Since each C_β is closed, α ∈ C_β for all β < α, so α ∈ C. To show C is unbounded, let β < κ, and define a sequence ξ_i, i < ω, as follows: ξ_0 = β, and ξ_{i+1} is the minimal element of ⋂_{γ<ξ_i} C_γ such that ξ_{i+1} > ξ_i. Such an element exists since, by the above, the intersection of ξ_i club sets is club. Then ξ = ⋃_{i<ω} ξ_i ∈ C, since it is in each C_γ with γ < ξ. See also References Jech, Thomas, 2003. Set Theory: The Third Millennium Edition, Revised and Expanded. Springer. Set theory
https://en.wikipedia.org/wiki/Triangulated%20category
In mathematics, a triangulated category is a category with the additional structure of a "translation functor" and a class of "exact triangles". Prominent examples are the derived category of an abelian category, as well as the stable homotopy category. The exact triangles generalize the short exact sequences in an abelian category, as well as fiber sequences and cofiber sequences in topology. Much of homological algebra is clarified and extended by the language of triangulated categories, an important example being the theory of sheaf cohomology. In the 1960s, a typical use of triangulated categories was to extend properties of sheaves on a space X to complexes of sheaves, viewed as objects of the derived category of sheaves on X. More recently, triangulated categories have become objects of interest in their own right. Many equivalences between triangulated categories of different origins have been proved or conjectured. For example, the homological mirror symmetry conjecture predicts that the derived category of a Calabi–Yau manifold is equivalent to the Fukaya category of its "mirror" symplectic manifold. Shift operator is a decategorified analogue of triangulated category. History Triangulated categories were introduced independently by Dieter Puppe (1962) and Jean-Louis Verdier (1963), although Puppe's axioms were less complete (lacking the octahedral axiom (TR 4)). Puppe was motivated by the stable homotopy category. Verdier's key example was the derived category of an abelian category, which he also defined, developing ideas of Alexander Grothendieck. The early applications of derived categories included coherent duality and Verdier duality, which extends Poincaré duality to singular spaces. Definition A shift or translation functor on a category D is an additive automorphism (or for some authors, an auto-equivalence) from D to D. It is common to write for integers n. A triangle (X, Y, Z, u, v, w) consists of three objects X, Y, and Z, together with
https://en.wikipedia.org/wiki/Electric%20power%20quality
Electric power quality is the degree to which the voltage, frequency, and waveform of a power supply system conform to established specifications. Good power quality can be defined as a steady supply voltage that stays within the prescribed range, steady AC frequency close to the rated value, and smooth voltage curve waveform (which resembles a sine wave). In general, it is useful to consider power quality as the compatibility between what comes out of an electric outlet and the load that is plugged into it. The term is used to describe electric power that drives an electrical load and the load's ability to function properly. Without the proper power, an electrical device (or load) may malfunction, fail prematurely or not operate at all. There are many ways in which electric power can be of poor quality, and many more causes of such poor quality power. The electric power industry comprises electricity generation (AC power), electric power transmission and ultimately electric power distribution to an electricity meter located at the premises of the end user of the electric power. The electricity then moves through the wiring system of the end user until it reaches the load. The complexity of the system to move electric energy from the point of production to the point of consumption combined with variations in weather, generation, demand and other factors provide many opportunities for the quality of supply to be compromised. While "power quality" is a convenient term for many, it is the quality of the voltage—rather than power or electric current—that is actually described by the term. Power is simply the flow of energy, and the current demanded by a load is largely uncontrollable. Introduction The quality of electrical power may be described as a set of values of parameters, such as: Continuity of service (whether the electrical power is subject to voltage drops or overages below or above a threshold level thereby causing blackouts or brownouts) Variation i
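The three quantities named above (voltage magnitude, frequency, and waveform shape) can each be checked numerically from a sampled voltage waveform. The sketch below is a generic illustration using NumPy, not a standards-compliant measurement; the sampling rate `fs`, the nominal 230 V / 50 Hz values, and the function names are assumptions made for the example. It estimates the RMS voltage and the total harmonic distortion (THD), a common measure of how far the waveform departs from a pure sine.

```python
import numpy as np

def rms_and_thd(samples: np.ndarray, fs: float, fundamental: float = 50.0):
    """Estimate RMS voltage and total harmonic distortion of a sampled waveform."""
    rms = np.sqrt(np.mean(samples ** 2))
    spectrum = np.abs(np.fft.rfft(samples)) / len(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)

    def component(f):
        # magnitude of the spectral component nearest frequency f
        return spectrum[np.argmin(np.abs(freqs - f))]

    v1 = component(fundamental)
    harmonics = [component(n * fundamental) for n in range(2, 11)]
    thd = np.sqrt(sum(h ** 2 for h in harmonics)) / v1
    return rms, thd

# Example: a 230 V (RMS), 50 Hz sine with a small 5th-harmonic distortion
fs = 10_000.0
t = np.arange(0, 0.2, 1.0 / fs)
v = 230 * np.sqrt(2) * np.sin(2 * np.pi * 50 * t) + 10 * np.sin(2 * np.pi * 250 * t)
print(rms_and_thd(v, fs))
```

A flat-topped or notched voltage curve of the kind caused by non-linear loads would show up here as an elevated THD even when the RMS value still sits inside the prescribed range.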
https://en.wikipedia.org/wiki/Register%20file
A register file is an array of processor registers in a central processing unit (CPU). Register banking is the method of using a single name to access multiple different physical registers depending on the operating mode. Modern integrated circuit-based register files are usually implemented by way of fast static RAMs with multiple ports. Such RAMs are distinguished by having dedicated read and write ports, whereas ordinary multiported SRAMs will usually read and write through the same ports. The instruction set architecture of a CPU will almost always define a set of registers which are used to stage data between memory and the functional units on the chip. In simpler CPUs, these architectural registers correspond one-for-one to the entries in a physical register file (PRF) within the CPU. More complicated CPUs use register renaming, so that the mapping of which physical entry stores a particular architectural register changes dynamically during execution. The register file is part of the architecture and visible to the programmer, as opposed to the concept of transparent caches. Register-bank switching Register files may be clubbed together as register banks. A processor may have more than one register bank. ARM processors have both banked and unbanked registers. While all modes always share the same physical registers for the first eight general-purpose registers, R0 to R7, the physical register which the banked registers, R8 to R14, point to depends on the operating mode the processor is in. Notably, Fast Interrupt Request (FIQ) mode has its own bank of registers for R8 to R12, with the architecture also providing a private stack pointer (R13) for every interrupt mode. x86 processors use context switching and fast interrupt for switching between instruction, decoder, GPRs and register files, if there is more than one, before the instruction is issued, but this is only existing on processors that support superscalar. However, context switching is a totall
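The banked-register behaviour described above for ARM can be mimicked with a small model. The sketch below is a simplified illustration, not a description of any real core's internals, and the class and mode names are invented for the example: a single architectural name such as "R8" resolves to a different physical storage cell depending on the current mode, while R0–R7 are shared by all modes.

```python
# Toy model of ARM-style register banking: R0-R7 are shared, R8-R12 are
# banked for FIQ mode, and R13/R14 get a private copy per non-user mode.
class BankedRegisterFile:
    SHARED = [f"R{i}" for i in range(8)]

    def __init__(self):
        self.mode = "usr"
        self.storage = {}            # maps a physical register name to its value

    def _physical(self, name: str) -> str:
        number = int(name[1:])
        if name in self.SHARED:
            return name                                   # one physical copy for every mode
        if 8 <= number <= 12:
            return f"{name}_fiq" if self.mode == "fiq" else name
        return f"{name}_{self.mode}" if self.mode != "usr" else name   # R13, R14

    def write(self, name, value):
        self.storage[self._physical(name)] = value

    def read(self, name):
        return self.storage.get(self._physical(name), 0)

rf = BankedRegisterFile()
rf.write("R8", 111)            # written in user mode
rf.mode = "fiq"
rf.write("R8", 222)            # same architectural name, different physical register in FIQ mode
rf.mode = "usr"
print(rf.read("R8"))           # -> 111; the user-mode copy was untouched
```

The design point the model captures is that banking lets an interrupt handler use "its own" R8–R12 without saving and restoring the interrupted program's copies.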
https://en.wikipedia.org/wiki/IBM%207090/94%20IBSYS
IBSYS is the discontinued tape-based operating system that IBM supplied with its IBM 709, IBM 7090 and IBM 7094 computers. A similar operating system (but with several significant differences), also called IBSYS, was provided with IBM 7040 and IBM 7044 computers. IBSYS was based on FORTRAN Monitor System (FMS) and (more likely) Bell Labs' "BESYS" rather than the SHARE Operating System. IBSYS directly supported several old language processors on the $EXECUTE card: 9PAC, FORTRAN and IBSFAP. Newer language processors ran under IBJOB. IBM later provided similar facilities for the 7040/7044 as IBM 7040/7044 Operating System (16K/32K) 7040-PR-150 and for the IBM 1410/IBM 7010 as IBM 1410/7010 Operating System 1410-PR-155. IBSYS System Supervisor IBSYS itself is a resident monitor program, that reads control card images placed between the decks of program and data cards of individual jobs. An IBSYS control card begins with a "$" in column 1, immediately followed by a Control Name that selects the various IBSYS utility programs needed to set up and run the job. These card deck images are usually read from magnetic tapes prepared offline, not directly from the card reader. IBJOB Processor The IBJOB Processor is a subsystem that runs under the IBSYS System Supervisor. It reads control cards that request, e.g., compilation, execution. The languages supported include COBOL. Commercial Translator (COMTRAN), Fortran IV (IBFTC) and Macro Assembly Program (IBMAP). See also University of Michigan Executive System Timeline of operating systems Further reading Noble, A. S., Jr., "Design of an integrated programming and operating system", IBM Systems Journal, June 1963. "The present paper considers the underlying design concepts of IBSYS/IBJOB, an integrated programming and operating system. The historical background and over-all structure of the system are discussed. Flow of jobs through the IBJOB processor, as controlled by the monitor, is also described." "IBM 7090/7094
https://en.wikipedia.org/wiki/Effector%20cell
In cell biology, an effector cell is any of various types of cell that actively responds to a stimulus and effects some change (brings it about). Examples of effector cells include: The muscle, gland or organ cell capable of responding to a stimulus at the terminal end of an efferent nerve fiber Plasma cell, an effector B cell in the immune system Effector T cells, T cells that actively respond to a stimulus Cytokine-induced killer cells, strongly productive cytotoxic effector cells that are capable of lysing tumor cells Microglia, a glial effector cell that reconstructs the central nervous system after a bone marrow transplant Fibroblast, a cell that is most commonly found within connective tissue Mast cell, the primary effector cell involved in the development of asthma Cytokine-induced killer cells as effector cells As effector cells, cytokine-induced killer cells can recognize infected or malignant cells even when antibodies and major histocompatibility complex (MHC) are not available. This allows a quick immune reaction to take place. Cytokine-induced killer (CIK) cells are important because harmful cells that do not contain MHC cannot be traced and removed by other immune cells. CIK cells are being studied intensely as a possible therapy for cancer and for some types of viral infections. CIK cells respond to lymphokines by lysing tumorous cells that are resistant to NK cells or LAK cell activity. CIK cells show a large amount of cytotoxic potential against various types of tumors. Side effects of CIK cells are also considered very minor. In a few cases, CIK cell treatment led to the complete disappearance of tumor burdens, extended periods of survival, and improved quality of life, even when the cancerous tumor cells were in advanced stages. At the moment, the exact mechanism of tumor recognition in CIK cells is not completely understood. Fibroblasts as effector cells Fibroblasts are cells that form the extracellular matrix and col
https://en.wikipedia.org/wiki/Cycle%20graph%20%28algebra%29
In group theory, a subfield of abstract algebra, a group cycle graph illustrates the various cycles of a group and is particularly useful in visualizing the structure of small finite groups. A cycle is the set of powers of a given group element a, where an, the n-th power of an element a is defined as the product of a multiplied by itself n times. The element a is said to generate the cycle. In a finite group, some non-zero power of a must be the group identity, e; the lowest such power is the order of the cycle, the number of distinct elements in it. In a cycle graph, the cycle is represented as a polygon, with the vertices representing the group elements, and the connecting lines indicating that all elements in that polygon are members of the same cycle. Cycles Cycles can overlap, or they can have no element in common but the identity. The cycle graph displays each interesting cycle as a polygon. If a generates a cycle of order 6 (or, more shortly, has order 6), then a6 = e. Then the set of powers of a2, {a2, a4, e} is a cycle, but this is really no new information. Similarly, a5 generates the same cycle as a itself. So, only the primitive cycles need be considered, namely those that are not subsets of another cycle. Each of these is generated by some primitive element, a. Take one point for each element of the original group. For each primitive element, connect e to a, a to a2, ..., an−1 to an, etc., until e is reached. The result is the cycle graph. When a2 = e, a has order 2 (is an involution), and is connected to e by two edges. Except when the intent is to emphasize the two edges of the cycle, it is typically drawn as a single line between the two elements. Properties As an example of a group cycle graph, consider the dihedral group Dih4. The multiplication table for this group is shown on the left, and the cycle graph is shown on the right with e specifying the identity element. Notice the cycle {e, a, a2, a3} in the multiplication table, with
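The construction of primitive cycles described above is easy to reproduce computationally. The sketch below is a generic illustration, with all names chosen for the example: it represents a finite group by its operation (here the cyclic group Z6, written additively, so "multiply" means addition modulo 6), generates the cycle of every element, and then discards cycles that are proper subsets of larger ones, leaving exactly the cycles that would be drawn as polygons in the cycle graph.

```python
def cycle_of(element, multiply, identity):
    """Return the cycle (e, a, a^2, ...) generated by an element a."""
    cycle = [identity]
    power = element
    while power != identity:
        cycle.append(power)
        power = multiply(power, element)
    return cycle

def primitive_cycles(elements, multiply, identity):
    """Keep only cycles that are not proper subsets of another cycle."""
    cycles = [cycle_of(a, multiply, identity) for a in elements]
    sets = [frozenset(c) for c in cycles]
    keep = []
    for i, s in enumerate(sets):
        if not any(s < other for other in sets):   # proper subset of a larger cycle?
            keep.append(cycles[i])
    # several generators can produce the same cycle; keep one copy of each
    unique = {frozenset(c): c for c in keep}
    return list(unique.values())

# Example: the cyclic group Z6 under addition modulo 6
elements = range(6)
add_mod6 = lambda a, b: (a + b) % 6
for cycle in primitive_cycles(elements, add_mod6, identity=0):
    print(cycle)
```

For Z6 the script prints a single six-element cycle, matching the article's observation that the cycles generated by a2 and a5 add no new information once the cycle of a is drawn.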
https://en.wikipedia.org/wiki/Turing%20jump
In computability theory, the Turing jump or Turing jump operator, named for Alan Turing, is an operation that assigns to each decision problem X a successively harder decision problem X′ with the property that X′ is not decidable by an oracle machine with an oracle for X. The operator is called a jump operator because it increases the Turing degree of the problem X. That is, the problem X′ is not Turing-reducible to X. Post's theorem establishes a relationship between the Turing jump operator and the arithmetical hierarchy of sets of natural numbers. Informally, given a problem, the Turing jump returns the set of Turing machines that halt when given access to an oracle that solves that problem. Definition The Turing jump of X can be thought of as an oracle to the halting problem for oracle machines with an oracle for X. Formally, given a set X and a Gödel numbering φ_i^X of the X-computable functions, the Turing jump X′ of X is defined as X′ = {n : φ_n^X(n) is defined}. The nth Turing jump X^(n) is defined inductively by X^(0) = X and X^(n+1) = (X^(n))′. The ω jump X^(ω) of X is the effective join of the sequence of sets X^(n) for n ∈ ℕ: X^(ω) = {p_k^i : k ∈ ℕ and i ∈ X^(k)}, where p_k denotes the kth prime. The notation ∅′ or 0′ is often used for the Turing jump of the empty set. It is read zero-jump or sometimes zero-prime. Similarly, ∅^(n) is the nth jump of the empty set. For finite n, these sets are closely related to the arithmetic hierarchy, and are in particular connected to Post's theorem. The jump can be iterated into transfinite ordinals: there are jump operators ∅^(α) for sets of natural numbers when α is an ordinal that has a code in Kleene's O (regardless of code, the resulting jumps are the same by a theorem of Spector); in particular the sets ∅^(α) for α < ω₁^CK, where ω₁^CK is the Church–Kleene ordinal, are closely related to the hyperarithmetic hierarchy. Beyond ω₁^CK, the process can be continued through the countable ordinals of the constructible universe, using Jensen's work on fine structure theory of Gödel's L. The concept has also been generalized to extend to uncountable regular cardinals. Examples The Tur
https://en.wikipedia.org/wiki/Time%20and%20motion%20study
A time and motion study (or time-motion study) is a business efficiency technique combining the Time Study work of Frederick Winslow Taylor with the Motion Study work of Frank and Lillian Gilbreth (the same couple as is best known through the biographical 1950 film and book Cheaper by the Dozen). It is a major part of scientific management (Taylorism). After its first introduction, time study developed in the direction of establishing standard times, while motion study evolved into a technique for improving work methods. The two techniques became integrated and refined into a widely accepted method applicable to the improvement and upgrading of work systems. This integrated approach to work system improvement is known as methods engineering and it is applied today to industrial as well as service organizations, including banks, schools and hospitals. Time studies Time study is a direct and continuous observation of a task, using a timekeeping device (e.g., decimal minute stopwatch, computer-assisted electronic stopwatch, and videotape camera) to record the time taken to accomplish a task and it is often used in at least one of the following applies: There are repetitive work cycles of short to long duration. A wide variety of dissimilar work is performed. Process control elements constitute a part of the cycle. The Industrial Engineering Terminology Standard, defines time study as "a work measurement technique consisting of careful time measurement of the task with a time measuring instrument, adjusted for any observed variance from normal effort or pace and to allow adequate time for such items as foreign elements, unavoidable or machine delays, rest to overcome fatigue, and personal needs." The systems of time and motion studies are frequently assumed to be interchangeable terms that are descriptive of equivalent theories. However, the underlying principles and the rationale for the establishment of each respective method are dissimilar, despite originating
https://en.wikipedia.org/wiki/Digital%20access%20carrier%20system
Digital access carrier system (DACS) is the name used by British Telecom (BT Group plc) in the United Kingdom for a 0+2 pair gain system. Usage For almost as long as telephones have been a common feature in homes and offices, telecommunication companies have regularly been faced with a situation where demand in a particular street or area exceeds the number of physical copper pairs available from the pole to the exchange. Until the early 1980s, this situation was often dealt with by providing shared or 'party' lines, which were connected to multiple customers. This raised privacy problems since any subscriber connected to the line could listen to (or indeed, interrupt) another subscriber's call. With advances in the size, price, and reliability of electronic equipment, it eventually became possible to provide two normal subscriber lines over one copper pair, eliminating the need for party lines. The more modern ISDN technology based digital systems that perform this task are known in Britain by the generic name 'DACS'. DACS works by digitising the analogue signal and sending the combined digital information for both lines over the same copper pair between the exchange and the pole. The cost of the DACS equipment is significantly less than the cost of installing additional copper pairs. Overview The DACS system consists of three main parts: The exchange unit (EU), which connects multiple pairs of analogue lines to their corresponding single digital lines. One Telspec EU rack connects as many as 80 analogue lines over 40 digital copper pairs. The copper pair between the exchange and the remote unit, carrying the digital signal between the exchange unit and the remote unit. The remote unit (RU), which connects two analogue customer lines to one digital copper pair. The RUs are usually to be found on poles within a few hundred metres of the subscribers' homes or businesses. Advantages Because it uses a digital signal along most of the distance between subscrib
https://en.wikipedia.org/wiki/Error-tolerant%20design
An error-tolerant design (or human-error-tolerant design) is one that does not unduly penalize user or human errors. It is the human equivalent of fault tolerant design that allows equipment to continue functioning in the presence of hardware faults, such as a "limp-in" mode for an automobile electronics unit that would be employed if something like the oxygen sensor failed. Use of behavior shaping constraints to prevent errors Use of forcing functions or behavior-shaping constraints is one technique in error-tolerant design. An example is the interlock or lockout of reverse in the transmission of a moving car. This prevents errors, and prevention of errors is the most effective technique in error-tolerant design. The practice is known as poka-yoke in Japan where it was introduced by Shigeo Shingo as part of the Toyota Production System. Mitigation of the effects of errors The next most effective technique in error-tolerant design is the mitigation or limitation of the effects of errors after they have been made. An example is a checking or confirmation function such as an "Are you sure" dialog box with the harmless option preselected in computer software for an action that could have severe consequences if made in error, such as deleting or overwriting files (although the consequence of inadvertent file deletion has been reduced from the DOS days by a concept like the trash can in Mac OS, which has been introduced in most GUI interfaces). Adding too great a mitigating factor in some circumstances can become a hindrance, where the confirmation becomes mechanical this may become detrimental - for example, if a prompt is asked for every file in a batch delete, one may be tempted to simply agree to each prompt, even if a file is deleted accidentally. Another example is Google's use of spell checking on searches performed through their search engine. The spell checking minimises the problems caused by incorrect spelling by not only highlighting the error to th
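The "harmless option preselected" idea described above is simple to express in code. The following is a generic sketch, not taken from any particular application, and the function name and prompt wording are invented for the example: a destructive action asks for confirmation, and simply pressing Enter (the default) takes the safe path, so a slip of the hand leaves the files untouched.

```python
import os

def delete_files(paths):
    """Delete files only after explicit confirmation; the default answer is the
    harmless one, so an accidental Enter keypress cancels rather than deletes."""
    answer = input(f"Delete {len(paths)} file(s)? [y/N] ").strip().lower()
    if answer != "y":                      # anything except an explicit "y" is treated as "no"
        print("Nothing deleted.")
        return
    for path in paths:
        try:
            os.remove(path)
        except OSError as err:             # tolerate individual failures instead of aborting
            print(f"Could not delete {path}: {err}")
```

Note that the prompt covers the whole batch: as the article points out, asking once per file tends to make the confirmation mechanical and therefore less protective.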
https://en.wikipedia.org/wiki/Birkhoff%27s%20theorem%20%28electromagnetism%29
In physics, in the context of electromagnetism, Birkhoff's theorem concerns spherically symmetric static solutions of Maxwell's field equations of electromagnetism. The theorem is due to George D. Birkhoff. It states that any spherically symmetric solution of the source-free Maxwell equations is necessarily static. Pappas (1984) gives two proofs of this theorem, using Maxwell's equations and Lie derivatives. It is a limiting case of Birkhoff's theorem (relativity) by taking the flat metric without backreaction. Derivation from Maxwell's equations The source-free Maxwell's equations state that Since the fields are spherically symmetric, they depend only on the radial distance in spherical coordinates. The field is purely radial as non-radial components cannot be invariant under rotation, which would be necessary for symmetry. Therefore, we can rewrite the fields as We find that the curls must be zero, since, Moreover, we can substitute into the source-free Maxwell equations, to find that Simply dividing by the constant coefficients, we find that both the magnetic and electric field are static Derivation using Lie derivatives Defining the 1-form and 2-form in as: Using the Hodge star operator, we can rewrite Maxwell's Equations with these forms as . The spherical symmetry condition requires that the Lie derivatives of and with respect to the vector field that represents their rotations are zero By the definition of the Lie derivative as the directional derivative along . Therefore, is equivalent to under rotation and we can write for some function . Because the product of the components of the vector are just its length . And substituting back into our equation and rewriting for a function . Taking the exterior derivative of , we find by definition that, . And using our Maxwell equation that , . Thus, we find that the magnetic field is static. Similarly, using the second rotational invariance equation, we can find that the electric
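The radial-field argument sketched in the first derivation can be written out explicitly. The lines below are a reconstruction of the standard calculation (stated here with SI-style source-free Maxwell equations), not a quotation of the original derivation; the notation E(r,t) and B(r,t) for the purely radial field components is an assumption of this sketch.

```latex
% Source-free Maxwell equations
\nabla \cdot \mathbf{E} = 0, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \,\frac{\partial \mathbf{E}}{\partial t}.

% Spherical symmetry forces both fields to be purely radial:
\mathbf{E} = E(r,t)\,\hat{\mathbf{r}}, \qquad \mathbf{B} = B(r,t)\,\hat{\mathbf{r}}.

% The curl of a purely radial, spherically symmetric field vanishes:
\nabla \times \bigl(f(r,t)\,\hat{\mathbf{r}}\bigr) = 0.

% Substituting into the two curl equations gives
\frac{\partial \mathbf{B}}{\partial t} = 0, \qquad
\mu_0 \varepsilon_0 \,\frac{\partial \mathbf{E}}{\partial t} = 0,

% so both fields are static, which is the content of the theorem.
```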
https://en.wikipedia.org/wiki/Lumber%20Cartel
The Lumber Cartel was a facetious conspiracy theory popularized on USENET that claimed anti-spammers were secretly paid agents of lumber companies. In November 1997, a participant on news.admin.net-abuse.email posted an essay to the newsgroup. The essay described a conspiracy theory: The reasoning provided in the essay was that certain companies first destroy forests and make paper out of them, which is in turn used to send bulk mail. Since sending e-mail spam does not use paper at all, the essay argued, the lumber companies would want to stop it before it would surpass paper-based bulk mailing, and consequently only those in the pay of the lumber companies would be anti-spam. The rationale was based in disclaimers in certain spam messages that they were using electronic means in order to save paper. The joke eventually led to a club and numerous parody websites, most of which have long since disappeared. Gatherings of anti-spammers on Usenet began to ridicule proponents of this theory, and many participants in news.admin.net-abuse.email chose to dub themselves as members of "the Lumber Cartel" in their signatures, followed immediately by the acronymic disclaimer "TinLC" (There is no Lumber Cartel), reminiscent of the There Is No Cabal catchphrase. People were able to register with a website about the Lumber Cartel and were given a sequential membership number. That was added to email sig files in news.admin.net-abuse.email and used on personal websites. There was no verification or requirement to receive the membership number. See also Culture jamming References External links How the Lumber Cartel started The Canadian Branch of the Lumber Cartel (local 42) The Netherlands Lumber Cartel The United Kingdom Lumber Cartel in Craggy Island The ZhongGuo (China) Lumber Cartel, local 88 The Jargon File: "Lumber Cartel" Glossary at the Abusive Hosts Blocklist Other Ways to Fry Spam at Wired Gambling Magazine's 1999 article on spam, mentioning the Lumber Cartel Th
https://en.wikipedia.org/wiki/Schools%20Interoperability%20Framework
The Schools Interoperability Framework, Systems Interoperability Framework (UK), or SIF, is a data-sharing open specification for academic institutions from kindergarten through workforce. This specification is being used primarily in the United States, Canada, the UK, Australia, and New Zealand; however, it is increasingly being implemented in India, and elsewhere. The specification comprises two parts: an XML specification for modeling educational data which is specific to the educational locale (such as North America, Australia or the UK), and a service-oriented architecture (SOA) based on both direct and brokered RESTful-models for sharing that data between institutions, which is international and shared between the locales. SIF is not a product, but an industry initiative that enables diverse applications to interact and share data. , SIF was estimated to have been used in more than 48 US states and 6 countries, supporting five million students. The specification was started and maintained by its specification body, the Schools Interoperability Framework Association, renamed the Access For Learning Community (A4L) in 2015. History Traditionally, the standalone applications used by public school districts have the limitation of data isolation; that is, it is difficult to access and share their data. This often results in redundant data entry, data integrity problems, and inefficient or incomplete reporting. In such cases, a student's information can appear in multiple places but may not be identical, for example, or decision makers may be working with incomplete or inaccurate information. Many district and site technology coordinators also experience an increase in technical support problems from maintaining numerous proprietary systems. SIF was created to solve these issues. The Schools Interoperability Framework (SIF) began as an initiative chiefly championed initially by Microsoft to create "a blueprint for educational software interoperability and
https://en.wikipedia.org/wiki/Zanac
is a shoot 'em up video game developed by Compile and published in Japan by Pony Canyon and in North America by FCI. It was released for the MSX computer, the Family Computer Disk System, the Nintendo Entertainment System, and for the Virtual Console. It was reworked for the MSX2 computer as Zanac EX and for the PlayStation as Zanac X Zanac. Players fly a lone starfighter, dubbed the AFX-6502 Zanac, through twelve levels; their goal is to destroy the System—a part-organic, part-mechanical entity bent on destroying mankind. Zanac was developed by main core developers of Compile, including Masamitsu "Moo" Niitani, Koji "Janus" Teramoto, and Takayuki "Jemini" Hirono. All of these developers went on to make other popular similarly based games such as The Guardian Legend, Blazing Lazers, and the Puyo Puyo series. The game is known for its intense and fast-paced gameplay, level of difficulty, and music which seems to match the pace of the game. It has been praised for its unique adaptive artificial intelligence, in which the game automatically adjusts the difficulty level according to the player's skill level, rate of fire and the ship's current defensive status/capability. Gameplay In Zanac, the player controls the spaceship AFX-6502 Zanac as it flies through various planets, space stations, and outer space and through an armada of enemies comprising the defenses of the game's main antagonist—the "System". The player must fight through twelve levels and destroy the System and its defenses. The objective is to shoot down enemies and projectiles and accumulate points. Players start with three lives, and they lose a life if they get hit by an enemy or projectile. After losing a life, gameplay continues with the player reappearing on the screen and losing all previously accumulated power-ups; the player remains temporarily invincible for a moment upon reappearing on the screen. The game ends when all the player's lives have been lost or after completing the twelfth and fin
https://en.wikipedia.org/wiki/Nutritional%20yeast
Nutritional yeast (also known as nooch) is a deactivated yeast, often a strain of Saccharomyces cerevisiae, that is sold commercially as a food product. It is sold in the form of yellow flakes, granules, or powder and can be found in the bulk aisle of most natural food stores. It is popular with vegans and vegetarians and may be used as an ingredient in recipes or as a condiment. It is a significant source of some B-complex vitamins and contains trace amounts of several other vitamins and minerals. Sometimes nutritional yeast is fortified with vitamin B12, another reason it is popular with vegans. Nutritional yeast has a strong flavor that is described as nutty or cheesy, which makes it popular as an ingredient in cheese substitutes. It is often used by vegans in place of cheese in, for example, mashed and fried potatoes or scrambled tofu, or as a topping for popcorn. In Australia, it is sometimes sold as "savoury yeast flakes". In New Zealand, it has long been known as Brufax. Though "nutritional yeast" usually refers to commercial products, inadequately fed prisoners of war have used "home-grown" yeast to prevent vitamin deficiency. Nutritional yeast is a whole-cell inactive yeast that contains both soluble and insoluble parts, which is different from yeast extract. Yeast extract is made by centrifuging inactive nutritional yeast and concentrating the water-soluble yeast cell proteins which are rich in glutamic acid, nucleotides, and peptides, the flavor compounds responsible for umami taste. Commercial production Nutritional yeast is produced by culturing yeast in a nutrient medium for several days. The primary ingredient in the growth medium is glucose, often from either sugarcane or beet molasses. When the yeast is ready, it is killed with heat and then harvested, washed, dried and packaged. The species of yeast used is often a strain of Saccharomyces cerevisiae. The strains are cultured and selected for desirable characteristics and often exhibit a differ
https://en.wikipedia.org/wiki/Chemical%20biology
Chemical biology is a scientific discipline between the fields of chemistry and biology. The discipline involves the application of chemical techniques, analysis, and often small molecules produced through synthetic chemistry, to the study and manipulation of biological systems. In contrast to biochemistry, which involves the study of the chemistry of biomolecules and regulation of biochemical pathways within and between cells, chemical biology deals with chemistry applied to biology (synthesis of biomolecules, the simulation of biological systems, etc.). Introduction Some forms of chemical biology attempt to answer biological questions by studying biological systems at the chemical level. In contrast to research using biochemistry, genetics, or molecular biology, where mutagenesis can provide a new version of the organism, cell, or biomolecule of interest, chemical biology probes systems in vitro and in vivo with small molecules that have been designed for a specific purpose or identified on the basis of biochemical or cell-based screening (see chemical genetics). Chemical biology is one of several interdisciplinary sciences that tend to differ from older, reductionist fields and whose goals are to achieve a description of scientific holism. Chemical biology has scientific, historical and philosophical roots in medicinal chemistry, supramolecular chemistry, bioorganic chemistry, pharmacology, genetics, biochemistry, and metabolic engineering. Systems of interest Enrichment techniques for proteomics Chemical biologists work to improve proteomics through the development of enrichment strategies, chemical affinity tags, and new probes. Samples for proteomics often contain many peptide sequences and the sequence of interest may be highly represented or of low abundance, which creates a barrier for their detection. Chemical biology methods can reduce sample complexity by selective enrichment using affinity chromatography. This involves targeting a peptide with a di
https://en.wikipedia.org/wiki/Cycling%20probe%20technology
Cycling probe technology (CPT) is a molecular biological technique for detecting specific DNA sequences. CPT operates under isothermal conditions. In some applications, CPT offers an alternative to PCR. However, unlike PCR, CPT does not generate multiple copies of the target DNA itself, and the amplification of the signal is linear, in contrast to the exponential amplification of the target DNA in PCR. CPT uses a sequence specific chimeric probe which hybridizes to a complementary target DNA sequence and becomes a substrate for RNase H. Cleavage occurs at the RNA internucleotide linkages and results in dissociation of the probe from the target, thereby making it available for the next probe molecule. Integrated electrokinetic systems have been developed for use in CPT. Probe Cycling probe technology makes use of a chimeric nucleic acid probe to detect the presence of a particular DNA sequence. The chimeric probe consists of an RNA segment sandwiched between two DNA segments. The RNA segment contains 4 contiguous purine nucleotides. The probes should be less than 30 nucleotides in length and designed to minimize intra-probe and inter-probe interactions. Process Cycling probe technology utilizes a cyclic, isothermal process that begins with the hybridization of the chimeric probe with the target DNA. Once hybridized, the probe becomes a suitable substrate for RNase H. RNase H, an endonuclease, cleaves the RNA portion of the probe, resulting in two chimeric fragments. The melting temperature (Tm) of the newly cleaved fragments is lower than the melting temperature of original probe. Because the CPT reaction is isothermally kept just above the melting point of the original probe, the cleaved fragments dissociate from the target DNA. Once dissociated, the target DNA is free to hybridize with a new probe, beginning the cycle again. After the fragments have been cleaved and dissociated, they become detectable. A common strategy for detecting the fragments involves fl
https://en.wikipedia.org/wiki/Voltage%20source
A voltage source is a two-terminal device which can maintain a fixed voltage. An ideal voltage source can maintain the fixed voltage independent of the load resistance or the output current. However, a real-world voltage source cannot supply unlimited current. A voltage source is the dual of a current source. Real-world sources of electrical energy, such as batteries and generators, can be modeled for analysis purposes as a combination of an ideal voltage source and additional combinations of impedance elements. Ideal voltage sources An ideal voltage source is a two-terminal device that maintains a fixed voltage drop across its terminals. It is often used as a mathematical abstraction that simplifies the analysis of real electric circuits. If the voltage across an ideal voltage source can be specified independently of any other variable in a circuit, it is called an independent voltage source. Conversely, if the voltage across an ideal voltage source is determined by some other voltage or current in a circuit, it is called a dependent or controlled voltage source. A mathematical model of an amplifier will include dependent voltage sources whose magnitude is governed by some fixed relation to an input signal, for example. In the analysis of faults on electrical power systems, the whole network of interconnected sources and transmission lines can be usefully replaced by an ideal (AC) voltage source and a single equivalent impedance. (Figure: circuit symbols for the ideal voltage source, ideal current source, controlled voltage source, controlled current source, battery of cells, and single cell.) The internal resistance of an ideal voltage source is zero; it is able to supply or absorb any amount of
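The series model mentioned above (an ideal voltage source plus an internal impedance) can be illustrated with a short numerical sketch. This example is not from the article; the EMF, internal resistance, and load values are hypothetical, and a purely resistive load is assumed for simplicity.

```python
# Terminal voltage of a source modeled as an ideal EMF in series with an
# internal resistance, driving a resistive load. All values are hypothetical.

def terminal_voltage(emf: float, r_internal: float, r_load: float) -> float:
    """Voltage-divider result: V_load = EMF * R_load / (R_internal + R_load)."""
    return emf * r_load / (r_internal + r_load)

if __name__ == "__main__":
    emf = 9.0  # volts
    for r_int in (0.0, 0.5, 2.0):          # 0.0 ohm models the ideal source
        for r_load in (1.0, 10.0, 100.0):  # ohms
            v = terminal_voltage(emf, r_int, r_load)
            print(f"R_int = {r_int:3.1f} ohm, R_load = {r_load:5.1f} ohm -> "
                  f"V_terminal = {v:4.2f} V")
```

With zero internal resistance the terminal voltage equals the EMF for every load, which is exactly the defining behaviour of the ideal source; with a nonzero internal resistance the terminal voltage sags as the load draws more current, matching the statement that a real-world source cannot supply unlimited current.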
https://en.wikipedia.org/wiki/Infinity%20plus%20one
In mathematics, infinity plus one is a concept which has a well-defined formal meaning in some number systems, and may refer to: Transfinite numbers, numbers that are larger than all the finite numbers. Cardinal numbers, representations of sizes (cardinalities) of abstract sets, which may be infinite. Ordinal numbers, representations of order types of well-ordered sets, which may also be infinite. Hyperreal numbers, an extension of the real number system that contains infinite and infinitesimal numbers. Surreal numbers, another extension of the real numbers, which contains the hyperreal numbers and all the transfinite ordinal numbers. English phrases Infinity
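As a brief illustration of how "infinity plus one" takes on different formal meanings in different systems, the following standard identities (added here for clarity, not part of the original entry) contrast ordinal and cardinal arithmetic:

```latex
% Standard facts contrasting ordinal and cardinal arithmetic.
\begin{align*}
  \omega + 1 &> \omega     && \text{ordinals: the successor of } \omega \text{ is a strictly larger ordinal} \\
  1 + \omega &= \omega     && \text{ordinals: addition is not commutative} \\
  \aleph_0 + 1 &= \aleph_0 && \text{cardinals: adding one element does not change an infinite cardinality}
\end{align*}
```

In the hyperreal and surreal numbers, by contrast, an infinite element H satisfies H + 1 > H, and H + 1 is again infinite, so "infinity plus one" is simply a larger infinite number in those systems.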
https://en.wikipedia.org/wiki/Primary%20succession
Primary succession is the beginning step of ecological succession after an extreme disturbance, and it usually occurs in an environment devoid of vegetation and other organisms. These environments are typically lacking in soil, as disturbances like lava flow or retreating glaciers scour the environment clear of nutrients. In contrast, secondary succession occurs on substrates that previously supported vegetation before an ecological disturbance. This occurs when smaller disturbances like floods, hurricanes, tornadoes, and fires destroy only the local plant life and leave soil nutrients for immediate establishment by intermediate community species. Occurrence In primary succession, pioneer species such as lichens, algae, and fungi, together with abiotic factors like wind and water, start to "normalise" the habitat, that is, to develop soil and the other conditions needed for greater diversity to flourish. Primary succession begins on rock formations, such as volcanoes or mountains, or in a place with no organisms or soil. Primary succession leads to conditions nearer optimum for vascular plant growth; pedogenesis, or the formation of soil, and the increased amount of shade are the most important processes. These pioneer lichens, algae, and fungi are then dominated and often replaced by plants that are better adapted to less harsh conditions; these include vascular plants like grasses and some shrubs that are able to live in thin soils that are often mineral-based. Water and nutrient levels increase with the amount of succession exhibited. The early stages of primary succession are dominated by species with small propagules (seeds and spores) which can be dispersed long distances. The early colonizers (often algae, fungi, and lichens) stabilize the substrate. Nitrogen supplies are limited in new soils, and nitrogen-fixing species tend to play an important role early in primary succession. Unlike in primary succession, the species that dominate secondary success
https://en.wikipedia.org/wiki/Porism
A porism is a mathematical proposition or corollary. It has been used to refer to a direct consequence of a proof, analogous to how a corollary refers to a direct consequence of a theorem. In modern usage, it is a relationship that holds for an infinite range of values but only if a certain condition is assumed, such as Steiner's porism. The term originates from three books of Euclid that have been lost. Because the proposition in question may not have been proven, a porism is not necessarily a theorem and need not be true. Origins The first book to discuss porisms is Euclid's Porisms. What is known of it comes from Pappus of Alexandria's Collection, which mentions it along with other geometrical treatises and gives several lemmas necessary for understanding it. Pappus states: The porisms of all classes are neither theorems nor problems, but occupy a position intermediate between the two, so that their enunciations can be stated either as theorems or problems, and consequently some geometers think that they are theorems, and others that they are problems, being guided solely by the form of the enunciation. But it is clear from the definitions that the old geometers understood better the difference between the three classes. The older geometers regarded a theorem as directed to proving what is proposed, a problem as directed to constructing what is proposed, and finally a porism as directed to finding what is proposed. Pappus said that the last definition was changed by certain later geometers, who defined a porism as an accidental characteristic, to leîpon hypothései topikoû theōrḗmatos, "that which falls short of a locus-theorem by a (or in its) hypothesis". Proclus pointed out that the word porism was used in two senses: one sense is that of "corollary", as a result unsought but seen to follow from a theorem. In the other sense, he added nothing to the definition of "the older geometers", except to say that the finding of the center of a circle and the finding of the greatest common measure are
https://en.wikipedia.org/wiki/Hotline%20Communications
Hotline Communications Limited (HCL) was a software company founded in 1997, based in Toronto, Canada, with employees also in the United States and Australia. Hotline Communications' main activity was the publishing and distribution of a multi-purpose client/server communication software product named Hotline Connect, informally called, simply, Hotline. Initially, Hotline Communications sought a wide audience for its products, and organizations as diverse as Avid Technology, Apple Computer Australia, and public high schools used Hotline. At its peak, Hotline received millions of dollars in venture capital funding, grew to employ more than fifty people, served millions of users, and won accolades at trade shows and in newspapers and computer magazines around the world. Hotline eventually attracted more of an "underground" community, which saw it as an easier-to-use successor to the Internet Relay Chat (IRC) community. In 2001 Hotline Communications lost the bulk of its VC funding, and went out of business later that year. All of its assets were acquired in 2002 by Hotsprings, Inc., a new company formed by some ex-employees and shareholders. Hotsprings Inc. has since also abandoned development of the Hotline Connect software suite; the last iteration of Hotline Connect was released in December 2003. Currently, only a few servers and trackers remain, but the Hotline community is still alive. History Hotline was designed in 1996, originally under the name "hotwire", by Australian programmer Adam Hinkley (known online by his username, "Hinks"), then 17 years old, as a classic Mac OS application. The source code for the Hotline applications was based on a class library, "AppWarrior" (AW), which Hinkley wrote. AppWarrior would later become the subject of litigation, as Hinkley had written parts of it while he was employed by an Australian company, Redrock Holdings. Six other fans of Hotline, David Murphy, Alex King, Phil Hilton, Jason Roks, David Bordin, and Terrance Gregory, joined Adam Hinkley's efforts to
https://en.wikipedia.org/wiki/History%20of%20materials%20science
Materials science has shaped the development of civilizations since the dawn of mankind. Better materials for tools and weapons have allowed mankind to spread and conquer, and advancements in material processing like steel and aluminum production continue to impact society today. Historians have regarded materials as so important an aspect of civilizations that entire periods of time have been defined by the predominant material used (Stone Age, Bronze Age, Iron Age). For most of recorded history, control of materials had been through alchemy or empirical means at best. The study and development of chemistry and physics assisted the study of materials, and eventually the interdisciplinary study of materials science emerged from the fusion of these studies. The history of materials science is the study of how different materials were used and developed through the history of Earth and how those materials affected the culture of the peoples of the Earth. The term "Silicon Age" is sometimes used to refer to the modern period of history during the late 20th to early 21st centuries. Prehistory In many cases, different cultures leave their materials as the only records, which anthropologists can use to define the existence of such cultures. The progressive use of more sophisticated materials allows archeologists to characterize and distinguish between peoples. This is partially due to the major material of use in a culture and to its associated benefits and drawbacks. Stone-Age cultures were limited by which rocks they could find locally and by which they could acquire by trading. The use of flint around 300,000 BCE is sometimes considered the beginning of the use of ceramics. The use of polished stone axes marks a significant advance, because a much wider variety of rocks could serve as tools. The innovation of smelting and casting metals in the Bronze Age started to change the way that cultures developed and interacted with each other. Starting around 5,500 BCE,
https://en.wikipedia.org/wiki/List%20of%20solar%20car%20teams
This is a list of solar car racing teams. Australia Belgium Brazil Canada Chile Colombia Denmark France Germany Greece India Iran Italy Jordan Japan Indonesia Malaysia Morocco The Netherlands Pakistan Poland Puerto Rico Saudi Arabia South Africa Sweden Switzerland Turkey United Arab Emirates United Kingdom United States Venezuela New Zealand See also World Solar Challenge North American Solar Challenge References Solar car Engineering education Solar car teams Solar
https://en.wikipedia.org/wiki/Synanthrope
A synanthrope (from ancient Greek σύν sýn "together, with" and ἄνθρωπος ánthrōpos "man") is an organism that lives near and benefits from humans and their environmental modifications (see also anthropophilia for animals that live close to humans as parasites). The term synanthrope includes many species regarded as pests or weeds, but does not include domesticated animals. Common synanthrope habitats include houses, gardens, farms, parks, roadsides and rubbish dumps. Zoology Examples of synanthropes are various insect species (ants, lice, silverfish, cockroaches, etc.), house sparrows, rock doves (pigeons), crows, various rodent species, Virginia opossums, raccoons, certain monkey species, coyotes, deer, passerines, and other urban wildlife. The brown rat is counted as one of the most prominent synanthropic animals and can be found in almost every place there are people. Botany Synanthropic plants include pineapple weed, dandelion, chicory, and plantain. Plant synanthropes are classified into two main types: apophytes and anthropophytes. Apophytes are synanthropic species that are native in origin. They can be subdivided into the following: Cultigen apophytes – spread by cultivation methods Ruderal apophytes – spread by development of marginal areas Pyrophyte apophytes – spread by fires Zoogen apophytes – spread by grazing animals Substitution apophytes – spread by logging or voluntary extension Anthropophytes are synanthropic species of foreign origin, whether introduced voluntarily or involuntarily. They can be subdivided into the following: Archaeophytes – introduced before the end of the 15th century Kenophytes – introduced after the 15th century Ephemerophytes – anthropophytic plants that appear episodically Subspontaneous – voluntarily introduced plants that have escaped cultivation and survived in the wild without further human intervention for a certain period. Adventive – involuntarily introduced plants that have escaped cultivation and survived in t
https://en.wikipedia.org/wiki/Software%20deployment
Software deployment is all of the activities that make a software system available for use. The general deployment process consists of several interrelated activities with possible transitions between them. These activities can occur on the producer side or on the consumer side or both. Because every software system is unique, the precise processes or procedures within each activity can hardly be defined. Therefore, "deployment" should be interpreted as a general process that has to be customized according to specific requirements or characteristics. History When computers were extremely large, expensive, and bulky (mainframes and minicomputers), the software was often bundled together with the hardware by manufacturers. If business software needed to be installed on an existing computer, this might require an expensive, time-consuming visit by a systems architect or a consultant. For complex, on-premises installation of enterprise software today, this can still sometimes be the case. However, with the development of mass-market software for the new age of microcomputers in the 1980s came new forms of software distribution: first cartridges, then Compact Cassettes, then floppy disks, then (in the 1990s and later) optical media, the internet, and flash drives. This meant that software deployment could be left to the customer. However, it was also increasingly recognized over time that configuration of the software by the customer was important and that this should ideally have a user-friendly interface (rather than, for example, requiring the customer to edit registry entries on Windows). In pre-internet software deployments, deployments (and their closely related cousin, new software releases) were of necessity expensive, infrequent, bulky affairs. It is arguable therefore that the spread of the internet made end-to-end agile software development possible. Indeed, the advent of cloud computing and software as a service meant that software could be deployed to a
https://en.wikipedia.org/wiki/Lysochrome
A lysochrome is a soluble dye used for histochemical staining of lipids, which include triglycerides, fatty acids, and lipoproteins. Lysochromes such as Sudan IV dissolve in the lipid and show up as colored regions. The dye does not stick to any other substrates, so a quantification or qualification of lipid presence can be obtained. The name was coined by the biologist John Baker in his book "Principles of Biological Microtechnique", published in 1958, from the Greek words lysis (solution) and chroma (colour). References Biochemistry methods Lipids Histochemistry
https://en.wikipedia.org/wiki/Alloimmunity
Alloimmunity (sometimes called isoimmunity) is an immune response to nonself antigens from members of the same species, which are called alloantigens or isoantigens. Two major types of alloantigens are blood group antigens and histocompatibility antigens. In alloimmunity, the body creates antibodies (called alloantibodies) against the alloantigens, attacking transfused blood, allotransplanted tissue, and even the fetus in some cases. An alloimmune (isoimmune) response results in graft rejection, which is manifested as deterioration or complete loss of graft function. In contrast, autoimmunity is an immune response to the self's own antigens. (The allo- prefix means "other", whereas the auto- prefix means "self".) Alloimmunization (isoimmunization) is the process of becoming alloimmune, that is, developing the relevant antibodies for the first time. Alloimmunity is caused by the difference between products of highly polymorphic genes, primarily genes of the major histocompatibility complex, of the donor and graft recipient. These products are recognized by T-lymphocytes and other mononuclear leukocytes, which infiltrate the graft and damage it. Types of rejection Transfusion reaction Blood transfusion can result in alloantibodies reacting against the transfused cells, resulting in a transfusion reaction. Even with standard blood compatibility testing, there is a risk of reaction against human blood group systems other than ABO and Rh. Hemolytic disease of the fetus and newborn Hemolytic disease of the fetus and newborn is similar to a transfusion reaction in that the mother's immune system does not tolerate the fetus's antigens, which happens when the immune tolerance of pregnancy is impaired. In many instances the maternal immune system attacks the fetal blood cells, resulting in fetal anemia. HDN ranges from mild to severe. Severe cases require intrauterine transfusions or early delivery to survive, while mild cases may only require phototherapy at birth. Transplan