https://en.wikipedia.org/wiki/Stirling%20polynomials
In mathematics, the Stirling polynomials are a family of polynomials that generalize important sequences of numbers appearing in combinatorics and analysis, which are closely related to the Stirling numbers, the Bernoulli numbers, and the generalized Bernoulli polynomials. There are multiple variants of the Stirling polynomial sequence considered below, most notably the Sheffer sequence form of the sequence, S_k(x), defined characteristically through the special form of its exponential generating function, and the Stirling (convolution) polynomials, σ_n(x), which also satisfy a characteristic ordinary generating function and which are of use in generalizing the Stirling numbers (of both kinds) to arbitrary complex-valued inputs. We consider the "convolution polynomial" variant of this sequence and its properties second, in the last subsection of the article. Still other variants of the Stirling polynomials are studied in the supplementary links to the articles given in the references. Definition and examples For nonnegative integers k, the Stirling polynomials, S_k(x), are a Sheffer sequence defined by the exponential generating function $\left(\frac{t}{1-e^{-t}}\right)^{x+1} = \sum_{k=0}^{\infty} S_k(x) \frac{t^k}{k!}$. The Stirling polynomials are a special case of the Nørlund polynomials (or generalized Bernoulli polynomials) $B_k^{(a)}(z)$, each with exponential generating function $\left(\frac{t}{e^{t}-1}\right)^{a} e^{zt} = \sum_{k=0}^{\infty} B_k^{(a)}(z) \frac{t^k}{k!}$, given by the relation $S_k(x) = B_k^{(x+1)}(x+1)$. The first 10 Stirling polynomials, S_0(x) through S_9(x), are given in a table indexed by k = 0, …, 9 (entries omitted). Yet another variant of the Stirling polynomials is considered in the literature (see also the subsection on Stirling convolution polynomials below). In particular, the article by I. Gessel and R. P. Stanley defines two modified Stirling polynomial sequences, the second built from the unsigned Stirling numbers of the first kind, in terms of the two Stirling number triangles for non-negative integers. For fixed k, both sequences are polynomials of the input, each of d
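The generating-function definition above can be checked numerically. A minimal Python sketch using sympy, under the assumption that the EGF takes the reconstructed form $(t/(1-e^{-t}))^{x+1}$; the function name stirling_poly_row is illustrative only:

import sympy as sp

t = sp.symbols('t')

def stirling_poly_row(x_val, n_terms=5):
    # Read S_0(x_val) ... S_{n_terms-1}(x_val) off the (assumed) EGF
    # (t / (1 - exp(-t)))**(x_val + 1) = sum_k S_k(x_val) * t**k / k!
    egf = (t / (1 - sp.exp(-t)))**(x_val + 1)
    poly = sp.series(egf, t, 0, n_terms).removeO()
    return [sp.simplify(poly.coeff(t, k) * sp.factorial(k)) for k in range(n_terms)]

# For x = 0 the values should reduce to the Bernoulli numbers taken with B_1 = +1/2.
print(stirling_poly_row(0))   # expected: [1, 1/2, 1/6, 0, -1/30]
print(stirling_poly_row(1))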
https://en.wikipedia.org/wiki/Source%20code%20escrow
Source code escrow is the deposit of the source code of software with a third-party escrow agent. Escrow is typically requested by a party licensing software (the licensee), to ensure maintenance of the software instead of abandonment or orphaning. The software's source code is released to the licensee if the licensor files for bankruptcy or otherwise fails to maintain and update the software as promised in the software license agreement. Necessity of escrow As the continued operation and maintenance of custom software is critical to many companies, they usually desire to make sure that it continues even if the licensor becomes unable to do so, such as because of bankruptcy. This is most easily achieved by obtaining a copy of the up-to-date source code. The licensor, however, will often be unwilling to agree to this, as the source code will generally represent one of their most closely guarded trade secrets. As a solution to this conflict of interest, source code escrow ensures that the licensee obtains access to the source code only when the maintenance of the software cannot otherwise be assured, as defined in contractually agreed-upon conditions. Escrow agreements Source code escrow takes place in a contractual relationship, formalized in a source code escrow agreement, between at least three parties: one or several licensors, one or several licensees, the escrow agent. The service provided by the escrow agent – generally a business dedicated to that purpose and independent from either party – consists principally in taking custody of the source code from the licensor and releasing it to the licensee only if the conditions specified in the escrow agreement are met. Source code escrow agreements provide for the following: They specify the subject and scope of the escrow. This is generally the source code of a specific software, accompanied by everything that the licensee requires to independently maintain the software, such as documentation, software tool
https://en.wikipedia.org/wiki/ToonTalk
ToonTalk is a computer programming system intended to be programmed by children. The "Toon" part stands for cartoon. The system's presentation is in the form of animated characters, including robots that can be trained by example. It is one of the few successful implementations outside academia of the concurrent constraint logic programming paradigm. It was created by Kenneth M. Kahn in 1995, and implemented as part of the ToonTalk IDE, a software package distributed worldwide between 1996 and 2009. Since 2009, its specification is scholarly published and its implementation is freely available. Beginning 2014 a JavaScript HTML5 version of ToonTalk called ToonTalk Reborn for the Web has been available. It runs on any modern web browser and differs from the desktop version of ToonTalk in a few ways. ToonTalk programs can run on any DOM element and various browser capabilities (audio, video, style sheets, speech input and output, and browser events) are available to ToonTalk programs. Web services such as Google Drive are integrated. ToonTalk Reborn is free and open source. Beyond its life as a commercial product, ToonTalk evolved via significant academic use in various research projects, notably at the London Knowledge Lab and the Institute of Education - projects Playground and WebLabs, which involved research partners from Cambridge (Addison Wesley Longman through their Logotron subsidiary), Portugal (Cnotinfor and the University of Lisbon), Sweden (Royal Institute of Technology), Slovakia (Comenius University), Bulgaria (Sofia University), Cyprus (University of Cyprus), and Italy (Institute for Educational Technology of the Consiglio Nazionale delle Ricerche). It was also source of academic interest in Sweden, where Mikael Kindborg proposed a static representation of ToonTalk programs and in Portugal, where Leonel Morgado studied its potential to enable computer programming by preliterate children. ToonTalk was influenced by the Janus computer programming lan
https://en.wikipedia.org/wiki/Lunar%20distance%20%28navigation%29
In celestial navigation, lunar distance, also called a lunar, is the angular distance between the Moon and another celestial body. The lunar distances method uses this angle and a nautical almanac to calculate Greenwich time if so desired, or by extension any other time. That calculated time can be used in solving a spherical triangle. The theory was first published by Johannes Werner in 1524, before the necessary almanacs had been published. A fuller method was published in 1763 and used until about 1850 when it was superseded by the marine chronometer. A similar method uses the positions of the Galilean moons of Jupiter. Purpose In celestial navigation, knowledge of the time at Greenwich (or another known place) and the measured positions of one or more celestial objects allows the navigator to calculate latitude and longitude. Reliable marine chronometers were unavailable until the late 18th century and not affordable until the 19th century. After the method was first published in 1763 by British Astronomer Royal Nevil Maskelyne, based on pioneering work by Tobias Mayer, for about a hundred years (until about 1850) mariners lacking a chronometer used the method of lunar distances to determine Greenwich time as a key step in determining longitude. Conversely, a mariner with a chronometer could check its accuracy using a lunar determination of Greenwich time. The method saw usage all the way up to the beginning of the 20th century on smaller vessels that could not afford a chronometer or had to rely on this technique for correction of the chronometer. Method Summary The method relies on the relatively quick movement of the moon across the background sky, completing a circuit of 360 degrees in 27.3 days (the sidereal month), or 13.2 degrees per day. In one hour it will move approximately half a degree, roughly its own angular diameter, with respect to the background stars and the Sun. Using a sextant, the navigator precisely measures the angle between the m
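As a rough illustration of the final step of the method, recovering Greenwich time by interpolating a cleared lunar distance between almanac values tabulated every three hours, here is a minimal Python sketch; the distances and times are invented for the example and are not taken from any real almanac:

def interpolate_greenwich_time(d_observed, t0_hours, d0, t1_hours, d1):
    # Linearly interpolate the Greenwich time at which the geocentric lunar
    # distance equals d_observed, given two tabulated (time, distance) pairs
    # in hours and degrees.  Linear interpolation is adequate because the
    # Moon moves only about half a degree per hour.
    fraction = (d_observed - d0) / (d1 - d0)
    return t0_hours + fraction * (t1_hours - t0_hours)

# Hypothetical tabulations: 42.000 deg at 12:00 GMT and 43.529 deg at 15:00 GMT.
gmt = interpolate_greenwich_time(42.800, 12.0, 42.000, 15.0, 43.529)
print(f"Greenwich time ~ {gmt:.2f} h")   # about 13.57 h, i.e. roughly 13:34 GMT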
https://en.wikipedia.org/wiki/Confluency
In cell culture biology, confluence refers to the percentage of the surface of a culture dish that is covered by adherent cells. For example, 50 percent confluence means roughly half of the surface is covered, while 100 percent confluence means the surface is completely covered by the cells, and no more room is left for the cells to grow as a monolayer. The cell number refers to, trivially, the number of cells in a given region. Impact on research Many cell lines exhibit differences in growth rate or gene expression depending on the degree of confluence. Cells are typically passaged before becoming fully confluent in order to maintain their proliferation phenotype. Some cell types are not limited by contact inhibition, such as immortalized cells, and may continue to divide and form layers on top of the parent cells. To achieve optimal and consistent results, experiments are usually performed using cells at a particular confluence, depending on the cell type. Extracellular export of cell free material is also dependent on the cell confluence . Estimation Rule of thumb Comparing the amount of space covered by cells with unoccupied space using the naked eye can provide a rough estimate of confluency. Hemocytometer A hemocytometer can be used to count cells, giving the cell number.
https://en.wikipedia.org/wiki/Liesegang%20rings
Liesegang rings () are a phenomenon seen in many, if not most, chemical systems undergoing a precipitation reaction under certain conditions of concentration and in the absence of convection. Rings are formed when weakly soluble salts are produced from reaction of two soluble substances, one of which is dissolved in a gel medium. The phenomenon is most commonly seen as rings in a Petri dish or bands in a test tube; however, more complex patterns have been observed, such as dislocations of the ring structure in a Petri dish, helices, and "Saturn rings" in a test tube. Despite continuous investigation since rediscovery of the rings in 1896, the mechanism for the formation of Liesegang rings is still unclear. History The phenomenon was first noticed in 1855 by the German chemist Friedlieb Ferdinand Runge. He observed them in the course of experiments on the precipitation of reagents in blotting paper. In 1896 the German chemist Raphael E. Liesegang noted the phenomenon when he dropped a solution of silver nitrate onto a thin layer of gel containing potassium dichromate. After a few hours, sharp concentric rings of insoluble silver dichromate formed. It has aroused the curiosity of chemists for many years. When formed in a test tube by diffusing one component from the top, layers or bands of precipitate form, rather than rings. Silver nitrate–potassium dichromate reaction The reactions are most usually carried out in test tubes into which a gel is formed that contains a dilute solution of one of the reactants. If a hot solution of agar gel also containing a dilute solution of potassium dichromate is poured in a test tube, and after the gel solidifies a more concentrated solution of silver nitrate is poured on top of the gel, the silver nitrate will begin to diffuse into the gel. It will then encounter the potassium dichromate and will form a continuous region of precipitate at the top of the tube. After some hours, the continuous region of precipitation is followed
https://en.wikipedia.org/wiki/Criterion-referenced%20test
A criterion-referenced test is a style of test which uses test scores to generate a statement about the behavior that can be expected of a person with that score. Most tests and quizzes that are written by school teachers can be considered criterion-referenced tests. In this case, the objective is simply to see whether the student has learned the material. Criterion-referenced assessment can be contrasted with norm-referenced assessment and ipsative assessment. Criterion-referenced testing was a major focus of psychometric research in the 1970s. Definition of criterion A common misunderstanding regarding the term is the meaning of criterion. Many, if not most, criterion-referenced tests involve a cutscore, where the examinee passes if their score exceeds the cutscore and fails if it does not (often called a mastery test). The criterion is not the cutscore; the criterion is the domain of subject matter that the test is designed to assess. For example, the criterion may be "Students should be able to correctly add two single-digit numbers," and the cutscore may be that students should correctly answer a minimum of 80% of the questions to pass. The criterion-referenced interpretation of a test score identifies the relationship to the subject matter. In the case of a mastery test, this does mean identifying whether the examinee has "mastered" a specified level of the subject matter by comparing their score to the cutscore. However, not all criterion-referenced tests have a cutscore, and the score can simply refer to a person's standing on the subject domain. The ACT is an example of this; there is no cutscore, it simply is an assessment of the student's knowledge of high-school level subject matter. Because of this common misunderstanding, criterion-referenced tests have also been called standards-based assessments by some education agencies, as students are assessed with regards to standards that define what they "should" know, as defined by the state. Co
https://en.wikipedia.org/wiki/Data%20architecture
Data architecture consists of models, policies, rules, and standards that govern which data is collected and how it is stored, arranged, integrated, and put to use in data systems and in organizations. Data is usually one of several architecture domains that form the pillars of an enterprise architecture or solution architecture. Overview A data architecture aims to set data standards for all its data systems as a vision or a model of the eventual interactions between those data systems. Data integration, for example, should be dependent upon data architecture standards since data integration requires data interactions between two or more data systems. A data architecture, in part, describes the data structures used by a business and its computer applications software. Data architectures address data in storage, data in use, and data in motion; descriptions of data stores, data groups, and data items; and mappings of those data artifacts to data qualities, applications, locations, etc. Essential to realizing the target state, data architecture describes how data is processed, stored, and used in an information system. It provides criteria for data processing operations to make it possible to design data flows and also control the flow of data in the system. The data architect is typically responsible for defining the target state, aligning during development and then following up to ensure enhancements are done in the spirit of the original blueprint. During the definition of the target state, the data architecture breaks a subject down to the atomic level and then builds it back up to the desired form. The data architect breaks the subject down by going through three traditional architectural stages: Conceptual - represents all business entities. Logical - represents the logic of how entities are related. Physical - the realization of the data mechanisms for a specific type of functionality. The "data" column of the Zachman Framework for enterprise architec
https://en.wikipedia.org/wiki/Kosaraju%27s%20algorithm
In computer science, Kosaraju-Sharir's algorithm (also known as Kosaraju's algorithm) is a linear time algorithm to find the strongly connected components of a directed graph. Aho, Hopcroft and Ullman credit it to S. Rao Kosaraju and Micha Sharir. Kosaraju suggested it in 1978 but did not publish it, while Sharir independently discovered it and published it in 1981. It makes use of the fact that the transpose graph (the same graph with the direction of every edge reversed) has exactly the same strongly connected components as the original graph. The algorithm The primitive graph operations that the algorithm uses are to enumerate the vertices of the graph, to store data per vertex (if not in the graph data structure itself, then in some table that can use vertices as indices), to enumerate the out-neighbours of a vertex (traverse edges in the forward direction), and to enumerate the in-neighbours of a vertex (traverse edges in the backward direction); however the last can be done without, at the price of constructing a representation of the transpose graph during the forward traversal phase. The only additional data structure needed by the algorithm is an ordered list L of graph vertices, that will grow to contain each vertex once. If strong components are to be represented by appointing a separate root vertex for each component, and assigning to each vertex the root vertex of its component, then Kosaraju's algorithm can be stated as follows, as also shown in the sketch below. For each vertex u of the graph, mark u as unvisited. Let L be empty. For each vertex u of the graph do Visit(u), where Visit(u) is the recursive subroutine: If u is unvisited then: Mark u as visited. For each out-neighbour v of u, do Visit(v). Prepend u to L. Otherwise do nothing. For each element u of L in order, do Assign(u, u) where Assign(u, root) is the recursive subroutine: If u has not been assigned to a component then: Assign u as belonging to the component whose root is root. For each in-neighbour v of u, do Assign(v, root). Otherwise do nothing. Trivial variations are to instead assign a
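The description above translates almost directly into code. A minimal Python sketch (plain recursion, so it assumes the graph is small enough to stay within the recursion limit; the names visit and assign mirror the subroutines in the text):

def kosaraju(vertices, out_edges):
    # Return a dict mapping each vertex to the root of its strongly connected
    # component.  out_edges[u] lists the out-neighbours of u; the transpose
    # (in-neighbour) lists are built during the same setup pass.
    in_edges = {u: [] for u in vertices}
    for u in vertices:
        for v in out_edges[u]:
            in_edges[v].append(u)

    visited, order = set(), []          # `order` plays the role of the list L

    def visit(u):                       # first pass: DFS on the original graph
        if u not in visited:
            visited.add(u)
            for v in out_edges[u]:
                visit(v)
            order.insert(0, u)          # prepend u to L

    for u in vertices:
        visit(u)

    component = {}

    def assign(u, root):                # second pass: DFS on the transpose graph
        if u not in component:
            component[u] = root
            for v in in_edges[u]:
                assign(v, root)

    for u in order:
        assign(u, u)
    return component

# Example: one component {a, b, c} and a singleton component {d}.
print(kosaraju(['a', 'b', 'c', 'd'],
               {'a': ['b'], 'b': ['c'], 'c': ['a', 'd'], 'd': []}))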
https://en.wikipedia.org/wiki/Informational%20self-determination
The term informational self-determination was first used in the context of a German constitutional ruling relating to personal information collected during the 1983 census. The German term is informationelle Selbstbestimmung. It is formally defined as "the authority of the individual to decide himself, on the basis of the idea of self-determination, when and within what limits information about his private life should be communicated to others." Freedom of speech, protection of privacy, right to active private life, right to education, protection of personal data, and the right to public sector information all fall under the umbrella of informational self-determination. On that occasion, the German Federal Constitutional Court ruled that: “[...] in the context of modern data processing, the protection of the individual against unlimited collection, storage, use and disclosure of his/her personal data is encompassed by the general personal rights of the German constitution. This basic right warrants in this respect the capacity of the individual to determine in principle the disclosure and use of his/her personal data. Limitations to this informational self-determination are allowed only in case of overriding public interest.” Informational self-determination is often considered similar to the right to privacy but has unique characteristics that distinguish it from the "right to privacy" in the United States tradition. Informational self-determination reflects Westin's description of privacy: “The right of the individual to decide what information about himself should be communicated to others and under what circumstances” (Westin, 1970). In contrast, the "right to privacy" in the United States legal tradition is commonly considered to originate in Warren and Brandeis' article, which focuses on the right to "solitude" (i.e., being "left alone") and in the Constitution's Fourth Amendment, which protects persons and their belongings from warrantless search. Views fr
https://en.wikipedia.org/wiki/L-reduction
In computer science, particularly the study of approximation algorithms, an L-reduction ("linear reduction") is a transformation of optimization problems which linearly preserves approximability features; it is one type of approximation-preserving reduction. L-reductions in studies of approximability of optimization problems play a similar role to that of polynomial reductions in the studies of computational complexity of decision problems. The term L reduction is sometimes used to refer to log-space reductions, by analogy with the complexity class L, but this is a different concept. Definition Let A and B be optimization problems and cA and cB their respective cost functions. A pair of functions f and g is an L-reduction if all of the following conditions are met: functions f and g are computable in polynomial time, if x is an instance of problem A, then f(x) is an instance of problem B, if y' is a solution to f(x), then g(y' ) is a solution to x, there exists a positive constant α such that $\mathrm{OPT}_B(f(x)) \le \alpha \cdot \mathrm{OPT}_A(x)$, there exists a positive constant β such that for every solution y' to f(x), $|\mathrm{OPT}_A(x) - c_A(g(y'))| \le \beta \cdot |\mathrm{OPT}_B(f(x)) - c_B(y')|$. Properties Implication of PTAS reduction An L-reduction from problem A to problem B implies an AP-reduction when A and B are minimization problems and a PTAS reduction when A and B are maximization problems. In both cases, when B has a PTAS and there is an L-reduction from A to B, then A also has a PTAS. This enables the use of L-reduction as a replacement for showing the existence of a PTAS-reduction; Crescenzi has suggested that the more natural formulation of L-reduction is actually more useful in many cases due to ease of usage. Proof (minimization case) Let the approximation ratio of B be $1 + \delta = \frac{c_B(y')}{\mathrm{OPT}_B(f(x))}$. Begin with the approximation ratio of A, $\frac{c_A(g(y'))}{\mathrm{OPT}_A(x)}$. We can remove absolute values around the last condition of the L-reduction definition since we know A and B are minimization problems. Substitute that condition to obtain $\frac{c_A(g(y'))}{\mathrm{OPT}_A(x)} \le \frac{\mathrm{OPT}_A(x) + \beta\,(c_B(y') - \mathrm{OPT}_B(f(x)))}{\mathrm{OPT}_A(x)}$. Simplifying, and substituting the first condition, we have $\frac{c_A(g(y'))}{\mathrm{OPT}_A(x)} \le 1 + \alpha\beta\,\frac{c_B(y') - \mathrm{OPT}_B(f(x))}{\mathrm{OPT}_B(f(x))}$. But the term
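The conclusion the truncated proof is heading toward can be stated compactly; this is a sketch of the standard bound, with δ denoting the relative error of the solution to B:

\[
\frac{c_A(g(y'))}{\mathrm{OPT}_A(x)} \;\le\; 1 + \alpha\beta\delta
\qquad\text{where}\qquad
\delta = \frac{c_B(y') - \mathrm{OPT}_B(f(x))}{\mathrm{OPT}_B(f(x))},
\]

so an approximation for B with ratio $1+\delta$ yields one for A with ratio $1+\alpha\beta\delta$; letting $\delta \to 0$ shows that a PTAS for B gives a PTAS for A.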
https://en.wikipedia.org/wiki/UKNC
UKNC () is a Soviet PDP-11-compatible educational micro computer, aimed at teaching school informatics courses. It is also known as Elektronika MS-0511. UKNC stands for Educational Computer by Scientific Centre. Hardware Processor: KM1801VM2 1801 series CPU @ 8 MHz, 16 bit data bus, 17 bit address bus Peripheral processor: KM1801VM2 @ 6.25 MHz CPU RAM: 64 KiB PPU RAM: 32 KiB ROM: 32 KiB video RAM: 96 KiB (3 planes 32 KiB each, each 3-bit pixel had a bit in each plane) Graphics: max 640×288 with 8 colors in one line (16 or 53 colors on whole screen), it is possible to set an individual palette, resolution (80, 160, 320, or 640 dots per line) and memory address for each of 288 screen lines; no text mode. Keyboard: 88 keys (MS-7007), JCUKEN layout built-in LAN controller built-in controller for common or special tape-recorder with computer control (to use for data storage, usually 5-inch FDD's were used) One unique part of the design is the usage of a peripheral processing unit (PPU). Management of peripheral devices (display, audio, and so on) was offloaded to the PPU, which can also run user programs. The computer was released in 3 sub-models: 0511, 0511.1, 0511.2. The 0511.1 model, intended for home use, has a power supply for 220 V AC, while others use 42 V AC. The 0511.2 features new firmware with extended functionality and changed the marking of the keyboard's gray keys, compared to the initial version. The photo shows an 0511.2 variant. There is no active cooling, and at least the 0511.2 variant tends to overheat and halt after several hours of operation. The design of the case, the layout of the keyboard, the location and the shape of expansion slots are inspired by the Yamaha MSX system, which was purchased by the Soviet Union in the early 1980s for use in schools. The same case, with changed markings, is found with the IBM PC clone called Elektronika MS-1502. The same case and keyboard are found on another educational computer called Rusich (i8085 based)
https://en.wikipedia.org/wiki/Lamellipodium
The lamellipodium (: lamellipodia) (from Latin lamella, related to , "thin sheet", and the Greek radical pod-, "foot") is a cytoskeletal protein actin projection on the leading edge of the cell. It contains a quasi-two-dimensional actin mesh; the whole structure propels the cell across a substrate. Within the lamellipodia are ribs of actin called microspikes, which, when they spread beyond the lamellipodium frontier, are called filopodia. The lamellipodium is born of actin nucleation in the plasma membrane of the cell and is the primary area of actin incorporation or microfilament formation of the cell. Description Lamellipodia are found primarily in all mobile cells, such as the keratinocytes of fish and frogs, which are involved in the quick repair of wounds. The lamellipodia of these keratinocytes allow them to move at speeds of 10–20 μm / min over epithelial surfaces. When separated from the main part of a cell, a lamellipodium can still crawl about freely on its own. Lamellipodia are a characteristic feature at the front, leading edge, of motile cells. They are believed to be the actual motor which pulls the cell forward during the process of cell migration. The tip of the lamellipodium is the site where exocytosis occurs in migrating mammalian cells as part of their clathrin-mediated endocytic cycle. This, together with actin-polymerisation there, helps extend the lamella forward and thus advance the cell's front. It thus acts as a steering device for cells in the process of chemotaxis. It is also the site from which particles or aggregates attached to the cell surface migrate in a process known as cap formation. Structure Structurally, the barbed ends of the microfilaments (localized actin monomers in an ATP-bound form) face the "seeking" edge of the cell, while the pointed ends (localized actin monomers in an ADP-bound form) face the lamella behind. This creates treadmilling throughout the lamellipodium, which aids in the retrograde flow of particles thr
https://en.wikipedia.org/wiki/Cycle%20stealing
In computing, traditionally cycle stealing is a method of accessing computer memory (RAM) or bus without interfering with the CPU. It is similar to direct memory access (DMA) for allowing I/O controllers to read or write RAM without CPU intervention. Clever exploitation of specific CPU or bus timings can permit the CPU to run at full speed without any delay if external devices access memory not actively participating in the CPU's current activity and complete the operations before any possible CPU conflict. Cycle stealing was common in older platforms, first on supercomputers which used complex systems to time their memory access, and later on early microcomputers where cycle stealing was used both for peripherals as well as display drivers. It is more difficult to implement in modern platforms because there are often several layers of memory running at different speeds, and access is often mediated by the memory management unit. In the cases where the functionality is needed, modern systems often use dual-port RAM which allows access by two systems, but this tends to be expensive. In older references, the term is also used to describe traditional DMA systems where the CPU stops during memory transfers. In this case the device is stealing cycles from the CPU, so it is the opposite sense of the more modern usage. In the smaller models of the IBM System/360 and System/370, the control store contains microcode for both the processor architecture and the channel architecture. When a channel needs service, the hardware steals cycles from the CPU microcode in order to run the channel microcode. Common implementations Some processors were designed to allow cycle stealing, or at least supported it easily. This was the case for the Motorola 6800 and MOS 6502 systems due to a design feature which meant the CPU only accessed memory every other clock cycle. Using RAM that was running twice as fast as the CPU clock allowed a second system to interleave its accesses between t
https://en.wikipedia.org/wiki/Model%20elimination
Model elimination is the name attached to a pair of proof procedures invented by Donald W. Loveland, the first of which was published in 1968 in the Journal of the ACM. Their primary purpose is to carry out automated theorem proving, though they can readily be extended to logic programming, including the more general disjunctive logic programming. Model elimination is closely related to resolution while also bearing characteristics of a tableaux method. It is a progenitor of the SLD resolution procedure used in the Prolog logic programming language. While somewhat eclipsed by attention to, and progress in, resolution theorem provers, model elimination has continued to attract the attention of researchers and software developers. Today there are several theorem provers under active development that are based on the model elimination procedure.
https://en.wikipedia.org/wiki/Stochastic%20modelling%20%28insurance%29
This page is concerned with stochastic modelling as applied to the insurance industry. For other stochastic modelling applications, please see Monte Carlo method and Stochastic asset models. For a mathematical definition, please see Stochastic process. "Stochastic" means being or having a random variable. A stochastic model is a tool for estimating probability distributions of potential outcomes by allowing for random variation in one or more inputs over time. The random variation is usually based on fluctuations observed in historical data for a selected period using standard time-series techniques. Distributions of potential outcomes are derived from a large number of simulations (stochastic projections) which reflect the random variation in the input(s). Its application initially started in physics. It is now being applied in engineering, life sciences, social sciences, and finance. See also Economic capital. Valuation Like any other company, an insurer has to show that its assets exceed its liabilities to be solvent. In the insurance industry, however, assets and liabilities are not known entities. They depend on how many policies result in claims, inflation from now until the claim, investment returns during that period, and so on. So the valuation of an insurer involves a set of projections, looking at what is expected to happen, and thus coming up with the best estimate for assets and liabilities, and therefore for the company's level of solvency. Deterministic approach The simplest way of doing this, and indeed the primary method used, is to look at best estimates. The projections in financial analysis usually use the most likely rate of claim, the most likely investment return, the most likely rate of inflation, and so on. The projections in engineering analysis usually use both the most likely rate and the most critical rate. The result provides a point estimate - the best single estimate of what the company's current solvency position is, or m
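A minimal sketch of the stochastic approach in Python with numpy; the claim and return distributions and every parameter value are invented purely for illustration, not taken from any real portfolio:

import numpy as np

rng = np.random.default_rng(seed=1)

def simulate_surplus(n_sims=10_000, assets=120.0):
    # Project one year: grow assets by a random return and subtract a crude
    # aggregate-claims amount (random claim count times a random severity).
    returns = rng.normal(loc=0.04, scale=0.08, size=n_sims)
    n_claims = rng.poisson(lam=50, size=n_sims)
    severity = rng.lognormal(mean=0.0, sigma=1.0, size=n_sims)
    liabilities = n_claims * severity
    return assets * (1 + returns) - liabilities

surplus = simulate_surplus()
print("best estimate of surplus:", surplus.mean())
print("probability of insolvency:", (surplus < 0).mean())
print("99.5th-percentile loss:", np.percentile(-surplus, 99.5))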
https://en.wikipedia.org/wiki/Glycyrrhizol
Glycyrrhizol A is a prenylated pterocarpan and an isoflavonoid derivative. It is a compound isolated from the root of the Chinese licorice plant (Glycyrrhiza uralensis). It has shown antibacterial properties in vitro. In one study, the strongest antibacterial activity was observed against Streptococcus mutans, an organism known to cause tooth decay in humans.
https://en.wikipedia.org/wiki/GpsOne
gpsOne is the brand name for a cellphone chipset manufactured by Qualcomm for mobile phone tracking. It uses A-GPS or Assisted-GPS to locate the phone more quickly, accurately and reliably than by GPS alone, especially in places with poor GPS reception. Current uses gpsOne is primarily used today for Enhanced-911 E911 service, allowing a cell phone to relay its location to emergency dispatchers, thus overcoming one of the traditional shortcomings of cellular phone technology. Using a combination of GPS satellite signals and the cell sites themselves, gpsOne plots the location with greater accuracy than traditional GPS systems in areas where satellite reception is problematic due to buildings or terrain. Geotagging - addition of location information to the pictures taken with a camera phone. Location-based information delivery, (i.e. local weather and traffic alerts). Verizon Wireless uses gpsOne to support its VZ Navigator automotive navigation system. Verizon disables gpsOne in some phones for other applications as compared to AT&T and T-Mobile. gpsOne in other systems besides Verizon can be used with any third-party applications. Future uses Some vendors are also looking at GPS phone technology as a method of implementing location-based solutions, such as: Employers can track vehicles or employees, allowing quick response from the nearest representative. Restaurants, clubs, theatres and other venues could relay SMS special offers to patrons within a certain range. When using a phone as a 'wallet' and making e-payments, the user's location can be verified as an additional layer of security against cloning. For example, John Doe in AverageTown USA is most likely not purchasing a candy bar from a machine at LAX if he was logged paying for the subway token in NYC, and calling his wife from the Empire State Building. Location-based games. Functions gpsOne can operate in four modes: Standalone - The handset has no connection to the network, and uses o
https://en.wikipedia.org/wiki/Indiscernibles
In mathematical logic, indiscernibles are objects that cannot be distinguished by any property or relation defined by a formula. Usually only first-order formulas are considered. Examples If a, b, and c are distinct and {a, b, c} is a set of indiscernibles, then, for example, for each binary formula $\varphi$, we must have $\varphi(a,b) \Leftrightarrow \varphi(b,a) \Leftrightarrow \varphi(a,c) \Leftrightarrow \varphi(c,a) \Leftrightarrow \varphi(b,c) \Leftrightarrow \varphi(c,b)$. Historically, the identity of indiscernibles was one of the laws of thought of Gottfried Leibniz. Generalizations In some contexts one considers the more general notion of order-indiscernibles, and the term sequence of indiscernibles often refers implicitly to this weaker notion. In our example of binary formulas, to say that the triple (a, b, c) of distinct elements is a sequence of indiscernibles implies only $\varphi(a,b) \Leftrightarrow \varphi(a,c) \Leftrightarrow \varphi(b,c)$, for pairs taken in increasing order. Applications Order-indiscernibles feature prominently in the theory of Ramsey cardinals, Erdős cardinals, and zero sharp. See also Identity of indiscernibles Rough set
https://en.wikipedia.org/wiki/System%20Contention%20Scope
In computer science, system contention scope is one of two thread-scheduling schemes used in operating systems. This scheme is used by the kernel to decide which kernel-level thread to schedule onto a CPU, wherein all threads (as opposed to only user-level threads, as in the process contention scope scheme) in the system compete for the CPU. Operating systems that use only the one-to-one model, such as Windows, Linux, and Solaris, schedule threads using only system contention scope.
https://en.wikipedia.org/wiki/Hexadimethrine%20bromide
Hexadimethrine bromide (commercial brand name Polybrene) is a cationic polymer with several uses. Currently, it is primarily used to increase the efficiency of transduction of certain cells with retrovirus in cell culture. Hexadimethrine bromide acts by neutralizing the charge repulsion between virions and sialic acid on the cell surface. Use of Polybrene can improve transduction efficiency 100–1000-fold, although it can be toxic to some cell types. Polybrene in combination with DMSO shock is used to transfect some cell types such as NIH-3T3 and CHO. It has other uses, including a role in protein sequencing. Hexadimethrine bromide also reverses heparin anticoagulation during open-heart surgery, and it was the original reversal agent used in the 1950s and 1960s. It was replaced by protamine sulfate in 1969, after it was shown that hexadimethrine bromide could potentially cause kidney failure in dogs when used in doses in excess of its therapeutic range. It is still used as an alternative to protamine sulfate for patients who are sensitive to protamine, and at least one surgical center has gone back to using it as its standard reversal agent, since protamine sulfate causes at least a mild hypotensive reaction in most or all patients. Hexadimethrine bromide is also used in enzyme kinetic assays in order to reduce spontaneous activation of zymogens that are prone to auto-activation.
https://en.wikipedia.org/wiki/Gentzen%27s%20consistency%20proof
Gentzen's consistency proof is a result of proof theory in mathematical logic, published by Gerhard Gentzen in 1936. It shows that the Peano axioms of first-order arithmetic do not contain a contradiction (i.e. are "consistent"), as long as a certain other system used in the proof does not contain any contradictions either. This other system, today called "primitive recursive arithmetic with the additional principle of quantifier-free transfinite induction up to the ordinal ε0", is neither weaker nor stronger than the system of Peano axioms. Gentzen argued that it avoids the questionable modes of inference contained in Peano arithmetic and that its consistency is therefore less controversial. Gentzen's theorem Gentzen's theorem is concerned with first-order arithmetic: the theory of the natural numbers, including their addition and multiplication, axiomatized by the first-order Peano axioms. This is a "first-order" theory: the quantifiers extend over natural numbers, but not over sets or functions of natural numbers. The theory is strong enough to describe recursively defined integer functions such as exponentiation, factorials or the Fibonacci sequence. Gentzen showed that the consistency of the first-order Peano axioms is provable over the base theory of primitive recursive arithmetic with the additional principle of quantifier-free transfinite induction up to the ordinal ε0. Primitive recursive arithmetic is a much simplified form of arithmetic that is rather uncontroversial. The additional principle means, informally, that there is a well-ordering on the set of finite rooted trees. Formally, ε0 is the first ordinal such that $\omega^{\varepsilon_0} = \varepsilon_0$, i.e. the limit of the sequence $\omega, \omega^{\omega}, \omega^{\omega^{\omega}}, \ldots$ It is a countable ordinal much smaller than large countable ordinals. To express ordinals in the language of arithmetic, an ordinal notation is needed, i.e. a way to assign natural numbers to ordinals less than ε0. This can be done in various ways, one example provided by Cantor's normal form theorem.
https://en.wikipedia.org/wiki/Stone%27s%20method
In numerical analysis, Stone's method, also known as the strongly implicit procedure or SIP, is an algorithm for solving a sparse linear system of equations. The method uses an incomplete LU decomposition, which approximates the exact LU decomposition, to get an iterative solution of the problem. The method is named after Harold S. Stone, who proposed it in 1968. The LU decomposition is an excellent general-purpose linear equation solver. Its biggest disadvantage is that it fails to take advantage of the sparsity of the coefficient matrix. The LU decomposition of a sparse matrix is usually not sparse; thus, for a large system of equations, LU decomposition may require a prohibitive amount of memory and arithmetical operations. In preconditioned iterative methods, if the preconditioner matrix M is a good approximation of the coefficient matrix A then the convergence is faster. This leads to the idea of using an approximate LU factorization of A as the iteration matrix M. A version of the incomplete lower-upper decomposition method was proposed by Stone in 1968. This method is designed for systems of equations arising from the discretisation of partial differential equations and was first used for a pentadiagonal system of equations obtained while solving an elliptic partial differential equation in a two-dimensional space by a finite difference method. The approximate LU decomposition was sought in the same pentadiagonal form as the original matrix (three diagonals for L and three diagonals for U), as the best match of the seven possible equations for the five unknowns in each row of the matrix. Algorithm method stone is For the linear system Ax = b calculate the incomplete factorization M = LU ≈ A set a guess x(0) and compute the residual r(0) = b − Ax(0) while (the norm of r(n) is not small enough) do evaluate the new right-hand side r(n) = b − Ax(n) solve Ly = r(n) by forward substitution solve Uδ = y by back substitution set x(n+1) = x(n) + δ end while
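The loop above can be sketched in a few lines of Python. This uses scipy's generic incomplete-LU routine (spilu) in place of Stone's specific pentadiagonal construction, so it illustrates only the preconditioned iteration, under the assumption that scipy is available and using a made-up pentadiagonal test matrix:

import numpy as np
import scipy.sparse as sparse
from scipy.sparse.linalg import spilu

n = 10                                              # so the matrix is 100 x 100
A = sparse.diags([-1.0, -1.0, 4.0, -1.0, -1.0],     # pentadiagonal, diagonally dominant
                 [-n, -1, 0, 1, n],
                 shape=(n * n, n * n), format="csc")
b = np.ones(n * n)

M = spilu(A, drop_tol=1e-4)        # incomplete LU factors playing the role of M = LU
x = np.zeros_like(b)               # initial guess x(0)
for _ in range(100):
    r = b - A @ x                  # new right-hand side (the residual)
    if np.linalg.norm(r) < 1e-10:
        break
    x += M.solve(r)                # forward and back substitution with L and U
print("final residual norm:", np.linalg.norm(b - A @ x))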
https://en.wikipedia.org/wiki/Scoring%20rule
In decision theory, a scoring rule provides a summary measure for the evaluation of probabilistic predictions or forecasts. It is applicable to tasks in which predictions assign probabilities to events, i.e. one issues a probability distribution as prediction. This includes probabilistic classification of a set of mutually exclusive outcomes or classes. On the other hand, a scoring function provides a summary measure for the evaluation of point predictions, i.e. one predicts a property or functional $T(F)$, like the expectation or the median. Scoring rules and scoring functions can be thought of as "cost functions" or "loss functions". They are evaluated as the empirical mean over a given sample, simply called the score. Scores of different predictions or models can then be compared to conclude which model is best. If a cost is levied in proportion to a proper scoring rule, the minimal expected cost corresponds to reporting the true set of probabilities. Proper scoring rules are used in meteorology, finance, and pattern classification where a forecaster or algorithm will attempt to minimize the average score to yield refined, calibrated probabilities (i.e. accurate probabilities). Motivation Since the metrics in Evaluation of binary classifiers do not evaluate calibration, scoring rules which can do so are needed. These scoring rules can be used as loss functions in empirical risk minimization. Definition Consider a sample space $\Omega$, a σ-algebra $\mathcal{A}$ of subsets of $\Omega$ and a convex class $\mathcal{F}$ of probability measures on $(\Omega, \mathcal{A})$. A function defined on $\Omega$ and taking values in the extended real line, $\overline{\mathbb{R}} = [-\infty, \infty]$, is $\mathcal{F}$-quasi-integrable if it is measurable with respect to $\mathcal{A}$ and is quasi-integrable with respect to all $F \in \mathcal{F}$. Probabilistic forecast A probabilistic forecast is any probability measure $F \in \mathcal{F}$. Scoring rule A scoring rule is any extended real-valued function $S : \mathcal{F} \times \Omega \to \overline{\mathbb{R}}$ such that $S(F, \cdot)$ is $\mathcal{F}$-quasi-integrable for all $F \in \mathcal{F}$. $S(F, y)$ represents the loss or penalty when the forecast $F$ is issued and the observation $y$ materializes. Point fore
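As a concrete instance, the Brier score, a standard proper scoring rule for binary outcomes, evaluated as the empirical mean over a sample as described above; a small Python sketch with made-up forecasts:

def brier_score(forecast_probs, outcomes):
    # Mean squared difference between forecast probability and the 0/1 outcome.
    # Lower is better; in expectation it is minimized by reporting true probabilities.
    return sum((p - y) ** 2 for p, y in zip(forecast_probs, outcomes)) / len(outcomes)

# Two forecasters predicting rain on the same four days (1 = it rained).
outcomes = [1, 0, 1, 1]
print(brier_score([0.9, 0.2, 0.8, 0.7], outcomes))   # sharp, well calibrated: 0.045
print(brier_score([0.5, 0.5, 0.5, 0.5], outcomes))   # uninformative forecast: 0.25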
https://en.wikipedia.org/wiki/Optical%20transfer%20function
The optical transfer function (OTF) of an optical system such as a camera, microscope, human eye, or projector specifies how different spatial frequencies are captured or transmitted. It is used by optical engineers to describe how the optics project light from the object or scene onto a photographic film, detector array, retina, screen, or simply the next item in the optical transmission chain. A variant, the modulation transfer function (MTF), neglects phase effects, but is equivalent to the OTF in many situations. Either transfer function specifies the response to a periodic sine-wave pattern passing through the lens system, as a function of its spatial frequency or period, and its orientation. Formally, the OTF is defined as the Fourier transform of the point spread function (PSF, that is, the impulse response of the optics, the image of a point source). As a Fourier transform, the OTF is complex-valued; but it will be real-valued in the common case of a PSF that is symmetric about its center. The MTF is formally defined as the magnitude (absolute value) of the complex OTF. The image on the right shows the optical transfer functions for two different optical systems in panels (a) and (d). The former corresponds to the ideal, diffraction-limited, imaging system with a circular pupil. Its transfer function decreases approximately gradually with spatial frequency until it reaches the diffraction-limit, in this case at 500 cycles per millimeter or a period of 2 μm. Since periodic features as small as this period are captured by this imaging system, it could be said that its resolution is 2 μm. Panel (d) shows an optical system that is out of focus. This leads to a sharp reduction in contrast compared to the diffraction-limited imaging system. It can be seen that the contrast is zero around 250 cycles/mm, or periods of 4 μm. This explains why the images for the out-of-focus system (e,f) are more blurry than those of the diffraction-limited system (b,c). Note that a
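Since the OTF is the Fourier transform of the PSF and the MTF is its magnitude, both can be computed in a few lines. A one-dimensional numpy sketch, with an arbitrary Gaussian PSF standing in for a measured one (all numbers are illustrative):

import numpy as np

dx = 0.1e-3                                   # sample spacing: 0.1 micrometre, in mm
x = np.arange(-512, 512) * dx
psf = np.exp(-x**2 / (2 * (0.5e-3) ** 2))     # Gaussian PSF, sigma = 0.5 micrometre
psf /= psf.sum()                              # normalise so that OTF(0) = 1

otf = np.fft.fft(psf)                         # OTF: Fourier transform of the PSF
mtf = np.abs(otf)                             # MTF: magnitude of the OTF
freqs = np.fft.fftfreq(x.size, d=dx)          # spatial frequencies in cycles/mm

for f_target in (100, 250, 500):              # report contrast at a few frequencies
    i = np.argmin(np.abs(freqs - f_target))
    print(f"{freqs[i]:7.1f} cycles/mm  MTF = {mtf[i]:.3f}")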
https://en.wikipedia.org/wiki/List%20of%20edible%20flowers
This is a list of edible flowers. See also List of culinary herbs and spices List of edible nuts Flower Edible flowers List of useful plants
https://en.wikipedia.org/wiki/Monkey%20patch
Monkey patching is a technique used to dynamically update the behavior of a piece of code at run-time. A monkey patch (also spelled monkey-patch, MonkeyPatch) is a way to extend or modify the runtime code of dynamic languages (e.g. Smalltalk, JavaScript, Objective-C, Ruby, Perl, Python, Groovy, etc.) without altering the original source code. Etymology The term monkey patch seems to have come from an earlier term, guerrilla patch, which referred to changing code sneakily – and possibly incompatibly with other such patches – at runtime. The word guerrilla, nearly homophonous with gorilla, became monkey, possibly to make the patch sound less intimidating. An alternative etymology is that it refers to “monkeying about” with the code (messing with it). Despite the name's suggestion, the "monkey patch" is sometimes the official method of extending a program. For example, web browsers such as Firefox and Internet Explorer used to encourage this, although modern browsers (including Firefox) now have an official extensions system. Definitions The definition of the term varies depending upon the community using it. In Ruby, Python, and many other dynamic programming languages, the term monkey patch only refers to dynamic modifications of a class or module at runtime, motivated by the intent to patch existing third-party code as a workaround to a bug or feature which does not act as desired. Other forms of modifying classes at runtime have different names, based on their different intents. For example, in Zope and Plone, security patches are often delivered using dynamic class modification, but they are called hot fixes. Applications Monkey patching is used to: Replace methods / classes / attributes / functions at runtime, e.g. to stub out a function during testing; Modify/extend behaviour of a third-party product without maintaining a private copy of the source code; Apply the result of a patch at runtime to the state in memory, instead of the source code on disk; Distr
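A minimal Python illustration of the first use listed above, replacing a method at runtime to stub it out during testing; the class and function names are invented for the example:

class Sensor:
    def read(self):
        # Imagine this talks to hardware and is slow or unavailable in tests.
        raise RuntimeError("no hardware attached")

def fake_read(self):
    return 42.0                     # deterministic stub value

# Monkey patch: rebind the method on the class at runtime, without editing Sensor's source.
Sensor.read = fake_read

s = Sensor()
assert s.read() == 42.0
print("patched Sensor.read() ->", s.read())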
https://en.wikipedia.org/wiki/Trypsinization
Trypsinization is the process of cell dissociation using trypsin, a proteolytic enzyme which breaks down proteins, to dissociate adherent cells from the vessel in which they are being cultured. When added to cell culture, trypsin breaks down the proteins that enable the cells to adhere to the vessel. Trypsinization is often used to pass cells to a new vessel. When the trypsinization process is complete the cells will be in suspension and appear rounded. For experimental purposes, cells are often cultivated in containers that take the form of plastic flasks or plates. In such flasks, cells are provided with a growth medium comprising the essential nutrients required for proliferation, and the cells adhere to the container and each other as they grow. This process of cell culture or tissue culture requires a method to dissociate the cells from the container and each other. Trypsin, an enzyme commonly found in the digestive tract, can be used to "digest" the proteins that facilitate adhesion to the container and between cells. Once cells have detached from their container it is necessary to deactivate the trypsin, unless the trypsin is synthetic, as cell surface proteins will also be cleaved over time and this will affect cell functioning. Serum can be used to inactivate trypsin, as it contains protease inhibitors. Because of the presence of these inhibitors, the serum must be removed before treatment of a growth vessel with trypsin and must not be added again to the growth vessel until cells have detached from their growth surface - this detachment can be confirmed by visual observation using a microscope. Trypsinization is often used to permit passage of adherent cells to a new container, observation for experimentation, or reduction of the degree of confluency in a culture flask through the removal of a percentage of the cells.
https://en.wikipedia.org/wiki/Identity%20transform
The identity transform is a data transformation that copies the source data into the destination data without change. The identity transformation is considered an essential process in creating a reusable transformation library. By creating a library of variations of the base identity transformation, a variety of data transformation filters can be easily maintained. These filters can be chained together in a format similar to UNIX shell pipes. Examples of recursive transforms The "copy with recursion" permits, changing little portions of code, produce entire new and different output, filtering or updating the input. Understanding the "identity by recursion" we can understand the filters. Using XSLT The most frequently cited example of the identity transform (for XSLT version 1.0) is the "copy.xsl" transform as expressed in XSLT. This transformation uses the xsl:copy command to perform the identity transformation: <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="@*|node()"> <xsl:copy> <xsl:apply-templates select="@*|node()"/> </xsl:copy> </xsl:template> </xsl:stylesheet> This template works by matching all attributes (@*) and other nodes (node()), copying each node matched, then applying the identity transformation to all attributes and child nodes of the context node. This recursively descends the element tree and outputs all structures in the same structure they were found in the original file, within the limitations of what information is considered significant in the XPath data model. Since node() matches text, processing instructions, root, and comments, as well as elements, all XML nodes are copied. A more explicit version of the identity transform is: <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="@*|*|processing-instruction()|comment()"> <xsl:copy> <xsl:apply-templates select="*|@*|text()|processing-instruction()|c
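For completeness, one way the first stylesheet above can be applied from Python, assuming the third-party lxml library is available; the input document is a made-up two-element example:

from lxml import etree

identity_xslt = etree.XML("""\
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
</xsl:stylesheet>""")

transform = etree.XSLT(identity_xslt)
source = etree.XML('<catalog><book id="1">A Book</book></catalog>')
result = transform(source)
print(etree.tostring(result, pretty_print=True).decode())   # same structure copied through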
https://en.wikipedia.org/wiki/Sustainable%20habitat
A sustainable habitat is an ecosystem that produces food and shelter for people and other organisms, without resource depletion and in such a way that no external waste is produced. Thus the habitat can continue into the future time without external infusions of resources. Such a sustainable habitat may evolve naturally or be produced under the influence of man. A sustainable habitat that is created and designed by human intelligence will mimic nature, if it is to be successful. Everything within it is connected to a complex array of organisms, physical resources, and functions. Organisms from many different biomes can be brought together to fulfill various ecological niches. Definition A sustainable habitat is one that achieves stability between the economic and social development of human habitats together with the defense of the environment, shelter, basic services, social infrastructure, and transportation. A sustainable habitat is required to make sure that one species' waste ends up being the energy or food source for another species. It involves the preservation of the ecological balance in terms of a symbiotic perspective on urban development while developing urban extensions of existing towns. The term often refers to sustainable human habitats, which typically involves some form of green building or environmental planning. History In creating sustainable habitats, environmental scientists, designers, engineers and architects must not consider any elements as a waste product to be disposed of somewhere off site, but as a nutrient stream for another process to feed on. Researching ways to interconnect waste streams to production creates a more sustainable society by minimizing pollution. Sustainability of marine ecosystems is a concern. Rigorous fishing has decreased top trophic levels and affected the ecological dynamics and resilience of fisheries by reducing the numbers and lengths of food webs. Historically intense commercial and rising recreational
https://en.wikipedia.org/wiki/Nachbin%27s%20theorem
In mathematics, in the area of complex analysis, Nachbin's theorem (named after Leopoldo Nachbin) is commonly used to establish a bound on the growth rates for an analytic function. This article provides a brief review of growth rates, including the idea of a function of exponential type. Classification of growth rates based on type helps provide a finer tool than big O or Landau notation, since a number of theorems about the analytic structure of the bounded function and its integral transforms can be stated. In particular, Nachbin's theorem may be used to give the domain of convergence of the generalized Borel transform, given below. Exponential type A function $f(z)$ defined on the complex plane is said to be of exponential type if there exist constants $M$ and $\tau$ such that $|f(re^{i\theta})| \le M e^{\tau r}$ in the limit of $r \to \infty$. Here, the complex variable $z$ was written as $z = re^{i\theta}$ to emphasize that the limit must hold in all directions $\theta$. Letting $\tau$ stand for the infimum of all such constants, one then says that the function $f$ is of exponential type $\tau$. For example, let $f(z) = \sin(\pi z)$. Then one says that $\sin(\pi z)$ is of exponential type $\pi$, since $\pi$ is the smallest number that bounds the growth of $\sin(\pi z)$ along the imaginary axis. So, for this example, Carlson's theorem cannot apply, as it requires functions of exponential type less than $\pi$. Ψ type Bounding may be defined for other functions besides the exponential function. In general, a function $\Psi(t)$ is a comparison function if it has a series $\Psi(t) = \sum_{n=0}^{\infty} \Psi_n t^n$ with $\Psi_n > 0$ for all $n$, and $\lim_{n \to \infty} \Psi_{n+1}/\Psi_n = 0$. Comparison functions are necessarily entire, which follows from the ratio test. If $\Psi(t)$ is such a comparison function, one then says that $f$ is of $\Psi$-type if there exist constants $K$ and $\tau$ such that $|f(z)| \le K \Psi(\tau |z|)$ as $|z| \to \infty$. If $\tau$ is the infimum of all such constants, one says that $f$ is of $\Psi$-type $\tau$. Nachbin's theorem states that a function $f(z)$ with the series $f(z) = \sum_{n=0}^{\infty} f_n z^n$ is of $\Psi$-type $\tau$ if and only if $\limsup_{n \to \infty} |f_n/\Psi_n|^{1/n} = \tau$. Borel transform Nachbin's theorem has immediate applications in Cauchy theorem-like situations, and for integral transforms. For example, the generalized Borel transform is given by $F(w) = \sum_{n=0}^{\infty} \frac{f_n}{\Psi_n} \frac{1}{w^{n+1}}$. If $f$ is of $\Psi$-type $\tau$, then the exterio
https://en.wikipedia.org/wiki/Exponential%20type
In complex analysis, a branch of mathematics, a holomorphic function is said to be of exponential type C if its growth is bounded by the exponential function $e^{C|z|}$ for some real-valued constant $C$ as $|z| \to \infty$. When a function is bounded in this way, it is then possible to express it as certain kinds of convergent summations over a series of other complex functions, as well as to understand when it is possible to apply techniques such as Borel summation, or, for example, to apply the Mellin transform, or to perform approximations using the Euler–Maclaurin formula. The general case is handled by Nachbin's theorem, which defines the analogous notion of $\Psi$-type for a general comparison function $\Psi(z)$ as opposed to $e^{z}$. Basic idea A function $f(z)$ defined on the complex plane is said to be of exponential type if there exist real-valued constants $M$ and $\tau$ such that $|f(re^{i\theta})| \le M e^{\tau r}$ in the limit of $r \to \infty$. Here, the complex variable $z$ was written as $z = re^{i\theta}$ to emphasize that the limit must hold in all directions $\theta$. Letting $\tau$ stand for the infimum of all such constants, one then says that the function $f$ is of exponential type $\tau$. For example, let $f(z) = \sin(\pi z)$. Then one says that $\sin(\pi z)$ is of exponential type $\pi$, since $\pi$ is the smallest number that bounds the growth of $\sin(\pi z)$ along the imaginary axis. So, for this example, Carlson's theorem cannot apply, as it requires functions of exponential type less than $\pi$. Similarly, the Euler–Maclaurin formula cannot be applied either, as it, too, expresses a theorem ultimately anchored in the theory of finite differences. Formal definition A holomorphic function $F(z)$ is said to be of exponential type $\sigma > 0$ if for every $\varepsilon > 0$ there exists a real-valued constant $A_\varepsilon$ such that $|F(z)| \le A_\varepsilon e^{(\sigma + \varepsilon)|z|}$ for $|z|$ sufficiently large, where $z = re^{i\theta}$. We say $F$ is of exponential type if $F$ is of exponential type $\sigma$ for some $\sigma > 0$. The number $\tau(F) = \sigma = \limsup_{|z| \to \infty} |z|^{-1} \log |F(z)|$ is the exponential type of $F$. The limit superior here means the limit of the supremum of the ratio outside a given radius as the radius goes to infinity. This is also the limit superior of the maximum of the ratio at a given radius as the radius goes to infinity. The limit superior may exist even
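The "limit superior of the maximum of the ratio at a given radius" can be probed numerically. A small numpy sketch using f(z) = sin(πz), whose estimated type should approach π ≈ 3.14159 as the radius grows:

import numpy as np

def estimated_type(f, radius, n_angles=2000):
    # Maximum of log|f(z)| over the circle |z| = radius, divided by the radius.
    theta = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    z = radius * np.exp(1j * theta)
    return np.log(np.abs(f(z))).max() / radius

f = lambda z: np.sin(np.pi * z)
for r in (5.0, 20.0, 80.0):
    print(r, estimated_type(f, r))    # tends toward pi as r grows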
https://en.wikipedia.org/wiki/Relaxation%20labelling
Relaxation labelling is an image treatment methodology. Its goal is to associate a label to the pixels of a given image or nodes of a given graph. See also Digital image processing
https://en.wikipedia.org/wiki/Graffiti%20%28program%29
Graffiti is a computer program which makes conjectures in various subfields of mathematics (particularly graph theory) and chemistry, but can be adapted to other fields. It was written by Siemion Fajtlowicz and Ermelinda DeLaViña at the University of Houston. Research on conjectures produced by Graffiti has led to over 60 publications by other mathematicians.
https://en.wikipedia.org/wiki/A%20capriccio
A capriccio (Italian: "following one's fancy") is a tempo marking indicating a free and capricious approach to the tempo (and possibly the style) of the piece. This marking will usually modify another, such as lento a capriccio, often used in the Hungarian Rhapsodies of Franz Liszt. Perhaps the most famous piece to use the term is Ludwig van Beethoven's Rondò a capriccio (Op. 129), better known as Rage Over a Lost Penny. See also Capriccio (music) External links
https://en.wikipedia.org/wiki/D3O
D3O is an ingredient brand specialising in advanced rate-sensitive impact protection technologies, materials and products. It comprises a portfolio of more than 30 technologies and materials including set foams, formable foams, set elastomers and formable elastomers. D3O is an engineering, design and technology-focused company based in London, UK, with offices in China and the US. D3O is sold in more than 50 countries. It is used in sports and motorcycle gear; protective cases for consumer electronics including phones; industrial workwear; and military protection including helmet pads and limb protectors. History In 1999, the materials scientists Richard Palmer and Philip Green experimented with a dilatant liquid with non-Newtonian properties. Unlike water, it was free flowing when stationary but became instantly rigid upon impact. As keen snowboarders, Palmer and Green drew inspiration from snow and decided to replicate its matrix-like quality to develop a flexible material that incorporated the dilatant fluid. After experimenting with numerous materials and formulas, they invented a flexible, pliable material that locked together and solidified in the event of a collision. When incorporated into clothing, the material moved with the wearer while providing comprehensive protection. Palmer and Green successfully filed a patent application, which they used as the foundation for commercialising their invention and setting up a business in 1999. D3O® was used commercially for the first time by the United States Ski Team and the Canada ski team at the 2006 Olympic Winter Games. D3O® first entered the motorcycle market in 2009 when the ingredient was incorporated into CE-certified armour for the apparel brand Firstgear. Philip Green left D3O in 2006, and in 2009 founder Richard Palmer brought in Stuart Sawyer as interim CEO. Palmer took a sabbatical in 2010 and left the business in 2011, at which point executive leadership was officially handed over to Saw
https://en.wikipedia.org/wiki/PAH%20world%20hypothesis
The PAH world hypothesis is a speculative hypothesis that proposes that polycyclic aromatic hydrocarbons (PAHs), known to be abundant in the universe, including in comets, and assumed to be abundant in the primordial soup of the early Earth, played a major role in the origin of life by mediating the synthesis of RNA molecules, leading into the RNA world. However, as yet, the hypothesis is untested. Background The 1952 Miller–Urey experiment demonstrated the synthesis of organic compounds, such as amino acids, formaldehyde and sugars, from the original inorganic precursors the researchers presumed to have been present in the primordial soup (but is no longer considered likely). This experiment inspired many others. In 1961, Joan Oró found that the nucleotide base adenine could be made from hydrogen cyanide (HCN) and ammonia in a water solution. Experiments conducted later showed that the other RNA and DNA nucleobases could be obtained through simulated prebiotic chemistry with a reducing atmosphere. The RNA world hypothesis shows how RNA can become its own catalyst (a ribozyme). In between there are some missing steps such as how the first RNA molecules could be formed. The PAH world hypothesis was proposed by Simon Nicholas Platts in May 2004 to try to fill in this missing step. A more thoroughly elaborated idea has been published by Ehrenfreund et al. Polycyclic aromatic hydrocarbons Polycyclic aromatic hydrocarbons are the most common and abundant of the known polyatomic molecules in the visible universe, and are considered a likely constituent of the primordial sea. PAHs, along with fullerenes (or "buckyballs"), have been recently detected in nebulae. In April 2019, scientists, working with the Hubble Space Telescope, reported the confirmed detection of the large and complex ionized molecules of buckminsterfullerene (C60) in the interstellar medium spaces between the stars. (Fullerenes are also implicated in the origin of life; according to astronomer Letizi
https://en.wikipedia.org/wiki/Shooting%20reconstruction
Shooting incident reconstruction is the examination of the physical evidence recovered or documented at the scene of a shooting. Shooting reconstruction may also include the laboratory analysis of the evidence recovered at the scene. The goal is an attempt to gain an understanding of what may or may not have happened during the incident. Once all reasonable explanations have been considered, one can evaluate the significance of witness or suspect accounts of the incident. In many cases valuable evidence necessary for reconstruction analysis exists at the crime scene. Should this evidence go undocumented or unrecovered during the initial processing of the shooting scene, the information it can give investigators may be lost forever. Poor shooting incident processing cannot be compensated for by excellent laboratory work. There are many questions that can be answered from the proper reconstruction of a shooting incident. Some of the questions typically answered by a shooting reconstruction investigation include (but are not limited to) the distance of the shooter from the target, the path of the bullet(s), the number of shots fired, and possibly the sequence of multiple discharges at a shooting incident. The Association of Firearm and Tool Mark Examiners is an international non-profit organization dedicated to the advancement of firearm and tool mark identification, including shooting reconstruction.
https://en.wikipedia.org/wiki/Lenthionine
Lenthionine is a cyclic organosulfur compound found in shiitake mushrooms, onions, and garlic, and it is partly responsible for their flavor. The mechanism of its formation is unclear, but it likely involves the enzyme C–S lyase. Preparation Lenthionine has been isolated from mushrooms by submerging them in water and allowing them to sit overnight. The mushrooms were then centrifuged and dissolved in chloroform, which was later evaporated to form a yellow oil layer. Chromatography was then used to isolate the lenthionine. Lenthionine has also been prepared in situ by bubbling hydrogen sulfide gas through a solution of sulfur and sodium sulfide until the pH reached 8. A large amount of dichloromethane was then added to the solution, which was stirred at room temperature until an organic layer formed that contained the lenthionine.
https://en.wikipedia.org/wiki/Avenanthramide
Avenanthramides (anthranilic acid amides, formerly called "avenalumins") are a group of phenolic alkaloids found mainly in oats (Avena sativa), but also present in white cabbage butterfly eggs (Pieris brassicae and P. rapae), and in fungus-infected carnation (Dianthus caryophyllus). A number of studies demonstrate that these natural products have anti-inflammatory, antioxidant, anti-itch, anti-irritant, and antiatherogenic activities. Oat kernel extracts with standardized levels of avenanthramides are used for skin, hair, baby, and sun care products. The name avenanthramides was coined by Collins when he reported the presence of these compounds in oat kernels. It was later found that three avenanthramides were the open-ring amides of avenalumins I, II, and III, which were previously reported as oat phytoalexins by Mayama and co-workers. History Oat has been used for personal care purposes since antiquity. Indeed, wild oats (Avena sativa) was used in skin care in Egypt and the Arabian peninsula 2000 BC. Oat baths were a common treatment of insomnia, anxiety, and skin diseases such as eczema and burns. In Roman times, its use as a medication for dermatological issues was reported by Pliny, Columella, and Theophrastus. In the 19th century, oatmeal baths were often used to treat many cutaneous conditions, especially pruritic inflammatory eruptions. In the 1930s, the literature provided further evidence about the cleansing action of oat along with its ability to relieve itching and protect skin. Colloidal oatmeal In 2003, colloidal oatmeal was officially approved as a skin protectant by the FDA. However, little thought had been given to the active ingredient in oats responsible for the anti-inflammatory effect until more attention was paid to avenanthramides, which were first isolated and characterized in the 1980s by Collins. Since then, many congeners have been characterized and purified, and it is known that avenanthramides have antioxidant, anti-inflammatory, and
https://en.wikipedia.org/wiki/Aleksandr%20Korkin
Aleksandr Nikolayevich Korkin (; – ) was a Russian mathematician. He made contributions to the development of partial differential equations, and was second only to Chebyshev among the founders of the Saint Petersburg Mathematical School. Among others, his students included Yegor Ivanovich Zolotarev. Some publications
https://en.wikipedia.org/wiki/John%20Herivel
John William Jamieson Herivel (29 August 1918 – 18 January 2011) was a British science historian and World War II codebreaker at Bletchley Park. As a codebreaker concerned with Cryptanalysis of the Enigma, Herivel is remembered chiefly for the discovery of what was soon dubbed the Herivel tip or Herivelismus. Herivelismus consisted of the idea, the Herivel tip and the method of establishing whether it applied using the Herivel square. It was based on Herivel's insight into the habits of German operators of the Enigma cipher machine that allowed Bletchley Park to easily deduce part of the daily key. For a brief but critical period after May 1940, the Herivel tip in conjunction with "cillies" (another class of operator error) was the main technique used to solve Enigma. After the war, Herivel became an academic, studying the history and philosophy of science at Queen's University Belfast, particularly Isaac Newton, Joseph Fourier, Christiaan Huygens. In 1956, he took a brief leave of absence from Queen's to work as a scholar at the Dublin Institute for Advanced Studies. In retirement, he wrote an autobiographical account of his work at Bletchley Park entitled Herivelismus and the German Military Enigma. Recruitment to Bletchley Park John Herivel was born in Belfast, and attended Methodist College Belfast from 1924 to 1936. In 1937 he was awarded a Kitchener Scholarship to study mathematics at Sidney Sussex College, Cambridge, where his supervisor was Gordon Welchman. Welchman recruited Herivel to the Government Code and Cypher School (GC&CS) at Bletchley Park. Welchman worked with Alan Turing in the newly formed Hut 6 section created to solve Army and Air Force Enigma. Herivel, then aged 21, arrived at Bletchley on 29 January 1940, and was briefed on Enigma by Alan Turing and Tony Kendrick. Enigma At the time that Herivel started work at Bletchley Park, Hut 6 was having only limited success with Enigma-enciphered messages, mostly from the Luftwaffe Enigma network
https://en.wikipedia.org/wiki/Division%20%28horticulture%29
Division, in horticulture and gardening, is a method of asexual plant propagation, where the plant (usually an herbaceous perennial) is broken up into two or more parts. Each part has an intact root and crown. The technique is of ancient origin, and has long been used to propagate bulbs such as garlic and saffron. Another type of division is through plant tissue culture. In this method, the meristem (a type of plant tissue) is divided. Overview Division is one of the three main methods used by gardeners to increase stocks of plants (the other two are seed-sowing and cuttings). Division is usually applied to mature perennial plants, but may also be used for shrubs with suckering roots, such as gaultheria, kerria and sarcococca. Annual and biennial plants do not lend themselves to this procedure, as their lifespan is too short. Practice Most perennials should be divided and replanted every few years to keep them healthy. Plants that do not have enough space between them will start to compete for resources. Additionally, plants that are too close together will stay damp longer due to poor air circulation. This can cause the leaves to develop a fungal disease. Most perennials bloom during the fall or during the spring/summer. The best time to divide a perennial is when it is not blooming. Perennials that bloom in the fall should be divided in the spring, and perennials that bloom in the spring/summer should be divided in the fall. The ideal day to divide a plant is when it is cool and there is rain in the forecast. Start by digging a circle around the plant about 4-6 inches from the base. Next, dig underneath the plant and lift it out of the hole. Use a shovel, gardening shears, or knife to physically divide the plant into multiple "divisions". This is also a good time to remove any bare patches or old growth. Each division should have a good number of healthy leaves and roots. If the division is not being replanted immediately, it should be watered and kept in a shady p
https://en.wikipedia.org/wiki/Rank%20theory%20of%20depression
Rank theory is an evolutionary theory of depression, developed by Anthony Stevens and John Price, and proposes that depression promotes the survival of genes. Depression is an adaptive response to losing status (rank) and losing confidence in the ability to regain it. The adaptive function of the depression is to change behaviour to promote survival for someone who has been defeated. According to rank theory, depression was naturally selected to allow us to accept a subordinate role. The function of this depressive adaptation is to prevent the loser from suffering further defeat in a conflict. In the face of defeat, a behavioural process swings into action which causes the individual to cease competing and reduce their ambitions. This process is involuntary and results in the loss of energy, depressed mood, sleep disturbance, poor appetite, and loss of confidence, which are typical characteristics of depression. The outward symptoms of depression (facial expressions, constant crying, etc.) signal to others that the loser is not fit to compete, and they also discourage others from attempting to restore the loser's rank. This acceptance of a lower rank would serve to stabilise an ancestral human community, promoting the survival of any individual (or individual's genes) in the community through affording protection from other human groups, retaining access to resources, and to mates. The adaptive function of accepting a lower rank is twofold: first, it ensures that the loser truly yields and does not attempt to make a comeback, and second, the loser reassures the winner that yielding has truly taken place, so that the conflict ends, with no further damage to the loser. Social harmony is then restored. Development Rank theory of depression, initially known as the 'social competition hypothesis', is based on ethological theories of signalling: in order to avoid injury, animals will perform 'appeasement displays' to demonstrate their subordination and lack of desire
https://en.wikipedia.org/wiki/Cheekwood%20Botanical%20Garden%20and%20Museum%20of%20Art
Cheekwood is a historic estate on the western edge of Nashville, Tennessee that houses the Cheekwood Estate & Gardens. Formerly the residence of Nashville's Cheek family, the Georgian-style mansion was opened as a botanical garden and art museum in 1960. History Christopher Cheek founded a wholesale grocery business in Nashville in the 1880s. His son, Leslie Cheek, joined him as a partner, and by 1915 was president of the family-owned company. Leslie's wife, Mabel Wood, was a member of a prominent Clarksville, Tennessee, family. Meanwhile, Joel Owsley Cheek, Leslie's cousin, had developed an acclaimed blend of coffee that was marketed through Nashville's finest hotel, the Maxwell House Hotel. Cheek's extended family, including Leslie and Mabel Cheek, were investors. In 1928, the Postum Cereals Company (now General Foods) purchased Maxwell House's parent company, Cheek-Neal Coffee, for more than $40 million. After the sale of the family business, Leslie Cheek bought of woodland in West Nashville for a country estate. He hired New York residential and landscape architect Bryant Fleming to design the house and gardens, and gave him full control over every detail of the project, including interior furnishings. The resulting limestone mansion and extensive formal gardens were completed in 1932. The estate design was inspired by the grand English manors of the 18th century. Leslie Cheek died just two years after moving into the mansion. Mabel Cheek and their daughter, Huldah Cheek Sharp, lived at Cheekwood until the 1950s, when Huldah Sharp and her husband offered the property as a site for a botanical garden and art museum. The Exchange Club of Nashville, the Horticultural Society of Middle Tennessee and other civic groups led the redevelopment of the property aided by funds raised from the sale of the former building of the defunct Nashville Museum of Art. The new Cheekwood museum opened in 1960. Art museum Cheekwood's art collection was founded in 1959
https://en.wikipedia.org/wiki/Dinosaur%20size
Size is an important aspect of dinosaur paleontology, of interest to both the general public and professional scientists. Dinosaurs show some of the most extreme variations in size of any land animal group, ranging from tiny hummingbirds, which can weigh as little as two grams, to the extinct titanosaurs, which could weigh as much as . The latest evidence suggests that dinosaurs' average size varied through the Triassic, early Jurassic, late Jurassic and Cretaceous periods, and dinosaurs probably only became widespread during the early or mid Jurassic. Predatory theropod dinosaurs, which occupied most terrestrial carnivore niches during the Mesozoic, most often fall into the category when sorted by estimated weight into categories based on order of magnitude, whereas recent predatory carnivoran mammals peak in the range of . The mode of Mesozoic dinosaur body masses is between one and ten metric tonnes. This contrasts sharply with the size of Cenozoic mammals, estimated by the National Museum of Natural History as about . Size estimation Scientists will probably never be certain of the largest and smallest dinosaurs. This is because only a small fraction of animals ever fossilize, and most of these remains will either never be uncovered, or will be unintentionally destroyed as a result of human activity. Of the specimens that are recovered, few are even relatively complete skeletons, and impressions of skin and other soft tissues are rarely discovered. Rebuilding a complete skeleton by comparing the size and morphology of bones to those of similar, better-known species is an inexact art (though governed by some established allometric trends), and reconstructing the muscles and other organs of the living animal is, at best, a process of educated guesswork, and never perfect. Mass estimates for dinosaurs are much more variable than length estimates given the lack of soft tissue preservation in the fossilization process. Modern mass estimation is often done with th
https://en.wikipedia.org/wiki/Smoke%20inhalation
Smoke inhalation is the breathing in of harmful fumes (produced as by-products of combusting substances) through the respiratory tract. This can cause smoke inhalation injury (subtype of acute inhalation injury) which is damage to the respiratory tract caused by chemical and/or heat exposure, as well as possible systemic toxicity after smoke inhalation. Smoke inhalation can occur from fires of various sources such as residential, vehicle, and wildfires. Morbidity and mortality rates in fire victims with burns are increased in those with smoke inhalation injury. Victims of smoke inhalation injury can present with cough, difficulty breathing, low oxygen saturation, smoke debris and/or burns on the face. Smoke inhalation injury can affect the upper respiratory tract (above the larynx), usually due to heat exposure, or the lower respiratory tract (below the larynx), usually due to exposure to toxic fumes. Initial treatment includes taking the victim away from the fire and smoke, giving 100% oxygen at a high flow through a face mask (non-rebreather if available), and checking the victim for injuries to the body. Treatment for smoke inhalation injury is largely supportive, with varying degrees of consensus on benefits of specific treatments. Epidemiology The U.S. Fire Administration reported almost 1.3 million fires in 2019 causing 3,704 deaths and almost 17,000 injuries. Residential fires were found to be most often cooking related and resulted in the highest amount of deaths when compared to other fire types such as vehicle and outdoor fires. It has been found that men have higher rates of fire-related death and injury than women do, and that African American and American Indian men have higher rates of fire-related death and injury than other ethnic and racial groups. The age group with the highest rate of death from smoke inhalation is people over 85, while the age group with the highest injury rate is people of ages 50–54. Some reports also show increased rates of
https://en.wikipedia.org/wiki/Katanin
Katanin is a microtubule-severing AAA protein. It is named after the Japanese sword called a katana. Katanin is a heterodimeric protein first discovered in sea urchins. It contains a 60 kDa ATPase subunit, encoded by KATNA1, which functions to sever microtubules. This subunit requires ATP and the presence of microtubules for activation. The second 80 kDA subunit, encoded by KATNB1, regulates the activity of the ATPase and localizes the protein to centrosomes. Electron microscopy shows that katanin forms 14–16 nm rings in its active oligomerized state on the walls of microtubules (although not around the microtubule). Mechanism and regulation of microtubule length Structural analysis using electron microscopy has revealed that microtubule protofilaments change from a straight to a curved conformation upon GTP hydrolysis of β-tubulin. However, when these protofilaments are part of a polymerized microtubule, the stabilizing interactions created by the surrounding lattice lock subunits into a straight conformation, even after GTP hydrolysis. In order to disrupt these stable interactions, katanin, once bound to ATP, oligomerizes into a ring structure on the microtubule wall - in some cases oligomerization increases the affinity of katanin for microtubules and stimulates its ATPase activity. Once this structure is formed, katanin hydrolyzes ATP, and likely undergoes a conformational change that puts mechanical strain on the tubulin subunits, which destabilizes their interactions within the microtubule lattice. The predicted conformational change also likely decreases the affinity of katanin for tubulin as well as for other katanin proteins, which leads to disassembly of the katanin ring structure, and recycling of the individual inactivated proteins. The severing of microtubules by katanin is regulated by protective microtubule-associated proteins (MAPs), and the p80 subunit (p60 severs microtubules much better in the presence of p80). These mechanisms have dif
https://en.wikipedia.org/wiki/Quick%20Wertkarte
Quick was an electronic purse system available on Austrian bank cards to allow small purchases to be made without cash. The history of the Quick system goes back to 1996. Quick was discontinued on July 31, 2017. The system was aimed at small retailers such as bakeries, cafés, drink, and parking automats (but even small discount shops such as Billa accept it) and intended for purchases of less than €400. The card was inserted into a handheld Quick reader by the merchant who enters the transaction amount for the customer. The customer then confirms the purchase by pushing a button on the keypad, the exact amount debited from the card within a few seconds. As well as the multipurpose bank card version, anonymous cards (also smart cards) are available for the use of people without bank accounts, such as children and tourists. At ATMs, one can transfer money for free between bank cards and the Quick chip (either on a standalone smart card, or contained in the bank card). The scheme was operated by Europay Austria and most of the Maestro cards in use contain Quick support, but new ones are not issued without it. See also Octopus card Moneo External links The official Quick site (in German) Quick, Austria’s electronic purse Banking in Austria Smart cards Payment cards
https://en.wikipedia.org/wiki/Thermoplastic%20olefin
Thermoplastic olefin, thermoplastic polyolefin (TPO), or olefinic thermoplastic elastomers refer to polymer/filler blends usually consisting of some fraction of a thermoplastic, an elastomer or rubber, and usually a filler. Outdoor applications such as roofing frequently contain TPO because it does not degrade under solar UV radiation, a common problem with nylons. TPO is used extensively in the automotive industry. Materials Thermoplastics Thermoplastics may include polypropylene (PP), polyethylene (PE), block copolymer polypropylene (BCPP), and others. Fillers Common fillers include, though are not restricted to talc, fiberglass, carbon fiber, wollastonite, and MOS (Metal Oxy Sulfate). Elastomers Common elastomers include ethylene propylene rubber (EPR), EPDM (EP-diene rubber), ethylene-octene (EO), ethylbenzene (EB), and styrene ethylene butadiene styrene (SEBS). Currently there are a great variety of commercially available rubbers and BCPP's. They are produced using regioselective and stereoselective catalysts known as metallocenes. The metallocene catalyst becomes embedded in the polymer and cannot be recovered. Creation Components for TPO are blended together at 210 - 270 °C under high shear. A twin screw extruder or a continuous mixer may be employed to achieve a continuous stream, or a Banbury compounder may be employed for batch production. A higher degree of mixing and dispersion is achieved in the batch process, but the superheat batch must immediately be processed through an extruder to be pelletized into a transportable intermediate. Thus batch production essentially adds an additional cost step. Structure The geometry of the metallocene catalyst will determine the sequence of chirality in the chain, as in, atactic, syndiotactic, isotactic, as well as average block length, molecular weight and distribution. These characteristics will in turn govern the microstructure of the blend. As in metal alloys the properties of a TPO product depend gr
https://en.wikipedia.org/wiki/Windows%20CardSpace
Windows CardSpace (codenamed InfoCard) is a discontinued identity selector app by Microsoft. It stores references to digital identities of the users, presenting them as visual information cards. CardSpace provides a consistent UI designed to help people to easily and securely use these identities in applications and web sites where they are accepted. Resistance to phishing attacks and adherence to Kim Cameron's "7 Laws of Identity" were goals in its design. CardSpace is a built-in component of Windows 7 and Windows Vista, and has been made available for Windows XP and Windows Server 2003 as part of the .NET Framework 3.x package. Overview When an information card-enabled application or website wishes to obtain information about the user, it requests a particular set of claims. The CardSpace UI then appears, switching the display to the CardSpace service, which displays the user's stored identities as visual cards. The user selects a card to use, and the CardSpace software contacts the issuer of the identity to obtain a digitally signed XML token that contains the requested information. CardSpace also allows users to create personal (also known as self-issued) information cards, which can contain one or more of 14 fields of identity information such as full name and address. Other transactions may require a managed information card; these are issued by a third-party identity provider that makes the claims on the person's behalf, such as a bank, employer, or a government agency. Windows CardSpace is built on top of the Web services protocol stack, an open set of XML-based protocols, including WS-Security, WS-Trust, WS-MetadataExchange and WS-SecurityPolicy. This means that any technology or platform that supports these protocols can integrate with CardSpace. To accept information cards, a web developer needs to declare an HTML <OBJECT> tag that specifies the claims the website is demanding and implement code to decrypt the returned token and extract the claim valu
https://en.wikipedia.org/wiki/Pecel
Pecel (, Javanese:ꦥꦼꦕꦼꦭ꧀) is a traditional Javanese salad with peanut sauce, usually eaten with carbs (steamed rice, lontong or ketupat). The simplicity of pecel preparation and its cheap price have contributed to its popularity throughout Java. It has become a food that represents practicality, simplicity, and travel, since the dish is often found along the train journey across Java. Pecel was introduced to Malaysia, where it is known as pecal, by Javanese immigrants. Pecel is also very popular in Suriname, where it was introduced by the Javanese Surinamese. History In Babad Tanah Jawi (circa 17th century), Ki Gede Pemanahan referred to the dish he presented to his guest, Sunan Kalijaga as "pecel-ised boiled vegetables". In Javanese language, "pecel" used to refer to the act of squeezing the water out of something. Sunan Kalijaga was not familiar with the dish as he came from northeastern part of Central Java, while the dish was native to Yogyakarta. This dish became one of the most popular Javanese dishes soon after it was introduced to other regions of Java, and the word pecel took its current meaning, "a side dish that is made of vegetables and sauce". Pecel is only one of many Javanese vegetable-based salads. It is similar to lothek, except that lothek is usually served with fried batter or tofu and uses both raw and cooked vegetables. Ingredients Main ingredients usually consist of leafy vegetables, bean sprouts (or any other plant sprouts), long beans, and cabbages. Some other types of vegetables can also be added. People may use amaranth leaves, kangkung, cassava leaves, or leaves or any other local plants that are in season. Some modern recipes will add carrots (sliced) into the mix, or replace white cabbages with the red ones to spice up the color. The sauce is made of roasted (or fried) peanut, asam jawa, coconut sugar, and other spices. It might be served thick or watery, sweet or spicy, depending on the regional variation. Pecel is usually
https://en.wikipedia.org/wiki/Kudu%20dung-spitting
Kudu dung-spitting (Bokdrol Spoeg in Afrikaans) is a sport practiced by the Afrikaner community in South Africa. In the competition small, hard pellets of dung from the kudu antelope, are spat, with the farthest distance reached being the winner. Kudu dung-spitting is popular enough to have an annual world championship competition, with the formal sport beginning in 1994. Contests are held at some community bazaars, game festivals or tourism shows in the bushveld, Natal and Eastern Cape. Unlike many similar sports, the distance is measured from the marker to the place the dung pellet comes to rest, rather than where it initially hit the ground. The world record in the sport is a distance of set by Shaun van Rensburg of Addo. It is said that hunters began using the pellets in spitting competitions to "retaliate" at their prey, as the kudu is a notoriously difficult animal to hunt, and infamous for leaving a trail of dung pellets while managing to elude the hunter. "Similar" sports Records are also kept for cherry pit spitting, watermelon seed spitting, prune pit spitting, brown cricket spitting and tobacco juice spitting. In 2015, a sheep dung-spitting competition was introduced to Northern Ireland's Lady of The Lake Festival in County Fermanagh.
https://en.wikipedia.org/wiki/Selective%20soldering
Selective soldering is the process of selectively soldering components to printed circuit boards and molded modules that could be damaged by the heat of a reflow oven or wave soldering in a traditional surface-mount technology (SMT) or through-hole technology assembly processes. This usually follows an SMT oven reflow process; parts to be selectively soldered are usually surrounded by parts that have been previously soldered in a surface-mount reflow process, and the selective-solder process must be sufficiently precise to avoid damaging them. Processes Assembly processes used in selective soldering include: Selective aperture tooling over wave solder: These tools mask off areas previously soldered in the SMT reflow soldering process, exposing only those areas to be selectively soldered in the tool's aperture or window. The tool and printed circuit board (PCB) assembly are then passed over wave soldering equipment to complete the process. Each tool is specific to a PCB assembly. Mass selective dip solder fountain: A variant of selective-aperture soldering in which specialized tooling (with apertures to allow solder to be pumped through it) represent the areas to be soldered. The PCB is then presented over the selective-solder fountain; all selective soldering of the PCB is soldered simultaneously as the board is lowered into the solder fountain. Each tool is specific to a PCB assembly. Miniature wave selective solder : This typically uses a round miniature pumped solder wave, similar to the end of a pencil or crayon, to sequentially solder the PCB. The process is slower than the two previous methods, but more accurate. The PCB may be fixed, and the wave solder pot moved underneath the PCB; alternately, the PCB may be articulated over a fixed wave or solder bath to undergo the selective-soldering process. Unlike the first two examples, this process is toolless. Laser Selective Soldering System: A new system, able to import CAD-based board layouts and use that da
https://en.wikipedia.org/wiki/Aleksandar%20Totic
Aleksandar Totic is one of the original developers of the Mosaic browser. He co-founded Netscape Communications Corporation, where he was a partner. He was born in Belgrade, Serbia, on 23 September 1966. He moved to America after his degree from Kuwait was not recognized by the Yugoslav government, and currently lives in Palo Alto, CA San Francisco, CA. External links Mosaic - The First Global Web Browser Software engineers Serbian computer scientists Computer programmers Living people Year of birth missing (living people) Place of birth missing (living people)
https://en.wikipedia.org/wiki/Sacrococcygeal%20teratoma
Sacrococcygeal teratoma (SCT) is a type of tumor known as a teratoma that develops at the base of the coccyx (tailbone) and is thought to be primarily derived from remnants of the primitive streak. Sacrococcygeal teratomas are benign 75% of the time, malignant 12% of the time, and the remainder are considered "immature teratomas" that share benign and malignant features. Benign sacrococcygeal teratomas are more likely to develop in younger children who are less than 5 months old, and older children are more likely to develop malignant sacrococcygeal teratomas. The Currarino syndrome, due to an autosomal dominant mutation in the MNX1 gene, consists of a presacral mass (usually a mature teratoma or anterior meningocele), anorectal malformation and sacral dysgenesis. Presentation Complications Maternal complications of pregnancy may include mirror syndrome. Maternal complications of delivery may include a Cesarean section or, alternatively, a vaginal delivery with mechanical dystocia. Complications of the mass effect of a teratoma in general are addressed on the teratoma page. Complications of the mass effect of a large SCT may include hip dysplasia, bowel obstruction, urinary obstruction, hydronephrosis and hydrops fetalis. Even a small SCT can produce complications of mass effect, if it is presacral (Altman Type IV). In the fetus, severe hydronephrosis may contribute to inadequate lung development. Also in the fetus and newborn, the anus may be imperforate. Later complications of the mass effect and/or surgery may include neurogenic bladder, other forms of urinary incontinence, fecal incontinence, and other chronic problems resulting from accidental damage to or sacrifice of nerves and muscles within the pelvis. Removal of the coccyx may include additional complications. In one review of 25 patients, however, the most frequent complication was an unsatisfactory appearance of the surgical scar. Late effects Late effects are of two kinds: consequences of
https://en.wikipedia.org/wiki/Latent%20learning
Latent learning is the subconscious retention of information without reinforcement or motivation. In latent learning, one changes behavior only when there is sufficient motivation later than when they subconsciously retained the information. Latent learning is when the observation of something, rather than experiencing something directly, can affect later behavior. Observational learning can be many things. A human observes a behavior, and later repeats that behavior at another time (not direct imitation) even though no one is rewarding them to do that behavior. In the social learning theory, humans observe others receiving rewards or punishments, which invokes feelings in the observer and motivates them to change their behavior. In latent learning particularly, there is no observation of a reward or punishment. Latent learning is simply animals observing their surroundings with no particular motivation to learn the geography of it; however, at a later date, they are able to exploit this knowledge when there is motivation - such as the biological need to find food or escape trouble. The lack of reinforcement, associations, or motivation with a stimulus is what differentiates this type of learning from the other learning theories such as operant conditioning or classical conditioning. Comparison to other types of learning Classical conditioning Classical conditioning is when an animal eventually subconsciously anticipates a biological stimulus such as food when they experience a seemingly random stimulus, due to a repeated experience of their association. One significant example of classical conditioning is Ivan Pavlov's experiment in which dogs showed a conditioned response to a bell the experimenters had purposely tried to associate with feeding time. After the dogs had been conditioned, the dogs no longer only salivated for the food, which was a biological need and therefore an unconditioned stimulus. The dogs began to salivate at the sound of a bell, the be
https://en.wikipedia.org/wiki/Instinctive%20drift
Instinctive drift, alternately known as instinctual drift, is the tendency of an animal to revert to unconscious and automatic behaviour that interferes with learned behaviour from operant conditioning. Instinctive drift was coined by Keller and Marian Breland, former students of B.F. Skinner at the University of Minnesota, who described the phenomenon as "a clear and utter failure of conditioning theory." B.F. Skinner was an American psychologist and father of operant conditioning (or instrumental conditioning), which is a learning strategy that teaches the performance of an action either through reinforcement or punishment. It is the association between the behaviour and the reward or consequence that follows which determines whether an animal will maintain a behaviour, or whether that behaviour will become extinct. Instinctive drift is a phenomenon where such conditioning erodes and an animal reverts to its natural behaviour. B.F. Skinner B.F. Skinner was an American behaviourist inspired by John Watson's philosophy of behaviorism. Skinner was captivated by systematically controlling behaviour to produce desirable or beneficial outcomes. This passion led Skinner to become the father of operant conditioning. Skinner made significant contributions to the research concepts of reinforcement, punishment, schedules of reinforcement, behaviour modification and behaviour shaping. The mere existence of the instinctive drift phenomenon challenged Skinner's initial beliefs on operant conditioning and reinforcement. Operant conditioning Skinner described operant conditioning as strengthening behaviour through reinforcement. Operant conditioning can involve positive reinforcement, in which a desirable stimulus is added; negative reinforcement, in which an undesirable stimulus is taken away; positive punishment, in which an undesirable stimulus is added; and negative punishment, in which a desirable stimulus is taken away. Through these practices, animals shape their behaviour and are motivated
https://en.wikipedia.org/wiki/Genetic%20analysis
Genetic analysis is the overall process of studying and researching in fields of science that involve genetics and molecular biology. There are a number of applications that are developed from this research, and these are also considered parts of the process. The base system of analysis revolves around general genetics. Basic studies include identification of genes and inherited disorders. This research has been conducted for centuries on both a large-scale physical observation basis and on a more microscopic scale. Genetic analysis can be used generally to describe methods both used in and resulting from the sciences of genetics and molecular biology, or to applications resulting from this research. Genetic analysis may be done to identify genetic/inherited disorders and also to make a differential diagnosis in certain somatic diseases such as cancer. Genetic analyses of cancer include detection of mutations, fusion genes, and DNA copy number changes. History of genetic analysis Much of the research that set the foundation of genetic analysis began in prehistoric times. Early humans found that they could practice selective breeding to improve crops and animals. They also identified inherited traits in humans that were eliminated over the years. The many genetic analyses gradually evolved over time. Mendelian research Modern genetic analysis began in the mid-1800s with research conducted by Gregor Mendel. Mendel, who is known as the "father of modern genetics", was inspired to study variation in plants. Between 1856 and 1863, Mendel cultivated and tested some 29,000 pea plants (i.e., Pisum sativum). This study showed that one in four pea plants had purebred recessive alleles, two out of four were hybrid and one out of four were purebred dominant. His experiments led him to make two generalizations, the Law of Segregation and the Law of Independent Assortment, which later became known as Mendel's Laws of Inheritance. Lacking the basic understanding of heredity,
https://en.wikipedia.org/wiki/Two-way%20finite%20automaton
In computer science, in particular in automata theory, a two-way finite automaton is a finite automaton that is allowed to re-read its input. Two-way deterministic finite automaton A two-way deterministic finite automaton (2DFA) is an abstract machine, a generalized version of the deterministic finite automaton (DFA) which can revisit characters already processed. As in a DFA, there are a finite number of states with transitions between them based on the current character, but each transition is also labelled with a value indicating whether the machine will move its position in the input to the left, right, or stay at the same position. Equivalently, 2DFAs can be seen as read-only Turing machines with no work tape, only a read-only input tape. 2DFAs were introduced in a seminal 1959 paper by Rabin and Scott, who proved them to have equivalent power to one-way DFAs. That is, any formal language which can be recognized by a 2DFA can be recognized by a DFA which only examines and consumes each character in order. Since DFAs are obviously a special case of 2DFAs, this implies that both kinds of machines recognize precisely the class of regular languages. However, the equivalent DFA for a 2DFA may require exponentially many states, making 2DFAs a much more practical representation for algorithms for some common problems. 2DFAs are also equivalent to read-only Turing machines that use only a constant amount of space on their work tape, since any constant amount of information can be incorporated into the finite control state via a product construction (a state for each combination of work tape state and control state). Formal description Formally, a two-way deterministic finite automaton can be described by the following 8-tuple: M = (Q, Σ, ⊢, ⊣, δ, s, t, r), where Q is the finite, non-empty set of states, Σ is the finite, non-empty set of input symbols, ⊢ is the left endmarker, ⊣ is the right endmarker, δ: Q × (Σ ∪ {⊢, ⊣}) → Q × {left, right} is the transition function, s is the start state, t is the end (accept) state, and r is the reject state. In addition, the following tw
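To make the formal description concrete, here is a minimal Python sketch of a 2DFA simulator following the conventions above (endmarkers around the input, transitions returning a new state and a head move). The state names and the example language, words whose first and last symbols agree, are illustrative assumptions, not taken from the article.

```python
# Minimal 2DFA simulator (illustrative sketch, not a reference implementation).
# The machine is (states, alphabet, endmarkers, delta, start, accept, reject);
# delta maps (state, symbol) -> (new_state, head_move).

LEFT, RIGHT = -1, +1

def run_2dfa(delta, start, accept, reject, word, max_steps=10_000):
    tape = "<" + word + ">"          # '<' and '>' play the roles of the endmarkers
    state, pos = start, 0
    for _ in range(max_steps):
        if state == accept:
            return True
        if state == reject:
            return False
        state, move = delta[(state, tape[pos])]
        pos += move
    return False                      # safeguard against badly formed tables

# Example machine (assumed, for illustration): accept words over {0, 1} whose
# first and last symbols agree. It walks right to the end, reads the last
# symbol, rewinds to the left endmarker, then checks the first symbol.
delta = {
    ("fwd", "<"): ("fwd", RIGHT), ("fwd", "0"): ("fwd", RIGHT),
    ("fwd", "1"): ("fwd", RIGHT), ("fwd", ">"): ("last", LEFT),
    ("last", "<"): ("rej", RIGHT),                     # empty input is rejected
    ("last", "0"): ("rew0", LEFT), ("last", "1"): ("rew1", LEFT),
    ("rew0", "0"): ("rew0", LEFT), ("rew0", "1"): ("rew0", LEFT),
    ("rew0", "<"): ("chk0", RIGHT),
    ("rew1", "0"): ("rew1", LEFT), ("rew1", "1"): ("rew1", LEFT),
    ("rew1", "<"): ("chk1", RIGHT),
    ("chk0", "0"): ("acc", RIGHT), ("chk0", "1"): ("rej", RIGHT),
    ("chk1", "1"): ("acc", RIGHT), ("chk1", "0"): ("rej", RIGHT),
}

if __name__ == "__main__":
    print(run_2dfa(delta, "fwd", "acc", "rej", "0110"))   # True  (0 ... 0)
    print(run_2dfa(delta, "fwd", "acc", "rej", "011"))    # False (0 ... 1)
```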
https://en.wikipedia.org/wiki/Autostereoscopy
Autostereoscopy is any method of displaying stereoscopic images (adding binocular perception of 3D depth) without the use of special headgear, glasses, something that affects vision, or anything for eyes on the part of the viewer. Because headgear is not required, it is also called "glasses-free 3D" or "glassesless 3D". There are two broad approaches currently used to accommodate motion parallax and wider viewing angles: eye-tracking, and multiple views so that the display does not need to sense where the viewer's eyes are located. Examples of autostereoscopic displays technology include lenticular lens, parallax barrier, and may include Integral imaging, but notably do not include volumetric display or holographic displays. Technology Many organizations have developed autostereoscopic 3D displays, ranging from experimental displays in university departments to commercial products, and using a range of different technologies. The method of creating autostereoscopic flat panel video displays using lenses was mainly developed in 1985 by Reinhard Boerner at the Heinrich Hertz Institute (HHI) in Berlin. Prototypes of single-viewer displays were already being presented in the 1990s, by Sega AM3 (Floating Image System) and the HHI. Nowadays, this technology has been developed further mainly by European and Japanese companies. One of the best-known 3D displays developed by HHI was the Free2C, a display with very high resolution and very good comfort achieved by an eye tracking system and a seamless mechanical adjustment of the lenses. Eye tracking has been used in a variety of systems in order to limit the number of displayed views to just two, or to enlarge the stereoscopic sweet spot. However, as this limits the display to a single viewer, it is not favored for consumer products. Currently, most flat-panel displays employ lenticular lenses or parallax barriers that redirect imagery to several viewing regions; however, this manipulation requires reduced image resolutio
https://en.wikipedia.org/wiki/Square-free%20word
In combinatorics, a squarefree word is a word (a sequence of symbols) that does not contain any squares. A square is a word of the form , where is not empty. Thus, a squarefree word can also be defined as a word that avoids the pattern . Finite squarefree words Binary alphabet Over a binary alphabet , the only squarefree words are the empty word , and . Ternary alphabet Over a ternary alphabet , there are infinitely many squarefree words. It is possible to count the number of ternary squarefree words of length . This number is bounded by , where . The upper bound on can be found via Fekete's Lemma and approximation by automata. The lower bound can be found by finding a substitution that preserves squarefreeness. Alphabet with more than three letters Since there are infinitely many squarefree words over three-letter alphabets, this implies there are also infinitely many squarefree words over an alphabet with more than three letters. The following table shows the exact growth rate of the -ary squarefree words: 2-dimensional words Consider a map from to , where is an alphabet and is called a 2-dimensional word. Let be the entry . A word is a line of if there exists such that , and for . Carpi proves that there exists a 2-dimensional word over a 16-letter alphabet such that every line of is squarefree. A computer search shows that there are no 2-dimensional words over a 7-letter alphabet, such that every line of is squarefree. Generating finite squarefree words Shur proposes an algorithm called R2F (random-t(w)o-free) that can generate a squarefree word of length over any alphabet with three or more letters. This algorithm is based on a modification of entropy compression: it randomly selects letters from a k-letter alphabet to generate a -ary squarefree word. algorithm R2F is input: alphabet size , word length output: a -ary squarefree word of length . choose in uniformly at random set to fol
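Shur's R2F pseudocode is cut off in this extract; as a simple illustrative alternative (a plain backtracking search, not the R2F algorithm), the following Python sketch checks squarefreeness directly and builds a ternary squarefree word of a requested length.

```python
# Illustrative sketch: a squarefreeness test and a backtracking generator over
# the ternary alphabet {0, 1, 2}. This is NOT Shur's R2F algorithm, only a
# simple way to produce example squarefree words.

def is_squarefree(word):
    """True if no factor of `word` has the form xx with x non-empty."""
    n = len(word)
    for i in range(n):
        for half in range(1, (n - i) // 2 + 1):
            if word[i:i + half] == word[i + half:i + 2 * half]:
                return False
    return True

def ternary_squarefree(length, alphabet="012"):
    """Return one squarefree word of the given length, found by backtracking."""
    def extend(prefix):
        if len(prefix) == length:
            return prefix
        for letter in alphabet:
            cand = prefix + letter
            # Only suffixes ending at the new letter can create a new square.
            if all(cand[-2 * h:-h] != cand[-h:] for h in range(1, len(cand) // 2 + 1)):
                found = extend(cand)
                if found is not None:
                    return found
        return None
    return extend("")

if __name__ == "__main__":
    w = ternary_squarefree(20)
    print(w, is_squarefree(w))   # prints a length-20 squarefree word and True
```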
https://en.wikipedia.org/wiki/Comparison%20function
In applied mathematics, comparison functions are several classes of continuous functions, which are used in stability theory to characterize the stability properties of control systems, such as Lyapunov stability, uniform asymptotic stability, etc. Let be a space of continuous functions acting from to . The most important classes of comparison functions are: Functions of class are also called positive-definite functions. One of the most important properties of comparison functions is given by Sontag's 𝒦ℒ-Lemma, named after Eduardo Sontag. It says that for each and any there exist : Many further useful properties of comparison functions can be found in the literature. Comparison functions are primarily used to obtain quantitative restatements of stability properties such as Lyapunov stability, uniform asymptotic stability, etc. These restatements are often more useful than the qualitative definitions of stability properties given in ε–δ language. As an example, consider an ordinary differential equation where is locally Lipschitz. Then: () is globally stable if and only if there is a so that for any initial condition and for any it holds that () is globally asymptotically stable if and only if there is a so that for any initial condition and for any it holds that The comparison-functions formalism is widely used in input-to-state stability theory.
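The class definitions and the lemma referred to above are lost in this extract; as a hedged reconstruction (the standard definitions of the classes usually written 𝒫, 𝒦, 𝒦∞, ℒ and 𝒦ℒ, and the usual form of Sontag's 𝒦ℒ-lemma, recalled from the literature rather than taken verbatim from the article):

```latex
% Hedged reconstruction of the standard comparison-function classes.
\begin{align*}
  \mathcal{P}        &: \ \gamma \in C(\mathbb{R}_{\ge 0},\mathbb{R}_{\ge 0}),\
                         \gamma(0)=0,\ \gamma(r)>0 \text{ for } r>0
                         \quad\text{(positive definite)},\\
  \mathcal{K}        &: \ \gamma \in \mathcal{P} \text{ and strictly increasing},\\
  \mathcal{K}_\infty &: \ \gamma \in \mathcal{K} \text{ and } \gamma(r)\to\infty
                         \text{ as } r\to\infty,\\
  \mathcal{L}        &: \ \gamma \text{ continuous, strictly decreasing, }
                         \gamma(r)\to 0 \text{ as } r\to\infty,\\
  \mathcal{KL}       &: \ \beta(r,t) \text{ with } \beta(\cdot,t)\in\mathcal{K}
                         \text{ for each } t \text{ and } \beta(r,\cdot)\in\mathcal{L}
                         \text{ for each } r.
\end{align*}
% Sontag's KL-lemma (usual form): for every \beta \in \mathcal{KL} and \lambda > 0
% there exist \alpha_1, \alpha_2 \in \mathcal{K}_\infty such that
\[
  \beta(r,t) \;\le\; \alpha_1\!\left(\alpha_2(r)\, e^{-\lambda t}\right)
  \qquad \text{for all } r, t \ge 0 .
\]
```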
https://en.wikipedia.org/wiki/Logical%20constant
In logic, a logical constant or constant symbol of a language is a symbol that has the same semantic value under every interpretation of . Two important types of logical constants are logical connectives and quantifiers. The equality predicate (usually written '=') is also treated as a logical constant in many systems of logic. One of the fundamental questions in the philosophy of logic is "What is a logical constant?"; that is, what special feature of certain constants makes them logical in nature? Some symbols that are commonly treated as logical constants are: Many of these logical constants are sometimes denoted by alternate symbols (for instance, the use of the symbol "&" rather than "∧" to denote the logical and). Defining logical constants is a major part of the work of Gottlob Frege and Bertrand Russell. Russell returned to the subject of logical constants in the preface to the second edition (1937) of The Principles of Mathematics noting that logic becomes linguistic: "If we are to say anything definite about them, [they] must be treated as part of the language, not as part of what the language speaks about." The text of this book uses relations R, their converses and complements as primitive notions, also taken as logical constants in the form aRb. See also Logical connective Logical value Non-logical symbol
https://en.wikipedia.org/wiki/Musix%20GNU%2BLinux
Musix GNU+Linux is a discontinued live CD and DVD Linux distribution for the IA-32 processor family based on Debian. It contained a collection of software for audio production, graphic design, video editing and general-purpose applications. Musix GNU+Linux was one of the few Linux distributions recognized by the Free Software Foundation as being composed completely of free software. The main language used in development discussion and documentation was Spanish. Software Musix 2.0 Musix 2.0 was developed using the live-helper scripts from the Debian-Live project. The first Alpha version of Musix 2.0 was released on 25 March 2009 including two realtime-patched Linux-Libre kernels. On 17 May 2009 the first beta version of Musix 2.0 was released. See also Comparison of Linux distributions dyne:bolic – another free distribution for multimedia enthusiasts GNU/Linux naming controversy List of Linux distributions based on Debian
https://en.wikipedia.org/wiki/Dipsogen
A dipsogen is an agent that causes thirst. (From Greek: δίψα (dipsa), "thirst" and the suffix -gen, "to create".) Physiology Angiotensin II is thought to be a powerful dipsogen, and is one of the products of the renin–angiotensin pathway, a biological homeostatic mechanism for the regulation of electrolytes and water. External links 'Fluid Physiology' by Kerry Brandis (from http://www.anaesthesiamcq.com) Physiology
https://en.wikipedia.org/wiki/Software%20Communications%20Architecture
The Software Communications Architecture (SCA) is an open architecture framework that defines a standard way for radios to instantiate, configure, and manage waveform applications running on their platform. The SCA separates waveform software from the underlying hardware platform, facilitating waveform software portability and re-use to avoid costs of redeveloping waveforms. The latest version is SCA 4.1. Overview The SCA is published by the Joint Tactical Networking Center (JTNC). This architecture was developed to assist in the development of Software Defined Radio (SDR) communication systems, capturing the benefits of recent technology advances which are expected to greatly enhance interoperability of communication systems and reduce development and deployment costs. The architecture is also applicable to other embedded, distributed-computing applications such as Communications Terminals or Electronic Warfare (EW). The SCA has been structured to: Provide for portability of applications software between different SCA implementations, Leverage commercial standards to reduce development cost, Reduce software development time through the ability to reuse design modules, and Build on evolving commercial frameworks and architectures. The SCA is deliberately designed to meet commercial application requirements as well as those of military applications. Since the SCA is intended to become a self-sustaining standard, a wide cross-section of industry has been invited to participate in the development and validation of the SCA. The SCA is not a system specification but an implementation independent set of rules that constrain the design of systems to achieve the objectives listed above. Core Framework The Core Framework (CF) defines the essential "core" set of open software interfaces and profiles that provide for the deployment, management, interconnection, and intercommunication of software application components in an embedded, distributed-computing communication
https://en.wikipedia.org/wiki/Air-to-cloth%20ratio
The air-to-cloth ratio is the volumetric flow rate of air (m3/minute; SI m3/second) flowing through a dust collector's inlet duct divided by the total cloth area (m2) in the filters. The result is expressed in units of velocity. The air-to-cloth ratio is typically between 1.5 and 3.5 metres per minute, mainly depending on the concentration of dust loading. External links Details on how to calculate air-to-cloth ratio Filters Engineering ratios
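Since the ratio above is a plain quotient, a short sketch makes the arithmetic and units explicit; the function name and the numbers in the example are assumptions for illustration, not values from the article.

```python
# Illustrative sketch: air-to-cloth ratio = volumetric air flow / total cloth area.
# With flow in m^3/min and area in m^2, the result is a velocity in m/min.

def air_to_cloth_ratio(flow_m3_per_min, cloth_area_m2):
    return flow_m3_per_min / cloth_area_m2

# Assumed example numbers: 300 m^3/min drawn through 120 m^2 of filter cloth
# gives 2.5 m/min, inside the typical 1.5-3.5 m/min range quoted above.
print(air_to_cloth_ratio(300.0, 120.0))   # 2.5
```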
https://en.wikipedia.org/wiki/Rosette%20%28design%29
A rosette is a round, stylized flower design. Origin The rosette derives from the natural shape of the botanical rosette, formed by leaves radiating out from the stem of a plant and visible even after the flowers have withered. History The rosette design is used extensively in sculptural objects from antiquity, appearing in Mesopotamia, and in funeral steles' decoration in Ancient Greece. The rosette was another important symbol of Ishtar which had originally belonged to Inanna along with the Star of Ishtar. It was adopted later in Romanesque and Renaissance architecture, and is also common in the art of Central Asia, spreading as far as India where it is used as a decorative motif in Greco-Buddhist art. Ancient origins One of the earliest appearances of the rosette in ancient art is in early fourth millennium BC Egypt. Another early Mediterranean occurrence of the rosette design derives from Minoan Crete; among other places, the design appears on the Phaistos Disc, recovered from the eponymous archaeological site in southern Crete. Modern use The formalised flower motif is often carved in stone or wood to create decorative ornaments for architecture and furniture, and in metalworking, jewelry design and the applied arts to form a decorative border or at the intersection of two materials. Rosette decorations have been used for formal military awards. They also appear in modern, civilian clothes, and are often worn prominently at political or sporting events. Rosettes sometimes decorate musical instruments, such as around the perimeter of sound holes of guitars. Gallery See also Six petal rosette Footnotes Ornaments (architecture) Decorative arts Ornaments Visual motifs
https://en.wikipedia.org/wiki/Monte%20Carlo%20localization
Monte Carlo localization (MCL), also known as particle filter localization, is an algorithm for robots to localize using a particle filter. Given a map of the environment, the algorithm estimates the position and orientation of a robot as it moves and senses the environment. The algorithm uses a particle filter to represent the distribution of likely states, with each particle representing a possible state, i.e., a hypothesis of where the robot is. The algorithm typically starts with a uniform random distribution of particles over the configuration space, meaning the robot has no information about where it is and assumes it is equally likely to be at any point in space. Whenever the robot moves, it shifts the particles to predict its new state after the movement. Whenever the robot senses something, the particles are resampled based on recursive Bayesian estimation, i.e., how well the actual sensed data correlate with the predicted state. Ultimately, the particles should converge towards the actual position of the robot. Basic description Consider a robot with an internal map of its environment. When the robot moves around, it needs to know where it is within this map. Determining its location and rotation (more generally, the pose) by using its sensor observations is known as robot localization. Because the robot may not always behave in a perfectly predictable way, it generates many random guesses of where it is going to be next. These guesses are known as particles. Each particle contains a full description of a possible future state. When the robot observes the environment, it discards particles inconsistent with this observation, and generates more particles close to those that appear consistent. In the end, hopefully most particles converge to where the robot actually is. State representation The state of the robot depends on the application and design. For example, the state of a typical 2D robot may consist of a tuple (x, y, θ) for its position and orientation. For
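A compact illustration of the predict–sense–resample loop described above, written in Python for a hypothetical robot moving along a 1-D corridor and measuring its distance to a wall at position 0; the noise levels, particle count, and motion commands are assumptions for the sketch, not part of the algorithm's definition:

import numpy as np

rng = np.random.default_rng(0)
N = 1000                                      # number of particles (hypotheses of the robot's position)
particles = rng.uniform(0.0, 10.0, size=N)    # start with no knowledge: uniform over the corridor
weights = np.full(N, 1.0 / N)

true_pos = 2.0
for step in range(8):
    # 1. Motion update: the robot is commanded to move +0.5 m; add motion noise to every particle.
    true_pos += 0.5
    particles += 0.5 + rng.normal(0.0, 0.1, size=N)

    # 2. Measurement update: the robot senses its (noisy) distance to the wall at x = 0.
    z = true_pos + rng.normal(0.0, 0.2)
    # Weight each particle by how well it explains the measurement (Gaussian sensor model).
    weights = np.exp(-0.5 * ((particles - z) / 0.2) ** 2)
    weights /= weights.sum()

    # 3. Resample: draw particles in proportion to their weights, concentrating them where evidence agrees.
    idx = rng.choice(N, size=N, p=weights)
    particles = particles[idx]
    weights = np.full(N, 1.0 / N)

print(f"true position {true_pos:.2f}, estimate {particles.mean():.2f}")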
https://en.wikipedia.org/wiki/System%20in%20a%20package
A system in a package (SiP) or system-in-package is a number of integrated circuits (ICs) enclosed in one chip carrier package or encompassing an IC package substrate that may include passive components and perform the functions of an entire system. The ICs may be stacked using package on package, placed side by side, and/or embedded in the substrate. The SiP performs all or most of the functions of an electronic system, and is typically used when designing components for mobile phones, digital music players, etc. Dies containing integrated circuits may be stacked vertically on a substrate. They are internally connected by fine wires that are bonded to the package. Alternatively, with a flip chip technology, solder bumps are used to join stacked chips together. SiPs are like systems on a chip (SoCs) but less tightly integrated and not on a single semiconductor die. Technology SiP dies can be stacked vertically or tiled horizontally, with techniques like chiplets or quilt packaging, unlike less dense multi-chip modules, which place dies horizontally on a carrier. SiPs connect the dies with standard off-chip wire bonds or solder bumps, unlike slightly denser three-dimensional integrated circuits which connect stacked silicon dies with conductors running through the die. Many different 3D packaging techniques have been developed for stacking many fairly standard chip dies into a compact area. SiPs can contain several chips—such as a specialized processor, DRAM, flash memory—combined with passive components—resistors and capacitors—all mounted on the same substrate. This means that a complete functional unit can be built in a multi-chip package, so that few external components need to be added to make it work. This is particularly valuable in space constrained environments like MP3 players and mobile phones as it reduces the complexity of the printed circuit board and overall design. Despite its benefits, this technique decreases the yield of fabrication since any d
https://en.wikipedia.org/wiki/MAFA
MAFA (Mast cell function-associated antigen) is a type II membrane glycoprotein, first identified on the surface of rat mucosal-type mast cells of the RBL-2H3 line. Human and mouse homologues of MAFA have more recently been discovered, although these are also (or only) expressed by NK cells and T cells. MAFA is closely linked with the type 1 Fcɛ receptors not only in mucosal mast cells of humans and mice but also in the serosal mast cells of these same organisms. It can function both as a channel for calcium ions and as a partner that interacts with other receptors to inhibit certain cell processes. Its function is based on its specialized structure, which contains many motifs and sequences that allow these functions to take place. Discovery Experimental discovery MAFA was initially discovered by Enrique Ortega and Israel Pecht in 1988 while studying the type 1 Fcɛ receptors (FcɛRI) and the unknown Ca2+ channels that allowed these receptors to work in the cellular membrane. Ortega and Pecht experimented with a series of monoclonal antibodies on the RBL-2H3 line of rat mast cells. While searching for a specific antibody that would elicit a response, they found that the G63 monoclonal antibody inhibited the cellular secretions linked to the FcɛRI receptors in these rat mucosal mast cells. The G63 antibody attached to a specific membrane receptor protein that caused the inhibition process to occur. Specifically, the inhibition occurred through cross-linking of the G63 antibody and the glycoprotein, which stopped the processes of inflammation mediator formation, Ca2+ intake into the cell, and the hydrolysis of phosphatidylinositides. This caused biochemical inhibition of the normal FcɛRI response. The identified receptor protein was then isolated and studied; it was found that, when cross-linked, the protein underwent a conformational change that localized the FcɛRI receptors. Based on these results, both Ortega and Pecht na
https://en.wikipedia.org/wiki/Demographic%20window
The Demographic Window is defined as the period in a nation's demographic evolution when the proportion of the population in the working age group is particularly prominent. This occurs when the demographic architecture of a population becomes younger and the percentage of people able to work reaches its height. Typically, the demographic window of opportunity lasts for 30–40 years depending upon the country. Because of the mechanical link between fertility levels and age structures, the timing and duration of this period are closely associated with those of fertility decline: when birth rates fall, the age pyramid first shrinks, with gradually lower proportions of young population (under 15s), and the dependency ratio decreases, as is happening (or happened) in various parts of East Asia over several decades. After a few decades, however, low fertility causes the population to get older, and the growing proportion of elderly people again inflates the dependency ratio, as is observed in present-day Europe. The exact technical boundaries of definition may vary. The UN Population Department has defined it as the period when the proportion of children and youth under 15 years falls below 30 per cent and the proportion of people 65 years and older is still below 15 per cent. The Global Data Lab released an alternative classification of phases: Europe's demographic window lasted from 1950 to 2000. It began in China in 1990 and is expected to last until 2015. India is expected to enter the demographic window in 2010, which may last until the middle of the present century. Much of Africa will not enter the demographic window until 2045 or later. Societies that have entered the demographic window have a smaller dependency ratio (ratio of dependents to working-age population) and therefore the demographic potential for high economic growth, as favorable dependency ratios tend to boost savings and investments in human capital. But this so-called "demographic bonus" (or demographic divi
https://en.wikipedia.org/wiki/Eosinophilic%20esophagitis
Eosinophilic esophagitis (EoE) is an allergic inflammatory condition of the esophagus that involves eosinophils, a type of white blood cell. In healthy individuals, the esophagus is typically devoid of eosinophils. In EoE, eosinophils migrate to the esophagus in large numbers. When a trigger food is eaten, the eosinophils contribute to tissue damage and inflammation. Symptoms include swallowing difficulty, food impaction, vomiting, and heartburn. Eosinophilic esophagitis was first described in children but also occurs in adults. The condition is not well understood, but food allergy may play a significant role. The treatment may consist of removal of known or suspected triggers and medication to suppress the immune response. In severe cases, it may be necessary to enlarge the esophagus with an endoscopy procedure. While knowledge about EoE has been increasing rapidly, diagnosis of EoE can be challenging because the symptoms and histo-pathologic findings are not specific. Signs and symptoms EoE often presents with difficulty swallowing, food impaction, stomach pains, regurgitation or vomiting, and decreased appetite. Although the typical onset of EoE is in childhood, the disease can be found in all age groups, and symptoms vary depending on the age of presentation. In addition, young children with EoE may present with feeding difficulties and poor weight gain. It is more common in males, and affects both adults and children. Predominant symptoms in school-aged children and adolescents include difficulty swallowing, food impaction, and choking/gagging with meals- particularly when eating foods with coarse textures. Other symptoms in this age group can include abdominal/chest pain, vomiting, and regurgitation. The predominant symptom in adults is difficulty swallowing; however, intractable heartburn and food avoidance may also be present. Due to the long-standing inflammation and possible resultant scarring that may have gone unrecognized, adults presenting with
https://en.wikipedia.org/wiki/Type%E2%80%93length%E2%80%93value
Within communication protocols, TLV (type-length-value or tag-length-value) is an encoding scheme used for informational elements. A TLV-encoded data stream contains code related to the record type, the record value's length, and finally the value itself. Details The type and length are fixed in size (typically 1–4 bytes), and the value field is of variable size. These fields are used as follows: Type A binary code, often simply alphanumeric, which indicates the kind of field that this part of the message represents; Length The size of the value field (typically in bytes); Value Variable-sized series of bytes which contains data for this part of the message. Some advantages of using a TLV representation are: TLV sequences are easily searched using generalized parsing functions; New message elements which are received at an older node can be safely skipped and the rest of the message can be parsed. This is similar to the way that unknown XML tags can be safely skipped; TLV elements can be placed in any order inside the message body; TLV elements are typically used in binary formats and binary protocols, which makes parsing faster and the data smaller than in comparable text-based protocols. Examples Real-world examples Transport protocols TLS (and its predecessor SSL) use TLV-encoded messages. SSH COPS IS-IS RADIUS Link Layer Discovery Protocol allows for the sending of organizationally specific information as a TLV element within LLDP packets Media Redundancy Protocol allows organizationally specific information Dynamic Host Configuration Protocol (DHCP) uses TLV-encoded options RR protocol used in GSM cell phones (defined in 3GPP 04.18). In this protocol each message is defined as a sequence of information elements. Data storage formats IFF Matroska uses TLV for markup tags QTFF (the basis for MPEG-4 containers) Other ubus used for IPC in OpenWrt Other examples Imagine a message to make a telephone call. In a fi
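A small, generic sketch of the encode/decode pattern described above, in Python; the 1-byte type and 2-byte big-endian length chosen here are assumptions for illustration, since each real protocol fixes its own field sizes:

import struct

def encode_tlv(t: int, value: bytes) -> bytes:
    # 1-byte type, 2-byte big-endian length, then the value itself.
    return struct.pack(">BH", t, len(value)) + value

def decode_tlv_stream(data: bytes):
    """Yield (type, value) pairs; unknown types can simply be skipped by the caller."""
    offset = 0
    while offset < len(data):
        t, length = struct.unpack_from(">BH", data, offset)
        offset += 3
        yield t, data[offset:offset + length]
        offset += length

message = encode_tlv(1, b"555-1234") + encode_tlv(2, b"Doe, John")
for t, v in decode_tlv_stream(message):
    print(t, v)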
https://en.wikipedia.org/wiki/Link%20Layer%20Discovery%20Protocol
The Link Layer Discovery Protocol (LLDP) is a vendor-neutral link layer protocol used by network devices for advertising their identity, capabilities, and neighbors on a local area network based on IEEE 802 technology, principally wired Ethernet. The protocol is formally referred to by the IEEE as Station and Media Access Control Connectivity Discovery, specified in IEEE 802.1AB with additional support in IEEE 802.3 section 6 clause 79. LLDP performs functions similar to several proprietary protocols, such as Cisco Discovery Protocol, Foundry Discovery Protocol, Nortel Discovery Protocol and Link Layer Topology Discovery. Information gathered Information gathered with LLDP can be stored in the device management information base (MIB) and queried with the Simple Network Management Protocol (SNMP) as specified in RFC 2922. The topology of an LLDP-enabled network can be discovered by crawling the hosts and querying this database. Information that may be retrieved includes: System name and description Port name and description VLAN name IP management address System capabilities (switching, routing, etc.) MAC/PHY information MDI power Link aggregation Applications The Link Layer Discovery Protocol may be used as a component in network management and network monitoring applications. One such example is its use in data center bridging requirements. The Data Center Bridging Capabilities Exchange protocol (DCBX) is a discovery and capability exchange protocol that is used for conveying capabilities and configuration of data center bridging features between neighbors to ensure consistent configuration across the network. LLDP is used to advertise power over Ethernet capabilities and requirements and negotiate power delivery. Media endpoint discovery extension Media Endpoint Discovery is an enhancement of LLDP, known as LLDP-MED, that provides the following facilities: Auto-discovery of LAN policies (such as VLAN, Layer 2 Priority and Differentiated services (Diffserv) settings) enabling plug and play networking. Device
https://en.wikipedia.org/wiki/Value%20%28mathematics%29
In mathematics, value may refer to several, strongly related notions. In general, a mathematical value may be any definite mathematical object. In elementary mathematics, this is most often a number – for example, a real number such as or an integer such as 42. The value of a variable or a constant is any number or other mathematical object assigned to it. The value of a mathematical expression is the result of the computation described by this expression when the variables and constants in it are assigned values. The value of a function, given the value(s) assigned to its argument(s), is the quantity assumed by the function for these argument values. For example, if the function is defined by , then assigning the value 3 to its argument yields the function value 10, since . If the variable, expression or function only assumes real values, it is called real-valued. Likewise, a complex-valued variable, expression or function only assumes complex values. See also Value function Value (computer science) Absolute value Truth value
https://en.wikipedia.org/wiki/Logic%20of%20information
The logic of information, or the logical theory of information, considers the information content of logical signs and expressions along the lines initially developed by Charles Sanders Peirce. In this line of work, the concept of information serves to integrate the aspects of signs and expressions that are separately covered, on the one hand, by the concepts of denotation and extension, and on the other hand, by the concepts of connotation and comprehension. Peirce began to develop these ideas in his lectures "On the Logic of Science" at Harvard University (1865) and the Lowell Institute (1866). See also Charles Sanders Peirce bibliography Information theory Inquiry Philosophy of information Pragmatic maxim Pragmatic theory of information Pragmatic theory of truth Pragmaticism Pragmatism Scientific method Semeiotic Semiosis Semiotics Semiotic information theory Sign relation Sign relational complex Triadic relation
https://en.wikipedia.org/wiki/Mass-to-light%20ratio
In astrophysics and physical cosmology the mass-to-light ratio, normally designated with the Greek letter upsilon, Υ, is the quotient between the total mass of a spatial volume (typically on the scales of a galaxy or a cluster) and its luminosity. These ratios are often reported using the value calculated for the Sun as a baseline ratio, which is a constant Υ☉ = 5133 kg/W, equal to the solar mass M☉ divided by the solar luminosity L☉. The mass-to-light ratios of galaxies and clusters are all much greater than Υ☉, due in part to the fact that most of the matter in these objects does not reside within stars, and observations suggest that a large fraction is present in the form of dark matter. Luminosities are obtained from photometric observations, correcting the observed brightness of the object for the distance dimming and extinction effects. In general, unless a complete spectrum of the radiation emitted by the object is obtained, a model must be extrapolated through either power law or blackbody fits. The luminosity thus obtained is known as the bolometric luminosity. Masses are often calculated from the dynamics of the virialized system or from gravitational lensing. Typical mass-to-light ratios for galaxies range from 2 to 10 Υ☉, while on the largest scales, the mass-to-light ratio of the observable universe is approximately 100 Υ☉, in concordance with the current best-fit cosmological model.
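A brief sketch of how a mass-to-light ratio is expressed in solar units, in Python; the galaxy mass and luminosity plugged in below are hypothetical round numbers, not measurements from the article:

def mass_to_light_solar_units(mass_solar_masses: float, luminosity_solar_luminosities: float) -> float:
    """Upsilon in solar units: (M / M_sun) divided by (L / L_sun)."""
    return mass_solar_masses / luminosity_solar_luminosities

# Hypothetical galaxy: total (stars + gas + dark matter) mass of 1e11 M_sun,
# bolometric luminosity of 2e10 L_sun.
upsilon = mass_to_light_solar_units(1.0e11, 2.0e10)
print(f"mass-to-light ratio = {upsilon:.1f} in solar units")  # 5.0, within the typical 2-10 range for galaxies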
https://en.wikipedia.org/wiki/Cray%20CS6400
The Cray Superserver 6400, or CS6400, is a discontinued multiprocessor server computer system produced by Cray Research Superservers, Inc., a subsidiary of Cray Research, and launched in 1993. The CS6400 was also sold as the Amdahl SPARCsummit 6400E. The CS6400 (codenamed SuperDragon during development) superseded the earlier SPARC-based Cray S-MP system, which was designed by Floating Point Systems. However, the CS6400 adopted the XDBus packet-switched inter-processor bus also used in Sun Microsystems' SPARCcenter 2000 (Dragon) and SPARCserver 1000 (Baby Dragon or Scorpion) Sun4d systems. This bus originated in the Xerox Dragon multiprocessor workstation designed at Xerox PARC. The CS6400 was available with either 60 MHz SuperSPARC-I or 85 MHz SuperSPARC-II processors, maximum RAM capacity was 16 GB. Other features shared with the Sun servers included use of the same SuperSPARC microprocessor and Solaris operating system. However, the CS6400 could be configured with four to 64 processors on quad XDBusses at 55 MHz, compared with the SPARCcenter 2000's maximum of 20 on dual XDBusses at 40 or 50 MHz and the SPARCserver 1000's maximum of 8 on a single XDBus. Unlike the Sun SPARCcenter 2000 and SPARCserver 1000, each CS6400 is equipped with an external System Service Processor (SSP), a SPARCstation fitted with a JTAG interface to communicate with the CS6400 to configure its internal bus control card. The other systems have a JTAG interface, but it is not used for this purpose. While the CS6400 only requires the SSP to be used for configuration changes (e.g. a CPU card is pulled for maintenance), some derivative designs, in particular the Sun Enterprise 10000, are useless without their SSP. Upon Silicon Graphics' acquisition of Cray Research in 1996, the Superserver business (by now the Cray Business Systems Division) was sold to Sun. This included Starfire, the CS6400's successor then under development, which became the Sun Enterprise 10000.
https://en.wikipedia.org/wiki/Tunnel%20washer
A tunnel washer, also called a continuous batch washer, is an industrial washing machine designed specifically to handle heavy loads of laundry. Inside the machine, a large screw made of perforated metal advances items through the washer in one direction, while water and washing chemicals move through in the opposite direction. Thus, the linen moves through pockets of progressively cleaner water and fresher chemicals. Soiled linen can be continuously fed into one end of the tunnel while clean linen emerges from the other. Originally, one of the machine's major drawbacks was the necessity of using one wash formula for all items. Modern computerized tunnel washers can monitor and adjust the chemical levels in individual pockets, effectively overcoming this problem. See also Washing machine
https://en.wikipedia.org/wiki/Adrian%20Smith%20%28statistician%29
Sir Adrian Frederick Melhuish Smith, PRS (born 9 September 1946) is a British statistician who is chief executive of the Alan Turing Institute and president of the Royal Society. Early life and education Smith was born on 9 September 1946 in Dawlish. He was educated at Selwyn College, Cambridge, and University College London, where his PhD supervisor was Dennis Lindley. Career From 1977 until 1990, he was professor of statistics and head of department of mathematics at the University of Nottingham. He was subsequently at Imperial College, London, where he was head of the mathematics department. Smith is a former deputy vice-chancellor of the University of London and became vice-chancellor of the university on 1 September 2012. He stood down from the role in August 2018 to become the director of the Alan Turing Institute. Smith is a member of the governing body of the London Business School. He served on the Advisory Council for the Office for National Statistics from 1996 to 1998, was statistical advisor to the Nuclear Waste Inspectorate from 1991 to 1998 and was advisor on Operational Analysis to the Ministry of Defence from 1982 to 1987. He is a former president of the Royal Statistical Society. He was elected a Fellow of the Royal Society in 2001. His FRS citation included "his diverse contributions to Bayesian statistics. His monographs are the most comprehensive available and his work has had a major impact on the development of monitoring tools for clinicians." In statistical theory, Smith is a proponent of Bayesian statistics and evidence-based practice—a general extension of evidence-based medicine into all areas of public policy. With Antonio Machi, he translated Bruno de Finetti's Theory of Probability into English. He wrote an influential paper in 1990 along with Alan E. Gelfand, which drew attention to the significance of the Gibbs sampler technique for Bayesian numerical integration problems. He was also co-author of the seminal paper on the partic
https://en.wikipedia.org/wiki/Certificate%20policy
A certificate policy (CP) is a document which states what the different entities of a public key infrastructure (PKI) are, along with their roles and their duties. This document is published in the PKI perimeter. When in use with X.509 certificates, a specific field can be set to include a link to the associated certificate policy. Thus, during an exchange, any relying party has access to the assurance level associated with the certificate, and can decide on the level of trust to put in the certificate. RFC 3647 The reference document for writing a certificate policy is RFC 3647. The RFC proposes a framework for the writing of certificate policies and Certification Practice Statements (CPS). The points described below are based on the framework presented in the RFC. Main points Architecture The document should describe the general architecture of the related PKI, present the different entities of the PKI, and any exchange based on certificates issued by this very same PKI. Certificate uses An important point of the certificate policy is the description of the authorized and prohibited certificate uses. When a certificate is issued, it can be stated in its attributes what use cases it is intended to fulfill. For example, a certificate can be issued for digital signature of e-mail (aka S/MIME), encryption of data, authentication (e.g. of a Web server, as when one uses HTTPS) or further issuance of certificates (delegation of authority). Prohibited uses are specified in the same way. Naming, identification and authentication The document also describes how certificate names are to be chosen, as well as the associated needs for identification and authentication. When a certificate application is submitted, the certification authority (or, by delegation, the registration authority) is in charge of checking the information provided by the applicant, such as their identity. This is to make sure that the CA does not take part in identity theft. Key generation The ge
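A short sketch of how the X.509 field mentioned above (the certificatePolicies extension) can be read from a certificate, using the widely used Python cryptography package; the file name and the expectation that the certificate carries the extension are assumptions for illustration:

from cryptography import x509
from cryptography.x509.oid import ExtensionOID

# Load a PEM-encoded certificate from disk (hypothetical file name).
with open("server.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

try:
    ext = cert.extensions.get_extension_for_oid(ExtensionOID.CERTIFICATE_POLICIES)
except x509.ExtensionNotFound:
    print("certificate carries no certificatePolicies extension")
else:
    for policy in ext.value:
        # Each entry names a certificate policy by OID, optionally with CPS/user-notice qualifiers.
        print("policy OID:", policy.policy_identifier.dotted_string)
        for qualifier in policy.policy_qualifiers or []:
            print("  qualifier:", qualifier)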
https://en.wikipedia.org/wiki/L%C3%B6fgren%20syndrome
Löfgren syndrome is a type of acute sarcoidosis, an inflammatory disorder characterized by swollen lymph nodes in the chest, tender red nodules on the shins, fever and arthritis. It is more common in women than men, and is more frequent in those of Scandinavian, Irish, African and Puerto Rican heritage. It was described in 1953 by Sven Halvar Löfgren, a Swedish clinician. Some have considered the condition to be imprecisely defined. Signs and symptoms It is characterized by enlargement of the lymph nodes near the inner border of the lungs (called "hilar lymphadenopathy") as seen on x-ray, and tender red nodules (erythema nodosum) are classically present on the shins, predominantly in women. It may also be accompanied by arthritis (more prominent in men) and fever. The arthritis is often acute and involves the lower extremities, particularly the ankles. Löfgren syndrome consists of the triad of erythema nodosum, bilateral hilar lymphadenopathy on chest radiograph, and joint pain. Genetics Recent studies have demonstrated that the HLA-DRB1*03 is strongly associated with Löfgren syndrome. Diagnosis The triad of erythema nodosum, acute arthritis, and bilateral hilar lymphadenopathy is highly specific (>95%) for the diagnosis of Löfgren syndrome. When the triad is present, further testing with additional imaging and laboratory testing is unnecessary. Treatment NSAIDs (nonsteroidal anti-inflammatory drugs) are the usual recommended treatment for Löfgren syndrome. Colchicine or low-dose prednisone may also be used. Prognosis Löfgren syndrome is associated with a good prognosis, with > 90% of patients experiencing disease resolution within 2 years. In contrast, patients with the disfiguring skin condition lupus pernio or cardiac or neurologic involvement rarely experience disease remission. See also List of cutaneous conditions Sarcoidosis
https://en.wikipedia.org/wiki/Sort%20%28Unix%29
In computing, sort is a standard command line program of Unix and Unix-like operating systems that prints the lines of its input, or the concatenation of all files listed in its argument list, in sorted order. Sorting is done based on one or more sort keys extracted from each line of input. By default, the entire input is taken as the sort key. Blank space is the default field separator. The command supports a number of command-line options that can vary by implementation. For instance, the "-r" flag will reverse the sort order. History A command that invokes a general sort facility was first implemented within Multics. Later, it appeared in Version 1 Unix. This version was originally written by Ken Thompson at AT&T Bell Laboratories. By Version 4 Thompson had modified it to use pipes, but sort retained an option to name the output file because it was used to sort a file in place. In Version 5, Thompson invented "-" to represent standard input. The version of sort bundled in GNU coreutils was written by Mike Haertel and Paul Eggert. This implementation employs the merge sort algorithm. Similar commands are available on many other operating systems; for example, a sort command is part of ASCII's MSX-DOS2 Tools for MSX-DOS version 2. The command has also been ported to the IBM i operating system. Syntax sort [OPTION]... [FILE]... With no FILE, or when FILE is -, the command reads from standard input. Parameters Examples Sort a file in alphabetical order $ cat phonebook Smith, Brett 555-4321 Doe, John 555-1234 Doe, Jane 555-3214 Avery, Cory 555-4132 Fogarty, Suzie 555-2314 $ sort phonebook Avery, Cory 555-4132 Doe, Jane 555-3214 Doe, John 555-1234 Fogarty, Suzie 555-2314 Smith, Brett 555-4321 Sort by number The -n option makes the program sort according to numerical value. The du command produces output that starts with a number, the file size, so its output can be piped to sort to produce a list of files sorted by (ascending) fil
https://en.wikipedia.org/wiki/Brian%20Conrad
Brian Conrad (born November 20, 1970) is an American mathematician and number theorist, working at Stanford University. Previously, he taught at the University of Michigan and at Columbia University. Conrad and others proved the modularity theorem, also known as the Taniyama-Shimura Conjecture. He proved this in 1999 with Christophe Breuil, Fred Diamond and Richard Taylor, while holding a joint postdoctoral position at Harvard University and the Institute for Advanced Study in Princeton, New Jersey. Conrad received his bachelor's degree from Harvard in 1992, where he won a prize for his undergraduate thesis. He did his doctoral work under Andrew Wiles and went on to receive his Ph.D. from Princeton University in 1996 with a dissertation titled Finite Honda Systems And Supersingular Elliptic Curves. He was also featured as an extra in Nova's The Proof. His identical twin brother Keith Conrad, also a number theorist, is a professor at the University of Connecticut.
https://en.wikipedia.org/wiki/Transgene
A transgene is a gene that has been transferred naturally, or by any of a number of genetic engineering techniques, from one organism to another. The introduction of a transgene, in a process known as transgenesis, has the potential to change the phenotype of an organism. Transgene describes a segment of DNA containing a gene sequence that has been isolated from one organism and is introduced into a different organism. This non-native segment of DNA may either retain the ability to produce RNA or protein in the transgenic organism or alter the normal function of the transgenic organism's genetic code. In general, the DNA is incorporated into the organism's germ line. For example, in higher vertebrates this can be accomplished by injecting the foreign DNA into the nucleus of a fertilized ovum. This technique is routinely used to introduce human disease genes or other genes of interest into strains of laboratory mice to study the function or pathology involved with that particular gene. The construction of a transgene requires the assembly of a few main parts. The transgene must contain a promoter, which is a regulatory sequence that will determine where and when the transgene is active, an exon, a protein coding sequence (usually derived from the cDNA for the protein of interest), and a stop sequence. These are typically combined in a bacterial plasmid and the coding sequences are typically chosen from transgenes with previously known functions. Transgenic or genetically modified organisms, be they bacteria, viruses or fungi, serve many research purposes. Transgenic plants, insects, fish and mammals (including humans) have been bred. Transgenic plants such as corn and soybean have replaced wild strains in agriculture in some countries (e.g. the United States). Transgene escape, with persistence and invasiveness, has been documented for GMO crops since 2001. Transgenic organisms pose ethical questions and may cause biosafety problems. History The idea of shaping a
https://en.wikipedia.org/wiki/Prism%20%28chipset%29
The Prism brand is used for wireless networking integrated circuit (commonly called "chips") technology from Conexant for wireless LANs. They were formerly produced by Intersil Corporation. Legacy 802.11b products (Prism 2/2.5/3) The open-source HostAP driver supports the IEEE 802.11b Prism 2/2.5/3 family of chips. Wireless adaptors which use the Prism chipset are known for compatibility, and are preferred for specialist applications such as packet capture. No win64 drivers are known to exist. Intersil firmware WEP WPA (TKIP), after update WPA2 (CCMP), after update Lucent/Agere WEP WPA (TKIP in hardware) 802.11b/g products (Prism54, ISL38xx) The chipset has undergone a major redesign for 802.11g compatibility and cost reduction, and newer "Prism54" chipsets are not compatible with their predecessors. Intersil initially provided a Linux driver for the first Prism54 chips which implemented a large part of the 802.11 stack in the firmware. However, further cost reductions caused a new, lighter firmware to be designed and the amount of on-chip memory to shrink, making it impossible to run the older version of the firmware on the latest chips. In the meantime, the PRISM business was sold to Conexant, which never published information about the newer firmware API that would enable a Linux driver to be written. However, a reverse engineering effort eventually made it possible to use the new Prism54 chipsets under the Linux and BSD operating systems. See also HostAP driver for prism chipsets External links PRISM solutions at Conexant GPL drivers and firmware for the ISL38xx-based Prism chipsets (mostly reverse engineered) Wireless networking hardware
https://en.wikipedia.org/wiki/QuickWin
QuickWin was a library from Microsoft that made it possible to compile command line MS-DOS programs as Windows 3.1 applications, displaying their output in a window. Since the release of Windows NT, Microsoft has included support for console applications in the Windows operating system itself via the Windows Console, eliminating the need for QuickWin. But Intel Visual Fortran still uses that library. Borland's equivalent in Borland C++ 5 was called EasyWin. There is a program called QuickWin on CodeProject, which does a similar thing. See also Command-line interface
https://en.wikipedia.org/wiki/Sharon%20R.%20Long
Sharon Rugel Long (born March 2, 1951) is an American plant biologist. She is the Steere-Pfizer Professor of Biological Science in the Department of Biology at Stanford University, and the Principal Investigator of the Long Laboratory at Stanford. Long studies the symbiosis between bacteria and plants, in particular the relationship of nitrogen-fixing bacteria to legumes. Her work has applications for energy conservation and sustainable agriculture. She is a 1992 MacArthur Fellows Program recipient, and became a Member of the National Academy of Sciences in 1993. Early life and education Sharon Rugel Long was born on March 2, 1951, to Harold Eugene and Florence Jean (Rugel) Long. She attended George Washington High School in Denver, Colorado. Long spent a year at Harvey Mudd College before becoming one of the first women to attend Caltech in September 1970. She completed a double major in biochemistry and French literature in the Independent Studies Program, and obtained her B.S. in 1973. Long went on to study biochemistry and genetics at Yale, receiving her Ph.D. in 1979. She began her research on plants and symbiosis while a postdoc in Frederick M. Ausubel's lab at Harvard University. Career and research Long joined the Stanford University faculty in 1982 as an assistant professor, rising to associate professor in 1987, and full professor in 1992. From 1994 to 2001 she was also an Investigator of the Howard Hughes Medical Institute. She currently holds the Steere-Pfizer chair in Biological Sciences at Stanford. From 1993 to 1996 she was part of the National Research Council's Committee on Undergraduate Science Education. She served as Dean of Humanities and Sciences at Stanford University from 2001 to 2007. In September 2008 she was identified as one of 5 science advisors for Democratic presidential candidate Barack Obama. In 2011, she was appointed to the President's Committee on the National Medal of Science by President Obama. Long identified and cloned genes that allow bact
https://en.wikipedia.org/wiki/Shredding%20%28tree-pruning%20technique%29
Shredding is a traditional European method of tree pruning by which all side branches are removed repeatedly leaving the main trunk and top growth. In the Middle Ages the practice was common throughout Europe, but it is now rare, found mainly in central and Eastern Europe. The purpose of shredding is to allow harvest of firewood and animal fodder while preserving a tall main trunk which may be harvested for timber at a later date. It was formerly practiced in Britain although Oliver Rackham notes that "The medieval practice of shredding – cropping the side-branches of a tree leaving a tuft at the top – vanished from Britain long ago. Only at Haresfield (Gloucestershire) have I seen a few ancient ashes that may once have been shredded". Another name for cutting side branches off trees, used mainly in Northern England, is snagging. Other similar woodland management techniques include pollarding and coppicing. See also Woodland management
https://en.wikipedia.org/wiki/Flux%20balance%20analysis
Flux balance analysis (FBA) is a mathematical method for simulating metabolism in genome-scale reconstructions of metabolic networks. In comparison to traditional methods of modeling, FBA is less intensive in terms of the input data required for constructing the model. Simulations performed using FBA are computationally inexpensive and can calculate steady-state metabolic fluxes for large models (over 2000 reactions) in a few seconds on modern personal computers. The related method of metabolic pathway analysis seeks to find and list all possible pathways between metabolites. FBA finds applications in bioprocess engineering to systematically identify modifications to the metabolic networks of microbes used in fermentation processes that improve product yields of industrially important chemicals such as ethanol and succinic acid. It has also been used for the identification of putative drug targets in cancer and pathogens, rational design of culture media, and host–pathogen interactions. The results of FBA can be visualized using flux maps, such as a map of the steady-state fluxes carried by the reactions of glycolysis, in which the thickness of each arrow is proportional to the flux through the reaction. FBA formalizes the system of equations describing the concentration changes in a metabolic network as the product of the matrix of stoichiometric coefficients (the stoichiometric matrix S) and the vector v of the unsolved fluxes. This product is set equal to a vector of zeros, representing the system at steady state. Linear programming is then used to calculate a solution of fluxes corresponding to the steady state. History Some of the earliest work in FBA dates back to the early 1980s. Papoutsakis demonstrated that it was possible to construct flux balance equations using a metabolic map. It was Watson, however, who first introduced the idea of using linear programming and an objective function to solve for the fluxes in
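A toy illustration of the linear-programming step described above, in Python with SciPy; the three-reaction network, flux bounds, and biomass objective are invented for the sketch and are far smaller than the genome-scale models the method targets:

import numpy as np
from scipy.optimize import linprog

# Toy network with two internal metabolites (A, B) and three reactions:
#   v1: uptake -> A,   v2: A -> B,   v3: B -> biomass (export)
# Rows of S are metabolites, columns are reactions.
S = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])

bounds = [(0.0, 10.0),    # v1: uptake limited to 10 flux units
          (0.0, 1000.0),  # v2
          (0.0, 1000.0)]  # v3

# Maximize v3 (biomass export); linprog minimizes, so negate the objective.
c = np.array([0.0, 0.0, -1.0])

# Steady-state constraint S v = 0, plus the flux bounds.
res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)        # expected: [10, 10, 10]
print("max biomass flux:", -res.fun)   # expected: 10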
https://en.wikipedia.org/wiki/Filopodia
Filopodia (singular: filopodium) are slender cytoplasmic projections that extend beyond the leading edge of lamellipodia in migrating cells. Within the lamellipodium, actin ribs are known as microspikes, and when they extend beyond the lamellipodia, they are known as filopodia. They contain microfilaments (also called actin filaments) cross-linked into bundles by actin-bundling proteins, such as fascin and fimbrin. Filopodia form focal adhesions with the substratum, linking them to the cell surface. Many types of migrating cells display filopodia, which are thought to be involved both in the sensation of chemotropic cues and in the resulting changes in directed locomotion. Activation of the Rho family of GTPases, particularly cdc42, and their downstream intermediates results in the polymerization of actin fibers by Ena/Vasp homology proteins. Growth factors bind to receptor tyrosine kinases, resulting in the polymerization of actin filaments, which, when cross-linked, make up the supporting cytoskeletal elements of filopodia. Rho activity also results in activation by phosphorylation of ezrin-moesin-radixin family proteins that link actin filaments to the filopodia membrane. Filopodia have roles in sensing, migration, neurite outgrowth, and cell-cell interaction. To close a wound in vertebrates, growth factors stimulate the formation of filopodia in fibroblasts to direct fibroblast migration and wound closure. In macrophages, filopodia act as phagocytic tentacles, pulling bound objects towards the cell for phagocytosis. In infections Filopodia are also used for movement of bacteria between cells, so as to evade the host immune system. The intracellular bacteria Ehrlichia are transported between cells through the host cell filopodia induced by the pathogen during initial stages of infection. Filopodia are the initial contact that human retinal pigment epithelial (RPE) cells make with elementary bodies of Chlamydia trachomatis, the bacterium that causes chlamydia. Viruses have been
https://en.wikipedia.org/wiki/Meaning%20%28philosophy%29
In philosophy, and more specifically in its sub-fields semantics, semiotics, philosophy of language, metaphysics, and metasemantics, meaning "is a relationship between two sorts of things: signs and the kinds of things they intend, express, or signify". The types of meanings vary according to the types of the thing that is being represented. There are: the things, which might have meaning; things that are also signs of other things, and therefore are always meaningful (i.e., natural signs of the physical world and ideas within the mind); things that are necessarily meaningful, such as words and nonverbal symbols. The major contemporary positions of meaning come under the following partial definitions of meaning: psychological theories, involving notions of thought, intention, or understanding; logical theories, involving notions such as intension, cognitive content, or sense, along with extension, reference, or denotation; message, content, information, or communication; truth conditions; usage, and the instructions for usage; measurement, computation, or operation. Truth and meaning The question of what is a proper basis for deciding how words, symbols, ideas and beliefs may properly be considered to truthfully denote meaning, whether by a single person or by an entire society, has been considered by five major types of theory of meaning and truth. Each type is discussed below, together with its principal exponents. Substantive theories of meaning Correspondence theory Correspondence theories emphasise that true beliefs and true statements of meaning correspond to the actual state of affairs and that associated meanings must be in agreement with these beliefs and statements. This type of theory stresses a relationship between thoughts or statements on one hand, and things or objects on the other. It is a traditional model tracing its origins to ancient Greek philosophers such as Socrates, Plato, and Aristotle. This class of theories holds that the truth or the falsi
https://en.wikipedia.org/wiki/Triangle%20strip
In computer graphics, a triangle strip is a subset of triangles in a triangle mesh with shared vertices, and is a more memory-efficient method of storing information about the mesh. They are more efficient than un-indexed lists of triangles, but usually equally fast or slower than indexed triangle lists. The primary reason to use triangle strips is to reduce the amount of data needed to create a series of triangles. The number of vertices stored in memory is reduced from 3N to N + 2, where N is the number of triangles to be drawn. This allows for less use of disk space, as well as making them faster to load into RAM. For example, the four triangles in the diagram, without using triangle strips, would have to be stored and interpreted as four separate triangles: ABC, CBD, CDE, and EDF. However, using a triangle strip, they can be stored simply as a sequence of vertices ABCDEF. This sequence would be decoded as a set of triangles with vertices at ABC, BCD, CDE and DEF - although the exact order that the vertices are read will not be in left-to-right order as this would result in adjacent triangles facing alternating directions. OpenGL implementation OpenGL has built-in support for triangle strips. Fixed function OpenGL (deprecated in OpenGL 3.0) has support for triangle strips using immediate mode and the glBegin(), glVertex*(), and glEnd() functions. Newer versions support triangle strips using glDrawElements and glDrawArrays. To draw a triangle strip using immediate mode OpenGL, glBegin() must be passed the argument GL_TRIANGLE_STRIP, which notifies OpenGL a triangle strip is about to be drawn. The glVertex* family of functions specify the coordinates for each vertex in the triangle strip. For more information, consult The OpenGL Redbook. To draw the triangle strip in the diagram using immediate mode OpenGL, the code is as follows: //Vertices below are in Clockwise orientation //Default setting for glFrontFace is Counter-clockwise glFrontFace(GL_CW); glBegin(GL_TRIANGLE_STRIP); glVertex3f( 0.0f, 0.0f, 0.0f ); //vertex 1 glVertex3f( 0.0f, 0.5f,
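A small Python sketch of the decoding rule described above, turning a vertex strip into individual triangles while flipping the read order of every other triangle to keep a consistent winding; the function name and the letter vertices are illustrative, not part of any OpenGL API:

def strip_to_triangles(strip):
    """Expand a triangle-strip vertex sequence into individual triangles."""
    triangles = []
    for i in range(len(strip) - 2):
        a, b, c = strip[i], strip[i + 1], strip[i + 2]
        if i % 2 == 1:
            a, b = b, a  # swap so every triangle keeps the same winding direction
        if a != b and b != c and a != c:  # skip degenerate triangles
            triangles.append((a, b, c))
    return triangles

print(strip_to_triangles("ABCDEF"))
# [('A', 'B', 'C'), ('C', 'B', 'D'), ('C', 'D', 'E'), ('E', 'D', 'F')]
# i.e. the same four triangles the article lists as ABC, CBD, CDE, EDF.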
https://en.wikipedia.org/wiki/Sanistand
Sanistand was a female urinal manufactured by Japanese toilet maker giant TOTO from 1951 to 1971 and marketed by American Standard from 1950 to 1973. It appeared in a bathroom in the National Stadium for female athletes during the 1964 Summer Olympics in Tokyo. The urinal encouraged women to urinate from a standing position, without the need to sit on a shared seat. See also Female urinal Female urination device Pollee External links TOTO Library article, March 20, 2000 TOTO Kids article on female urinals Background Information on female urinals Toilets Products introduced in 1951 Urine Urinals
https://en.wikipedia.org/wiki/AVR32
AVR32 is a 32-bit RISC microcontroller architecture produced by Atmel. The microcontroller architecture was designed by a handful of people educated at the Norwegian University of Science and Technology, including lead designer Øyvind Strøm and CPU architect Erik Renno in Atmel's Norwegian design center. Most instructions are executed in a single-cycle. The multiply–accumulate unit can perform a 32-bit × 16-bit + 48-bit arithmetic operation in two cycles (result latency), issued once per cycle. It does not resemble the 8-bit AVR microcontroller family, even though they were both designed at Atmel Norway, in Trondheim. Some of the debug-tools are similar. Support for AVR32 has been dropped from Linux as of kernel 4.12; Atmel has switched mostly to M variants of the ARM architecture. Architecture The AVR32 has at least two micro-architectures, the AVR32A and AVR32B. These differ in the instruction set architecture, register configurations and the use of caches for instructions and data. The AVR32A CPU cores are for inexpensive applications. They do not provide dedicated hardware registers for shadowing the register file, status and return address in interrupts. This saves chip area at the expense of slower interrupt-handling. The AVR32B CPU cores are designed for fast interrupts. They have dedicated registers to hold these values for interrupts, exceptions and supervisor calls. The AVR32B cores also support a Java virtual machine in hardware. The AVR32 instruction set has 16-bit (compact) and 32-bit (extended) instructions, similar to e.g. some ARM, with several specialized instructions not found in older ARMv5 or ARMv6 or MIPS32. Several U.S. patents are filed for the AVR32 ISA and design platform. Just like the AVR 8-bit microcontroller architecture, the AVR32 was designed for high code density (packing much function in few instructions) and fast instructions with few clock cycles. Atmel used the independent benchmark consortium EEMBC to benchmark the a
https://en.wikipedia.org/wiki/Nuclear%20gene
A nuclear gene is a gene whose physical DNA nucleotide sequence is located in the cell nucleus of a eukaryote. The term is used to distinguish nuclear genes from genes found in mitochondria or chloroplasts. The vast majority of genes in eukaryotes are nuclear. Endosymbiotic theory Mitochondria and plastids evolved from free-living prokaryotes into current cytoplasmic organelles through endosymbiotic evolution. Mitochondria are thought to be necessary for eukaryotic life to exist. They are known as the cell's powerhouses because they provide the majority of the energy or ATP required by the cell. The mitochondrial genome (mtDNA) is replicated separately from the host genome. Human mtDNA codes for 13 proteins, most of which are involved in oxidative phosphorylation (OXPHOS). The nuclear genome encodes the remaining mitochondrial proteins, which are then transported into the mitochondria. The genomes of these organelles have become far smaller than those of their free-living predecessors. This is mostly due to the widespread transfer of genes from prokaryote progenitors to the nuclear genome, followed by their elimination from organelle genomes. On evolutionary timescales, the continuous entry of organelle DNA into the nucleus has provided novel nuclear genes. Endosymbiotic organelle interactions Though separated from one another within the cell, nuclear genes and those of mitochondria and chloroplasts can affect each other in a number of ways. Nuclear genes play major roles in the expression of chloroplast genes and mitochondrial genes. Additionally, gene products of mitochondria can themselves affect the expression of genes within the cell nucleus. This can be done through metabolites as well as through certain peptides translocating from the mitochondria to the nucleus, where they can then affect gene expression. Structure Eukaryotic genomes have distinct higher-order chromatin structures that are closely packaged and functionally related to gene expression. Chroma
https://en.wikipedia.org/wiki/Sentence%20spacing
Sentence spacing concerns how spaces are inserted between sentences in typeset text and is a matter of typographical convention. Since the introduction of movable-type printing in Europe, various sentence spacing conventions have been used in languages with a Latin alphabet. These include a normal word space (as between the words in a sentence), a single enlarged space, and two full spaces. Until the 20th century, publishing houses and printers in many countries used additional space between sentences. There were exceptions to this traditional spacing method—some printers used spacing between sentences that was no wider than word spacing. This was French spacing—a term synonymous with single-space sentence spacing until the late 20th century. With the introduction of the typewriter in the late 19th century, typists used two spaces between sentences to mimic the style used by traditional typesetters. While wide sentence spacing was phased out in the printing industry in the mid-20th century, the practice continued on typewriters and later on computers. Perhaps because of this, many modern sources now incorrectly claim that wide spacing was created for the typewriter. The desired or correct sentence spacing is often debated, but most sources now state that an additional space is not necessary or desirable. From around 1950, single sentence spacing became standard in books, magazines, and newspapers, and the majority of style guides that use a Latin-derived alphabet as a language base now prescribe or recommend the use of a single space after the concluding punctuation of a sentence. However, some sources still state that additional spacing is correct or acceptable. Some people preferred double sentence spacing because that was how they were taught to type. The few direct studies conducted since 2002 have produced inconclusive results as to which convention is more readable. History Traditional typesetting Shortly after the invention of movable type, highly variab
https://en.wikipedia.org/wiki/IPEX%20syndrome
Immunodysregulation polyendocrinopathy enteropathy X-linked syndrome (IPEX syndrome) is a rare autoimmune disease. It is one of the autoimmune polyendocrine syndromes. Most often, IPEX presents with autoimmune enteropathy, dermatitis (eczema), and autoimmune endocrinopathy (most often Type 1 diabetes), but other presentations exist. IPEX is caused by mutations in the gene FOXP3, which encodes transcription factor forkhead box P3 (FOXP3). FOXP3 is widely considered to be the master regulator of the regulatory T cell (Treg) lineage. FOXP3 mutation can lead to the dysfunction of CD4+ Tregs. In healthy people, Tregs maintain immune homeostasis. When there is a deleterious FOXP3 mutation, Tregs do not function properly and cause autoimmunity. IPEX onset usually happens in infancy. If left untreated, it is often fatal by the age of 2 or 3. A bone marrow transplant is generally considered the best treatment option. IPEX exclusively affects males and is inherited in an X-linked recessive manner; female carriers of pathogenic FOXP3 mutations do not have symptoms and no female cases are known. Presentation Classical triad The classical triad describes the most common symptoms of IPEX: intractable diarrhea, type 1 diabetes, and eczema. Symptoms usually begin shortly after birth. Other symptoms include: thyroid disease, kidney dysfunction, blood disorders, frequent infections, autoimmune hemolytic anemia, and food allergies, among others. Endocrinopathy The most common endocrinopathy associated with IPEX is type 1 diabetes, especially neonatal diabetes. In this type of diabetes, the immune system attacks insulin-producing cells. This makes the pancreas unable to produce insulin. Diabetes can permanently damage the pancreas. Thyroid disorders are also common. Enteropathy The most common enteropathy associated with IPEX is intractable diarrhea. Vomiting and gastritis are also common. Other manifestations include Celiac disease, ulcerative colitis, and ileus. Skin man