source: string (31–203 characters)
text: string (28–2k characters)
https://en.wikipedia.org/wiki/Discrete%20Laplace%20operator
In mathematics, the discrete Laplace operator is an analog of the continuous Laplace operator, defined so that it has meaning on a graph or a discrete grid. For the case of a finite-dimensional graph (having a finite number of edges and vertices), the discrete Laplace operator is more commonly called the Laplacian matrix. The discrete Laplace operator occurs in physics problems such as the Ising model and loop quantum gravity, as well as in the study of discrete dynamical systems. It is also used in numerical analysis as a stand-in for the continuous Laplace operator. Common applications include image processing, where it is known as the Laplace filter, and in machine learning for clustering and semi-supervised learning on neighborhood graphs. Definitions Graph Laplacians There are various definitions of the discrete Laplacian for graphs, differing by sign and scale factor (sometimes one averages over the neighboring vertices, other times one just sums; this makes no difference for a regular graph). The traditional definition of the graph Laplacian, given below, corresponds to the negative continuous Laplacian on a domain with a free boundary. Let G = (V, E) be a graph with vertices V and edges E. Let φ : V → R be a function of the vertices taking values in a ring. Then, the discrete Laplacian Δ acting on φ is defined by (Δφ)(v) = Σ_{w : d(w, v) = 1} [φ(v) - φ(w)], where d(w, v) is the graph distance between vertices w and v. Thus, this sum is over the nearest neighbors of the vertex v. For a graph with a finite number of edges and vertices, this definition is identical to that of the Laplacian matrix. That is, φ can be written as a column vector; and so Δφ is the product of the column vector and the Laplacian matrix, while (Δφ)(v) is just the vth entry of the product vector. If the graph has weighted edges, that is, a weighting function γ : E → R is given, then the definition can be generalized to (Δ_γ φ)(v) = Σ_{w : d(w, v) = 1} γ_{wv} [φ(v) - φ(w)], where γ_{wv} is the weight value on the edge wv. Closely related to the discrete Laplacian is the averaging operator: (Mφ)(v) = (1 / deg v) Σ_{w : d(w, v) = 1} φ(w). Mesh Laplacians In addition to considering
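For the finite case the definition above reduces to the Laplacian matrix L = D - A. The sketch below is a minimal illustration, assuming an unweighted, undirected graph given by a NumPy adjacency matrix; the helper name graph_laplacian is our own, not a standard API.

```python
import numpy as np

def graph_laplacian(adj):
    """Laplacian matrix L = D - A of an undirected graph given by its
    symmetric (0/1 or weighted) adjacency matrix."""
    adj = np.asarray(adj, dtype=float)
    degree = np.diag(adj.sum(axis=1))   # D: (weighted) vertex degrees
    return degree - adj

# Path graph on 4 vertices: 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
L = graph_laplacian(A)

# Applying L to a vertex function phi sums phi(v) - phi(w) over the
# nearest neighbours w of each vertex v, as in the definition above.
phi = np.array([0.0, 1.0, 4.0, 9.0])
print(L @ phi)          # discrete Laplacian of phi at each vertex
print(L.sum(axis=1))    # rows sum to zero: constants lie in the kernel
```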
https://en.wikipedia.org/wiki/Dual%20lattice
In the theory of lattices, the dual lattice is a construction analogous to that of a dual vector space. In certain respects, the geometry of the dual lattice of a lattice is the reciprocal of the geometry of , a perspective which underlies many of its uses. Dual lattices have many applications inside of lattice theory, theoretical computer science, cryptography and mathematics more broadly. For instance, it is used in the statement of the Poisson summation formula, transference theorems provide connections between the geometry of a lattice and that of its dual, and many lattice algorithms exploit the dual lattice. For an article with emphasis on the physics / chemistry applications, see Reciprocal lattice. This article focuses on the mathematical notion of a dual lattice. Definition Let be a lattice. That is, for some matrix . The dual lattice is the set of linear functionals on which take integer values on each point of : If is identified with using the dot-product, we can write It is important to restrict to vectors in the span of , otherwise the resulting object is not a lattice. Despite this identification of ambient Euclidean spaces, it should be emphasized that a lattice and its dual are fundamentally different kinds of objects; one consists of vectors in Euclidean space, and the other consists of a set of linear functionals on that space. Along these lines, one can also give a more abstract definition as follows: However, we note that the dual is not considered just as an abstract Abelian group of functionals, but comes with a natural inner product: , where is an orthonormal basis of . (Equivalently, one can declare that, for an orthonormal basis of , the dual vectors , defined by are an orthonormal basis.) One of the key uses of duality in lattice theory is the relationship of the geometry of the primal lattice with the geometry of its dual, for which we need this inner product. In the concrete description given above, the inner product
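As a small illustration of the definition (assuming a full-rank lattice given by its basis vectors as the columns of a matrix B, so that a dual basis is B(BᵀB)⁻¹; the function name is ours), the sketch below computes a dual basis and checks the defining integrality property.

```python
import numpy as np

def dual_basis(B):
    """Basis of the dual lattice of L = B Z^n (columns of B are the
    lattice basis).  For full-rank B this is B (B^T B)^{-1}; when B is
    square it reduces to the inverse transpose of B."""
    B = np.asarray(B, dtype=float)
    return B @ np.linalg.inv(B.T @ B)

# A lattice in R^2 with basis vectors (2, 0) and (1, 3) as columns.
B = np.array([[2.0, 1.0],
              [0.0, 3.0]])
D = dual_basis(B)

# Defining property: each dual basis vector pairs integrally with the
# primal basis, <d_i, b_j> = delta_ij.
print(np.round(D.T @ B, 10))

# Any dual vector takes integer values on every lattice point.
lattice_point = B @ np.array([3, -2])     # an arbitrary point of L
print(D.T @ lattice_point)                # integer entries
```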
https://en.wikipedia.org/wiki/Food%20intolerance
Food intolerance is a detrimental reaction, often delayed, to a food, beverage, food additive, or compound found in foods that produces symptoms in one or more body organs and systems, but generally refers to reactions other than food allergy. Food hypersensitivity is used to refer broadly to both food intolerances and food allergies. Food allergies are immune reactions, typically an IgE reaction caused by the release of histamine but also encompassing non-IgE immune responses. This mechanism causes allergies to typically give immediate reaction (a few minutes to a few hours) to foods. Food intolerances can be classified according to their mechanism. Intolerance can result from the absence of specific chemicals or enzymes needed to digest a food substance, as in hereditary fructose intolerance. It may be a result of an abnormality in the body's ability to absorb nutrients, as occurs in fructose malabsorption. Food intolerance reactions can occur to naturally occurring chemicals in foods, as in salicylate sensitivity. Drugs sourced from plants, such as aspirin, can also cause these kinds of reactions. Definitions Food hypersensitivity is used to refer broadly to both food intolerances and food allergies. There are a variety of earlier terms which are no longer in use such as "pseudo-allergy". Food intolerance reactions can include pharmacologic, metabolic, and gastro-intestinal responses to foods or food compounds. Food intolerance does not include either psychological responses or foodborne illness. A non-allergic food hypersensitivity is an abnormal physiological response. It can be difficult to determine the poorly tolerated substance as reactions can be delayed, dose-dependent, and a particular reaction-causing compound may be found in many foods. Metabolic food reactions are due to inborn or acquired errors of metabolism of nutrients, such as in lactase deficiency, phenylketonuria and favism. Pharmacological reactions are generally due to low-molecular-we
https://en.wikipedia.org/wiki/Distance%20modulus
The distance modulus is a way of expressing distances that is often used in astronomy. It describes distances on a logarithmic scale based on the astronomical magnitude system. Definition The distance modulus μ = m - M is the difference between the apparent magnitude m (ideally, corrected from the effects of interstellar absorption) and the absolute magnitude M of an astronomical object. It is related to the luminosity distance d in parsecs by: log10(d) = 1 + μ/5, equivalently μ = 5 log10(d) - 5. This definition is convenient because the observed brightness of a light source is related to its distance by the inverse square law (a source twice as far away appears one quarter as bright) and because brightnesses are usually expressed not directly, but in magnitudes. Absolute magnitude is defined as the apparent magnitude of an object when seen at a distance of 10 parsecs. If a light source has flux F(d) when observed from a distance of d parsecs, and flux F(10) when observed from a distance of 10 parsecs, the inverse-square law is then written as: F(10) = F(d) (d / 10 pc)^2. The magnitudes and fluxes are related by: m = -2.5 log10 F(d) and M = -2.5 log10 F(10), up to a common additive constant. Substituting and rearranging, we get: μ = m - M = 5 log10(d / 10 pc) = 5 log10(d) - 5, which means that the apparent magnitude is the absolute magnitude plus the distance modulus. Isolating d from the equation μ = 5 log10(d) - 5 finds that the distance (or, the luminosity distance) in parsecs is given by d = 10^(μ/5 + 1). The uncertainty in the distance in parsecs (δd) can be computed from the uncertainty in the distance modulus (δμ) using δd = 0.2 ln(10) 10^(0.2μ + 1) δμ ≈ 0.461 d δμ, which is derived using standard error analysis. Different kinds of distance moduli Distance is not the only quantity relevant in determining the difference between absolute and apparent magnitude. Absorption is another important factor, and it may even be a dominant one in particular cases (e.g., in the direction of the Galactic Center). Thus a distinction is made between distance moduli uncorrected for interstellar absorption, the values of which would overestimate distances if used naively, and absorption-corrected moduli. The first ones are termed visual distance moduli and are deno
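A minimal numeric sketch of these relations, assuming the forms μ = 5 log10(d/10 pc), d = 10^(μ/5 + 1), and the error-propagation factor 0.2 ln(10) given above; the function names and the Large Magellanic Cloud example value are illustrative.

```python
import math

def distance_modulus(d_parsec):
    """mu = m - M = 5 log10(d / 10 pc)."""
    return 5.0 * math.log10(d_parsec / 10.0)

def distance_from_modulus(mu, sigma_mu=None):
    """Invert the distance modulus: d = 10**(mu/5 + 1) parsecs.
    If an uncertainty sigma_mu is given, propagate it with
    sigma_d = 0.2 * ln(10) * d * sigma_mu."""
    d = 10.0 ** (mu / 5.0 + 1.0)
    if sigma_mu is None:
        return d
    return d, 0.2 * math.log(10.0) * d * sigma_mu

# The Large Magellanic Cloud has a distance modulus of roughly 18.5 mag.
print(distance_from_modulus(18.5, sigma_mu=0.1))   # ~50 kpc, ~2.3 kpc uncertainty
print(distance_modulus(50000.0))                   # ~18.5 mag back again
```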
https://en.wikipedia.org/wiki/Signaling%20game
In game theory, a signaling game is a simple type of a dynamic Bayesian game. The essence of a signalling game is that one player takes an action, the signal, to convey information to another player, where sending the signal is more costly if they are conveying false information. A manufacturer, for example, might provide a warranty for its product in order to signal to consumers that its product is unlikely to break down. The classic example is of a worker who acquires a college degree not because it increases their skill, but because it conveys their ability to employers. A simple signalling game would have two players, the sender and the receiver. The sender has one of two types that might be called "desirable" and "undesirable" with different payoff functions, where the receiver knows the probability of each type but not which one this particular sender has. The receiver has just one possible type. The sender moves first, choosing an action called the "signal" or "message" (though the term "message" is more often used in non-signalling "cheap talk" games where sending messages is costless). The receiver moves second, after observing the signal. The two players receive payoffs dependent on the sender's type, the message chosen by the sender and the action chosen by the receiver. The tension in the game is that the sender wants to persuade the receiver that they have the desirable type, and they will try to choose a signal to do that. Whether this succeeds depends on whether the undesirable type would send the same signal, and how the receiver interprets the signal. Perfect Bayesian equilibrium The equilibrium concept that is relevant for signaling games is the perfect Bayesian equilibrium, a refinement of Bayesian Nash equilibrium. Nature chooses the sender to have type with probability . The sender then chooses the probability with which to take signalling action , which can be written as for each possible The receiver observes the signal
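The sketch below is a toy numeric check, not part of the article: it tests the two incentive-compatibility constraints of a separating outcome in a Spence-style education signaling game, with illustrative wages and per-unit signaling costs (all names and numbers are our own assumptions).

```python
def separating_equilibrium_holds(w_signal, w_none, cost_desirable, cost_undesirable, e):
    """Check the two incentive constraints of a separating outcome:
    the desirable type prefers to send the costly signal e, and the
    undesirable type (with higher signaling cost) prefers not to.
    Toy Spence-style model; parameters are illustrative."""
    desirable_signals    = w_signal - cost_desirable   * e >= w_none
    undesirable_abstains = w_none  >= w_signal - cost_undesirable * e
    return desirable_signals and undesirable_abstains

# Wages 10 (signal observed) vs 6 (no signal); signaling costs 1 per unit
# for the desirable type and 3 per unit for the undesirable type.
# Any education level e in [4/3, 4] then separates the two types.
for e in (1.0, 2.0, 5.0):
    print(e, separating_equilibrium_holds(10, 6, 1, 3, e))
```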
https://en.wikipedia.org/wiki/Bicubic%20interpolation
In mathematics, bicubic interpolation is an extension of cubic spline interpolation (a method of applying cubic interpolation to a data set) for interpolating data points on a two-dimensional regular grid. The interpolated surface (meaning the kernel shape, not the image) is smoother than corresponding surfaces obtained by bilinear interpolation or nearest-neighbor interpolation. Bicubic interpolation can be accomplished using either Lagrange polynomials, cubic splines, or cubic convolution algorithm. In image processing, bicubic interpolation is often chosen over bilinear or nearest-neighbor interpolation in image resampling, when speed is not an issue. In contrast to bilinear interpolation, which only takes 4 pixels (2×2) into account, bicubic interpolation considers 16 pixels (4×4). Images resampled with bicubic interpolation can have different interpolation artifacts, depending on the b and c values chosen. Computation Suppose the function values and the derivatives , and are known at the four corners , , , and of the unit square. The interpolated surface can then be written as The interpolation problem consists of determining the 16 coefficients . Matching with the function values yields four equations: Likewise, eight equations for the derivatives in the and the directions: And four equations for the mixed partial derivative: The expressions above have used the following identities: This procedure yields a surface on the unit square that is continuous and has continuous derivatives. Bicubic interpolation on an arbitrarily sized regular grid can then be accomplished by patching together such bicubic surfaces, ensuring that the derivatives match on the boundaries. Grouping the unknown parameters in a vector and letting the above system of equations can be reformulated into a matrix for the linear equation . Inverting the matrix gives the more useful linear equation , where which allows to be calculated qui
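One concrete route mentioned above is the cubic convolution algorithm. The sketch below is a minimal separable implementation using the Keys kernel with a = -0.5; the function names and the border-clamping choice are our own, and it illustrates the 4x4 neighbourhood rather than the 16-coefficient matrix solve.

```python
import numpy as np

def cubic_kernel(x, a=-0.5):
    """Keys cubic convolution kernel (a = -0.5 is a common choice)."""
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def bicubic_sample(img, y, x):
    """Sample image `img` at real coordinates (y, x) from its 4x4
    neighbourhood, applying the cubic kernel separably in each axis."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    value = 0.0
    for j in range(-1, 3):            # 4 rows of the neighbourhood
        for i in range(-1, 3):        # 4 columns
            yy = min(max(y0 + j, 0), img.shape[0] - 1)   # clamp at borders
            xx = min(max(x0 + i, 0), img.shape[1] - 1)
            weight = cubic_kernel(y - (y0 + j)) * cubic_kernel(x - (x0 + i))
            value += weight * img[yy, xx]
    return value

img = np.arange(36, dtype=float).reshape(6, 6)
print(bicubic_sample(img, 2.5, 2.5))   # interpolated value between pixels
```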
https://en.wikipedia.org/wiki/Levi%20graph
In combinatorial mathematics, a Levi graph or incidence graph is a bipartite graph associated with an incidence structure. From a collection of points and lines in an incidence geometry or a projective configuration, we form a graph with one vertex per point, one vertex per line, and an edge for every incidence between a point and a line. They are named for Friedrich Wilhelm Levi, who wrote about them in 1942. The Levi graph of a system of points and lines usually has girth at least six: Any 4-cycles would correspond to two lines through the same two points. Conversely any bipartite graph with girth at least six can be viewed as the Levi graph of an abstract incidence structure. Levi graphs of configurations are biregular, and every biregular graph with girth at least six can be viewed as the Levi graph of an abstract configuration. Levi graphs may also be defined for other types of incidence structure, such as the incidences between points and planes in Euclidean space. For every Levi graph, there is an equivalent hypergraph, and vice versa. Examples The Desargues graph is the Levi graph of the Desargues configuration, composed of 10 points and 10 lines. There are 3 points on each line, and 3 lines passing through each point. The Desargues graph can also be viewed as the generalized Petersen graph G(10,3) or the bipartite Kneser graph with parameters 5,2. It is 3-regular with 20 vertices. The Heawood graph is the Levi graph of the Fano plane. It is also known as the (3,6)-cage, and is 3-regular with 14 vertices. The Möbius–Kantor graph is the Levi graph of the Möbius–Kantor configuration, a system of 8 points and 8 lines that cannot be realized by straight lines in the Euclidean plane. It is 3-regular with 16 vertices. The Pappus graph is the Levi graph of the Pappus configuration, composed of 9 points and 9 lines. Like the Desargues configuration there are 3 points on each line and 3 lines passing through each point. It is 3-regular with 18 vertices. Th
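A small sketch of the construction for the Heawood example above; the point/line labelling of the Fano plane and the variable names are our own choices. It builds the incidence graph and confirms it is 3-regular and that no two lines share two points (no 4-cycles).

```python
from itertools import combinations

# The seven lines of the Fano plane over points 0..6 (one standard labelling).
fano_lines = [
    {0, 1, 2}, {0, 3, 4}, {0, 5, 6},
    {1, 3, 5}, {1, 4, 6}, {2, 3, 6}, {2, 4, 5},
]

# Levi (incidence) graph: one vertex per point, one per line,
# and an edge for every point-line incidence.
edges = {(p, 7 + i) for i, line in enumerate(fano_lines) for p in line}

degree = {v: 0 for v in range(14)}
for p, l in edges:
    degree[p] += 1
    degree[l] += 1
print(sorted(set(degree.values())))   # [3]: the Heawood graph is 3-regular

# Girth >= 6 check: no two distinct lines share two points (no 4-cycles).
print(all(len(a & b) <= 1 for a, b in combinations(fano_lines, 2)))
```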
https://en.wikipedia.org/wiki/Particle%20filter
Particle filters, or sequential Monte Carlo methods, are a set of Monte Carlo algorithms used to find approximate solutions for filtering problems for nonlinear state-space systems, such as signal processing and Bayesian statistical inference. The filtering problem consists of estimating the internal states in dynamical systems when partial observations are made and random perturbations are present in the sensors as well as in the dynamical system. The objective is to compute the posterior distributions of the states of a Markov process, given the noisy and partial observations. The term "particle filters" was first coined in 1996 by Pierre Del Moral about mean-field interacting particle methods used in fluid mechanics since the beginning of the 1960s. The term "Sequential Monte Carlo" was coined by Jun S. Liu and Rong Chen in 1998. Particle filtering uses a set of particles (also called samples) to represent the posterior distribution of a stochastic process given the noisy and/or partial observations. The state-space model can be nonlinear and the initial state and noise distributions can take any form required. Particle filter techniques provide a well-established methodology for generating samples from the required distribution without requiring assumptions about the state-space model or the state distributions. However, these methods do not perform well when applied to very high-dimensional systems. Particle filters update their prediction in an approximate (statistical) manner. The samples from the distribution are represented by a set of particles; each particle has a likelihood weight assigned to it that represents the probability of that particle being sampled from the probability density function. Weight disparity leading to weight collapse is a common issue encountered in these filtering algorithms. However, it can be mitigated by including a resampling step before the weights become uneven. Several adaptive resampling criteria can be used including the
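A minimal bootstrap (sequential importance resampling) particle filter is sketched below on a common 1-D nonlinear toy model; the model, particle count, and noise levels are illustrative choices, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_particle_filter(observations, n_particles=500,
                              process_std=1.0, obs_std=1.0):
    """Minimal bootstrap particle filter for the toy model
       x_t = 0.5*x_{t-1} + 25*x_{t-1}/(1 + x_{t-1}^2) + process noise
       y_t = x_t**2 / 20 + observation noise."""
    particles = rng.normal(0.0, 1.0, n_particles)     # samples from the prior
    estimates = []
    for y in observations:
        # 1. Propagate each particle through the nonlinear dynamics.
        particles = (0.5 * particles
                     + 25 * particles / (1 + particles**2)
                     + rng.normal(0.0, process_std, n_particles))
        # 2. Weight each particle by the likelihood of the observation.
        weights = np.exp(-0.5 * ((y - particles**2 / 20) / obs_std) ** 2)
        weights /= weights.sum()
        # 3. Posterior-mean estimate, then resample to fight weight collapse.
        estimates.append(np.sum(weights * particles))
        particles = rng.choice(particles, size=n_particles, p=weights)
    return np.array(estimates)

# Simulate a short trajectory of the same model and filter it.
x, ys = 0.0, []
for _ in range(20):
    x = 0.5 * x + 25 * x / (1 + x**2) + rng.normal(0, 1.0)
    ys.append(x**2 / 20 + rng.normal(0, 1.0))
print(bootstrap_particle_filter(ys))
```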
https://en.wikipedia.org/wiki/Hermannskogel
The Hermannskogel () is a hill in Döbling, the 19th district of Vienna. At 542 metres above sea level, it is the highest natural point of Vienna. It lies on the border to Lower Austria. The Habsburgwarte, standing atop the Hermannskogel, marked the kilometre zero in cartographic measurements used in Austria-Hungary until 1918. Geography The Hermannskogel is a forested ridge in the Wienerwald. It is both the highest point in the Kahlengebirge and in the city of Vienna. The Hermannskogel is part of a north-eastern chain of foothills belonging to the eastern Alps. It is composed of flysch containing quartz, limestone, marl, and other conglomerates. The many cliff-like layers on the south-western approach to the Hermannskogel clearly show the hill's geological make-up. The Kahlenberg and Leopoldsberg, behind which lie the Wiener Pforte, where the Danube breaks through the Wienerwald, are three kilometres to the east of the Hermannskogel. The Vogelsangberg stands nearby, as does the Dreimarkstein (to the southwest). History The first documentary reference to the Hermannskogel can be found in the Klosterneuburg Monastery’s tithe register. It dates from 1355 and names the hill hermannschobel. The name is composed of the personal name Hermann, which was common in the Middle Ages, and Kobel (which appears elsewhere as Kogel), a common designation for cone-shaped hills. In the Middle Ages, the Hermannskogel was covered in vineyards. On the side of the hill, in a depression between Sievering and Weidling, is the probable former site of the village Kogelbrunn, which lived from viticulture and which is mentioned for the first time in a document from 1237 as chogelbrunne. In 1256, Albero von Feldsberg accorded the village to the Klosterneuburg Monastery. In 1346, it was still inhabited, but it was destroyed at the end of the 15th century, probably a victim of Magyar raids. The village's demise also spelt the end of the vineyards and the Hermannskogel was reclaimed by woo
https://en.wikipedia.org/wiki/Binary%20XML
Various binary formats have been proposed as compact representations for XML (Extensible Markup Language). Using a binary XML format generally reduces the verbosity of XML documents thereby also reducing the cost of parsing, but hinders the use of ordinary text editors and third-party tools to view and edit the document. There are several competing formats, but none has yet emerged as a de facto standard, although the World Wide Web Consortium adopted EXI as a Recommendation on 10 March 2011. Binary XML is typically used in applications where the performance of standard XML is insufficient, but the ability to convert the document to and from a form (XML) which is easily viewed and edited is valued. Other advantages may include enabling random access and indexing of XML documents. The major challenge for binary XML is to create a single, widely adopted standard. The International Organization for Standardization (ISO) and the International Telecommunication Union (ITU) published the Fast Infoset standard in 2007 and 2005, respectively. Another standard (ISO/IEC 23001-1), known as Binary MPEG format for XML (BiM), has been standardized by the ISO in 2001. BiM is used by many ETSI standards for digital TV and mobile TV. The Open Geospatial Consortium provides a Binary XML Encoding Specification (currently a Best Practice Paper) optimized for geo-related data (GML) and also a benchmark to compare performance of Fast InfoSet, EXI, BXML and deflate to encode/decode AIXM. Alternatives to binary XML include using traditional file compression methods on XML documents (for example gzip); or using an existing standard such as ASN.1. Traditional compression methods, however, offer only the advantage of reduced file size, without the advantage of decreased parsing time or random access. ASN.1/PER forms the basis of Fast Infoset, which is one binary XML standard. There are also hybrid approaches (e.g., VTD-XML) that attach a small index file to an XML document to eliminate t
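As a rough illustration of the gzip alternative discussed above (the sample document and the resulting sizes are purely illustrative), the snippet below shows the size reduction while making the trade-off visible: the consumer still has to decompress and text-parse the whole document, with no random access.

```python
import gzip
import xml.etree.ElementTree as ET

# A deliberately repetitive XML document (verbose markup compresses well).
xml = ("<records>" +
       "".join(f"<record id='{i}'><value>{i % 7}</value></record>"
               for i in range(1000)) +
       "</records>").encode("utf-8")

compressed = gzip.compress(xml)
print(len(xml), len(compressed))          # raw vs gzip-compressed size

# The trade-off: to read anything back, the whole document must be
# decompressed and then parsed as ordinary text XML.
root = ET.fromstring(gzip.decompress(compressed))
print(len(root))                          # 1000 <record> elements
```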
https://en.wikipedia.org/wiki/Kingsoft
Kingsoft Corporation is a Chinese software company based in Beijing. Kingsoft operates four subsidiaries: Seasun for video game development, Cheetah Mobile for mobile internet apps, Kingsoft Cloud for cloud storage platforms, and WPS for office software, including WPS Office. It also produced security software known as Kingsoft Security. The most popular game developed by Kingsoft is JX Online 3, launched in 2009. Kingsoft owns data centers in mainland China, Hong Kong, Russia, Southeast Asia, and North America. The company is listed on the Hong Kong Stock Exchange. History The company was founded in 1988 by Qiu Bojun. In 2011, Qiu sold his 15.68% stake in Kingsoft to Tencent.
https://en.wikipedia.org/wiki/Net%20neutrality
Network neutrality, often referred to as net neutrality, is the principle that Internet service providers (ISPs) must treat all Internet communications equally, offering users and online content providers consistent rates irrespective of content, website, platform, application, type of equipment, source address, destination address, or method of communication (i.e., without price discrimination). Supporters of net neutrality argue that it prevents ISPs from filtering Internet content without a court order, fosters freedom of speech and democratic participation, promotes competition and innovation, prevents dubious services, maintains the end-to-end principle, and that users would be intolerant of slow-loading websites. Opponents of net neutrality argue that it reduces investment, deters competition, increases taxes, imposes unnecessary regulations, prevents the Internet from being accessible to poor people, prevents Internet traffic from being allocated to the most needed users, that large ISPs already have a performance advantage over smaller providers, and that there is already significant competition among ISPs with few competitive issues. Etymology The term was coined by Columbia University media law professor Tim Wu in 2003 as an extension of the longstanding concept of a common carrier which was used to describe the role of telephone systems. Regulatory considerations Net neutrality regulations may be referred to as uncommon carrier regulations. Net neutrality does not block all abilities that ISPs have to impact their customers' services. Opt-in and opt-out services exist on the end user side, and filtering can be done locally, as in the filtering of sensitive material for minors. Research suggests that a combination of policy instruments can help realize the range of valued political and economic objectives central to the network neutrality debate. Combined with public opinion, this has led some governments to regulate broadband Internet services as a
https://en.wikipedia.org/wiki/Cytoplasmic%20hybrid
A cytoplasmic hybrid (or cybrid, a portmanteau of the two words) is a eukaryotic cell line produced by the fusion of a whole cell with a cytoplast. Cytoplasts are enucleated cells. This enucleation can be effected by simultaneous application of centrifugal force and treatment of the cell with an agent that disrupts the cytoskeleton. A special case of cybrid formation involves the use of rho-zero cells as the whole cell partner in the fusion. Rho-zero cells are cells which have been depleted of their own mitochondrial DNA by prolonged incubation with ethidium bromide, a chemical which inhibits mitochondrial DNA replication. The rho-zero cells do retain mitochondria and can grow in rich culture medium with certain supplements. They do retain their own nuclear genome. A cybrid is then a hybrid cell which mixes the nuclear genes from one cell with the mitochondrial genes from another cell. Using this powerful tool, it makes it possible to dissociate contribution from the mitochondrial genes vs that of the nuclear genes. Cybrids are valuable in mitochondrial research and have been used to provide suggestive evidence of mitochondrial involvement in Alzheimer's disease, Parkinson's disease, and other conditions. Legal issues Research utilizing cybrid embryos has been hotly contested due to the ethical implications of further cybrid research. Recently, the House of Lords passed the Human Fertilisation and Embryology Act 2008, which allows the creation of mixed human-animal embryos for medical purposes only. Such cybrids are 99.9% human and 0.1% animal. A cybrid may be kept for a maximum of 14 days, owing to the development of the brain and spinal cord, after which time the cybrid must be destroyed. During the two-week period, stem cells may be harvested from the cybrid, for research or medical purposes. Under no circumstances may a cybrid be implanted into a human uterus. References Further reading Human Fertilisation and Embryology Act at the Wellcome Trust Exte
https://en.wikipedia.org/wiki/Tsunami%20Aid
Tsunami Aid: A Concert of Hope was a worldwide benefit held for the tsunami victims of the 2004 Indian Ocean earthquake. It was broadcast on NBC and its affiliated networks (USA Network, Bravo, PAX, MSNBC, CNBC, Sci-Fi, Trio, Telemundo and other NBC Universal stations) and was heard on Clear Channel radio stations. The benefit was led by the actor George Clooney on January 15, 2005, and was similar to America: A Tribute to Heroes (set up after the September 11, 2001 attacks). Digital media innovator Jay Samit enabled viewers to purchase digital downloads of the performances as a new way to raise money for the cause, including live recordings by Elton John, Madonna, Sheryl Crow, Eric Clapton, Roger Waters and Diana Ross. Taking a cue from Bob Geldof (the man who had organized the Live Aid concerts for African famine relief), it featured famous Hollywood entertainers and former American presidents George H. W. Bush and Bill Clinton. It was two hours long, with stories and entertainment from a wide array of Hollywood notables, including Brad Pitt and Donald Trump. It was estimated to have raised at least five million dollars by the end of the broadcast.
https://en.wikipedia.org/wiki/DSniff
dSniff is a set of password sniffing and network traffic analysis tools written by security researcher and startup founder Dug Song to parse different application protocols and extract relevant information. dsniff, filesnarf, mailsnarf, msgsnarf, urlsnarf, and webspy passively monitor a network for interesting data (passwords, e-mail, files, etc.). arpspoof, dnsspoof, and macof facilitate the interception of network traffic normally unavailable to an attacker (e.g., due to layer-2 switching). sshmitm and webmitm implement active man-in-the-middle attacks against redirected SSH and HTTPS sessions by exploiting weak bindings in ad-hoc PKI. Overview The applications sniff usernames and passwords, web pages being visited, contents of an email, etc. As the name implies, dsniff is a network sniffer, but it can also be used to disrupt the normal behavior of switched networks and cause network traffic from other hosts on the same network segment to be visible, not just traffic involving the host dsniff is running on. It handles FTP, Telnet, SMTP, HTTP, POP, poppass, NNTP, IMAP, SNMP, LDAP, Rlogin, RIP, OSPF, PPTP MS-CHAP, NFS, VRRP, YP/NIS, SOCKS, X11, CVS, IRC, AIM, ICQ, Napster, PostgreSQL, Meeting Maker, Citrix ICA, Symantec pc Anywhere, NAI Sniffer, Microsoft SMB, Oracle SQL*Net, Sybase and Microsoft SQL protocols. The name "dsniff" refers both to the package as well as an included tool. The "dsniff" tool decodes passwords sent in cleartext across a switched or unswitched Ethernet network. Its man page explains that Dug Song wrote dsniff with "honest intentions - to audit my own network, and to demonstrate the insecurity of cleartext network protocols." He then requests, "Please do not abuse this software." These are the files that are configured in dsniff folder /etc/dsniff/ /etc/dsniff/dnsspoof.hosts Sample hosts file. If no host file is specified, replies will be forged for all address queries on the LAN with an answer of the local machine’s IP address. /etc/d
https://en.wikipedia.org/wiki/Electric%20Fence
For the physical barrier, see electric fence. Electric Fence (or eFence) is a memory debugger written by Bruce Perens. It consists of a library which programmers can link into their code to override the C standard library memory management functions. eFence triggers a program crash when the memory error occurs, so a debugger can be used to inspect the code that caused the error. Electric Fence is intended to find two common types of programming bugs: Overrunning the end (or beginning) of a dynamically allocated buffer Using a dynamically allocated buffer after returning it to the heap In both cases, Electric Fence causes the errant program to abort immediately via a segmentation fault. Normally, these two errors would cause heap corruption, which would manifest itself only much later, usually in unrelated ways. Thus, Electric Fence helps programmers find the precise location of memory programming errors. Electric Fence allocates at least two pages (often 8KB) for every allocated buffer. In some modes of operation, it does not deallocate freed buffers. Thus, Electric Fence vastly increases the memory requirements of programs being debugged. This leads to the recommendation that programmers should apply Electric Fence to smaller programs when possible, and should never leave Electric Fence linked against production code. Electric Fence is free software licensed under the GNU General Public License. See also Dmalloc External links Electric Fence 2.2.4 source code from Ubuntu DUMA – a fork of Electric Fence which also works for Windows eFence-2.2.2 – rpm of electric fence 2.2.2 source Free memory debuggers Free software testing tools Software testing tools Free memory management software
https://en.wikipedia.org/wiki/Kara%20Walker
Kara Elizabeth Walker (born November 26, 1969) is an American contemporary painter, silhouettist, print-maker, installation artist, filmmaker, and professor who explores race, gender, sexuality, violence, and identity in her work. She is best known for her room-size tableaux of black cut-paper silhouettes. Walker was awarded a MacArthur fellowship in 1997, at the age of 28, becoming one of the youngest ever recipients of the award. She has been the Tepper Chair in Visual Arts at the Mason Gross School of the Arts, Rutgers University since 2015. Walker is regarded as among the most prominent and acclaimed Black American artists working today. Early life and education Walker was born in 1969 in Stockton, California. Her father, Larry Walker, was a painter and professor. Her mother Gwendolyn was an administrative assistant. A 2007 review in the New York Times described her early life as calm, noting that "nothing about [Walker's] very early life would seem to have predestined her for this task. Born in 1969, she grew up in an integrated California suburb, part of a generation for whom the uplift and fervor of the civil rights movement and the want-it-now anger of Black Power were yesterday's news." When Walker was 13, her father accepted a position at Georgia State University. They settled in the city of Stone Mountain. The move was a culture shock for the young artist. In sharp contrast with the multi-cultural environment of coastal California, Stone Mountain still held Ku Klux Klan rallies. At her new high school, Walker recalls, "I was called a 'nigger,' told I looked like a monkey, accused (I didn't know it was an accusation) of being a 'Yankee.'" Walker received her BFA from the Atlanta College of Art in 1991 and her MFA from the Rhode Island School of Design in 1994. Walker found herself uncomfortable and afraid to address race within her art during her early college years, worrying it would be received as "typical" or "obvious"; however, she began introduc
https://en.wikipedia.org/wiki/Cubivore%3A%20Survival%20of%20the%20Fittest
Cubivore: Survival of the Fittest, or Cubivore for short, known in Japan as is an action-adventure video game co-developed by Saru Brunei and Intelligent Systems for the GameCube. It was originally published by Nintendo only in Japan on February 21, 2002. After Nintendo expressed intentions to not release the game in other regions in the world, Atlus USA localized the game for North America and released it on November 5, 2002. Development for Cubivore originally started for the Nintendo 64 with 64DD peripheral, but later was moved to the GameCube. The player controls a cube-shaped beast called a Cubivore, which eats other such beasts in order to mutate and become stronger. The game received mixed reviews upon release. Plot In the land of the Cubivores, the beast known as the Killer Cubivore reigns at the top of the animal food chain. This powerful tyrant and his gang of cronies have gorged themselves on the essence of the land, known in the game as "Wilderness", so much that they have absorbed some of it into themselves. Meanwhile, nature has begun to fade away, becoming drab and infertile, and the number of beasts has declined. The user-named protagonist has taken it upon himself to become King of All Cubivores, in order to challenge the Killer Cubivore and restore the Wilderness to the world. Gameplay Cubivores gameplay is an action-adventure game with a few role-playing video game elements in it. The purpose of Cubivore is to kill the Killer Cubivore and its cronies. To accomplish this, the player's Cubivore must go through several mutations, through several lifetimes "laps" and generations of "offspring". Upon attaining 100 mutations, the Cubivore can become powerful enough to produce an offspring capable of fighting the Killer Cubivore. Thus, Cubivore is a game that is meant to somewhat represent natural selection. The combat in the game is simple but strategic and often fast-paced. When facing another Cubivore, the player's job is to attack it, weaken i
https://en.wikipedia.org/wiki/Super-resolution%20imaging
Super-resolution imaging (SR) is a class of techniques that enhance (increase) the resolution of an imaging system. In optical SR the diffraction limit of systems is transcended, while in geometrical SR the resolution of digital imaging sensors is enhanced. In some radar and sonar imaging applications (e.g. magnetic resonance imaging (MRI), high-resolution computed tomography), subspace decomposition-based methods (e.g. MUSIC) and compressed sensing-based algorithms (e.g., SAMV) are employed to achieve SR over standard periodogram algorithm. Super-resolution imaging techniques are used in general image processing and in super-resolution microscopy. Basic concepts Because some of the ideas surrounding super-resolution raise fundamental issues, there is need at the outset to examine the relevant physical and information-theoretical principles: Diffraction limit: The detail of a physical object that an optical instrument can reproduce in an image has limits that are mandated by laws of physics, whether formulated by the diffraction equations in the wave theory of light or equivalently the uncertainty principle for photons in quantum mechanics. Information transfer can never be increased beyond this boundary, but packets outside the limits can be cleverly swapped for (or multiplexed with) some inside it. One does not so much “break” as “run around” the diffraction limit. New procedures probing electro-magnetic disturbances at the molecular level (in the so-called near field) remain fully consistent with Maxwell's equations. Spatial-frequency domain: A succinct expression of the diffraction limit is given in the spatial-frequency domain. In Fourier optics light distributions are expressed as superpositions of a series of grating light patterns in a range of fringe widths, technically spatial frequencies. It is generally taught that diffraction theory stipulates an upper limit, the cut-off spatial-frequency, beyond which pattern elements fail to be transferred in
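As a small numeric illustration of the cut-off described here (using the textbook Abbe form d = lambda / (2 NA); the wavelength and numerical aperture values are illustrative), the snippet below computes the resolution limit and the corresponding spatial-frequency cut-off.

```python
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Smallest resolvable spacing d = lambda / (2 NA) and the matching
    cut-off spatial frequency 1/d (textbook Abbe diffraction limit;
    the example values below are only illustrative)."""
    d = wavelength_nm / (2.0 * numerical_aperture)
    return d, 1.0 / d   # nm, cycles per nm

# Green light (520 nm) through a high-NA oil-immersion objective (NA = 1.4).
d, cutoff = abbe_limit_nm(520.0, 1.4)
print(f"resolution limit ~{d:.0f} nm, cut-off {cutoff * 1000:.1f} cycles/um")
```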
https://en.wikipedia.org/wiki/Methods%20engineering
Methods engineering is a subspecialty of industrial engineering and manufacturing engineering concerned with human integration in industrial production processes. Overview Alternatively it can be described as the design of the productive process in which a person is involved. The task of the Methods engineer is to decide where humans will be utilized in the process of converting raw materials to finished products and how workers can most effectively perform their assigned tasks. The terms operation analysis, work design and simplification, and methods engineering and corporate re-engineering are frequently used interchangeably. Lowering costs and increasing reliability and productivity are the objectives of methods engineering. Methods efficiency engineering focuses on lowering costs through productivity improvement. It investigates the output obtained from each unit of input and the speed of each machine and man. Methods quality engineering focuses on increasing quality and reliability. These objectives are met in a five step sequence as follows: Project selection, data acquisition and presentation, data analysis, development of an ideal method based on the data analysis and, finally, presentation and implementation of the method. Methods engineering topics Project selection Methods engineers typically work on projects involving new product design, products with a high cost of production to profit ratio, and products associated with having poor quality issues. Different methods of project selection include the Pareto analysis, fish diagrams, Gantt charts, PERT charts, and job/work site analysis guides. Data acquisition and presentation Data that needs to be collected are specification sheets for the product, design drawings, process plans, quantity and delivery requirements, and projections as to how the product will perform or has performed in the market. Process charts are used to describe proposed or existing way of doing work utilizing machines and men
https://en.wikipedia.org/wiki/Body-on-frame
Body-on-frame, also known as ladder frame construction, is a common motor vehicle construction method, whereby a separate body or coach is mounted on a strong and relatively rigid vehicle frame or chassis that carries the powertrain (the engine and drivetrain) and to which the wheels and their suspension, brakes, and steering are mounted. While this was the original method of building automobiles, body-on-frame construction is now used mainly for pickup trucks, large SUVs, and heavy trucks. In the late 19th century, the frames, like those of the carriages they replaced, might be made of wood (commonly ash), reinforced by steel flitch plates, but in the early 20th century, steel ladder frames or chassis rapidly became standard. Mass production of all-metal bodies began with the Budd Company and the Dodge Brothers. Mass production of all-metal bodies became general in the 1920s but Europe, with exceptions, followed almost a decade later. Europe's custom-made or "coachbuilt" cars usually contained some wood framing or used aluminium alloy castings. Towards the beginning of international automobile assembly and construction, most manufacturers created rolling chassis consisting of a powertrain, suspension, steering column and a fuel tank that was then sent to a coachbuilder that added the body, interior and upholstery to the customers specific requests. In contrast, unibody or monocoque designs, where panels within the body supported the car on its suspension, were developed by European manufacturers in the late 1920s with Budd USA (which had a number of large factories in Europe) and its technical knowhow. Because of the high cost of designing and developing these structures and the high cost of specialised machinery to make the large pressings required by this style of construction it is not used by low-volume manufacturers, who might construct an equivalent by welding steel tube to form a suitable space frame. History The Ford Model T carried the tradition of bo
https://en.wikipedia.org/wiki/Encrypted%20function
An encrypted function is an attempt to provide mobile code privacy without providing any tamper-resistant hardware. It is a method whereby mobile code can carry out cryptographic primitives even though the code is executed in untrusted environments, in which it should run autonomously. Polynomial and rational functions are encrypted such that their transformation can again be implemented as programs consisting of cleartext instructions that a processor or interpreter understands. The processor would not understand the program's function. This field of study is gaining popularity as mobile cryptography. Example Scenario: Host A has an algorithm which computes function f. A wants to send its mobile agent to B, which holds input x, to compute f(x). But A doesn't want B to learn anything about f. Scheme: Function f is encrypted in a way that results in E(f). Host A then creates another program P(E(f)), which implements E(f), and sends it to B through its agent. B then runs the agent, which computes P(E(f))(x) and returns the result to A. A then decrypts this to get f(x). Drawbacks: Finding appropriate encryption schemes that can transform arbitrary functions is a challenge. The scheme does not prevent denial of service, replay, experimental extraction and other attacks. See also Homomorphic encryption References Thomas Sander and Christian F. Tschudin. Protecting Mobile Agents Against Malicious Hosts. In G. Vigna, editor, Mobile Agents and Security, volume 1419 of Lecture Notes in Computer Science, pages 44–60. Springer-Verlag, New York, NY, 1998.
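The scheme can be made concrete when f is a polynomial and an additively homomorphic cipher is used: A encrypts the coefficients, B evaluates the resulting program on its input x without learning them, and A decrypts f(x). The sketch below is a toy illustration using a tiny, insecure Paillier instance; it is not the Sander-Tschudin construction itself, and all names and parameters are our own assumptions.

```python
from math import gcd
import random

# --- toy Paillier cryptosystem (tiny, insecure parameters: demo only) ---
p, q = 1789, 1861
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)      # lcm(p-1, q-1)
mu = pow(lam, -1, n)                              # modular inverse (works since g = n + 1)

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# --- host A: encrypt the coefficients of f(x) = 7x^2 + 3x + 5 ---
coeffs = [5, 3, 7]                                # a_0, a_1, a_2
enc_coeffs = [encrypt(a) for a in coeffs]

# --- host B: evaluate the encrypted polynomial at x without seeing f ---
# E(a)^k = E(a*k) and E(a)*E(b) = E(a+b), so the product below is E(f(x)).
x = 12
c = 1
for i, enc_a in enumerate(enc_coeffs):
    c = (c * pow(enc_a, pow(x, i), n2)) % n2

# --- host A: decrypt the returned ciphertext ---
print(decrypt(c), 7 * x**2 + 3 * x + 5)           # both equal 1049
```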
https://en.wikipedia.org/wiki/Assisted%20reproductive%20technology
Assisted reproductive technology (ART) includes medical procedures used primarily to address infertility. This subject involves procedures such as in vitro fertilization (IVF), intracytoplasmic sperm injection (ICSI), cryopreservation of gametes or embryos, and/or the use of fertility medication. When used to address infertility, ART may also be referred to as fertility treatment. ART mainly belongs to the field of reproductive endocrinology and infertility. Some forms of ART may be used with regard to fertile couples for genetic purpose (see preimplantation genetic diagnosis). ART may also be used in surrogacy arrangements, although not all surrogacy arrangements involve ART. The existence of sterility will not always require ART to be the first option to consider, as there are occasions when its cause is a mild disorder that can be solved with more conventional treatments or with behaviors based on promoting health and reproductive habits. Procedures General With ART, the process of sexual intercourse is bypassed and fertilization of the oocytes occurs in the laboratory environment (i.e., in vitro fertilization). In the US, the Centers for Disease Control and Prevention (CDC) defines ART to include "all fertility treatments in which both eggs and sperm are handled. In general, ART procedures involve surgically removing eggs from a woman's ovaries, combining them with sperm in the laboratory, and returning them to the woman's body or donating them to another woman." According to CDC, "they do not include treatments in which only sperm are handled (i.e., intrauterine—or artificial—insemination) or procedures in which a woman takes medicine only to stimulate egg production without the intention of having eggs retrieved." In Europe, ART also excludes artificial insemination and includes only procedures where oocytes are handled. The World Health Organization (WHO), also defines ART this way. Ovulation induction Ovulation induction is usually used in the sense
https://en.wikipedia.org/wiki/M%C3%BCllerian%20agenesis
Müllerian agenesis, also known as Müllerian aplasia, vaginal agenesis, or Mayer–Rokitansky–Küster–Hauser syndrome (MRKH syndrome), is a congenital malformation characterized by a failure of the Müllerian ducts to develop, resulting in a missing uterus and variable degrees of vaginal hypoplasia of its upper portion. Müllerian agenesis (including absence of the uterus, cervix and/or vagina) is the cause in 15% of cases of primary amenorrhoea. Because most of the vagina does not develop from the Müllerian duct, instead developing from the urogenital sinus, along with the bladder and urethra, it is present even when the Müllerian duct is completely absent. Because ovaries do not develop from the Müllerian ducts, affected people might have normal secondary sexual characteristics but are infertile due to the lack of a functional uterus. However, biological motherhood is possible through uterus transplantation or use of gestational surrogates. Müllerian agenesis is hypothesized to be a result of autosomal dominant inheritance with incomplete penetrance and variable expressivity, which contributes to the complexity involved in identifying of the underlying mechanisms causing the condition. Because of the variance in inheritance, penetrance and expressivity patterns, Müllerian agenesis is subdivided into two types: type 1, in which only the structures developing from the Müllerian duct are affected (the upper vagina, cervix, and uterus), and type 2, where the same structures are affected, but is characterized by the additional malformations of other body systems most often including the renal and skeletal systems. Type 2 includes MURCS (Müllerian Renal Cervical Somite). The majority of Müllerian agenesis cases are characterized as sporadic, but familial cases have provided evidence that, at least for some patients, it is an inherited disorder. The underlying causes are still being investigated, but several causative genes have been studied for their possible association wi
https://en.wikipedia.org/wiki/Laser%20Squad
Laser Squad is a turn-based tactics video game, originally released for the ZX Spectrum and later for the Commodore 64, Amstrad CPC, MSX, Amiga, Sharp MZ-800 and Atari ST and PC computers between 1988 and 1992. It was designed by Julian Gollop and his team at Target Games (later Mythos Games and Codo Technologies) and published by Blade Software, expanding on the ideas applied in their earlier Rebelstar series. Laser Squad originally came with five mission scenarios, with an expansion pack released for the 8-bit versions, containing a further two scenarios. Reaction from gaming magazines was positive, gaining it high review rating and several accolades. The legacy of the game can be seen in other titles like the X-COM series, especially the acclaimed X-COM: UFO Defense which was also created by Julian Gollop and was initially conceived as a sequel to Laser Squad. Gameplay Laser Squad is a turn-based tactics war game where the player completes objectives such as rescue or retrieval operations, or simply eliminating all of the enemy by taking advantage of cover, squad level military tactics, and careful use of weaponry. The squad's team members are maneuvered around a map one at a time, taking actions such as move, turn, shoot, pick up and so on that use up the unit's action points. More heavily laden units may tire more easily, and may have to rest to avoid running out of action points more quickly in subsequent turns. Morale also plays a factor; a unit witnessing the deaths of his teammates can panic and run out of the player's control. The original Target Games 8-bit release came with the first three missions with an expansion pack offered via mail order for the next two. The subsequent Blade Software 8-bit release included these as standard; the mail order expansion pack now offered was for missions six and seven instead. Both offers covered cassette and floppy disk versions. As well as featuring new scenarios, the expansion packs included additional weapons
https://en.wikipedia.org/wiki/Strongly%20regular%20graph
In graph theory, a strongly regular graph (SRG) is a regular graph with v vertices and degree k such that for some given integers λ and μ, every two adjacent vertices have λ common neighbours, and every two non-adjacent vertices have μ common neighbours. Such a strongly regular graph is denoted srg(v, k, λ, μ). Its complement is also strongly regular: srg(v, v - k - 1, v - 2 - 2k + μ, v - 2k + λ). A strongly regular graph is a distance-regular graph with diameter 2 whenever μ is non-zero. It is a locally linear graph whenever λ = 1. Etymology A strongly regular graph is denoted an srg(v, k, λ, μ) in the literature. By convention, graphs which satisfy the definition trivially are excluded from detailed studies and lists of strongly regular graphs. These include the disjoint union of one or more equal-sized complete graphs, and their complements, the complete multipartite graphs with equal-sized independent sets. Andries Brouwer and Hendrik van Maldeghem (see References) use an alternate but fully equivalent definition of a strongly regular graph based on spectral graph theory: a strongly regular graph is a finite regular graph that has exactly three eigenvalues, only one of which is equal to the degree k, of multiplicity 1. This automatically rules out fully connected graphs (which have only two distinct eigenvalues, not three) and disconnected graphs (whose multiplicity of the degree k is equal to the number of different connected components, which would therefore exceed one). Much of the literature, including Brouwer, refers to the larger eigenvalue as r (with multiplicity f) and the smaller one as s (with multiplicity g). History Strongly regular graphs were introduced by R.C. Bose in 1963. They built upon earlier work in the 1950s in the then-new field of spectral graph theory. Examples The cycle of length 5 is an srg(5, 2, 0, 1). The Petersen graph is an srg(10, 3, 0, 1). The Clebsch graph is an srg(16, 5, 0, 2). The Shrikhande graph is an srg(16, 6, 2, 2) which is not a distance-transitive graph. The n × n square rook's g
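The defining condition can be checked mechanically through the adjacency-matrix identity A^2 = k I + λ A + μ (J - I - A). The sketch below uses our own helper names and builds the Petersen graph as the Kneser graph K(5,2); it recovers the parameters srg(10, 3, 0, 1) listed above.

```python
import numpy as np
from itertools import combinations

# Petersen graph: vertices are 2-element subsets of {0..4},
# adjacent exactly when the subsets are disjoint (Kneser graph K(5,2)).
vertices = list(combinations(range(5), 2))
n = len(vertices)
A = np.zeros((n, n), dtype=int)
for i, u in enumerate(vertices):
    for j, v in enumerate(vertices):
        if i != j and not set(u) & set(v):
            A[i, j] = 1

def srg_parameters(A):
    """Return (v, k, lambda, mu) if A is the adjacency matrix of a
    strongly regular graph, else None: every adjacent pair must have the
    same number of common neighbours, likewise every non-adjacent pair."""
    v = A.shape[0]
    degrees = A.sum(axis=1)
    if len(set(degrees)) != 1:
        return None                          # not regular
    k = int(degrees[0])
    A2 = A @ A                               # (A^2)[i, j] counts common neighbours
    lam = A2[np.nonzero(A)]                  # values over adjacent pairs
    J = np.ones_like(A)
    mu = A2[np.nonzero(J - np.eye(v, dtype=int) - A)]   # non-adjacent pairs
    if len(set(lam)) == 1 and len(set(mu)) == 1:
        return v, k, int(lam[0]), int(mu[0])
    return None

print(srg_parameters(A))        # (10, 3, 0, 1) for the Petersen graph
```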
https://en.wikipedia.org/wiki/Circle%20graph
In graph theory, a circle graph is the intersection graph of a chord diagram. That is, it is an undirected graph whose vertices can be associated with a finite system of chords of a circle such that two vertices are adjacent if and only if the corresponding chords cross each other. Algorithmic complexity gives an O(n2)-time algorithm that tests whether a given n-vertex undirected graph is a circle graph and, if it is, constructs a set of chords that represents it. A number of other problems that are NP-complete on general graphs have polynomial time algorithms when restricted to circle graphs. For instance, showed that the treewidth of a circle graph can be determined, and an optimal tree decomposition constructed, in O(n3) time. Additionally, a minimum fill-in (that is, a chordal graph with as few edges as possible that contains the given circle graph as a subgraph) may be found in O(n3) time. has shown that a maximum clique of a circle graph can be found in O(n log2 n) time, while have shown that a maximum independent set of an unweighted circle graph can be found in O(n min{d, α}) time, where d is a parameter of the graph known as its density, and α is the independence number of the circle graph. However, there are also problems that remain NP-complete when restricted to circle graphs. These include the minimum dominating set, minimum connected dominating set, and minimum total dominating set problems. Chromatic number The chromatic number of a circle graph is the minimum number of colors that can be used to color its chords so that no two crossing chords have the same color. Since it is possible to form circle graphs in which arbitrarily large sets of chords all cross each other, the chromatic number of a circle graph may be arbitrarily large, and determining the chromatic number of a circle graph is NP-complete. It remains NP-complete to test whether a circle graph can be colored by four colors. claimed that finding a coloring with three colors may
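A small sketch of the chord-diagram construction (the endpoint labelling and helper names are our own): two chords cross exactly when their endpoints interleave around the circle, and the circle graph is the resulting intersection graph.

```python
from itertools import combinations

def chords_cross(c1, c2):
    """Chords given by distinct endpoint positions around the circle cross
    iff exactly one endpoint of c2 lies strictly between the endpoints of c1."""
    a, b = sorted(c1)
    return (a < c2[0] < b) != (a < c2[1] < b)

def circle_graph(chords):
    """Intersection graph of a chord diagram: one vertex per chord,
    an edge whenever two chords cross."""
    return {(i, j)
            for (i, c1), (j, c2) in combinations(enumerate(chords), 2)
            if chords_cross(c1, c2)}

# Endpoints are positions 0..7 around the circle; four chords.
chords = [(0, 4), (1, 5), (2, 3), (6, 7)]
print(circle_graph(chords))    # {(0, 1)}: only the first two chords cross
```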
https://en.wikipedia.org/wiki/Tuberculin
Tuberculin, also known as purified protein derivative, is a combination of proteins that are used in the diagnosis of tuberculosis. This use is referred to as the tuberculin skin test and is recommended only for those at high risk. Reliable administration of the skin test requires large amounts of training, supervision, and practice. Injection is done into the skin. After 48 to 72 hours, if there is more than a five to ten millimeter area of swelling, the test is considered positive. Common side effects include redness, itchiness, and pain at the site of injection. Allergic reactions may occasionally occur. The test may be falsely positive in those who have been previously vaccinated with BCG or have been infected by other types of mycobacteria. The test may be falsely negative within ten weeks of infection, in those less than six months old, and in those who have been infected for many years. Use is safe in pregnancy. Tuberculin was discovered in 1890 by Robert Koch. Koch, best known for his work on the etiology of tuberculosis (TB), laid down various rigorous guidelines that aided the establishment between a pathogen and the specific disease that followed that were later named Koch's postulates. Although he initially believed it would cure tuberculosis, this was later disproved. Tuberculin is made from an extract of Mycobacterium tuberculosis. It is on the World Health Organization's List of Essential Medicines. Medical uses The test used in the United States at present is referred to as the Mantoux test. An alternative test called the Heaf test was used in the United Kingdom until 2005, although the UK now uses the Mantoux test in line with the rest of the world. Both of these tests use the tuberculin derivative PPD (purified protein derivative). History Hope for a cure Tuberculin was invented by German scientist and physician Robert Koch in 1890. The original tuberculin was a glycerine extract of the tubercle bacilli and was developed as a remedy for tub
https://en.wikipedia.org/wiki/Christoffel%20symbols
In mathematics and physics, the Christoffel symbols are an array of numbers describing a metric connection. The metric connection is a specialization of the affine connection to surfaces or other manifolds endowed with a metric, allowing distances to be measured on that surface. In differential geometry, an affine connection can be defined without reference to a metric, and many additional concepts follow: parallel transport, covariant derivatives, geodesics, etc. also do not require the concept of a metric. However, when a metric is available, these concepts can be directly tied to the "shape" of the manifold itself; that shape is determined by how the tangent space is attached to the cotangent space by the metric tensor. Abstractly, one would say that the manifold has an associated (orthonormal) frame bundle, with each "frame" being a possible choice of a coordinate frame. An invariant metric implies that the structure group of the frame bundle is the orthogonal group . As a result, such a manifold is necessarily a (pseudo-)Riemannian manifold. The Christoffel symbols provide a concrete representation of the connection of (pseudo-)Riemannian geometry in terms of coordinates on the manifold. Additional concepts, such as parallel transport, geodesics, etc. can then be expressed in terms of Christoffel symbols. In general, there are an infinite number of metric connections for a given metric tensor; however, there is a unique connection that is free of torsion, the Levi-Civita connection. It is common in physics and general relativity to work almost exclusively with the Levi-Civita connection, by working in coordinate frames (called holonomic coordinates) where the torsion vanishes. For example, in Euclidean spaces, the Christoffel symbols describe how the local coordinate bases change from point to point. At each point of the underlying -dimensional manifold, for any local coordinate system around that point, the Christoffel symbols are denoted for . Each entry
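A concrete way to obtain the symbols of the Levi-Civita connection from a metric is the formula Gamma^k_ij = 1/2 g^kl (d_i g_jl + d_j g_il - d_l g_ij). The sketch below is a minimal SymPy computation; the function name and the round 2-sphere example are our own choices.

```python
import sympy as sp

def christoffel_symbols(g, coords):
    """Christoffel symbols of the second kind for metric g in the given
    coordinates: Gamma[k][i][j] = 1/2 g^{kl} (d_i g_{jl} + d_j g_{il} - d_l g_{ij})."""
    n = len(coords)
    g_inv = g.inv()
    return [[[sp.simplify(sum(
        g_inv[k, l] * (sp.diff(g[j, l], coords[i])
                       + sp.diff(g[i, l], coords[j])
                       - sp.diff(g[i, j], coords[l])) / 2
        for l in range(n)))
        for j in range(n)] for i in range(n)] for k in range(n)]

# Round unit 2-sphere: ds^2 = dtheta^2 + sin(theta)^2 dphi^2.
theta, phi = sp.symbols('theta phi')
g = sp.Matrix([[1, 0], [0, sp.sin(theta) ** 2]])
Gamma = christoffel_symbols(g, (theta, phi))

print(Gamma[0][1][1])   # Gamma^theta_{phi phi}, equal to -sin(theta)*cos(theta)
print(Gamma[1][0][1])   # Gamma^phi_{theta phi}, equal to cos(theta)/sin(theta)
```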
https://en.wikipedia.org/wiki/Train%20whistle
A train whistle or air whistle (originally referred to as a steam trumpet) is an audible signaling device on a steam locomotive, used to warn that the train is approaching, and to communicate with rail workers. Modern diesel and electric locomotives primarily use a powerful air horn instead of a whistle as an audible warning device. However, the word whistle continues to be used by railroaders in referring to such signaling practices as "whistling off" (sounding the horn when a train gets underway). The need for a whistle on a locomotive exists because trains move on fixed rails and thus are uniquely susceptible to collision. This susceptibility is exacerbated by a train's enormous weight and inertia, which make it difficult to quickly stop when encountering an obstacle. Hence a means of warning others of the approach of a train from a distance is necessary. As train whistles are inexpensive compared to other warning devices, the use of loud and distinct whistles became the preferred solution for railway operators. Steam whistles were almost always actuated with a pull cord (or sometimes a lever) that permitted proportional (tracker) action, so that some form of "expression" could be put into the sound. Many locomotive operators would have their own style of blowing the whistle, and it was often apparent who was operating the locomotive by the sound. Modern locomotives often make use of a push button switch to operate the air horn, eliminating any possibility of altering the horn's volume or pitch. North American usage North American steam locomotive whistles have different sounds from one another. They come in many forms, from tiny little single-note shriekers to larger plain whistles with deeper tones (a deep, plain train whistle is the "hooter" of the Norfolk & Western, used on their A- and Y-class Mallet locomotives). Even more well known were the multi-chime train whistles. Nathan of New York copied and improved Casey Jones's boiler-tube chime whistle by c
https://en.wikipedia.org/wiki/Mr.%20Do%27s%20Castle
Mr. Do's Castle is a platform game released in arcades by Universal in September 1983. In Japan, the game is titled Mr. Do! versus Unicorns. Marketed as a sequel to the original Mr. Do! released one year earlier, the game bears a far closer resemblance to Universal's Space Panic from 1980. It began as a game called Knights vs. Unicorns, but the U.S. division of Universal persuaded the Japanese arm to modify the graphics into a Mr. Do! game following the first game's popularity. Gameplay The object of Mr. Do's Castle is to score as many points as possible by collecting cherries and/or defeating unicorn-like monsters. The game takes place in a castle filled with platforms and ladders —reminiscent of Space Panic (1980)—some of which can be flipped from one platform to another, much like a kickstand on a bicycle. The player controls Mr. Do as he collects cherries by using a hammer to knock out blocks that contain them from the various platforms. Empty holes left by the knocked-out blocks serve as traps for the monsters —if a monster falls into a hole, the player can then defeat it by causing a block above the monster to fall on top of it (and additional points are scored if such a monster falls multiple levels en route to its destruction). If the player takes too long to complete a level, the monsters transform into faster, more difficult forms —at first green in color, later blue —that rapidly multiply once they turn blue. The game advances to the next level when all cherries on the level have been collected or all enemies have been defeated. The player loses a life if Mr. Do is caught by a monster, and the game ends when the player runs out of lives. As in Mr. Do!, the player can earn an extra life by collecting all of the letters from the word "EXTRA". Regular monsters can be changed into monsters bearing the EXTRA letters by collecting all three keys distributed around the playfield and then picking up a magic shield from the top floor. Monsters in this s
https://en.wikipedia.org/wiki/Contagious%20disease
A contagious disease is an infectious disease that is readily spread (that is, communicated) by transmission of a pathogen through contact (direct or indirect) with an infected person. A disease is often known to be contagious before medical science discovers its causative agent. Koch's postulates, which were published at the end of the 19th century, were the standard for the next 100 years or more, especially with diseases caused by bacteria. Microbial pathogenesis attempts to account for diseases caused by a virus. The disease itself can also be called a contagion. Historical meaning Originally, the term referred to a contagion (a derivative of 'contact') or disease transmissible only by direct physical contact. In the modern-day, the term has sometimes been broadened to encompass any communicable or infectious disease. Often the word can only be understood in context, where it is used to emphasize very infectious, easily transmitted, or especially severe communicable diseases. In 1849, John Snow first proposed that cholera was a contagious disease. Effect on public health response Most epidemics are caused by contagious diseases, with occasional exceptions, such as yellow fever. The spread of non-contagious communicable diseases is changed either very little or not at all by medical isolation of ill persons or medical quarantine for exposed persons. Thus, a "contagious disease" is sometimes defined in practical terms, as a disease for which isolation or quarantine are useful public health responses. Some locations are better suited for the research into the contagious pathogens due to the reduced risk of transmission afforded by a remote or isolated location. Negative room pressure is a technique in health care facilities based on aerobiological designs. See also Germ theory of disease Herd immunity References Infectious diseases Microbiology Epidemiology Causality
https://en.wikipedia.org/wiki/Morley%20rank
In mathematical logic, Morley rank, introduced by , is a means of measuring the size of a subset of a model of a theory, generalizing the notion of dimension in algebraic geometry. Definition Fix a theory T with a model M. The Morley rank of a formula φ defining a definable (with parameters) subset S of M is an ordinal or −1 or ∞, defined by first recursively defining what it means for a formula to have Morley rank at least α for some ordinal α. The Morley rank is at least 0 if S is non-empty. For α a successor ordinal, the Morley rank is at least α if in some elementary extension N of M, the set S has countably infinitely many disjoint definable subsets Si, each of rank at least α − 1. For α a non-zero limit ordinal, the Morley rank is at least α if it is at least β for all β less than α. The Morley rank is then defined to be α if it is at least α but not at least α + 1, and is defined to be ∞ if it is at least α for all ordinals α, and is defined to be −1 if S is empty. For a definable subset of a model M (defined by a formula φ) the Morley rank is defined to be the Morley rank of φ in any ℵ0-saturated elementary extension of M. In particular for ℵ0-saturated models the Morley rank of a subset is the Morley rank of any formula defining the subset. If φ defining S has rank α, and S breaks up into no more than n < ω subsets of rank α, then φ is said to have Morley degree n. A formula defining a finite set has Morley rank 0. A formula with Morley rank 1 and Morley degree 1 is called strongly minimal. A strongly minimal structure is one where the trivial formula x = x is strongly minimal. Morley rank and strongly minimal structures are key tools in the proof of Morley's categoricity theorem and in the larger area of model theoretic stability theory. Examples The empty set has Morley rank −1, and conversely anything of Morley rank −1 is empty. A subset has Morley rank 0 if and only if it is finite and non-empty. If V is an algebraic set in Kn, for an algebr
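As a hedged illustration of the analogy with algebraic geometry, the following standard facts about the theory of algebraically closed fields may help; they are textbook examples rather than statements taken from the excerpt above.

```latex
% In the theory ACF of algebraically closed fields (for instance K = \mathbb{C}):
%   - the formula x = x defines K itself, which is strongly minimal:
%     it has Morley rank 1 and Morley degree 1;
%   - \mathrm{RM}(K^{n}) = n;
%   - for an irreducible affine variety V \subseteq K^{n},
%     \mathrm{RM}(V) = \dim V (the Krull dimension of V).
% For example, the curve defined by y^{2} = x^{3} - x in K^{2} has Morley
% rank 1: its infinitely many single-point subsets are disjoint definable
% sets of rank 0, witnessing rank at least 1.
```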
https://en.wikipedia.org/wiki/Baby%20colic
Baby colic, also known as infantile colic, is defined as episodes of crying for more than three hours a day, for more than three days a week, for three weeks in an otherwise healthy child. Often crying occurs in the evening. It typically does not result in long-term problems. The crying can result in frustration of the parents, depression following delivery, excess visits to the doctor, and child abuse. The cause of colic is unknown. Some believe it is due to gastrointestinal discomfort like intestinal cramping. Diagnosis requires ruling out other possible causes. Concerning findings include a fever, poor activity, or a swollen abdomen. Fewer than 5% of infants with excess crying have an underlying organic disease. Treatment is generally conservative, with little to no role for either medications or alternative therapies. Extra support for the parents may be useful. Tentative evidence supports certain probiotics for the baby and a low-allergen diet by the mother in those who are breastfed. Hydrolyzed formula may be useful in those who are bottlefed. Colic affects 10–40% of babies. Equally common in bottle and breast-fed infants, it begins during the second week of life, peaks at 6 weeks, and resolves between 12 and 16 weeks. It rarely lasts up to one year of age. It occurs at the same rate in boys and in girls. The first detailed medical description of the problem was published in 1954. Signs and symptoms Colic is defined as episodes of crying for more than three hours a day, for more than three days a week for at least a three-week duration in an otherwise healthy child. It is most common around six weeks of age and gets better by six months of age. By contrast, infants normally cry an average of just over two hours a day, with the duration peaking at six weeks. With colic, periods of crying most commonly happen in the evening and for no obvious reason. Associated symptoms may include legs pulled up to the stomach, a flushed face, clenched hands, and a wrinkle
https://en.wikipedia.org/wiki/Hydroinformatics
Hydroinformatics is a branch of informatics which concentrates on the application of information and communications technologies (ICTs) in addressing the increasingly serious problems of the equitable and efficient use of water for many different purposes. Growing out of the earlier discipline of computational hydraulics, the numerical simulation of water flows and related processes remains a mainstay of hydroinformatics, which encourages a focus not only on the technology but on its application in a social context. On the technical side, in addition to computational hydraulics, hydroinformatics has a strong interest in the use of techniques originating in the so-called artificial intelligence community, such as artificial neural networks or recently support vector machines and genetic programming. These might be used with large collections of observed data for the purpose of data mining for knowledge discovery, or with data generated from an existing, physically based model in order to generate a computationally efficient emulator of that model for some purpose. Hydroinformatics recognises the inherently social nature of the problems of water management and of decision-making processes, and strives to understand the social processes by which technologies are brought into use. Since the problems of water management are most severe in the majority world, while the resources to obtain and develop technological solutions are concentrated in the hands of the minority, the need to examine these social processes are particularly acute. Hydroinformatics draws on and integrates hydraulics, hydrology, environmental engineering and many other disciplines. It sees application at all points in the water cycle from atmosphere to ocean, and in artificial interventions in that cycle such as urban drainage and water supply systems. It provides support for decision making at all levels from governance and policy through management to operations. Hydroinformatics has a growing wo
https://en.wikipedia.org/wiki/Bendix%20G-15
The Bendix G-15 is a computer introduced in 1956 by the Bendix Corporation, Computer Division, Los Angeles, California. It is about and weighs about . The G-15 has a drum memory of 2,160 29-bit words, along with 20 words used for special purposes and rapid-access storage. The base system, without peripherals, cost $49,500. A working model cost around $60,000 (over $500,000 by today's standards). It could also be rented for $1,485 per month. It was meant for scientific and industrial markets. The series was gradually discontinued when Control Data Corporation took over the Bendix computer division in 1963. The chief designer of the G-15 was Harry Huskey, who had worked with Alan Turing on the ACE in the United Kingdom and on the SWAC in the 1950s. He made most of the design while working as a professor at Berkeley (where his graduate students included Niklaus Wirth), and other universities. David C. Evans was one of the Bendix engineers on the G-15 project. He would later become famous for his work in computer graphics and for starting up Evans & Sutherland with Ivan Sutherland. Architecture The G-15 was inspired by the Automatic Computing Engine (ACE). It is a serial-architecture machine, in which the main memory is a magnetic drum. It uses the drum as a recirculating delay-line memory, in contrast to the analog delay line implementation in other serial designs. Each track has a set of read and write heads; as soon as a bit was read off a track, it is re-written on the same track a certain distance away. The length of delay, and thus the number of words on a track, is determined by the spacing of the read and write heads, the delay corresponding to the time required for a section of the drum to travel from the write head to the corresponding read head. Under normal operation, data are written back without change, but this data flow can be intercepted at any time, allowing the machine to update sections of a track as needed. This arrangement allows the des
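The recirculating-drum arrangement described above can be illustrated with a small simulation. This is a conceptual sketch only: the class, the method names and the track length chosen below are illustrative, and the only figure carried over from the excerpt is the 29-bit word together with the idea of paired read and write heads.

```python
from collections import deque

class DrumTrack:
    """Toy model of a recirculating delay-line drum track.

    Words arriving at the read head are normally rewritten unchanged at the
    write head; intercepting that flow is how a track gets updated.
    """

    def __init__(self, length=108, word_bits=29):
        self.word_bits = word_bits
        self.delay = deque([0] * length, maxlen=length)  # words "in flight" between heads

    def step(self, intercept=None):
        """Advance the drum by one word time and return the word read."""
        word = self.delay.popleft()                      # word reaches the read head
        if intercept is not None:                        # machine chooses to update the track
            word_to_write = intercept & ((1 << self.word_bits) - 1)
        else:
            word_to_write = word                         # normal case: recirculate unchanged
        self.delay.append(word_to_write)                 # rewritten at the write head
        return word

# Store a value, let it travel once around the loop, and read it back.
track = DrumTrack()
track.step(intercept=0b101)                              # write during this word time
for _ in range(len(track.delay) - 1):
    track.step()                                         # data recirculates
assert track.step() == 0b101                             # back at the read head one revolution later
```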
https://en.wikipedia.org/wiki/Rollover%20Pass
Rollover Pass (Gilchrist, Galveston County, Texas), also called Rollover Fish Pass, was a strait that linked Rollover Bay and East Bay with the Gulf of Mexico in extreme southeastern Galveston County. It has since been closed by being filled in with dirt. A webcam view of the site is available at https://www.bolivarpeninsulatexas.com/Webcams/Rollover-Pass. Rollover Pass was opened in 1955 by the Texas Game and Fish Commission to improve local fishing conditions. Seawater was introduced into East Bay to promote vegetation growth, and to provide access for marine fish to spawn and feed. The name came from the days of Spanish rule, when barrels of merchandise would be rolled over that part of the peninsula to avoid excise tax. The Pass was about 1600 feet long and 200 feet wide. The Rollover Pass area is a popular location for fishing, bird watching, and family recreation activities. Parking and camping were available on all four quadrants along the Pass, and handicapped or elderly persons were able to fish while sitting in their vehicles. Since 2013 it has been the subject of lawsuits over access and ownership. Technical details Rollover Pass is part of a low-elevation area and was subject to overflow during high tides or storms. A man-made strait was cut through private property on the Bolivar Peninsula and linked the Gulf of Mexico with Rollover Bay and East Bay on the upper Texas coast in eastern Galveston County. Located on property which was owned by the Gulf Coast Rod, Reel and Gun Club and managed by the Gilchrist Community Association, the Pass was widened to allow more water flow in 1955 by the Texas Game and Fish Commission when it was granted an easement by the property owners. The intent was to increase bay water salinity, promote growth of submerged vegetation, and help marine fish move to and from spawning and feeding areas in the bay. The Pass was about 1600 feet long and 200 feet wide. Large cement walls framed the Gulf side (southeast of Texas H
https://en.wikipedia.org/wiki/Ultraparallel%20theorem
In hyperbolic geometry, two lines are said to be ultraparallel if they do not intersect and are not limiting parallel. The ultraparallel theorem states that every pair of (distinct) ultraparallel lines has a unique common perpendicular (a hyperbolic line which is perpendicular to both lines). Hilbert's construction Let r and s be two ultraparallel lines. From any two distinct points A and C on s draw AB and CB' perpendicular to r with B and B' on r. If it happens that AB = CB', then the desired common perpendicular joins the midpoints of AC and BB' (by the symmetry of the Saccheri quadrilateral ACB'B). If not, we may suppose AB < CB' without loss of generality. Let E be a point on the line s on the opposite side of A from C. Take A' on CB' so that A'B' = AB. Through A' draw a line s' (A'E') on the side closer to E, so that the angle B'A'E' is the same as angle BAE. Then s' meets s in an ordinary point D'. Construct a point D on ray AE so that AD = A'D'. Then D' ≠ D. They are the same distance from r and both lie on s. So the perpendicular bisector of D'D (a segment of s) is also perpendicular to r. (If r and s were asymptotically parallel rather than ultraparallel, this construction would fail because s' would not meet s. Rather s' would be asymptotically parallel to both s and r.) Proof in the Poincaré half-plane model Let be four distinct points on the abscissa of the Cartesian plane. Let and be semicircles above the abscissa with diameters and respectively. Then in the Poincaré half-plane model HP, and represent ultraparallel lines. Compose the following two hyperbolic motions: Then Now continue with these two hyperbolic motions: Then stays at , , , (say). The unique semicircle, with center at the origin, perpendicular to the one on must have a radius tangent to the radius of the other. The right triangle formed by the abscissa and the perpendicular radii has hypotenuse of length . Since is the radius of the semicircle on , the common
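The half-plane computation can be made concrete with coordinates, in the case where both ultraparallel lines are represented by semicircles; the symbols c_1, c_2, r_1, r_2, x_0 and ρ below are chosen for this illustration and are not the article's own notation.

```latex
% In the Poincaré half-plane model, geodesics are vertical rays or semicircles
% centred on the real axis.  Let the two ultraparallel lines be semicircles
% with centres c_1, c_2 and radii r_1, r_2, with |c_1 - c_2| > r_1 + r_2.
% Two circles centred on the real axis meet at right angles exactly when the
% squared distance between their centres equals the sum of their squared
% radii, so a common perpendicular (a semicircle with centre x_0 and radius
% \rho) must satisfy
(x_0 - c_1)^2 = \rho^2 + r_1^2, \qquad (x_0 - c_2)^2 = \rho^2 + r_2^2 .
% Subtracting eliminates \rho^2 and leaves a linear equation,
2\,(c_2 - c_1)\, x_0 = c_2^{2} - c_1^{2} + r_1^{2} - r_2^{2},
% which has a unique solution x_0; the ultraparallel condition then guarantees
% \rho^2 = (x_0 - c_1)^2 - r_1^2 > 0, so the common perpendicular exists and
% is unique, as the theorem asserts.
```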
https://en.wikipedia.org/wiki/Hayflick%20limit
The Hayflick limit, or Hayflick phenomenon, is the number of times a normal somatic, differentiated human cell population will divide before cell division stops. However, this limit does not apply to stem cells. The concept of the Hayflick limit was advanced by American anatomist Leonard Hayflick in 1961, at the Wistar Institute in Philadelphia, Pennsylvania. Hayflick demonstrated that a normal human fetal cell population will divide between 40 and 60 times in cell culture before entering a senescence phase. This finding refuted the contention by Alexis Carrel that normal cells are immortal. Each time a cell undergoes mitosis, the telomeres on the ends of each chromosome shorten slightly. Cell division will cease once telomeres shorten to a critical length. Hayflick interpreted his discovery to be aging at the cellular level. The aging of cell populations appears to correlate with the overall physical aging of an organism. Macfarlane Burnet coined the name "Hayflick limit" in his book Intrinsic Mutagenesis: A Genetic Approach to Ageing, published in 1974. History The belief in cell immortality Prior to Leonard Hayflick's discovery, it was believed that vertebrate cells had an unlimited potential to replicate. Alexis Carrel, a Nobel prize-winning surgeon, had stated "that all cells explanted in tissue culture are immortal, and that the lack of continuous cell replication was due to ignorance on how best to cultivate the cells". He claimed to have cultivated fibroblasts from the hearts of chickens (which typically live 5 to 10 years) and to have kept the culture growing for 34 years. However, other scientists have been unable to replicate Carrel's results, and they are suspected to be due to an error in experimental procedure. To provide required nutrients, embryonic stem cells of chickens may have been re-added to the culture daily. This would have easily allowed the cultivation of new, fresh cells in the culture, so there was not an infinite reproduction of
https://en.wikipedia.org/wiki/ArchINFORM
archINFORM is an online database for international architecture, originally emerging from records of interesting building projects from architecture students from the University of Karlsruhe, Germany. The self-described "largest online-database about worldwide architects and buildings" contains plans and images of buildings both built and potential and forms a record of the architecture of the 20th century. The database uses a search engine, which allows a particular project to be found by listing architect, location or key word. It has been described by the librarian of the Calouste Gulbenkian Foundation as "one of the most useful reference tools concerning architecture available on the internet." References External links archINFORM - homepage of the database (English version) Architecture websites Online databases German websites Databases in Germany Architecture databases
https://en.wikipedia.org/wiki/Institution%20of%20Mechanical%20Engineers
The Institution of Mechanical Engineers (IMechE) is an independent professional association and learned society headquartered in London, United Kingdom, that represents mechanical engineers and the engineering profession. With over 120,000 members in 140 countries, working across industries such as railways, automotive, aerospace, manufacturing, energy, biomedical and construction, the Institution is licensed by the Engineering Council to assess candidates for inclusion on its Register of Chartered Engineers, Incorporated Engineers and Engineering Technicians. The Institution was founded at the Queen's Hotel, Birmingham, by George Stephenson in 1847. It received a Royal Charter in 1930. The Institution's headquarters, purpose-built for the Institution in 1899, is situated at No. 1 Birdcage Walk in central London. Origins Informal meetings are said to have taken place in 1846, at locomotive designer Charles Beyer's house in Cecil Street, Manchester, or alternatively at Bromsgrove at the house of James McConnell, after viewing locomotive trials at the Lickey Incline. Beyer, Richard Peacock, George Selby, Archibald Slate and Edward Humphrys were present. Bromsgrove seems the more likely candidate for the initial discussion, not least because McConnell was the driving force in the early years. A meeting took place at the Queen's Hotel in Birmingham on 7 October to consider the idea further, and a committee was appointed with McConnell at its head to see the idea to its inauguration. The Institution of Mechanical Engineers was then founded on 27 January 1847, in the Queen's Hotel next to Curzon Street station in Birmingham by the railway pioneer George Stephenson and others. McConnell became the first chairman. The founding of the Institution was said by Stephenson's biographer Samuel Smiles to have been spurred by outrage that Stephenson, the most famous mechanical engineer of the age, had been refused admission to the Institution of Civil Engineers unless he sent in "
https://en.wikipedia.org/wiki/Life%20Safety%20Code
The publication Life Safety Code, known as NFPA 101, is a consensus standard widely adopted in the United States. It is administered, trademarked, copyrighted, and published by the National Fire Protection Association and, like many NFPA documents, is systematically revised on a three-year cycle. Despite its title, the standard is not a legal code, is not published as an instrument of law, and has no statutory authority in its own right. However, it is deliberately crafted with language suitable for mandatory application to facilitate adoption into law by those empowered to do so. The bulk of the standard addresses "those construction, protection, and occupancy features necessary to minimize danger to life from the effects of fire, including smoke, heat, and toxic gases created during a fire.". The standard does not address the "general fire prevention or building construction features that are normally a function of fire prevention codes and building codes". History The Life Safety Code was originated in 1913 by the Committee on Safety to Life (one of the NFPA's more than 200 committees). As noted in the 1991 Life Safety Code Handbook; "...the Committee devoted its attention to a study of notable fires involving loss of life and to analyzing the causes of that loss of life. This work led to the preparation of standards for the construction of stairways,fire escapes, and similar structures; for fire drills in various occupancies and for the construction and arrangement of exit facilities for factories, schools and other occupancies, which form the basis of the present Code." This study became the basis for two early NFPA publications, "Outside Stairs for Fire Exits" (1916) and "Safeguarding Factory Workers from Fire" (1918). In 1921 the Committee on Safety to Life expanded and the publication they generated in 1927 became known as the Building Exits Code. New editions were published in 1929, 1934, 1936, 1938, 1942 and 1946. After a disastrous series of fires be
https://en.wikipedia.org/wiki/Thermolabile
Thermolabile refers to a substance which is subject to destruction, decomposition, or change in response to heat. The term is often used to describe biochemical substances. For example, many bacterial exotoxins are thermolabile and can be easily inactivated by the application of moderate heat. Enzymes are also thermolabile and lose their activity when the temperature rises. Loss of activity in such toxins and enzymes is likely due to a change in the three-dimensional structure of the protein during exposure to heat. In pharmaceutical compounds, heat generated during grinding may lead to degradation of thermolabile compounds. Thermolability is of particular use in testing gene function; this is done by intentionally creating mutants whose protein products are thermolabile. Growth below the permissive temperature allows normal protein function, while increasing the temperature above the permissive temperature ablates activity, likely by denaturing the protein. Thermolabile enzymes are also studied for their potential applications in DNA replication techniques such as PCR, where thermostable enzymes are normally required for proper DNA replication. Enzyme function at higher temperatures may be enhanced with trehalose, which opens up the possibility of using normally thermolabile enzymes in DNA replication. See also Thermostable Thermolabile protecting groups References External links Biological concepts
https://en.wikipedia.org/wiki/Bosconian
is a scrolling multidirectional shooter arcade video game developed and released by Namco in Japan in 1981. In North America, it was manufactured and distributed by Midway Games. The goal is to earn as many points as possible by destroying enemy missiles and bases using a ship which shoots simultaneously both the front and back. Bosconian was commercially successful in Japan and received positive critical reception, but did not achieve the global commercial success of other shoot 'em up games from the golden age of arcade video games. It was ported to home computers as Bosconian '87 (1987) and spawned two sequels: Blast Off (1989) and Final Blaster (1990). The game has been regarded by critics as influential in the shoot 'em up genre. Gameplay The objective of Bosconian is to score as many points as possible by destroying enemy missiles and bases. The player controls the Starfighter, a ship that can move in eight directions and fires both forward and backward simultaneously. Throughout the game, the Starfighter stays affixed to the center of the screen as it moves. During each round, several green enemy bases — known as "base stars" — appear, all of which must be destroyed in order to advance to the next round. The number of bases increases with each round. Each base has six globe-like cannons arranged in a hexagon around a central core. To destroy a base, the player must either shoot the core or destroy all six cannons, the latter of which gives the player extra points. In later levels, cores begin defending themselves by opening and closing while launching missiles. A radar display on the right-hand side of the screen shows where enemies are located relative to the player. The game also features a color-coded alert system with voice commands. Additionally, the player must avoid or destroy stationary asteroids, mines, and a variety of enemy missiles and ships which attempt to collide with his or her ship. Enemy bases will also occasionally launch a squadron of
https://en.wikipedia.org/wiki/AA%20postulate
In Euclidean geometry, the AA postulate states that two triangles are similar if they have two corresponding angles congruent. The AA postulate follows from the fact that the sum of the interior angles of a triangle is always equal to 180°. Knowing two angles, such as 32° and 64°, determines the third angle as 84°, because 180 − (32 + 64) = 84. (This is sometimes referred to as the AAA Postulate—which is true in all respects, but two angles are entirely sufficient.) The postulate can be better understood by working in reverse order. The two triangles on grids A and B are similar, by a 1.5 dilation from A to B. If they are aligned, as in grid C, it is apparent that the angle at the origin is congruent with the other (D). We also know that the pair of sides opposite the origin are parallel, because the pairs of sides around them are similar, stem from the same point, and line up with each other. The sides around the parallels can then be treated as transversals, so the corresponding angles are congruent. This reasoning shows that similar triangles have congruent angles. The AA postulate can be used, for example, in proving the Angle Bisector Theorem, and it is one of several criteria for establishing the similarity of triangles. References Elementary geometry Triangle geometry Euclidean plane geometry
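The angle-sum step can be written as a short worked equation; the angle names are generic choices made here.

```latex
% If two pairs of corresponding angles are congruent, the third pair follows
% from the angle sum of a triangle:
\alpha + \beta + \gamma = 180^{\circ}
  \quad\Longrightarrow\quad
  \gamma = 180^{\circ} - (\alpha + \beta).
% With the numbers used above: \alpha = 32^{\circ},\ \beta = 64^{\circ} gives
\gamma = 180^{\circ} - (32^{\circ} + 64^{\circ}) = 84^{\circ},
% so two congruent angle pairs already force the third, which is why the
% "AAA" formulation adds nothing beyond AA.
```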
https://en.wikipedia.org/wiki/Print%20server
In computer networking, a print server, or printer server, is a type of server that connects printers to client computers over a network. It accepts print jobs from the computers and sends the jobs to the appropriate printers, queuing the jobs locally to accommodate the fact that work may arrive more quickly than the printer can actually handle. Ancillary functions include the ability to inspect the queue of jobs to be processed, the ability to reorder or delete waiting print jobs, or the ability to do various kinds of accounting (such as counting pages, which may involve reading data generated by the printer(s)). Print servers may be used to enforce administration policies, such as color printing quotas, user/department authentication, or watermarking printed documents. Print servers may support a variety of industry-standard or proprietary printing protocols including Internet Printing Protocol, Line Printer Daemon protocol, NetWare, NetBIOS/NetBEUI, or JetDirect. A print server may be a networked computer with one or more shared printers. Alternatively, a print server may be a dedicated device on the network, with connections to the LAN and one or more printers. Dedicated server appliances tend to be fairly simple in both configuration and features. Print server functionality may be integrated with other devices such as a wireless router, a firewall, or both. A printer may have a built-in print server. All printers with the right type of connector are compatible with all print servers; manufacturers of servers make available lists of compatible printers because a server may not implement all the communications functionality of a printer (e.g. low ink signal). See also Internet Printing Protocol CUPS References Networking hardware Computer printing Servers (computing)
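The queuing behaviour described above, where jobs may arrive faster than the printer can handle so the server spools and dispatches them, can be sketched in a few lines. This is purely illustrative; the class and method names are invented for the sketch and do not correspond to any particular print-server product or protocol.

```python
import queue
import threading
import time

class ToyPrintServer:
    """Minimal sketch of a print server: accept jobs, queue them locally,
    feed them to a slower printer one at a time, and count pages printed."""

    def __init__(self, pages_per_second=50.0):
        self.jobs = queue.Queue()            # local spool absorbs bursts of jobs
        self.pages_per_second = pages_per_second
        self.pages_printed = 0               # trivial accounting (page counting)
        threading.Thread(target=self._dispatch, daemon=True).start()

    def submit(self, name, pages):
        """Client-side call: hand a job to the server and return immediately."""
        self.jobs.put((name, pages))

    def pending(self):
        """Inspect the queue depth (reordering or deleting jobs is omitted here)."""
        return self.jobs.qsize()

    def _dispatch(self):
        while True:
            name, pages = self.jobs.get()    # next job in arrival order
            time.sleep(pages / self.pages_per_second)  # stand-in for printing
            self.pages_printed += pages
            self.jobs.task_done()

server = ToyPrintServer()
for i in range(5):
    server.submit(f"report-{i}", pages=10)   # jobs arrive faster than they print
server.jobs.join()                           # block until the spool drains
print("pages printed:", server.pages_printed)
```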
https://en.wikipedia.org/wiki/TORCS
TORCS (The Open Racing Car Simulator) is an open-source 3D car racing simulator available on Linux, FreeBSD, Mac OS X, AmigaOS 4, AROS, MorphOS and Microsoft Windows. TORCS was created by Eric Espié and Christophe Guionneau, but project development is now headed by Bernhard Wymann. It is written in C++ and is licensed under the GNU GPL. TORCS is designed to enable pre-programmed AI drivers to race against one another, while allowing the user to control a vehicle using either a keyboard, mouse, or wheel input. History Development Development of TORCS began in 1997 by Eric Espié and Christophe Guionneau as a 2D game called Racing Car Simulator (RCS). It was influenced by and based on RARS (Robot Auto Racing Simulator). When Espié and Guionneau acquired a 3dfx graphics card for game development, they made the first 3D version of the simulator with OpenGL and renamed it Open Racing Car Simulator (ORCS) so as not to be confused with the Revision Control System. The early versions of ORCS did not include cars with engines, making the game a Soap Box Derby-style, downhill racing simulation. When engines and engine sounds were eventually added, the simulation was given its final name, TORCS, as the name seemed more relevant to automobiles given its similarity to the word torque. Later, Guionneau added multiple camera angles during game-play. Guionneau developed much of the original graphics code in TORCS and eventually added texture mapping to give more detail to the cars. Espié then worked on piecing together and finalizing code for release. Future goals The current main developers of TORCS are Bernhard Wymann (project leader), Christos Dimitrakakis (simulation, sound, AI) and Andrew Sumner (graphics, tracks). Aside from bugfixes and maintenance of TORCS code, the next features planned include network multiplayer mode, improved physics engine, enhanced car interior detail, and replays. Reception and impact In December 2000 CNN placed TORCS among the "Top 10 Linux gam
https://en.wikipedia.org/wiki/Reverse%20leakage%20current
Reverse leakage current in a semiconductor device is the current that flows through the device when it is reverse biased. When a semiconductor device is reverse biased it should not conduct any current; however, because of the increased barrier potential, minority free electrons on the p side are swept toward the battery's positive terminal, while minority holes on the n side are swept toward the battery's negative terminal. This produces a current of minority charge carriers, and hence its magnitude is extremely small. At a constant temperature, the reverse current remains almost constant even as the applied reverse voltage is increased, up to a certain limit. Hence, it is also called reverse saturation current. The term applies mostly to semiconductor junctions, especially diodes and thyristors. In MOSFETs, reverse leakage current is also known as "zero gate voltage drain current". The leakage current increases with temperature. As an example, the Fairchild Semiconductor FDV303N has a reverse leakage of up to 1 microamp at room temperature, rising to 10 microamps at a junction temperature of 50 degrees Celsius. For most practical purposes, leakage currents are very small and thus normally negligible. Semiconductors
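To connect the name "reverse saturation current" with a formula, the ideal-diode relation is often quoted; this is a textbook model with conventional symbols, not something taken from the article above.

```latex
% Shockley ideal-diode equation:
I = I_{S} \left( e^{\,V/(n V_{T})} - 1 \right),
% where I_S is the (reverse) saturation current, n the ideality factor, and
% V_T = kT/q \approx 25.9\ \mathrm{mV} at 300 K.  Under a reverse bias of more
% than a few V_T (V \ll 0) the exponential term is negligible, so
I \approx -I_{S},
% i.e. the reverse current is nearly independent of the applied reverse
% voltage, hence the name "saturation" current.  I_S itself rises strongly
% with temperature, consistent with the microamp-level increase quoted above
% for a real device as the junction warms.
```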
https://en.wikipedia.org/wiki/List%20of%20LCD%20matrices
This is an incomplete list of LCD matrices. TN+Film Matrices IPS Matrices S-IPS Matrices E-IPS — Enhanced IPS (LG-specific terminology) H-IPS – Horizontal IPS (LG-specific terminology) P-IPS – Professional IPS (LG-specific terminology) DD-IPS Matrices ACE Matrices MVA Matrices PVA Matrices References X-bit labs LCD Guide TFT Central Panel Technologies Guide Liquid crystal displays Technology-related lists
https://en.wikipedia.org/wiki/Reflow%20soldering
Reflow soldering is a process in which a solder paste (a sticky mixture of powdered solder and flux) is used to temporarily attach one or thousands of tiny electrical components to their contact pads, after which the entire assembly is subjected to controlled heat. The solder paste reflows in a molten state, creating permanent solder joints. Heating may be accomplished by passing the assembly through a reflow oven, under an infrared lamp, or (mainly for prototyping) by soldering individual joints with a hot air pencil. Reflow soldering with long industrial convection ovens is the preferred method of soldering surface mount technology components or SMT to a printed circuit board or PCB. Each segment of the oven has a regulated temperature, according to the specific thermal requirements of each assembly. Reflow ovens meant specifically for the soldering of surface mount components may also be used for through-hole components by filling the holes with solder paste and inserting the component leads through the paste. Wave soldering however, has been the common method of soldering multi-leaded through-hole components onto a circuit board designed for surface-mount components. When used on boards containing a mix of SMT and plated through-hole (PTH) components, through-hole reflow, when achievable by specifically modified paste stencils, may allow for the wave soldering step to be eliminated from the assembly process, potentially reducing assembly costs. While this may be said of lead-tin solder pastes used previously, lead-free solder alloys such as SAC present a challenge in terms of the limits of oven temperature profile adjustment and requirements of specialized through-hole components that must be hand soldered with solder wire or cannot reasonably withstand the high temperatures directed at circuit boards as they travel on the conveyor of the reflow oven. The reflow soldering of through-hole components using solder paste in a convection oven process is called in
https://en.wikipedia.org/wiki/Transport%20Driver%20Interface
The Transport Driver Interface or TDI is the protocol understood by the upper edge of the Transport layer of the Microsoft Windows kernel network stack. Transport Providers are implementations of network protocols such as TCP/IP, NetBIOS, and AppleTalk. When user-mode binaries are created by compiling and linking, an entity called a TDI client is linked into the binary. TDI clients are provided with the compiler. The user-mode binary uses the user-mode API of whatever network protocol is being used, which in turn causes the TDI client to emit TDI commands into the Transport Provider. Typical TDI commands are TDI_SEND, TDI_CONNECT, TDI_RECEIVE. The purpose of the Transport Driver Interface is to provide an abstraction layer, permitting simplification of the TDI clients. See also Windows Vista networking technologies References Windows XP Driver Development Kit documentation. Further reading Network protocols Windows communication and services
https://en.wikipedia.org/wiki/Don%20Zagier
Don Bernard Zagier (born 29 June 1951) is an American-German mathematician whose main area of work is number theory. He is currently one of the directors of the Max Planck Institute for Mathematics in Bonn, Germany. He was a professor at the Collège de France in Paris from 2006 to 2014. Since October 2014, he is also a Distinguished Staff Associate at the International Centre for Theoretical Physics (ICTP). Background Zagier was born in Heidelberg, West Germany. His mother was a psychiatrist, and his father was the dean of instruction at the American College of Switzerland. His father held five different citizenships, and he spent his youth living in many different countries. After finishing high school (at age 13) and attending Winchester College for a year, he studied for three years at MIT, completing his bachelor's and master's degrees and being named a Putnam Fellow in 1967 at the age of 16. He then wrote a doctoral dissertation on characteristic classes under Friedrich Hirzebruch at Bonn, receiving his PhD at 20. He received his Habilitation at the age of 23, and was named professor at the age of 24. Work Zagier collaborated with Hirzebruch in work on Hilbert modular surfaces. Hirzebruch and Zagier coauthored Intersection numbers of curves on Hilbert modular surfaces and modular forms of Nebentypus, where they proved that intersection numbers of algebraic cycles on a Hilbert modular surface occur as Fourier coefficients of a modular form. Stephen Kudla, John Millson and others generalized this result to intersection numbers of algebraic cycles on arithmetic quotients of symmetric spaces. One of his results is a joint work with Benedict Gross (the so-called Gross–Zagier formula). This formula relates the first derivative of the complex L-series of an elliptic curve evaluated at 1 to the height of a certain Heegner point. This theorem has some applications, including implying cases of the Birch and Swinnerton-Dyer conjecture, along with being an ingredien
https://en.wikipedia.org/wiki/Metagenomics
Metagenomics is the study of genetic material recovered directly from environmental or clinical samples by a method called sequencing. The broad field may also be referred to as environmental genomics, ecogenomics, community genomics or microbiomics. While traditional microbiology and microbial genome sequencing and genomics rely upon cultivated clonal cultures, early environmental gene sequencing cloned specific genes (often the 16S rRNA gene) to produce a profile of diversity in a natural sample. Such work revealed that the vast majority of microbial biodiversity had been missed by cultivation-based methods. Because of its ability to reveal the previously hidden diversity of microscopic life, metagenomics offers a powerful way of understanding the microbial world that might revolutionize understanding of biology. As the price of DNA sequencing continues to fall, metagenomics now allows microbial ecology to be investigated at a much greater scale and detail than before. Recent studies use either "shotgun" or PCR directed sequencing to get largely unbiased samples of all genes from all the members of the sampled communities. Etymology The term "metagenomics" was first used by Jo Handelsman, Robert M. Goodman, Michelle R. Rondon, Jon Clardy, and Sean F. Brady, and first appeared in publication in 1998. The term metagenome referenced the idea that a collection of genes sequenced from the environment could be analyzed in a way analogous to the study of a single genome. In 2005, Kevin Chen and Lior Pachter (researchers at the University of California, Berkeley) defined metagenomics as "the application of modern genomics technique without the need for isolation and lab cultivation of individual species". History Conventional sequencing begins with a culture of identical cells as a source of DNA. However, early metagenomic studies revealed that there are probably large groups of microorganisms in many environments that cannot be cultured and thus cannot be sequenced.
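The "profile of diversity" produced by 16S rRNA surveys can be illustrated with a toy calculation. The function name, the example read labels and the choice of the Shannon index are illustrative assumptions, not a description of any particular metagenomics pipeline.

```python
import math
from collections import Counter

def shannon_diversity(taxon_labels):
    """Shannon index H = -sum(p_i * ln p_i) over taxon relative abundances.

    A stand-in for the diversity profile that 16S surveys yield; real
    pipelines first cluster or denoise reads into taxa before counting.
    """
    counts = Counter(taxon_labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# Hypothetical taxon assignments for a handful of sequenced reads:
reads = ["Bacteroides", "Bacteroides", "Prevotella", "Clostridium", "Prevotella"]
print(round(shannon_diversity(reads), 3))
```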
https://en.wikipedia.org/wiki/Common%20knowledge%20%28logic%29
Common knowledge is a special kind of knowledge for a group of agents. There is common knowledge of p in a group of agents G when all the agents in G know p, they all know that they know p, they all know that they all know that they know p, and so on ad infinitum. It can be denoted as . The concept was first introduced in the philosophical literature by David Kellogg Lewis in his study Convention (1969). The sociologist Morris Friedell defined common knowledge in a 1969 paper. It was first given a mathematical formulation in a set-theoretical framework by Robert Aumann (1976). Computer scientists grew an interest in the subject of epistemic logic in general – and of common knowledge in particular – starting in the 1980s. There are numerous puzzles based upon the concept which have been extensively investigated by mathematicians such as John Conway. The philosopher Stephen Schiffer, in his 1972 book Meaning, independently developed a notion he called "mutual knowledge" () which functions quite similarly to Lewis's and Friedel's 1969 "common knowledge". If a trustworthy announcement is made in public, then it becomes common knowledge; However, if it is transmitted to each agent in private, it becomes mutual knowledge but not common knowledge. Even if the fact that "every agent in the group knows p" () is transmitted to each agent in private, it is still not common knowledge: . But, if any agent publicly announces their knowledge of p, then it becomes common knowledge that they know p (viz. ). If every agent publicly announces their knowledge of p, p becomes common knowledge . Example Puzzle The idea of common knowledge is often introduced by some variant of induction puzzles (e.g. Muddy children puzzle): On an island, there are k people who have blue eyes, and the rest of the people have green eyes. At the start of the puzzle, no one on the island ever knows their own eye color. By rule, if a person on the island ever discovers they have blue eyes, that per
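One standard way of writing the operators discussed above uses E_G for "everyone in G knows" and C_G for common knowledge; this notation is a common convention chosen here, not necessarily that of the sources cited in the excerpt.

```latex
% Mutual ("everyone knows") and common knowledge operators for a group G:
E_{G}\, p \;=\; \bigwedge_{i \in G} K_{i}\, p,
\qquad
C_{G}\, p \;=\; E_{G}\, p \,\wedge\, E_{G} E_{G}\, p \,\wedge\, \cdots
         \;=\; \bigwedge_{k \ge 1} E_{G}^{\,k}\, p .
% A private announcement of p to every agent yields only E_G p (mutual
% knowledge); no finite iteration E_G^k p by itself amounts to C_G p, which
% is the distinction drawn in the excerpt above.
```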
https://en.wikipedia.org/wiki/Helly%27s%20theorem
Helly's theorem is a basic result in discrete geometry on the intersection of convex sets. It was discovered by Eduard Helly in 1913, but not published by him until 1923, by which time alternative proofs by and had already appeared. Helly's theorem gave rise to the notion of a Helly family. Statement Let be a finite collection of convex subsets of , with . If the intersection of every of these sets is nonempty, then the whole collection has a nonempty intersection; that is, For infinite collections one has to assume compactness: Let be a collection of compact convex subsets of , such that every subcollection of cardinality at most has nonempty intersection. Then the whole collection has nonempty intersection. Proof We prove the finite version, using Radon's theorem as in the proof by . The infinite version then follows by the finite intersection property characterization of compactness: a collection of closed subsets of a compact space has a non-empty intersection if and only if every finite subcollection has a non-empty intersection (once you fix a single set, the intersection of all others with it are closed subsets of a fixed compact space). The proof is by induction: Base case: Let . By our assumptions, for every there is a point that is in the common intersection of all with the possible exception of . Now we apply Radon's theorem to the set which furnishes us with disjoint subsets of such that the convex hull of intersects the convex hull of . Suppose that is a point in the intersection of these two convex hulls. We claim that Indeed, consider any We shall prove that Note that the only element of that may not be in is . If , then , and therefore . Since is convex, it then also contains the convex hull of and therefore also . Likewise, if , then , and by the same reasoning . Since is in every , it must also be in the intersection. Above, we have assumed that the points are all distinct. If this is not the case, say for some , th
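For reference, the finite statement in symbols, with notation chosen for this restatement:

```latex
% Helly's theorem (finite version):
% let X_1, \dots, X_n \subseteq \mathbb{R}^{d} be convex sets with n > d.
% If every d+1 of them share a point, i.e.
\bigcap_{j \in J} X_{j} \neq \varnothing
  \quad \text{for every } J \subseteq \{1,\dots,n\} \text{ with } |J| = d+1,
% then the whole family shares a point:
\bigcap_{j=1}^{n} X_{j} \neq \varnothing .
% The infinite version stated above additionally assumes compactness and that
% every subfamily of at most d+1 sets has nonempty intersection.
```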
https://en.wikipedia.org/wiki/Incertae%20sedis
or is a term used for a taxonomic group where its broader relationships are unknown or undefined. Alternatively, such groups are frequently referred to as "enigmatic taxa". In the system of open nomenclature, uncertainty at specific taxonomic levels is indicated by (of uncertain family), (of uncertain suborder), (of uncertain order) and similar terms. Examples The fossil plant Paradinandra suecica could not be assigned to any family, but was placed incertae sedis within the order Ericales when described in 2001. The fossil Gluteus minimus, described in 1975, could not be assigned to any known animal phylum. The genus is therefore incertae sedis within the kingdom Animalia. While it was unclear to which order the New World vultures (family Cathartidae) should be assigned, they were placed in Aves incertae sedis. It was later agreed to place them in a separate order, Cathartiformes. Bocage's longbill, Motacilla bocagii, previously known as Amaurocichla bocagii, is a species of passerine bird that belongs to the superfamily Passeroidea. Since it was unclear to which family it belongs, it was classified as Passeroidea incertae sedis, until a 2015 phylogenetic study placed it in Motacilla of Motacillidae. Parakaryon myojinensis, a single-celled organism that is apparently distinct from prokaryotes and eukaryotes. Metallogenium is a bacteria that can form star-shaped minerals. In formal nomenclature When formally naming a taxon, uncertainty about its taxonomic classification can be problematic. The International Code of Nomenclature for algae, fungi, and plants, stipulates that "species and subdivisions of genera must be assigned to genera, and infraspecific taxa must be assigned to species, because their names are combinations", but ranks higher than the genus may be assigned incertae sedis. Reason for use Poor description This excerpt from a 2007 scientific paper about crustaceans of the Kuril–Kamchatka Trench and the Japan Trench describes typical circumstan
https://en.wikipedia.org/wiki/Francisella%20tularensis
Francisella tularensis is a pathogenic species of Gram-negative coccobacillus, an aerobic bacterium. It is nonspore-forming, nonmotile, and the causative agent of tularemia, the pneumonic form of which is often lethal without treatment. It is a fastidious, facultative intracellular bacterium, which requires cysteine for growth. Due to its low infectious dose, ease of spread by aerosol, and high virulence, F. tularensis is classified as a Tier 1 Select Agent by the U.S. government, along with other potential agents of bioterrorism such as Yersinia pestis, Bacillus anthracis, and Ebola virus. When found in nature, Francisella tularensis can survive for several weeks at low temperatures in animal carcasses, soil, and water. In the laboratory, F. tularensis appears as small rods (0.2 by 0.2 µm), and is grown best at 35–37 °C. History This species was discovered in ground squirrels in Tulare County, California in 1911. Bacterium tularense was soon isolated by George Walter McCoy (1876–1952) of the US Plague Lab in San Francisco and reported in 1912. In 1922, Edward Francis (1872–1957), a physician and medical researcher from Ohio, discovered that Bacterium tularense was the causative agent of tularemia, after studying several cases with symptoms of the disease. Later, it became known as Francisella tularensis, in honor of the discovery by Francis. The disease was also described in the Fukushima region of Japan by Hachiro Ohara in the 1920s, where it was associated with hunting rabbits. In 1938, Soviet bacteriologist Vladimir Dorofeev (1911–1988) and his team recreated the infectious cycle of the pathogen in humans, and his team was the first to create protection measures. In 1947, Dorofeev independently isolated the pathogen that Francis discovered in 1922. Hence it is commonly known as Francisella dorofeev in former Soviet countries. Classification Three subspecies (biovars) of F. tularensis are recognised (as of 2020): F. t. tularensis (or type A), found predominan
https://en.wikipedia.org/wiki/Microarchitecture
In electronics, computer science and computer engineering, microarchitecture, also called computer organization and sometimes abbreviated as µarch or uarch, is the way a given instruction set architecture (ISA) is implemented in a particular processor. A given ISA may be implemented with different microarchitectures; implementations may vary due to different goals of a given design or due to shifts in technology. Computer architecture is the combination of microarchitecture and instruction set architecture. Relation to instruction set architecture The ISA is roughly the same as the programming model of a processor as seen by an assembly language programmer or compiler writer. The ISA includes the instructions, execution model, processor registers, address and data formats among other things. The microarchitecture includes the constituent parts of the processor and how these interconnect and interoperate to implement the ISA. The microarchitecture of a machine is usually represented as (more or less detailed) diagrams that describe the interconnections of the various microarchitectural elements of the machine, which may be anything from single gates and registers, to complete arithmetic logic units (ALUs) and even larger elements. These diagrams generally separate the datapath (where data is placed) and the control path (which can be said to steer the data). The person designing a system usually draws the specific microarchitecture as a kind of data flow diagram. Like a block diagram, the microarchitecture diagram shows microarchitectural elements such as the arithmetic and logic unit and the register file as a single schematic symbol. Typically, the diagram connects those elements with arrows, thick lines and thin lines to distinguish between three-state buses (which require a three-state buffer for each device that drives the bus), unidirectional buses (always driven by a single source, such as the way the address bus on simpler computers is always driven by
https://en.wikipedia.org/wiki/Scan%20line
A scan line (also scanline) is one line, or row, in a raster scanning pattern, such as a line of video on a cathode ray tube (CRT) display of a television set or computer monitor. On CRT screens the horizontal scan lines are visually discernible, even when viewed from a distance, as alternating colored lines and black lines, especially when a progressive scan signal with below maximum vertical resolution is displayed. This is sometimes used today as a visual effect in computer graphics. The term is used, by analogy, for a single row of pixels in a raster graphics image. Scan lines are important in representations of image data, because many image file formats have special rules for data at the end of a scan line. For example, there may be a rule that each scan line starts on a particular boundary (such as a byte or word; see for example BMP file format). This means that even otherwise compatible raster data may need to be analyzed at the level of scan lines in order to convert between formats. See also Interlaced video Scanline rendering Flicker (screen) Stroboscopic effect References Computer graphics Image processing Display technology Video signal Television technology Television terminology
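The BMP example can be made concrete: each stored scan line is padded so that its length is a multiple of four bytes. Below is a minimal sketch of that stride rule; the helper name is ours, but the padding rule itself is the widely documented BMP convention.

```python
def bmp_row_stride(width_px: int, bits_per_pixel: int) -> int:
    """Bytes per stored scan line in a BMP: rows are padded to 4-byte multiples."""
    return ((width_px * bits_per_pixel + 31) // 32) * 4

# A 3-pixel-wide, 24-bit image needs 9 bytes of pixel data per row,
# but each stored scan line occupies 12 bytes because of the padding.
assert bmp_row_stride(3, 24) == 12
assert bmp_row_stride(4, 24) == 12   # exactly on the boundary: no padding needed
assert bmp_row_stride(5, 24) == 16
```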
https://en.wikipedia.org/wiki/Field%20equation
In theoretical physics and applied mathematics, a field equation is a partial differential equation which determines the dynamics of a physical field, specifically the time evolution and spatial distribution of the field. The solutions to the equation are mathematical functions which correspond directly to the field, as functions of time and space. Since the field equation is a partial differential equation, there are families of solutions which represent a variety of physical possibilities. Usually, there is not just a single equation, but a set of coupled equations which must be solved simultaneously. Field equations are not ordinary differential equations since a field depends on space and time, which requires at least two variables. Whereas the "wave equation", the "diffusion equation", and the "continuity equation" all have standard forms (and various special cases or generalizations), there is no single, special equation referred to as "the field equation". The topic broadly splits into equations of classical field theory and quantum field theory. Classical field equations describe many physical properties like temperature of a substance, velocity of a fluid, stresses in an elastic material, electric and magnetic fields from a current, etc. They also describe the fundamental forces of nature, like electromagnetism and gravity. In quantum field theory, particles or systems of "particles" like electrons and photons are associated with fields, allowing for infinite degrees of freedom (unlike finite degrees of freedom in particle mechanics) and variable particle numbers which can be created or annihilated. Generalities Origin Usually, field equations are postulated (like the Einstein field equations and the Schrödinger equation, which underlies all quantum field equations) or obtained from the results of experiments (like Maxwell's equations). The extent of their validity is their ability to correctly predict and agree with experimental results. From a theor
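Two of the field equations named above, written out for concreteness in their standard forms (sign and unit conventions vary between texts):

```latex
% Einstein field equations (with cosmological constant \Lambda):
G_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu},
% relating spacetime geometry (G_{\mu\nu}, built from the metric) to the
% stress-energy tensor T_{\mu\nu}.
%
% Maxwell's equations in covariant form, sourced by the four-current J^{\nu}:
\partial_{\mu} F^{\mu\nu} = \mu_{0} J^{\nu},
\qquad
\partial_{[\alpha} F_{\beta\gamma]} = 0 .
% Both are coupled partial differential equations for fields defined over
% space and time, as described in the excerpt above.
```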
https://en.wikipedia.org/wiki/Molar%20heat%20capacity
The molar heat capacity of a chemical substance is the amount of energy that must be added, in the form of heat, to one mole of the substance in order to cause an increase of one unit in its temperature. Alternatively, it is the heat capacity of a sample of the substance divided by the amount of substance of the sample; or also the specific heat capacity of the substance times its molar mass. The SI unit of molar heat capacity is joule per kelvin per mole, J⋅K−1⋅mol−1. Like the specific heat, the measured molar heat capacity of a substance, especially a gas, may be significantly higher when the sample is allowed to expand as it is heated (at constant pressure, or isobaric) than when it is heated in a closed vessel that prevents expansion (at constant volume, or isochoric). The ratio between the two, however, is the same heat capacity ratio obtained from the corresponding specific heat capacities. This property is most relevant in chemistry, when amounts of substances are often specified in moles rather than by mass or volume. The molar heat capacity generally increases with the molar mass, often varies with temperature and pressure, and is different for each state of matter. For example, at atmospheric pressure, the (isobaric) molar heat capacity of water just above the melting point is about 76 J⋅K−1⋅mol−1, but that of ice just below that point is about 37.84 J⋅K−1⋅mol−1. While the substance is undergoing a phase transition, such as melting or boiling, its molar heat capacity is technically infinite, because the heat goes into changing its state rather than raising its temperature. The concept is not appropriate for substances whose precise composition is not known, or whose molar mass is not well defined, such as polymers and oligomers of indeterminate molecular size. A closely related property of a substance is the heat capacity per mole of atoms, or atom-molar heat capacity, in which the heat capacity of the sample is divided by the number of moles of at
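A short worked conversion between specific and molar heat capacity, with rounded values chosen to match the water figure quoted above:

```latex
% Molar heat capacity from specific heat capacity and molar mass:
c_{m} \;=\; \frac{C}{n} \;=\; M\, c ,
% where C is the heat capacity of the sample, n the amount of substance,
% M the molar mass and c the specific heat capacity.
% Example (liquid water, isobaric, near room conditions, rounded):
c \approx 4.18\ \mathrm{J\,K^{-1}\,g^{-1}}, \quad
M \approx 18.0\ \mathrm{g\,mol^{-1}}
\;\Longrightarrow\;
c_{m} \approx 4.18 \times 18.0 \approx 75\ \mathrm{J\,K^{-1}\,mol^{-1}},
% consistent with the roughly 76 J K^{-1} mol^{-1} cited in the excerpt.
```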
https://en.wikipedia.org/wiki/Gigabyte%20Technology
Gigabyte Technology (branded as GIGABYTE or sometimes GIGA-BYTE; formally GIGA-BYTE Technology Co., Ltd.) is a Taiwanese manufacturer and distributor of computer hardware. Gigabyte's principal business is motherboards. It shipped 4.8 million motherboards in the first quarter of 2015, which allowed it to become the leading motherboard vendor. Gigabyte also manufactures custom graphics cards and laptop computers (including thin and light laptops under its Aero sub-brand). In 2010, Gigabyte was ranked 17th in "Taiwan's Top 20 Global Brands" by the Taiwan External Trade Development Council. The company is publicly held and traded on the Taiwan Stock Exchange, stock ID number . History Gigabyte Technology was established in 1986 by Pei-Cheng Yeh. One of Gigabyte's key advertised features on its motherboards is its "Ultra Durable" construction, advertised with "all solid capacitors". On 8 August 2006 Gigabyte announced a joint venture with Asus. Gigabyte developed the world's first software-controlled power supply in July 2007. An innovative method to charge the iPad and iPhone on the computer was introduced by Gigabyte in April 2010. Gigabyte launched the world's first Z68 motherboard on 31 May 2011, with an on-board mSATA connection for Intel SSD and Smart Response Technology. On 2 April 2012, Gigabyte released the world's first motherboard with 60A ICs from International Rectifier. In 2023, researchers at firmware-focused cybersecurity company Eclypsium said 271 models of Gigabyte motherboards are affected by backdoor vulnerabilities. Whenever a computer with the affected Gigabyte motherboard restarts, code within the motherboard's firmware initiates an updater program that downloads and executes another piece of software. Gigabyte has said it plans to fix the issues. Products Gigabyte designs and manufactures motherboards for both AMD and Intel platforms, and also produces graphics cards and notebooks in partnership with AMD and Nvidia, including Nvidia's Tur
https://en.wikipedia.org/wiki/Activated%20sludge
The activated sludge process is a type of biological wastewater treatment process for treating sewage or industrial wastewaters using aeration and a biological floc composed of bacteria and protozoa. It uses air (or oxygen) and microorganisms to biologically oxidize organic pollutants, producing a waste sludge (or floc) containing the oxidized material. The activated sludge process for removing carbonaceous pollution begins with an aeration tank where air (or oxygen) is injected into the waste water. This is followed by a settling tank to allow the biological flocs (the sludge blanket) to settle, thus separating the biological sludge from the clear treated water. Part of the waste sludge is recycled to the aeration tank and the remaining waste sludge is removed for further treatment and ultimate disposal. Plant types include package plants, oxidation ditch, deep shaft/vertical treatment, surface-aerated basins, and sequencing batch reactors (SBRs). Aeration methods include diffused aeration, surface aerators (cones) or, rarely, pure oxygen aeration. Sludge bulking can occur which makes activated sludge difficult to settle and frequently has an adverse impact on final effluent quality. Treating sludge bulking and managing the plant to avoid a recurrence requires skilled management and may require full-time staffing of a works to allow immediate intervention. A new development of the activated sludge process is the Nereda process which produces a granular sludge that settles very well. Purpose The activated sludge process is a biological process used to oxidise carbonaceous biological matter, oxidising nitrogenous matter (mainly ammonium and nitrogen) in biological matter, and removing nutrients (nitrogen and phosphorus). Process description The process takes advantage of aerobic micro-organisms that can digest organic matter in sewage, and clump together by flocculation entrapping fine particulate matter as they do so. It thereby produces a liquid that is relat
https://en.wikipedia.org/wiki/X-Face
An X-Face is a small bitmap (48 × 48 pixels, black and white) image which is added to a Usenet posting or e-mail message, typically showing a picture of the author's face. The image data is included in the posting as encoded text, and attached with an 'X-Face' header. It was devised by James Ashton. It is one of the outgrowths of the Vismon program developed at Bell Labs in the 1980s. While many programs support X-Face, most of them are free software and based on Unix or its variations, such as KMail or Sylpheed. The most common email programs though, as used in business and most domestic environments, do not handle X-Face natively, and the information is silently ignored. Even where Unix is widely used (university and research environments), it has never been adopted to maximum potential (for example, by searching for senders by X-Face). A further development is the Face header developed in 2005, which also allows for color images in PNG format, and can be used by the Thunderbird addon Display Contact Photo, as well as some other mail readers. Another approach to include the sender's picture in an e-mail was used by Apple: Mail displayed the picture if the mail included the X-Image-URL header. In 1992, this feature was originally implemented in NeXTmail, Mail.app's ancestor. X-Image-URL accepts http or (anonymous) ftp to download the picture; typical size 64x64 pixels. As of Mail v4.5, the feature is no longer supported. See also iChat has a similar though not compatible feature called picture icons XBM is a general monochrome image format supported by xbm2xface.pl FFmpeg and Netpbm tools can create X-Face images Vismon References External links - utilities and a library for converting to and from the X-Face format Email Usenet
https://en.wikipedia.org/wiki/CAS%20latency
Column address strobe latency, also called CAS latency or CL, is the delay in clock cycles between the READ command and the moment data is available. In asynchronous DRAM, the interval is specified in nanoseconds (absolute time). In synchronous DRAM, the interval is specified in clock cycles. Because the latency is dependent upon a number of clock ticks instead of absolute time, the actual time for an SDRAM module to respond to a CAS event might vary between uses of the same module if the clock rate differs. RAM operation background Dynamic RAM is arranged in a rectangular array. Each row is selected by a horizontal word line. Sending a logical high signal along a given row enables the MOSFETs present in that row, connecting each storage capacitor to its corresponding vertical bit line. Each bit line is connected to a sense amplifier that amplifies the small voltage change produced by the storage capacitor. This amplified signal is then output from the DRAM chip as well as driven back up the bit line to refresh the row. When no word line is active, the array is idle and the bit lines are held in a precharged state, with a voltage halfway between high and low. This indeterminate signal is deflected towards high or low by the storage capacitor when a row is made active. To access memory, a row must first be selected and loaded into the sense amplifiers. This row is then active, and columns may be accessed for read or write. The CAS latency is the delay between the time at which the column address and the column address strobe signal are presented to the memory module and the time at which the corresponding data is made available by the memory module. The desired row must already be active; if it is not, additional time is required. As an example, a typical 1 GiB SDRAM memory module might contain eight separate one-gibibit DRAM chips, each offering 128 MiB of storage space. Each chip is divided internally into eight banks of 2^27 = 128 Mibits, each of which co
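Because CL is counted in clock cycles, the same CL value corresponds to different absolute delays at different clock rates. A minimal sketch in Python, with example numbers only (a module running its memory clock at 1600 MHz with CL 16):

# Absolute CAS latency (ns) = CL cycles / memory clock frequency
cas_latency_cycles = 16          # CL, in clock cycles (example value)
clock_frequency_hz = 1600e6      # memory clock in Hz (example value)

latency_ns = cas_latency_cycles / clock_frequency_hz * 1e9
print(f"{latency_ns:.1f} ns")    # 10.0 ns for these example numbers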
https://en.wikipedia.org/wiki/Unreachable%20code
In computer programming, unreachable code is part of the source code of a program which can never be executed because there exists no control flow path to the code from the rest of the program. Unreachable code is sometimes also called dead code, although dead code may also refer to code that is executed but has no effect on the output of a program. Unreachable code is generally considered undesirable for several reasons: It uses memory unnecessarily It can cause unnecessary use of the CPU's instruction cache This can also decrease data locality Time and effort may be spent testing, maintaining and documenting code which is never used Sometimes an automated test is the only thing using the code. However, unreachable code can have some legitimate uses, like providing a library of functions for calling or jumping to manually via a debugger while the program is halted after a breakpoint. This is particularly useful for examining and pretty-printing the internal state of the program. It may make sense to have such code in the shipped product, so that a developer can attach a debugger to a client's running instance. Causes Unreachable code can exist for many reasons, such as: programming errors in complex conditional branches a consequence of the internal transformations performed by an optimizing compiler; incomplete testing of new or modified code Legacy code Code superseded by another implementation Unreachable code that a programmer decided not to delete because it is mingled with reachable code Potentially reachable code that current use cases never need Dormant code that is kept intentionally in case it is needed later Code used only for debugging. Legacy code is that which was once useful but is no longer used or required. But unreachable code may also be part of a complex library, module or routine where it is useful to others or under conditions which are not met in a particular scenario. An example of such a conditionally unreachable code m
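A minimal illustration in Python, with made-up function names: the final return can never execute because both branches above it already return, and a branch whose condition is always false is likewise unreachable.

def absolute_value(x):
    if x >= 0:
        return x
    else:
        return -x
    return 0  # unreachable: both branches above already return

DEBUG = False

def log(message):
    if DEBUG and not DEBUG:   # condition can never be true
        print(message)        # unreachable (dead) branch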
https://en.wikipedia.org/wiki/Active%20structure
An active structure (also known as a smart or adaptive structure) is a mechanical structure with the ability to alter its configuration, form or properties in response to changes in the environment. The term active structure also refers to structures that, unlike traditional engineering structures (e.g., bridges, buildings), require constant motion and hence power input to remain stable. The advantage of active structures is that they can be far more massive than a traditional static structure: an example would be a space fountain, a building that reaches into space. Function The result of the activity is a structure more suited for the type and magnitude of the load it is carrying. For example, an orientation change of a beam could reduce the maximum stress or strain level, while a shape change could render a structure less susceptible to dynamic vibrations. A good example of an adaptive structure is the human body where the skeleton carries a wide range of loads and the muscles change its configuration to do so. Consider carrying a backpack. If the upper body did not adjust the centre of mass of the whole system slightly by leaning forward, the person would fall on their back. An active structure consists of three integral components besides the load carrying part. They are the sensors, the processor and the actuators. In the case of a human body, the sensory nerves are the sensors which gather information of the environment. The brain acts as the processor to evaluate the information and decide to act accordingly and therefore instructs the muscles, which act as actuators to respond. In heavy engineering, there is already an emerging trend to incorporate activation into bridges and domes to minimize vibrations under wind and earthquake loads. Aviation engineering and aerospace engineering have been the main driving force in developing modern active structures. Aircraft (and spacecraft) require adaptation because they are exposed to many different environments
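The sensor, processor and actuator split described above can be pictured as a simple feedback loop. The following Python fragment is only a conceptual sketch with made-up names and a made-up damping rule, not a model of any real structure:

def read_sensors():
    # placeholder: would return measured strain, acceleration, etc.
    return {"vibration_amplitude": 0.8}

def command_actuators(force):
    # placeholder: would drive piezoelectric patches, hydraulic jacks, etc.
    print(f"applying counter-force {force:.2f}")

def control_step(gain=0.5):
    state = read_sensors()                        # sensors gather information
    force = -gain * state["vibration_amplitude"]  # processor decides a response
    command_actuators(force)                      # actuators act on the structure

control_step()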
https://en.wikipedia.org/wiki/The%20Fog%20Horn
"The Fog Horn" is a 1951 science fiction short story by American writer Ray Bradbury, the first in his collection The Golden Apples of the Sun. The story was the basis for the 1953 film The Beast from 20,000 Fathoms. Plot The plot follows Johnny, the protagonist and narrator, and his boss, McDunn, who are putting in a night's work at a remote lighthouse in late November. The lighthouse's resonating fog horn attracts a sea monster. This is in fact the third time the monster has visited the lighthouse: he has been attracted by the same fog horn on the same night for the last two years. McDunn attributes the monster's actions to feelings of unrequited love for the lighthouse, whose fog horn sounds exactly like the wailings of the sea monster himself. The fog horn tricks the monster into thinking he has found another of his kind, one who acts as though the monster did not even exist. McDunn and Johnny turn off the fog horn, and in a rage, the monster destroys the lighthouse before retreating to the sea. The lighthouse is reconstructed with reinforced concrete and Johnny finds a new job away from the lighthouse. Years later, Johnny returns and asks McDunn if the monster ever returned; it never did. McDunn hypothesizes that the monster will continue to wait in the depths of the world. Background The original title of the story was "The Beast from 20,000 Fathoms". It was published in The Saturday Evening Post. Meanwhile, a film with a similar theme of prehistoric sea monster was being shot under the working title of Monster from Beneath the Sea. Later the producers, who wished to capitalize on Bradbury's reputation and popularity, bought the rights to Bradbury's story and changed their film's title. Bradbury then changed the title of his story to "The Fog Horn". The monster of the film was based on the illustration of The Saturday Evening Post. Bradbury says that the idea for the story came from seeing the ruins of a demolished roller coaster on a Los Angeles-area bea
https://en.wikipedia.org/wiki/Biofouling
Biofouling or biological fouling is the accumulation of microorganisms, plants, algae, or small animals where it is not wanted on surfaces such as ship and submarine hulls, devices such as water inlets, pipework, grates, ponds, and rivers that cause degradation to the primary purpose of that item. Such accumulation is referred to as epibiosis when the host surface is another organism and the relationship is not parasitic. Since biofouling can occur almost anywhere water is present, biofouling poses risks to a wide variety of objects such as boat hulls and equipment, medical devices and membranes, as well as to entire industries, such as paper manufacturing, food processing, underwater construction, and desalination plants. Anti-fouling is the ability of specifically designed materials (such as toxic biocide paints, or non-toxic paints) to remove or prevent biofouling. The buildup of biofouling on marine vessels poses a significant problem. In some instances, the hull structure and propulsion systems can be damaged. The accumulation of biofoulers on hulls can increase both the hydrodynamic volume of a vessel and the hydrodynamic friction, leading to increased drag of up to 60%. The drag increase has been seen to decrease speeds by up to 10%, which can require up to a 40% increase in fuel to compensate. With fuel typically comprising up to half of marine transport costs, antifouling methods save the shipping industry a considerable amount of money. Further, increased fuel use due to biofouling contributes to adverse environmental effects and is predicted to increase emissions of carbon dioxide and sulfur dioxide between 38% and 72% by 2020, respectively. Biology Biofouling organisms are highly diverse, and extend far beyond the attachment of barnacles and seaweeds. According to some estimates, over 1,700 species comprising over 4,000 organisms are responsible for biofouling. Biofouling is divided into microfouling—biofilm formation and bacterial adhesion—and macrof
https://en.wikipedia.org/wiki/Primary%20energy
Primary energy (PE) is the energy found in nature that has not been subjected to any human engineered conversion process. It encompasses energy contained in raw fuels and other forms of energy, including waste, received as input to a system. Primary energy can be non-renewable or renewable. Total primary energy supply (TPES) is the sum of production and imports, plus or minus stock changes, minus exports and international bunker storage. The International Recommendations for Energy Statistics (IRES) prefers total energy supply (TES) to refer to this indicator. These expressions are often used to describe the total energy supply of a national territory. Secondary energy is a carrier of energy, such as electricity. These are produced by conversion from a primary energy source. Primary energy is used as a measure in energy statistics in the compilation of energy balances, as well as in the field of energetics. In energetics, a primary energy source (PES) refers to the energy forms required by the energy sector to generate the supply of energy carriers used by human society. Primary energy only counts raw energy and not usable energy and fails to account well for energy losses, particularly the large losses in thermal sources. It therefore generally grossly undercounts non thermal renewable energy sources . Examples of sources Primary energy sources should not be confused with the energy system components (or conversion processes) through which they are converted into energy carriers. Usable energy Primary energy sources are transformed in energy conversion processes to more convenient forms of energy that can directly be used by society, such as electrical energy, refined fuels, or synthetic fuels such as hydrogen fuel. In the field of energetics, these forms are called energy carriers and correspond to the concept of "secondary energy" in energy statistics. Conversion to energy carriers (or secondary energy) Energy carriers are energy forms which have been tra
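The total energy supply described above is essentially bookkeeping: production plus imports, minus exports and international bunkers, plus or minus stock changes. A minimal sketch in Python with entirely hypothetical figures:

# Hypothetical figures in petajoules (PJ), for illustration only
production = 5000
imports = 1200
exports = 800
international_bunkers = 150   # fuel delivered to international shipping/aviation
stock_build = 50              # net additions to stocks reduce the supply available

total_energy_supply = production + imports - exports - international_bunkers - stock_build
print(total_energy_supply, "PJ")   # 5200 PJ with these made-up numbers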
https://en.wikipedia.org/wiki/Incomplete%20polylogarithm
In mathematics, the Incomplete Polylogarithm function is related to the polylogarithm function. It is sometimes known as the incomplete Fermi–Dirac integral or the incomplete Bose–Einstein integral. It may be defined by: Li_s(b,z) = \frac{1}{\Gamma(s)} \int_b^\infty \frac{x^{s-1}}{e^x/z - 1}\,dx. Expanding about z=0 and integrating gives a series representation: Li_s(b,z) = \sum_{k=1}^\infty \frac{z^k}{k^s} \frac{\Gamma(s,kb)}{\Gamma(s)}, where Γ(s) is the gamma function and Γ(s,x) is the upper incomplete gamma function. Since Γ(s,0)=Γ(s), it follows that: Li_s(0,z) = Li_s(z), where Li_s(z) is the polylogarithm function. References GNU Scientific Library - Reference Manual https://www.gnu.org/software/gsl/manual/gsl-ref.html#SEC117 Special functions
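The series above can be evaluated numerically with the regularized upper incomplete gamma function. A small sketch in Python using SciPy, truncating the series at a fixed number of terms (which converges for |z| < 1):

import numpy as np
from scipy.special import gammaincc  # regularized upper incomplete gamma Q(s, x) = Gamma(s, x)/Gamma(s)

def incomplete_polylog(s, b, z, terms=200):
    k = np.arange(1, terms + 1)
    return np.sum(z**k / k**s * gammaincc(s, k * b))

# With b = 0 this reduces to the ordinary polylogarithm, e.g. Li_2(0.5) ~ 0.5822
print(incomplete_polylog(2, 0.0, 0.5))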
https://en.wikipedia.org/wiki/Machinery%27s%20Handbook
Machinery's Handbook for machine shop and drafting-room; a reference book on machine design and shop practice for the mechanical engineer, draftsman, toolmaker, and machinist (the full title of the 1st edition) is a classic reference work in mechanical engineering and practical workshop mechanics in one volume published by Industrial Press, New York, since 1914. The first edition was created by Erik Oberg (1881–1951) and Franklin D. Jones (1879–1967), who are still mentioned on the title page of the 29th edition (2012). Recent editions of the handbook contain chapters on mathematics, mechanics, materials, measuring, toolmaking, manufacturing, threading, gears, and machine elements, combined with excerpts from ANSI standards. The work is available in online and ebook form as well as print. During the decades from World War I to World War II, these phrases could refer to either of two competing reference books: McGraw-Hill's American Machinists' Handbook or Industrial Press's Machinery's Handbook. The former book ceased publication after the 8th edition (1945). (One short-lived spin-off appeared in 1955.) The latter book, Machinery's Handbook, is still regularly revised and updated, and it continues to be a "bible of the metalworking industries" today. Machinery's Handbook is apparently the direct inspiration for similar works in other countries, such as Sweden's Karlebo handbok (1st ed. 1936). Machinery's Encyclopedia In 1917, Oberg and Jones also published Machinery's Encyclopedia in 7 volumes. The handbook and encyclopedia are named after the monthly magazine Machinery (Industrial Press, 1894–1973), where the two were consulting editors. See also Machinist Calculator Kempe's Engineers Year-Book External links Industrial Press website History of Machinery's Handbook References 1914 non-fiction books Mechanical engineering Handbooks and manuals Metallurgical industry of the United States
https://en.wikipedia.org/wiki/Storage%20organ
A storage organ is a part of a plant specifically modified for storage of energy (generally in the form of carbohydrates) or water. Storage organs often grow underground, where they are better protected from attack by herbivores. Plants that have an underground storage organ are called geophytes in the Raunkiær plant life-form classification system. Storage organs often, but not always, act as perennating organs which enable plants to survive adverse conditions (such as cold, excessive heat, lack of light or drought). Relationship to perennating organ Storage organs may act as perennating organs ('perennating' as in perennial, meaning "through the year", used in the sense of continuing beyond the year and in due course lasting for multiple years). These are used by plants to survive adverse periods in the plant's life-cycle (e.g. caused by cold, excessive heat, lack of light or drought). During these periods, parts of the plant die and then when conditions become favourable again, re-growth occurs from buds in the perennating organs. For example, geophytes growing in woodland under deciduous trees (e.g. bluebells, trilliums) die back to underground storage organs during summer when tree leaf cover restricts light and water is less available. However, perennating organs need not be storage organs. After losing their leaves, deciduous trees grow them again from 'resting buds', which are the perennating organs of phanerophytes in the Raunkiær classification, but which do not specifically act as storage organs. Equally, storage organs need not be perennating organs. Many succulents have leaves adapted for water storage, which they retain in adverse conditions. Underground storage organ In common parlance, underground storage organs may be generically called roots, tubers, or bulbs, but to the botanist there is more specific technical nomenclature: True roots: Storage taproot — e.g. carrot Tuberous root or root tuber – e.g. Dahlia Modified stems: Bulb (a shor
https://en.wikipedia.org/wiki/Lepidium%20meyenii
Lepidium meyenii, known as maca or Peruvian ginseng, is an edible herbaceous biennial plant of the family Brassicaceae that is native to South America in the high Andes mountains of Peru and Bolivia. It was found at the Meseta de Bombón plateau close to Lake Junin in the late 1980s. It is grown for its fleshy hypocotyl that is fused with a taproot, which is typically dried, but may also be freshly cooked as a root vegetable. As a cash crop, it is primarily exported as a powder that may be raw, or processed further as a gelatinized starch or as an extract. If dried, it may be processed into a flour for baking or as a dietary supplement. Its Spanish and Quechua names include maca-maca, maino, ayak chichira, and ayak willku. History and controversy Antonio Vázquez de Espinosa gave a description of the plant following his visit to Peru circa 1598 and Bernabé Cobo gave a description of this plant in the early 17th century. Gerhard Walpers named the species Lepidium meyenii in 1843. In the 1990s, Gloria Chacon made a further distinction of a different species. She considered the widely cultivated natural maca of today to be a newer domesticated species, L. peruvianum. Most botanists doubt this distinction, however, and continue to call the cultivated maca L. meyenii. The Latin name recognized by the USDA similarly continues to be Lepidium meyenii. It has been debated whether it is botanically correct to consider meyenii and peruvianum to be distinct from one another. A 2015 multi-center study found differences in taxonomy, visual appearance, phytochemical profiles and DNA sequences when comparing L. meyenii and L. peruvianum, suggesting that they are in fact different and that their names should not be considered synonyms. Description The growth habit, size, and proportions of maca are roughly similar to those of radishes and turnips, to which it is related, but it also resembles a parsnip. The green, fragrant tops are short and lie along the ground. The thin, frilly
https://en.wikipedia.org/wiki/St.%20Brandon
Saint Brandon, also known as the Cargados Carajos Shoals, is an Indian Ocean archipelago of sand banks, shoals and islets belonging to Mauritius. It lies about northeast of the island of Mauritius. It consists of five island groups, with about 28-40 islands and islets in total, depending on seasonal storms and related sand movements. The archipelago is low-lying and is prone to substantial submersion in severe weather by tropical cyclones in the Mascarene Islands. It has an aggregate land area estimated variously at and . The islands have a small population of mostly fishermen, numbering 63 people in 2001. The bulk of this population, approximately 40 people, live on Île Raphael, with smaller settlements existing on Avocaré Island, L'Île Coco, and L'île du Sud. In the early 19th century, most of the islands were used as fishing stations. Today, only one company operates on the archipelago with three fishing stations and accommodation for sport fishermen on L'île du Sud and Île Raphael. A settlement on Albatross Island was abandoned in 1988. Thirteen of the thirty islands of St. Brandon held on a permanent lease since 1901 were subject to a legal dispute from 11 August 1995 until 30 July 2008 between a deceased Mauritian citizen called M. Talbot and the government of Mauritius as co-defendant and the Raphael Fishing Company, but this was definitively resolved by Mauritius's Highest Court of Appeal, the UK Privy Council in 2008 which converted the Permanent Lease into a Permanent Grant. Etymology The name Saint Brandon most likely came from the anglicized name of the French town of Saint-Brandan, possibly given by French sailors and corsairs that sailed to and from Brittany. Geography Geographically, the archipelago is part of the Mascarene Islands and is situated on the Mascarene Plateau formed by the separation of the Mauritia microcontinent during the separation of India and Madagascar around 60 million years ago from what is today the African continent.
https://en.wikipedia.org/wiki/Supertaster
A supertaster is a person whose sense of taste is of far greater intensity than the average person, having an elevated taste response. History The term originated with experimental psychologist Linda Bartoshuk, who has spent much of her career studying genetic variation in taste perception. In the early 1980s, Bartoshuk and her colleagues found that some individuals tested in the laboratory seemed to have an elevated taste response and called them supertasters. This increased taste response is not the result of response bias or a scaling artifact but appears to have an anatomical or biological basis. Phenylthiocarbamide In 1931, Arthur L. Fox, a DuPont chemist, discovered that some people found phenylthiocarbamide (PTC) to be bitter while others found it tasteless. At the 1931 American Association for the Advancement of Science meeting, Fox collaborated with Albert F. Blakeslee, a geneticist, to have attendees taste PTC: 65% found it bitter, 28% found it tasteless, and 6% described other taste qualities. Subsequent work revealed that the ability to taste PTC is genetic. Propylthiouracil In the 1960s, Roland Fischer was the first to link the ability to taste PTC, and the related compound propylthiouracil (PROP), to food preference, diets, and calorie intake. Today, PROP has replaced PTC for research because of a faint sulfurous odor and safety concerns with PTC. Bartoshuk and colleagues discovered that the taster group could be further divided into medium tasters and supertasters. Research suggests 25% of the population are non-tasters, 50% are medium tasters, and 25% are supertasters. Cause The exact cause of heightened response to taste in humans has yet to be elucidated. A review found associations between supertasters and the presence of the TAS2R38 gene, the ability to taste PROP and PTC, and an increased number of fungiform papillae. In addition, environmental causes may play a role in sensitive taste. The exact mechanisms by which these causes may ma
https://en.wikipedia.org/wiki/Smart%20camera
A smart camera (sensor) or intelligent camera (sensor) or (smart) vision sensor or intelligent vision sensor or smart optical sensor or intelligent optical sensor or smart visual sensor or intelligent visual sensor is a machine vision system which, in addition to image capture circuitry, is capable of extracting application-specific information from the captured images, along with generating event descriptions or making decisions that are used in an intelligent and automated system. A smart camera is a self-contained, standalone vision system with built-in image sensor in the housing of an industrial video camera. The vision system and the image sensor can be integrated into one single piece of hardware known as intelligent image sensor or smart image sensor. It contains all necessary communication interfaces, e.g. Ethernet, as well as industry-proof 24V I/O lines for connection to a PLC, actuators, relays or pneumatic valves, and can be either static or mobile. It is not necessarily larger than an industrial or surveillance camera. A capability in machine vision generally means a degree of development such that these capabilities are ready for use on individual applications. This architecture has the advantage of a more compact volume compared to PC-based vision systems and often achieves lower cost, at the expense of a somewhat simpler (or omitted) user interface. Smart cameras are also referred to by the more general term smart sensors. History The first publication of the term smart camera was in 1975 as according to Belbachir et al. In 1976, the General Electric's Electronic Systems Division indicated requirements of two industrial firms for smart cameras in a report for National Technical Information Service. Authors affiliated in HRL Laboratories defined a smart camera as "a camera that could process its pictures before recording them" in 1976. One of the first mentions of smart optical sensors appeared in a concept evaluation for satellites by NASA and Gen
https://en.wikipedia.org/wiki/Uplift%20%28science%20fiction%29
In science fiction, uplift is a developmental process to transform a certain species of animals into more intelligent beings by other, already-intelligent beings. This is usually accomplished by cultural, technological, or evolutional interventions like genetic engineering. The earliest appearance of the concept is in H. G. Wells's 1896 novel The Island of Doctor Moreau. The term was popularized by David Brin in his Uplift series in the 1980s. History of the concept The concept can be traced to H. G. Wells's novel The Island of Doctor Moreau (1896), in which the titular scientist transforms animals into horrifying parodies of humans through surgery and psychological torment. The resulting animal-people obsessively recite the Law, a series of prohibitions against reversion to animal behaviors, with the haunting refrain of "Are we not men?" Wells's novel reflects Victorian concerns about vivisection and of the power of unrestrained scientific experimentation to do terrible harm. Other early literary examples can be found in the following works: Mikhail Bulgakov's Heart of a Dog (1921) tells the story of a stray dog, who is found by a surgeon, and undergoes extensive brain surgery for experimental purposes to create a New Soviet man. L. Sprague de Camp's "Johnny Black" stories (beginning with "The Command") about a black bear raised to human-level intelligence, published in Astounding Science-Fiction from 1938–1940. Olaf Stapledon's Sirius (1944) explores a dog with human intelligence. In Cordwainer Smith's Instrumentality of Mankind series "underpeople" are created from animals through unexplained technological means explicitly to be servants of humanity, and were often treated as less than slaves by the society that used them, until the laws were reformed in the story "The Ballad of Lost C'Mell" (1962). Smith's characterizations of underpeople are frequently quite sympathetic, and one of his most memorable characters is C'Mell, the cat-woman who appears in "T
https://en.wikipedia.org/wiki/Megatrajectory
In evolutionary biology, megatrajectories are the major evolutionary milestones and directions in the evolution of life. Posited by A. H. Knoll and Richard K. Bambach in their 2000 collaboration, "Directionality in the History of Life," Knoll and Bambach argue that, in consideration of the problem of progress in evolutionary history, a middle road that encompasses both contingent and convergent features of biological evolution may be attainable through the idea of the megatrajectory: We believe that six broad megatrajectories capture the essence of vectoral change in the history of life. The megatrajectories form a logical sequence dictated by the necessity for complexity level N to exist before N+1 can evolve...In the view offered here, each megatrajectory adds new and qualitatively distinct dimensions to the way life utilizes ecospace. According to Knoll and Bambach, the six megatrajectories outlined by biological evolution thus far are: the origin of life to the "Last Common Ancestor" prokaryote diversification unicellular eukaryote diversification multicellular organisms land organisms appearance of intelligence and technology Milan M. Ćirković and Robert Bradbury have taken the megatrajectory concept one step further by theorizing that a seventh megatrajectory exists: postbiological evolution triggered by the emergence of artificial intelligence at least equivalent to the biologically-evolved one, as well as the invention of several key technologies of a similar level of complexity and environmental impact, such as molecular nanoassembling or stellar uplifting. See also Intelligence principle References Further reading Evolutionary biology
https://en.wikipedia.org/wiki/Falcon%20Northwest
Falcon Northwest is a private company headquartered in Medford, Oregon. It designs, assembles, and markets high-end custom computers. The company was founded in 1992 and was one of the first to specialize in PCs built specifically for gaming. History Falcon Northwest was founded in April 1992 by gamer hobbyist and former pilot Kelt Reeves. Falcon Northwest released the first pre-built computer model intended specifically for gaming, the Mach V, in 1993, starting the "gaming PC" category of computer products. The company was founded in Florida, but later moved to Coos Bay, Oregon, then Ashland, Oregon, and finally Medford, Oregon. In the late 1990s, Falcon grew to $3 million in annual revenues and opened a new manufacturing facility in Oregon. Later on, the company collaborated with Intel on early liquid cooling components. Intel worked with Falcon Northwest in secret, in order to avoid the appearance of endorsing overclocking by selling liquid cooling products under its own brand. Products Falcon Northwest sells high-end computers that are optimized for gaming, scientific, or military applications. As of 2013, about half of its sales were from gamers. Falcon's computers are consistently highly-ranked in benchmark tests, but cost $1,500 to over $10,000 depending on the user's configuration. Many Falcon PCs are sold with custom paint jobs, high-end graphics cards, and special low-latency components. Though it was originally known for tower desktops like the Mach V, and also sells laptops, as of 2017 Falcon is best-known for its smaller, portable mini-PCs. Their products include: Mach V - Desktop tower PC Talon - Desktop tower PC FragBox - Small Form Factor (SFF) PC Tiki - Micro-tower PC TLX - Thin & light class laptop PC DRX - Desktop replacement class laptop PC Reception In benchmark tests by Maximum PC in 2018, Falcon Northwest's Tiki mini-PC performed better than a tower computer with a high-end graphics card, but was also the most expensive computer the rev
https://en.wikipedia.org/wiki/Burroughs%20B1700
The Burroughs B1000 Series was a series of mainframe computers, built by the Burroughs Corporation, and originally introduced in the 1970s with continued software development until 1987. The series consisted of three major generations which were the B1700, B1800, and B1900 series machines. They were also known as the Burroughs Small Systems, by contrast with the Burroughs Large Systems (B5000, B6000, B7000, B8000) and the Burroughs Medium Systems (B2000, B3000, B4000). Much of the original research for the B1700, initially codenamed the PLP ("Proper Language Processor" or "Program Language Processor"), was done at the Burroughs Pasadena plant. Production of the B1700s began in the mid-1970s and occurred at both the Santa Barbara and Liège, Belgium plants. The majority of design work was done at Santa Barbara with the B1830 being the notable exception designed at Liège. Features Writeable control store The B1000 is distinguished from other machines in that it had a writeable control store allowing the machine to emulate any other machine. The Burroughs MCP (Master Control Program) would schedule a particular job to run. The MCP would preload the interpreter for whatever language was required. These interpreters presented different virtual machines for COBOL, Fortran, etc. A notable idea of the "semantic gap" between the ideal expression of the solution to a particular programming problem, and the real physical hardware illustrated the inefficiency of current machine implementations. The three Burroughs architectures represent solving this problem by building hardware aligned with high-level languages, so-called language-directed design (contemporary term; today more often called a "high-level language computer architecture"). The large systems were stack machines and very efficiently executed ALGOL. The medium systems (B2000, 3000, and B4000) were aimed at the business world and executing COBOL (thus everything was done with BCD including addressing me
https://en.wikipedia.org/wiki/HTCondor
HTCondor is an open-source high-throughput computing software framework for coarse-grained distributed parallelization of computationally intensive tasks. It can be used to manage workload on a dedicated cluster of computers, or to farm out work to idle desktop computers, so-called cycle scavenging. HTCondor runs on Linux, Unix, Mac OS X, FreeBSD, and Microsoft Windows operating systems. HTCondor can integrate both dedicated resources (rack-mounted clusters) and non-dedicated desktop machines (cycle scavenging) into one computing environment. HTCondor is developed by the HTCondor team at the University of Wisconsin–Madison and is freely available for use. HTCondor follows an open-source philosophy and is licensed under the Apache License 2.0. While HTCondor makes use of unused computing time, leaving computers turned on for use with HTCondor will increase energy consumption and associated costs. Starting from version 7.1.1, HTCondor can hibernate and wake machines based on user-specified policies, a feature previously available only via third-party software. History The development of HTCondor started in 1988. HTCondor was formerly known as Condor; the name was changed in October 2012 to resolve a trademark lawsuit. HTCondor was the scheduler software used to distribute jobs for the first draft assembly of the Human Genome. Example of use The NASA Advanced Supercomputing facility (NAS) HTCondor pool consists of approximately 350 SGI and Sun workstations purchased and used for software development, visualization, email, document preparation, and other tasks. Each workstation runs a daemon that watches user I/O and CPU load. When a workstation has been idle for two hours, a job from the batch queue is assigned to the workstation and will run until the daemon detects a keystroke, mouse motion, or high non-HTCondor CPU usage. At that point, the job will be removed from the workstation and placed back on the batch queue. Features HTCondor can run both sequential
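Jobs are handed to HTCondor through a submit description file. A minimal sketch of one (the executable name and resource numbers are placeholders) that queues a single batch job:

# example.sub - minimal HTCondor submit description (illustrative only)
universe     = vanilla
executable   = my_analysis          # placeholder program
arguments    = input.dat
output       = job.out
error        = job.err
log          = job.log
request_cpus = 1
queue

The job would then be submitted with condor_submit example.sub, and HTCondor matches it to an idle machine according to the pool's policies.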
https://en.wikipedia.org/wiki/Ad%20infinitum
Ad infinitum is a Latin phrase meaning "to infinity" or "forevermore". Description In context, it usually means "continue forever, without limit" and this can be used to describe a non-terminating process, a non-terminating repeating process, or a set of instructions to be repeated "forever," among other uses. It may also be used in a manner similar to the Latin phrase et cetera to denote written words or a concept that continues for a lengthy period beyond what is shown. Examples include: "The sequence 1, 2, 3, ... continues ad infinitum." "The perimeter of a fractal may be iteratively drawn ad infinitum." The 17th-century writer Jonathan Swift incorporated the idea of self-similarity in the following lines from his satirical poem On Poetry: a Rhapsody (1733): The vermin only teaze and pinch Their foes superior by an inch. So, naturalists observe, a flea Has smaller fleas that on him prey; And these have smaller still to bite 'em, And so proceed ad infinitum. Thus every poet, in his kind, Is bit by him that comes behindThe mathematician Augustus De Morgan included similar lines in his rhyme Siphonaptera. See also Mathematical induction Recursion Self-reference "The Song That Never Ends" Turtles all the way down References External links __notoc__ Infinity Latin logical phrases Latin words and phrases
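In programming terms, the "continue forever" sense maps naturally onto a non-terminating generator. A trivial Python sketch of the sequence 1, 2, 3, ... continuing ad infinitum, where the caller decides when to stop consuming it:

from itertools import islice

def naturals():
    n = 1
    while True:       # proceeds ad infinitum
        yield n
        n += 1

print(list(islice(naturals(), 5)))   # [1, 2, 3, 4, 5]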
https://en.wikipedia.org/wiki/Tangent%20cone
In geometry, the tangent cone is a generalization of the notion of the tangent space to a manifold to the case of certain spaces with singularities. Definitions in nonlinear analysis In nonlinear analysis, there are many definitions for a tangent cone, including the adjacent cone, Bouligand's contingent cone, and the Clarke tangent cone. These three cones coincide for a convex set, but they can differ on more general sets. Clarke tangent cone Let be a nonempty closed subset of the Banach space . The Clarke's tangent cone to at , denoted by consists of all vectors , such that for any sequence tending to zero, and any sequence tending to , there exists a sequence tending to , such that for all holds Clarke's tangent cone is always subset of the corresponding contingent cone (and coincides with it, when the set in question is convex). It has the important property of being a closed convex cone. Definition in convex geometry Let K be a closed convex subset of a real vector space V and ∂K be the boundary of K. The solid tangent cone to K at a point x ∈ ∂K is the closure of the cone formed by all half-lines (or rays) emanating from x and intersecting K in at least one point y distinct from x. It is a convex cone in V and can also be defined as the intersection of the closed half-spaces of V containing K and bounded by the supporting hyperplanes of K at x. The boundary TK of the solid tangent cone is the tangent cone to K and ∂K at x. If this is an affine subspace of V then the point x is called a smooth point of ∂K and ∂K is said to be differentiable at x and TK is the ordinary tangent space to ∂K at x. Definition in algebraic geometry Let X be an affine algebraic variety embedded into the affine space , with defining ideal . For any polynomial f, let be the homogeneous component of f of the lowest degree, the initial term of f, and let be the homogeneous ideal which is formed by the initial terms for all , the initial ideal of I. The tangent c
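For readability, the Clarke tangent cone described above can be restated compactly in the notation usually used (a sketch of the standard formulation, with A a nonempty closed subset of the Banach space X and x_0 in A; the symbol T^C_A is one common choice of notation):

% Clarke tangent cone of a closed set A \subseteq X at x_0 \in A
T^C_A(x_0) = \bigl\{ v \in X :\ \forall\, t_n \downarrow 0,\ \forall\, x_n \to x_0 \ (x_n \in A),\
  \exists\, v_n \to v \ \text{such that}\ x_n + t_n v_n \in A \ \text{for all } n \bigr\}.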
https://en.wikipedia.org/wiki/Ab%20Initio%20Software
Ab Initio Software is an American multinational enterprise software corporation based in Lexington, Massachusetts. The company specializes in high-volume data processing applications and enterprise application integration. It was founded in 1995 by the former CEO of Thinking Machines Corporation, Sheryl Handler, and several other former employees after the bankruptcy of that company. The Ab Initio products are provided on a platform for parallel data processing applications. These applications perform functions relating to fourth generation data analysis, batch processing, complex events, quantitative and qualitative data processing, data manipulation graphical user interface (GUI)-based parallel processing software which is commonly used to extract, transform, and load (ETL) data. References External links Official site The Ab Initio Professionals Group Data warehousing products Development software companies Extract, transform, load tools Companies based in Lexington, Massachusetts
https://en.wikipedia.org/wiki/Fuchsian%20model
In mathematics, a Fuchsian model is a representation of a hyperbolic Riemann surface R as a quotient of the upper half-plane H by a Fuchsian group. Every hyperbolic Riemann surface admits such a representation. The concept is named after Lazarus Fuchs. A more precise definition By the uniformization theorem, every Riemann surface is either elliptic, parabolic or hyperbolic. More precisely this theorem states that a Riemann surface which is not isomorphic to either the Riemann sphere (the elliptic case) or a quotient of the complex plane by a discrete subgroup (the parabolic case) must be a quotient of the hyperbolic plane by a subgroup acting properly discontinuously and freely. In the Poincaré half-plane model for the hyperbolic plane the group of biholomorphic transformations is the group acting by homographies, and the uniformization theorem means that there exists a discrete, torsion-free subgroup such that the Riemann surface is isomorphic to . Such a group is called a Fuchsian group, and the isomorphism is called a Fuchsian model for . Fuchsian models and Teichmüller space Let be a closed hyperbolic surface and let be a Fuchsian group so that is a Fuchsian model for . Let and endow this set with the topology of pointwise convergence (sometimes called "algebraic convergence"). In this particular case this topology can most easily be defined as follows: the group is finitely generated since it is isomorphic to the fundamental group of . Let be a generating set: then any is determined by the elements and so we can identify with a subset of by the map . Then we give it the subspace topology. The Nielsen isomorphism theorem (this is not standard terminology and this result is not directly related to the Dehn–Nielsen theorem) then has the following statement: The proof is very simple: choose an homeomorphism and lift it to the hyperbolic plane. Taking a diffeomorphism yields quasi-conformal map since is compact. This result can be se
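In symbols, the statement sketched above can be written as follows (a restatement in standard notation): for a hyperbolic Riemann surface R there exists a discrete, torsion-free subgroup of the automorphism group of the upper half-plane H acting by homographies such that

\Gamma \le \mathrm{PSL}_2(\mathbb{R}), \qquad R \;\cong\; \Gamma \backslash \mathbb{H},

and the quotient map from H to the surface is the Fuchsian model for R.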
https://en.wikipedia.org/wiki/Apulet
An apulet is a component of the Cell computer architecture consisting of a bundle comprising a data object and the code necessary to perform an action upon it. The Cell architecture calls for several APUs (Attached Processing Units) which do the primary processing of the system, under the control of a single Processing Element (PE). Each APU is loaded with its apulet by the PE and can pass its results to the next APU. See also Core Multiplexing Technology CPGA BINAC References Cell BE architecture
https://en.wikipedia.org/wiki/Aspirator%20%28entomology%29
In entomology, an aspirator, also known as a pooter, is a device used in the collection of insects, crustaceans or other small, fragile organisms, usually for scientific purposes. Design and use Such devices are most commonly used by entomologists for field and lab work. One of the most common designs consists of a small resealable jar or vial, the lid or stopper of which is penetrated by two tubes. On the inner end of one tube, fine mesh or another type of filter is attached, and this tube leads to the user's mouth (usually connected by a long, flexible piece of tubing). The end of the second tube projects into the collecting chamber, and its far end can then be placed over an insect or other small organism; the user sucks on the first tube, and the insect is drawn into the collecting chamber through the other. The other common design (the more traditional "pooter") consists of a length of flexible tubing, of which one end is held in the mouth, and the other end which holds the tip. The tip is usually a glass or plastic pipette inserted into the plastic tubing, with a piece of gauze as a filter at the inner end to prevent accidental ingestion. Small insects (e.g., Drosophila) may be gently collected and held against the filter by steady inhalation, and transferred into a container by then blowing the insect(s) out. A skilled lab worker, for instance, may be able to sequentially inhale and then transfer a pooter-full of Drosophila flies singly into vials, thus facilitating rapid setup of fly experiments with a minimum of pain caused to the researcher, or the researched. Larger, motor-powered variants of this design exist (typically, a leaf blower working in reverse), often named D-Vac, where the insects are sucked into a mesh collecting bag in a long plastic tube, and held there by the powerful suction In entomological surveys pooters are usually used in combination with insect nets or beating nets but may also be used alone to collect insects seen on vegetatio
https://en.wikipedia.org/wiki/C.%20R.%20Rao
Calyampudi Radhakrishna Rao (10 September 1920 – 22 August 2023) was an Indian-American mathematician and statistician. He was professor emeritus at Pennsylvania State University and Research Professor at the University at Buffalo. Rao was honoured by numerous colloquia, honorary degrees, and festschrifts and was awarded the US National Medal of Science in 2002. The American Statistical Association has described him as "a living legend whose work has influenced not just statistics, but has had far reaching implications for fields as varied as economics, genetics, anthropology, geology, national planning, demography, biometry, and medicine." The Times of India listed Rao as one of the top 10 Indian scientists of all time. In 2023, Rao was awarded the International Prize in Statistics, an award often touted as the "statistics’ equivalent of the Nobel Prize". Rao was also a Senior Policy and Statistics advisor for the Indian Heart Association non-profit focused on raising South Asian cardiovascular disease awareness. Early life C. R. Rao was the eighth of the ten children born to a Telugu family in Hoovina Hadagali, Bellary, Madras Presidency, Britain ruled India (now in Vijayanagara, Karnataka, India). His schooling was completed in Gudur, Nuzvid, Nandigama, and Visakhapatnam, all in the present state of Andhra Pradesh. He received an MSc in mathematics from Andhra University and an MA in statistics from Calcutta University in 1943. He obtained a PhD degree at King's College, Cambridge, under R. A. Fisher in 1948, to which he added a DSc degree, also from Cambridge, in 1965. Career Rao first worked at the Indian Statistical Institute and the Anthropological Museum in Cambridge. Later he held several important positions, as the Director of the Indian Statistical Institute, Jawaharlal Nehru Professor and National Professor in India, University Professor at the University of Pittsburgh and Eberly Professor and Chair of Statistics and Director of the Center for Multi
https://en.wikipedia.org/wiki/Phototroph
Phototrophs () are organisms that carry out photon capture to produce complex organic compounds (e.g. carbohydrates) and acquire energy. They use the energy from light to carry out various cellular metabolic processes. It is a common misconception that phototrophs are obligatorily photosynthetic. Many, but not all, phototrophs often photosynthesize: they anabolically convert carbon dioxide into organic material to be utilized structurally, functionally, or as a source for later catabolic processes (e.g. in the form of starches, sugars and fats). All phototrophs either use electron transport chains or direct proton pumping to establish an electrochemical gradient which is utilized by ATP synthase, to provide the molecular energy currency for the cell. Phototrophs can be either autotrophs or heterotrophs. If their electron and hydrogen donors are inorganic compounds (e.g., , as in some purple sulfur bacteria, or , as in some green sulfur bacteria) they can be also called lithotrophs, and so, some photoautotrophs are also called photolithoautotrophs. Examples of phototroph organisms are Rhodobacter capsulatus, Chromatium, and Chlorobium. History Originally used with a different meaning, the term took its current definition after Lwoff and collaborators (1946). Photoautotroph Most of the well-recognized phototrophs are autotrophic, also known as photoautotrophs, and can fix carbon. They can be contrasted with chemotrophs that obtain their energy by the oxidation of electron donors in their environments. Photoautotrophs are capable of synthesizing their own food from inorganic substances using light as an energy source. Green plants and photosynthetic bacteria are photoautotrophs. Photoautotrophic organisms are sometimes referred to as holophytic. Oxygenic photosynthetic organisms use chlorophyll for light-energy capture and oxidize water, "splitting" it into molecular oxygen. Ecology In an ecological context, phototrophs are often the food source for neighboring he
https://en.wikipedia.org/wiki/Left%20recursion
In the formal language theory of computer science, left recursion is a special case of recursion where a string is recognized as part of a language by the fact that it decomposes into a string from that same language (on the left) and a suffix (on the right). For instance, 1+2+3 can be recognized as a sum because it can be broken into 1+2, also a sum, and +3, a suitable suffix. In terms of context-free grammar, a nonterminal is left-recursive if the leftmost symbol in one of its productions is itself (in the case of direct left recursion) or can be made itself by some sequence of substitutions (in the case of indirect left recursion). Non-technical Introduction Formal language theory may come across as very difficult. Let's start off with a very simple example just to show the problem. If we take a look at the name of a former Dutch bank, VSB Bank, you will see something odd. What do you think the B stands for? Bank. The same word again. How many banks are in this name? Let's chop down the name in parts: VSB Bank: V = Verenigde (United) S = Spaarbank (Savings Bank) B = Bank Bank. Concluding: VSB Bank = Verenigde Spaarbank Bank Bank. Now you see what a left recursive name abbreviation is all about. The remainder of this article is not using examples, but abstractions in the forms of symbols. Definition A grammar is left-recursive if and only if there exists a nonterminal symbol A that can derive to a sentential form with itself as the leftmost symbol. Symbolically, A ⇒+ Aα, where ⇒+ indicates the operation of making one or more substitutions, and α is any sequence of terminal and nonterminal symbols. Direct left recursion Direct left recursion occurs when the definition can be satisfied with only one substitution. It requires a rule of the form A → Aα, where α is a sequence of nonterminals and terminals. For example, the rule Expression → Expression + Term is directly left-recursive. A left-to-right recursive descent parser for this rule might look like void Expression() { Expression(); match('+');
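The recursive descent parser sketched above calls Expression() before consuming any input, so it never terminates. The standard fix rewrites a directly left-recursive rule A → Aα | β as A → βA', A' → αA' | ε. A minimal sketch in Python for Expression → Expression '+' Term | Term, where parse_term is a placeholder that just accepts one number token:

# Grammar after removing direct left recursion:
#   Expression     -> Term ExpressionRest
#   ExpressionRest -> '+' Term ExpressionRest | (empty)

def parse_expression(tokens, pos=0):
    pos = parse_term(tokens, pos)
    while pos < len(tokens) and tokens[pos] == '+':   # the A' loop
        pos = parse_term(tokens, pos + 1)
    return pos

def parse_term(tokens, pos):
    # placeholder: accept a single number token as a Term
    if pos >= len(tokens) or not tokens[pos].isdigit():
        raise SyntaxError(f"expected a number at position {pos}")
    return pos + 1

parse_expression(['1', '+', '2', '+', '3'])   # parses left to right without recursing forever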
https://en.wikipedia.org/wiki/Barton%E2%80%93Nackman%20trick
Barton–Nackman trick is a term coined by the C++ standardization committee (ISO/IEC JTC1/SC22 WG21) to refer to an idiom introduced by John Barton and Lee Nackman as restricted template expansion. The idiom The idiom is characterized by an in-class friend function definition appearing in the base class template component of the curiously recurring template pattern (CRTP). // A class template to express an equality comparison interface. template<typename T> class equal_comparable { friend bool operator==(T const &a, T const &b) { return a.equal_to(b); } friend bool operator!=(T const &a, T const &b) { return !a.equal_to(b); } }; // Class value_type wants to have == and !=, so it derives from // equal_comparable with itself as argument (which is the CRTP). class value_type : private equal_comparable<value_type> { public: bool equal_to(value_type const& rhs) const; // to be defined }; When a class template like equal_comparable is instantiated, the in-class friend definitions produce nontemplate (and nonmember) functions (operator functions, in this case). At the time the idiom was introduced (1994), the C++ language did not define a partial ordering for overloaded function templates and, as a result, overloading function templates often resulted in ambiguities. For example, trying to capture a generic definition for operator== as template<typename T> bool operator==(T const &a, T const &b) { /* ... */ } would essentially be incompatible with another definition like template<typename T> bool operator==(Array<T> const &a, Array<T> const &b) { /* ... */ } The Barton–Nackman trick, then, achieves the goal of providing a generic user-defined equality operator without having to deal with such ambiguities. The adjective restricted in the idiom name refers to the fact that the provided in-class function definition is restricted (only applies) to specializations of the given class template. The term is sometimes mistakenly used to refer to t
https://en.wikipedia.org/wiki/Google%20File%20System
Google File System (GFS or GoogleFS, not to be confused with the GFS Linux file system) is a proprietary distributed file system developed by Google to provide efficient, reliable access to data using large clusters of commodity hardware. Google file system was replaced by Colossus in 2010. Design GFS is enhanced for Google's core data storage and usage needs (primarily the search engine), which can generate enormous amounts of data that must be retained; Google File System grew out of an earlier Google effort, "BigFiles", developed by Larry Page and Sergey Brin in the early days of Google, while it was still located in Stanford. Files are divided into fixed-size chunks of 64 megabytes, similar to clusters or sectors in regular file systems, which are only extremely rarely overwritten, or shrunk; files are usually appended to or read. It is also designed and optimized to run on Google's computing clusters, dense nodes which consist of cheap "commodity" computers, which means precautions must be taken against the high failure rate of individual nodes and the subsequent data loss. Other design decisions select for high data throughputs, even when it comes at the cost of latency. A GFS cluster consists of multiple nodes. These nodes are divided into two types: one Master node and multiple Chunkservers. Each file is divided into fixed-size chunks. Chunkservers store these chunks. Each chunk is assigned a globally unique 64-bit label by the master node at the time of creation, and logical mappings of files to constituent chunks are maintained. Each chunk is replicated several times throughout the network. At default, it is replicated three times, but this is configurable. Files which are in high demand may have a higher replication factor, while files for which the application client uses strict storage optimizations may be replicated less than three times - in order to cope with quick garbage cleaning policies. The Master server does not usually store the actual chu
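A back-of-the-envelope sketch in Python of the chunking arithmetic described above; the round-robin placement is purely illustrative and is not how the real master assigns chunks:

CHUNK_SIZE = 64 * 1024 * 1024    # 64 MiB fixed-size chunks
REPLICATION = 3                  # default replication factor

def chunk_layout(file_size_bytes, chunkservers):
    """Return an illustrative mapping of chunk index -> list of chunkservers."""
    num_chunks = -(-file_size_bytes // CHUNK_SIZE)   # ceiling division
    layout = {}
    for i in range(num_chunks):
        layout[i] = [chunkservers[(i + r) % len(chunkservers)] for r in range(REPLICATION)]
    return layout

# A 1 GiB file needs 16 chunks, each stored on 3 of the available chunkservers
print(len(chunk_layout(1 * 1024**3, ["cs1", "cs2", "cs3", "cs4", "cs5"])))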
https://en.wikipedia.org/wiki/Sex%20linkage
Sex linked describes the sex-specific patterns of inheritance and presentation that occur when a gene mutation (allele) is present on a sex chromosome (allosome) rather than a non-sex chromosome (autosome). In humans, these are termed X-linked recessive, X-linked dominant and Y-linked. The inheritance and presentation of all three differ depending on the sex of both the parent and the child, which makes them characteristically different from autosomal dominance and recessiveness. There are many more X-linked conditions than Y-linked conditions, since humans have several times as many genes on the X chromosome as on the Y chromosome. Only females are able to be carriers for X-linked conditions; males will always be affected by any X-linked condition, since they have no second X chromosome with a healthy copy of the gene. As such, X-linked recessive conditions affect males much more commonly than females. In X-linked recessive inheritance, a son born to a carrier mother and an unaffected father has a 50% chance of being affected, while a daughter has a 50% chance of being a carrier; however, a fraction of carriers may display a milder (or even full) form of the condition due to a phenomenon known as skewed X-inactivation, in which the normal process of inactivating half of the female body's X chromosomes preferentially targets a certain parent's X chromosome (the father's in this case). If the father is affected, the son will not be affected, as he does not inherit the father's X chromosome, but the daughter will always be a carrier (and may occasionally present with symptoms due to the aforementioned skewed X-inactivation). In X-linked dominant inheritance, a son or daughter born to an affected mother and an unaffected father both have a 50% chance of being affected (though a few X-linked dominant conditions are embryonic lethal for sons, making them appear to occur only in females). If the father is affected, the son will always be unaffected, but the daughter will always be affected.
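The 50% figures quoted above for X-linked recessive inheritance follow from simply enumerating gametes. The short program below is an illustrative sketch (not from the article) that does this for a carrier mother and an unaffected father, ignoring skewed X-inactivation:

```cpp
// Enumerate the four equally likely offspring of a carrier mother (X(A) X(a))
// and an unaffected father (X(A) Y) under simple Mendelian assumptions.
#include <iostream>
#include <string>
#include <vector>

int main() {
    // 'A' = normal allele, 'a' = recessive disease allele on the X chromosome.
    std::vector<std::string> mother_gametes = {"X(A)", "X(a)"};  // carrier mother
    std::vector<std::string> father_gametes = {"X(A)", "Y"};     // unaffected father

    int sons = 0, affected_sons = 0, daughters = 0, carrier_daughters = 0;
    for (const auto &m : mother_gametes) {
        for (const auto &f : father_gametes) {
            bool is_son = (f == "Y");
            if (is_son) {
                ++sons;
                if (m == "X(a)") ++affected_sons;      // only one X, so 'a' is expressed
            } else {
                ++daughters;
                if (m == "X(a)") ++carrier_daughters;  // father's X(A) masks 'a'
            }
            std::cout << m << " + " << f << (is_son ? "  -> son\n" : "  -> daughter\n");
        }
    }
    std::cout << "affected sons: "      << 100.0 * affected_sons / sons          << "%\n"  // 50%
              << "carrier daughters: "  << 100.0 * carrier_daughters / daughters << "%\n"; // 50%
}
```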
https://en.wikipedia.org/wiki/Computational%20economics
Computational economics is an interdisciplinary research discipline that involves computer science, economics, and management science. The subject encompasses computational modeling of economic systems. Some of these areas are unique to computational economics, while others extend established areas of economics by allowing robust data analytics and solutions to problems that would be arduous to research without computers and the associated numerical methods. Computational methods have been applied in various fields of economics research, including but not limited to: Econometrics: non-parametric approaches, semi-parametric approaches, and machine learning. Dynamic systems modeling: optimization, dynamic stochastic general equilibrium modeling, and agent-based modeling. History Computational economics developed concurrently with the mathematization of the field. During the early 20th century, pioneers such as Jan Tinbergen and Ragnar Frisch advanced the computerization of economics and the growth of econometrics. As a result of advancements in econometrics, regression models, hypothesis testing, and other computational statistical methods became widely adopted in economic research. On the theoretical front, complex macroeconomic models, including the real business cycle (RBC) model and dynamic stochastic general equilibrium (DSGE) models, have propelled the development and application of numerical solution methods that rely heavily on computation. In the 21st century, the development of computational algorithms created new means for computational methods to interact with economic research. Innovative approaches such as machine learning models and agent-based modeling have been actively explored in different areas of economic research, offering economists an expanded toolkit that frequently differs in character from traditional methods. Applications Agent-based modelling Computational economics uses computer-based economic modeling to solve analytically and statistically formulated economic problems.
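As a rough illustration of what agent-based modeling means in practice, the following toy sketch (not any published model; all names and parameters are arbitrary assumptions) simulates "zero-intelligence" buyers and sellers whose random bids and asks are matched to produce trades and an average price:

```cpp
// A deliberately tiny agent-based market sketch: agents post random reservation
// prices; trades occur when a bid meets or exceeds an ask.
#include <algorithm>
#include <functional>
#include <iostream>
#include <random>
#include <vector>

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> price(0.0, 100.0);

    const int n_agents = 1000;
    std::vector<double> bids(n_agents), asks(n_agents);
    for (int i = 0; i < n_agents; ++i) { bids[i] = price(rng); asks[i] = price(rng); }

    // Match the highest bids with the lowest asks and record transaction prices.
    std::sort(bids.begin(), bids.end(), std::greater<>());
    std::sort(asks.begin(), asks.end());
    double sum = 0.0;
    int trades = 0;
    for (int i = 0; i < n_agents && bids[i] >= asks[i]; ++i) {
        sum += 0.5 * (bids[i] + asks[i]);  // split the difference as the trade price
        ++trades;
    }
    std::cout << "trades: " << trades
              << ", average price: " << (trades ? sum / trades : 0.0) << "\n";
}
```

The point of such models is that aggregate quantities (here, the number of trades and the average price) emerge from simple individual rules rather than from a closed-form equilibrium condition.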
https://en.wikipedia.org/wiki/Strela%20computer
The Strela computer was the first mainframe vacuum-tube computer manufactured serially in the Soviet Union, beginning in 1953. Overview This first-generation computer had 6,200 vacuum tubes and 60,000 semiconductor diodes. Strela's speed was 2,000 operations per second. Its arithmetic was based on 43-bit floating-point words, with a signed 35-bit mantissa and a signed 6-bit exponent. Its Williams-tube working memory (RAM) held 2,048 words, and it also used read-only semiconductor diode memory for programs. It used punched cards or magnetic tape for data input, and magnetic tape, punched cards and/or a wide printer for data output. The last version of Strela used a 4,096-word magnetic drum rotating at 6,000 rpm. While Yuri Bazilevsky was officially Strela's chief designer, Bashir Rameyev, who developed the project prior to Bazilevsky's appointment, could be considered its main inventor. Strela was constructed at the Special Design Bureau 245 (Argon R&D Institute since 1986) in Moscow. Strelas were manufactured by the Moscow Plant of Computing-Analytical Machines (счетно-аналитических машин) during 1953–1957; seven units were built. They were installed in the Computing Centre of the USSR Academy of Sciences, the Keldysh Institute of Applied Mathematics, Moscow State University, and in computing centres of some ministries related to defense and economic planning. In 1954, the designers of Strela were awarded the Stalin Prize of 1st degree (Bashir Rameyev, Yu. Bazilevsky, V. Alexandrov, D. Zhuchkov, I. Lygin, G. Markov, B. Melnikov, G. Prokudayev, N. Trubnikov, A. Tsygankin, Yu. Shcherbakov, L. Larionova). The impetus for the development of Strela was a BBC broadcast, heard by Bashir Rameyev, about the American development of ENIAC. See also History of computer hardware in Eastern Bloc countries List of vacuum-tube computers References Further reading External links Strela Computer, Russian Virtual Computer Museum Architecture and computer code of Strela computer
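The quoted word format (a signed 35-bit mantissa and a signed 6-bit exponent, 43 bits in total) can be made concrete with a toy decoder. The bit layout, normalization and scaling below are assumptions chosen only to illustrate the stated field widths; they are not documented properties of Strela:

```cpp
// Hypothetical decoder for a 43-bit word with the field widths given above.
// Everything other than the bit counts (35-bit mantissa magnitude plus sign,
// 6-bit exponent magnitude plus sign) is an assumption for illustration.
#include <cmath>
#include <cstdint>
#include <iostream>

double DecodeToy43(std::uint64_t w) {
    // Assumed layout (most significant bit first):
    //   bit 42     : mantissa sign
    //   bits 41..7 : 35-bit mantissa magnitude, read as a fraction in [0, 1)
    //   bit 6      : exponent sign
    //   bits 5..0  : 6-bit exponent magnitude
    std::uint64_t mant = (w >> 7) & ((1ull << 35) - 1);
    int msign   = ((w >> 42) & 1) ? -1 : 1;
    int exp_mag = static_cast<int>(w & 0x3F);
    int esign   = ((w >> 6) & 1) ? -1 : 1;
    double frac = static_cast<double>(mant) / static_cast<double>(1ull << 35);
    return msign * frac * std::pow(2.0, esign * exp_mag);
}

int main() {
    // Largest magnitude under these assumptions: mantissa just below 1, exponent +63.
    std::uint64_t big = (((1ull << 35) - 1) << 7) | 63;
    std::cout << "max magnitude (toy layout): " << DecodeToy43(big) << "\n";
    std::cout << "a 35-bit mantissa gives roughly " << 35 * std::log10(2.0)
              << " decimal digits of precision\n";
}
```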
https://en.wikipedia.org/wiki/Mediant%20%28mathematics%29
In mathematics, the mediant of two fractions, generally made up of four positive integers, a/c and b/d, is defined as (a + b)/(c + d). That is to say, the numerator and denominator of the mediant are the sums of the numerators and denominators of the given fractions, respectively. It is sometimes called the freshman sum, as it is a common mistake in the early stages of learning about addition of fractions. Technically, this is a binary operation on valid fractions (nonzero denominator), considered as ordered pairs of appropriate integers, a priori disregarding the perspective on rational numbers as equivalence classes of fractions. For example, the mediant of the fractions 1/1 and 1/2 is 2/3. However, if the fraction 1/1 is replaced by the fraction 2/2, which is an equivalent fraction denoting the same rational number 1, the mediant of the fractions 2/2 and 1/2 is 3/4. For a stronger connection to rational numbers the fractions may be required to be reduced to lowest terms, thereby selecting unique representatives from the respective equivalence classes. The Stern–Brocot tree provides an enumeration of all positive rational numbers via mediants in lowest terms, obtained purely by iterative computation of the mediant according to a simple algorithm. Properties The mediant inequality: An important property (also explaining its name) of the mediant is that it lies strictly between the two fractions of which it is the mediant: if a/c < b/d and c, d > 0, then a/c < (a + b)/(c + d) < b/d. This property follows from the two relations (a + b)/(c + d) − a/c = (bc − ad)/(c(c + d)) and b/d − (a + b)/(c + d) = (bc − ad)/(d(c + d)). Componendo and Dividendo Theorems: If a/c = b/d and c, d > 0, then Componendo: (a + b)/(c + d) = a/c = b/d, and Dividendo: (a − b)/(c − d) = a/c = b/d (provided c ≠ d). Assume that the pair of fractions a/c and b/d satisfies the determinant relation bc − ad = 1. Then the mediant (a + b)/(c + d) has the property that it is the simplest fraction in the interval (a/c, b/d), in the sense of being the fraction with the smallest denominator. More precisely, if the fraction a′/c′ with positive denominator c′ lies (strictly) between a/c and b/d, then its numerator and denominator can be written as a′ = λ₁a + λ₂b and c′ = λ₁c + λ₂d with two positive real numbers λ₁ and λ₂.
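For completeness, the two relations behind the mediant inequality can be written out as a short derivation (standard algebra in the notation of the paragraph above, not quoted from the article):

```latex
% Derivation of the mediant inequality for positive integers a, b, c, d.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Assume $\frac{a}{c} < \frac{b}{d}$ with $c, d > 0$, i.e.\ $bc - ad > 0$. Then
\begin{align*}
\frac{a+b}{c+d} - \frac{a}{c} &= \frac{c(a+b) - a(c+d)}{c(c+d)} = \frac{bc - ad}{c(c+d)} > 0, \\
\frac{b}{d} - \frac{a+b}{c+d} &= \frac{b(c+d) - d(a+b)}{d(c+d)} = \frac{bc - ad}{d(c+d)} > 0,
\end{align*}
so $\frac{a}{c} < \frac{a+b}{c+d} < \frac{b}{d}$.
\end{document}
```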
https://en.wikipedia.org/wiki/L%27Empereur
L'Empereur is a turn-based strategy video game for the Nintendo Entertainment System released by the Koei company in 1989. The user controls Napoleon Bonaparte during the Napoleonic Wars of the late 18th and early 19th centuries. The goal is to conquer Europe. The game begins with Napoleon as an army officer, but with victories in combat, the user may be promoted to Commander-in-Chief, First Consul, and finally Emperor of the French, with more powers and actions available at each level. As Emperor, the user also controls Napoleon's brothers, Louis, Jérôme, Lucien, and Joseph, as well as Napoleon's stepson, Eugene Beauharnais. The game has both military and civilian aspects: the user can lead armies, act as mayor of cities, and, depending on the level achieved, engage in diplomacy with other nations. This historically accurate game reproduces many historical figures and the militaries of Europe in great detail. Gameplay The player chooses one of four scenarios (or loads a saved game), each starting in a different year. The earliest scenario has Napoleon as a Commander in Marseille in 1796, historically poised for his invasion of Italy. The second scenario has Napoleon in St. Malo as Commander-in-Chief in 1798. The third scenario, arguably the easiest starting point, has Napoleon as First Consul of France in 1802. The final scenario, which has Napoleon as Emperor, starts in 1806, and in this mode the player can control Napoleon's siblings and stepson as well. Each turn lasts one full month, for a total of 12 turns per year. Although the years change every January, it is in March that most gameplay elements are affected by the game engine (such as the drafting of soldiers and officers, the termination of certain diplomatic agreements, and the collection of taxes). In each month, the player (as well as all other commanders of cities) manages military and civil affairs for their respective cities. Additionally, every three months (starting in March),
https://en.wikipedia.org/wiki/Developmental%20robotics
Developmental robotics (DevRob), sometimes called epigenetic robotics, is a scientific field which aims to study the developmental mechanisms, architectures and constraints that allow lifelong and open-ended learning of new skills and new knowledge in embodied machines. As in human children, learning is expected to be cumulative and of progressively increasing complexity, and to result from self-exploration of the world in combination with social interaction. The typical methodological approach consists in starting from theories of human and animal development elaborated in fields such as developmental psychology, neuroscience, developmental and evolutionary biology, and linguistics, and then formalizing and implementing them in robots, sometimes exploring extensions or variants of them. Experimenting with those models in robots allows researchers to confront them with reality, and as a consequence developmental robotics also provides feedback and novel hypotheses on theories of human and animal development. Developmental robotics is related to, but differs from, evolutionary robotics (ER). ER uses populations of robots that evolve over time, whereas DevRob is interested in how the organization of a single robot's control system develops through experience, over time. DevRob is also related to work done in the domains of robotics and artificial life. Background Can a robot learn like a child? Can it learn a variety of new skills and new knowledge unspecified at design time and in a partially unknown and changing environment? How can it discover its body and its relationships with the physical and social environment? How can its cognitive capacities continuously develop without the intervention of an engineer once it is "out of the factory"? What can it learn through natural social interactions with humans? These are the questions at the center of developmental robotics. Alan Turing, as well as a number of other pioneers of cybernetics, already formulated those questions.
https://en.wikipedia.org/wiki/Threshold%20voltage
The threshold voltage, commonly abbreviated as Vth or VGS(th), of a field-effect transistor (FET) is the minimum gate-to-source voltage (VGS) that is needed to create a conducting path between the source and drain terminals. It is an important scaling factor for maintaining power efficiency. When referring to a junction field-effect transistor (JFET), the threshold voltage is often called the pinch-off voltage instead. This is somewhat confusing, since pinch-off applied to an insulated-gate field-effect transistor (IGFET) refers to the channel pinching that leads to current saturation behaviour under high source–drain bias, even though the current is never off. Unlike pinch-off, the term threshold voltage is unambiguous and refers to the same concept in any field-effect transistor. Basic principles In n-channel enhancement-mode devices, a conductive channel does not exist naturally within the transistor, and a positive gate-to-source voltage is necessary to create one. The positive voltage attracts free-floating electrons within the body towards the gate, forming a conductive channel. But first, enough electrons must be attracted near the gate to counter the dopant ions added to the body of the FET; this forms a region with no mobile carriers called a depletion region, and the voltage at which this occurs is the threshold voltage of the FET. A further increase in gate-to-source voltage will attract even more electrons towards the gate, which are able to create a conductive channel from source to drain; this process is called inversion. The reverse is true for the p-channel "enhancement-mode" MOS transistor: when VGS = 0 the device is "OFF" and the channel is open/non-conducting, and the application of a negative gate voltage to the p-type "enhancement-mode" MOSFET enhances the channel's conductivity, turning it "ON". In contrast, n-channel depletion-mode devices have a conductive channel naturally existing within the transistor. Accordingly, the term threshold voltage does not re
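As a first-order illustration of the threshold voltage's role, the sketch below evaluates the textbook long-channel "square-law" model of an n-channel enhancement-mode MOSFET in saturation; the model and the parameter values are standard classroom assumptions, not taken from the article:

```cpp
// Drain current versus gate-source voltage in the idealized square-law model:
// no channel forms and no current flows until VGS exceeds the threshold Vth.
#include <iostream>

// Saturation-region drain current, ID = (k/2) * (VGS - Vth)^2 for VGS > Vth.
double DrainCurrentSat(double vgs, double vth, double k) {
    double vov = vgs - vth;       // overdrive voltage
    if (vov <= 0.0) return 0.0;   // below threshold: device is off (ideally)
    return 0.5 * k * vov * vov;
}

int main() {
    const double vth = 0.7;   // assumed threshold voltage, volts
    const double k   = 2e-3;  // assumed transconductance parameter, A/V^2
    for (double vgs = 0.0; vgs <= 2.0001; vgs += 0.25) {
        std::cout << "VGS = " << vgs << " V  ->  ID = "
                  << DrainCurrentSat(vgs, vth, k) * 1e3 << " mA\n";
    }
}
```

Running it shows zero current up to VGS = Vth and a quadratically rising current beyond it, which is the behaviour the definition of threshold voltage above captures (real devices also exhibit a small subthreshold current that this idealized model ignores).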