https://en.wikipedia.org/wiki/Commutation%20%28neurophysiology%29
In neurophysiology, commutation is the process by which the brain's neural circuits exhibit non-commutativity. Physiologist Douglas B. Tweed and coworkers have considered whether certain neural circuits in the brain exhibit noncommutativity and state: In noncommutative algebra, order makes a difference to multiplication, so that a × b ≠ b × a. This feature is necessary for computing rotary motion, because order makes a difference to the combined effect of two rotations. It has therefore been proposed that there are non-commutative operators in the brain circuits that deal with rotations, including motor system circuits that steer the eyes, head and limbs, and sensory system circuits that handle spatial information. This idea is controversial: studies of eye and head control have revealed behaviours that are consistent with non-commutativity in the brain, but none that clearly rule out all commutative models. Tweed goes on to demonstrate non-commutative computation in the vestibulo-ocular reflex by showing that subjects rotated in darkness can hold their gaze points stable in space – correctly computing different final eye-position commands when put through the same two rotations in different orders, in a way that is unattainable by any commutative system.
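The non-commutativity of 3-D rotations that this proposal hinges on can be checked directly; a minimal numpy sketch (the axes, angles, and gaze vector are arbitrary choices for illustration):

```python
import numpy as np

def rot_x(theta):
    """Rotation matrix about the x-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0],
                     [0, c, -s],
                     [0, s,  c]])

def rot_y(theta):
    """Rotation matrix about the y-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[ c, 0, s],
                     [ 0, 1, 0],
                     [-s, 0, c]])

half_pi = np.pi / 2
# Apply the same two 90-degree rotations in opposite orders.
ab = rot_x(half_pi) @ rot_y(half_pi)
ba = rot_y(half_pi) @ rot_x(half_pi)

# The combined orientations differ, so a gaze direction ends up in
# different places depending on the order of the two rotations.
gaze = np.array([0.0, 0.0, 1.0])
print(ab @ gaze)  # final gaze after y-then-x
print(ba @ gaze)  # final gaze after x-then-y
```

Any commutative model would have to send the gaze vector to the same final direction in both cases, which is what the rotation experiments described above test.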
https://en.wikipedia.org/wiki/Cryptochirality
In stereochemistry, cryptochirality is a special case of chirality in which a molecule is chiral but its specific rotation is non-measurable. The underlying reason for the lack of rotation is the specific electronic properties of the molecule. The term was introduced by Kurt Mislow in 1977. For example, the alkane 5-ethyl-5-propylundecane found in certain species of Phaseolus vulgaris is chiral at its central quaternary carbon, but neither enantiomeric form has any observable optical rotation. It is still possible to distinguish between the two enantiomers by using them in asymmetric synthesis of another chemical whose stereochemical nature can be measured. For example, the Soai reaction of 2-(3,3-dimethylbut-1-ynyl)pyrimidine-5-carbaldehyde with diisopropylzinc performed in the presence of 5-ethyl-5-propylundecane forms a secondary alcohol with a high enantiomeric excess based on the major enantiomer of the alkane that was used. Even a slight enantiomeric excess of the alkane is rapidly amplified due to the autocatalytic nature of this reaction. Cryptochirality also occurs in polymeric systems growing from chiral initiators, for example in dendrimers having lobes of different sizes attached to a central core. The term is also used to describe a situation where an enantiomeric excess lies far below the observational horizon, but is still relevant, e.g. in highly enantiosensitive, self-amplifying reactions.
https://en.wikipedia.org/wiki/Biomedical%20tissue
Biomedical tissue is biological tissue used for organ transplantation and medical research, particularly cancer research. When it is used for research it is a biological specimen. Such tissues and organs may be referred to as implant tissue, allograft, xenograft, skin graft tissue, human transplant tissue, or implant bone. Tissue is stored in tissue establishments or tissue banks under cryogenic conditions. Fluids such as blood, blood products and urine are stored in fluid banks under similar conditions. Regulation The collection, storage, analysis and transplantation of human tissue involves significant ethical and safety issues, and is heavily regulated. Each country sets its own framework for ensuring the safety of human tissue products. The regulation of human transplantation in the United Kingdom is set out in the Human Tissue Act 2004 and managed by the Human Tissue Authority. Tissue banks in the US are monitored by the Food and Drug Administration (FDA). The Code of Federal Regulations sets out the following topics: Donor Screening and Testing: the determination of donor suitability for human tissue intended for transplantation. Procedures and Records: the written procedures and records that must be kept. Inspection of Tissue Establishments: the importation of tissues from abroad and the retention, recall, and destruction of human tissue. Notable regulation cases Biomedical Tissue Services, Inc. was at the heart of an investigation by the FDA into human tissue for transplantation. See also Biomaterial Implant (medicine) Biomesh
https://en.wikipedia.org/wiki/Eliahu%20I.%20Jury
Eliahu Ibrahim Jury (May 23, 1923 – September 20, 2020) was an Iraqi-born American engineer. He received the E.E. degree from the Technion – Israel Institute of Technology, Haifa, Mandatory Palestine (now Israel), in 1947, the M.S. degree in electrical engineering from Harvard University, Cambridge, MA, in 1949, and the Sc.D. degree from Columbia University in New York City in 1953. He was professor of electrical engineering at the University of California, Berkeley, and the University of Miami. He developed the advanced Z-transform, used in digital control systems and signal processing. He was the creator of the Jury stability criterion, which is named after him. He was a Life Fellow of the IEEE and received the Rufus Oldenburger Medal from the ASME, the First Education Award of the IEEE Circuits and Systems Society, and the IEEE Millennium Medal. In 1993 he received the AACC's Richard E. Bellman Control Heritage Award. Bibliography Theory and Application of the z-Transform Method, John Wiley & Sons, 1964. Inners and Stability of Dynamic Systems, John Wiley & Sons, 1974.
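As an illustration of the kind of test the Jury stability criterion provides, here is a minimal sketch of its second-order special case (the general criterion builds a full Jury table; the three conditions below are the standard ones for a monic quadratic in z, and the function name is an arbitrary choice):

```python
def jury_stable_2nd_order(a1, a0):
    """Check whether P(z) = z**2 + a1*z + a0 has both roots strictly
    inside the unit circle, i.e. whether the corresponding discrete-time
    system is stable. Second-order Jury conditions:
      P(1) > 0, P(-1) > 0, and |a0| < 1."""
    return (1 + a1 + a0 > 0) and (1 - a1 + a0 > 0) and abs(a0) < 1

# Roots 0.2 and 0.3 (inside the unit circle) -> stable
print(jury_stable_2nd_order(-0.5, 0.06))
# Roots 1 and 2 (on/outside the unit circle) -> unstable
print(jury_stable_2nd_order(-3.0, 2.0))
```

Unlike root-finding, the criterion decides stability from the coefficients alone, which is what made it useful for hand analysis of sampled-data systems.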
https://en.wikipedia.org/wiki/Shade%20tolerance
In ecology, shade tolerance is a plant's ability to tolerate low light levels. The term is also used in horticulture and landscaping, although in this context its use is sometimes imprecise, especially in labeling of plants for sale in commercial nurseries. Shade tolerance is a complex, multi-faceted property of plants. Different plant species exhibit different adaptations to shade, and a particular plant can exhibit varying degrees of shade tolerance, or even of requirement for light, depending on its history or stage of development. Basic concepts Except for some parasitic plants, all land plants need sunlight to survive. However, in general, more sunlight does not always make it easier for plants to survive. In direct sunlight, plants face desiccation and exposure to UV rays, and must expend energy producing pigments to block UV light, and waxy coatings to prevent water loss. Plants adapted to shade have the ability to use far-red light (about 730 nm) more effectively than plants adapted to full sunlight. Most red light gets absorbed by the shade-intolerant canopy plants, but more of the far-red light penetrates the canopy, reaching the understorey. The shade-tolerant plants found here are capable of photosynthesis using light at such wavelengths. The situation with respect to nutrients is often different in shade and sun. Most shade is due to the presence of a canopy of other plants, and this is usually associated with a completely different environment—richer in soil nutrients—than sunny areas. Shade-tolerant plants are thus adapted to be efficient energy-users. In simple terms, shade-tolerant plants grow broader, thinner leaves to catch more sunlight relative to the cost of producing the leaf. Shade-tolerant plants are also usually adapted to make more use of soil nutrients than shade-intolerant plants. A distinction may be made between "shade-tolerant" plants and "shade-loving" or sciophilous plants. Sciophilous plants are dependent on a degree of sha
https://en.wikipedia.org/wiki/Pathwidth
In graph theory, a path decomposition of a graph G is, informally, a representation of G as a "thickened" path graph, and the pathwidth of G is a number that measures how much the path was thickened to form G. More formally, a path-decomposition is a sequence of subsets of vertices of G such that the endpoints of each edge appear in one of the subsets and such that each vertex appears in a contiguous subsequence of the subsets, and the pathwidth is one less than the size of the largest set in such a decomposition. Pathwidth is also known as interval thickness (one less than the maximum clique size in an interval supergraph of G), vertex separation number, or node searching number. Pathwidth and path-decompositions are closely analogous to treewidth and tree decompositions. They play a key role in the theory of graph minors: the families of graphs that are closed under graph minors and do not include all forests may be characterized as having bounded pathwidth, and the "vortices" appearing in the general structure theory for minor-closed graph families have bounded pathwidth. Pathwidth, and graphs of bounded pathwidth, also have applications in VLSI design, graph drawing, and computational linguistics. It is NP-hard to find the pathwidth of arbitrary graphs, or even to approximate it accurately. However, the problem is fixed-parameter tractable: testing whether a graph has pathwidth at most k can be solved in an amount of time that depends linearly on the size of the graph but superexponentially on k. Additionally, for several special classes of graphs, such as trees, the pathwidth may be computed in polynomial time without dependence on k. Many problems in graph algorithms may be solved efficiently on graphs of bounded pathwidth, by using dynamic programming on a path-decomposition of the graph. Path decomposition may also be used to measure the space complexity of dynamic programming algorithms on graphs of bounded treewidth. Definition In the first of their famous series of pa
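The two conditions in the formal definition (every edge covered by some bag, every vertex appearing in a contiguous run of bags) are easy to check mechanically; a small Python sketch (function and variable names are illustrative):

```python
def pathwidth_of_decomposition(bags, edges, vertices):
    """Verify that `bags` (a list of vertex sets, in path order) is a
    valid path decomposition of the graph and return its width.
    Width = size of the largest bag, minus one."""
    # Condition 1: the endpoints of each edge appear together in some bag.
    for u, v in edges:
        if not any(u in b and v in b for b in bags):
            raise ValueError(f"edge {(u, v)} not covered by any bag")
    # Condition 2: each vertex occupies a contiguous subsequence of bags.
    for v in vertices:
        idx = [i for i, b in enumerate(bags) if v in b]
        if not idx or idx != list(range(idx[0], idx[-1] + 1)):
            raise ValueError(f"vertex {v} does not appear contiguously")
    return max(len(b) for b in bags) - 1

# The 4-vertex path graph has pathwidth 1 ...
print(pathwidth_of_decomposition([{1, 2}, {2, 3}, {3, 4}],
                                 [(1, 2), (2, 3), (3, 4)], [1, 2, 3, 4]))
# ... while the 4-cycle needs bags of size 3, giving pathwidth 2.
print(pathwidth_of_decomposition([{1, 2, 4}, {2, 3, 4}],
                                 [(1, 2), (2, 3), (3, 4), (4, 1)], [1, 2, 3, 4]))
```

Note that this only certifies an upper bound: the pathwidth of the graph is the minimum width over all valid decompositions, and finding that minimum is the NP-hard part.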
https://en.wikipedia.org/wiki/Haven%20%28graph%20theory%29
In graph theory, a haven is a certain type of function on sets of vertices in an undirected graph. If a haven exists, it can be used by an evader to win a pursuit–evasion game on the graph, by consulting the function at each step of the game to determine a safe set of vertices to move into. Havens were first introduced by Seymour and Thomas as a tool for characterizing the treewidth of graphs. Their other applications include proving the existence of small separators on minor-closed families of graphs, and characterizing the ends and clique minors of infinite graphs. Definition If G is an undirected graph, and X is a set of vertices, then an X-flap is a nonempty connected component of the subgraph of G formed by deleting X. A haven of order k in G is a function β that assigns an X-flap β(X) to every set X of fewer than k vertices. This function must also satisfy additional constraints which are given differently by different authors. The number k is called the order of the haven. In the original definition of Seymour and Thomas, a haven is required to satisfy the property that every two flaps β(X) and β(Y) must touch each other: either they share a common vertex or there exists an edge with one endpoint in each flap. In the definition used later by Alon, Seymour, and Thomas, havens are instead required to satisfy a weaker monotonicity property: if X ⊆ Y, and both X and Y have fewer than k vertices, then β(Y) ⊆ β(X). The touching property implies the monotonicity property, but not necessarily vice versa. However, it follows from the results of Seymour and Thomas that, in finite graphs, if a haven with the monotonicity property exists, then one with the same order and the touching property also exists. Havens with the touching definition are closely related to brambles, families of connected subgraphs of a given graph that all touch each other. The order of a bramble is the minimum number of vertices needed in a set of vertices that hits all of the subgraphs in the family. The set of flaps for a haven of order k (with the
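The X-flaps in the definition are just the connected components that remain after deleting X; a short sketch (the adjacency-dict representation is an arbitrary choice):

```python
def flaps(adj, X):
    """Return the X-flaps of a graph: the connected components of the
    subgraph obtained by deleting the vertex set X.
    `adj` maps each vertex to the set of its neighbours."""
    remaining = set(adj) - set(X)
    seen, comps = set(), []
    for start in remaining:
        if start in seen:
            continue
        comp, stack = set(), [start]   # depth-first search from `start`
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - set(X) - comp)
        seen |= comp
        comps.append(comp)
    return comps

# Deleting two opposite vertices of a 4-cycle leaves two flaps.
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(flaps(cycle, {0, 2}))  # two singleton components, {1} and {3}
```

A haven must pick one of these flaps for every small enough X, consistently in the sense of the touching or monotonicity property; in the 4-cycle example no haven of order 3 can do so, reflecting its treewidth of 2.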
https://en.wikipedia.org/wiki/Railroad%20ecology
Railroad ecology or railway ecology is a term used to refer to the study of the ecological community growing along railroad or railway tracks and the effects of railroads on natural ecosystems. Such ecosystems have been studied primarily in Europe. Similar conditions and effects also appear along roads used by vehicles. Railroads, along with roads, canals, and power lines, are examples of linear infrastructure intrusions. Conditions Railroad beds, like road beds, are designed to drain water away from the tracks, so there is usually a bed of rock and gravel resulting in fast drainage away from the tracks. At the same time, this drainage often accumulates in areas fairly near the tracks where drainage is poor, forming small artificial wetlands. These unnatural conditions combine to form different zones, some in which water is scarce, others in which water is abundant. Maintenance Railroad companies routinely clear-cut and/or spray with herbicide any vegetation that grows too close to the tracks. This favors vegetation that is able to respond favorably to clearcutting, and/or resist herbicides. On overhead electrified railroad lines, clear-cutting must be more extensive, vertically as well as horizontally, in order to prevent vegetation (especially tree limbs) from interfering with the pantographs on a moving train, breaking off and falling on the wires, or simply from arcing in proximity to high voltage transmission cables. The same vegetative selection processes described in the previous paragraph apply, but may additionally favor climbing vines due to the presence of catenary and transmission poles in addition to the wooden communications and signal pole lines which often exist(ed) along nonelectrified lines. History Historically, conditions along railroad beds were very different from today. Coal engines used to blanket the area with soot, favoring species adapted to these conditions (some of which only occurred naturally in volcanic areas). Newer engines prod
https://en.wikipedia.org/wiki/Lateral%20corticospinal%20tract
The lateral corticospinal tract (also called the crossed pyramidal tract or lateral cerebrospinal fasciculus) is the largest part of the corticospinal tract. It extends throughout the entire length of the spinal cord, and on transverse section appears as an oval area in front of the posterior column and medial to the posterior spinocerebellar tract. Structure Descending motor pathways carry motor signals from the brain down the spinal cord and to the target muscle or organ. They typically consist of an upper motor neuron and a lower motor neuron. The lateral corticospinal tract is a descending motor pathway that begins in the cerebral cortex, decussates in the pyramids of the lower medulla (also known as the medulla oblongata or the cervicomedullary junction, the most caudal division of the brain) and proceeds down the contralateral side of the spinal cord. Function Axons in the lateral corticospinal tract weave out of the tract and into the anterior horns of the spinal cord. It controls fine movement of ipsilateral limbs (albeit contralateral to the corresponding motor cortex) as it lies distal to the pyramidal decussation. Control of more central axial and girdle muscles comes from the anterior corticospinal tract. Damage at different levels will cause different deficits, depending on whether the damage is above (rostral) or below (caudal) the pyramidal decussation. Damage above the pyramidal decussation will cause contralateral motor deficits. For example, if there is a lesion at the pre-central gyrus in the right cerebral cortex, then the left side of the body will be affected. Whereas damage below the pyramidal decussation will result in ipsilateral motor deficits. For example, spinal
https://en.wikipedia.org/wiki/Anterior%20corticospinal%20tract
The anterior corticospinal tract (also called the ventral corticospinal tract, bundle of Türck, medial corticospinal tract, direct pyramidal tract, or anterior cerebrospinal fasciculus) is a small bundle of descending fibers that connect the cerebral cortex to the spinal cord. Descending tracts are pathways by which motor signals are sent from upper motor neurons in the brain to lower motor neurons which then directly innervate muscle to produce movement. The anterior corticospinal tract is usually small, varying inversely in size with the lateral corticospinal tract, which is the main part of the corticospinal tract. It lies close to the anterior median fissure, and is present only in the upper part of the spinal cord; gradually diminishing in size as it descends, it ends about the middle of the thoracic region. It consists of descending fibers that arise from cells in the motor area of the ipsilateral cerebral hemisphere. The impulse travels from these upper motor neurons (located in the pre-central gyrus of the brain) through the anterior column. In contrast to the fibers for the lateral corticospinal tract, the fibers for the anterior corticospinal tract do not decussate at the level of the medulla oblongata, although they do cross over at the spinal level they innervate. They then synapse at the anterior horn with the lower motor neuron which then synapses with the target muscle at the motor end plate. In contrast to the lateral corticospinal tract which controls the movement of the limbs, the anterior corticospinal tract controls the movements of axial muscles (of the trunk). A few of its fibers pass to the lateral column of the same side and to the gray matter at the base of the posterior grey column.
https://en.wikipedia.org/wiki/Fej%C3%A9r%27s%20theorem
In mathematics, Fejér's theorem, named after Hungarian mathematician Lipót Fejér, states the following: if f : R → C is a continuous function with period 2π, then the sequence of Cesàro means of the partial sums of the Fourier series of f converges uniformly to f on [−π, π]. Explanation of Fejér's theorem Explicitly, we can write the Fourier series of f as f(x) ~ Σ_{k=−∞}^{∞} c_k e^{ikx}, where the nth partial sum of the Fourier series of f may be written as s_n(f, x) = Σ_{k=−n}^{n} c_k e^{ikx}, where the Fourier coefficients are c_k = (1/2π) ∫_{−π}^{π} f(t) e^{−ikt} dt. Then, we can define σ_n(f, x) = (1/(n+1)) Σ_{j=0}^{n} s_j(f, x) = (1/2π) ∫_{−π}^{π} f(x − t) F_n(t) dt, with F_n being the nth order Fejér kernel, F_n(t) = (1/(n+1)) Σ_{j=0}^{n} D_j(t), where D_j is the jth Dirichlet kernel. Then, Fejér's theorem asserts that σ_n(f) → f with uniform convergence. With the convergence written out explicitly, the above statement becomes sup_{x} |σ_n(f, x) − f(x)| → 0 as n → ∞. Proof of Fejér's theorem We first prove the following lemma: Lemma 1: s_n(f, x) = (1/2π) ∫_{−π}^{π} f(x − t) D_n(t) dt. Proof: Recall the definition of D_n, the Dirichlet kernel: D_n(t) = Σ_{k=−n}^{n} e^{ikt}. We substitute the integral form of the Fourier coefficients into the formula for s_n above: s_n(f, x) = Σ_{k=−n}^{n} ((1/2π) ∫_{−π}^{π} f(t) e^{−ikt} dt) e^{ikx} = (1/2π) ∫_{−π}^{π} f(t) D_n(x − t) dt. Using the change of variables u = x − t and the periodicity of the integrand we get s_n(f, x) = (1/2π) ∫_{−π}^{π} f(x − u) D_n(u) du. This completes the proof of Lemma 1. We next prove the following lemma: Lemma 2: σ_n(f, x) = (1/2π) ∫_{−π}^{π} f(x − t) F_n(t) dt. Proof: Recall the definition of the Fejér kernel, F_n(t) = (1/(n+1)) Σ_{j=0}^{n} D_j(t). As in the case of Lemma 1, we substitute the result of Lemma 1 into the formula for σ_n: σ_n(f, x) = (1/(n+1)) Σ_{j=0}^{n} s_j(f, x) = (1/(n+1)) Σ_{j=0}^{n} (1/2π) ∫_{−π}^{π} f(x − t) D_j(t) dt = (1/2π) ∫_{−π}^{π} f(x − t) F_n(t) dt. This completes the proof of Lemma 2. We next prove the 3rd lemma: Lemma 3: (a) (1/2π) ∫_{−π}^{π} F_n(t) dt = 1, and (b) F_n(t) ≥ 0 for all t. This completes the proof of Lemma 3. We are now ready to prove Fejér's theorem. First, let us recall the statement we are trying to prove: sup_{x} |σ_n(f, x) − f(x)| → 0 as n → ∞. We want to find an expression for σ_n(f, x) − f(x). We begin by invoking Lemma 2: σ_n(f, x) = (1/2π) ∫_{−π}^{π} f(x − t) F_n(t) dt. By Lemma 3a we know that f(x) = (1/2π) ∫_{−π}^{π} f(x) F_n(t) dt. Applying the triangle inequality yields |σ_n(f, x) − f(x)| = |(1/2π) ∫_{−π}^{π} (f(x − t) − f(x)) F_n(t) dt| ≤ (1/2π) ∫_{−π}^{π} |f(x − t) − f(x)| |F_n(t)| dt, and by Lemma 3b, we get |σ_n(f, x) − f(x)| ≤ (1/2π) ∫_{−π}^{π} |f(x − t) − f(x)| F_n(t) dt. We now split the integral into two parts, integrating over the two regions |t| ≤ δ and δ ≤ |t| ≤ π. The motivation for doing so is that we want to prove that the bound tends to zero. We can do this by proving that each integral above, integral 1 (over |t| ≤ δ) and integral 2 (over δ ≤ |t| ≤ π), goes to zero. This is precisely what we'll do in the next step. We first note that the function f is continuous on [−π, π]. We invoke the theorem that every periodic function on [−π, π] that is continuous is also bounded and uniformly continuous. This means that for every ε > 0 there exists δ > 0 such that |f(x − t) − f(x)| ≤ ε whenever |t| ≤ δ. Hence we can rewrite integral 1 as follows: (1/2π) ∫_{|t|≤δ} |f(x − t) − f(x)| F_n(t) dt ≤ (1/2π) ∫_{|t|≤δ} ε F_n(t) dt. Because F_n(t) ≥ 0, and by Lemma 3a, we then get (1/2π) ∫_{|t|≤δ} ε F_n(t) dt ≤ ε (1/2π) ∫_{−π}^{π} F_n(t) dt = ε for all n. This gives the desired bound for integral 1 which we can exploit in fina
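The uniform convergence asserted by the theorem can be observed numerically; a sketch that approximates the Fourier coefficients of the continuous 2π-periodic function f(x) = |x| by the trapezoidal rule and forms the Fejér (Cesàro) means (the test function, grid size, and orders are arbitrary choices):

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 4001)
dx = x[1] - x[0]
f = np.abs(x)  # continuous and 2*pi-periodic, so Fejér's theorem applies

def coeff(k):
    # c_k = (1/2pi) * integral of f(t) e^{-ikt} dt, trapezoidal rule
    g = f * np.exp(-1j * k * x)
    return dx * (0.5 * g[0] + g[1:-1].sum() + 0.5 * g[-1]) / (2 * np.pi)

def fejer_mean(n):
    # sigma_n(f, x) = sum over |k| <= n of (1 - |k|/(n+1)) c_k e^{ikx},
    # the Cesàro average of the partial sums s_0, ..., s_n
    s = np.zeros_like(x, dtype=complex)
    for k in range(-n, n + 1):
        s += (1 - abs(k) / (n + 1)) * coeff(k) * np.exp(1j * k * x)
    return s.real

errors = {n: np.max(np.abs(fejer_mean(n) - f)) for n in (4, 64)}
print(errors)  # the sup-norm error shrinks as the order grows
```

The maximum (sup-norm) error decreasing with n is exactly the uniform convergence in the statement; a pointwise-only mode of convergence would allow the maximum error to stay large.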
https://en.wikipedia.org/wiki/Network%20Security%20Services
Network Security Services (NSS) is a collection of cryptographic computer libraries designed to support cross-platform development of security-enabled client and server applications with optional support for hardware TLS/SSL acceleration on the server side and hardware smart cards on the client side. NSS provides a complete open-source implementation of cryptographic libraries supporting Transport Layer Security (TLS) / Secure Sockets Layer (SSL) and S/MIME. NSS releases prior to version 3.14 are tri-licensed under the Mozilla Public License 1.1, the GNU General Public License, and the GNU Lesser General Public License. Since release 3.14, NSS releases are licensed under the GPL-compatible Mozilla Public License 2.0. History NSS originated from the libraries developed when Netscape invented the SSL security protocol. FIPS 140 validation and NISCC testing The NSS software crypto module has been validated five times (in 1997, 1999, 2002, 2007, and 2010) for conformance to FIPS 140 at Security Levels 1 and 2. NSS was the first open source cryptographic library to receive FIPS 140 validation. The NSS libraries passed the NISCC TLS/SSL and S/MIME test suites (1.6 million test cases of invalid input data). Applications that use NSS AOL, Red Hat, Sun Microsystems/Oracle Corporation, Google and other companies and individual contributors have co-developed NSS. Mozilla provides the source code repository, bug tracking system, and infrastructure for mailing lists and discussion groups. They and others named below use NSS in a variety of products, including the following: Mozilla client products, including Firefox, Thunderbird, SeaMonkey, and Firefox for mobile. AOL Communicator and AOL Instant Messenger (AIM) Open source client applications such as Evolution, Pidgin, and OpenOffice.org 2.0 onward (and its descendants). Server products from Red Hat: Red Hat Directory Server, Red Hat Certificate System, and the mod_nss SSL module for the Apache web server. Sun server produ
https://en.wikipedia.org/wiki/Discrete%20Poisson%20equation
In mathematics, the discrete Poisson equation is the finite difference analog of the Poisson equation. In it, the discrete Laplace operator takes the place of the Laplace operator. The discrete Poisson equation is frequently used in numerical analysis as a stand-in for the continuous Poisson equation, although it is also studied in its own right as a topic in discrete mathematics. On a two-dimensional rectangular grid Using the finite difference numerical method to discretize the 2-dimensional Poisson equation (assuming a uniform spatial discretization, Δx = Δy) on an m × n grid gives the following formula: (∇²u)_{ij} = (1/Δx²)(u_{i+1,j} + u_{i−1,j} + u_{i,j+1} + u_{i,j−1} − 4u_{ij}) = g_{ij}, where 2 ≤ i ≤ m − 1 and 2 ≤ j ≤ n − 1. The preferred arrangement of the solution vector is to use natural ordering which, prior to removing boundary elements, would look like: u = [u_{11}, u_{21}, ..., u_{m1}, u_{12}, u_{22}, ..., u_{m2}, ..., u_{mn}]ᵀ. This will result in an mn × mn linear system Au = b, where A is the block tridiagonal matrix A = [D −I; −I D −I; ...; −I D], I is the m × m identity matrix, D, also m × m, is the tridiagonal matrix D = [4 −1; −1 4 −1; ...; −1 4], and b collects the terms −Δx² g_{ij} together with the prescribed boundary values. For each equation, the columns of D correspond to a block of m components in u, while the columns of the I blocks to the left and right of each D correspond to the adjacent blocks of components within u. From the above, it can be inferred that there are n block columns of m in A. It is important to note that prescribed values of u (usually lying on the boundary) would have their corresponding elements removed from u and b. For the common case that all the nodes on the boundary are set, we have 2 ≤ i ≤ m − 1 and 2 ≤ j ≤ n − 1, and the system would have the dimensions (m − 2)(n − 2) × (m − 2)(n − 2), where D and I would have dimensions (m − 2) × (m − 2). Example For a 3×3 interior (m = 5 and n = 5) grid with all the boundary nodes prescribed, the boundary u's are brought to the right-hand side of the equation, the entire system is 9 × 9, and D and I are 3 × 3, given by D = [4 −1 0; −1 4 −1; 0 −1 4] and I = [1 0 0; 0 1 0; 0 0 1]. Methods of solution Because A is block tridiagonal and sparse, many methods of solution have been developed to optimally solve this linear system for u. Among the methods are a generalized Thomas algorithm, cyclic reduction, successive o
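The block-tridiagonal structure described above can be assembled with Kronecker products; a sketch for a 3×3 interior grid, exploiting the fact that the 5-point stencil is exact for the quadratic u = x² + y² (the test function and grid spacing are arbitrary choices for illustration):

```python
import numpy as np

n, h = 3, 0.25                          # 3x3 interior nodes, spacing h
exact = lambda xx, yy: xx**2 + yy**2    # laplacian(exact) = 4 everywhere

# A = tridiag(-I, D, -I) in blocks, D = tridiag(-1, 4, -1), from the
# stencil 4*u_ij - u_{i+1,j} - u_{i-1,j} - u_{i,j+1} - u_{i,j-1} = -h^2 g_ij
I_n = np.eye(n)
D = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A = (np.kron(np.eye(n), D)
     - np.kron(np.eye(n, k=1), I_n)
     - np.kron(np.eye(n, k=-1), I_n))

b = np.zeros(n * n)
for j in range(n):                      # y index of the interior node
    for i in range(n):                  # x index of the interior node
        xi, yj = (i + 1) * h, (j + 1) * h
        b[j * n + i] = -h**2 * 4.0      # -h^2 * g with g = 4
        # prescribed boundary neighbours move to the right-hand side
        if i == 0:     b[j * n + i] += exact(0.0, yj)
        if i == n - 1: b[j * n + i] += exact((n + 1) * h, yj)
        if j == 0:     b[j * n + i] += exact(xi, 0.0)
        if j == n - 1: b[j * n + i] += exact(xi, (n + 1) * h)

u = np.linalg.solve(A, b)
print(u.reshape(n, n))                  # matches exact(x, y) on the grid
```

Because the central difference is exact for quadratics, the solve reproduces x² + y² at the interior nodes to machine precision, which makes this a convenient self-check on the assembly and the sign conventions.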
https://en.wikipedia.org/wiki/Gray%20short-tailed%20opossum
The gray short-tailed opossum (Monodelphis domestica) is a small South American member of the family Didelphidae. Unlike most other marsupials, the gray short-tailed opossum does not have a true pouch. The scientific name Monodelphis is derived from Greek and means "single womb" (referring to the lack of a pouch) and the Latin word domestica which means "domestic" (chosen because of the species' habit of entering human dwellings). It was the first marsupial to have its genome sequenced. The gray short-tailed opossum is used as a research model in science, and is also frequently found in the exotic pet trade. It is also known as the Brazilian opossum, rainforest opossum and in a research setting the laboratory opossum. Description Gray short-tailed opossums are relatively small animals, with a superficial resemblance to voles. In the wild they have head-body length of and weigh ; males are larger than females. However, individuals kept in captivity are typically much larger, with males weighing up to . As the common name implies, the tail is proportionately shorter than in some other opossum species, ranging from . Their tails are only semi-prehensile, unlike the fully prehensile tail characteristic of the North American opossum. The fur is greyish brown over almost the entire body, although fading to a paler shade on the underparts, and with near-white fur on the feet. Only the base of the tail has fur, the remainder being almost entirely hairless. The claws are well-developed and curved in shape, and the paws have small pads marked with fine dermal ridges. Unlike many other marsupials, females do not have a pouch. They typically possess thirteen teats, which can be retracted into the body by muscles at their base. Distribution and habitat The gray short-tailed opossum is found generally south of the Amazon River, in southern, central, and western Brazil. It is also found in eastern Bolivia, northern Paraguay, and in Formosa Province in northern Argentina. It in
https://en.wikipedia.org/wiki/Satisfiability%20modulo%20theories
In computer science and mathematical logic, satisfiability modulo theories (SMT) is the problem of determining whether a mathematical formula is satisfiable. It generalizes the Boolean satisfiability problem (SAT) to more complex formulas involving real numbers, integers, and/or various data structures such as lists, arrays, bit vectors, and strings. The name is derived from the fact that these expressions are interpreted within ("modulo") a certain formal theory in first-order logic with equality (often disallowing quantifiers). SMT solvers are tools that aim to solve the SMT problem for a practical subset of inputs. SMT solvers such as Z3 and cvc5 have been used as a building block for a wide range of applications across computer science, including in automated theorem proving, program analysis, program verification, and software testing. Since Boolean satisfiability is already NP-complete, the SMT problem is typically NP-hard, and for many theories it is undecidable. Researchers study which theories or subsets of theories lead to a decidable SMT problem and the computational complexity of decidable cases. The resulting decision procedures are often implemented directly in SMT solvers; see, for instance, the decidability of Presburger arithmetic. SMT can be thought of as a constraint satisfaction problem and thus a certain formalized approach to constraint programming. Basic terminology Formally speaking, an SMT instance is a formula in first-order logic, where some function and predicate symbols have additional interpretations, and SMT is the problem of determining whether such a formula is satisfiable. In other words, imagine an instance of the Boolean satisfiability problem (SAT) in which some of the binary variables are replaced by predicates over a suitable set of non-binary variables. A predicate is a binary-valued function of non-binary variables. Example predicates include linear inequalities (e.g., 3x + 2y − z ≥ 4) or equalities involving uninterpreted terms and funct
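A real SMT solver combines a SAT engine with dedicated theory solvers; purely as an illustration of the problem statement (not of solver technology), here is a brute-force satisfiability check of a small formula over linear integer arithmetic, where theory predicates replace SAT's Boolean atoms (the formula, variable names, and search range are arbitrary choices):

```python
from itertools import product

def satisfiable(formula, variables, lo=-10, hi=10):
    """Toy SMT-style check: search integer assignments in [lo, hi]
    for a model of `formula`, a function from a variable-to-value
    dict to bool. Returns a satisfying model, or None."""
    for values in product(range(lo, hi + 1), repeat=len(variables)):
        env = dict(zip(variables, values))
        if formula(env):
            return env          # a model witnessing satisfiability
    return None

# Conjunction of three linear-integer-arithmetic predicates:
# (x + y <= 5) AND (x > 1) AND (y > 1)
model = satisfiable(
    lambda e: e["x"] + e["y"] <= 5 and e["x"] > 1 and e["y"] > 1,
    ["x", "y"])
print(model)  # {'x': 2, 'y': 2}
```

An actual solver such as Z3 decides such formulas without enumeration, using decision procedures for the theory (e.g., a simplex- or Presburger-based procedure for linear arithmetic), which is what makes SMT practical on unbounded domains.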
https://en.wikipedia.org/wiki/Timeline%20of%20immunology
The following are notable events in the Timeline of immunology: 1550 BCE – The Ebers papyrus recommends placing a poultice on a tumor and then making an incision, which would induce infection and cause regression of the tumor. 1549 – The earliest account of inoculation of smallpox (variolation) occurs in Wan Quan's (1499–1582) Douzhen Xinfa (痘疹心法). 1718 – Smallpox inoculation (variolation), long practiced in the Ottoman Empire, is introduced to the West; Lady Mary Wortley Montagu recorded its positive effects and had the technique performed on her own children. 1761 – A case of breast cancer cured after ulcerating and getting infected is reported by Lambergen. 1796 – First demonstration of smallpox vaccination (Edward Jenner) 1808–1813 – First experimental demonstration of the germ theory of disease by Agostino Bassi, though he does not formally propose the theory until 1844. 1813 – Vautier reports spontaneous remission of cancer after gangrene infection (later to be known as Clostridium perfringens). 1829 – Another case of spontaneous remission of breast cancer: after a patient refused surgery, the tumor ruptured, became infected and, during a febrile illness with purulent discharge, shrank and disappeared after a few weeks. (Guillaume Dupuytren) 1837 – Description of the role of microbes in putrefaction and fermentation (Theodor Schwann) 1838 – Confirmation of the role of yeast in fermentation of sugar to alcohol (Charles Cagniard-Latour) 1850 – Demonstration of the contagious nature of puerperal fever (childbed fever) (Ignaz Semmelweis) 1857–1870 – Confirmation of the role of microbes in fermentation (Louis Pasteur) 1862 – Phagocytosis (Ernst Haeckel) 1867 – Aseptic practice in surgery using carbolic acid (Joseph Lister) 1868 – Busch discovered that a sarcoma patient who had undergone surgery to remove the tumor, after being exposed to a patient suffering from erysipelas, got a skin infection and her tumor disappeared. He inoculated some other cancer patients with many successes. 1876 – Demonstration that microbes can cause
https://en.wikipedia.org/wiki/Gene%20H.%20Golub
Gene Howard Golub (February 29, 1932 – November 16, 2007), was an American numerical analyst who taught at Stanford University as Fletcher Jones Professor of Computer Science and held a courtesy appointment in electrical engineering. Personal life Born in Chicago, he was educated at the University of Illinois at Urbana-Champaign, receiving his B.S. (1953), M.A. (1954) and Ph.D. (1959) all in mathematics. His M.A. degree was more specifically in Mathematical Statistics. His PhD dissertation was entitled "The Use of Chebyshev Matrix Polynomials in the Iterative Solution of Linear Equations Compared to the Method of Successive Overrelaxation" and his thesis adviser was Abraham Taub. Gene Golub succumbed to acute myeloid leukemia on the morning of 16 November 2007 at the Stanford Hospital. Stanford University He arrived at Stanford in 1962 and became a professor there in 1970. He advised more than thirty doctoral students, many of whom have themselves achieved distinction. Gene Golub was an important figure in numerical analysis and pivotal to creating the NA-Net and the NA-Digest, as well as the International Congress on Industrial and Applied Mathematics. One of his best-known books is Matrix Computations, co-authored with Charles F. Van Loan. He was a major contributor to algorithms for matrix decompositions. In particular he published an algorithm together with William Kahan in 1970 that made the computation of the singular value decomposition (SVD) feasible and that is still used today. A survey of his work was published in 2007 by Oxford University Press as "Milestones in Matrix Computation". Recognition Golub was awarded the B. Bolzano Gold Medal for Merits in the Field of Mathematical Sciences and was one of the few elected to three national academies: the National Academy of Sciences (1993), the National Academy of Engineering (1990), and the American Academy of Arts and Sciences (1994). He was also a Foreign Member of the Royal Swedish Academy of Engineer
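The Golub–Kahan approach underlies the SVD routines in standard numerical libraries; a quick numpy check of the defining factorization A = U diag(s) Vᵀ (the matrix size and random seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

# Compute the singular value decomposition and reconstruct A.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
reconstructed = U @ np.diag(s) @ Vt

print(np.allclose(A, reconstructed))  # True
print(s)  # singular values, returned in decreasing order
```

The numerical significance of the Golub–Kahan–Reinsch line of work is that the decomposition is computed stably without forming AᵀA, whose explicit construction squares the condition number.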
https://en.wikipedia.org/wiki/Admissible%20set
In set theory, a discipline within mathematics, an admissible set is a transitive set A such that ⟨A, ∈⟩ is a model of Kripke–Platek set theory (Barwise 1975). The smallest example of an admissible set is the set of hereditarily finite sets. Another example is the set of hereditarily countable sets. See also Admissible ordinal
https://en.wikipedia.org/wiki/Turn%20%28biochemistry%29
A turn is an element of secondary structure in proteins where the polypeptide chain reverses its overall direction. Definition According to one definition, a turn is a structural motif where the Cα atoms of two residues separated by a few (usually 1 to 5) peptide bonds are close (less than ). The proximity of the terminal Cα atoms often correlates with formation of an inter main chain hydrogen bond between the corresponding residues. Such hydrogen bonding is the basis for the original, perhaps better known, turn definition. In many cases, but not all, the hydrogen-bonding and Cα-distance definitions are equivalent. Types of turns Turns are classified according to the separation between the two end residues: In an α-turn the end residues are separated by four peptide bonds (i → i ± 4). In a β-turn (the most common form), by three bonds (i → i ± 3). In a γ-turn, by two bonds (i → i ± 2). In a δ-turn, by one bond (i → i ± 1), which is sterically unlikely. In a π-turn, by five bonds (i → i ± 5). Turns are classified by their backbone dihedral angles (see Ramachandran plot). A turn can be converted into its inverse turn (in which the main chain atoms have opposite chirality) by changing the sign on its dihedral angles. (The inverse turn is not a true enantiomer since the Cα atom chirality is maintained.) Thus, the γ-turn has two forms, a classical form with (φ, ψ) dihedral angles of roughly (75°, −65°) and an inverse form with dihedral angles (−75°, 65°). At least eight forms of the beta turn occur, varying in whether a cis isomer of a peptide bond is involved and on the dihedral angles of the central two residues. The classical and inverse β-turns are distinguished with a prime, e.g., type I and type I′ beta turns. If an i → i + 3 hydrogen bond is taken as the criterion for turns, the four categories of Venkatachalam (I, II, II′, I′) suffice to describe all possible beta turns. All four occur frequently in proteins but I is most common, followed by II, I′ a
https://en.wikipedia.org/wiki/Little%20Boy%20from%20Manly
The Little Boy from Manly was a national personification of New South Wales and later Australia created by the cartoonist Livingston Hopkins of The Bulletin in April 1885. In March 1885, as the New South Wales Contingent was about to depart for the Sudan, a letter was addressed to Premier William Bede Dalley containing a cheque for £25 for the Patriotic Fund 'with my best wishes from a little boy at Manly'. It was Australia's first overseas military adventure, and the little boy became a symbol either of Australian patriotism or, among opponents of the adventure, of mindless chauvinism. Hopkins put the boy in a cartoon, dressed in the pantaloons and frilled shirt associated with English storybook schoolboys of the namby-pamby kind. Over the following decades, he became The Bulletin's stock symbol of Young Australia.
https://en.wikipedia.org/wiki/Code%20%28set%20theory%29
In set theory, a code for a hereditarily countable set x is a set E ⊆ ω×ω such that there is an isomorphism between (ω,E) and (X,∈) where X is the transitive closure of {x}. If X is finite (with cardinality n), then use n×n instead of ω×ω and (n,E) instead of (ω,E). According to the axiom of extensionality, the identity of a set is determined by its elements. And since those elements are also sets, their identities are determined by their elements, etc. So if one knows the element relation restricted to X, then one knows what x is. (We use the transitive closure of {x} rather than of x itself to avoid confusing the elements of x with elements of its elements or whatever.) A code includes that information identifying x and also information about the particular injection from X into ω which was used to create E. The extra information about the injection is non-essential, so there are many codes for the same set which are equally useful. So codes are a way of mapping the hereditarily countable sets into the powerset of ω×ω. Using a pairing function on ω (such as (n,k) goes to (n²+2·n·k+k²+n+3·k)/2), we can map the powerset of ω×ω into the powerset of ω. And we can map the powerset of ω into the Cantor set, a subset of the real numbers. So statements about hereditarily countable sets can be converted into statements about the reals. Therefore, codes are useful in constructing mice. See also L(R)
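The pairing function quoted above is the Cantor pairing function in expanded form, since (n² + 2nk + k² + n + 3k)/2 = (n + k)(n + k + 1)/2 + k. A quick Python sketch (illustrative, not part of the article) checking that it enumerates ω×ω diagonal by diagonal without collisions:

```python
# The pairing function (n, k) -> (n^2 + 2nk + k^2 + n + 3k)/2 from the article,
# i.e. the Cantor pairing function ((n + k)(n + k + 1))/2 + k in expanded form.

def pair(n: int, k: int) -> int:
    """Map a pair of natural numbers to a single natural number."""
    return (n * n + 2 * n * k + k * k + n + 3 * k) // 2

# Every pair in a 50x50 grid maps to a distinct natural number ...
values = {pair(n, k) for n in range(50) for k in range(50)}
assert len(values) == 50 * 50

# ... and the pairs on the diagonals n + k = 0, 1, ..., 49 cover an initial
# segment of omega with no gaps, since diagonals are enumerated in order.
diagonal_values = sorted(pair(n, k) for n in range(50) for k in range(50 - n))
assert diagonal_values == list(range(50 * 51 // 2))
```

Because the function is a bijection ω×ω → ω, it does exactly what the article needs: it turns a subset of ω×ω into a subset of ω.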
https://en.wikipedia.org/wiki/Optical%20scalars
In general relativity, optical scalars refer to a set of three scalar functions θ̂ (expansion), σ̂ (shear) and ω̂ (twist/rotation/vorticity) describing the propagation of a geodesic null congruence. In fact, these three scalars can be defined for both timelike and null geodesic congruences in an identical spirit, but they are called "optical scalars" only for the null case. Also, it is their tensorial predecessors that are adopted in tensorial equations, while the scalars mainly show up in equations written in the language of Newman–Penrose formalism. Definitions: expansion, shear and twist For geodesic timelike congruences Denote the tangent vector field of an observer's worldline (in a timelike congruence) as Z^a, and then one could construct the induced "spatial metric" h_ab = g_ab + Z_a Z_b, where h^a_b works as a spatially projecting operator. Use h^a_b to project the coordinate covariant derivative ∇_b Z_a and one obtains the "spatial" auxiliary tensor B_ab = h^c_a h^d_b ∇_d Z_c = ∇_b Z_a + A_a Z_b, where A_a = Z^b ∇_b Z_a represents the four-acceleration, and B_ab is purely spatial in the sense that B_ab Z^a = B_ab Z^b = 0. Specifically for an observer with a geodesic timelike worldline, we have A_a = 0 and thus B_ab = ∇_b Z_a. Now decompose B_ab into its symmetric and antisymmetric parts θ_ab = B_(ab) and ω_ab = B_[ab]; ω_ab is trace-free (g^ab ω_ab = 0) while θ_ab has nonzero trace, g^ab θ_ab = θ. Thus, the symmetric part θ_ab can be further rewritten into its trace and trace-free part, θ_ab = (θ/3) h_ab + σ_ab. Hence, all in all we have B_ab = (θ/3) h_ab + σ_ab + ω_ab. For geodesic null congruences Now, consider a geodesic null congruence with tangent vector field l^a. Similar to the timelike situation, we also define B̂_ab = ĥ^c_a ĥ^d_b ∇_d l_c, which can be decomposed into B̂_ab = θ̂_ab + ω̂_ab = (θ̂/2) ĥ_ab + σ̂_ab + ω̂_ab, where θ̂_ab = B̂_(ab), ω̂_ab = B̂_[ab], θ̂ = ĥ^ab B̂_ab and σ̂_ab = θ̂_ab − (θ̂/2) ĥ_ab. Here, "hatted" quantities are utilized to stress that these quantities for null congruences are two-dimensional as opposed to the three-dimensional timelike case. However, if we only discuss null congruences in a paper, the hats can be omitted for simplicity. Definitions: optical scalars for null congruences The optical scalars come straightforwardly from "scalarization" of the tensors in Eq(9). The expansion of a geodesic null congruence is defined by θ̂ = ĥ^ab ∇_b l_a (where for clearance we wil
https://en.wikipedia.org/wiki/Ducking
In audio engineering, ducking is an audio effect commonly used in radio and pop music, especially dance music. In ducking, the level of one audio signal is reduced by the presence of another signal. In radio this can typically be achieved by lowering (ducking) the volume of a secondary audio track when the primary track starts, and lifting the volume again when the primary track is finished. A typical use of this effect in a daily radio production routine is for creating a voice-over: a foreign language original sound is dubbed (and ducked) by a professional speaker reading the translation. Ducking becomes active as soon as the translation starts. In music, the ducking effect is applied in more sophisticated ways where a signal's volume is delicately lowered by another signal's presence. Ducking here works through the use of a "side chain" gate. In other words, one track is made quieter (the ducked track) whenever another (the ducking track) gets louder. This may be done with a gate with its ducking function engaged or by a dedicated ducker. A typical application is to achieve an impression similar to the "pumping" effect. The difference between ducking and side-chain pumping is that in ducking the attenuation is by a specific range while side-chain compression creates variable attenuation. Ducking may be used in place of mirrored equalization to combat masking, for example with the bass guitar ducked under the kick drum, resembling subtle side-chain pumping. A ducking system may be created where one track ducks another, which ducks another, and so on. Examples include Portishead's "Biscuit". Used most often to turn down the music when the DJ speaks, ducking may be used to combat the muffling and distancing effect of reverb and delay. The ducker is inserted into the reverb and delay line and keyed to a dry track to duck its own reverb and delay so that when the dry track exceeds the ducker's threshold by reaching a certain amplitude the reverb and delay are atten
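The side-chain gate described above can be sketched in a few lines of plain Python; the thresholds, gains, and time constants below are illustrative assumptions, not standard values:

```python
# A minimal side-chain ducker sketch: the key signal's envelope drives gain
# reduction on the ducked signal, with fast attack and slow release smoothing.

def duck(signal, key, threshold=0.25, duck_gain=0.3, attack=0.9, release=0.999):
    """Lower `signal` while `key` is above `threshold` (sample values in -1..1)."""
    out = []
    env = 0.0    # peak-hold envelope follower for the key signal
    gain = 1.0   # gain currently applied to the ducked signal
    for s, k in zip(signal, key):
        env = max(abs(k), env * 0.95)                  # crude envelope decay
        target = duck_gain if env > threshold else 1.0
        coeff = attack if target < gain else release   # duck fast, recover slowly
        gain = coeff * gain + (1.0 - coeff) * target
        out.append(s * gain)
    return out

# A constant music bed with a voice-over present only in the middle section:
bed = [0.5] * 300
voice = [0.0] * 100 + [0.8] * 100 + [0.0] * 100
ducked = duck(bed, voice)  # the bed is attenuated while the voice is active
```

The asymmetric attack/release coefficients mirror how hardware duckers behave: the bed drops quickly when the key appears and recovers gradually once it stops.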
https://en.wikipedia.org/wiki/Nagata%27s%20conjecture%20on%20curves
In mathematics, the Nagata conjecture on curves, named after Masayoshi Nagata, governs the minimal degree required for a plane algebraic curve to pass through a collection of very general points with prescribed multiplicities. History Nagata arrived at the conjecture via work on the 14th problem of Hilbert, which asks whether the invariant ring of a linear group action on the polynomial ring over some field is finitely generated. Nagata published the conjecture in a 1959 paper in the American Journal of Mathematics, in which he presented a counterexample to Hilbert's 14th problem. Statement Nagata Conjecture. Suppose p₁, …, p_r are very general points in P² and that m₁, …, m_r are given positive integers. Then for r > 9 any curve C in P² that passes through each of the points p_i with multiplicity m_i must satisfy deg C > (m₁ + ⋯ + m_r)/√r. The condition r > 9 is necessary: the cases r > 9 and r ≤ 9 are distinguished by whether or not the anti-canonical bundle on the blowup of P² at a collection of r points is nef. In the case where r ≤ 9, the cone theorem essentially gives a complete description of the cone of curves of the blow-up of the plane. Current status The only case when this is known to hold is when r is a perfect square, which was proved by Nagata. Despite much interest, the other cases remain open. A more modern formulation of this conjecture is often given in terms of Seshadri constants and has been generalised to other surfaces under the name of the Nagata–Biran conjecture.
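The conjectured degree bound, in display form (the standard statement of the conjecture, supplied here for readability):

```latex
% Nagata's conjecture: for r > 9 very general points p_1, \dots, p_r of P^2
% and positive integers m_1, \dots, m_r, any curve C passing through each
% p_i with multiplicity m_i satisfies
\deg C \;>\; \frac{1}{\sqrt{r}} \sum_{i=1}^{r} m_i .
```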
https://en.wikipedia.org/wiki/D-Link%20G604T%20network%20adaptor
The DSL-G604T was the first D-Link wireless/ADSL router whose firmware is based on the open-source MontaVista Linux. The DSL-G604T was introduced in November 2004. This model has been discontinued. Specifications Hardware CPU: Texas Instruments AR7W MIPS 4KEc based SoC with built-in ADSL and Ethernet interfaces DRAM Memory: 16 MB Flash Memory: 2 MB SquashFS file system Wi-Fi: TI MiniPCI card Ethernet: 5-port Ethernet hub (1 internal, 4 external) Firmware The G604T runs MontaVista Linux with BusyBox, which allows a degree of customisation with customised firmware. These and similar units from D-Link appear to have an issue that causes certain services to fail when using the factory-provided firmware, namely the Debian package update service being interrupted due to a faulty DNS-through-DHCP issue at the kernel level. A v2.00B06.AU_20060728 patch was made available through their downloads section that provided some level of correction, but it was not a complete fix and the issue would resurface intermittently. When the issue was originally reported, D-Link seemed to have misunderstood that the same issue had been discovered by the Linux community at large to be common across a number of their router models, and they failed to provide a complete fix across the board for all ADSL router models. The Russian version of the firmware (prefix .RU, e.g. V1.00B02T02.RU.20041014) has restrictions on configuring firewall rules – the user can only change the sender's address (a computer address in the LAN segment) and the recipient's port. The web interface of the Russian firmware also differs from the English interface. Default settings When running the D-Link DSL-G604T router for the first time (or after resetting it), the device is configured with a default IP address (192.168.1.1), username (admin) and password (admin). The default username and password may also be printed on the router itself, in the manual, or on the box. Problems Security D-Link DSL-G604T has Cross-site scripting (XSS) vulne
https://en.wikipedia.org/wiki/DLL%20injection
In computer programming, DLL injection is a technique used for running code within the address space of another process by forcing it to load a dynamic-link library. DLL injection is often used by external programs to influence the behavior of another program in a way its authors did not anticipate or intend. For example, the injected code could hook system function calls, or read the contents of password textboxes, which cannot be done the usual way. A program used to inject arbitrary code into arbitrary processes is called a DLL injector. Approaches on Microsoft Windows There are multiple ways on Microsoft Windows to force a process to load and execute code in a DLL that the authors did not intend: DLLs listed in the registry entry HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\AppInit_DLLs are loaded into every process that loads User32.dll during the initial call of that DLL. Beginning with Windows Vista, AppInit_DLLs are disabled by default. Beginning with Windows 7, the AppInit_DLL infrastructure supports code signing. Starting with Windows 8, the entire AppInit_DLL functionality is disabled when Secure Boot is enabled, regardless of code signing or registry settings. DLLs listed under the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\AppCertDLLs are loaded into every process that calls the Win32 API functions CreateProcess, CreateProcessAsUser, CreateProcessWithLogonW, CreateProcessWithTokenW and WinExec. This is the supported route for legitimate DLL injection on current versions of Windows such as Windows 10; the DLL must be signed by a valid certificate. Process manipulation functions such as CreateRemoteThread or code injection techniques such as AtomBombing, can be used to inject a DLL into a program after it has started. Open a handle to the target process. This can be done by spawning the process or by keying off something created by that process that is known to exist – for instance, a window with a predic
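The CreateRemoteThread approach mentioned above can be sketched with Python's ctypes. The code below is a hedged, Windows-only illustration with placeholder arguments (`pid`, `dll_path`), not production tooling; it relies on the long-standing behaviour that kernel32.dll is mapped at the same base address in every process, so the local address of LoadLibraryW is also valid in the target:

```python
# Sketch of CreateRemoteThread + LoadLibrary DLL injection (Windows only).
# Error handling is minimal; calling inject_dll requires Windows and
# sufficient privileges over the target process.

import ctypes

PROCESS_ALL_ACCESS = 0x001F0FFF
MEM_COMMIT_RESERVE = 0x00003000   # MEM_COMMIT | MEM_RESERVE
PAGE_READWRITE = 0x04

def inject_dll(pid: int, dll_path: str) -> None:
    """Force the process `pid` to load `dll_path` (Windows only)."""
    from ctypes import wintypes
    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    # Declare pointer-sized return/argument types so 64-bit addresses survive.
    kernel32.OpenProcess.restype = wintypes.HANDLE
    kernel32.VirtualAllocEx.restype = ctypes.c_void_p
    kernel32.VirtualAllocEx.argtypes = (wintypes.HANDLE, ctypes.c_void_p,
                                        ctypes.c_size_t, wintypes.DWORD,
                                        wintypes.DWORD)
    kernel32.WriteProcessMemory.argtypes = (wintypes.HANDLE, ctypes.c_void_p,
                                            ctypes.c_char_p, ctypes.c_size_t,
                                            ctypes.c_void_p)
    kernel32.GetModuleHandleW.restype = wintypes.HMODULE
    kernel32.GetProcAddress.restype = ctypes.c_void_p
    kernel32.GetProcAddress.argtypes = (wintypes.HMODULE, wintypes.LPCSTR)
    kernel32.CreateRemoteThread.restype = wintypes.HANDLE
    kernel32.CreateRemoteThread.argtypes = (wintypes.HANDLE, ctypes.c_void_p,
                                            ctypes.c_size_t, ctypes.c_void_p,
                                            ctypes.c_void_p, wintypes.DWORD,
                                            ctypes.c_void_p)

    handle = kernel32.OpenProcess(PROCESS_ALL_ACCESS, False, pid)
    if not handle:
        raise ctypes.WinError(ctypes.get_last_error())

    # Reserve memory in the target and write the DLL path (UTF-16) into it.
    path_bytes = (dll_path + "\0").encode("utf-16-le")
    remote_buf = kernel32.VirtualAllocEx(handle, None, len(path_bytes),
                                         MEM_COMMIT_RESERVE, PAGE_READWRITE)
    kernel32.WriteProcessMemory(handle, remote_buf, path_bytes,
                                len(path_bytes), None)

    # LoadLibraryW takes a single pointer-sized argument, so it matches the
    # thread-start signature and can serve directly as the remote entry point.
    load_library = kernel32.GetProcAddress(
        kernel32.GetModuleHandleW("kernel32.dll"), b"LoadLibraryW")

    thread = kernel32.CreateRemoteThread(handle, None, 0, load_library,
                                         remote_buf, 0, None)
    if not thread:
        raise ctypes.WinError(ctypes.get_last_error())
    kernel32.CloseHandle(thread)
    kernel32.CloseHandle(handle)
```

The remote thread simply calls LoadLibraryW on the written path, which is exactly the "force it to load a dynamic-link library" step the article describes.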
https://en.wikipedia.org/wiki/S3%20Trio
The S3 Trio range were popular video cards for personal computers and were S3's first fully integrated graphics accelerators. As the name implies, three previously separate components were now included in the same ASIC: the graphics core, RAMDAC and clock generator. The increased integration allowed a graphics card to be simpler than before and thus cheaper to produce. Variants The Trio64 and 64V+, which first appeared in 1995, are essentially fully integrated solutions based upon the earlier Vision 864 and 868 accelerator chipsets. Like the 868, the 64V+ has a video acceleration engine that can perform YUV to RGB color space conversion and horizontal linear filtered scaling. Unlike the Vision 964/968, the Trio chips do not support VRAM, and are limited to FPM DRAM and EDO DRAM only. The 2D graphics hardware was later used in the ViRGE. The Trio32 is a low-cost version of the Trio64 with a narrower 32-bit DRAM interface (vs. 64-bit). The Trio64V2 improved on the 64V+ by including vertical bilinear filtering. The 2D graphics core was later used in the ViRGE/DX and ViRGE/GX. Like the corresponding ViRGE chips, the 64V2 also came in /DX and /GX variants, with the latter supporting more modern SDRAM or SGRAM. The final version, called the Trio3D, was effectively the 128-bit successor to the ViRGE/GX2. The various Trio chips were used on many motherboards. Because of the popularity of the series and the resulting compatibility advantages, they are used in various PC emulation and virtualization packages such as DOSBox, Microsoft Virtual PC, PCem and 86Box. Specifications Motherboard interface: ISA, VLB, PCI, AGP (Trio3D only) Video Connector: 15-pin VGA connector External links Virtual PC 4.0 Test Results, S3 Trio emulated graphics chipset S3 Graphics - Download Drivers - Legacy Software Archive Vogons Vintage Driver Library Graphics cards
https://en.wikipedia.org/wiki/Neville%27s%20algorithm
In mathematics, Neville's algorithm is an algorithm used for polynomial interpolation that was derived by the mathematician Eric Harold Neville in 1934. Given n + 1 points, there is a unique polynomial of degree ≤ n which goes through the given points. Neville's algorithm evaluates this polynomial. Neville's algorithm is based on the Newton form of the interpolating polynomial and the recursion relation for the divided differences. It is similar to Aitken's algorithm (named after Alexander Aitken), which is nowadays rarely used. The algorithm Given a set of n + 1 data points (xi, yi) where no two xi are the same, the interpolating polynomial is the polynomial p of degree at most n with the property p(xi) = yi for all i = 0,…,n. This polynomial exists and it is unique. Neville's algorithm evaluates the polynomial at some point x. Let pi,j denote the polynomial of degree j − i which goes through the points (xk, yk) for k = i, i + 1, …, j. The pi,j satisfy the recurrence relation pi,i(x) = yi for 0 ≤ i ≤ n, and pi,j(x) = ((xj − x)·pi,j−1(x) + (x − xi)·pi+1,j(x))/(xj − xi) for 0 ≤ i < j ≤ n. This recurrence can calculate p0,n(x), which is the value being sought. This is Neville's algorithm. For instance, for n = 4, one can use the recurrence to fill a triangular tableau from left to right: the first column holds p0,0, …, p4,4 (the data values y0, …, y4), the next column p0,1, …, p3,4, and so on, until the final column contains the single entry p0,4(x). This process yields p0,4(x), the value of the polynomial going through the n + 1 data points (xi, yi) at the point x. This algorithm needs O(n²) floating point operations to interpolate a single point, and O(n³) floating point operations to interpolate a polynomial of degree n. The derivative of the polynomial can be obtained in the same manner, i.e. p′i,i(x) = 0 for 0 ≤ i ≤ n, and p′i,j(x) = ((xj − x)·p′i,j−1(x) − pi,j−1(x) + (x − xi)·p′i+1,j(x) + pi+1,j(x))/(xj − xi) for 0 ≤ i < j ≤ n. Application to numerical differentiation Lyness and Moler showed in 1966 that using undetermined coefficients for the polynomials in Neville's algorithm, one can compute the Maclaurin expansion of the final interpolating polynomial, which y
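Neville's recurrence translates directly into a short program. The sketch below (with illustrative variable names) evaluates the interpolant at a query point in O(n²) operations by overwriting a single array in place, column by column of the triangular tableau:

```python
# Neville's algorithm: evaluate the unique interpolating polynomial through
# the points (x_i, y_i) at a query point x.

def neville(xs, ys, x):
    """Return p_{0,n}(x), where n + 1 = len(xs) and no two xs coincide."""
    p = list(ys)           # p[i] starts out as p_{i,i}(x) = y_i
    n = len(xs) - 1
    for d in range(1, n + 1):          # d = j - i, the current degree
        for i in range(n - d + 1):
            j = i + d
            # Overwrite p[i] with p_{i,j}(x) via the recurrence; p[i + 1]
            # still holds p_{i+1,j}(x) from the previous pass.
            p[i] = ((xs[j] - x) * p[i] + (x - xs[i]) * p[i + 1]) / (xs[j] - xs[i])
    return p[0]

# Four points sampled from y = x^3 - 2x determine a unique cubic, so the
# interpolant must reproduce the cubic exactly at any query point.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [t**3 - 2 * t for t in xs]
value = neville(xs, ys, 1.5)   # exact value: 1.5^3 - 2*1.5 = 0.375
```

The in-place update works because the inner loop consumes `p[i + 1]` before the next iteration overwrites it, exactly mirroring one column of the tableau.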
https://en.wikipedia.org/wiki/Teaching-family%20model
The Teaching-Family Model (TFM) is a model of care for persons in need of services and care necessary to support an improved quality of life and increase opportunities to live to their potential. The TFM is used internationally in residential homes, foster care, schools, home-based treatment, emergency shelters, assessment centers, and other youth and dependent adult care programs. It was developed in the 1960s through research at the University of Kansas. Researchers included Montrose Wolf (the inventor of time-out as a learning tool to shape behavior) and Gary Timbers. The model has been replicated over 800 times, although not all of the replications have proven effective and successful. Overview This model of care is based on an "organized approach to providing humane, effective, trauma-informed, and individualized services that are satisfactory to clients and consumers. It is cost effective and replicable." (from Teaching-Family Association Website) The focus is using scientifically proven methods of behaviorism known as applied behavior analysis to teach and reinforce pro-social, lifestyle skills and allow the individual to maintain or advance in his or her environment. The Teaching-Family Association (TFA) accredits programs and certifies staff implementing the program with fidelity meaning that they undergo rigorous development and training, on-site reviews by qualified peer reviewers, submit consumer satisfaction surveys, and successfully demonstrate implementation of the TFM's 15 Standards with fidelity and as intended. The TFM is implemented internationally and TFA Accreditation is a registered trademark. Many programs across the U.S. use this model of care, including Accredited sites such as Garfield Park Academy, Thornwell Home for Children, Kenosha Human Development Services, Inc., The Children's Home of Cincinnati, Virginia Home for Boys and Girls, The Barium Springs Home For Children, Closer To Home Calgary, Alberta, Canada, The Indiana United Metho
https://en.wikipedia.org/wiki/Muromonab-CD3
Muromonab-CD3 (trade name Orthoclone OKT3, marketed by Janssen-Cilag) is an immunosuppressant drug given to reduce acute rejection in patients with organ transplants. It is a monoclonal antibody targeted at the CD3 receptor, a membrane protein on the surface of T cells. It was the first monoclonal antibody to be approved for clinical use in humans. History Muromonab-CD3 was approved by the U.S. Food and Drug Administration (FDA) in 1986, making it the first monoclonal antibody to be approved anywhere as a drug for humans. In the European Communities, it was the first drug to be approved under the directive 87/22/EWG, a precursor of the European Medicines Agency (EMEA) centralised approval system in the European Union. This process included an assessment by the Committee for Proprietary Medicinal Products (CPMP, now CHMP), and a subsequent approval by the national health agencies; in Germany, for example, in 1988 by the Paul Ehrlich Institute in Frankfurt. However, the manufacturer of muromonab-CD3 voluntarily withdrew it from the United States market in 2010, due to numerous side-effects, better-tolerated alternatives and declining usage. Indications Muromonab-CD3 is approved for the therapy of acute, glucocorticoid-resistant rejection of allogeneic renal, heart and liver transplants. Unlike the monoclonal antibodies basiliximab and daclizumab, it is not approved for prophylaxis of transplant rejection, although a 1996 review has found it to be safe for that purpose. It has also been investigated for use in treating T-cell acute lymphoblastic leukemia. Pharmacodynamics and chemistry T cells recognise antigens primarily via the T cell receptor (TCR). CD3 is one of the proteins that make up the TCR complex. The TCR transduces the signal for the T cell to proliferate and attack the antigen. Muromonab-CD3 is a murine (mouse) monoclonal IgG2a antibody which was created using hybridoma technology. It binds to the T cell receptor-CD3-complex (specifically the CD3
https://en.wikipedia.org/wiki/List%20of%20microprocessors
This is a list of microprocessors. Altera Nios 16-bit (soft processor) Nios II 32-bit (soft processor) AMD List of AMD K5 processors List of AMD Athlon processors List of AMD Athlon 64 processors List of AMD Athlon XP processors List of AMD Duron processors List of AMD Opteron processors List of AMD Sempron processors List of AMD Turion processors List of AMD Athlon X2 processors List of AMD Phenom processors List of AMD FX processors List of AMD Ryzen processors Apollo PRISM ARM ARM Atmel AVR32 AVR AT&T Hobbit Bell Labs Bellmac 32 BLX IC Design Corporation Godson/Loongson Broadcom XLS 200 series multicore processor Centaur Technology/IDT WinChip Computer Cowboys Sh-Boom Cyrix 486, 5x86, 6x86 Data General microNOVA mN601 and mN602 microECLIPSE Centre for Development of Advanced Computing VEGA Microprocessors Digital Equipment Corporation DEC T-11 DEC J-11 DEC V-11 MicroVAX 78032 CVAX Rigel Mariah NVAX Alpha 21064 Alpha 21164 Alpha 21264 Alpha 21364 StrongARM DM&P Electronics Vortex86 Emotion Engine by Sony & Toshiba Emotion Engine Elbrus Elbrus 2K (VLIW design) Electronic Arrays Electronic Arrays 9002 EnSilica eSI-RISC Fairchild Semiconductor 9440 F8 Clipper Freescale Semiconductor (formerly Motorola) List of Freescale products Fujitsu FR FR-V SPARC64 V Garrett AiResearch/American Microsystems MP944 Google Tensor processing unit Harris Semiconductor Harris RTX2000 Hewlett-Packard Capricorn (microprocessor) FOCUS 32-bit stack architecture PA-7000 PA-RISC Version 1.0 (32-bit) PA-7100 PA-RISC Version 1.1 PA-7100LC PA-7150 PA-7200 PA-7300LC PA-8000 PA-RISC Version 2.0 (64-bit) PA-8200 PA-8500 PA-8600 PA-8700 PA-8800 PA-8900 Saturn Nibble CPU (4-bit) Hitachi SuperH SH-1/SH-2 etc. Inmos Transputer T2/T4/T8 IBM 1977 – OPD Mini Processor 1986 – IBM ROMP 2000 – Gekko processor 2005 – Xenon processor 2006 – Cell processor 2006 – Broadway processor 2012 – Espresso
https://en.wikipedia.org/wiki/Quagga%20Project
The Quagga Project is an attempt by a group in South Africa to use selective breeding to achieve a breeding lineage of Burchell's zebra (Equus quagga burchellii) which visually resemble the extinct quagga (Equus quagga quagga). History In 1955, Lutz Heck suggested in his book that careful selective breeding with the plains zebra could produce an animal resembling the extinct quagga: a zebra with reduced striping and a brownish basic colour. In 1971, Reinhold Rau visited various museums in Europe to examine the quagga specimens in their collections and decided to attempt to re-breed the quagga. Rau later contacted several zoologists and park authorities, but they were on the whole negative because the quagga has left no living descendants, and thus the genetic composition of this animal is not present in living zebras. Rau did not abandon his re-breeding proposal, as he considered the quagga to be a subspecies of the plains zebra. In 1980, molecular studies of mitochondrial DNA of a quagga indicated that it was indeed a subspecies of the plains zebra. After the DNA examination results appeared in publications from 1984 onward, gradually a more positive attitude was taken towards the quagga re-breeding proposal. In March 1986, the project committee was formed after influential persons became involved. During March 1987, nine zebras were selected and captured at the Etosha National Park in Namibia. On 24 April 1987, these zebras were brought to the specially constructed breeding camp complex at the Nature Conservation farm "Vrolijkheid" near Robertson, South Africa. This marked the start of the quagga re-breeding project. Additional zebras were selected for the lightness of their stripes and incorporated into the project to increase the rate at which the zebras lost their stripes. Some of the zebras of the project that failed to develop the more quagga-like physical traits were released into the Addo Elephant National Park. After the number of zebras increased,
https://en.wikipedia.org/wiki/Clobber
Clobber is an abstract strategy game invented in 2001 by combinatorial game theorists Michael H. Albert, J.P. Grossman and Richard Nowakowski. It has subsequently been studied by Elwyn Berlekamp and Erik Demaine among others. Since 2005, it has been one of the events in the Computer Olympiad. Details Clobber is best played with two players and takes an average of 15 minutes to play. It is suggested for ages 8 and up. It is typically played on a rectangular white and black checkerboard. Players take turns to move one of their own pieces onto an orthogonally adjacent opposing piece, removing it from the game. The winner of the game is the player who makes the last move (i.e. whose opponent cannot move). To start the game, each of the squares on the checkerboard is occupied by a stone. White stones are placed on the white squares and black stones on the black squares. To move, the player must pick up one of his or her own stones and "clobber" an opponent's stone on an adjacent square, either horizontally or vertically. Once the opponent's stone is clobbered, it must then be removed from the board and replaced by the stone that was moved. The player who, on their turn, is unable to move, loses the game. Variants In computational play (e.g., Computer Olympiad), clobber is generally played on a 10x10 board. There are also variations in the initial layout of the pieces. Another variant is Cannibal Clobber, where a stone may not only capture stones of the opponent but also other stones of its owner. An advantage of Cannibal Clobber over Clobber is that a player may not only win, but win by a non-trivial margin. Cannibal Clobber was proposed in the summer of 2003 by Ingo Althoefer. Another variant, also proposed by Ingo Althoefer in 2015, is San Jego: Here the pieces are not clobbered, but stacked to towers. Each tower belongs to the player with the piece on its top. At the end the player with the highest tower is declared winner. Draws are possible.
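The move rule described above is easy to sketch in code; the board representation and helper names below are illustrative choices, not part of the game's definition:

```python
# A small sketch of Clobber's move rule: a stone moves onto an orthogonally
# adjacent enemy stone, which is removed (replaced by the moving stone).
# The board is a dict mapping (row, col) -> 'W' or 'B'; empty squares absent.

def legal_moves(board, player):
    """All (src, dst) moves for `player`: dst holds an adjacent enemy stone."""
    enemy = 'B' if player == 'W' else 'W'
    moves = []
    for (r, c), stone in board.items():
        if stone != player:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            dst = (r + dr, c + dc)
            if board.get(dst) == enemy:
                moves.append(((r, c), dst))
    return moves

def apply_move(board, move):
    """Clobber the enemy stone: the mover replaces it; the source empties."""
    src, dst = move
    new_board = dict(board)
    new_board[dst] = new_board.pop(src)
    return new_board

# Standard start on a tiny 1x4 strip with alternating colours: W B W B.
board = {(0, 0): 'W', (0, 1): 'B', (0, 2): 'W', (0, 3): 'B'}
# A player with no entry in legal_moves on their turn has lost.
```

Checking `legal_moves` for emptiness is exactly the losing condition in the article: the player who cannot move loses.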
https://en.wikipedia.org/wiki/Rabi%20problem
The Rabi problem concerns the response of an atom to an applied harmonic electric field, with an applied frequency very close to the atom's natural frequency. It provides a simple and generally solvable example of light–atom interactions and is named after Isidor Isaac Rabi. Classical Rabi problem In the classical approach, the Rabi problem can be represented by the solution to the driven damped harmonic oscillator with the electric part of the Lorentz force as the driving term, where it has been assumed that the atom can be treated as a charged particle (of charge e) oscillating about its equilibrium position around a neutral atom. Here xa is its instantaneous magnitude of oscillation, ωa its natural oscillation frequency, and τ0 its natural lifetime, which has been calculated based on the dipole oscillator's energy loss from electromagnetic radiation. To apply this to the Rabi problem, one assumes that the electric field E is oscillatory in time and constant in space, E = E0 cos ωt, and that xa is decomposed into a part ua that is in phase with the driving E field (corresponding to dispersion) and a part va that is out of phase (corresponding to absorption), xa = x0(ua cos ωt − va sin ωt). Here x0 is assumed to be constant, but ua and va are allowed to vary in time. However, if the system is very close to resonance (ω ≈ ωa), then these values will be slowly varying in time, and we can make the assumption that u̇a ≪ ωua and v̇a ≪ ωva. With these assumptions, the Lorentz force equations for the in-phase and out-of-phase parts can be rewritten, where we have replaced the natural lifetime τ0 with a more general effective lifetime T (which could include other interactions such as collisions) and have dropped the subscript a in favor of the newly defined detuning δ = ω − ωa, which serves equally well to distinguish atoms of different resonant frequencies. Finally, a constant κ proportional to the driving field strength E0 has been defined. These equations can be solved as follows: After all transients have died away, the steady-state solution takes the simple f
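As a numerical check on the classical picture, one can integrate a driven damped oscillator directly and compare the late-time amplitude with the standard steady-state result; the parameter values below are arbitrary illustrative choices, not from the article:

```python
# Integrate x'' + gamma*x' + w0^2*x = F*cos(w*t) with RK4 and check the
# steady-state amplitude against the textbook formula
# F / sqrt((w0^2 - w^2)^2 + (gamma*w)^2).

import math

def steady_state_amplitude(w0=10.0, gamma=0.5, F=1.0, w=9.8, dt=1e-3, t_end=60.0):
    x, v, t = 0.0, 0.0, 0.0
    def acc(x, v, t):
        return F * math.cos(w * t) - gamma * v - w0 * w0 * x
    amp = 0.0
    while t < t_end:
        # one classic RK4 step for the first-order system (x, v)
        k1x, k1v = v, acc(x, v, t)
        k2x, k2v = v + 0.5*dt*k1v, acc(x + 0.5*dt*k1x, v + 0.5*dt*k1v, t + 0.5*dt)
        k3x, k3v = v + 0.5*dt*k2v, acc(x + 0.5*dt*k2x, v + 0.5*dt*k2v, t + 0.5*dt)
        k4x, k4v = v + dt*k3v, acc(x + dt*k3x, v + dt*k3v, t + dt)
        x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6
        v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6
        t += dt
        if t > t_end - 10.0:   # record peaks only after transients have decayed
            amp = max(amp, abs(x))
    return amp

measured = steady_state_amplitude()
predicted = 1.0 / math.sqrt((10.0**2 - 9.8**2) ** 2 + (0.5 * 9.8) ** 2)
```

With gamma = 0.5 the transients decay on a 4 s timescale, so sampling only the final 10 s of a 60 s run isolates the steady-state response near resonance.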
https://en.wikipedia.org/wiki/Joule%20effect
Joule effect and Joule's law are any of several different physical effects discovered or characterized by English physicist James Prescott Joule. These physical effects are not the same, but all are frequently or occasionally referred to in the literature as the "Joule effect" or "Joule law". These physical effects include: "Joule's first law" (Joule heating), a physical law expressing the relationship between the heat generated and the current flowing through a conductor. Joule's second law states that the internal energy of an ideal gas is independent of its volume and pressure, depending only on its temperature. Magnetostriction, a property of ferromagnetic materials that causes them to change their shape when subjected to a magnetic field. The Joule effect (during Joule expansion), the temperature change of a gas (usually cooling) when it is allowed to expand freely. The Joule–Thomson effect, the temperature change of a gas when it is forced through a valve or porous plug while keeping it insulated so that no heat is exchanged with the environment. The Gough–Joule effect or the Gow–Joule effect, which is the tendency of elastomers to contract if heated while they are under tension. Joule's first law Between 1840 and 1843, Joule carefully studied the heat produced by an electric current. From this study, he developed Joule's laws of heating, the first of which is commonly referred to as the Joule effect. Joule's first law expresses the relationship between heat generated in a conductor and current flow, resistance, and time. Magnetostriction The magnetostriction effect describes a property of ferromagnetic materials which causes them to change their shape when subjected to a magnetic field. Joule first reported observing the change in the length of ferromagnetic rods in 1842. Joule expansion In 1845, Joule studied the free expansion of a gas into a larger volume. This became known as Joule expansion. The cooling of a gas by allowing it to expand freel
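Joule's first law is commonly written Q = I²Rt; a trivial sketch with illustrative values:

```python
# Joule's first law in formula form, Q = I^2 * R * t: the heat generated in a
# conductor grows with the square of the current, with the resistance, and
# with the time the current flows.

def joule_heat(current_amps, resistance_ohms, time_s):
    """Heat dissipated, in joules, by a resistor carrying a steady current."""
    return current_amps ** 2 * resistance_ohms * time_s

# 2 A through a 10-ohm resistor for 5 s dissipates 2^2 * 10 * 5 = 200 J.
q = joule_heat(2.0, 10.0, 5.0)
```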
https://en.wikipedia.org/wiki/Spatial%20visualization%20ability
Spatial visualization ability or visual-spatial ability is the ability to mentally manipulate 2-dimensional and 3-dimensional figures. It is typically measured with simple cognitive tests and is predictive of user performance with some kinds of user interfaces. Measurement The cognitive tests used to measure spatial visualization ability include mental rotation tasks such as the Mental Rotations Test, mental cutting tasks such as the Mental Cutting Test, and cognitive tests such as the VZ-1 (Form Board), VZ-2 (Paper Folding), and VZ-3 (Surface Development) tests from the Kit of Factor-Reference cognitive tests produced by Educational Testing Service. Though the descriptions of spatial visualization and mental rotation sound similar, mental rotation is a particular task that can be accomplished using spatial visualization. The Minnesota Paper Form Board Test gives participants a large shape and a set of smaller shapes, and instructs them to determine which combination of the small shapes will fill the larger shape completely without overlapping. The Paper Folding test involves showing participants a sequence of folds in a piece of paper, through which a set of holes is then punched. The participants must choose which of a set of unfolded papers with holes corresponds to the one they have just seen. The Surface Development test involves giving participants a flat shape with numbered sides and a three-dimensional shape with lettered sides and asking the participants to indicate which numbered side corresponds to which lettered side. History The construct of spatial visualization ability was first identified as separate from general intelligence in the 20th Century, and its implications for computer system design were identified in the 1980s. In 1987, Kim Vicente and colleagues ran a battery of cognitive tests on a set of participants and then determined which cognitive abilities correlated with performance on a computerized information search task. The
https://en.wikipedia.org/wiki/Expansive%20clay
Expansive clay is a clay soil that is prone to large volume changes (swelling and shrinking) that are directly related to changes in water content. Soils with a high content of expansive minerals can form deep cracks in drier seasons or years; such soils are called vertisols. Soils with smectite clay minerals, including montmorillonite and bentonite, have the most dramatic shrink-swell capacity. The mineral make-up of this type of soil is responsible for the moisture retaining capabilities. All clays consist of mineral sheets packaged into layers, and can be classified as either 1:1 or 2:1. These ratios refer to the proportion of tetrahedral sheets to octahedral sheets. Octahedral sheets are sandwiched between two tetrahedral sheets in 2:1 clays, while 1:1 clays have sheets in matched pairs. Expansive clays have an expanding crystal lattice in a 2:1 ratio; however, there are 2:1 non-expansive clays. Mitigation of the effects of expansive clay on structures built in areas with expansive clays is a major challenge in geotechnical engineering. Some areas mitigate foundation cracking by watering around the foundation with a soaker hose during dry conditions. This process can be automated by a timer, or using a soil moisture sensor controller. Even though irrigation is expensive, the cost is small compared to repairing a cracked foundation. Admixtures can be added to expansive clays to reduce the shrink-swell properties, as well. One laboratory test to measure the expansion potential of soil is ASTM D 4829. See also Argillipedoturbation Dispersion (soil)
https://en.wikipedia.org/wiki/Sarcoglycan
The sarcoglycans are a family of transmembrane proteins (α, β, γ, δ or ε) involved in the protein complex responsible for connecting the muscle fibre cytoskeleton to the extracellular matrix, preventing damage to the muscle fibre sarcolemma through shearing forces. The dystrophin glycoprotein complex (DGC) is a membrane-spanning complex that links the interior cytoskeleton to the extracellular matrix in muscle. The sarcoglycan complex is a subcomplex within the DGC and is composed of six muscle-specific, transmembrane proteins (alpha-, beta-, gamma-, delta-, epsilon-, and zeta-sarcoglycan). The sarcoglycans are asparagine-linked glycosylated proteins with single transmembrane domains. The disorders caused by the mutations of the sarcoglycans are called sarcoglycanopathies. Mutations in the α, β, γ or δ genes (not ε) encoding these proteins can lead to the associated limb-girdle muscular dystrophy. Genes SGCA SGCB SGCD SGCE SGCG SGCZ
https://en.wikipedia.org/wiki/2%2C3-Bisphosphoglyceric%20acid
2,3-Bisphosphoglyceric acid (conjugate base 2,3-bisphosphoglycerate) (2,3-BPG), also known as 2,3-diphosphoglyceric acid (conjugate base 2,3-diphosphoglycerate) (2,3-DPG), is a three-carbon isomer of the glycolytic intermediate 1,3-bisphosphoglyceric acid (1,3-BPG). 2,3-BPG is present in human red blood cells (RBC; erythrocyte) at approximately 5 mmol/L. It binds with greater affinity to deoxygenated hemoglobin (e.g., when the red blood cell is near respiring tissue) than it does to oxygenated hemoglobin (e.g., in the lungs) due to conformational differences: 2,3-BPG (with an estimated size of about 9 Å) fits in the deoxygenated hemoglobin conformation (with an 11 Å pocket), but not as well in the oxygenated conformation (with a 5 Å pocket). It interacts with deoxygenated hemoglobin beta subunits, decreases the affinity for oxygen, and allosterically promotes the release of the remaining oxygen molecules bound to the hemoglobin. Therefore, it enhances the ability of RBCs to release oxygen near tissues that need it most. 2,3-BPG is thus an allosteric effector. Its function was discovered in 1967 by Reinhold Benesch and Ruth Benesch. Metabolism 2,3-BPG is formed from 1,3-BPG by the enzyme BPG mutase. It can then be broken down by 2,3-BPG phosphatase to form 3-phosphoglycerate. Its synthesis and breakdown are, therefore, a way around a step of glycolysis, with the net expense of one ATP per molecule of 2,3-BPG generated, as the high-energy carboxylic acid-phosphate mixed anhydride bond is cleaved by bisphosphoglycerate mutase. The normal glycolytic pathway generates 1,3-BPG, which may be dephosphorylated by phosphoglycerate kinase (PGK), generating ATP, or it may be shunted into the Luebering-Rapoport pathway, where bisphosphoglycerate mutase catalyzes the transfer of a phosphoryl group from C1 to C2 of 1,3-BPG, giving 2,3-BPG. 2,3-BPG, the most concentrated organophosphate in the erythrocyte, forms 3-PG by the action of bisphosphoglycerate phosphatase. The con
https://en.wikipedia.org/wiki/Bacillus%20mycoides
Bacillus mycoides is a bacterium of the genus Bacillus. Like other Bacillus species, B. mycoides is Gram positive, rod-shaped, and forms spores. B. mycoides is distinguished from other Bacillus species by its unusual growth on agar plates, where it forms expansive hairy colonies with characteristic swirls. Description B. mycoides are rod-shaped cells about 1 micron across and 3 to 5 microns long. When growing, they either grow as single cells or form loosely connected chains of cells. They are not motile. B. mycoides can survive with or without oxygen and grows at temperatures ranging from 10 to 15 °C to 35–40 °C. B. mycoides is distinguished from a number of other Bacillus species by the unusual morphology of the colonies it forms when grown on agar plates. B. mycoides forms white opaque colonies that are characteristically hairy in appearance (often referred to as "rhizoid"). These colonies rapidly spread to fill the plate and are characterized by a repeating spiral pattern. B. mycoides has the unusual property of being able to respond to mechanical force and surface structure variations in the media on which it is growing. Ecology and distribution B. mycoides is present in a wide variety of environments, especially soil. Role in disease B. mycoides is capable of causing disease in some fish, and was the reported cause of an outbreak of necrotic lesions in channel catfish in a commercial pond in Alabama. B. weihenstephanensis In 1998 a new Bacillus species was described and named Bacillus weihenstephanensis. However, twenty years later, a comparison of the complete genome sequences of B. weihenstephanensis and B. mycoides demonstrated that B. weihenstephanensis was a later synonym for B. mycoides, and thus not a valid species, nor a valid species name.
https://en.wikipedia.org/wiki/Reproductive%20isolation
The mechanisms of reproductive isolation are a collection of evolutionary mechanisms, behaviors and physiological processes critical for speciation. They prevent members of different species from producing offspring, or ensure that any offspring are sterile. These barriers maintain the integrity of a species by reducing gene flow between related species. The mechanisms of reproductive isolation have been classified in a number of ways. Zoologist Ernst Mayr classified the mechanisms of reproductive isolation in two broad categories: pre-zygotic for those that act before fertilization (or before mating in the case of animals) and post-zygotic for those that act after it. The mechanisms are genetically controlled and can appear in species whose geographic distributions overlap (sympatric speciation) or are separate (allopatric speciation). Pre-zygotic isolation Pre-zygotic isolation mechanisms are the most economic in terms of the natural selection of a population, as resources are not wasted on the production of a descendant that is weak, non-viable or sterile. These mechanisms include physiological or systemic barriers to fertilization. Temporal or habitat isolation Any of the factors that prevent potentially fertile individuals from meeting will reproductively isolate the members of distinct species. The types of barriers that can cause this isolation include: different habitats, physical barriers, and a difference in the time of sexual maturity or flowering. An example of the ecological or habitat differences that impede the meeting of potential pairs occurs in two fish species of the family Gasterosteidae (sticklebacks). One species lives all year round in fresh water, mainly in small streams. The other species lives in the sea during winter, but in spring and summer individuals migrate to river estuaries to reproduce. The members of the two populations are reproductively isolated due to their adaptations to distinct salt concentrations. An example of rep
https://en.wikipedia.org/wiki/Forma%20specialis
Forma specialis (plural: formae speciales), abbreviated f. sp. (plural ff. spp.) without italics, is an informal taxonomic grouping allowed by the International Code of Nomenclature for algae, fungi, and plants, that is applied to a parasite (most frequently a fungus) which is adapted to a specific host. This classification may be applied by authors who do not feel that a subspecies or variety name is appropriate, and it is therefore not necessary to specify morphological differences that distinguish this form. The literal meaning of the term is 'special form', but this grouping does not correspond to the more formal botanical use of the taxonomic rank of forma or form. An example is Puccinia graminis f. sp. avenae, which affects oats. An alternative term in contexts not related to biological nomenclature is physiological race (sometimes also given as biological race, and in that context treated as synonymous with biological form), except in that the name of a race is added after the binomial scientific name (and may be arbitrary, e.g. an alphanumeric code, usually with the word "race"), e.g. "Podosphaera xanthii race S". A forma specialis is used as part of the infraspecific scientific name (and follows Latin-based scientific naming conventions), inserted after the interpolation "f. sp.", as in the "Puccinia graminis f. sp. avenae" example. History, and use with "pathotype" The forma specialis category was introduced and recommended in the International Code of Botanical Nomenclature of 1930, but was not widely adopted. Fungal pathogens within the Alternaria alternata species have also been called pathotypes (not to be confused with pathotype as used in bacteriology) by author Syoyo Nishimura, who stated: "[E]ach pathogen should be called a distinct pathotype of A. alternata". Some authors have subsequently used forma specialis and "pathotype" together for the species A. alternata: "Currently there are seven pathotypes of A. alternata described ..., but this term is
https://en.wikipedia.org/wiki/Birch%27s%20theorem
In mathematics, Birch's theorem, named for Bryan John Birch, is a statement about the representability of zero by odd degree forms. Statement of Birch's theorem Let K be an algebraic number field, k, l and n be natural numbers, r1, ..., rk be odd natural numbers, and f1, ..., fk be homogeneous polynomials with coefficients in K of degrees r1, ..., rk respectively in n variables. Then there exists a number ψ(r1, ..., rk, l, K) such that if n ≥ ψ(r1, ..., rk, l, K), then there exists an l-dimensional vector subspace V of Kn such that f1(x) = ... = fk(x) = 0 for all x in V. Remarks The proof of the theorem is by induction over the maximal degree of the forms f1, ..., fk. Essential to the proof is a special case, which can be proved by an application of the Hardy–Littlewood circle method, of the theorem which states that if n is sufficiently large and r is odd, then the equation c1x1^r + c2x2^r + ... + cnxn^r = 0 (with integer coefficients c1, ..., cn, not all zero) has a solution in integers x1, ..., xn, not all of which are 0. The restriction to odd r is necessary, since even degree forms, such as positive definite quadratic forms, may take the value 0 only at the origin.
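A standard display-form rendering of the theorem and of the circle-method special case used in its proof (reconstructed here, since the displayed formulas did not survive extraction, and worth checking against the cited statement):

```latex
% Birch's theorem: for n large enough, the odd-degree forms
% f_1, \ldots, f_k vanish simultaneously on an l-dimensional subspace.
n \ge \psi(r_1, \ldots, r_k, l, K)
\;\Longrightarrow\;
\exists\, V \subseteq K^n \ \text{with} \ \dim_K V = l
\ \text{and} \ f_1(x) = \cdots = f_k(x) = 0 \ \text{for all} \ x \in V.

% Special case proved by the Hardy-Littlewood circle method
% (r odd, n sufficiently large, integer coefficients c_i not all zero):
c_1 x_1^{r} + c_2 x_2^{r} + \cdots + c_n x_n^{r} = 0
\quad \text{has an integer solution} \ (x_1, \ldots, x_n) \ne (0, \ldots, 0).
```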
https://en.wikipedia.org/wiki/International%20Encyclopedia%20of%20Unified%20Science
The International Encyclopedia of Unified Science (IEUS) was a series of publications devoted to unified science. The IEUS was conceived at the Mundaneum Institute in The Hague in the 1930s, and published in the United States beginning in 1938. It was an ambitious project that was never completed. The IEUS was an output of the Vienna Circle to address the "growing concern throughout the world for the logic, the history, and the sociology of science..." Only the first section Foundations of the Unity of Science (FUS) was published; it contains two volumes for a total of nineteen monographs published from 1938 to 1969. International Congresses for the Unity of Science Creation of the IEUS was facilitated by the International Congresses for the Unity of Science organized by members of the Vienna Circle. After a preliminary conference in Prague in 1934, the First International Congress for the Unity of Science was held at the Sorbonne, Paris, 16–21 September 1935. It was attended by about 170 people from over twenty different countries. With the active involvement of Kazimierz Ajdukiewicz (Poland), Susan Stebbing (England), and Federigo Enriques (Italy) the scope of the project for an IEUS was considerably expanded. The congress expressed its approval of the planned IEUS as proposed by the Mundaneum, and further set up a committee to plan future congresses. This committee included the following members: Marcel Boll Percy Williams Bridgman Henri Bonnet Niels Bohr Rudolf Carnap Élie Cartan Jacob Clay Morris Raphael Cohen Federigo Enriques Phillip Frank Maurice René Fréchet Ferdinand Gonseth Jacques Hadamard Maurice Janet Herbert Spencer Jennings Jørgen Jørgensen Hans Kelsen Tadeusz Kotarbiński André Lalande Paul Langevin Karl Lashley Clarence Irving Lewis Jan Łukasiewicz Richard von Mises Charles W. Morris Otto Neurath Charles Nicolle Charles Kay Ogden Jean Baptiste Perrin Hans Reichenbach Abel Rey Charles Rist Louis Rougier Bertrand
https://en.wikipedia.org/wiki/Digital%20subchannel
In broadcasting, digital subchannels are a method of transmitting more than one independent program stream simultaneously from the same digital radio or television station on the same radio frequency channel. This is done by using data compression techniques to reduce the size of each individual program stream, and multiplexing to combine them into a single signal. The practice is sometimes called "multicasting". ATSC television United States The ATSC digital television standard used in the United States supports multiple program streams over-the-air, allowing television stations to transmit one or more subchannels over a single digital signal. A virtual channel numbering scheme distinguishes broadcast subchannels by appending the television channel number with a period digit (".xx"). The suffix also indicates that a television station offers additional programming streams. By convention, the suffix position ".1" is normally used to refer to the station's main digital channel and the ".0" position is reserved for analog channels. For example, most of the owned-and-operated stations/affiliates of Trinity Broadcasting Network transmit five streams. Ion Television stations transmit eight channels (in standard definition), the most of any large broadcaster in the United States, including the Katz Broadcasting subchannel services Court TV, Ion Mystery, Bounce TV, Laff, Grit, Defy TV, and Scripps News. More programming streams can be fit into a single channel space at the cost of broadcast quality. Among smaller stations, KAXT-CD in San Francisco is believed to have the most feeds of any individual over-the-air broadcaster, offering twelve video and several audio feeds (all transmitted in standard definition). WANN-CD in Atlanta, Georgia, with ten video and ten audio feeds, comes in a close second. Several cable-to-air broadcasters, such as those in Willmar, Minnesota and Cortez, Colorado, have multiplexed more than five separate cable telev
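The major.minor virtual-channel convention described above can be illustrated with a tiny parser; this is an illustrative sketch only, and the function name is invented here rather than taken from any ATSC tooling.

```python
def parse_virtual_channel(label):
    """Split an ATSC virtual-channel label such as '7.2' into its
    major (station) number and minor (subchannel) number."""
    major, minor = label.split(".")
    return int(major), int(minor)

# By convention, minor ".1" is the station's main digital channel;
# higher minors are additional subchannels.
print(parse_virtual_channel("7.1"))  # (7, 1)
print(parse_virtual_channel("7.2"))  # (7, 2)
```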
https://en.wikipedia.org/wiki/Hedgehog%20signaling%20pathway
The Hedgehog signaling pathway is a signaling pathway that transmits information to embryonic cells required for proper cell differentiation. Different parts of the embryo have different concentrations of hedgehog signaling proteins. The pathway also has roles in the adult. Diseases associated with the malfunction of this pathway include cancer. The Hedgehog signaling pathway is one of the key regulators of animal development and is present in all bilaterians. The pathway takes its name from its polypeptide ligand, an intercellular signaling molecule called Hedgehog (Hh) found in fruit flies of the genus Drosophila; fruit fly larvae lacking the Hh gene are said to resemble hedgehogs. Hh is one of Drosophila's segment polarity gene products, involved in establishing the basis of the fly body plan. Larvae without Hh are short and spiny, resembling the hedgehog animal. The molecule remains important during later stages of embryogenesis and metamorphosis. Mammals have three Hedgehog homologues, Desert (DHH), Indian (IHH), and Sonic (SHH), of which Sonic is the best studied. The pathway is equally important during vertebrate embryonic development and is therefore of interest in evolutionary developmental biology. In knockout mice lacking components of the pathway, the brain, skeleton, musculature, gastrointestinal tract and lungs fail to develop correctly. Recent studies point to the role of Hedgehog signaling in regulating adult stem cells involved in maintenance and regeneration of adult tissues. The pathway has also been implicated in the development of some cancers. Drugs that specifically target Hedgehog signaling to fight this disease are being actively developed by a number of pharmaceutical companies. Discovery In the 1970s, a fundamental problem in developmental biology was to understand how a relatively simple egg can give rise to a complex segmented body plan. In the late 1970s Christiane Nüsslein-Volhard and Eric Wieschaus isolated mutations in genes that
https://en.wikipedia.org/wiki/Maxillary%20prominence
Continuous with the dorsal end of the first pharyngeal arch, and growing forward from its cephalic border, is a triangular process, the maxillary prominence (or maxillary process), the ventral extremity of which is separated from the mandibular arch by a ">"-shaped notch. The maxillary prominence forms the lateral wall and floor of the orbit, and in it are ossified the zygomatic bone and the greater part of the maxilla; it meets with the medial nasal prominence, from which, however, it is separated for a time by a groove, the naso-optic furrow, that extends from the furrow encircling the eyeball to the nasal pit. The maxillary prominences ultimately fuse with the medial nasal prominence and the globular processes, and form the lateral parts of the upper lip and the posterior boundaries of the nares. It is innervated by the maxillary nerve.
https://en.wikipedia.org/wiki/Maxillary%20process%20of%20inferior%20nasal%20concha
From the lower border of the inferior nasal concha, a thin lamina, the maxillary process, curves downward and laterally; it articulates with the maxilla and forms a part of the medial wall of the maxillary sinus.
https://en.wikipedia.org/wiki/Windows%20Embedded%20CE%206.0
Windows Embedded CE 6.0 (codenamed "Yamazaki") is the sixth major release of the Microsoft Windows embedded operating system, targeted at enterprise-specific tools such as industrial controllers and consumer electronics devices like digital cameras. CE 6.0 features a kernel that supports 32,768 processes, up from the 32-process limit of prior versions. Each process receives 2 GB of virtual address space, up from 32 MB. Windows Embedded CE is commonly used in supermarket self-checkouts and in-car displays. Windows Embedded CE is a background system on most devices that have it. Windows Embedded CE 6.0 was released on November 1, 2006, and includes partial source code. The OS currently serves as the basis for the Zune HD portable media player. Windows Mobile 6.5 is based on Windows CE 5.2. Windows Phone 7, the first major release of the Windows Phone operating system, is based on Windows Embedded CE 6.0 R3, although Windows Phone 7 also uses Windows Embedded Compact 7 features. New features Some system components (such as the filesystem, GWES (graphics, windowing, events server), and the device driver manager) have been moved to kernel space. The system components which now run in kernel space have been converted from EXEs to DLLs, which get loaded into kernel space. New virtual memory model. The lower 2 GB is the process VM space and is private per process. The upper 2 GB is the kernel VM space. New device driver model that supports both user mode and kernel mode drivers. The 32 process limit has been raised to 32,768 processes. The 32 megabyte virtual memory limit has been raised to the total virtual memory; up to 2 GB of private VM is available per process. The Platform Builder IDE is integrated into Microsoft Visual Studio 2005 as a plug-in (thus also requiring the developer to obtain Microsoft Visual Studio 2005), allowing one development environment for both platform and application development. Read-only support for the UDF 2.5 filesystem. Support for Microsoft's exFAT file system
https://en.wikipedia.org/wiki/Superior%20lateral%20cutaneous%20nerve%20of%20arm
The superior lateral cutaneous nerve of arm (or superior lateral brachial cutaneous nerve) is the continuation of the posterior branch of the axillary nerve, after it pierces the deep fascia. It contains axons from C5-C6 ventral rami. Structure It sweeps around the posterior border of the deltoideus and supplies the skin over the lower two-thirds of the posterior part of this muscle, as well as that covering the long head of the triceps brachii. See also Posterior cutaneous nerve of arm (Posterior brachial) Medial cutaneous nerve of arm (Medial brachial) Lateral cutaneous nerve of forearm (Lateral antebrachial)
https://en.wikipedia.org/wiki/Posterior%20cutaneous%20nerve%20of%20forearm
The posterior cutaneous nerve of forearm is a nerve found in humans and other animals. It is also known as the dorsal antebrachial cutaneous nerve, the external cutaneous branch of the musculospiral nerve, and the posterior antebrachial cutaneous nerve. It is a cutaneous nerve (a nerve that supplies skin) of the forearm. Origin It arises from the radial nerve in the posterior compartment of the arm, often along with the posterior cutaneous nerve of the arm. Course It perforates the lateral head of the triceps brachii muscle at the triceps' attachment to the humerus. The upper and smaller branch of the nerve passes to the front of the elbow, lying close to the cephalic vein, and supplies the skin of the lower half of the arm. The lower branch pierces the deep fascia below the insertion of the Deltoideus, and descends along the lateral side of the arm and elbow, and then along the back of the forearm to the wrist, supplying the skin in its course, and joining, near its termination, with the dorsal branch of the lateral antebrachial cutaneous nerve. See also Medial cutaneous nerve of forearm Lateral cutaneous nerve of forearm Posterior cutaneous nerve of arm
https://en.wikipedia.org/wiki/Posterior%20cutaneous%20nerve%20of%20arm
The posterior cutaneous nerve of arm (internal cutaneous branch of musculospiral, posterior brachial cutaneous nerve) is a branch of the radial nerve that provides sensory innervation for much of the skin on the back of the arm. It arises in the axilla. It is of small size, and passes through the axilla to the medial side of the arm, supplying the skin on its dorsal surface nearly as far as the olecranon. In its course it crosses behind and communicates with the intercostobrachial. See also Superior lateral cutaneous nerve of arm Inferior lateral cutaneous nerve of arm Medial cutaneous nerve of arm Posterior cutaneous nerve of forearm
https://en.wikipedia.org/wiki/Nepeta%20%C3%97%20faassenii
Nepeta × faassenii, a flowering plant also known as catmint and Faassen's catnip, is a primary hybrid of garden origin. The parent species are Nepeta racemosa and Nepeta nepetella. It is an herbaceous perennial, with oval, opposite, intricately veined, gray-green leaves, on square stems. The foliage is fragrant. It grows from tall by wide. The plant produces small but showy, abundant, two-lipped, trumpet-shaped, soft lavender flowers, from spring through autumn. Continued blooming is encouraged by deadheading. The seeds are predominantly sterile, and so the plant will not reseed as an invasive species, unlike some other Nepeta species. Cultivation Nepeta × faassenii is cultivated for its attractive aromatic foliage and masses of blue flowers, as groundcover, border edging, or in pots or rock gardens. It is drought tolerant, and can be deer resistant. It has gained the Royal Horticultural Society's Award of Garden Merit. It was first cultivated by Faassen Nurseries in Tegelen, Netherlands, and named by Bergmans. Cultivars Numerous cultivars are available in the trade. 'Walker's Low' — silver grey foliage, lavender blue flowers, tall by wide, 2007 "Perennial of the Year" (by the Perennial Plant Association). 'Jr. Walker' TM — PP 23,074, a compact induced mutant of 'Walker's Low', small silver grey foliage, small lavender blue flowers, tall by wide, 2013 "Top Performer" (by the Colorado State University Perennial Trial Garden); bred by Michael Dobres of NovaFlora LLC, introduced by Star Roses & Plants. 'Blue Wonder' — blue flowers, "Missouri Botanical Garden Plant of Merit". 'Select Blue' — noticeably bluer flowers, wide. 'Six Hills Giant' — periwinkle blue flowers, tall.
https://en.wikipedia.org/wiki/Vascular%20tumor
A vascular tumor is a tumor of vascular origin; a soft tissue growth that can be either benign or malignant, formed from blood vessels or lymph vessels. Examples of vascular tumors include hemangiomas, lymphangiomas, hemangioendotheliomas, Kaposi's sarcomas, angiosarcomas, and hemangioblastomas. An angioma refers to any type of benign vascular tumor. Some vascular tumors can be associated with serious blood-clotting disorders, making correct diagnosis critical. A vascular tumor may be described in terms of being highly vascularized, or poorly vascularized, referring to the degree of blood supply to the tumor. Classification Vascular tumors make up one of the classifications of vascular anomalies. The other grouping is vascular malformations. Vascular tumors can be further subclassified as being benign, borderline or aggressive, and malignant. Vascular tumors are described as proliferative, and vascular malformations as nonproliferative. Types A vascular tumor typically grows quickly by the proliferation of endothelial cells. Most are not birth defects. Benign The most common type of benign vascular tumors are hemangiomas, most commonly infantile hemangiomas, and less commonly congenital hemangiomas. Infantile hemangioma Infantile hemangiomas are the most common type of vascular tumor to affect babies, accounting for 90% of hemangiomas. They are characterised by the abnormal proliferation of endothelial cells and of deviant blood vessel formation or architecture. Hypoxic stress seems to be a major trigger for this. Infantile hemangiomas are easily diagnosed, and little if any aggressive treatment is needed. They are characterised by rapid growth in the first few months, followed by spontaneous regression in early childhood. Congenital hemangioma Congenital hemangiomas are present and fully formed at birth, and only account for 2% of the hemangiomas. They do not have the postnatal phase of proliferation common to infantile hemangiomas. There are two main vari
https://en.wikipedia.org/wiki/Pneumograph
A pneumograph, also known as a pneumatograph or spirograph, is a device for recording the velocity and force of chest movements during respiration. Principle of operation There are various kinds of pneumographic devices, which have different principles of operation. In one mechanism, a flexible rubber vessel is attached to the chest. The vessel is equipped with sensors. Others are impedance-based. In these methods, a high-frequency (tens to hundreds of kHz), low-amplitude current is injected across the chest cavity. The voltage resulting from this current injection is measured and the resistance is derived from the application of Ohm's law (R = V/I). Current flows less easily through the chest as the lungs fill, so the resistance rises with increasing lung volume.
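The Ohm's-law step at the heart of impedance pneumography can be sketched in a few lines. This is a purely illustrative simulation, not a model of any particular device: the injected current, the simulated voltage waveform, and the function name are all assumptions made here.

```python
import math

def resistance_trace(voltages_v, injected_current_a):
    """Apply Ohm's law per sample: R = V / I."""
    return [v / injected_current_a for v in voltages_v]

# Simulated chest-voltage samples (V) for a hypothetical 100 uA injected
# current; the voltage, and hence the derived resistance, swings slowly
# as the lungs fill and empty.
samples = [0.050 + 0.005 * math.sin(math.pi * t / 20) for t in range(40)]
trace = resistance_trace(samples, 100e-6)
print(round(min(trace)), round(max(trace)))  # 450 550
```

The resistance baseline here is 500 ohm with a breathing-related swing of about plus or minus 50 ohm, mirroring the article's point that resistance rises with lung volume.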
https://en.wikipedia.org/wiki/Barbarian%20%281987%20video%20game%29
Barbarian is a 1987 platform game by Psygnosis. It was first developed for the Atari ST, and was ported to the Amiga, Commodore 64, MS-DOS, MSX, Amstrad CPC, and ZX Spectrum. The Amiga port was released in 1987; the others were released in 1988. The cover artwork (part of the "Red Dragon" figure/landscape) is by fantasy artist Roger Dean. The game spawned a 1991 sequel, Barbarian II. Gameplay The game opens with a striking (for the era) animation of a muscle-bound barbarian cutting a chain with a sword. On the Amiga and Atari ST versions, the animation is accompanied by a loud, digital sound effect. In the game, the player is Hegor, a barbarian who must traverse several dungeons and underground habitats to defeat his brother, the evil sorcerer Necron. He has a sword, a shield and a bow in his arsenal of weapons. Running and jumping, as in many platform games, make up a large part of the gameplay of this title. The game used a unique control system to make up for the lack of more than one joystick button on many systems. The player would first press the button, after which a "menu" of actions would appear along the bottom of the screen. The player then selected the desired action by cycling through the choices with the joystick and pressing the button again when the desired action was highlighted. In the original versions, the game tried to emulate the visual style of the game cover and opening animation. The game used very detailed and colorful sprites and a variety of thoughtful sound effects to accompany the onscreen action. The IBM PC version plays digitized speech in the opening sequence and other sound effects using the speaker. Reception David Plotkin of STart praised Barbarian's graphics and sound as "the most impressive I've ever seen in an ST game". He hesitated to recommend the game, however, because the lack of a save feature forced restarts after the frequent unavoidable deaths. The game was reviewed in 1989 in Dragon #150 by Courtney Harrington in "The
https://en.wikipedia.org/wiki/Physical%20and%20Theoretical%20Chemistry%20Laboratory%20%28Oxford%29
The Physical and Theoretical Chemistry Laboratory (PTCL) is a major chemistry laboratory at the University of Oxford, England. It is located in the main Science Area of the university on South Parks Road. Previously it was known as the Physical Chemistry Laboratory. History The original Physical Chemistry Laboratory was built in 1941 and at that time also housed the inorganic chemistry laboratory. It replaced the Balliol-Trinity Laboratories. The east wing of the building was completed in 1959 and inorganic chemistry, already in its own building on South Parks Road, then became a separate department in 1961. In 1972, the Department of Theoretical Chemistry was established in a house on South Parks Road, and in 1994, the amalgamation of the physical and theoretical chemistry departments took place. This was followed shortly by the theoretical group moving into the PTCL annexe in 1995. The university is in the early planning stages of the demolition of the PTCL building, to be replaced by a second chemistry research laboratory. Selected chemists The following Oxford Physical and Theoretical chemists are of note: John Albery Peter Atkins Ronnie Bell E. J. Bowen Richard G. Compton Charles Coulson Frederick Dainton Cyril Hinshelwood Peter J. Hore Graham Richards Rex Richards Timothy Softley Robert K. Thomas Harold Thompson See also Balliol-Trinity Laboratories, a forerunner of the PTCL Department of Chemistry, University of Oxford
https://en.wikipedia.org/wiki/Splanchnopleuric%20mesenchyme
In the anatomy of an embryo, the splanchnopleuric mesenchyme is a structure created during embryogenesis when the lateral mesodermal germ layer splits into two layers. The inner (or splanchnic) layer adheres to the endoderm, and with it forms the splanchnopleure (mesoderm external to the coelom plus the endoderm). After development, the junction of the somatopleure and splanchnopleure lies at the duodenojejunal flexure. See also Somatopleuric mesenchyme
https://en.wikipedia.org/wiki/Somatopleuric%20mesenchyme
In the anatomy of an embryo, the somatopleure is a structure created during embryogenesis when the lateral plate mesoderm splits into two layers. The outer (or somatic) layer becomes applied to the inner surface of the ectoderm, and with it (partially) forms the somatopleure. The combination of ectoderm and mesoderm, or somatopleure, forms the amnion, the chorion and the lateral body wall of the embryo. Limb formation, from the somatic mesoderm, is induced by Hox genes and the expression of other molecules through an epithelial-mesenchymal transition. The embryonic somatopleure is then divided into three sections: the anterior limb-bud-forming region, the posterior limb-bud-forming region, and the non-limb-forming wall. The bud-forming sections grow in size. The somatic mesoderm under the ectoderm proliferates in mesenchymal form. In the chicken, the extraembryonic tissues are separated into two layers: the splanchnopleure, composed of the endoderm and splanchnic mesoderm, and the somatopleure, composed of the ectoderm and somatic mesoderm, along with the formation of the coelomic cavity after gastrulation. The amnion and chorion are derived from the somatopleure, with a presumptive border at the ectamnion. Following the anterior extension of the extraembryonic mesoderm and formation of the coelom, the anterior and lateral amniotic folds arise along the ectamnion and grow posteriorly over the head of the embryo. A portion of the amniogenic somatopleure adjacent to the base of the head fold is identified as the region contributing to embryonic tissues in the thoracic wall and pharyngeal and cardiac regions. The somatopleure is known to serve as the matrix of the ventrolateral body wall and gives rise to connective tissue, tendons and the sternum. See also Splanchnopleuric mesenchyme
https://en.wikipedia.org/wiki/KLV
KLV (Key-Length-Value) is a data encoding standard, often used to embed information in video feeds. The standard uses a type–length–value encoding scheme. Items are encoded into Key-Length-Value triplets, where key identifies the data, length specifies the data's length, and value is the data itself. It is defined in SMPTE 336M-2007 (Data Encoding Protocol Using Key-Length Value), approved by the Society of Motion Picture and Television Engineers. Due to KLV's large degree of interoperability, it has also been adopted by the Motion Imagery Standards Board. Byte packing In a binary stream of data, a KLV set is broken down in the following fashion, with all integer-interpretation being big endian: Key field The first few bytes are the Key, much like a key in a standard hash table data structure. Keys can be 1, 2, 4, or 16 bytes in length. Presumably in a separate specification document you would agree on a key length for a given application. Sixteen byte keys are usually reserved for use as globally registered unique identifiers, and the Value portion of such a packet usually contains a series of more KLV sets with smaller keys. Length field Following the bytes for the Key are bytes for the Length field which will tell you how many bytes follow the length field and make up the Value portion. There are four kinds of encoding for the Length field: 1-byte, 2-byte, 4-byte and Basic Encoding Rules (BER). The 1-, 2-, and 4-byte variants are pretty straightforward: make an unsigned integer out of the bytes, and that integer is the number of bytes that follow. BER length encoding is a bit more complicated but the most flexible. If the first byte in the length field does not have the high bit set (0x80), then that single byte represents an integer between 0 and 127 and indicates the number of Value bytes that immediately follows. If the high bit is set, then the lower seven bits indicate how many bytes follow that themselves make up a length field. For example if the
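The length-field rules can be sketched as a small decoder. This is an illustrative parser, not code from SMPTE 336M itself; the fixed 16-byte key length and the helper names are assumptions for the example.

```python
def decode_ber_length(buf, offset=0):
    """Decode a BER length field; return (length, bytes_consumed)."""
    first = buf[offset]
    if first < 0x80:                      # short form: one byte encodes 0..127
        return first, 1
    n = first & 0x7F                      # long form: next n bytes hold the length
    length = int.from_bytes(buf[offset + 1:offset + 1 + n], "big")
    return length, 1 + n

def parse_klv(buf, key_len=16):
    """Split a byte string into (key, value) pairs, assuming fixed-size keys."""
    items, pos = [], 0
    while pos < len(buf):
        key = buf[pos:pos + key_len]
        pos += key_len
        length, consumed = decode_ber_length(buf, pos)
        pos += consumed
        items.append((key, buf[pos:pos + length]))
        pos += length
    return items
```

For instance, a length byte of 0x81 followed by 0xC8 decodes to 200, since the high bit of 0x81 signals that one following byte carries the length.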
https://en.wikipedia.org/wiki/Stan%20Frankel
Stanley Phillips Frankel (1919 – May, 1978) was an American computer scientist. He worked on the Manhattan Project and developed various computers as a consultant. Early life He was born in Los Angeles, attended graduate school at the University of Rochester, received his PhD in physics from the University of California, Berkeley, and began his career as a post-doctoral student under J. Robert Oppenheimer at the University of California, Berkeley, in 1942. Career Frankel helped develop computational techniques used in the nuclear research taking place at the time, notably making some of the early calculations relating to the diffusion of neutrons in a critical assembly of uranium with Eldred Nelson. He joined the T (Theoretical) Division of the Manhattan Project at Los Alamos in 1943. His wife Mary Frankel was also hired to work as a human computer in the T Division. While at Los Alamos, Frankel and Nelson organized a group of scientists' wives, including Mary, to perform some of the repetitive calculations using Marchant and Friden desk calculators to divide the massive calculations required for the project. This became Group T-5 under New York University mathematician Donald Flanders when he arrived in the late summer of 1943. Mathematician Dana Mitchell noticed that the Marchant calculators broke under heavy use and persuaded Frankel and Nelson to order IBM 601 punched card machines. This experience led to Frankel's interest in the then-dawning field of digital computers. In August 1945, Frankel and Nick Metropolis traveled to the Moore School of Engineering in Pennsylvania to learn how to program the ENIAC computer. That fall they helped design a calculation that would determine the likelihood of being able to develop a fusion weapon. Edward Teller used the ENIAC results to prepare a report in the spring of 1946 that answered this question in the affirmative. After losing his security clearance (and thus his job) during the Red Scare of the early 1950s, Franke
https://en.wikipedia.org/wiki/Kinetic%20inductance
Kinetic inductance is the manifestation of the inertial mass of mobile charge carriers in alternating electric fields as an equivalent series inductance. Kinetic inductance is observed in high carrier mobility conductors (e.g. superconductors) and at very high frequencies. Explanation A change in electromotive force (emf) will be opposed by the inertia of the charge carriers since, like all objects with mass, they prefer to be traveling at constant velocity and therefore it takes a finite time to accelerate the particle. This is similar to how a change in emf is opposed by the finite rate of change of magnetic flux in an inductor. The resulting phase lag in voltage is identical for both energy storage mechanisms, making them indistinguishable in a normal circuit. Kinetic inductance arises naturally in the Drude model of electrical conduction when one considers not only the DC conductivity but also the finite relaxation time (collision time) τ of the mobile charge carriers, when it is not tiny compared to the wave period 1/f. This model defines a complex conductance at radian frequency ω = 2πf given by σ(ω) = σ1 − iσ2. The imaginary part, −σ2, represents the kinetic inductance. The Drude complex conductivity can be expanded into its real and imaginary components: σ(ω) = ne²τ / [m(1 + iωτ)] = ne²τ / [m(1 + ω²τ²)] − i ne²ωτ² / [m(1 + ω²τ²)], where m is the mass of the charge carrier (i.e. the effective electron mass in metallic conductors) and n is the carrier number density. In normal metals the collision time is typically of order 10⁻¹⁴ s, so for frequencies < 100 GHz the product ωτ is very small and can be ignored; then this equation reduces to the DC conductance σ = ne²τ/m. Kinetic inductance is therefore only significant at optical frequencies, and in superconductors, whose τ is effectively infinite. For a superconducting wire of cross-sectional area A, the kinetic inductance of a segment of length l can be calculated by equating the total kinetic energy of the Cooper pairs in that region with an equivalent inductive energy due to the wire's current I, which gives LK = m l / (2 n_cp e² A), where m is the electron mass (2m is the mass of a Cooper pair), n_cp is the
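A quick numerical check of the Drude expressions shows why the kinetic (inductive) term is negligible for a normal metal at microwave frequencies. The copper-like values of n and τ below are rough illustrative assumptions, not figures from the article.

```python
import math

e   = 1.602e-19   # electron charge, C
m   = 9.109e-31   # electron mass, kg
n   = 8.5e28      # carrier density, m^-3 (roughly copper)
tau = 2.5e-14     # collision time, s (assumed, order 1e-14)

def drude_sigma(f):
    """Real and imaginary parts of the Drude conductivity at frequency f (Hz)."""
    w = 2 * math.pi * f
    denom = m * (1 + (w * tau) ** 2)
    sigma1 = n * e**2 * tau / denom           # real (dissipative) part
    sigma2 = n * e**2 * w * tau**2 / denom    # imaginary (kinetic-inductive) part
    return sigma1, sigma2

s1, s2 = drude_sigma(10e9)   # evaluate at 10 GHz
ratio = s2 / s1              # equals omega*tau, tiny at microwave frequencies
```

At 10 GHz the ratio σ2/σ1 equals ωτ ≈ 0.0016 for these assumed parameters, confirming that the inductive part of a normal metal's response is negligible in this range.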
https://en.wikipedia.org/wiki/Spermatic%20plexus
The spermatic plexus (or testicular plexus) is derived from the renal plexus, receiving branches from the aortic plexus. It accompanies the internal spermatic artery to the testis.
https://en.wikipedia.org/wiki/Aorticorenal%20ganglion
The aorticorenal ganglion is composed of the superior mesenteric, renal, and inferior mesenteric ganglia. It is distinct from the celiac ganglia, but both are part of the preaortic ganglia. Sympathetic input to the gut comes from the sympathetic chain next to the thoracic vertebrae. The upper nerve supply arrives from cell bodies at the levels of T5–T9, leaves the sympathetic chain by the greater splanchnic nerve, and synapses in the celiac ganglion before proceeding to the foregut. Below this, the lesser splanchnic nerve arises from T10–T11, leaves the sympathetic chain and synapses at the aorticorenal ganglion before going on to supply the kidney and upper ureter. Below this, the least splanchnic nerve arises from T12 and leaves the sympathetic chain to synapse at the renal plexus.
https://en.wikipedia.org/wiki/Inferior%20mesenteric%20ganglion
The inferior mesenteric ganglion is a ganglion located near where the inferior mesenteric artery branches from the abdominal aorta. See also Superior mesenteric ganglion
https://en.wikipedia.org/wiki/Superior%20mesenteric%20plexus
The superior mesenteric plexus is a continuation of the lower part of the celiac plexus, receiving a branch from the junction of the right vagus nerve with the plexus. It surrounds the superior mesenteric artery, accompanies it into the mesentery, and divides into a number of secondary plexuses, which are distributed to all the parts supplied by the artery, viz., pancreatic branches to the pancreas; intestinal branches to the small intestine; and ileocolic, right colic, and middle colic branches, which supply the corresponding parts of the great intestine. The nerves composing this plexus are white in color and firm in texture; in the upper part of the plexus close to the origin of the superior mesenteric artery is the superior mesenteric ganglion. Additional images See also Inferior mesenteric plexus
https://en.wikipedia.org/wiki/Superior%20mesenteric%20ganglion
The superior mesenteric ganglion is a ganglion in the upper part of the superior mesenteric plexus. It lies close to the origin of the superior mesenteric artery. Structure The superior mesenteric ganglion is the synapsing point for one of the pre- and post-synaptic nerves of the sympathetic division of the autonomic nervous system. Specifically, contributions to the superior mesenteric ganglion arise from the lesser splanchnic nerve, which typically arises from the spinal nerve roots of T10 and T11. This nerve goes on to innervate the jejunum, the ileum, the ascending colon and the transverse colon. While the sympathetic input of the midgut is innervated by the sympathetic nerves of the thorax, parasympathetic innervation is done by the vagus nerve, which travels along the plexuses that arise from the anterior and posterior vagal trunks of the stomach.
https://en.wikipedia.org/wiki/Alkali-metal%20thermal%20to%20electric%20converter
An alkali-metal thermal-to-electric converter (AMTEC, originally called the sodium heat engine or SHE) is a thermally regenerative electrochemical device for the direct conversion of heat to electrical energy. It is characterized by high potential efficiencies and no moving parts except for the working fluid, which make it a candidate for space power applications. It was invented by Joseph T. Kummer and Neill Weber at Ford in 1966, and is described in US Patents , , and . Design An AMTEC works by circulating an alkali metal, usually sodium, around a closed loop. Heat evaporates the sodium at the hot end, raising it to high pressure. At the anode the sodium atoms give up electrons, and the resulting ions pass through a solid electrolyte chosen because it conducts ions but not electrons. At the cathode the ions recombine with electrons arriving from the external circuit, so the pressure difference across the electrolyte effectively pumps electrons through the external load. The neutral sodium then expands into a low-pressure vapor chamber, where it condenses back to a liquid, and an electromagnetic pump or a wick returns the liquid sodium to the hot side. This device accepts a heat input in a range 900–1300 K and produces direct current with predicted device efficiencies of 15–40%. In the AMTEC, sodium is driven around a closed thermodynamic cycle between a high-temperature heat reservoir and a cooler reservoir at the heat rejection temperature. The unique feature of the AMTEC cycle is that sodium ion conduction between a high-pressure or -activity region and a low-pressure or -activity region on either side of a highly ionically conducting refractory solid electrolyte is thermodynamically nearly equivalent to an isothermal expansion of sodium vapor between the same high and low pressures. Electrochemical oxidation of neutral sodium at the anode leads to sodium ions
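The near-equivalence to an isothermal expansion means the ideal open-circuit EMF follows a Nernst-type relation, V = (RT/F)·ln(p_high/p_low). The sketch below evaluates it at an assumed operating point; the temperature and pressures are illustrative choices, not values from the article.

```python
import math

R = 8.314    # gas constant, J/(mol*K)
F = 96485    # Faraday constant, C/mol

def open_circuit_voltage(T, p_high, p_low):
    """Ideal AMTEC EMF from the isothermal-expansion equivalence:
    V = (R*T/F) * ln(p_high/p_low)."""
    return R * T / F * math.log(p_high / p_low)

# Illustrative (assumed) operating point: 1100 K, pressure ratio of 10^4
v = open_circuit_voltage(1100, 1.0e5, 10.0)
```

At the assumed 1100 K with a 10⁴ pressure ratio this gives roughly 0.87 V per cell, suggesting why practical converters would connect many cells in series.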
https://en.wikipedia.org/wiki/Supine%20position
The supine position means lying horizontally with the face and torso facing up, as opposed to the prone position, which is face down. When used in surgical procedures, it grants access to the peritoneal, thoracic and pericardial regions, as well as the head, neck and extremities. Using anatomical terms of location, the dorsal side is down, and the ventral side is up, when supine. Semi-supine In scientific literature "semi-supine" commonly refers to positions where the upper body is tilted (at 45° or variations) and not completely horizontal. Relation to sudden infant death syndrome The decline in death due to sudden infant death syndrome (SIDS) is said to be attributable to having babies sleep in the supine position. The realization that infants sleeping face down, or in a prone position, had an increased mortality rate re-emerged into medical awareness at the end of the 1980s when two researchers, Susan Beal in Australia and Gus De Jonge in the Netherlands, independently noted the association. It is believed that in the prone position babies are more at risk of re-breathing their own carbon dioxide. Because of the immature state of their central chemoreceptors, infants do not respond to the subsequent respiratory acidosis that develops, whereas older children and adults mount autonomic responses of increased rate and depth of respiration (hyperventilation, yawning). Obstructive sleep apnea Obstructive sleep apnea (OSA) is a form of sleep apnea that occurs more frequently when throat muscles relax and is most severe when individuals are sleeping in the supine position. Studies and evidence show that OSA related to sleeping in the supine position is related to the airway positioning, reduced lung volume, and the inability of airway muscles to dilate enough to compensate as the airway collapses. With individuals who have OSA, many health care providers encourage their patients to avoid the supine position while asleep and sleep laterally or sleep with the head of th
https://en.wikipedia.org/wiki/Complex%20Hadamard%20matrix
A complex Hadamard matrix is any complex N × N matrix H satisfying two conditions: unimodularity (the modulus of each entry is unity): |Hjk| = 1 for j, k = 1, 2, ..., N; orthogonality: H H† = N I, where H† denotes the Hermitian transpose of H and I is the identity matrix. The concept is a generalization of the Hadamard matrix. Note that any complex Hadamard matrix can be made into a unitary matrix by multiplying it by 1/√N; conversely, any unitary matrix whose entries all have modulus 1/√N becomes a complex Hadamard upon multiplication by √N. Complex Hadamard matrices arise in the study of operator algebras and the theory of quantum computation. Real Hadamard matrices and Butson-type Hadamard matrices form particular cases of complex Hadamard matrices. Complex Hadamard matrices exist for any natural N (compare the real case, in which existence is not known for every N divisible by 4). For instance the Fourier matrices, with entries [FN]jk = exp(2πi(j − 1)(k − 1)/N) for j, k = 1, 2, ..., N (the complex conjugate of the DFT matrices without the normalizing factor), belong to this class. Equivalency Two complex Hadamard matrices are called equivalent, written H1 ≃ H2, if there exist diagonal unitary matrices D1, D2 and permutation matrices P1, P2 such that H1 = D1 P1 H2 P2 D2. Any complex Hadamard matrix is equivalent to a dephased Hadamard matrix, in which all elements in the first row and first column are equal to unity. For N = 2, 3 and 5 all complex Hadamard matrices are equivalent to the Fourier matrix FN. For N = 4 there exists a continuous, one-parameter family of inequivalent complex Hadamard matrices. For N = 6 the following families of complex Hadamard matrices are known: a single two-parameter family, a single one-parameter family, a one-parameter orbit including the circulant Hadamard matrix, a two-parameter orbit including the previous two examples, a one-parameter orbit of symmetric matrices, a two-parameter orbit including the previous example, a three-parameter orbit including all the previous examples, a further construction with four degrees of freedom yielding other examples, a single point - one
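Both defining conditions are easy to verify numerically. The sketch below (plain Python with assumed helper names) checks that a Fourier matrix is a complex Hadamard matrix:

```python
import cmath

def fourier_matrix(n):
    """n x n Fourier matrix with entries exp(2*pi*i*j*k/n), no normalization."""
    return [[cmath.exp(2j * cmath.pi * j * k / n) for k in range(n)]
            for j in range(n)]

def is_complex_hadamard(h, tol=1e-9):
    """Check unimodularity of every entry and the orthogonality H H* = n I."""
    n = len(h)
    if any(abs(abs(x) - 1) > tol for row in h for x in row):
        return False          # some entry does not have modulus 1
    for a in range(n):
        for b in range(n):
            s = sum(h[a][k] * h[b][k].conjugate() for k in range(n))
            target = n if a == b else 0
            if abs(s - target) > tol:
                return False  # rows fail the orthogonality condition
    return True
```

The same check accepts a real Hadamard matrix such as [[1, 1], [1, -1]], illustrating that the real case is a special case of the complex one.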
https://en.wikipedia.org/wiki/Claire%20F.%20Gmachl
Claire F. Gmachl is the Eugene Higgins Professor of Electrical Engineering at Princeton University. She is best known for her work in the development of quantum cascade lasers. Education and honors Gmachl earned her M.Sc. in physics from the University of Innsbruck in 1991. She went on to receive her Ph.D. in electrical engineering from the Technical University of Vienna in 1995, graduating sub auspiciis Praesidentis (with special honors from the president of the Austrian republic). Her studies focused on integrated optical modulators and tunable surface-emitting lasers in the near infrared. From 1996 to 1998, she was a postdoctoral member of technical staff at Bell Laboratories. In 1998, she became a formal member of technical staff at Bell Labs, and in 2002 she was named a distinguished member of technical staff, in part due to her work on the development of the quantum cascade laser. In 2003, she left Bell Labs and took a position as associate professor in the department of electrical engineering at Princeton University, where she has been a full professor since 2007. In 2004, Popular Science named Gmachl in its "Class of 2004 - Brilliant 10," its list of the 10 most promising scientists under 40. She went on, in September 2005, to win the MacArthur Foundation's "genius grant." She was later named the director of the new Mid-InfraRed Technologies for Health and the Environment (MIRTHE) Center, funded by the National Science Foundation. Gmachl succeeded Sandra Bermann as head of Whitman College, Princeton University on July 1, 2019. Research Although Gmachl originally intended to study theoretical applied mathematics, her interest soon turned to theoretical applied physics, and, with the encouragement of an advisor, experimental sciences. As such, she works in the fields of optics and semiconductor laser technology. Gmachl has conceived several novel designs for solid-state lasers and her work has led to advances in the development of q
https://en.wikipedia.org/wiki/Tacit%20programming
Tacit programming, also called point-free style, is a programming paradigm in which function definitions do not identify the arguments (or "points") on which they operate. Instead the definitions merely compose other functions, among which are combinators that manipulate the arguments. Tacit programming is of theoretical interest, because the strict use of composition results in programs that are well adapted for equational reasoning. It is also the natural style of certain programming languages, including APL and its derivatives, and concatenative languages such as Forth. The lack of argument naming gives point-free style a reputation of being unnecessarily obscure, hence the epithet "pointless style". Unix scripting uses the paradigm with pipes. Examples Python Tacit programming can be illustrated with the following Python code. A sequence of operations such as the following: def example(x): return baz(bar(foo(x))) ... can be written in point-free style as the composition of a sequence of functions, without parameters: from functools import partial, reduce def compose(*fns): return partial(reduce, lambda v, fn: fn(v), fns) example = compose(foo, bar, baz) For a more complex example, the Haskell code can be translated as: p = partial(compose, partial(compose, f), g) Functional programming A simple example (in Haskell) is a program which computes the sum of a list of numbers. We can define the sum function recursively using a pointed style (cf. value-level programming) as: sum (x:xs) = x + sum xs sum [] = 0 However, using a fold we can replace this with: sum xs = foldr (+) 0 xs And then the argument is not needed, so this simplifies to sum = foldr (+) 0 which is point-free. Another example uses function composition: p x y z = f (g x y) z The following Haskell-like pseudo-code exposes how to reduce a function definition to its point-free equivalent: p = \x -> \y -> \z -> f (g x y) z = \x -> \y -> f (g x y) = \x -> \y -> (f . (g x)) y = \x
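The compose helper from the Python example can be exercised directly with concrete functions; the lambdas below are arbitrary stand-ins chosen for illustration:

```python
from functools import partial, reduce

def compose(*fns):
    """Left-to-right composition: compose(f, g, h)(x) == h(g(f(x)))."""
    return partial(reduce, lambda v, fn: fn(v), fns)

# A point-free pipeline: increment, then double, then render as a string.
example = compose(lambda x: x + 1, lambda x: 2 * x, str)
```

Calling example(3) applies the functions left to right, yielding "8"; compose() with no arguments is simply the identity, since reduce returns its initial value unchanged.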
https://en.wikipedia.org/wiki/Reuleaux%20tetrahedron
The Reuleaux tetrahedron is the intersection of four balls of radius s centered at the vertices of a regular tetrahedron with side length s. The spherical surface of the ball centered on each vertex passes through the other three vertices, which also form vertices of the Reuleaux tetrahedron. Thus the center of each ball is on the surfaces of the other three balls. The Reuleaux tetrahedron has the same face structure as a regular tetrahedron, but with curved faces: four vertices, and four curved faces, connected by six circular-arc edges. This shape is defined and named by analogy to the Reuleaux triangle, a two-dimensional curve of constant width; both shapes are named after Franz Reuleaux, a 19th-century German engineer who did pioneering work on ways that machines translate one type of motion into another. One can find repeated claims in the mathematical literature that the Reuleaux tetrahedron is analogously a surface of constant width, but it is not true: the two midpoints of opposite edge arcs are separated by a larger distance, (√3 − √2/2)s ≈ 1.0249s. Volume and surface area The volume of a Reuleaux tetrahedron is V = (s³/12)(3√2 − 49π + 162 arctan √2) ≈ 0.4220s³. The surface area is A = (8π − 18 arccos(1/3))s² ≈ 2.9755s². Meissner bodies Ernst Meissner and Friedrich Schilling showed how to modify the Reuleaux tetrahedron to form a surface of constant width, by replacing three of its edge arcs by curved patches formed as the surfaces of rotation of a circular arc. According to which three edge arcs are replaced (three that have a common vertex or three that form a triangle) there result two noncongruent shapes that are sometimes called Meissner bodies or Meissner tetrahedra. Bonnesen and Fenchel conjectured that Meissner tetrahedra are the minimum-volume three-dimensional shapes of constant width, a conjecture which is still open. In connection with this problem, Campi, Colesanti and Gronchi showed that the minimum volume surface of revolution with constant width is the surface of revolution of a Reuleaux triangle through one of its symmetry axes. One of Man Ray'
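Since the shape is just the intersection of four balls, its volume can be sanity-checked by Monte Carlo integration; the sample count, seed, and bounding box below are arbitrary choices, and the closed-form value used for comparison is the commonly cited V = (s³/12)(3√2 − 49π + 162 arctan √2) ≈ 0.422 s³.

```python
import math
import random

# Vertices of a regular tetrahedron, scaled so the side length is s = 1.
s = 1.0
scale = 1 / (2 * math.sqrt(2))   # raw vertices below have pairwise distance 2*sqrt(2)
verts = [(x * scale, y * scale, z * scale)
         for x, y, z in [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]]

def inside(p):
    """A point belongs to the Reuleaux tetrahedron iff it is within s of all vertices."""
    return all(math.dist(p, v) <= s for v in verts)

random.seed(1)
n, hits, half = 200_000, 0, 0.65        # bounding cube [-0.65, 0.65]^3 contains the shape
for _ in range(n):
    p = tuple(random.uniform(-half, half) for _ in range(3))
    hits += inside(p)

volume_mc = hits / n * (2 * half) ** 3
volume_exact = (3 * math.sqrt(2) - 49 * math.pi + 162 * math.atan(math.sqrt(2))) / 12
```

With 200,000 samples the estimate should land within roughly ±0.01 of the closed-form value.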
https://en.wikipedia.org/wiki/Gramme%20machine
A Gramme machine, Gramme ring, Gramme magneto, or Gramme dynamo is an electrical generator that produces direct current, named for its Belgian inventor, Zénobe Gramme, and was built as either a dynamo or a magneto. It was the first generator to produce power on a commercial scale for industry. Inspired by a machine invented by Antonio Pacinotti in 1860, Gramme was the developer of a new induced rotor in form of a wire-wrapped ring (Gramme ring) and demonstrated this apparatus to the Academy of Sciences in Paris in 1871. Although popular in 19th century electrical machines, the Gramme winding principle is no longer used since it makes inefficient use of the conductors. The portion of the winding on the interior of the ring cuts no flux and does not contribute to energy conversion in the machine. The winding requires twice the number of turns and twice the number of commutator bars as an equivalent drum-wound armature. Description The Gramme machine used a ring armature, with a series of armature coils, wound around a revolving ring of soft iron. The coils are connected in series, and the junction between each pair is connected to a commutator on which two brushes run. Permanent magnets magnetize the soft iron ring, producing a magnetic field which rotates around through the coils in order as the armature turns. This induces a voltage in two of the coils on opposite sides of the armature, which is picked off by the brushes. Earlier electromagnetic machines passed a magnet near the poles of one or two electromagnets, or rotated coils wound on double-T armatures within a static magnetic field, creating brief spikes or pulses of DC resulting in a transient output of low average power, rather than a constant output of high average power. With more than a few coils on the Gramme ring armature, the resulting voltage waveform is practically constant, thus producing a near direct current supply. This type of machine needs only electromagnets producing the magnetic field
https://en.wikipedia.org/wiki/Emotion%20in%20animals
Emotion is defined as any mental experience with high intensity and high hedonic content. The existence and nature of emotions in non-human animals are believed to be correlated with those of humans and to have evolved from the same mechanisms. Charles Darwin was one of the first scientists to write about the subject, and his observational (and sometimes anecdotal) approach has since developed into a more robust, hypothesis-driven, scientific approach. Cognitive bias tests and learned helplessness models have shown feelings of optimism and pessimism in a wide range of species, including rats, dogs, cats, rhesus macaques, sheep, chicks, starlings, pigs, and honeybees. Jaak Panksepp played a large role in the study of animal emotion, basing his research on the neurological aspect. He described seven core emotional feelings reflected through a variety of neuro-dynamic limbic emotional action systems: seeking, fear, rage, lust, care, panic and play. Through brain stimulation and pharmacological challenges, such emotional responses can be effectively monitored. Emotion has been observed and further researched through multiple approaches, including the behaviourist, comparative, and anecdotal approaches (notably Darwin's), and the scientific approach most widely used today, which has a number of subfields including functional, mechanistic, cognitive bias tests, self-medicating, spindle neurons, vocalizations and neurology. While emotion in nonhuman animals is still quite a controversial topic, it has been studied in an extensive array of species both large and small, including primates, rodents, elephants, horses, birds, dogs, cats, honeybees and crayfish. Etymology, definitions, and differentiation The word "emotion" dates back to 1579, when it was adapted from the French word émouvoir, which means "to stir up". However, the earliest precursors of the word likely date back to the very origins of language. Emotions have been described as discr
https://en.wikipedia.org/wiki/Animal%20loss
The loss of a pet or an animal to which one has become emotionally bonded oftentimes results in grief which can be comparable with the death of a human loved one, or even greater, depending on the individual. The death can be felt more intensely when the owner has made a decision to end the pet's life through euthanasia. While there is strong evidence that animals can feel such loss for other animals, this article focuses on human feelings when an animal is lost, dies or otherwise departs. Effect of animal loss on humans There is no set amount of time for the grieving process to occur. However, mourning is much more intense for a pet upon whom the owner was emotionally dependent. Additionally, some pet owners may feel unable to express their grieving due to social customs and norms surrounding pets. If the pet owner internalizes the grief, the suffering increases. The stages of grief proposed by Elisabeth Kübler-Ross were designed in relation to human death, but can be adapted to describe the grief process for the death of a pet. Indeed, pet death involves several lessons: 1) the relationship rather than the object (the animal) is central to understanding the loss; 2) the manner of death/loss will affect the grieving process; 3) the age and living situation of the bereaved will affect the grieving process. The University of Michigan conducted a study of grief involving 174 adults whose pets had died. Participants were administered a modified CENSHARE Pet Attachment Survey. Results indicate that initially 85.7% of owners experienced at least one symptom of grief, but the occurrence decreased to 35.1% at six months and to 22.4% at one year. Males and females reported different rates on six of 12 symptoms surveyed. The severity and length of symptoms was significantly correlated with the degree of attachment to the deceased pet. These findings indicate that pet loss can be a potential area of clinical concern, especially if the person's attachment to the pet was strong.
https://en.wikipedia.org/wiki/Pulse%20transition%20detector
A pulse transition detector (PTD) is used in flip-flops in order to achieve edge triggering in the circuit. It converts the clock signal's rising edge into a very narrow pulse. The PTD consists of a delay gate, which delays and inverts the clock signal; the delayed signal and the clock signal itself are passed through a NAND gate whose output is then inverted, so the output is high only during the brief interval after a rising edge while both inputs are high. The benefit of edge triggering is that it removes the problems of zeroes and ones catching associated with pulse-triggered flip-flops (e.g. master-slave flip-flops).
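The behaviour can be illustrated with a discrete-time sketch in which the delay gate is modelled as an inverter with a three-step propagation delay (the delay length is an arbitrary assumption):

```python
def pulse_transition_detector(clock, delay=3):
    """AND the clock with its delayed, inverted copy (NAND followed by an inverter)."""
    # Inverter output shifted by `delay` steps; assumed high initially
    # (the clock is taken to be low before t = 0).
    delayed_inverted = [1] * delay + [1 - c for c in clock[:-delay]]
    nand = [1 - (c & d) for c, d in zip(clock, delayed_inverted)]
    return [1 - x for x in nand]          # inverting NAND gives AND

clock = [0] * 5 + [1] * 10 + [0] * 5      # one clock period, rising edge at step 5
pulse = pulse_transition_detector(clock)
```

The output goes high for only the three delay steps following the rising edge at step 5, and stays low through the falling edge, which is exactly the narrow edge-triggering pulse described above.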
https://en.wikipedia.org/wiki/Cryptographic%20module
A cryptographic module is a component of a computer system that implements cryptographic algorithms in a secure way, typically with some element of tamper resistance. NIST defines a cryptographic module as "The set of hardware, software, and/or firmware that implements security functions (including cryptographic algorithms), holds plaintext keys and uses them for performing cryptographic operations, and is contained within a cryptographic module boundary." Hardware security modules, including secure cryptoprocessors, are one way of implementing cryptographic modules. Standards for cryptographic modules include FIPS 140-3 and ISO/IEC 19790.
https://en.wikipedia.org/wiki/Brahmagupta%27s%20identity
In algebra, Brahmagupta's identity says that, for a given n, the product of two numbers of the form a^2 + n*b^2 is itself a number of that form. In other words, the set of such numbers is closed under multiplication. Specifically: (a^2 + n*b^2)(c^2 + n*d^2) = (ac − n*bd)^2 + n*(ad + bc)^2 (1) = (ac + n*bd)^2 + n*(ad − bc)^2 (2). Both (1) and (2) can be verified by expanding each side of the equation. Also, (2) can be obtained from (1), or (1) from (2), by changing b to −b. This identity holds in both the ring of integers and the ring of rational numbers, and more generally in any commutative ring. History The identity is a generalization of the so-called Fibonacci identity (where n = 1), which is actually found in Diophantus' Arithmetica (III, 19). That identity was rediscovered by Brahmagupta (598–668), an Indian mathematician and astronomer, who generalized it and used it in his study of what is now called Pell's equation. His Brahmasphutasiddhanta was translated from Sanskrit into Arabic by Mohammad al-Fazari, and was subsequently translated into Latin in 1126. The identity later appeared in Fibonacci's Book of Squares in 1225. Application to Pell's equation In its original context, Brahmagupta applied his discovery to the solution of what was later called Pell's equation, namely x^2 − N*y^2 = 1. Using the identity in the form (x1^2 − N*y1^2)(x2^2 − N*y2^2) = (x1*x2 + N*y1*y2)^2 − N*(x1*y2 + x2*y1)^2, he was able to "compose" triples (x1, y1, k1) and (x2, y2, k2) that were solutions of x^2 − N*y^2 = k, to generate the new triple (x1*x2 + N*y1*y2, x1*y2 + x2*y1, k1*k2). Not only did this give a way to generate infinitely many solutions to x^2 − N*y^2 = 1 starting with one solution, but also, by dividing such a composition by k1*k2, integer or "nearly integer" solutions could often be obtained. The general method for solving the Pell equation given by Bhaskara II in 1150, namely the chakravala (cyclic) method, was also based on this identity. See also Brahmagupta matrix Brahmagupta–Fibonacci identity Brahmagupta's interpolation formula Gauss composition law Indian mathematics List of Indian mathematicians
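A quick numeric check of the two forms of the identity; the integers a, b, c, d, n below are arbitrary test values, not anything from the historical sources.

```python
# Numeric check of Brahmagupta's identity; the integers a, b, c, d, n are
# arbitrary test values.
def compose(a, b, c, d, n):
    """Two representations of (a^2 + n*b^2)(c^2 + n*d^2) as u^2 + n*v^2."""
    form1 = (a*c - n*b*d)**2 + n*(a*d + b*c)**2   # identity (1)
    form2 = (a*c + n*b*d)**2 + n*(a*d - b*c)**2   # identity (2): b -> -b
    return form1, form2

a, b, c, d, n = 3, 5, 2, 7, 11
product = (a*a + n*b*b) * (c*c + n*d*d)
print(compose(a, b, c, d, n) == (product, product))  # True
```

Running the same check with negative n reproduces the x^2 − N*y^2 composition that Brahmagupta used for Pell's equation.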
https://en.wikipedia.org/wiki/Telegrapher%27s%20equations
The telegrapher's equations (or just telegraph equations) are a set of two coupled, linear equations that predict the voltage and current distributions on a linear electrical transmission line. The equations are important because they allow transmission lines to be analyzed using circuit theory. The equations and their solutions are applicable from 0 Hz up to frequencies at which the transmission line structure can support higher-order non-TEM modes. The equations can be expressed in both the time domain and the frequency domain. In the time domain the independent variables are distance and time, and the resulting equations are partial differential equations of both. In the frequency domain the independent variables are distance and either frequency or complex frequency. The frequency domain variables can be taken as the Laplace transform or Fourier transform of the time domain variables, or they can be taken to be phasors. The resulting frequency domain equations are ordinary differential equations of distance. An advantage of the frequency domain approach is that differential operators in the time domain become algebraic operations in the frequency domain. The equations come from Oliver Heaviside, who developed the transmission line model starting with an August 1876 paper, On the Extra Current. The model demonstrates that electromagnetic waves can be reflected on the wire, and that wave patterns can form along the line. Originally developed to describe telegraph wires, the theory can also be applied to radio frequency conductors, audio frequency (such as telephone lines), low frequency (such as power lines), and pulses of direct current. Distributed components The telegrapher's equations, like all other equations describing electrical phenomena, result from Maxwell's equations. In a more practical approach, one assumes that the conductors are composed of an infinite series of two-port elementary components, each representing an in
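As a rough illustration of the time-domain form, here is a minimal leapfrog finite-difference sketch of the lossless equations dV/dx = −L dI/dt and dI/dx = −C dV/dt; the grid size, parameter values, and step excitation are assumptions of this sketch, not values from the article.

```python
# Leapfrog update of the lossless telegrapher's equations
#   dV/dx = -L dI/dt,   dI/dx = -C dV/dt
# All parameter values below are illustrative assumptions.
import math

N, STEPS = 200, 150          # grid points and time steps
L, C = 1.0, 1.0              # per-unit-length inductance and capacitance
dx = 1.0
dt = 0.5 * dx * math.sqrt(L * C)   # below the Courant limit, so stable

V = [0.0] * N                # voltages at the grid points
I = [0.0] * (N - 1)          # currents between adjacent grid points
V[0] = 1.0                   # step excitation at the near end

for _ in range(STEPS):
    for k in range(N - 1):                       # dI/dt = -(1/L) dV/dx
        I[k] -= dt / (L * dx) * (V[k + 1] - V[k])
    for k in range(1, N - 1):                    # dV/dt = -(1/C) dI/dx
        V[k] -= dt / (C * dx) * (I[k] - I[k - 1])

# the wavefront travels at 1/sqrt(LC); points far down the line are still quiet
print(max(abs(v) for v in V), V[180])
```

The staggered V/I grid mirrors the physical picture of alternating series inductance and shunt capacitance in the distributed-component model.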
https://en.wikipedia.org/wiki/Distributed%20System%20Security%20Architecture
Distributed System Security Architecture (DSSA) is a computer security architecture that provides a suite of functions including login, authentication, and access control in a distributed system. Unlike other similar architectures, DSSA offers the ability to use these functions without the trusted server (known as a certificate authority) being active. In DSSA, security objects are handled by their owners, and access is controlled by the central, universally trusted certificate authority. DSSA/SPX DSSA/SPX is the authentication protocol of DSSA. The CDC (certificate distribution center) is a certificate-granting server, while a certificate is a ticket signed by the CA which contains the public key of the party being certified. Since the CDC merely distributes previously signed certificates, it does not need to be trusted.
https://en.wikipedia.org/wiki/Portable%2C%20Extensible%20Toolkit%20for%20Scientific%20Computation
The Portable, Extensible Toolkit for Scientific Computation (PETSc, pronounced PET-see; the S is silent) is a suite of data structures and routines developed by Argonne National Laboratory for the scalable (parallel) solution of scientific applications modeled by partial differential equations. It employs the Message Passing Interface (MPI) standard for all message-passing communication. PETSc is the world's most widely used parallel numerical software library for partial differential equations and sparse matrix computations. PETSc received an R&D 100 Award in 2009. The PETSc Core Development Group won the SIAM/ACM Prize in Computational Science and Engineering for 2015. PETSc is intended for use in large-scale application projects, and many ongoing computational science projects are built around the PETSc libraries. Its careful design allows advanced users to have detailed control over the solution process. PETSc includes a large suite of parallel linear and nonlinear equation solvers that are easily used in application codes written in C, C++, Fortran, and Python. PETSc provides many of the mechanisms needed within parallel application code, such as simple parallel matrix and vector assembly routines that allow the overlap of communication and computation. In addition, PETSc includes support for parallel distributed arrays useful for finite difference methods. Components PETSc consists of a variety of components comprising major classes and supporting infrastructure. Users typically interact with objects of the highest-level classes relevant to their application and with essential lower-level objects such as vectors, and may customize or extend any others. All major components of PETSc have an extensible plugin architecture. Features and modules PETSc provides many features for parallel computation, broken into several modules: Index sets, including permutations, for indexing into vectors, renumbering, etc. Parallel vectors; and matrices (generally spar
https://en.wikipedia.org/wiki/HRU%20%28security%29
The HRU security model (Harrison, Ruzzo, Ullman model) is an operating system level computer security model which deals with the integrity of access rights in the system. It is an extension of the Graham–Denning model, based around the idea of a finite set of procedures being available to edit the access rights of a subject on an object. It is named after its three authors, Michael A. Harrison, Walter L. Ruzzo and Jeffrey D. Ullman. Along with presenting the model, Harrison, Ruzzo and Ullman also discussed the possibilities and limitations of proving the safety of systems using an algorithm. Description of the model The HRU model defines a protection system consisting of a set of generic rights R and a set of commands C. An instantaneous description of the system is called a configuration and is defined as a tuple of current subjects S, current objects O and an access matrix P. Since the subjects are required to be part of the objects, the access matrix contains one row for each subject and one column for each subject and object. An entry P(s, o) for subject s and object o is a subset of the generic rights R. The commands are composed of primitive operations and can additionally have a list of pre-conditions that require certain rights to be present for pairs of subjects and objects. The primitive operations can modify the access matrix by adding or removing access rights for a pair of subjects and objects and by adding or removing subjects or objects. Creation of a subject or object requires the subject or object not to exist in the current configuration, while deletion of a subject or object requires it to have existed prior to deletion. In a complex command, a sequence of operations is executed only as a whole: a failing operation in a sequence makes the whole sequence fail, a form of database transaction. Discussion of safety Harrison, Ruzzo and Ullman discussed whether there is an algorithm that takes an arbitrary initial configuration and answers the following que
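The configuration and commands described above can be sketched as follows; the class layout, the command name grant_read, and the rights "own" and "read" are illustrative choices, not part of the formal model.

```python
# Toy HRU protection system: configuration (S, O, P) plus primitive
# operations and one composite command with a pre-condition.
class HRU:
    def __init__(self):
        self.subjects = set()
        self.objects = set()
        self.matrix = {}                      # (subject, object) -> set of rights

    # --- primitive operations ---
    def create_subject(self, s):
        assert s not in self.objects          # must not exist yet
        self.subjects.add(s)
        self.objects.add(s)                   # every subject is also an object

    def create_object(self, o):
        assert o not in self.objects
        self.objects.add(o)

    def enter_right(self, r, s, o):
        self.matrix.setdefault((s, o), set()).add(r)

    def delete_right(self, r, s, o):
        self.matrix.get((s, o), set()).discard(r)

    # --- a composite command with a pre-condition (all-or-nothing) ---
    def grant_read(self, granter, grantee, o):
        if "own" not in self.matrix.get((granter, o), set()):
            return False                      # pre-condition fails: nothing happens
        self.enter_right("read", grantee, o)
        return True

prot = HRU()
prot.create_subject("alice")
prot.create_subject("bob")
prot.create_object("report")
prot.enter_right("own", "alice", "report")
print(prot.grant_read("alice", "bob", "report"))   # True
print(prot.grant_read("bob", "alice", "report"))   # False: bob lacks "own"
```

The all-or-nothing behaviour of grant_read mirrors the transactional execution of command bodies in the model.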
https://en.wikipedia.org/wiki/TUNIS
TUNIS (Toronto University System) was a Unix-like operating system, developed at the University of Toronto in the early 1980s. TUNIS was a portable operating system compatible with Unix V7, but with a completely redesigned kernel, written in Concurrent Euclid. Programs that ran under Unix V7 could be run under TUNIS with no modification. TUNIS was designed for teaching, and was intended to provide a model for the design of well-structured, highly portable, easily understood Unix-like operating systems. It made extensive use of Concurrent Euclid modules to isolate machine dependencies and provide a clean internal structure through information hiding. TUNIS also made use of Concurrent Euclid's built-in processes and synchronization features to make it easy to understand and maintain. TUNIS targeted the PDP-11, Motorola 6809 and 68000, and National Semiconductor 32016 architectures, and supported distribution across multiple CPUs using Concurrent Euclid's synchronization features.
https://en.wikipedia.org/wiki/Biodiversity%20and%20drugs
Biodiversity plays a vital role in maintaining human and animal health. Numerous plants, animals, and fungi are used in medicine, as well as to produce vital vitamins, painkillers, and other substances. Natural products have been recognized and used as medicines by ancient cultures all around the world. Many animals are also known to self-medicate using plants and other materials available to them. More than 60% of the world's population relies almost entirely on plant medicine for primary health care. About 119 pure chemicals are extracted from fewer than 90 species of higher plants and used as medicines throughout the world, for example, caffeine, methyl salicylate, and quinine. Antibiotics Streptomycin, neomycin, and erythromycin are derived from tropical soil actinomycetes (bacteria). Plant drugs Many plant species have been studied thoroughly for their potential value as sources of drugs. It is possible that some plant species may be a source of drugs against high blood pressure, AIDS, or heart troubles. In China, Japan, India, and Germany, there is a great deal of interest in and support for the search for new drugs from higher plants. Sweet wormwood Each species carries unique genetic material in its DNA and in its chemical factory responding to these genetic instructions. For example, in the valleys of central China, a fernlike endangered weed called sweet wormwood grows, which is the only source of artemisinin, a drug that is nearly 100 percent effective against malaria. If this plant were lost to extinction, then the ability to control malaria, even today a potent killer, would diminish. Zoopharmacognosy Zoopharmacognosy is the study of how animals use plants, insects, and other materials in self-medication. For example, apes have been observed selecting a particular part of a medicinal plant by taking off leaves and breaking the stem to suck out the juice. In an interview with the late Neil Campbell, Eloy Rodriguez descri
https://en.wikipedia.org/wiki/AT%26T%20UNIX%20PC
The AT&T UNIX PC is a Unix desktop computer originally developed by Convergent Technologies (later acquired by Unisys), and marketed by AT&T Information Systems in the mid- to late 1980s. The system was codenamed "Safari 4" and is also known as the PC 7300, and often dubbed the "3B1". Despite the latter name, the system had little in common with AT&T's line of 3B series computers. The system was tailored for use as a productivity tool in office environments and as an electronic communication center. Hardware configuration 10 MHz Motorola 68010 (16-bit external bus, 32-bit internal) with custom, discrete MMU Internal MFM hard drive, originally 10 MB, later models with up to 67 MB Internal 5-1/4" floppy drive At least 512 KB RAM on main board (1 MB or 2 MB were also options), expandable up to an additional 2 MB via expansion cards (4 MB max total) 32 KB VRAM 16 KB ROM (up to 32 KB ROM supported using 2x 27128 EPROMs) 2 KB SRAM (for MMU page table) Monochrome green phosphor monitor Internal 300/1200 bit/s modem RS-232 serial port Centronics parallel port 3 S4BUS expansion slots 3 phone jacks PC 7300 The initial PC 7300 model offered a modest 512 KB of memory and a small, low-performance 10 MB hard drive. This model, although progressive in offering a Unix system for desktop office operation, was underpowered and produced considerable fan and drive-bearing noise even when idling. The modern-looking "wedge" design by Mike Nuttall was innovative, and the machine gained recognition by appearing in numerous movies and TV shows as the token "computer". AT&T 3B/1 An enhanced model, the "3B/1", was introduced in October 1985 starting at . The cover was redesigned to accommodate a full-height 67 MB hard drive; the redesign added a 'hump' to the case. The model also expanded onboard memory to 1 or 2 MB and added a better power supply. S/50 Convergent Technologies offered an S/50, which was a re-badged PC 7300. Olivetti AT&T 3B1 British Olivetti released the "Olivetti A
https://en.wikipedia.org/wiki/Screen%20protector
A screen protector is an additional sheet of material—commonly polyurethane or laminated glass—that can be attached to the screen of an electronic device to protect it against physical damage. History The first screen protector was designed and patented by Herbert Schlegel in 1968 for use on television screens. Screen protectors first entered the mobile-device market after the rise of personal digital assistants (PDAs). Since PDAs were often operated via a stylus, the tip of the stylus could scratch the sensitive LCD screen surface. Therefore, screen protectors provided sacrificial protection from this damage. Since then, the ubiquity of mobile devices has seen the screen protector become more widely used. Materials Screen protectors are made of either plastics, such as polyethylene terephthalate (PET) or thermoplastic polyurethane (TPU), or of laminated tempered glass, similar to the devices' original screens they are meant to protect. Plastic screen protectors cost less than glass and are thinner (around thick, compared to for glass) and more flexible. At the same price, glass will resist scratches better than plastic and feel more like the device's screen, though higher-priced plastic protectors may be better than the cheapest tempered glass models, since glass will shatter or crack with sufficient impact force. A screen protector's surface can be glossy or matte. Glossy protectors retain the display's original clarity, while a matte ("anti-glare") surface facilitates readability in bright environments and mitigates stains such as fingerprints. Disadvantages Screen protectors have been known to interfere with the operation of some touchscreens. Also, an existing oleophobic coating of a touchscreen will be covered, although some tempered glass screen protectors come with their own oleophobic coating. On some devices, the thickness of screen protectors can affect the look and feel of the device. Function The main function of a screen protector is
https://en.wikipedia.org/wiki/Frzb
Frzb (pronounced like the toy frisbee) is a Wnt-binding protein especially important in embryonic development. It is a competitor for the cell-surface G-protein receptor Frizzled. Frizzled is a tissue polarity gene in Drosophila melanogaster and encodes integral proteins that function as cell-surface receptors for Wnts, called serpentine receptors. The integral membrane proteins contain a cysteine-rich domain thought to be the Wnt-binding domain in the extracellular region. The signals are initiated at the 7-transmembrane domain and transmitted through receptor coupling to G-proteins. This protein is expressed in chondrocytes, making it important in skeletal development in the embryo and fetus. Frzb is localized in the extracellular plasma membrane. Unlike Frizzled, Frzb lacks the 7 transmembrane domains normally found in G-protein-coupled receptors. It is still considered a homolog of Frizzled because it contains a cysteine-rich domain (CRD), and because of its intracellular C-terminus, which is crucial for signaling. The CRD is highly conserved in diverse proteins, such as receptor tyrosine kinases, and functions as a ligand-binding domain. The C-terminal is a carboxyl terminus located intracellularly and is required for canonical signaling. The serpentine receptors bind ligand (Wnt protein) and activate G-proteins. A signal transduction cascade results in the secretion of first- and second-group antagonists. First-group antagonists are composed of the secreted Frizzled-related protein family (Sfrp) and Wnt inhibitory factor (Wif). Both Sfrp and Wif bind directly to Wnt proteins, blocking activation of the receptor. The second group of antagonists contains a class of Wnt inhibitory proteins known as Frizzled receptor-like proteins (FRPs). FRPs bind to the LRP (low-density-lipoprotein-related protein) co-receptors, blocking activation of the Wnt signaling pathway. One such pathway that involves the Frizzled (Fz) family is Wnt/β-Catenin (β-Cat) signaling. β-Ca
https://en.wikipedia.org/wiki/Urofacial%20syndrome
Urofacial syndrome, or Ochoa syndrome, is an autosomal recessive congenital disorder characterized by the association of lower urinary tract and bowel dysfunction with a typical facial expression: when attempting to smile, the patient seems to be crying or grimacing. It was first described by the Colombian physician Bernardo Ochoa in the early 1960s. The inverted facial expression presented by children with this syndrome allows for early detection of the syndrome, which is vital for establishing a better prognosis, as urinary-related problems associated with this disease can cause harm if left untreated. Incontinence is another easily detectable symptom of the syndrome that is due to detrusor-sphincter discoordination. It may be associated with HPSE2. Signs and symptoms Infants with the disorder exhibit an inverted smile; they appear to be crying when they are actually smiling, in conjunction with uropathy. They also may be affected by hydronephrosis. Symptoms of this disease can start at very young ages. Many people with this syndrome will die in their teens to early 20s because of renal failure (uropathy) if not diagnosed and treated. Children with the syndrome have abnormal facial development that causes an inverted smile; nerve connections are, however, normal. When attempting to smile, the child will appear to cry. Urinary problems arise as a result of a neurogenic bladder. Most patients older than the age of toilet training present with enuresis, urinary-tract infection, hydronephrosis, and a spectrum of radiological abnormalities typical of obstructive or neurogenic bladders. Radiological abnormalities include trabeculated bladder, vesicoureteral reflux, external sphincter spasm, pyelonephritis, hyperreflexic bladder, noninhibited detrusor contraction, etc. Urinary abnormalities might result in renal deterioration and failure. This can be prevented by taking proper measures to restore normal micturition and by taking antibiotics to prevent
https://en.wikipedia.org/wiki/Dynamic%20network%20analysis
Dynamic network analysis (DNA) is an emergent scientific field that brings together traditional social network analysis (SNA), link analysis (LA), social simulation and multi-agent systems (MAS) within network science and network theory. Dynamic networks are a function from time (modeled as a subset of the real numbers) to a set of graphs; for each time point there is a graph. This is akin to the definition of dynamical systems, in which the function is from time to an ambient space, except that instead of an ambient space, time is mapped to relationships between pairs of vertices. Overview There are two aspects of this field. The first is the statistical analysis of DNA data. The second is the utilization of simulation to address issues of network dynamics. DNA networks vary from traditional social networks in that they are larger, dynamic, multi-mode, multi-plex networks, and may contain varying levels of uncertainty. The main difference between DNA and SNA is that DNA takes into account interactions of social features conditioning the structure and behavior of networks. DNA is tied to temporal analysis, but temporal analysis is not necessarily tied to DNA, as changes in networks sometimes result from external factors which are independent of the social features found in networks. One of the earliest and most notable cases of the use of DNA is Sampson's monastery study, where he took snapshots of the same network at different intervals and observed and analyzed the evolution of the network. DNA statistical tools are generally optimized for large-scale networks and admit the analysis of multiple networks simultaneously in which there are multiple types of nodes (multi-node) and multiple types of links (multi-plex). Multi-node multi-plex networks are generally referred to as meta-networks or high-dimensional networks. In contrast, SNA statistical tools focus on single or at most two-mode data and facilitate the analysis of only one type of link at a time. DNA statistical too
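The "function from time to graphs" definition can be sketched directly as a map from time points to edge sets; the nodes, edges, and time points below are invented for illustration.

```python
# A dynamic network as a function from time points to graphs (edge sets).
# The nodes, edges, and time points are invented for illustration.
dynamic_net = {
    0: {("a", "b"), ("b", "c")},
    1: {("a", "b"), ("c", "d")},
    2: {("a", "c"), ("a", "d")},
}

def degree(edges, v):
    """Number of edges touching v in one time-slice of the network."""
    return sum(v in e for e in edges)

# trace how node "a"'s degree evolves across the snapshots
print([degree(dynamic_net[t], "a") for t in sorted(dynamic_net)])  # [1, 1, 2]
```

Per-snapshot measures like this are the simplest form of temporal analysis; DNA tools extend the idea to multi-mode, multi-plex data.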
https://en.wikipedia.org/wiki/Receptaculites
Receptaculites is the name-bearing genus for an extinct group of conspicuous benthic marine genera, the receptaculitids (formally Receptaculitaceae or Receptaculitidae), that lived from the Early Ordovician through the Permian period, peaking in the Middle Ordovician. The group's phylogenetic origin has long been obscure: some argue that they were calcareous algae, probably of the order Dasycladales, but consensus is lacking. Receptaculitids lived in warm, shallow seas. They have been described from all continents except Antarctica. In some areas they were important reef-formers, and they also occur as isolated specimens. Receptaculites and its relatives have a double-spiral, radiating pattern of rhombus-shaped plates supported by spindle-like objects called meroms. Fossils can usually be identified by the intersecting patterns of clockwise and counterclockwise rows of plates or stalk spaces, superficially similar to the arrangement of disk florets on a sunflower—hence the common name "sunflower coral" (sic). Receptaculitids have sometimes been compared to the morphologically similar, but probably distantly related, cyclocrinitids.
https://en.wikipedia.org/wiki/Waste%20heat
Waste heat is heat that is produced by a machine, or other process that uses energy, as a byproduct of doing work. All such processes give off some waste heat as a fundamental result of the laws of thermodynamics. Waste heat has lower utility (or in thermodynamics lexicon a lower exergy or higher entropy) than the original energy source. Sources of waste heat include all manner of human activities, natural systems, and all organisms, for example, incandescent light bulbs get hot, a refrigerator warms the room air, a building gets hot during peak hours, an internal combustion engine generates high-temperature exhaust gases, and electronic components get warm when in operation. Instead of being "wasted" by release into the ambient environment, sometimes waste heat (or cold) can be used by another process (such as using hot engine coolant to heat a vehicle), or a portion of heat that would otherwise be wasted can be reused in the same process if make-up heat is added to the system (as with heat recovery ventilation in a building). Thermal energy storage, which includes technologies both for short- and long-term retention of heat or cold, can create or improve the utility of waste heat (or cold). One example is waste heat from air conditioning machinery stored in a buffer tank to aid in night time heating. Another is seasonal thermal energy storage (STES) at a foundry in Sweden. The heat is stored in the bedrock surrounding a cluster of heat exchanger equipped boreholes, and is used for space heating in an adjacent factory as needed, even months later. An example of using STES to use natural waste heat is the Drake Landing Solar Community in Alberta, Canada, which, by using a cluster of boreholes in bedrock for interseasonal heat storage, obtains 97 percent of its year-round heat from solar thermal collectors on the garage roofs. Another STES application is storing winter cold underground, for summer air conditioning. On a biological scale, all organisms reject w
https://en.wikipedia.org/wiki/Math%20circle
A math circle is a learning space where participants engage in the depths and intricacies of mathematical thinking, propagate the culture of doing mathematics, and create knowledge. To reach these goals, participants partake in problem-solving, mathematical modeling, the practice of art, and philosophical discourse. Some circles involve competition, while others do not. Characteristics Math circles can have a variety of styles. Some are very informal, with the learning proceeding through games, stories, or hands-on activities. Others are more traditional enrichment classes but without formal examinations. Some have a strong emphasis on preparing for Olympiad competitions; some avoid competition as much as possible. Models can use any combination of these techniques, depending on the audience, the mathematician, and the environment of the circle. Athletes have sports teams through which to deepen their involvement with sports; math circles can play a similar role for kids who like to think. Two features all math circles have in common are (1) that they are composed of students who want to be there, who either like math or want to like math, and (2) that they give students a social context in which to enjoy mathematics. History Mathematical enrichment activities in the United States have been around since sometime before 1977, in the form of residential summer programs, math contests, and local school-based programs. The concept of a math circle, on the other hand, with its emphasis on convening professional mathematicians and secondary school students regularly to solve problems, appeared in the U.S. in 1994 with Robert and Ellen Kaplan at Harvard University. This form of mathematical outreach made its way to the U.S. most directly from the former Soviet Union and present-day Russia and Bulgaria. Math circles first appeared in the Soviet Union during the 1930s; they have existed in Bulgaria since sometime before 1907. The tradition arrived in the U.S. with émigrés who had r
https://en.wikipedia.org/wiki/Error%20exponent
In information theory, the error exponent of a channel code or source code over the block length of the code is the rate at which the error probability decays exponentially with the block length of the code. Formally, it is defined as the limiting ratio of the negative logarithm of the error probability to the block length of the code for large block lengths. For example, if the probability of error P_error of a decoder drops as e^(−n*α), where n is the block length, the error exponent is α. In this example, −ln(P_error)/n approaches α for large n. Many of the information-theoretic theorems are of asymptotic nature, for example, the channel coding theorem states that for any rate less than the channel capacity, the probability of error of the channel code can be made to go to zero as the block length goes to infinity. In practical situations, there are limitations to the delay of the communication and the block length must be finite. Therefore, it is important to study how the probability of error drops as the block length goes to infinity. Error exponent in channel coding For time-invariant DMCs The channel coding theorem states that for any ε > 0 and for any rate less than the channel capacity, there is an encoding and decoding scheme that can be used to ensure that the probability of block error is less than ε for sufficiently long message blocks. Also, for any rate greater than the channel capacity, the probability of block error at the receiver goes to one as the block length goes to infinity. Assuming a channel coding setup as follows: the channel can transmit any of M = e^(nR) messages, by transmitting the corresponding codeword (which is of length n). Each component in the codebook is drawn i.i.d. according to some probability distribution with probability mass function Q. At the decoding end, maximum likelihood decoding is done. Let X_i^n be the i-th random codeword in the codebook, where i goes from 1 to M. Suppose the first message is selected, so codeword X_1^n is transmitted. Given that is rece
https://en.wikipedia.org/wiki/Sidearm%20%28baseball%29
In baseball, sidearm is a motion for throwing a ball along a low, approximately horizontal plane rather than a high, mostly vertical plane (overhand). Sidearm is a common way of throwing the ball in the infield, because many throws must be made hurriedly from the glove after fielding ground balls. An infielder's quickest throw to the bases is often from just above ground level, necessitating a horizontal release of the ball. Sidearm pitchers, also known as sidewinders, are uncommon at all levels of baseball except in Japan, where sidearm pitchers are widely popular. Few find sidearm a natural delivery, and those who do are often discouraged by coaches who know little about sidearm mechanics, and who believe that overhand pitching affords greater velocity. This is generally true, since overhand pitching provides better mechanical leverage that the body can use to accelerate the ball. But what sidearm pitchers lose in velocity, they gain in ball movement and an unusual release point. Historical progression of pitching In the middle of the nineteenth century, the game of baseball began to evolve from a sport played by amateurs for recreation into a more serious game played by professionals. One of the most dramatic changes was the transition of the pitcher's delivery from an underhand motion to an overhanded throw. Before the American Civil War, the pitcher's role was to initiate the action by offering an underhanded throw to the batter, in much the same way that a basketball referee offers up a jump ball to begin play. As the game progressed towards professionalism and became more serious, pitchers began to attempt to prevent the batter from hitting the ball by throwing faster pitches. The rules governing the delivery of pitches proved to be hard to enforce, and pitchers continued to stretch the boundaries of the rules until, by the 1870s, the release point of pitches had reached the pitcher's waist level. As the game continued to evolve into the 20th centu
https://en.wikipedia.org/wiki/Bifurcation%20locus
In complex dynamics, the bifurcation locus of a parameterized family of one-variable holomorphic functions is, informally, the set of parameters for which the dynamical behavior changes drastically under a small perturbation of the parameter. Thus the bifurcation locus can be thought of as an analog of the Julia set in parameter space. Without doubt, the most famous example of a bifurcation locus is the boundary of the Mandelbrot set. Parameters in the complement of the bifurcation locus are called J-stable.
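For the quadratic family f_c(z) = z² + c, the Mandelbrot set consists of the parameters c for which the orbit of 0 stays bounded, and its boundary is the bifurcation locus. A minimal Python sketch of this, assuming the standard escape-time criterion (once |z| > 2 the orbit must diverge):

```python
def escapes(c, max_iter=100):
    """Return the iteration at which the orbit z -> z*z + c (starting from 0)
    exceeds |z| > 2, or None if it stays bounded for max_iter steps,
    in which case c is likely in the Mandelbrot set."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return None

print(escapes(0j))        # None: c = 0 lies inside the Mandelbrot set
print(escapes(0.5 + 0j))  # a small integer: c = 0.5 escapes quickly
# Near the boundary -- the bifurcation locus -- a tiny perturbation of the
# parameter flips the outcome, the "drastic change" the definition refers to:
print(escapes(-0.75 + 0.01j), escapes(-0.75 + 0.1j))
```

The last line illustrates the sensitivity at the boundary point c = −0.75: with the smaller perturbation the orbit still looks bounded after 100 iterations, while the slightly larger one escapes.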
https://en.wikipedia.org/wiki/Scratchpad%20memory
Scratchpad memory (SPM), also known as scratchpad, scratchpad RAM or local store in computer terminology, is an internal memory, usually high-speed, used for temporary storage of calculations, data, and other work in progress. In reference to a microprocessor (or CPU), scratchpad refers to a special high-speed memory used to hold small items of data for rapid retrieval. It is similar in usage and size to a scratchpad in everyday life: a pad of paper for preliminary notes, sketches or writings. When the scratchpad is a hidden portion of the main memory it is sometimes referred to as bump storage. In some systems it can be considered similar to the L1 cache in that it is the next closest memory to the ALU after the processor registers, with explicit instructions to move data to and from main memory, often using DMA-based data transfer. In contrast to a system that uses caches, a system with scratchpads has non-uniform memory access (NUMA) latencies, because the access latencies to the different scratchpads and to the main memory vary. Another difference from a system that employs caches is that a scratchpad commonly does not contain a copy of data that is also stored in the main memory. Scratchpads are employed to simplify caching logic and to guarantee that a unit can work without main memory contention in a system employing multiple processors, especially in multiprocessor systems-on-chip for embedded systems. They are mostly suited for storing temporary results (such as would be found on the CPU stack) that typically would not need to be committed to the main memory; however, when fed by DMA, they can also be used in place of a cache for mirroring the state of slower main memory. The same issues of locality of reference apply to efficiency of use, although some systems allow strided DMA to access rectangular data sets. Another difference is that scratchpads are explicitly manipulated by applications. They may be useful
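The explicit data movement described above can be illustrated in software. The following is an illustrative Python sketch (not real hardware; the names and tile size are invented for the example) of the scratchpad programming model: the application, rather than a cache controller, stages each tile into a small local buffer, computes there, and writes the results back.

```python
SCRATCHPAD_SIZE = 64  # small, fast, software-managed local buffer

def scale_in_tiles(main_memory, factor):
    """Process a large array tile by tile through an explicit scratchpad,
    mimicking the DMA-in / compute / DMA-out phases of a local store."""
    scratchpad = [0] * SCRATCHPAD_SIZE
    for base in range(0, len(main_memory), SCRATCHPAD_SIZE):
        tile = main_memory[base:base + SCRATCHPAD_SIZE]
        # "DMA in": explicit copy from main memory into the scratchpad
        scratchpad[:len(tile)] = tile
        # compute entirely out of the fast local store
        for i in range(len(tile)):
            scratchpad[i] *= factor
        # "DMA out": explicit write-back; unlike a cache, the scratchpad
        # holds no coherent copy of main memory afterwards
        main_memory[base:base + len(tile)] = scratchpad[:len(tile)]

data = list(range(200))
scale_in_tiles(data, 3)
```

Note that correctness depends entirely on the program issuing the copies at the right moments, which is exactly the burden that transparent caching removes and explicit scratchpad management accepts in exchange for predictability.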
https://en.wikipedia.org/wiki/Pattern%20hair%20loss
Pattern hair loss (also known as androgenetic alopecia (AGA)) is a hair loss condition that primarily affects the top and front of the scalp. In male-pattern hair loss (MPHL), the hair loss typically presents as either a receding front hairline, loss of hair on the crown (vertex) of the scalp, or a combination of both. Female-pattern hair loss (FPHL) typically presents as a diffuse thinning of the hair across the entire scalp. Male pattern hair loss seems to be due to a combination of oxidative stress, the microbiome of the scalp, genetics, and circulating androgens, particularly dihydrotestosterone (DHT). Men with early-onset androgenic alopecia (before the age of 35) have been deemed the male phenotypic equivalent of polycystic ovary syndrome (PCOS). As an early clinical expression of insulin resistance and metabolic syndrome, AGA is associated with an increased risk of cardiovascular diseases, glucose metabolism disorders, type 2 diabetes, and enlargement of the prostate. The cause of female pattern hair loss remains unclear; androgenetic alopecia in women is associated with an increased risk of polycystic ovary syndrome (PCOS). Management may include simply accepting the condition or shaving one's head to improve the aesthetic aspect of the condition. Otherwise, common medical treatments include minoxidil, finasteride, dutasteride, or hair transplant surgery. Use of finasteride and dutasteride in women is not well studied and may result in birth defects if taken during pregnancy. Pattern hair loss by the age of 50 affects about half of males and a quarter of females. It is the most common cause of hair loss. Both males aged 40–91 and younger male patients with early-onset AGA (before the age of 35) had a higher likelihood of metabolic syndrome (MetS) and insulin resistance. In younger males, studies found metabolic syndrome at approximately four times the frequency, which is deemed clinically significant. Abdominal obesity, hypertensi
https://en.wikipedia.org/wiki/Preconditioner
In mathematics, preconditioning is the application of a transformation, called the preconditioner, that conditions a given problem into a form that is more suitable for numerical solving methods. Preconditioning is typically related to reducing a condition number of the problem. The preconditioned problem is then usually solved by an iterative method.

Preconditioning for linear systems

In linear algebra and numerical analysis, a preconditioner P of a matrix A is a matrix such that P⁻¹A has a smaller condition number than A. It is also common to call T = P⁻¹ the preconditioner, rather than P, since P itself is rarely explicitly available. In modern preconditioning, the application of T = P⁻¹, i.e., multiplication of a column vector, or a block of column vectors, by T, is commonly performed in a matrix-free fashion, i.e., where neither P, nor P⁻¹ (and often not even A) are explicitly available in matrix form. Preconditioners are useful in iterative methods to solve a linear system Ax = b for x since the rate of convergence for most iterative linear solvers increases because the condition number of the matrix decreases as a result of preconditioning. Preconditioned iterative solvers typically outperform direct solvers, e.g., Gaussian elimination, for large, especially for sparse, matrices. Iterative solvers can be used as matrix-free methods, i.e. become the only choice if the coefficient matrix A is not stored explicitly, but is accessed by evaluating matrix-vector products.

Description

Instead of solving the original linear system Ax = b for x, one may consider the right preconditioned system AP⁻¹(Px) = b and solve AP⁻¹y = b for y and Px = y for x. Alternatively, one may solve the left preconditioned system P⁻¹(Ax − b) = 0. Both systems give the same solution as the original system as long as the preconditioner matrix P is nonsingular. The left preconditioning is more traditional. The two-sided preconditioned system, with preconditioners applied on both the left and the right, may be beneficial, e.g., to preserve the matrix symmetry: if the original matrix A is real symmetric and real preconditioners
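The effect on the condition number can be demonstrated with the simplest classical choice, the Jacobi (diagonal) preconditioner P = diag(A). A sketch assuming NumPy; the matrix here is an arbitrary illustrative example with widely varying diagonal scales:

```python
import numpy as np

# A symmetric positive-definite system whose rows are badly scaled
A = np.diag([1.0, 10.0, 100.0, 1000.0]) + 0.5  # the scalar adds 0.5 to every entry
b = np.array([1.0, 2.0, 3.0, 4.0])

# Jacobi preconditioner: P = diag(A). Applying P^-1 is just a diagonal
# scaling, so it is cheap and naturally suited to matrix-free use.
P_inv = np.diag(1.0 / np.diag(A))

print(np.linalg.cond(A))          # condition number of the original matrix
print(np.linalg.cond(P_inv @ A))  # much smaller after left preconditioning

# Both formulations give the same solution x, since P is nonsingular
x_orig = np.linalg.solve(A, b)
x_prec = np.linalg.solve(P_inv @ A, P_inv @ b)
```

In practice one would pass such a preconditioner to an iterative solver rather than form P⁻¹A explicitly; the explicit products here only serve to make the condition-number reduction and the invariance of the solution visible.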
https://en.wikipedia.org/wiki/Cephalium
Cephalium is a frequently brightly coloured structure of wool and bristles at the growing tip of certain cacti. It is most commonly found on cacti of the genus Melocactus and can take a number of colours, forms and shapes. The cephalium only begins growing after a cactus has reached a certain size or age. Once flowering begins, the flower buds form from the cephalium.