https://en.wikipedia.org/wiki/Complex%20analysis
Complex analysis, traditionally known as the theory of functions of a complex variable, is the branch of mathematical analysis that investigates functions of complex numbers. It is helpful in many branches of mathematics, including algebraic geometry, number theory, analytic combinatorics, applied mathematics; as well as in physics, including the branches of hydrodynamics, thermodynamics, quantum mechanics, and twistor theory. By extension, use of complex analysis also has applications in engineering fields such as nuclear, aerospace, mechanical and electrical engineering. As a differentiable function of a complex variable is equal to its Taylor series (that is, it is analytic), complex analysis is particularly concerned with analytic functions of a complex variable (that is, holomorphic functions). History Complex analysis is one of the classical branches in mathematics, with roots in the 18th century and just prior. Important mathematicians associated with complex numbers include Euler, Gauss, Riemann, Cauchy, Gösta Mittag-Leffler, Weierstrass, and many more in the 20th century. Complex analysis, in particular the theory of conformal mappings, has many physical applications and is also used throughout analytic number theory. In modern times, it has become very popular through a new boost from complex dynamics and the pictures of fractals produced by iterating holomorphic functions. Another important application of complex analysis is in string theory which examines conformal invariants in quantum field theory. Complex functions A complex function is a function from complex numbers to complex numbers. In other words, it is a function that has a subset of the complex numbers as a domain and the complex numbers as a codomain. Complex functions are generally assumed to have a domain that contains a nonempty open subset of the complex plane. For any complex function, the values from the domain and their images in the range may be separated into real and imag
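As a small numerical sketch (added here for illustration, not from the article), the splitting of a complex function into real and imaginary parts can be checked in Python; the function f(z) = z*z is only an example:

# Illustrative sketch: split a complex function f(z) = z**2 into
# real and imaginary parts u(x, y) and v(x, y).
def f(z: complex) -> complex:
    return z * z

z = complex(3.0, 4.0)          # z = x + iy with x = 3, y = 4
w = f(z)
u, v = w.real, w.imag          # here u = x**2 - y**2 and v = 2*x*y
print(u, v)                    # -7.0 24.0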
https://en.wikipedia.org/wiki/Computer%20program
A computer program is a sequence or set of instructions in a programming language for a computer to execute. It is one component of software, which also includes documentation and other intangible components. A computer program in its human-readable form is called source code. Source code needs another computer program to execute because computers can only execute their native machine instructions. Therefore, source code may be translated to machine instructions using the language's compiler. (Assembly language programs are translated using an assembler.) The resulting file is called an executable. Alternatively, source code may execute within the language's interpreter. If the executable is requested for execution, then the operating system loads it into memory and starts a process. The central processing unit will soon switch to this process so it can fetch, decode, and then execute each machine instruction. If the source code is requested for execution, then the operating system loads the corresponding interpreter into memory and starts a process. The interpreter then loads the source code into memory to translate and execute each statement. Running the source code is slower than running an executable. Moreover, the interpreter must be installed on the computer. Example computer program The "Hello, World!" program is used to illustrate a language's basic syntax. The syntax of the language BASIC (1964) was intentionally limited to make the language easy to learn. For example, variables are not declared before being used. Also, variables are automatically initialized to zero. Here is an example computer program, in BASIC, to average a list of numbers:

10 INPUT "How many numbers to average?", A
20 FOR I = 1 TO A
30 INPUT "Enter number:", B
40 LET C = C + B
50 NEXT I
60 LET D = C/A
70 PRINT "The average is", D
80 END

Once the mechanics of basic computer programming are learned, more sophisticated and powerful languages are available to build large computer systems.
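For comparison, here is a minimal sketch of the same averaging program in Python, an interpreted language (an illustration added here, not part of the original article):

# Python sketch of the BASIC averaging program above.
count = int(input("How many numbers to average? "))
total = 0.0
for _ in range(count):
    total += float(input("Enter number: "))
print("The average is", total / count)

Because Python is interpreted, running this follows the second path described above: the interpreter is loaded into memory, and each statement is translated and executed at run time.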
https://en.wikipedia.org/wiki/Complex%20number
In mathematics, a complex number is an element of a number system that extends the real numbers with a specific element denoted i, called the imaginary unit and satisfying the equation i² = -1; every complex number can be expressed in the form a + bi, where a and b are real numbers. Because no real number satisfies the above equation, i was called an imaginary number by René Descartes. For the complex number a + bi, a is called the real part, and b is called the imaginary part. The set of complex numbers is denoted by either of the symbols ℂ or C. Despite the historical nomenclature "imaginary", complex numbers are regarded in the mathematical sciences as just as "real" as the real numbers and are fundamental in many aspects of the scientific description of the natural world. Complex numbers allow solutions to all polynomial equations, even those that have no solutions in real numbers. More precisely, the fundamental theorem of algebra asserts that every non-constant polynomial equation with real or complex coefficients has a solution which is a complex number. For example, the equation (x + 1)² = -9 has no real solution, since the square of a real number cannot be negative, but has the two nonreal complex solutions -1 + 3i and -1 - 3i. Addition, subtraction and multiplication of complex numbers can be naturally defined by using the rule i² = -1 combined with the associative, commutative, and distributive laws. Every nonzero complex number has a multiplicative inverse. This makes the complex numbers a field that has the real numbers as a subfield. The complex numbers also form a real vector space of dimension two, with {1, i} as a standard basis. This standard basis makes the complex numbers a Cartesian plane, called the complex plane. This allows a geometric interpretation of the complex numbers and their operations, and conversely expressing in terms of complex numbers some geometric properties and constructions. For example, the real numbers form the real line which is identified to the horizontal axis of the complex plane. The complex numbers of a
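A quick numerical sketch in Python (an illustration added here, not from the article) checks the defining rule i² = -1 and the two nonreal solutions of (x + 1)² = -9 mentioned above:

# Illustrative check using Python's built-in complex type (1j denotes i).
i = 1j
print(i * i)                      # (-1+0j), i.e. i squared equals -1

# The two solutions of (x + 1)**2 = -9 quoted in the text.
for x in (-1 + 3j, -1 - 3j):
    print((x + 1) ** 2)           # (-9+0j) in both cases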
https://en.wikipedia.org/wiki/Cryptozoology
Cryptozoology is a pseudoscience and subculture that searches for and studies unknown, legendary, or extinct animals whose present existence is disputed or unsubstantiated, particularly those popular in folklore, such as Bigfoot, the Loch Ness Monster, Yeti, the chupacabra, the Jersey Devil, or the Mokele-mbembe. Cryptozoologists refer to these entities as cryptids, a term coined by the subculture. Because it does not follow the scientific method, cryptozoology is considered a pseudoscience by mainstream science: it is neither a branch of zoology nor of folklore studies. It was originally founded in the 1950s by zoologists Bernard Heuvelmans and Ivan T. Sanderson. Scholars have noted that the subculture rejected mainstream approaches from an early date, and that adherents often express hostility to mainstream science. Scholars have studied cryptozoologists and their influence (including cryptozoology's association with Young Earth creationism), noted parallels in cryptozoology and other pseudosciences such as ghost hunting and ufology, and highlighted uncritical media propagation of cryptozoologist claims. Terminology, history, and approach As a field, cryptozoology originates from the works of Bernard Heuvelmans, a Belgian zoologist, and Ivan T. Sanderson, a Scottish zoologist. Notably, Heuvelmans published On the Track of Unknown Animals (French Sur la Piste des Bêtes Ignorées) in 1955, a landmark work among cryptozoologists that was followed by numerous other like works. Similarly, Sanderson published a series of books that contributed to the developing hallmarks of cryptozoology, including Abominable Snowmen: Legend Come to Life (1961). Heuvelmans himself traced cryptozoology to the work of Anthonie Cornelis Oudemans, who theorized that a large unidentified species of seal was responsible for sea serpent reports. The term cryptozoology dates from 1959 or before—Heuvelmans attributes the coinage of the term cryptozoology 'the study of hidden animals' (from A
https://en.wikipedia.org/wiki/Copenhagen%20interpretation
The Copenhagen interpretation is a collection of views about the meaning of quantum mechanics, stemming from the work of Niels Bohr, Werner Heisenberg, Max Born, and others. The term "Copenhagen interpretation" was apparently coined by Heisenberg during the 1950s to refer to ideas developed in the 1925–1927 period, glossing over his disagreements with Bohr. Consequently, there is no definitive historical statement of what the interpretation entails. Features common across versions of the Copenhagen interpretation include the idea that quantum mechanics is intrinsically indeterministic, with probabilities calculated using the Born rule, and the principle of complementarity, which states that objects have certain pairs of complementary properties that cannot all be observed or measured simultaneously. Moreover, the act of "observing" or "measuring" an object is irreversible, and no truth can be attributed to an object except according to the results of its measurement (that is, the Copenhagen interpretation rejects counterfactual definiteness). Copenhagen-type interpretations hold that quantum descriptions are objective, in that they are independent of physicists' personal beliefs and other arbitrary mental factors. Over the years, there have been many objections to aspects of Copenhagen-type interpretations, including the discontinuous and stochastic nature of the "observation" or "measurement" process, the apparent subjectivity of requiring an observer, the difficulty of defining what might count as a measuring device, and the seeming reliance upon classical physics in describing such devices. Still, including all the variations, the interpretation remains one of the most commonly taught. Background Starting in 1900, investigations into atomic and subatomic phenomena forced a revision to the basic concepts of classical physics. However, it was not until a quarter-century had elapsed that the revision reached the status of a coherent theory. During the intervening
https://en.wikipedia.org/wiki/Category%20theory
Category theory is a general theory of mathematical structures and their relations that was introduced by Samuel Eilenberg and Saunders Mac Lane in the middle of the 20th century in their foundational work on algebraic topology. Category theory is used in almost all areas of mathematics. In particular, numerous constructions of new mathematical objects from previous ones that appear similarly in several contexts are conveniently expressed and unified in terms of categories. Examples include quotient spaces, direct products, completion, and duality. Many areas of computer science also rely on category theory, such as functional programming and semantics. A category is formed by two sorts of objects: the objects of the category, and the morphisms, which relate two objects called the source and the target of the morphism. One often says that a morphism is an arrow that maps its source to its target. Morphisms can be composed if the target of the first morphism equals the source of the second one, and morphism composition has similar properties as function composition (associativity and existence of identity morphisms). Morphisms are often some sort of function, but this is not always the case. For example, a monoid may be viewed as a category with a single object, whose morphisms are the elements of the monoid. The second fundamental concept of category theory is the concept of a functor, which plays the role of a morphism between two categories C1 and C2: it maps objects of C1 to objects of C2 and morphisms of C1 to morphisms of C2 in such a way that sources are mapped to sources and targets are mapped to targets (or, in the case of a contravariant functor, sources are mapped to targets and vice-versa). A third fundamental concept is a natural transformation that may be viewed as a morphism of functors. Categories, objects, and morphisms Categories A category C consists of the following three mathematical entities: A class ob(C), whose elements are called objects; A class
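As a rough illustration of these composition rules in code (an example added here; the names are invented for the sketch), a monoid, here strings under concatenation, can be treated as a category with a single object whose morphisms are the monoid elements:

# Sketch: a monoid viewed as a category with a single object.
# Morphisms are the monoid elements (strings), composition is the monoid
# operation (concatenation), and the identity morphism is the empty string.
OBJ = "*"                        # the single object of the category

def compose(g, f):
    # Composition g after f; every morphism has source and target OBJ.
    return f + g

identity = ""                    # identity morphism on OBJ

f, g, h = "ab", "cd", "ef"
assert compose(h, compose(g, f)) == compose(compose(h, g), f)    # associativity
assert compose(f, identity) == f == compose(identity, f)         # identity laws
print("composition laws hold for this monoid-as-category example")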
https://en.wikipedia.org/wiki/Carbon%20dioxide
Carbon dioxide is a chemical compound with the chemical formula CO2. It is made up of molecules that each have one carbon atom covalently double bonded to two oxygen atoms. It is found in the gas state at room temperature, and as the source of available carbon in the carbon cycle, atmospheric CO2 is the primary carbon source for life on Earth. In the air, carbon dioxide is transparent to visible light but absorbs infrared radiation, acting as a greenhouse gas. Carbon dioxide is soluble in water and is found in groundwater, lakes, ice caps, and seawater. When carbon dioxide dissolves in water, it forms carbonate and mainly bicarbonate (HCO3-), which causes ocean acidification as atmospheric CO2 levels increase. It is a trace gas in Earth's atmosphere at 421 parts per million (ppm), or about 0.04% (as of May 2022), having risen from pre-industrial levels of 280 ppm, or about 0.028%. Burning fossil fuels is the primary cause of these increased concentrations and also the primary cause of climate change. Its concentration in Earth's pre-industrial atmosphere since late in the Precambrian was regulated by organisms and geological phenomena. Plants, algae and cyanobacteria use energy from sunlight to synthesize carbohydrates from carbon dioxide and water in a process called photosynthesis, which produces oxygen as a waste product. In turn, oxygen is consumed and CO2 is released as waste by all aerobic organisms when they metabolize organic compounds to produce energy by respiration. CO2 is released from organic materials when they decay or combust, such as in forest fires. Since plants require CO2 for photosynthesis, and humans and animals depend on plants for food, CO2 is necessary for the survival of life on Earth. Carbon dioxide is 53% more dense than dry air, but is long lived and thoroughly mixes in the atmosphere. About half of excess CO2 emissions to the atmosphere are absorbed by land and ocean carbon sinks. These sinks can become saturated and are volatile, as decay and wildfires result i
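A quick arithmetic sketch in Python (added here for illustration; the molar masses are standard reference values, not figures from the article) checks the ppm-to-percent conversions and the density comparison quoted above:

# Parts-per-million to percent for the two concentrations quoted above.
ppm_now, ppm_preindustrial = 421, 280
print(ppm_now / 1e4, ppm_preindustrial / 1e4)    # 0.0421 and 0.028 (percent)

# Density relative to dry air at the same temperature and pressure scales
# with molar mass.
m_co2, m_dry_air = 44.01, 28.96                  # g/mol, standard values
print((m_co2 / m_dry_air - 1) * 100)             # about 52; the quoted 53% figure
                                                 # comes from tabulated gas densities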
https://en.wikipedia.org/wiki/Circumference
In geometry, the circumference (from Latin circumferens, meaning "carrying around") is the perimeter of a circle or ellipse. The circumference is the arc length of the circle, as if it were opened up and straightened out to a line segment. More generally, the perimeter is the curve length around any closed figure. Circumference may also refer to the circle itself, that is, the locus corresponding to the edge of a disk. The circumference of a sphere is the circumference, or length, of any one of its great circles. Circle The circumference of a circle is the distance around it, but if, as in many elementary treatments, distance is defined in terms of straight lines, this cannot be used as a definition. Under these circumstances, the circumference of a circle may be defined as the limit of the perimeters of inscribed regular polygons as the number of sides increases without bound. The term circumference is used when measuring physical objects, as well as when considering abstract geometric forms. Relationship with π The circumference of a circle is related to one of the most important mathematical constants. This constant, pi, is represented by the Greek letter π. The first few decimal digits of the numerical value of π are 3.141592653589793 ... Pi is defined as the ratio of a circle's circumference C to its diameter d: π = C/d. Or, equivalently, as the ratio of the circumference to twice the radius. The above formula can be rearranged to solve for the circumference: C = πd = 2πr. The ratio of the circle's circumference to its radius is called the circle constant, and is equivalent to 2π. The value 2π is also the number of radians in one turn. The use of the mathematical constant π is ubiquitous in mathematics, engineering, and science. In Measurement of a Circle written circa 250 BCE, Archimedes showed that this ratio (C/d, since he did not use the name π) was greater than 3 + 10/71 but less than 3 + 1/7 by calculating the perimeters of an inscribed and a circumscribed regular polygon of 96 sides. This method for approximating π was used
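The polygon argument can be sketched numerically in Python (an illustration added here, not from the article):

# Circumference from C = 2 * pi * r, and Archimedes-style bounds on pi from
# the perimeters of inscribed and circumscribed regular 96-gons (diameter 1).
import math

r = 2.5
print(2 * math.pi * r)                       # circumference of a circle of radius 2.5

n = 96
inscribed = n * math.sin(math.pi / n)        # perimeter of inscribed 96-gon
circumscribed = n * math.tan(math.pi / n)    # perimeter of circumscribed 96-gon
print(inscribed, circumscribed)              # about 3.1410 and 3.1427, bracketing pi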
https://en.wikipedia.org/wiki/Continuum%20mechanics
Continuum mechanics is a branch of mechanics that deals with the deformation of and transmission of forces through materials modeled as a continuous medium (also called a continuum) rather than as discrete particles. The French mathematician Augustin-Louis Cauchy was the first to formulate such models in the 19th century. Continuum mechanics deals with deformable bodies, as opposed to rigid bodies. A continuum model assumes that the substance of the object completely fills the space it occupies. This ignores the fact that matter is made of atoms, however provides a sufficiently accurate description of matter on length scales much greater than that of inter-atomic distances. The concept of a continuous medium allows for intuitive analysis of bulk matter by using differential equations that describe the behavior of such matter according to physical laws, such as mass conservation, momentum conservation, and energy conservation. Information about the specific material is expressed in constitutive relationships. Continuum mechanics treats the physical properties of solids and fluids independently of any particular coordinate system in which they are observed. These properties are represented by tensors, which are mathematical objects with the salient property of being independent of coordinate systems. This permits definition of physical properties at any point in the continuum, according to mathematically convenient continuous functions. The theories of elasticity, plasticity and fluid mechanics are based on the concepts of continuum mechanics. Concept of a continuum The concept of a continuum underlies the mathematical framework for studying large-scale forces and deformations in materials. Although materials are composed of discrete atoms and molecules, separated by empty space or microscopic cracks and crystallographic defects, physical phenomena can often be modeled by considering a substance distributed throughout some region of space. A continuum is a body th
https://en.wikipedia.org/wiki/Color
Color (American English) or colour (Commonwealth English) is the visual perception based on the electromagnetic spectrum. Though color is not an inherent property of matter, color perception is related to an object's light absorption, reflection, emission spectra and interference. For most humans, colors are perceived in the visible light spectrum with three types of cone cells (trichromacy). Other animals may have a different number of cone cell types or have eyes sensitive to different wavelength, such as bees that can distinguish ultraviolet, and thus have a different color sensitivity range. Animal perception of color originates from different light wavelength or spectral sensitivity in cone cell types, which is then processed by the brain. Colors have perceived properties such as hue, colorfulness (saturation) and luminance. Colors can also be additively mixed (commonly used for actual light) or subtractively mixed (commonly used for materials). If the colors are mixed in the right proportions, because of metamerism, they may look the same as a single-wavelength light. For convenience, colors can be organized in a color space, which when being abstracted as a mathematical color model can assign each region of color with a corresponding set of numbers. As such, color spaces are an essential tool for color reproduction in print, photography, computer monitors and television. The most well-known color models are RGB, CMYK, YUV, HSL and HSV. Because the perception of color is an important aspect of human life, different colors have been associated with emotions, activity, and nationality. Names of color regions in different cultures can have different, sometimes overlapping areas. In visual arts, color theory is used to govern the use of colors in an aesthetically pleasing and harmonious way. The theory of color includes the color complements; color balance; and classification of primary colors (traditionally red, yellow, blue), secondary colors (traditionally or
https://en.wikipedia.org/wiki/Computation
A computation is any type of arithmetic or non-arithmetic calculation that is well-defined. Common examples of computations are mathematical equations and computer algorithms. Mechanical or electronic devices (or, historically, people) that perform computations are known as computers. The study of computation is the field of computability, itself a sub-field of computer science. Introduction The notion that mathematical statements should be 'well-defined' had been argued by mathematicians since at least the 1600s, but agreement on a suitable definition proved elusive. A candidate definition was proposed independently by several mathematicians in the 1930s. The best-known variant was formalised by the mathematician Alan Turing, who defined a well-defined statement or calculation as any statement that could be expressed in terms of the initialisation parameters of a Turing Machine. Other (mathematically equivalent) definitions include Alonzo Church's lambda-definability, Herbrand-Gödel-Kleene's general recursiveness and Emil Post's 1-definability. Today, any formal statement or calculation that exhibits this quality of well-definedness is termed computable, while the statement or calculation itself is referred to as a computation. Turing's definition apportioned "well-definedness" to a very large class of mathematical statements, including all well-formed algebraic statements, and all statements written in modern computer programming languages. Despite the widespread uptake of this definition, there are some mathematical concepts that have no well-defined characterisation under this definition. This includes the halting problem and the busy beaver game. It remains an open question as to whether there exists a more powerful definition of 'well-defined' that is able to capture both computable and 'non-computable' statements. Some examples of mathematical statements that are computable include: All statements characterised in modern programming languages, includ
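The unsolvability of the halting problem mentioned above is usually shown by a diagonalization argument; a minimal sketch in Python follows (added here for illustration; halts is hypothetical and cannot actually be implemented):

# 'halts' is assumed to decide, for any program and argument, whether the
# program halts on that argument.  The sketch shows why no such total
# decider can exist.
def halts(program, argument):
    ...  # hypothetical decider: returns True iff program(argument) halts

def paradox(program):
    if halts(program, program):
        while True:          # predicted to halt, so loop forever
            pass
    return                   # predicted to loop, so halt immediately

# Considering paradox(paradox) contradicts either answer halts could give,
# so the halting problem has no computable characterisation.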
https://en.wikipedia.org/wiki/Smart%20host
A smart host or smarthost is an email server via which third parties can send emails and have them forwarded on to the email recipients' email servers. Smarthosts were originally open mail relays, but most providers now require authentication from the sender, to verify that the sender is authorised – for example, an ISP might run a smarthost for their paying customers only. Use in spam control efforts In an effort to reduce email spam originating from their customer's IP addresses, some internet service providers (ISPs), will not allow their customers to communicate directly with recipient mailservers via the default SMTP port number 25. Instead, often they will set up a smarthost to which their customers can direct all their outward mail – or customers could alternatively use one of the commercial smarthost services. Sometimes, even if an outward port 25 is not blocked, an individual or organisation's normal external IP address has a difficulty in getting SMTP mail accepted. This could be because that IP was assigned in the past to someone who sent spam from it, or appears to be a dynamic address such as typically used for home connection. Whatever the reason for the "poor reputation" or "blacklisting", they can choose to redirect all their email out to an external smarthost for delivery. Reducing complexity When a host runs its own local mail server, a smart host is often used to transmit all mail to other systems through a central mail server. This is used to ease the management of a single mail server with aliases, security, and Internet access rather than maintaining numerous local mail servers. See also Mail submission agent
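A minimal client-side sketch using Python's standard smtplib shows the pattern described above: mail is submitted to an authenticated smarthost, which then forwards it to the recipients' servers (the host name, port and credentials here are placeholders, not values from the article):

# Submit a message through a smarthost on the mail submission port.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "user@example.org"
msg["To"] = "recipient@example.net"
msg["Subject"] = "Relayed via a smart host"
msg.set_content("Hello from a client relaying through a smart host.")

# Port 587 with STARTTLS and authentication is a typical setup when the
# provider blocks direct outbound connections on port 25.
with smtplib.SMTP("smarthost.example.org", 587) as smtp:
    smtp.starttls()
    smtp.login("user@example.org", "app-password")
    smtp.send_message(msg)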
https://en.wikipedia.org/wiki/Gauss%E2%80%93Seidel%20method
In numerical linear algebra, the Gauss–Seidel method, also known as the Liebmann method or the method of successive displacement, is an iterative method used to solve a system of linear equations. It is named after the German mathematicians Carl Friedrich Gauss and Philipp Ludwig von Seidel, and is similar to the Jacobi method. Though it can be applied to any matrix with non-zero elements on the diagonals, convergence is only guaranteed if the matrix is either strictly diagonally dominant, or symmetric and positive definite. It was only mentioned in a private letter from Gauss to his student Gerling in 1823. A publication was not delivered before 1874 by Seidel. Description Let Ax = b be a square system of linear equations, where A is the matrix of coefficients, x is the vector of unknowns, and b is the right-hand side vector. When A and b are known, and x is unknown, we can use the Gauss–Seidel method to approximate x. The vector x(0) denotes our initial guess for x (often x_i(0) = 0 for all i). We denote x(k) as the k-th approximation or iteration of x, and x(k+1) as the next (or k+1) iteration of x. Matrix-based formula The solution is obtained iteratively via x(k+1) = L*^(-1) (b - U x(k)), where the matrix A is decomposed into a lower triangular component L* and a strictly upper triangular component U such that A = L* + U. More specifically, the decomposition of A into L* and U is given by: L* contains the diagonal and below-diagonal entries of A, and U contains the strictly above-diagonal entries. Why the matrix-based formula works The system of linear equations may be rewritten as: L* x = b - U x. The Gauss–Seidel method now solves the left hand side of this expression for x, using the previous value for x on the right hand side. Analytically, this may be written as: x(k+1) = L*^(-1) (b - U x(k)). Element-based formula However, by taking advantage of the triangular form of L*, the elements of x(k+1) can be computed sequentially for each row i using forward substitution: x_i(k+1) = (b_i - Σ_{j<i} a_ij x_j(k+1) - Σ_{j>i} a_ij x_j(k)) / a_ii. Notice that the formula uses two summations per iteration which can be expressed as one summation Σ_{j≠i} a_ij x_j that uses the most recently calculated iteration of x_j. The procedure is generally continued until the changes made by an iteration are below some tolerance, such as a sufficiently small residual. Discussion The element-wise formula for
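A short sketch of the element-based iteration in Python (added here for illustration; the function name and the small test system are chosen for this example):

# Gauss-Seidel iteration using forward substitution: newly updated entries
# x[:i] are used immediately within the same sweep.
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=1000):
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s1 = A[i, :i] @ x[:i]              # already-updated entries
            s2 = A[i, i + 1:] @ x_old[i + 1:]  # entries from the previous sweep
            x[i] = (b[i] - s1 - s2) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# Strictly diagonally dominant test system, so convergence is guaranteed.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([15.0, 10.0, 10.0])
print(gauss_seidel(A, b))          # close to the exact solution of A x = b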
https://en.wikipedia.org/wiki/Jacobi%20method
In numerical linear algebra, the Jacobi method (a.k.a. the Jacobi iteration method) is an iterative algorithm for determining the solutions of a strictly diagonally dominant system of linear equations. Each diagonal element is solved for, and an approximate value is plugged in. The process is then iterated until it converges. This algorithm is a stripped-down version of the Jacobi transformation method of matrix diagonalization. The method is named after Carl Gustav Jacob Jacobi. Description Let Ax = b be a square system of n linear equations, where A is the matrix of coefficients, x is the vector of unknowns, and b is the right-hand side vector. When A and b are known, and x is unknown, we can use the Jacobi method to approximate x. The vector x(0) denotes our initial guess for x (often x_i(0) = 0 for all i). We denote x(k) as the k-th approximation or iteration of x, and x(k+1) as the next (or k+1) iteration of x. Matrix-based formula Then A can be decomposed into a diagonal component D, a lower triangular part L and an upper triangular part U such that A = D + L + U. The solution is then obtained iteratively via x(k+1) = D^(-1) (b - (L + U) x(k)). Element-based formula The element-based formula for each row i is thus: x_i(k+1) = (b_i - Σ_{j≠i} a_ij x_j(k)) / a_ii. The computation of x_i(k+1) requires each element in x(k) except itself. Unlike the Gauss–Seidel method, we can't overwrite x_i(k) with x_i(k+1), as that value will be needed by the rest of the computation. The minimum amount of storage is two vectors of size n. Algorithm Input: initial guess x(0) to the solution, (diagonally dominant) matrix A, right-hand side vector b, convergence criterion Output: solution x Comments: pseudocode based on the element-based formula above

k = 0
while convergence not reached do
    for i := 1 step until n do
        σ = 0
        for j := 1 step until n do
            if j ≠ i then
                σ = σ + a_ij * x_j(k)
            end
        end
        x_i(k+1) = (b_i - σ) / a_ii
    end
    increment k
end

Convergence The standard convergence condition (for any iterative method) is when the spectral radius of the iteration matrix is less than 1: ρ(D^(-1)(L + U)) < 1. A sufficient (but not necessary) condition for the method to converge is that the matrix A is strictly or irreducibly diagonally dominant. Strict row diagonal dominan
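The same iteration translated into a short Python sketch (added here for illustration; names and the test system are chosen for this example). Note the two vectors: unlike Gauss–Seidel, updated entries must not overwrite the previous iterate.

# Jacobi iteration: x(k+1) = D^-1 (b - (L + U) x(k)).
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=1000):
    n = len(b)
    x_old = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    D = np.diag(A)                    # diagonal entries of A
    R = A - np.diagflat(D)            # L + U, i.e. A with its diagonal removed
    for _ in range(max_iter):
        x_new = (b - R @ x_old) / D   # whole sweep uses only the previous iterate
        if np.linalg.norm(x_new - x_old, ord=np.inf) < tol:
            return x_new
        x_old = x_new
    return x_old

# Strictly diagonally dominant test system, so convergence is guaranteed.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([15.0, 10.0, 10.0])
print(jacobi(A, b))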
https://en.wikipedia.org/wiki/Indexing%20Service
Indexing Service (originally called Index Server) was a Windows service that maintained an index of most of the files on a computer to improve searching performance on PCs and corporate computer networks. It updated indexes without user intervention. In Windows Vista it was replaced by the newer Windows Search Indexer. The IFilter plugins that extend the indexing capabilities to more file formats and protocols are compatible between the legacy Indexing Service and the newer Windows Search Indexer. History Indexing Service was a desktop search service included with Windows NT 4.0 Option Pack as well as Windows 2000 and later. The first incarnation of the indexing service was shipped in August 1996 as a content search system for Microsoft's web server software, Internet Information Services. Its origins, however, date further back to Microsoft's Cairo operating system project, with the component serving as the Content Indexer for the Object File System. Cairo was eventually shelved, but the content indexing capabilities would go on to be included as a standard component of later Windows desktop and server operating systems, starting with Windows 2000, which includes Indexing Service 3.0. In Windows Vista, the content indexer was replaced with the Windows Search indexer, which was enabled by default. Indexing Service is still included with Windows Server 2008 but is not installed or running by default. Indexing Service has been deprecated in Windows 7 and Windows Server 2008 R2. It has been removed from Windows 8. Search interfaces Comprehensive searching is available after initial building of the index, which can take hours or days, depending on the size of the specified directories, the speed of the hard drive, user activity, indexer settings and other factors. Searching using Indexing Service also works on UNC paths and/or mapped network drives if the sharing server indexes the appropriate directory and is aware of its sharing. Once the indexing service ha
https://en.wikipedia.org/wiki/Polite%20number
In number theory, a polite number is a positive integer that can be written as the sum of two or more consecutive positive integers. A positive integer which is not polite is called impolite. The impolite numbers are exactly the powers of two, and the polite numbers are the natural numbers that are not powers of two. Polite numbers have also been called staircase numbers because the Young diagrams which represent graphically the partitions of a polite number into consecutive integers (in the French notation of drawing these diagrams) resemble staircases. If all numbers in the sum are strictly greater than one, the numbers so formed are also called trapezoidal numbers because they represent patterns of points arranged in a trapezoid. The problem of representing numbers as sums of consecutive integers and of counting the number of representations of this type has been studied by Sylvester, Mason, Leveque, and many other more recent authors. The polite numbers describe the possible numbers of sides of the Reinhardt polygons. Examples and characterization The first few polite numbers are 3, 5, 6, 7, 9, 10, 11, 12, 13, 14, 15, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, ... . The impolite numbers are exactly the powers of two. It follows from the Lambek–Moser theorem that the nth polite number is f(n + 1), where f(n) = n + floor(log2(n + log2(n))). Politeness The politeness of a positive number is defined as the number of ways it can be expressed as the sum of consecutive integers. For every x, the politeness of x equals the number of odd divisors of x that are greater than one. The politeness of the numbers 1, 2, 3, ... is 0, 0, 1, 0, 1, 1, 1, 0, 2, 1, 1, 1, 1, 1, 3, 0, 1, 2, 1, 1, 3, ... . For instance, the politeness of 9 is 2 because it has two odd divisors, 3 and 9, and two polite representations 9 = 2 + 3 + 4 = 4 + 5; the politeness of 15 is 3 because it has three odd divisors, 3, 5, and 15, and (as is famili
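A short Python sketch (added here for illustration; the function names are invented for this example) computes politeness from the odd divisors and also enumerates the consecutive-integer representations directly:

# Politeness as the count of odd divisors greater than one, plus a direct
# enumeration of representations as sums of consecutive positive integers.
def politeness(x):
    return sum(1 for d in range(3, x + 1, 2) if x % d == 0)

def polite_representations(x):
    reps = []
    for length in range(2, x + 1):
        # Sum of 'length' consecutive integers starting at a equals
        # length*a + length*(length - 1)/2, so solve for a.
        top = x - length * (length - 1) // 2
        if top <= 0:
            break
        if top % length == 0:
            a = top // length
            reps.append(list(range(a, a + length)))
    return reps

print(politeness(9), polite_representations(9))    # 2  [[4, 5], [2, 3, 4]]
print(politeness(15), polite_representations(15))  # 3  [[7, 8], [4, 5, 6], [1, 2, 3, 4, 5]]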
https://en.wikipedia.org/wiki/Northern%20Ireland%20flags%20issue
The Northern Ireland flags issue is one that divides the population along sectarian lines. Depending on political allegiance, people identify with differing flags and symbols, some of which have, or have had, official status in Northern Ireland. Common flags The flag of the United Kingdom, the Union Jack or Union Flag, is the only flag routinely used officially by the sovereign UK government, as well as being flown on most council buildings in Northern Ireland. The Union Flag is often flown by unionists but is disliked by nationalists. British law states that the Union Flag must be flown on designated days from central government buildings in Northern Ireland. The Ulster Banner, the flag of the pre-1973 government of Northern Ireland, was used from 1953 to 1972 by the Stormont government to represent the government of Northern Ireland. That government was granted a royal warrant to fly the Ulster Banner in 1924, but this expired when the government was dissolved under the Northern Ireland Constitution Act 1973. It continues to be used by some sports teams representing Northern Ireland internationally, for example by the Northern Ireland football team, and by the Northern Ireland Commonwealth Games team. The flag of Ireland or Irish tricolour is the state flag of the Republic of Ireland, disliked by Unionists and is regarded by republicans and nationalists as the flag of all of Ireland. The Saint Patrick's Saltire represents Northern Ireland indirectly as Ireland in the Union Jack. It is sometimes flown during Saint Patrick's Day parades in Northern Ireland, and is used to represent Northern Ireland during some royal events. Other flags flown by socialist republicans include the Starry Plough, the Sunburst flag and even the flag of the Ulster province. Loyalists sometimes display the flag of Scotland as a sign of their Scottish ancestry. Ulster nationalists use the unofficial 'Ulster Nation flag', although it has now been adopted as an Ulster-Scots flag. Controver
https://en.wikipedia.org/wiki/R10000
The R10000, code-named "T5", is a RISC microprocessor implementation of the MIPS IV instruction set architecture (ISA) developed by MIPS Technologies, Inc. (MTI), then a division of Silicon Graphics, Inc. (SGI). The chief designers are Chris Rowen and Kenneth C. Yeager. The R10000 microarchitecture is known as ANDES, an abbreviation for Architecture with Non-sequential Dynamic Execution Scheduling. The R10000 largely replaces the R8000 in the high-end and the R4400 elsewhere. MTI was a fabless semiconductor company; the R10000 was fabricated by NEC and Toshiba. Previous fabricators of MIPS microprocessors such as Integrated Device Technology (IDT) and three others did not fabricate the R10000 as it was more expensive to do so than the R4000 and R4400. History The R10000 was introduced in January 1996 at clock frequencies of 175 MHz and 195 MHz. A 150 MHz version was introduced in the O2 product line in 1997, but discontinued shortly after due to customer preference for the 175 MHz version. The R10000 was not available in large volumes until later in the year due to fabrication problems at MIPS's foundries. The 195 MHz version was in short supply throughout 1996, and was priced at US$3,000 as a result. On 25 September 1996, SGI announced that R10000s fabricated by NEC between March and the end of July that year were faulty, drawing too much current and causing systems to shut down during operation. SGI recalled 10,000 R10000s that had shipped in systems as a result, which impacted the company's earnings. In 1997, a version of R10000 fabricated in a 0.25 µm process enabled the microprocessor to reach 250 MHz. Users Users of the R10000 include:

SGI: Indigo2 (IMPACT Generation), Octane and O2 workstations; Challenge and Origin 2000 servers; Onyx and Onyx2 supercomputers
NEC, in its Cenju-4 supercomputer
Siemens Nixdorf, in its servers running SINIX
Tandem Computers, in its Himalaya fault-tolerant servers

Description The R10000 is a four-way superscalar design
https://en.wikipedia.org/wiki/R4000
The R4000 is a microprocessor developed by MIPS Computer Systems that implements the MIPS III instruction set architecture (ISA). Officially announced on 1 October 1991, it was one of the first 64-bit microprocessors and the first MIPS III implementation. In the early 1990s, when RISC microprocessors were expected to replace CISC microprocessors such as the Intel i486, the R4000 was selected to be the microprocessor of the Advanced Computing Environment (ACE), an industry standard that intended to define a common RISC platform. ACE ultimately failed for a number of reasons, but the R4000 found success in the workstation and server markets. Models There are three configurations of the R4000: the R4000PC, an entry-level model with no support for a secondary cache; the R4000SC, a model with secondary cache but no multiprocessor capability; and the R4000MC, a model with secondary cache and support for the cache coherency protocols required by multiprocessor systems. Description The R4000 is a scalar superpipelined microprocessor with an eight-stage integer pipeline. During the first stage (IF), a virtual address for an instruction is generated and the instruction translation lookaside buffer (TLB) begins the translation of the address to a physical address. In the second stage (IS), translation is completed and the instruction is fetched from an internal 8 KB instruction cache. The instruction cache is direct-mapped and virtually indexed, physically tagged. It has a 16- or 32-byte line size. Architecturally, it could be expanded to 32 KB. During the third stage (RF), the instruction is decoded and the register file is read. The MIPS III defines two register files, one for the integer unit and the other for floating-point. Each register file is 64 bits wide and contained 32 entries. The integer register file has two read ports and one write port, while the floating-point register file has two read ports and two write ports. Execution begins at stage four (EX) for bo
https://en.wikipedia.org/wiki/R5000
The R5000 is a 64-bit, bi-endian, two-issue, in-order superscalar microprocessor that implements the MIPS IV instruction set architecture (ISA). It was developed by Quantum Effect Design (QED) in 1996; the project was funded by MIPS Technologies, Inc (MTI), which was also the licensor. MTI then licensed the design to Integrated Device Technology (IDT), NEC, NKK, and Toshiba. The R5000 succeeded the QED R4600 and R4700 as their flagship high-end embedded microprocessor. IDT marketed its version of the R5000 as the 79RV5000, NEC as the VR5000, NKK as the NR5000, and Toshiba as the TX5000. The R5000 design was sold to PMC-Sierra when that company acquired QED. Derivatives of the R5000 are still in production today for embedded systems. Users Users of the R5000 in workstation and server computers were Silicon Graphics, Inc. (SGI) and Siemens-Nixdorf. SGI used the R5000 in their O2 and Indy low-end workstations. The R5000 was also used in embedded systems such as network routers and high-end printers. The R5000 found its way into the arcade gaming industry: R5000-powered mainboards were used by Atari and Midway. Initially the Cobalt Qube and Cobalt RaQ used a derivative model, the RM5230 and RM5231. The Qube 2700 used the RM5230 microprocessor, whereas the Qube 2 used the RM5231. The original RaQ systems were equipped with RM5230 or RM5231 CPUs but later models used AMD K6-2 chips and then eventually Intel Pentium III CPUs for the final models. History The original roadmap called for 200 MHz operation in early 1996 and 250 MHz in late 1996, to be succeeded in 1997 by the R5000A. The R5000 was introduced in January 1996 and failed to achieve 200 MHz, topping out at 180 MHz. When positioned as a low-end workstation microprocessor, the competition included the IBM and Motorola PowerPC 604, the HP PA-7300LC and the Intel Pentium Pro. Description The R5000 is a two-way superscalar design that executes instructions in-order. The R5000 could simultaneously issue an integer and a floating-point instru
https://en.wikipedia.org/wiki/R8000
The R8000 is a microprocessor chipset developed by MIPS Technologies, Inc. (MTI), Toshiba, and Weitek. It was the first implementation of the MIPS IV instruction set architecture. The R8000 is also known as the TFP, for Tremendous Floating-Point, its name during development. History Development of the R8000 started in the early 1990s at Silicon Graphics, Inc. (SGI). The R8000 was specifically designed to provide the performance of circa 1990s supercomputers with a microprocessor instead of a central processing unit (CPU) built from many discrete components such as gate arrays. At the time, the performance of traditional supercomputers was not advancing as rapidly as reduced instruction set computer (RISC) microprocessors. It was predicted that RISC microprocessors would eventually match the performance of more expensive and larger supercomputers at a fraction of the cost and size, making computers with this level of performance more accessible and enabling deskside workstations and servers to replace supercomputers in many situations. First details of the R8000 emerged in April 1992 in an announcement by MIPS Computer Systems detailing future MIPS microprocessors. In March 1992, SGI announced it was acquiring MIPS Computer Systems, which became a subsidiary of SGI called MIPS Technologies, Inc. (MTI) in mid-1992. Development of the R8000 was transferred to MTI, where it continued. The R8000 was expected to be introduced in 1993, but it was delayed until mid-1994. The first R8000, a 75 MHz part, was introduced on 7 June 1994. It was priced at US$2,500 at the time. In mid-1995, a 90 MHz part appeared in systems from SGI. The R8000's high cost and narrow market (technical and scientific computing) restricted its market share, and although it was popular in its intended market, it was largely replaced with the cheaper and generally better performing R10000 introduced January 1996. Users of the R8000 were SGI, who used it in their Power Indigo2 workstation, Power Chal
https://en.wikipedia.org/wiki/Hexachlorobenzene
Hexachlorobenzene, or perchlorobenzene, is an organochloride with the molecular formula C6Cl6. It is a fungicide formerly used as a seed treatment, especially on wheat to control the fungal disease bunt. It has been banned globally under the Stockholm Convention on Persistent Organic Pollutants. Physical and chemical properties Hexachlorobenzene is a stable, white, crystalline chlorinated hydrocarbon. It is sparingly soluble in organic solvents such as benzene, diethyl ether and alcohol, but practically insoluble in water with no reaction. It has a flash point of 468 °F (242 °C) and it is stable under normal temperatures and pressures. It is combustible but it does not ignite readily. When heated to decomposition, hexachlorobenzene emits highly toxic fumes of hydrochloric acid, other chlorinated compounds (such as phosgene), carbon monoxide, and carbon dioxide. History Hexachlorobenzene was first known as "Julin's chloride of carbon" as it was discovered as a strange and unexpected product of impurities reacting in Julin's nitric acid factory. In 1864, Hugo Müller synthesised the compound by the reaction of benzene and antimony pentachloride; he then suggested that his compound was the same as Julin's chloride of carbon. Müller previously also believed it was the same compound as Michael Faraday's "perchloride of carbon" and obtained a small sample of Julin's chloride of carbon to send to Richard Phillips and Faraday for investigation. In 1867, Henry Bassett proved that those were the same compounds and named it hexachlorobenzene. Leopold Gmelin named it "dichloride of carbon" and claimed that the carbon was derived from cast iron and the chlorine was from crude saltpetre. Victor Regnault obtained hexachlorobenzene from the decomposition of chloroform and tetrachloroethylene vapours through a red-hot tube. Synthesis Hexachlorobenzene has been made on a laboratory scale since the 1890s, by the electrophilic aromatic substitution reaction of chlorine with benzene or chlorob
https://en.wikipedia.org/wiki/Monoamine%20oxidase%20A
Monoamine oxidase A, also known as MAO-A, is an enzyme (E.C. 1.4.3.4) that in humans is encoded by the MAOA gene. This gene is one of two neighboring gene family members that encode mitochondrial enzymes which catalyze the oxidative deamination of amines, such as dopamine, norepinephrine, and serotonin. A mutation of this gene results in Brunner syndrome. This gene has also been associated with a variety of other psychiatric disorders, including antisocial behavior. Alternatively spliced transcript variants encoding multiple isoforms have been observed. Structures Gene Monoamine oxidase A, also known as MAO-A, is an enzyme that in humans is encoded by the MAOA gene. The promoter of MAOA contains conserved binding sites for Sp1, GATA2, and TBP. This gene is adjacent to a related gene (MAOB) on the opposite strand of the X chromosome. In humans, there is a 30-base repeat sequence repeated several different numbers of times in the promoter region of MAO-A. There are 2R (two repeats), 3R, 3.5R, 4R, and 5R variants of the repeat sequence, with the 3R and 4R variants most common in all populations. The variants of the promoter have been found to appear at different frequencies in different ethnic groups in an American sample cohort. The epigenetic modification of MAOA gene expression through methylation likely plays an important role in women. A study from 2010 found epigenetic methylation of MAOA in men to be very low and with little variability compared to women, while having higher heritability in men than women. Protein MAO-A shares 70% amino acid sequence identity with its homologue MAO-B. Accordingly, both proteins have similar structures. Both MAO-A and MAO-B exhibit an N-terminal domain that binds flavin adenine dinucleotide (FAD), a central domain that binds the amine substrate, and a C-terminal α-helix that is inserted in the outer mitochondrial membrane. MAO-A has a slightly larger substrate-binding cavity than MAO-B, which may be the cause of slight
https://en.wikipedia.org/wiki/Mylohyoid%20line
The mylohyoid line is a bony ridge on the internal surface of the mandible. It runs posterosuperiorly. It is the site of origin of the mylohyoid muscle, the superior pharyngeal constrictor muscle, and the pterygomandibular raphe. Structure The mylohyoid line is a bony ridge on the internal surface of the body of the mandible. The mylohyoid line extends posterosuperiorly. The mylohyoid line continues as the mylohyoid groove on the internal surface of the ramus. The mylohyoid muscle originates from the anterior (front) part of the mylohyoid line. Rarely, the mylohyoid muscle may originate partially from other surfaces of the mandible. The posterior (back) part of this line, near the alveolar margin, gives attachment to a small part of the superior pharyngeal constrictor muscle, and to the pterygomandibular raphe. Function The mylohyoid line is the site of attachment of many muscles, including the mylohyoid muscle, and the superior pharyngeal constrictor muscle. It is also the site of attachment of the pterygomandibular raphe. Additional images
https://en.wikipedia.org/wiki/Pharyngeal%20raphe
The pharyngeal raphe is a raphe that serves as the origin and insertion for several of the pharyngeal constrictors (thyropharyngeal part of the inferior pharyngeal constrictor muscle, middle pharyngeal constrictor muscle, superior pharyngeal constrictor muscle). Two sides of the pharyngeal wall are joined posteriorly in the midline by the raphe. Superiorly, it attaches to the pharyngeal tubercle; inferiorly, it extends to the level of vertebra C6 where it blends with the posterior wall of the esophagus. External links Illustration (#32) Human head and neck
https://en.wikipedia.org/wiki/Noise%20barrier
A noise barrier (also called a soundwall, noise wall, sound berm, sound barrier, or acoustical barrier) is an exterior structure designed to protect inhabitants of sensitive land use areas from noise pollution. Noise barriers are the most effective method of mitigating roadway, railway, and industrial noise sources – other than cessation of the source activity or use of source controls. In the case of surface transportation noise, other methods of reducing the source noise intensity include encouraging the use of hybrid and electric vehicles, improving automobile aerodynamics and tire design, and choosing low-noise paving material. Extensive use of noise barriers began in the United States after noise regulations were introduced in the early 1970s. History Noise barriers have been built in the United States since the mid-twentieth century, when vehicular traffic burgeoned. The first noise barrier was built along I-680 in Milpitas, California. In the late 1960s, analytic acoustical technology emerged to mathematically evaluate the efficacy of a noise barrier design adjacent to a specific roadway. By the 1990s, noise barriers that included use of transparent materials were being designed in Denmark and other western European countries. The best of these early computer models considered the effects of roadway geometry, topography, vehicle volumes, vehicle speeds, truck mix, road surface type, and micro-meteorology. Several U.S. research groups developed variations of the computer modeling techniques: Caltrans Headquarters in Sacramento, California; the ESL Inc. group in Sunnyvale, California; the Bolt, Beranek and Newman group in Cambridge, Massachusetts, and a research team at the University of Florida. Possibly the earliest published work that scientifically designed a specific noise barrier was the study for the Foothill Expressway in Los Altos, California. Numerous case studies across the U.S. soon addressed dozens of different existing and planned highways. Most were co
https://en.wikipedia.org/wiki/Canadian%20Geophysical%20Union
The Canadian Geophysical Union (French: Union géophysique canadienne) (CGU) began as a society dedicated to the scientific study of the solid earth and has evolved into one that is concerned with all aspects of the physical study of Earth and its space environment, including the Sun and solar system. To express this broader vision of the geophysical sciences, the Union has adopted a sectional structure that allows individual sections to function as semi-autonomous entities. Goals Advance and promote the scientific study of Earth and its environment in space and to serve as a national focus for the geophysical sciences in Canada. Foster cooperation between the Canadian geophysical community and other national and international scientific organizations. Encourage communication through the organization and sponsorship of conferences and the publication of scientific results. Promote integration of geophysical knowledge with that of other sciences concerned with the improvement of life on Earth. History On October 24, 1945, the National Research Council (NRC) of Canada convened the first meeting of an Associate Committee to advise it on the needs of geophysics, with J.T.Wilson as the Chairman of the committee. In 1946, this committee was amalgamated with the Canadian committee for the International Union of Geodesy and Geophysics (IUGG) to form the Associate Committee of Geodesy and Geophysics (ACGG) of the NRC. Activities of geophysicists in Canada were coordinated by ACGG by forming a number of subcommittees. In 1974, the ACGG was replaced by a professional society called "The Canadian Geophysical Union, a joined Division of the Geological Association of Canada (GAC) and of the Canadian Association of Physicists (CAP)", and with J.T.Wilson as its first president. The Canadian Geophysical Union became an independent organization in 1988, but today geophysicists still can join CGU by joining CAP or the Geophysics Division of GAC. In 1993, the CGU formed a Hydrolog
https://en.wikipedia.org/wiki/Mandibular%20symphysis
In human anatomy, the external surface of the mandible, part of the facial skeleton of the skull, is marked in the median line by a faint ridge, indicating the mandibular symphysis (Latin: symphysis menti) or line of junction where the two lateral halves of the mandible typically fuse at an early period of life (1–2 years). It is not a true symphysis as there is no cartilage between the two sides of the mandible. This ridge divides below and encloses a triangular eminence, the mental protuberance, the base of which is depressed in the center but raised on either side to form the mental tubercle. The lowest (most inferior) end of the mandibular symphysis — the point of the chin — is called the "menton". It serves as the origin for the geniohyoid and the genioglossus muscles. Other animals Solitary mammalian carnivores that rely on a powerful canine bite to subdue their prey have a strong mandibular symphysis, while pack hunters delivering shallow bites have a weaker one. When filter feeding, the baleen whales, of the suborder Mysticeti, can dynamically expand their oral cavity in order to accommodate enormous volumes of sea water. This is made possible thanks to their mandibular skull joints, especially the elastic mandibular symphysis which permits both dentaries to be rotated independently in two planes. This flexible jaw, which made the titanic body sizes of baleen whales possible, is not present in early whales and most likely evolved within Mysticeti.
https://en.wikipedia.org/wiki/Li%27s%20criterion
In number theory, Li's criterion is a particular statement about the positivity of a certain sequence that is equivalent to the Riemann hypothesis. The criterion is named after Xian-Jin Li, who presented it in 1997. In 1999, Enrico Bombieri and Jeffrey C. Lagarias provided a generalization, showing that Li's positivity condition applies to any collection of points that lie on the Re(s) = 1/2 axis. Definition The Riemann ξ function is given by ξ(s) = (1/2) s (s - 1) π^(-s/2) Γ(s/2) ζ(s), where ζ is the Riemann zeta function. Consider the sequence λ_n = (1/(n - 1)!) d^n/ds^n [s^(n-1) log ξ(s)], evaluated at s = 1. Li's criterion is then the statement that the Riemann hypothesis is equivalent to the statement that λ_n > 0 for every positive integer n. The numbers λ_n (sometimes defined with a slightly different normalization) are called Keiper–Li coefficients or Li coefficients. They may also be expressed in terms of the non-trivial zeros of the Riemann zeta function: λ_n = Σ_ρ [1 - (1 - 1/ρ)^n], where the sum extends over ρ, the non-trivial zeros of the zeta function. This conditionally convergent sum should be understood in the sense that is usually used in number theory, namely, that Σ_ρ = lim_{N→∞} Σ_{|Im(ρ)| ≤ N}. (Re(s) and Im(s) denote the real and imaginary parts of s, respectively.) The positivity of λ_n has been verified for an initial range of n by direct computation. Proof Note that the map s = 1/(1 - z) sends the open unit disk |z| < 1 onto the half-plane Re(s) > 1/2. Then, starting with the entire function ξ, let φ(z) = ξ(1/(1 - z)). φ vanishes when 1/(1 - z) = ρ, that is, at z = 1 - 1/ρ. Hence, log φ is holomorphic on the unit disk iff Re(ρ) ≤ 1/2 for every non-trivial zero ρ, which, by the functional-equation symmetry ρ ↔ 1 - ρ, is equivalent to the Riemann hypothesis. Write the Taylor series log φ(z) = c_0 + Σ_{n≥1} c_n z^n. Since the definition of λ_n above is equivalent to the expansion (d/dz) log ξ(1/(1 - z)) = Σ_{n≥1} λ_n z^(n-1), we have c_n = λ_n/n, so that the λ_n are, up to the factor n, the Taylor coefficients of log φ. Finally, if each zero ρ comes paired with its complex conjugate, then we may combine terms to get λ_n = Σ_ρ [1 - Re((1 - 1/ρ)^n)]. The condition then becomes equivalent to the nonnegativity of this sum for every n. Each summand is obviously nonnegative when |1 - 1/ρ| ≤ 1, that is, when Re(ρ) ≥ 1/2; since the zeros come in pairs ρ and 1 - ρ, this holds for every zero exactly when Re(ρ) = 1/2. Conversely, ordering the ρ by |1 - 1/ρ|, we see that if some zero lies off the critical line, the terms with |1 - 1/ρ| > 1 dominate the sum as n → ∞, and hence λ_n becomes negative sometimes. A generalization Bombieri and Lagarias demonstrate that a similar criterion holds for any collection of complex numbers, and is thus not restricted to the Riemann hypothesis. More precisely, let R = {ρ} be any collection of complex numbers ρ, not containing ρ = 1,
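As a numerical sketch (added here for illustration; the helper names are invented, and the finite-difference evaluation is only reliable for small n), the first few Li coefficients can be approximated directly from the derivative definition using mpmath:

# Approximate the Keiper-Li coefficients lambda_n from the derivative
# definition above, using arbitrary-precision arithmetic and numerical
# differentiation.  Accuracy degrades as n grows.
from mpmath import mp, mpf, pi, gamma, zeta, log, diff, factorial

mp.dps = 30

def xi(s):
    # Completed Riemann xi function; s = 1 is a removable singularity, xi(1) = 1/2.
    if s == 1:
        return mpf("0.5")
    return mpf("0.5") * s * (s - 1) * pi**(-s / 2) * gamma(s / 2) * zeta(s)

def li_coefficient(n):
    # lambda_n = (1/(n-1)!) * d^n/ds^n [ s^(n-1) * log(xi(s)) ] at s = 1
    return diff(lambda s: s**(n - 1) * log(xi(s)), 1, n) / factorial(n - 1)

for n in range(1, 5):
    print(n, li_coefficient(n))   # lambda_1 should come out near 0.0231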
https://en.wikipedia.org/wiki/National%20Internet%20registry
A national Internet registry (or NIR) is an organization under the umbrella of a regional Internet registry with the task of coordinating IP address allocations and other Internet resource management functions at a national level within a country or economic unit. NIRs operate primarily in the Asia Pacific region, under the authority of APNIC, the regional Internet registry for that region. The following NIRs are currently operating in the APNIC region:

IDNIC-APJII (Indonesia Network Information Centre-Asosiasi Penyelenggara Jasa Internet Indonesia)
CNNIC, China Internet Network Information Center
JPNIC, Japan Network Information Center
KRNIC, Korea Internet & Security Agency
TWNIC, Taiwan Network Information Center
VNNIC, Vietnam Internet Network Information Center
Indian Registry for Internet Names and Numbers

The following NIRs are currently operating in the Latin American (LACNIC) region:

NIC Mexico
NIC.br

There are no NIRs operating in the RIPE NCC region. See also
Country code top-level domain
Geolocation software
Internet governance
Local Internet registry
https://en.wikipedia.org/wiki/Red%20yeast%20rice
Red yeast rice (), red rice koji (べにこうじ, lit. 'red koji'), red fermented rice, red kojic rice, red koji rice, anka, or angkak, is a bright reddish purple fermented rice, which acquires its color from being cultivated with the mold Monascus purpureus. Red yeast rice is what is referred to as a "koji" in Japanese, meaning "grain or bean overgrown with a mold culture", a food preparation tradition going back to ca. 300 BC. In both the scientific and popular literature in English that draws principally on Japanese traditional use, red yeast rice is most often referred to as "red rice koji." English language articles favoring Chinese literature sources prefer the translation "red yeast rice." In addition to its culinary use, red yeast rice is also used in Chinese herbology and Traditional Chinese medicine, possibly during the Tang dynasty around AD 800. Red yeast rice is described in the Chinese pharmacopoeia Ben Cao Gang Mu by Li Shizhen. A modern-era use as a dietary supplement developed in the late 1970s after researchers were isolating lovastatin from Aspergillus and monacolins from Monascus, the latter being the same fungus used to make red yeast rice. Chemical analysis soon showed that lovastatin and monacolin K were identical. Lovastatin became the patented prescription drug Mevacor. Red yeast rice went on to become a non-prescription dietary supplement in the United States and other countries. In 1998, the U.S. Food and Drug Administration (FDA) initiated action to ban a dietary supplement containing red yeast rice extract, stating that red yeast rice products containing monacolin K are identical to a prescription drug, and thus subject to regulation as a drug. Production Red yeast rice is produced by cultivating the mold species Monascus purpureus on rice for 3–6 days at room temperature. The rice grains turn bright red at the core and reddish purple on the outside. The fully cultured rice is then either sold as the dried grain, or cooked and pasteurized to b
https://en.wikipedia.org/wiki/Bartter%20syndrome
Bartter syndrome (BS) is a rare inherited disease characterised by a defect in the thick ascending limb of the loop of Henle, which results in low potassium levels (hypokalemia), increased blood pH (alkalosis), and normal to low blood pressure. There are two types of Bartter syndrome: neonatal and classic. A closely associated disorder, Gitelman syndrome, is milder than both subtypes of Bartter syndrome. Signs and symptoms In 90% of cases, neonatal Bartter syndrome is seen between 24 and 30 weeks of gestation with excess amniotic fluid (polyhydramnios). After birth, the infant is seen to urinate and drink excessively (polyuria, and polydipsia, respectively). Life-threatening dehydration may result if the infant does not receive adequate fluids. About 85% of infants dispose of excess amounts of calcium in the urine (hypercalciuria) and kidneys (nephrocalcinosis), which may lead to kidney stones. In rare occasions, the infant may progress to kidney failure. Patients with classic Bartter syndrome may have symptoms in the first two years of life, but they are usually diagnosed at school age or later. Like infants with the neonatal subtype, patients with classic Bartter syndrome also have polyuria, polydipsia, and a tendency to dehydration, but normal or just slightly increased urinary calcium excretion without the tendency to develop kidney stones. These patients also have vomiting and growth retardation. Kidney function is also normal if the disease is treated, but occasionally patients proceed to end-stage kidney failure. Bartter syndrome consists of low levels of potassium in the blood, alkalosis, normal to low blood pressures, and elevated plasma renin and aldosterone. Numerous causes of this syndrome probably exist. Diagnostic pointers include high urinary potassium and chloride despite low serum values, increased plasma renin, hyperplasia of the juxtaglomerular apparatus on kidney biopsy, and careful exclusion of diuretic abuse. Excess production of prostaglandi
https://en.wikipedia.org/wiki/The%20Physiological%20Society
The Physiological Society, founded in 1876, is a learned society for physiologists in the United Kingdom. History The Physiological Society was founded in 1876 as a dining society "for mutual benefit and protection" by a group of 19 physiologists, led by John Burdon Sanderson and Michael Foster, as a result of the 1875 Royal Commission on Vivisection and the subsequent 1876 Cruelty to Animals Act. Other founding members included: William Sharpey, Thomas Huxley, George Henry Lewes, Francis Galton, John Marshall, George Murray Humphry, Frederick William Pavy, Lauder Brunton, David Ferrier, Philip Pye-Smith, Walter H. Gaskell, John Gray McKendrick, Emanuel Edward Klein, Edward Schafer, Francis Darwin, George Romanes, and Gerald Yeo. The aim was to promote the advancement of physiology. Charles Darwin and William Sharpey were elected as the society's first two Honorary Members. The society first met at Sanderson's London home. The first rules of the society offered membership to no more than 40, all of whom should be male "working" physiologists. Women were first admitted as members in 1915 and the centenary of this event was celebrated in 2015. Michael Foster was also founder of The Journal of Physiology in 1878, and was appointed to the first Chair of Physiology at the University of Cambridge in 1883. The archives are held at the Wellcome Library. Present day The Society consists of over 2500 members, including 14 Nobel Laureates drawn from over 50 countries. The majority of members are engaged in research, in universities or industry, into how the body works in health and disease and in teaching physiology in schools and universities. The Society also facilitates communication between scientists and with other interested groups. The Physiological Society publishes the academic journals The Journal of Physiology and Experimental Physiology, and with the American Physiological Society publishes the online only, open access journal Physiological Reports. It also
https://en.wikipedia.org/wiki/Pi-system
In mathematics, a -system (or pi-system) on a set is a collection of certain subsets of such that is non-empty. If then That is, is a non-empty family of subsets of that is closed under non-empty finite intersections. The importance of -systems arises from the fact that if two probability measures agree on a -system, then they agree on the -algebra generated by that -system. Moreover, if other properties, such as equality of integrals, hold for the -system, then they hold for the generated -algebra as well. This is the case whenever the collection of subsets for which the property holds is a -system. -systems are also useful for checking independence of random variables. This is desirable because in practice, -systems are often simpler to work with than -algebras. For example, it may be awkward to work with -algebras generated by infinitely many sets So instead we may examine the union of all -algebras generated by finitely many sets This forms a -system that generates the desired -algebra. Another example is the collection of all intervals of the real line, along with the empty set, which is a -system that generates the very important Borel -algebra of subsets of the real line. Definitions A -system is a non-empty collection of sets that is closed under non-empty finite intersections, which is equivalent to containing the intersection of any two of its elements. If every set in this -system is a subset of then it is called a For any non-empty family of subsets of there exists a -system called the , that is the unique smallest -system of containing every element of It is equal to the intersection of all -systems containing and can be explicitly described as the set of all possible non-empty finite intersections of elements of A non-empty family of sets has the finite intersection property if and only if the -system it generates does not contain the empty set as an element. Examples For any real numbers and the intervals form a
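The set-builder formulas in this definition were stripped during extraction; as a sketch in standard notation (P for the collection and Ω for the underlying set are our choices of symbol), a π-system is a collection P of subsets of Ω satisfying

P \neq \varnothing, \qquad A \in P \text{ and } B \in P \;\Longrightarrow\; A \cap B \in P.

For the interval example mentioned at the end, the intervals (-\infty, a] for a \in \mathbb{R}, together with the empty set, form a π-system that generates the Borel σ-algebra of the real line, as do the half-open intervals (a, b] with a \le b.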
https://en.wikipedia.org/wiki/Chakravala%20method
The chakravala method () is a cyclic algorithm to solve indeterminate quadratic equations, including Pell's equation. It is commonly attributed to Bhāskara II, (c. 1114 – 1185 CE) although some attribute it to Jayadeva (c. 950 ~ 1000 CE). Jayadeva pointed out that Brahmagupta's approach to solving equations of this type could be generalized, and he then described this general method, which was later refined by Bhāskara II in his Bijaganita treatise. He called it the Chakravala method: chakra meaning "wheel" in Sanskrit, a reference to the cyclic nature of the algorithm. C.-O. Selenius held that no European performances at the time of Bhāskara, nor much later, exceeded its marvellous height of mathematical complexity. This method is also known as the cyclic method and contains traces of mathematical induction. History Chakra in Sanskrit means cycle. As per popular legend, Chakravala indicates a mythical range of mountains which orbits around the Earth like a wall and not limited by light and darkness. Brahmagupta in 628 CE studied indeterminate quadratic equations, including Pell's equation for minimum integers x and y. Brahmagupta could solve it for several N, but not all. Jayadeva (9th century) and Bhaskara (12th century) offered the first complete solution to the equation, using the chakravala method to find for the solution This case was notorious for its difficulty, and was first solved in Europe by Brouncker in 1657–58 in response to a challenge by Fermat, using continued fractions. A method for the general problem was first completely described rigorously by Lagrange in 1766. Lagrange's method, however, requires the calculation of 21 successive convergents of the continued fraction for the square root of 61, while the chakravala method is much simpler. Selenius, in his assessment of the chakravala method, states "The method represents a best approximation algorithm of minimal length that, owing to several minimization properties, with minimal effort
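To make the cyclic step concrete, here is a minimal Python sketch of the chakravala iteration for x² − Ny² = 1 (the function name and the greedy choice of m are our own rendering of the textbook description, not code from the source; the modular inverse via pow requires Python 3.8 or later):

from math import isqrt

def chakravala(N):
    # Least positive solution (x, y) of x^2 - N*y^2 = 1 for non-square N > 0.
    a = isqrt(N)
    if a * a == N:
        raise ValueError("N must not be a perfect square")
    b, k = 1, a * a - N                      # invariant: a^2 - N*b^2 = k
    while k != 1:
        # pick m > 0 with (a + b*m) divisible by |k| and |m^2 - N| minimal
        m = (-a * pow(b, -1, abs(k))) % abs(k)
        if m == 0:
            m = abs(k)
        while abs((m + abs(k)) ** 2 - N) < abs(m * m - N):
            m += abs(k)
        a, b, k = (a * m + N * b) // abs(k), (a + b * m) // abs(k), (m * m - N) // k
    return a, b

print(chakravala(61))    # the case posed by Fermat: (1766319049, 226153980)

The composition step is Brahmagupta's identity; dividing by |k| keeps a and b integral because (a + bm), (am + Nb) and (m² − N) are all divisible by k when m is chosen in the residue class above.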
https://en.wikipedia.org/wiki/Seqlock
A seqlock (short for sequence lock) is a special locking mechanism used in Linux for supporting fast writes of shared variables between two parallel operating system routines. The semantics stabilized as of version 2.5.59, and they are present in the 2.6.x stable kernel series. The seqlocks were developed by Stephen Hemminger and originally called frlocks, based on earlier work by Andrea Arcangeli. The first implementation was in the x86-64 time code where it was needed to synchronize with user space where it was not possible to use a real lock. It is a reader–writer consistent mechanism which avoids the problem of writer starvation. A seqlock consists of storage for saving a sequence number in addition to a lock. The lock is to support synchronization between two writers and the counter is for indicating consistency in readers. In addition to updating the shared data, the writer increments the sequence number, both after acquiring the lock and before releasing the lock. Readers read the sequence number before and after reading the shared data. If the sequence number is odd on either occasion, a writer had taken the lock while the data was being read and it may have changed. If the sequence numbers are different, a writer has changed the data while it was being read. In either case readers simply retry (using a loop) until they read the same even sequence number before and after. The reader never blocks, but it may have to retry if a write is in progress; this speeds up the readers in the case where the data was not modified, since they do not have to acquire the lock as they would with a traditional read–write lock. Also, writers do not wait for readers, whereas with traditional read–write locks they do, leading to potential resource starvation in a situation where there are a number of readers (because the writer must wait for there to be no readers). Because of these two factors, seqlocks are more efficient than traditional read–write locks for the situation
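As an illustration of the discipline just described (not the actual kernel API), here is a toy Python sketch; real seqlocks depend on compiler and memory barriers that this illustration glosses over, and the class and method names are ours:

import threading

class SeqLock:
    def __init__(self):
        self._seq = 0                         # even: no write in progress
        self._writer_lock = threading.Lock()  # serializes writers only
        self._data = (0, 0)                   # shared value being protected

    def write(self, value):
        with self._writer_lock:
            self._seq += 1                    # now odd: update in progress
            self._data = value
            self._seq += 1                    # even again: update complete

    def read(self):
        while True:                           # readers never block, they retry
            start = self._seq
            if start % 2:                     # odd: a writer is mid-update
                continue
            value = self._data
            if self._seq == start:            # unchanged and even: consistent snapshot
                return value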
https://en.wikipedia.org/wiki/Secular%20resonance
A secular resonance is a type of orbital resonance between two bodies with synchronized precessional frequencies. In celestial mechanics, secular refers to the long-term motion of a system, and resonance is periods or frequencies being a simple numerical ratio of small integers. Typically, the synchronized precessions in secular resonances are between the rates of change of the argument of the periapses or the rates of change of the longitude of the ascending nodes of two system bodies. Secular resonances can be used to study the long-term orbital evolution of asteroids and their families within the asteroid belt. Description Secular resonances occur when the precession of two orbits is synchronised (a precession of the perihelion, with frequency g, or the ascending node, with frequency s, or both). A small body (such as a small Solar System body) in secular resonance with a much larger one (such as a planet) will precess at the same rate as the large body. Over relatively short time periods (a million years or so), a secular resonance will change the eccentricity and the inclination of the small body. One can distinguish between: linear secular resonances between a body (no subscript) and a single other large perturbing body (e.g. a planet, subscript as numbered from the Sun), such as the ν6 = g − g6 secular resonance between asteroids and Saturn; and nonlinear secular resonances, which are higher-order resonances, usually combination of linear resonances such as the z1 = (g − g6) + (s − s6), or the ν6 + ν5 = 2g − g6 − g5 resonances. ν6 resonance A prominent example of a linear resonance is the ν6 secular resonance between asteroids and Saturn. Asteroids that approach Saturn have their eccentricity slowly increased until they become Mars-crossers, when they are usually ejected from the asteroid belt by a close encounter with Mars. The resonance forms the inner and "side" boundaries of the asteroid belt around 2 AU and at inclinations of about 20°. See also Or
https://en.wikipedia.org/wiki/Sugar%20glass
Sugar glass (also called candy glass, edible glass, and breakaway glass) is a brittle transparent form of sugar that looks like glass. It can be formed into a sheet that looks like flat glass or an object, such as a bottle or drinking glass. Description Sugar glass is made by dissolving sugar in water and heating it to at least the "hard crack" stage (approx. 150 °C / 300 °F) in the candy making process. Glucose or corn syrup is used to prevent the sugar from recrystallizing, by getting in the way of the sugar molecules forming crystals. Cream of tartar also helps by turning the sugar into glucose and fructose. Because sugar glass is hygroscopic, it must be used soon after preparation, or it will soften and lose its brittle quality. Sugar glass has been used to simulate glass in movies, photographs, plays and professional wrestling. Other uses Sugar glass is also used to make sugar sculptures or other forms of edible art. Sugar glass with blue dye was used to represent the methamphetamine in the AMC TV series Breaking Bad. Actor Aaron Paul would eat it on set.
https://en.wikipedia.org/wiki/Influenza%20A%20virus%20subtype%20H6N2
H6N2 is an avian influenza virus with two forms: one of low and the other of high pathogenicity. It can cause serious problems for poultry and also infects ducks. The H6N2 subtype is considered a non-pathogenic chicken virus; its original host is still unknown, but it may derive from strains in feral animals and/or aquatic bird reservoirs. H6N2, along with H6N6, has been found to replicate in mice without preadaptation, and some strains have acquired the ability to bind to human-like receptors. Genetic markers for H6N2 include a 22-amino acid stalk deletion in the neuraminidase (NA) protein gene, increased N-glycosylation, and a D144 mutation of the haemagglutinin (HA) protein gene. Transmission of avian influenza viruses from wild aquatic birds to domestic birds usually causes subclinical infections and, occasionally, respiratory disease and drops in egg production. Histological features seen in chickens infected with H6N2 include fibrinous yolk peritonitis, salpingitis, oophoritis, nephritis, and swollen kidneys. Signs and symptoms include sneezing and lacrimation, prostration, anorexia and fever, and sometimes swelling of the infraorbital sinuses with nasal mucus.
https://en.wikipedia.org/wiki/Neuroimmunology
Neuroimmunology is a field combining neuroscience, the study of the nervous system, and immunology, the study of the immune system. Neuroimmunologists seek to better understand the interactions of these two complex systems during development, homeostasis, and response to injuries. A long-term goal of this rapidly developing research area is to further develop our understanding of the pathology of certain neurological diseases, some of which have no clear etiology. In doing so, neuroimmunology contributes to development of new pharmacological treatments for several neurological conditions. Many types of interactions involve both the nervous and immune systems including the physiological functioning of the two systems in health and disease, malfunction of either and or both systems that leads to disorders, and the physical, chemical, and environmental stressors that affect the two systems on a daily basis. Background Neural targets that control thermogenesis, behavior, sleep, and mood can be affected by pro-inflammatory cytokines which are released by activated macrophages and monocytes during infection. Within the central nervous system production of cytokines has been detected as a result of brain injury, during viral and bacterial infections, and in neurodegenerative processes. From the US National Institute of Health: "Despite the brain's status as an immune privileged site, an extensive bi-directional communication takes place between the nervous and the immune system in both health and disease. Immune cells and neuroimmune molecules such as cytokines, chemokines, and growth factors modulate brain function through multiple signaling pathways throughout the lifespan. Immunological, physiological and psychological stressors engage cytokines and other immune molecules as mediators of interactions with neuroendocrine, neuropeptide, and neurotransmitter systems. For example, brain cytokine levels increase following stress exposure, while treatments designed to
https://en.wikipedia.org/wiki/Behavioral%20modeling
The behavioral approach to systems theory and control theory was initiated in the late-1970s by J. C. Willems as a result of resolving inconsistencies present in classical approaches based on state-space, transfer function, and convolution representations. This approach is also motivated by the aim of obtaining a general framework for system analysis and control that respects the underlying physics. The main object in the behavioral setting is the behavior – the set of all signals compatible with the system. An important feature of the behavioral approach is that it does not distinguish a priority between input and output variables. Apart from putting system theory and control on a rigorous basis, the behavioral approach unified the existing approaches and brought new results on controllability for nD systems, control via interconnection, and system identification. Dynamical system as a set of signals In the behavioral setting, a dynamical system is a triple where is the "time set" – the time instances over which the system evolves, is the "signal space" – the set in which the variables whose time evolution is modeled take on their values, and the "behavior" – the set of signals that are compatible with the laws of the system ( denotes the set of all signals, i.e., functions from into ). means that is a trajectory of the system, while means that the laws of the system forbid the trajectory to happen. Before the phenomenon is modeled, every signal in is deemed possible, while after modeling, only the outcomes in remain as possibilities. Special cases: – continuous-time systems – discrete-time systems – most physical systems a finite set – discrete event systems Linear time-invariant differential systems System properties are defined in terms of the behavior. The system is said to be "linear" if is a vector space and is a linear subspace of , "time-invariant" if the time set consists of the real or natural numbers and for a
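The symbols in the definition above were lost in extraction; in a common notation (our choice of symbols), the triple is written

\Sigma = (\mathbb{T}, \mathbb{W}, \mathcal{B}), \qquad \mathcal{B} \subseteq \mathbb{W}^{\mathbb{T}},

where \mathbb{T} is the time set, \mathbb{W} the signal space, and \mathbb{W}^{\mathbb{T}} the set of all functions from \mathbb{T} to \mathbb{W}. The special cases listed then read \mathbb{T} = \mathbb{R} (continuous-time systems), \mathbb{T} = \mathbb{Z} (discrete-time systems), \mathbb{W} = \mathbb{R}^{q} (most physical systems), and \mathbb{W} a finite set (discrete-event systems).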
https://en.wikipedia.org/wiki/Neuroimmune%20system
The neuroimmune system is a system of structures and processes involving the biochemical and electrophysiological interactions between the nervous system and immune system which protect neurons from pathogens. It serves to protect neurons against disease by maintaining selectively permeable barriers (e.g., the blood–brain barrier and blood–cerebrospinal fluid barrier), mediating neuroinflammation and wound healing in damaged neurons, and mobilizing host defenses against pathogens. The neuroimmune system and peripheral immune system are structurally distinct. Unlike the peripheral system, the neuroimmune system is composed primarily of glial cells; among all the hematopoietic cells of the immune system, only mast cells are normally present in the neuroimmune system. However, during a neuroimmune response, certain peripheral immune cells are able to cross various blood or fluid–brain barriers in order to respond to pathogens that have entered the brain. For example, there is evidence that following injury macrophages and T cells of the immune system migrate into the spinal cord. Production of immune cells of the complement system have also been documented as being created directly in the central nervous system. Structure The key cellular components of the neuroimmune system are glial cells, including astrocytes, microglia, and oligodendrocytes. Unlike other hematopoietic cells of the peripheral immune system, mast cells naturally occur in the brain where they mediate interactions between gut microbes, the immune system, and the central nervous system as part of the microbiota–gut–brain axis. G protein-coupled receptors that are present in both CNS and immune cell types and which are responsible for a neuroimmune signaling process include: Chemokine receptors: CXCR4 Cannabinoid receptors: CB1, CB2, GPR55 Trace amine-associated receptors: TAAR1 μ-Opioid receptors – all subtypes Cellular physiology The neuro-immune system, and study of, comprises an understanding o
https://en.wikipedia.org/wiki/Phosphatidylserine
Phosphatidylserine (abbreviated Ptd-L-Ser or PS) is a phospholipid and is a component of the cell membrane. It plays a key role in cell cycle signaling, specifically in relation to apoptosis. It is a key pathway for viruses to enter cells via apoptotic mimicry. Its exposure on the outer surface of a membrane marks the cell for destruction via apoptosis. Structure Phosphatidylserine is a phospholipid—more specifically a glycerophospholipid—which consists of two fatty acids attached in ester linkage to the first and second carbon of glycerol and serine attached through a phosphodiester linkage to the third carbon of the glycerol. Phosphatidylserine sourced from plants differs in fatty acid composition from that sourced from animals. It is commonly found in the inner (cytoplasmic) leaflet of biological membranes. It is almost entirely found in the inner monolayer of the membrane with only less than 10% of it in the outer monolayer. Introduction Phosphatidylserine (PS) is the major acidic phospholipid class that accounts for 13–15% of the phospholipids in the human cerebral cortex. In the plasma membrane, PS is localized exclusively in the cytoplasmic leaflet where it forms part of protein docking sites necessary for the activation of several key signaling pathways. These include the Akt, protein kinase C (PKC) and Raf-1 signaling that is known to stimulate neuronal survival, neurite growth, and synaptogenesis. Modulation of the PS level in the plasma membrane of neurons has a significant impact on these signaling processes. Biosynthesis Phosphatidylserine is formed in bacteria (such as E. coli) through a displacement of cytidine monophosphate (CMP) through a nucleophilic attack by the hydroxyl functional group of serine. CMP is formed from CDP-diacylglycerol by PS synthase. Phosphatidylserine can eventually become phosphatidylethanolamine by the enzyme PS decarboxylase (forming carbon dioxide as a byproduct). Similar to bacteria, yeast can form phosphatidylseri
https://en.wikipedia.org/wiki/Evolutionary%20developmental%20psychology
Evolutionary developmental psychology (EDP) is a research paradigm that applies the basic principles of evolution by natural selection to understand the development of human behavior and cognition. It involves the study of both the genetic and environmental mechanisms that underlie the development of social and cognitive competencies, as well as the epigenetic (gene-environment interactions) processes that adapt these competencies to local conditions. EDP considers both the reliably developing, species-typical features of ontogeny (developmental adaptations) and individual differences in behavior from an evolutionary perspective. While evolutionary views tend to regard most individual differences as the result of random genetic noise (evolutionary byproducts) and/or idiosyncrasies (for example, peer groups, education, neighborhoods, and chance encounters) rather than products of natural selection, EDP asserts that natural selection can favor the emergence of individual differences via "adaptive developmental plasticity." From this perspective, human development follows alternative life-history strategies in response to environmental variability, rather than following one species-typical pattern of development. EDP is closely linked to the theoretical framework of evolutionary psychology (EP), but is also distinct from EP in several domains, including its research emphasis (EDP focuses on adaptations of ontogeny, as opposed to adaptations of adulthood) and its consideration of proximate ontogenetic and environmental factors (i.e., how development happens) in addition to the more ultimate factors (i.e., why development happens) that are the focus of mainstream evolutionary psychology. History Development and evolution Like mainstream evolutionary psychology, EDP is rooted in Charles Darwin's theory of natural selection. Darwin himself emphasized development, using the process of embryology as evidence to support his theory. From The Descent of M
https://en.wikipedia.org/wiki/Mercury%2013
The Mercury 13 were thirteen American women who took part in a privately funded program run by William Randolph Lovelace II aiming to test and screen women for spaceflight. The participants—First Lady Astronaut Trainees (or FLATs) as Jerrie Cobb called them—successfully underwent the same physiological screening tests as had the astronauts selected by NASA on April 9, 1959, for Project Mercury. While Lovelace called the project Woman in Space Program, the thirteen women became later known as the Mercury 13—a term coined in 1995 by Hollywood producer James Cross as a comparison to the Mercury Seven astronauts. The Mercury 13 women were not part of NASA's official astronaut program, never flew in space as part of a NASA mission, and never met as a whole group. In the 1960s some of these women were among those who lobbied the White House and US Congress to have women included in the astronaut program. They testified before a congressional committee in 1962. In 1963, Clare Boothe Luce wrote an article for LIFE magazine publicizing the women and criticizing NASA for its failure to include women as astronauts. One of the thirteen, Wally Funk, was launched into space in a suborbital flight aboard Blue Origin's July 20, 2021 New Shepard 4 mission Flight 16, making her the (then) oldest person to go into space at age 82. The story of these women was celebrated in numerous books, exhibits, and movies, including the 2018 Netflix-produced documentary Mercury 13. History When NASA first planned to put people in space, they believed that the best candidates would be pilots, submarine crews or members of expeditions to the Antarctic or Arctic areas. They also thought people with more extreme sports backgrounds, such as parachuting, climbing, deep sea diving, etc. would excel in the program. NASA knew that numerous people would apply for this opportunity and testing would be expensive. President Dwight Eisenhower believed that military test pilots would make the best astronaut
https://en.wikipedia.org/wiki/NTLM
In a Windows network, NT (New Technology) LAN Manager (NTLM) is a suite of Microsoft security protocols intended to provide authentication, integrity, and confidentiality to users. NTLM is the successor to the authentication protocol in Microsoft LAN Manager (LANMAN), an older Microsoft product. The NTLM protocol suite is implemented in a Security Support Provider, which combines the LAN Manager authentication protocol, NTLMv1, NTLMv2 and NTLM2 Session protocols in a single package. Whether these protocols are used or can be used on a system is governed by Group Policy settings, for which different versions of Windows have different default settings. NTLM passwords are considered weak because they can be brute-forced very easily with modern hardware. Protocol NTLM is a challenge–response authentication protocol which uses three messages to authenticate a client in a connection-oriented environment (connectionless is similar), and a fourth additional message if integrity is desired. First, the client establishes a network path to the server and sends a NEGOTIATE_MESSAGE advertising its capabilities. Next, the server responds with a CHALLENGE_MESSAGE, which is used to establish the identity of the client. Finally, the client responds to the challenge with an AUTHENTICATE_MESSAGE. The NTLM protocol uses one or both of two hashed password values, both of which are also stored on the server (or domain controller), and which through a lack of salting are password equivalent, meaning that if you grab the hash value from the server, you can authenticate without knowing the actual password. The two are the LM hash (a DES-based function applied to the first 14 characters of the password converted to the traditional 8-bit PC charset for the language), and the NT hash (MD4 of the little-endian UTF-16 Unicode password). Both hash values are 16 bytes (128 bits) each. The NTLM protocol also uses one of two one-way functions, depending on the NTLM version; NT LanMan and
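As a small, hedged illustration of the NT hash just described (MD4 over the little-endian UTF-16 encoding of the password) in Python; note that hashlib only exposes MD4 when the underlying OpenSSL build still ships this legacy algorithm, so its availability is an assumption:

import hashlib

def nt_hash(password: str) -> bytes:
    # NT hash: MD4 of the password encoded as little-endian UTF-16.
    # Requires an OpenSSL build that still provides the legacy "md4" digest.
    return hashlib.new("md4", password.encode("utf-16-le")).digest()

print(nt_hash("password").hex())    # 16 bytes (128 bits), printed as hex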
https://en.wikipedia.org/wiki/Siblicide
Siblicide (attributed by behavioural ecologist Doug Mock to Barbara M. Braun) is the killing of an infant individual by its close relatives (full or half siblings). It may occur directly between siblings or be mediated by the parents, and is driven by the direct fitness benefits to the perpetrator and sometimes its parents. Siblicide has mainly, but not only, been observed in birds. (The word is also used as a unifying term for fratricide and sororicide in the human species; unlike these more specific terms, it leaves the sex of the victim unspecified.) Siblicidal behavior can be either obligate or facultative. Obligate siblicide is when a sibling almost always ends up being killed. Facultative siblicide means that siblicide may or may not occur, based on environmental conditions. In birds, obligate siblicidal behavior results in the older chick killing the other chick(s). In facultative siblicidal animals, fighting is frequent, but does not always lead to death of a sibling; this type of behavior often exists in patterns for different species. For instance, in the blue-footed booby, a sibling may be hit by a nest mate only once a day for a couple of weeks and then attacked at random, leading to its death. More birds are facultatively siblicidal than obligatory siblicidal. This is perhaps because siblicide takes a great amount of energy and is not always advantageous. Siblicide generally only occurs when resources, specifically food sources, are scarce. Siblicide is advantageous for the surviving offspring because they have now eliminated most or all of their competition. It is also somewhat advantageous for the parents because the surviving offspring most likely have the strongest genes, and therefore likely have the highest fitness. Some parents encourage siblicide, while others prevent it. If resources are scarce, the parents may encourage siblicide because only some offspring will survive anyway, so they want the strongest offspring to survive. By letting th
https://en.wikipedia.org/wiki/Standard%20tuning
In music, standard tuning refers to the typical tuning of a string instrument. This notion is contrary to that of scordatura, i.e. an alternate tuning designated to modify either the timbre or technical capabilities of the desired instrument. Violin family The most popular bowed strings used nowadays belong to the violin family; together with their respective standard tunings, they are: Violin – G3 D4 A4 E5 (ascending perfect fifths, starting from G below middle C) Viola – C3 G3 D4 A4 (a perfect fifth below a violin's standard tuning) Cello – C2 G2 D3 A3 (an octave lower than the viola) Double bass – E1 A1 D2 G2 (ascending perfect fourths, where the highest sounding open string coincides with the G on a cello). Double bass with a low C extension – C1 E1 A1 D2 G2 (the same, except for low C, which is a major third below the low E on a standard 4-string double bass) 5-stringed double bass – B0 E1 A1 D2 G2 (a low B is added, so the tuning remains in perfect fourths) Viol family The double bass is properly the contrabass member of the viol family. Its smaller members are tuned in ascending fourths, with a major third in the middle, as follows: Treble viol – D3 G3 C4 E4 A4 D5 (ascending perfect fourths with the exception of a major third between strings 3 and 4) Tenor viol – G2 C3 F3 A3 D4 G4 (a perfect fifth below the treble viol) Bass viol – D2 G2 C3 E3 A3 D4 (an octave lower than the treble viol) 7-stringed bass viol – A1 D2 G2 C3 E3 A3 D4 (an extra low A is added) A more recent family is the violin octet, which also features a standardized tuning system (see page). Guitar family Guitars and bass guitars have more standard tunings, depending on the number of strings an instrument has. six-string guitar (the most common configuration) – E2 A2 D3 G3 B3 E4 (ascending perfect fourths, with an exception between G and B, which is a major third). Low E falls a major third above the C on a standard-tuned cello. Renaissance lute – E2 A2 D3 F♯3 B3 E4 (used by
https://en.wikipedia.org/wiki/Nonlocal%20Lagrangian
In field theory, a nonlocal Lagrangian is a Lagrangian, a type of functional containing terms that are nonlocal in the fields, i.e. not polynomials or functions of the fields or their derivatives evaluated at a single point in the space of dynamical parameters (e.g. space-time). Examples of such nonlocal Lagrangians might be: The Wess–Zumino–Witten action. Actions obtained from nonlocal Lagrangians are called nonlocal actions. The actions appearing in the fundamental theories of physics, such as the Standard Model, are local actions; nonlocal actions play a part in theories that attempt to go beyond the Standard Model and also in some effective field theories. Nonlocalization of a local action is also an essential aspect of some regularization procedures. Noncommutative quantum field theory also gives rise to nonlocal actions.
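The example formulas in this entry were stripped during extraction; as a schematic illustration only (not taken from the source), a typical nonlocal action couples the field at two separate points through a kernel K,

S[\phi] = \int d^4x \, d^4y \; \phi(x) \, K(x - y) \, \phi(y),

which cannot be written as the integral of a density depending on \phi and finitely many of its derivatives at a single point unless K reduces to a delta function (or a finite sum of its derivatives).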
https://en.wikipedia.org/wiki/High-energy%20X-rays
High-energy X-rays or HEX-rays are very hard X-rays, with typical energies of 80–1000 keV (1 MeV), about one order of magnitude higher than conventional X-rays used for X-ray crystallography (and well into gamma-ray energies over 120 keV). They are produced at modern synchrotron radiation sources such as the beamlines ID15 and BM18 at the European Synchrotron Radiation Facility (ESRF). The main benefit is the deep penetration into matter which makes them a probe for thick samples in physics and materials science and permits an in-air sample environment and operation. Scattering angles are small and diffraction directed forward allows for simple detector setups. High energy (megavolt) X-rays are also used in cancer therapy, using beams generated by linear accelerators to suppress tumors. Advantages High-energy X-rays (HEX-rays) between 100 and 300 keV bear unique advantage over conventional hard X-rays, which lie in the range of 5–20 keV They can be listed as follows: High penetration into materials due to a strongly reduced photo absorption cross section. The photo-absorption strongly depends on the atomic number of the material and the X-ray energy. Several centimeter thick volumes can be accessed in steel and millimeters in lead containing samples. No radiation damage of the sample, which can pin incommensurations or destroy the chemical compound to be analyzed. The Ewald sphere has a curvature ten times smaller than in the low energy case and allows whole regions to be mapped in a reciprocal lattice, similar to electron diffraction. Access to diffuse scattering. This is absorption and not extinction limited at low energies while volume enhancement takes place at high energies. Complete 3D maps over several Brillouin zones can be easily obtained. High momentum transfers are naturally accessible due to the high momentum of the incident wave. This is of particular importance for studies of liquid, amorphous and nanocrystalline materials as well as pair distri
https://en.wikipedia.org/wiki/Glutamate%20transporter
Glutamate transporters are a family of neurotransmitter transporter proteins that move glutamate – the principal excitatory neurotransmitter – across a membrane. The family of glutamate transporters is composed of two primary subclasses: the excitatory amino acid transporter (EAAT) family and vesicular glutamate transporter (VGLUT) family. In the brain, EAATs remove glutamate from the synaptic cleft and extrasynaptic sites via glutamate reuptake into glial cells and neurons, while VGLUTs move glutamate from the cell cytoplasm into synaptic vesicles. Glutamate transporters also transport aspartate and are present in virtually all peripheral tissues, including the heart, liver, testes, and bone. They exhibit stereoselectivity for L-glutamate but transport both L-aspartate and D-aspartate. The EAATs are membrane-bound secondary transporters that superficially resemble ion channels. These transporters play the important role of regulating concentrations of glutamate in the extracellular space by transporting it along with other ions across cellular membranes. After glutamate is released as the result of an action potential, glutamate transporters quickly remove it from the extracellular space to keep its levels low, thereby terminating the synaptic transmission. Without the activity of glutamate transporters, glutamate would build up and kill cells in a process called excitotoxicity, in which excessive amounts of glutamate acts as a toxin to neurons by triggering a number of biochemical cascades. The activity of glutamate transporters also allows glutamate to be recycled for repeated release. Classes There are two general classes of glutamate transporters, those that are dependent on an electrochemical gradient of sodium ions (the EAATs) and those that are not (VGLUTs and xCT). The cystine-glutamate antiporter (xCT) is localised to the plasma membrane of cells whilst vesicular glutamate transporters (VGLUTs) are found in the membrane of glutamate-containing synapti
https://en.wikipedia.org/wiki/Structure%20%28mathematical%20logic%29
In universal algebra and in model theory, a structure consists of a set along with a collection of finitary operations and relations that are defined on it. Universal algebra studies structures that generalize the algebraic structures such as groups, rings, fields and vector spaces. The term universal algebra is used for structures of first-order theories with no relation symbols. Model theory has a different scope that encompasses more arbitrary first-order theories, including foundational structures such as models of set theory. From the model-theoretic point of view, structures are the objects used to define the semantics of first-order logic, cf. also Tarski's theory of truth or Tarskian semantics. For a given theory in model theory, a structure is called a model if it satisfies the defining axioms of that theory, although it is sometimes disambiguated as a semantic model when one discusses the notion in the more general setting of mathematical models. Logicians sometimes refer to structures as "interpretations", whereas the term "interpretation" generally has a different (although related) meaning in model theory, see interpretation (model theory). In database theory, structures with no functions are studied as models for relational databases, in the form of relational models. History In the context of mathematical logic, the term "model" was first applied in 1940 by the philosopher Willard Van Orman Quine, in a reference to mathematician Richard Dedekind (1831 – 1916), a pioneer in the development of set theory. Since the 19th century, one main method for proving the consistency of a set of axioms has been to provide a model for it. Definition Formally, a structure can be defined as a triple consisting of a domain a signature and an interpretation function that indicates how the signature is to be interpreted on the domain. To indicate that a structure has a particular signature one can refer to it as a -structure. Domain The domain of a struct
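The triple in the definition above lost its symbols in extraction; in one common notation (our choice), a structure is written

\mathcal{A} = (A, \sigma, I),

where A is the domain, \sigma the signature, and I the interpretation function, which assigns to every n-ary function symbol f \in \sigma a function f^{\mathcal{A}} : A^{n} \to A and to every n-ary relation symbol R \in \sigma a relation R^{\mathcal{A}} \subseteq A^{n}.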
https://en.wikipedia.org/wiki/The%20Castle%20%28video%20game%29
The Castle is a video game released by ASCII Corporation in 1986 for the FM-7 and X1 computers. It was later ported to the MSX and NEC branded personal computers, and got a single console port for the SG-1000. The game is set within a castle containing 100 rooms, most of which contain one or more puzzles. It was followed by Castlequest (Castle Excellent in Japan). Both games are early examples of the Metroidvania genre. Gameplay The object of the game is to navigate through the Castle to rescue the Princess. The player can push certain objects throughout the game to accomplish progress. In some rooms, the prince can only advance to the next room by aligning cement blocks, Honey Jars, Candle Cakes, and Elevator Controlling Block. Additionally, the player's progress is blocked by many doors requiring a key of the same color to unlock, and a key is removed from the player's inventory upon use. The prince must be standing on a platform next to the door to be able to unlock it, and cannot simply jump or fall and press against the door. The player can navigate the castle with the help of a map that can be obtained early in the game. The map will provide the player with a matrix of 10x10 rooms and will highlight the room in which the princess is located and the rooms that he had visited. The player must also avoid touching enemies like Knights, Bishops, Wizards, Fire Spirits, Attack Cats and Phantom Flowers.
https://en.wikipedia.org/wiki/Saber%20%28Fate/stay%20night%29
, whose real name is (alternatively, Altria Pendragon), is a fictional character from the Japanese 2004 visual novel Fate/stay night by Type-Moon. Saber is a heroic warrior who is summoned by a teenager named Shirou Emiya to participate in a war between masters and servants who are fighting to accomplish their dreams using the mythical Holy Grail. Saber's relationship with the story's other characters depends on the player's decisions; she becomes a love interest to Shirou in the novel's first route and also serves as that route's servant protagonist, a supporting character in the second, and a villain called in the third route. Saber is an agile and mighty warrior who is loyal, independent, and reserved; she appears emotionally cold but is actually suppressing her emotions to focus on her goals. She is also present in the prequel light novel Fate/Zero, in which she is the servant of Shirou's guardian Kiritsugu Emiya during the previous Holy Grail War, and in the sequel Fate/hollow ataraxia. Saber also appears in the novel's printed and animated adaptations, reprising her role in the game. Saber was created by Kinoko Nasu after the series' leading illustrator suggested having an armored woman as a protagonist for the visual novel; writer Gen Urobuchi commented on her character becoming darker depending on the situations. Urobuchi created his scenario involving Saber and Kiritsugu because their relationship was little explored in the original visual novel. Saber has been voiced by Ayako Kawasumi in her Japanese appearances, and multiple actresses took the role in English-language dubs of the series' animated adaptations. Critical reception to Saber's character and role in the series and her relationship with Shirou has been generally positive. Her characterization and her relationship with the characters in Fate/Zero have also been met with a positive response. However, Saber's lack of character focus in the Unlimited Blade Works anime adaptation met mixed react
https://en.wikipedia.org/wiki/Robot%20software
Robot software is the set of coded commands or instructions that tell a mechanical device and electronic system, known together as a robot, what tasks to perform. Robot software is used to perform autonomous tasks. Many software systems and frameworks have been proposed to make programming robots easier. Some robot software aims at developing intelligent mechanical devices. Common tasks include feedback loops, control, pathfinding, data filtering, locating and sharing data. Introduction While it is a specific type of software, it is still quite diverse. Each manufacturer has their own robot software. While the vast majority of software is about manipulation of data and seeing the result on-screen, robot software is for the manipulation of objects or tools in the real world. Industrial robot software Software for industrial robots consists of data objects and lists of instructions, known as program flow (list of instructions). For example, Go to Jig1 It is an instruction to the robot to go to positional data named Jig1. Of course, programs can also contain implicit data for example Tell axis 1 move 30 degrees. Data and program usually reside in separate sections of the robot controller memory. One can change the data without changing the program and vice versa. For example, one can write a different program using the same Jig1 or one can adjust the position of Jig1 without changing the programs that use it. Examples of programming languages for industrial robots Due to the highly proprietary nature of robot software, most manufacturers of robot hardware also provide their own software. While this is not unusual in other automated control systems, the lack of standardization of programming methods for robots does pose certain challenges. For example, there are over 30 different manufacturers of industrial robots, so there are also 30 different robot programming languages required. There are enough similarities between the different robots that it is possib
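The separation of positional data from program flow described above can be sketched in Python (the names and coordinate values here are invented purely for illustration and do not correspond to any particular robot controller):

# positional data, stored separately from the program
positions = {
    "Jig1": (250.0, 100.0, 35.0),   # hypothetical x, y, z in millimetres
}

def move_to(name):
    # program instructions refer to positions by name only
    x, y, z = positions[name]
    print(f"move tool to {name} at ({x}, {y}, {z})")

move_to("Jig1")   # re-teaching Jig1 changes the data, not the program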
https://en.wikipedia.org/wiki/Binary%20moment%20diagram
A binary moment diagram (BMD) is a generalization of the binary decision diagram (BDD) to linear functions over domains such as booleans (like BDDs), but also to integers or to real numbers. They can deal with Boolean functions with complexity comparable to BDDs, but also some functions that are dealt with very inefficiently in a BDD are handled easily by BMDs, most notably multiplication. The most important properties of BMDs are that, like with BDDs, each function has exactly one canonical representation, and many operations can be efficiently performed on these representations. The main features that differentiate BMDs from BDDs are the use of linear instead of pointwise diagrams, and weighted edges. The rules that ensure the canonicity of the representation are: Decisions over variables higher in the ordering may only point to decisions over variables lower in the ordering. No two nodes may be identical (during normalization, all references to one of these duplicate nodes should be replaced by references to the other). No node may have all decision parts equivalent to 0 (links to such nodes should be replaced by links to their always part). No edge may have weight zero (all such edges should be replaced by direct links to 0). Weights of the edges should be coprime. Without this rule or some equivalent of it, it would be possible for a function to have many representations, for example 2x + 2 could be represented as 2 · (1 + x) or 1 · (2 + 2x). Pointwise and linear decomposition In pointwise decomposition, like in BDDs, at each branch point we store the results of all branches separately. An example of such decomposition for an integer function (2x + y) is: In linear decomposition we provide instead a default value and a difference: It can easily be seen that the latter (linear) representation is much more efficient in the case of additive functions, as when we add many elements the latter representation will have only O(n) elements, while the former (pointwise), even
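The two decomposition tables for the example function 2x + y did not survive extraction; a brief reconstruction of what they convey (our wording): in pointwise decomposition every assignment of the Boolean variables is stored separately, f(0,0) = 0, f(0,1) = 1, f(1,0) = 2, f(1,1) = 3, whereas in linear decomposition each variable contributes a default value and a difference (moment), f = 0 + 2·x + 1·y, so the node for x stores the value with x = 0 together with the amount (2) by which the function changes when x is set to 1, and similarly for y.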
https://en.wikipedia.org/wiki/Maximum%20term%20method
The maximum-term method is a consequence of the large numbers encountered in statistical mechanics. It states that under appropriate conditions the logarithm of a summation is essentially equal to the logarithm of the maximum term in the summation. These conditions are (see also proof below) that (1) the number of terms in the sum is large and (2) the terms themselves scale exponentially with this number. A typical application is the calculation of a thermodynamic potential from a partition function. These functions often contain terms with factorials which scale as (Stirling's approximation). Example Proof Consider the sum where >0 for all N. Since all the terms are positive, the value of S must be greater than the value of the largest term, , and less than the product of the number of terms and the value of the largest term. So we have Taking logarithm gives In statistical mechanics often will be : see Big O notation. Here we have For large M, is negligible with respect to M itself, and so we can see that ln S is bounded from above and below by , and so
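The formulas in the statement and proof above were stripped in extraction; the chain of inequalities they express is, in a standard notation (our symbols),

S = \sum_{N=1}^{M} T_N, \quad T_N > 0 \;\Longrightarrow\; T_{\max} \le S \le M \, T_{\max},

and therefore

\ln T_{\max} \le \ln S \le \ln T_{\max} + \ln M .

When the terms scale exponentially, T_{\max} = O(e^{M}), so \ln T_{\max} = O(M); for large M the correction \ln M is negligible compared with M, and \ln S is squeezed to \ln T_{\max}.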
https://en.wikipedia.org/wiki/Selection%20coefficient
In population genetics, a selection coefficient, usually denoted by the letter s, is a measure of differences in relative fitness. Selection coefficients are central to the quantitative description of evolution, since fitness differences determine the change in genotype frequencies attributable to selection. The following definition of s is commonly used. Suppose that there are two genotypes A and B in a population with relative fitnesses w_A and w_B respectively. Then, choosing genotype A as our point of reference, we have w_A = 1 and w_B = 1 + s, where s measures the fitness advantage (s > 0) or disadvantage (s < 0) of B. For example, the lactose-tolerant allele spread from very low frequencies to high frequencies in less than 9,000 years since the spread of farming, with an estimated selection coefficient of 0.09–0.19 for a Scandinavian population. Though this selection coefficient might seem like a very small number, over evolutionary time the favored alleles accumulate in the population and become more and more common, potentially reaching fixation. See also Evolutionary pressure
https://en.wikipedia.org/wiki/Two%20Generals%27%20Problem
In computing, the Two Generals' Problem is a thought experiment meant to illustrate the pitfalls and design challenges of attempting to coordinate an action by communicating over an unreliable link. In the experiment, two generals are only able to communicate with one another by sending a messenger through enemy territory. The experiment asks how they might reach an agreement on the time to launch an attack, while knowing that any messenger they send could be captured. The Two Generals' Problem appears often as an introduction to the more general Byzantine Generals problem in introductory classes about computer networking (particularly with regard to the Transmission Control Protocol, where it shows that TCP can't guarantee state consistency between endpoints and why this is the case), though it applies to any type of two-party communication where failures of communication are possible. A key concept in epistemic logic, this problem highlights the importance of common knowledge. Some authors also refer to this as the Two Generals' Paradox, the Two Armies Problem, or the Coordinated Attack Problem. The Two Generals' Problem was the first computer communication problem to be proved to be unsolvable. An important consequence of this proof is that generalizations like the Byzantine Generals problem are also unsolvable in the face of arbitrary communication failures, thus providing a base of realistic expectations for any distributed consistency protocols. Definition Two armies, each led by a different general, are preparing to attack a fortified city. The armies are encamped near the city, each in its own valley. A third valley separates the two hills, and the only way for the two generals to communicate is by sending messengers through the valley. Unfortunately, the valley is occupied by the city's defenders and there's a chance that any given messenger sent through the valley will be captured. While the two generals have agreed that they will attack, they haven't
https://en.wikipedia.org/wiki/Kadowaki%E2%80%93Woods%20ratio
The Kadowaki–Woods ratio is the ratio of A, the quadratic term of the resistivity and γ2, the linear term of the specific heat. This ratio is found to be a constant for transition metals, and for heavy-fermion compounds, although at different values. In 1968 M. J. Rice pointed out that the coefficient A should vary predominantly as the square of the linear electronic specific heat coefficient γ; in particular he showed that the ratio A/γ2 is material independent for the pure 3d, 4d and 5d transition metals. Heavy-fermion compounds are characterized by very large values of A and γ. Kadowaki and Woods showed that A/γ2 is material-independent within the heavy-fermion compounds, and that it is about 25 times larger than in aforementioned transition metals. According to the theory of electron-electron scattering the ratio A/γ2 contains indeed several non-universal factors, including the square of the strength of the effective electron-electron interaction. Since in general the interactions differ in nature from one group of materials to another, the same values of A/γ2 are only expected within a particular group. In 2005 Hussey proposed a re-scaling of A/γ2 to account for unit cell volume, dimensionality, carrier density and multi-band effects. In 2009 Jacko, Fjaerestad, and Powell demonstrated fdx(n)A/γ2 to have the same value in transition metals, heavy fermions, organics and oxides with A varying over 10 orders of magnitude, where fdx(n) may be written in terms of the dimensionality of the system, the electron density and, in layered systems, the interlayer spacing or the interlayer hopping integral. See also Wilson ratio
https://en.wikipedia.org/wiki/Hygrophanous
The adjective hygrophanous refers to the color change of mushroom tissue (especially the pileus surface) as it loses or absorbs water, which causes the pileipellis to become more transparent when wet and opaque when dry. When identifying hygrophanous species, one needs to be careful when matching colors to photographs or descriptions, as color can change dramatically soon after picking. Genera that are characterized by hygrophanous species include Agrocybe, Psathyrella, Psilocybe, Panaeolus, and Galerina. External links IMA Mycological Glossary: Hygrophanous Wisconsin Mycological Society: Psathyrella Photographs of Psathyrella, a mushroom with a strongly hygrophanous pileus.
https://en.wikipedia.org/wiki/Cystidium
A cystidium (: cystidia) is a relatively large cell found on the sporocarp of a basidiomycete (for example, on the surface of a mushroom gill), often between clusters of basidia. Since cystidia have highly varied and distinct shapes that are often unique to a particular species or genus, they are a useful micromorphological characteristic in the identification of basidiomycetes. In general, the adaptive significance of cystidia is not well understood. Classification By position Cystidia may occur on the edge of a lamella (or analogous hymenophoral structure) (cheilocystidia), on the face of a lamella (pleurocystidia), on the surface of the cap (dermatocystidia or pileocystidia), on the margin of the cap (circumcystidia) or on the stipe (caulocystidia). Especially the pleurocystidia and cheilocystidia are important for identification within many genera. Sometimes the cheilocystidia give the gill edge a distinct colour which is visible to the naked eye or with a hand lens. By morphology Chrysocystidia are cystidia whose contents contain a distinct refractive yellow body, that becomes more deeply yellow when exposed to ammonia or other alkaline compounds. Chrysocystidia are characteristic of many (though not all) members of the agaric family Strophariaceae. Gloeocystidia have an oily or granular appearance under the microscope. Like gloeohyphae, they may be yellowish or clear (hyaline) and can sometimes selectively be coloured by sulphovanillin or other reagents. Metuloids are thick-walled cystidia with an apex having any of several distinct shapes.
https://en.wikipedia.org/wiki/Hong%20Kong%20Mathematical%20High%20Achievers%20Selection%20Contest
Hong Kong Mathematical High Achievers Selection Contest (HKMHASC, Traditional Chinese: 香港青少年數學精英選拔賽) is a yearly mathematics competition for students of or below Secondary 3 in Hong Kong. It has been jointly organized by Po Leung Kuk and the Hong Kong Association of Science and Mathematics Education since the academic year 1998-1999. In recent years, more than 250 secondary schools have participated. Format and Scoring Each participating school may send at most 5 students into the contest. There is one paper, divided into Part A and Part B, to be completed within two hours. Part A is usually made up of 14 - 18 easier questions, carrying one mark each. In Part A, only answers are required. Part B is usually made up of 2 - 4 problems of varying difficulty, which may carry different numbers of marks, from 4 to 8. In Part B, workings are required and marked. No calculators or calculation-assisting equipment (e.g. printed mathematical tables) are allowed. Awards and Further Training Awards are given according to the total mark. The top 40 contestants are given the First Honour Award (一等獎), the next 80 the Second Honour Award (二等獎), and the next 120 the Third Honour Award (三等獎). Moreover, the top 4 obtain individual awards, namely Champion and 1st, 2nd and 3rd Runner-up. Group Awards are given to schools according to the sum of marks of their 3 highest-scoring contestants. The first 4 are given the honour of Champion and 1st, 2nd and 3rd Runner-up. The honour of Top 10 (首十名最佳成績) is given to the 5th-10th, and the Group Merit Award (團體優異獎) is given to the next 10. First Honour Award achievers receive further training. The eight students with the best performance are chosen to participate in the Invitational World Youth Mathematics Inter-City Competition (IWYMIC). List of Past Champions (1999-2019) 98-99: Queen Elizabeth School, Ying Wa College 99-00: Queen's College 00-01: La Salle College 01-02: St. Paul's College 02-03: Queen's College 03-04: La Salle College 04-05: La
https://en.wikipedia.org/wiki/Intracrine
Intracrine refers to a hormone that acts inside a cell, regulating intracellular events. In simple terms it means that the cell stimulates itself by cellular production of a factor that acts within the cell. Steroid hormones act through intracellular (mostly nuclear) receptors and, thus, may be considered to be intracrines. In contrast, peptide or protein hormones, in general, act as endocrines, autocrines, or paracrines by binding to their receptors present on the cell surface. Several peptide/protein hormones or their isoforms also act inside the cell through different mechanisms. These peptide/protein hormones, which have intracellular functions, are also called intracrines. The term 'intracrine' is thought to have been coined to represent peptide/protein hormones that also have intracellular actions. To better understand intracrine signalling, we can compare it to paracrine, autocrine and endocrine signalling. The autocrine system is one in which hormones secreted by a cell bind to autocrine receptors on that same cell. The paracrine system is one in which nearby cells receive hormones from a cell, and the functioning of those nearby cells is changed. The endocrine system refers to hormones from a cell affecting another cell that is very distant from the one that released the hormone. Paracrine physiology has been understood for decades, and the effects of paracrine hormones have been observed when, for example, an obesity-associated tumor is subject to the effects of local adipocytes, even if it is not in direct contact with the fat pads concerned. Endocrine physiology, on the other hand, is a growing field in which a new area, called intracrinology, has been explored. In intracrinology, the sex steroids produced locally exert their action in the same cell where they are produced. The biological effects produced by intracellular actions are referred to as intracrine effects, whereas those produced by binding to cell surface receptors are called endocrine, autocrin
https://en.wikipedia.org/wiki/Amphibious%20fish
Amphibious fish are fish that are able to leave water for extended periods of time. About 11 distantly related genera of fish are considered amphibious. This suggests that many fish genera independently evolved amphibious traits, a process known as convergent evolution. These fish use a range of terrestrial locomotory modes, such as lateral undulation, tripod-like walking (using paired fins and tail), and jumping. Many of these locomotory modes incorporate multiple combinations of pectoral-, pelvic-, and tail-fin movement. Many ancient fish had lung-like organs, and a few, such as the lungfish and bichir, still do. Some of these ancient "lunged" fish were the ancestors of tetrapods. In most recent fish species, though, these organs evolved into the swim bladders, which help control buoyancy. Having no lung-like organs, modern amphibious fish and many fish in oxygen-poor water use other methods, such as their gills or their skin to breathe air. Amphibious fish may also have eyes adapted to allow them to see clearly in air, despite the refractive index differences between air and water. List of amphibious fish Lung breathers Lungfish (Dipnoi): Six species have limb-like fins, and can breathe air. Some are obligate air breathers, meaning they will drown if not given access to breathe air. All but one species bury in the mud when the body of water they live in dries up, surviving up to two years until water returns. Bichir (Polypteridae): These 12 species are the only ray-finned fish to retain lungs. They are facultative air breathers, requiring access to surface air to breathe in poorly oxygenated water. Various other "lunged" fish: now extinct, a few of this group were ancestors of the stem tetrapods that led to all tetrapods: Lissamphibia, sauropsids and mammals. Gill or skin breathers Rockskippers: These blennies are found on islands in the Indian and Pacific Oceans. They come onto land to catch prey and escape aquatic predators, often for 20 minutes or more.
https://en.wikipedia.org/wiki/Leukocyte-promoting%20factor
Leukocyte-promoting factor, more commonly known as leukopoietin, is a category of substances produced by neutrophils when they encounter a foreign antigen. Leukopoietin stimulates the bone marrow to increase the rate of leukopoiesis in order to replace the neutrophils that will inevitably be lost when they begin to phagocytose the foreign antigens. Leukocyte-promoting factors include colony stimulating factors (CSFs) (produced by monocytes and T lymphocytes), interleukins (produced by monocytes, macrophages, and endothelial cells), prostaglandins, and lactoferrin. See also White blood cell Leukocytosis Complete blood count Indium-111 WBC scan Leukocyte extravasation
https://en.wikipedia.org/wiki/Leukopoiesis
Leukopoiesis is a form of hematopoiesis in which white blood cells (WBC, or leukocytes) are formed in bone marrow located in bones in adults and hematopoietic organs in the fetus. White blood cells, indeed all blood cells, are formed from the differentiation of pluripotent hematopoietic stem cells which give rise to several cell lines with unlimited differentiation potential. These immediate cell lines, or colonies, are progenitors of red blood cells (erythrocytes), platelets (megakaryocytes), and the two main groups of WBCs, myelocytes and lymphocytes. See also Lymphopoiesis Myelopoiesis
https://en.wikipedia.org/wiki/Artificial%20creation
Artificial creation is a field of research that studies the primary synthesis of complex lifelike structures from primordial lifeless origins. The field bears some similarity to artificial life, but unlike artificial life, artificial creation focuses on the primary emergence of complex structures and processes of abiogenesis. Artificial creation does not rely exclusively on the application of evolutionary computation and genetic algorithms to optimize artificial creatures or grow synthetic life forms. Artificial creation instead studies systems of rules of interaction, initial conditions and primordial building blocks that can generate complex lifelike structures, based exclusively on repeated application of rules of interaction. An essential difference that distinguishes artificial creation from other related fields is that no explicit fitness function is used to select for fit structures. Structures exist based only on their ability to persist as entities that do not violate the system's rules of interaction. Artificial creation studies the way in which complex emergent properties can arise to form a self-organizing system. Origins Although concepts and elements of artificial creation are represented to some degree in many areas, the field itself is less than a decade old. There are models of self-organizing systems that produce emergent properties in biology, computer science, mathematics, engineering and other fields. Artificial creation differs from these in that it focuses on underlying properties of systems that can generate the endless environmentally interactive complexity of living systems. One of the primary impetuses for the exploration of artificial creation comes from the realization in the artificial life and evolutionary computing fields that some basic assumptions common in these fields represent subtle mischaracterizations of natural evolution. These fall into two general classes: 1) there are fundamental problems with the use of fi
https://en.wikipedia.org/wiki/Charge%20sharing
Charge sharing is an effect of signal degradation through transfer of charges from one electronic domain to another. Charge sharing in semiconductor radiation detectors In pixelated semiconductor radiation detectors - such as photon-counting or hybrid-pixel detectors - charge sharing refers to the diffusion of electrical charges with a negative impact on image quality. Formation of charge sharing In the active detector layer of photon detectors, incident photons are converted to electron-hole pairs via the photoelectric effect. The resulting charge cloud is accelerated towards the readout electronics via an applied voltage bias. Because of thermal energy and repulsion due to the electric fields inside such a device, the charge cloud diffuses, effectively getting larger in lateral size. In pixelated detectors, this effect can lead to detection of parts of the initial charge cloud in neighbouring pixels. As the probability of this crosstalk increases towards pixel edges, it is more prominent in detectors with smaller pixel size. Furthermore, fluorescence of the detector material above its K-edge can lead to additional charge carriers that add to the effect of charge sharing. Especially in photon counting detectors, charge sharing can lead to errors in the signal count. Problems of charge sharing Especially in photon counting detectors, the energy of an incident photon is correlated with the net sum of the charge in the primary charge cloud. Such detectors often use thresholds, both to operate above a certain noise level and to discriminate incident photons of different energies. If part of the charge cloud diffuses to the read-out electronics of a neighbouring pixel, this results in the detection of two events, each with lower energy than the primary photon. Furthermore, if the resulting charge in one of the affected pixels is smaller than the threshold, the event is discarded as noise. In general, this leads to the underestimation
https://en.wikipedia.org/wiki/Heaviside%E2%80%93Lorentz%20units
Heaviside–Lorentz units (or Lorentz–Heaviside units) constitute a system of units and quantities that extends the CGS with a particular set of equations that defines electromagnetic quantities, named for Oliver Heaviside and Hendrik Antoon Lorentz. They share with the CGS-Gaussian system the property that the electric constant ε0 and magnetic constant μ0 do not appear in the defining equations for electromagnetism, having been incorporated implicitly into the electromagnetic quantities. Heaviside–Lorentz units may be thought of as normalizing ε0 = 1 and μ0 = 1, while at the same time revising Maxwell's equations to use the speed of light c instead. The Heaviside–Lorentz unit system, like the International System of Quantities upon which the SI system is based, but unlike the CGS-Gaussian system, is rationalized, with the result that there are no factors of 4π appearing explicitly in Maxwell's equations. That this system is rationalized partly explains its appeal in quantum field theory: the Lagrangian underlying the theory does not have any factors of 4π when this system is used. Consequently, electromagnetic quantities in the Heaviside–Lorentz system differ by factors of √4π in the definitions of the electric and magnetic fields and of electric charge. They are often used in relativistic calculations and in particle physics. They are particularly convenient when performing calculations in spatial dimensions greater than three, such as in string theory. Motivation In the mid-to-late 19th century, electromagnetic measurements were frequently made in either the so-called electrostatic (ESU) or electromagnetic (EMU) systems of units. These were based respectively on Coulomb's law and Ampère's force law. Use of these systems, as with the subsequently developed Gaussian CGS units, resulted in many factors of 4π appearing in formulas for electromagnetic results, including those without circular or spherical symmetry. For example, in the CGS-Gaussian system, the capacitance of a sphere of radius r is r, while that
https://en.wikipedia.org/wiki/Zero-product%20property
In algebra, the zero-product property states that the product of two nonzero elements is nonzero. In other words, if ab = 0, then a = 0 or b = 0. This property is also known as the rule of zero product, the null factor law, the multiplication property of zero, the nonexistence of nontrivial zero divisors, or one of the two zero-factor properties. All of the number systems studied in elementary mathematics — the integers ℤ, the rational numbers ℚ, the real numbers ℝ, and the complex numbers ℂ — satisfy the zero-product property. In general, a ring which satisfies the zero-product property is called a domain. Algebraic context Suppose A is an algebraic structure. We might ask, does A have the zero-product property? In order for this question to have meaning, A must have both additive structure and multiplicative structure. Usually one assumes that A is a ring, though it could be something else, e.g. the set of nonnegative integers with ordinary addition and multiplication, which is only a (commutative) semiring. Note that if A satisfies the zero-product property, and if B is a subset of A, then B also satisfies the zero product property: if a and b are elements of B such that ab = 0, then either a = 0 or b = 0, because a and b can also be considered as elements of A. Examples A ring in which the zero-product property holds is called a domain. A commutative domain with a multiplicative identity element is called an integral domain. Any field is an integral domain; in fact, any subring of a field is an integral domain (as long as it contains 1). Similarly, any subring of a skew field is a domain. Thus, the zero-product property holds for any subring of a skew field. If p is a prime number, then the ring of integers modulo p has the zero-product property (in fact, it is a field). The Gaussian integers are an integral domain because they are a subring of the complex numbers. In the strictly skew field of quaternions, the zero-product property holds. This ring is not an integral domain, because the multiplication is not
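As a worked illustration (a standard example, not taken from the article above): in the integers the property holds, but in the ring of integers modulo 6 it fails, since

\text{in } \mathbb{Z}: \quad ab = 0 \implies a = 0 \ \text{or}\ b = 0, \qquad \text{in } \mathbb{Z}/6\mathbb{Z}: \quad 2 \cdot 3 \equiv 0 \pmod{6}

with 2 and 3 both nonzero; hence \mathbb{Z}/6\mathbb{Z} has nontrivial zero divisors and is not a domain.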
https://en.wikipedia.org/wiki/BORGChat
BORGChat is a LAN messaging software program. It achieved a relative degree of popularity and is considered to be a complete LAN chat program. It has been superseded by commercial products which allow voice chat, video conferencing, central monitoring and administration. An extension called "BORGVoice" adds word producing chat capabilities to BORGChat; the extension remains in alpha stage. History BORGChat was first published by Ionut Cioflan (nickname "IOn") in 2002. The name comes from the Borg race from Star Trek: the Borg are a massive society of cybernetic automatons abducted and assimilated from thousands of species. The Borg collective improves by consuming technologies, and in a similar way BORGChat is meant to "assimilate". Features The software supports the following features: Public and private chat rooms (channels), support for own chat rooms Avatars with user information and online alerts Sending private messages Sending files and pictures, with pause and bandwidth management Animated smileys (emoticons) and sound effects (beep) View computers and network shares Discussion logs in the LAN Message filter, ignore messages from other users Message board with Bulletin Board Code (bold, italic, underline) Multiple chat status modes: Available/Busy/Away with customizable messages Multi-language support (with the possibility of adding more languages): English, Romanian, Swedish, Spanish, Polish, Slovak, Italian, Bulgarian, German, Russian, Turkish, Ukrainian, Slovenian, Czech, Danish, French, Latvian, Portuguese, Urdu, Dutch, Hungarian, Serbian, Macedonian. See also Synchronous conferencing Comparison of LAN messengers
https://en.wikipedia.org/wiki/Content-addressable%20storage
Content-addressable storage (CAS), also referred to as content-addressed storage or fixed-content storage, is a way to store information so it can be retrieved based on its content, not its name or location. It has been used for high-speed storage and retrieval of fixed content, such as documents stored for compliance with government regulations. Content-addressable storage is similar to content-addressable memory. CAS systems work by passing the content of the file through a cryptographic hash function to generate a unique key, the "content address". The file system's directory stores these addresses and a pointer to the physical storage of the content. Because an attempt to store the same file will generate the same key, CAS systems ensure that the files within them are unique, and because changing the file will result in a new key, CAS systems provide assurance that the file is unchanged. CAS became a significant market during the 2000s, especially after the introduction of the 2002 Sarbanes–Oxley Act which required the storage of enormous numbers of documents for long periods and retrieved only rarely. Ever-increasing performance of traditional file systems and new software systems have eroded the value of legacy CAS systems, which have become increasingly rare after roughly 2018. However, the principles of content addressability continue to be of great interest to computer scientists, and form the core of numerous emerging technologies, such as peer-to-peer file sharing, cryptocurrencies, and distributed computing. Description Location-based approaches Traditional file systems generally track files based on their filename. On random-access media like a floppy disk, this is accomplished using a directory that consists of some sort of list of filenames and pointers to the data. The pointers refer to a physical location on the disk, normally using disk sectors. On more modern systems and larger formats like hard drives, the directory is itself split into many
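The hash-then-store mechanism described above is small enough to sketch in a few lines of Python (an illustrative in-memory toy, not the design of any particular CAS product; the class name and dictionary backing store are invented for the example):

import hashlib

class ContentAddressableStore:
    """Toy in-memory content-addressable store."""

    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        # The content address is a cryptographic hash of the content itself.
        address = hashlib.sha256(data).hexdigest()
        self._blobs[address] = data  # identical content always maps to the same address
        return address

    def get(self, address: str) -> bytes:
        return self._blobs[address]

store = ContentAddressableStore()
addr = store.put(b"compliance document, revision 1")
assert store.get(addr) == b"compliance document, revision 1"
# Changing the content produces a different address, so the original is provably unchanged.
assert store.put(b"compliance document, revision 2") != addr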
https://en.wikipedia.org/wiki/Phenomics
Phenomics is the systematic study of traits that make up a phenotype. The term was coined by UC Berkeley and LBNL scientist Steven A. Garan. As such, it is a transdisciplinary area of research that involves biology, data sciences, engineering and other fields. Phenomics is concerned with the measurement of the phenotype, where a phenome is a set of traits (physical and biochemical traits) that can be produced by a given organism over the course of development and in response to genetic mutation and environmental influences. It is also important to remember that an organism's phenotype changes with time. The relationship between phenotype and genotype enables researchers to understand and study pleiotropy. Phenomics concepts are used in functional genomics, pharmaceutical research, metabolic engineering, agricultural research, and increasingly in phylogenetics. Technical challenges involve improving, both qualitatively and quantitatively, the capacity to measure phenomes. Applications Plant sciences In plant sciences, phenomics research occurs in both field and controlled environments. Field phenomics encompasses the measurement of phenotypes that occur in both cultivated and natural conditions, whereas controlled environment phenomics research involves the use of glass houses, growth chambers, and other systems where growth conditions can be manipulated. The University of Arizona's Field Scanner in Maricopa, Arizona is a platform developed to measure field phenotypes. Controlled environment systems include the Enviratron at Iowa State University, the Plant Cultivation Hall under construction at IPK, and platforms at the Donald Danforth Plant Science Center, the University of Nebraska-Lincoln, and elsewhere. Standards, methods, tools, and instrumentation A Minimal Information About a Plant Phenotyping Experiment (MIAPPE) standard is available and in use among many researchers collecting and organizing plant phenomics data. A diverse set of computer vision methods exist
https://en.wikipedia.org/wiki/Elliott%20Avedon%20Museum%20and%20Archive%20of%20Games
The Elliott Avedon Museum and Archive of Games was a public board game museum housed at the University of Waterloo, in Waterloo, Ontario, Canada. It was established in 1971 as the Museum and Archive of Games, and renamed in 2000 in honour of its founder and first curator. It housed over 5,000 objects and documents related to games. It was administered by the Faculty of Applied Health Sciences, and was located within B.C. Matthews Hall, near the north end of the main campus. The museum had both physical and virtual exhibits about a diversity of board games and related objects. The resources of the museum contributed to the university's program in Recreation and Leisure Studies. The University closed the museum in 2009 and transferred the physical collection to the Canadian Museum of Civilization (now known as the Canadian Museum of History); however, information about the collection, which includes over 5,000 objects and a large number of archival documents about games, is still hosted on the University website. There are over 700 web pages of virtual exhibits, which include videos, photographs, diagrams, other graphics, and textual information about games. See also History of games History of video games
https://en.wikipedia.org/wiki/Particle%20aggregation
Particle agglomeration refers to the formation of assemblages in a suspension and represents a mechanism leading to the functional destabilization of colloidal systems. During this process, particles dispersed in the liquid phase stick to each other, and spontaneously form irregular particle assemblages, flocs, or agglomerates. This phenomenon is also referred to as coagulation or flocculation and such a suspension is also called unstable. Particle agglomeration can be induced by adding salts or other chemicals referred to as coagulant or flocculant. Particle agglomeration can be a reversible or irreversible process. Particle agglomerates defined as "hard agglomerates" are more difficult to redisperse to the initial single particles. In the course of agglomeration, the agglomerates will grow in size, and as a consequence they may settle to the bottom of the container, which is referred to as sedimentation. Alternatively, a colloidal gel may form in concentrated suspensions which changes its rheological properties. The reverse process whereby particle agglomerates are re-dispersed as individual particles, referred to as peptization, hardly occurs spontaneously, but may occur under stirring or shear. Colloidal particles may also remain dispersed in liquids for long periods of time (days to years). This phenomenon is referred to as colloidal stability and such a suspension is said to be functionally stable. Stable suspensions are often obtained at low salt concentrations or by addition of chemicals referred to as stabilizers or stabilizing agents. The stability of particles, colloidal or otherwise, is most commonly evaluated in terms of zeta potential. This parameter provides a readily quantifiable measure of interparticle repulsion, which is the key inhibitor of particle aggregation. Similar agglomeration processes occur in other dispersed systems too. In emulsions, they may also be coupled to droplet coalescence, and not only lead to sedimentation but also to crea
https://en.wikipedia.org/wiki/Bruce%20Maccabee
Bruce Maccabee (born May 6, 1942) is an American optical physicist formerly employed by the U.S. Navy, and a ufologist. Biography Maccabee received a B.S. in physics at Worcester Polytechnic Institute in Worcester, Mass., and then at American University, Washington, DC, (M.S. and Ph.D. in physics). In 1972 he began his career at the Naval Ordnance Laboratory, White Oak, Silver Spring, Maryland; which later became the Naval Surface Warfare Center Dahlgren Division. Maccabee retired from government service in 2008. He has worked on optical data processing, generation of underwater sound with lasers and various aspects of the Strategic Defense Initiative (SDI) and Ballistic Missile Defense (BMD) using high power lasers. Ufology Maccabee has been interested in UFOs since the late 1960s when he joined the National Investigations Committee on Aerial Phenomena (NICAP) and was active in research and investigation for NICAP until its demise in 1980. He became a member of the Mutual UFO Network (MUFON) in 1975 and was subsequently appointed to the position of state director for Maryland, a position he still holds. In 1979 he was instrumental in establishing the Fund for UFO Research (FUFOR) and was the chairman for about 13 years. He presently serves on the National Board of the Fund. His UFO research and investigations (which, he often stresses, are completely unrelated to his Navy work) have included the Kenneth Arnold sighting (June 24, 1947), the McMinnville, Oregon (Trent) photos of 1950, the Gemini 11 astronaut photos of September, 1966, the Tehran UFO incident of September 1976, the New Zealand sightings of December 1978, the Japan Airlines (JAL1628) sighting of November 1986, the numerous sightings of Gulf Breeze UFO incident, 1987–1988, the "red bubba" sightings, 1990-1992 (including his own sighting in September, 1991), the Mexico City video of August, 1997 (which he deemed a hoax), the Phoenix lights sightings of March 13, 1997, 2004 Mexican UFO incident a
https://en.wikipedia.org/wiki/Thermal%20physics
Thermal physics is the combined study of thermodynamics, statistical mechanics, and the kinetic theory of gases. This umbrella subject is typically designed for physics students and functions to provide a general introduction to each of the three core heat-related subjects. Other authors, however, define thermal physics loosely as a summation of only thermodynamics and statistical mechanics. Thermal physics can be seen as the study of systems with a large number of atoms; it unites thermodynamics with statistical mechanics. Overview Thermal physics, generally speaking, is the study of the statistical nature of physical systems from an energetic perspective. Starting with the basics of heat and temperature, thermal physics analyzes the first law of thermodynamics and second law of thermodynamics from the statistical perspective, in terms of the number of microstates corresponding to a given macrostate. In addition, the concept of entropy is studied via quantum theory. A central topic in thermal physics is the canonical probability distribution. The electromagnetic nature of photons and phonons is studied, showing that the oscillations of electromagnetic fields and of crystal lattices have much in common. Waves form a basis for both, provided one incorporates quantum theory. Other topics studied in thermal physics include: chemical potential, the quantum nature of an ideal gas, i.e. in terms of fermions and bosons, Bose–Einstein condensation, Gibbs free energy, Helmholtz free energy, chemical equilibrium, phase equilibrium, the equipartition theorem, entropy at absolute zero, and transport processes such as mean free path, viscosity, and conduction. See also Heat transfer physics Information theory Philosophy of thermal and statistical physics Thermodynamic instruments
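Since the canonical probability distribution is singled out as a central topic, its textbook form may be worth recording here (a standard formula, not one quoted from the article):

P_i = \frac{e^{-E_i / (k_B T)}}{Z}, \qquad Z = \sum_j e^{-E_j / (k_B T)},

where E_i is the energy of microstate i, k_B is the Boltzmann constant, T is the temperature and Z is the partition function.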
https://en.wikipedia.org/wiki/Transheterozygote
The term transheterozygote is used in modern genetics periodicals in two different ways. In the first, the transheterozygote has one mutant (-) and one wildtype allele (+) at each of two different genes (A-/A+ and B-/B+ where A and B are different genes). In the second, the transheterozygote carries two different mutated alleles of the same gene (A*/A', see example below). This second definition also applies to the term "heteroallelic combination". Organisms with one mutant and one wildtype allele at one locus are called simply heterozygous, not transheterozygous. Transheterozygotes are useful in the study of genetic interactions and complementation testing. Transheterozygous at two loci A transheterozygote is a diploid organism that is heterozygous at two different loci (genes). Each of the two loci has one natural (or wild type) allele and one allele that differs from the natural allele because of a mutation. Such an organism can be created by crossing together two organisms that carry one mutation each, in two different genes, and selecting for the presence of both mutations simultaneously in an individual offspring. The offspring will have one mutant allele and one wildtype allele at each of the two genes being studied. Transheterozygotes are useful in the study of genetic interactions. An example from Drosophila research: the wing vein phenotype of a recessive mutation in the Epidermal growth factor receptor (Egfr), a gene required for communication between cells, can be dominantly enhanced by a recessive mutation in Notch, another cell-signalling gene. A transheterozygote between Egfr and Notch has the genotype Notch/+ ; Egfr/+ (where Notch and Egfr represent mutant alleles, and + represents wildtype alleles). The dominant interaction between Egfr and Notch suggested that the Egfr and Notch signalling pathways act together within the cell to affect the pattern of veins in the fly's wings. Heteroallelic combination at one locus Transheterozygote refers
https://en.wikipedia.org/wiki/Free%20entropy
A thermodynamic free entropy is an entropic thermodynamic potential analogous to the free energy. It is also known as a Massieu, Planck, or Massieu–Planck potential (or function), or (rarely) as free information. In statistical mechanics, free entropies frequently appear as the logarithm of a partition function. The Onsager reciprocal relations, in particular, are developed in terms of entropic potentials. In mathematics, free entropy means something quite different: it is a generalization of entropy defined in the subject of free probability. A free entropy is generated by a Legendre transformation of the entropy. The different potentials correspond to different constraints to which the system may be subjected. Examples The most common examples are the Massieu potential Φ = S − U/T = −F/T and the Planck potential Ξ = S − U/T − PV/T = −G/T, where S is entropy, Φ is the Massieu potential, Ξ is the Planck potential, U is internal energy, T is temperature, P is pressure, V is volume, F is Helmholtz free energy, G is Gibbs free energy, N_i is the number of particles (or number of moles) composing the i-th chemical component, μ_i is the chemical potential of the i-th chemical component, s is the total number of components, and i is the i-th component. Note that the use of the terms "Massieu" and "Planck" for explicit Massieu–Planck potentials is somewhat obscure and ambiguous. In particular "Planck potential" has alternative meanings. The most standard notation for an entropic potential is , used by both Planck and Schrödinger. (Note that Gibbs used to denote the free energy.) Free entropies were invented by the French engineer François Massieu in 1869, and actually predate Gibbs's free energy (1875). Dependence of the potentials on the natural variables Entropy By the definition of a total differential, dS = (∂S/∂U) dU + (∂S/∂V) dV + Σ_i (∂S/∂N_i) dN_i. From the equations of state, dS = (1/T) dU + (P/T) dV − Σ_i (μ_i/T) dN_i. The differentials in the above equation are all of extensive variables, so they may be integrated to yield S = U/T + PV/T − Σ_i (μ_i N_i)/T. Massieu potential / Helmholtz free entropy Starting over at the definition of Φ and taking the total differential, we have via a Legendre transform (and the c
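In LaTeX form, the two potentials and the entropy differential reconstructed above read (standard expressions; the symbols Φ and Ξ are one common convention among several):

\Phi = S - \frac{U}{T} = -\frac{F}{T}, \qquad \Xi = S - \frac{U}{T} - \frac{PV}{T} = -\frac{G}{T}, \qquad dS = \frac{1}{T}\,dU + \frac{P}{T}\,dV - \sum_{i=1}^{s} \frac{\mu_i}{T}\,dN_i.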
https://en.wikipedia.org/wiki/Idle%20animation
Idle animations are animations within video games that occur when the player character is not performing any actions. They serve to give games personality, act as an Easter egg for the player, or add realism. History One of the earliest games to feature an idle animation was Android Nim in 1978. The androids blink, look around, and seemingly talk to one another until the player gives an order. Two other early examples are Maziacs and The Pharaoh's Curse, both released in 1983. Idle animations grew in usage throughout the 16-bit era. Incorporating idle animations was done to give personality to games and their characters, as they are the only in-game actions aside from cutscenes where the characters are free to act independently of the player's input. The length and detail of an idle animation can depend on the interaction between the player and character; for example, third-person player idle animations are longer to avoid looking robotic on repeated viewing. In modern 3D games idle animations are used to add realism. In games targeted towards younger audiences the idle animations are more likely to be complex or humorous; in comparison, games targeted towards older audiences tend to include more basic idle animations. Examples Maziacs - The sprite character will tap his feet, blink, and sit down. Sonic the Hedgehog - Sonic will impatiently tap his foot when the player does not move. Donkey Kong Country 2: Diddy's Kong Quest - Diddy Kong juggles a few balls after a few seconds without input. Super Mario 64 - Mario looks around and eventually will fall asleep. Grand Theft Auto: San Andreas - Carl "CJ" Johnson will sing songs including "Nuthin' But a 'G' Thang" and "My Lovin' (You're Never Gonna Get It)". Red Dead Redemption 2 - When left on a horse for a while, Arthur Morgan will pet the horse.
https://en.wikipedia.org/wiki/Jugular%20foramen
A jugular foramen is one of the two (left and right) large foramina (openings) in the base of the skull, located behind the carotid canal. It is formed by the temporal bone and the occipital bone. It allows many structures to pass, including the inferior petrosal sinus, three cranial nerves, the sigmoid sinus, and meningeal arteries. Structure The jugular foramen is formed in front by the petrous portion of the temporal bone, and behind by the occipital bone. It is generally slightly larger on the right side than on the left side. Contents The jugular foramen may be subdivided into three compartments, each with their own contents. The anterior compartment transmits the inferior petrosal sinus. The intermediate compartment transmits the glossopharyngeal nerve, the vagus nerve, and the accessory nerve. The posterior compartment transmits the sigmoid sinus (becoming the internal jugular vein), and some meningeal branches from the occipital artery and ascending pharyngeal artery. An alternative imaging based subclassification exists, delineated by the jugular spine which is a bony ridge partially separating the jugular foramen into two parts: The smaller, anteromedial, "pars nervosa" compartment contains CN IX, (tympanic nerve, a branch of CN IX), and receives the venous return from inferior petrosal sinus. The larger, posterolateral, "pars vascularis" compartment contains CN X, CN XI, Arnold's nerve (or the auricular branch of CN X involved in the Arnold's reflex, where external auditory meatus stimulation causes cough), jugular bulb, and posterior meningeal branch of ascending pharyngeal artery. Clinical significance Obstruction of the jugular foramen can result in jugular foramen syndrome. Additional images See also Occipitomastoid suture
https://en.wikipedia.org/wiki/Carlson%27s%20theorem
In mathematics, in the area of complex analysis, Carlson's theorem is a uniqueness theorem which was discovered by Fritz David Carlson. Informally, it states that two different analytic functions which do not grow very fast at infinity cannot coincide at the integers. The theorem may be obtained from the Phragmén–Lindelöf theorem, which is itself an extension of the maximum-modulus theorem. Carlson's theorem is typically invoked to defend the uniqueness of a Newton series expansion. Carlson's theorem has generalized analogues for other expansions. Statement Assume that f satisfies the following three conditions. The first two conditions bound the growth of f at infinity, whereas the third one states that f vanishes on the non-negative integers. f is an entire function of exponential type, meaning that |f(z)| ≤ C e^(τ|z|) for some real values C, τ. There exists c < π such that |f(iy)| ≤ C e^(c|y|) for every real y. f(n) = 0 for every non-negative integer n. Then f is identically zero. Sharpness First condition The first condition may be relaxed: it is enough to assume that f is analytic in Re z > 0, continuous in Re z ≥ 0, and satisfies |f(z)| ≤ C e^(τ|z|) there for some real values C, τ. Second condition To see that the second condition is sharp, consider the function f(z) = sin(πz). It vanishes on the integers; however, it grows exponentially on the imaginary axis with a growth rate of c = π, and indeed it is not identically zero. Third condition A result, due to Rubel, relaxes the condition that f vanish on the integers. Namely, Rubel showed that the conclusion of the theorem remains valid if f vanishes on a subset A ⊂ {0, 1, 2, ...} of upper density 1, meaning that limsup_(n→∞) |A ∩ {0, 1, ..., n−1}| / n = 1. This condition is sharp, meaning that the theorem fails for sets A of upper density smaller than 1. Applications Suppose f is a function that possesses all finite forward differences Δⁿf(0). Consider then the Newton series g(z) = Σ_(n≥0) binom(z, n) Δⁿf(0), with binom(z, n) the binomial coefficient and Δⁿf(0) the n-th forward difference. By construction, one then has that f(n) = g(n) for all non-negative integers n, so that the difference h(z) = f(z) − g(z) vanishes at all non-negative integers. This is one of the conditions of Carlson's theorem; if h obeys the othe
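In display form, the three hypotheses and the conclusion read (a standard rendering consistent with the reconstruction above):

|f(z)| \le C e^{\tau |z|} \ \text{for all } z \in \mathbb{C}, \qquad |f(iy)| \le C e^{c|y|} \ \text{with } c < \pi, \qquad f(n) = 0 \ \text{for } n = 0, 1, 2, \ldots \;\implies\; f \equiv 0.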
https://en.wikipedia.org/wiki/Costal%20cartilage
The costal cartilages are bars of hyaline cartilage that serve to prolong the ribs forward and contribute to the elasticity of the walls of the thorax. Costal cartilage is only found at the anterior ends of the ribs, providing medial extension. Differences from Ribs 1-12 The first seven pairs are connected with the sternum; the next three are each articulated with the lower border of the cartilage of the preceding rib; the last two have pointed extremities, which end in the wall of the abdomen. Like the ribs, the costal cartilages vary in their length, breadth, and direction. They increase in length from the first to the seventh, then gradually decrease to the twelfth. Their breadth, as well as that of the intervals between them, diminishes from the first to the last. They are broad at their attachments to the ribs, and taper toward their sternal extremities, excepting the first two, which are of the same breadth throughout, and the sixth, seventh, and eighth, which are enlarged where their margins are in contact. They also vary in direction: the first descends a little to the sternum, the second is horizontal, the third ascends slightly, while the others are angular, following the course of the ribs for a short distance, and then ascending to the sternum or preceding cartilage. Structure Each costal cartilage presents two surfaces, two borders, and two extremities. Surfaces The anterior surface is convex, and looks forward and upward: that of the first gives attachment to the costoclavicular ligament and the subclavius muscle; those of the first six or seven at their sternal ends, to the pectoralis major. The others are covered by, and give partial attachment to, some of the flat muscles of the abdomen. The posterior surface is concave, and directed backward and downward; that of the first gives attachment to the sternothyroideus, those of the third to the sixth inclusive to the transversus thoracis muscle, and the six or seven inferior ones to the transvers
https://en.wikipedia.org/wiki/%CE%A9-consistent%20theory
In mathematical logic, an ω-consistent (or omega-consistent, also called numerically segregative) theory is a theory (collection of sentences) that is not only (syntactically) consistent (that is, does not prove a contradiction), but also avoids proving certain infinite combinations of sentences that are intuitively contradictory. The name is due to Kurt Gödel, who introduced the concept in the course of proving the incompleteness theorem. Definition A theory T is said to interpret the language of arithmetic if there is a translation of formulas of arithmetic into the language of T so that T is able to prove the basic axioms of the natural numbers under this translation. A T that interprets arithmetic is ω-inconsistent if, for some property P of natural numbers (defined by a formula in the language of T), T proves P(0), P(1), P(2), and so on (that is, for every standard natural number n, T proves that P(n) holds), but T also proves that there is some natural number n such that P(n) fails. This may not generate a contradiction within T because T may not be able to prove for any specific value of n that P(n) fails, only that there is such an n. In particular, such n is necessarily a nonstandard integer in any model for T (Quine has thus called such theories "numerically insegregative"). T is ω-consistent if it is not ω-inconsistent. There is a weaker but closely related property of Σ1-soundness. A theory T is Σ1-sound (or 1-consistent, in another terminology) if every Σ01-sentence provable in T is true in the standard model of arithmetic N (i.e., the structure of the usual natural numbers with addition and multiplication). If T is strong enough to formalize a reasonable model of computation, Σ1-soundness is equivalent to demanding that whenever T proves that a Turing machine C halts, then C actually halts. Every ω-consistent theory is Σ1-sound, but not vice versa. More generally, we can define an analogous concept for higher levels of the arithmetical hierarchy
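A standard illustration (a textbook example, not drawn from the article): if PA is consistent, then by Gödel's second incompleteness theorem the theory T = PA + ¬Con(PA) is also consistent, yet it is ω-inconsistent. Writing P(n) for "n is not the code of a PA-proof of 0 = 1",

T \vdash P(0),\ P(1),\ P(2),\ \ldots \qquad\text{and}\qquad T \vdash \exists n\, \neg P(n),

since each numerical instance P(n) is a true, decidable arithmetic statement, while ∃n ¬P(n) is just ¬Con(PA); the witnessing n exists only as a nonstandard integer in any model of T.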
https://en.wikipedia.org/wiki/Multiple%20sequence%20alignment
Multiple sequence alignment (MSA) may refer to the process or the result of sequence alignment of three or more biological sequences, generally protein, DNA, or RNA. In many cases, the input set of query sequences is assumed to have an evolutionary relationship by which they share a linkage and are descended from a common ancestor. From the resulting MSA, sequence homology can be inferred and phylogenetic analysis can be conducted to assess the sequences' shared evolutionary origins. Visual depictions of the alignment, as in the image at right, illustrate mutation events such as point mutations (single amino acid or nucleotide changes) that appear as differing characters in a single alignment column, and insertion or deletion mutations (indels or gaps) that appear as hyphens in one or more of the sequences in the alignment. Multiple sequence alignment is often used to assess sequence conservation of protein domains, tertiary and secondary structures, and even individual amino acids or nucleotides. Computational algorithms are used to produce and analyse the MSAs due to the difficulty and intractability of manually processing the sequences given their biologically-relevant length. MSAs require more sophisticated methodologies than pairwise alignment because they are more computationally complex. Most multiple sequence alignment programs use heuristic methods rather than global optimization because identifying the optimal alignment between more than a few sequences of moderate length is prohibitively computationally expensive. On the other hand, heuristic methods generally fail to give guarantees on the solution quality, with heuristic solutions shown to be often far below the optimal solution on benchmark instances. Problem statement Given n sequences S1, S2, ..., Sn, similar to the form below: A multiple sequence alignment is taken of this set of sequences by inserting any number of gaps needed into each of the sequences until the modified sequences, S'1, S'2, ..., S'n, all conform to leng
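As a toy illustration of what an alignment looks like in practice (the three short gapped sequences and the conservation score below are invented for this example, not taken from any benchmark), the following Python sketch represents an MSA as equal-length gapped strings and reports how conserved each column is:

alignment = [
    "MK-TAYIAKQR",
    "MKVTAYIA-QR",
    "MK-TAYLAKQR",
]
assert len(set(len(row) for row in alignment)) == 1  # all rows must have equal length

for col in range(len(alignment[0])):
    column = [row[col] for row in alignment]
    residues = [c for c in column if c != "-"]        # ignore gap characters
    if residues:
        most_common = max(set(residues), key=residues.count)
        conservation = residues.count(most_common) / len(alignment)
    else:
        most_common, conservation = "-", 0.0
    print(col, "".join(column), most_common, round(conservation, 2))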
https://en.wikipedia.org/wiki/Tetracycline-controlled%20transcriptional%20activation
Tetracycline-controlled transcriptional activation is a method of inducible gene expression where transcription is reversibly turned on or off in the presence of the antibiotic tetracycline or one of its derivatives (e.g. doxycycline). Tetracycline-controlled gene expression is based upon the mechanism of resistance to tetracycline antibiotic treatment found in gram-negative bacteria. In nature, the Ptet promoter expresses TetR (the repressor) and TetA, the protein that pumps tetracycline antibiotic out of the cell. The difference between Tet-On and Tet-Off is not whether the transactivator turns a gene on or off, as the name might suggest; rather, both proteins activate expression. The difference relates to their respective response to tetracycline or doxycycline (Dox, a more stable tetracycline analogue); Tet-Off activates expression in the absence of Dox, whereas Tet-On activates in the presence of Dox. Tet-Off and Tet-On The two most commonly used inducible expression systems for research of eukaryote cell biology are named Tet-Off and Tet-On. The Tet-Off system for controlling expression of genes of interest in mammalian cells was developed by Professors Hermann Bujard and Manfred Gossen at the University of Heidelberg and first published in 1992. The Tet-Off system makes use of the tetracycline transactivator (tTA) protein, which is created by fusing one protein, TetR (tetracycline repressor), found in Escherichia coli bacteria, with the activation domain of another protein, VP16, found in the herpes simplex virus. The resulting tTA protein is able to bind to DNA at specific TetO operator sequences. In most Tet-Off systems, several repeats of such TetO sequences are placed upstream of a minimal promoter such as the CMV promoter. The entirety of several TetO sequences with a minimal promoter is called a tetracycline response element (TRE), because it responds to binding of the tetracycline transactivator protein tTA by increased expression of the gene or genes downstre
https://en.wikipedia.org/wiki/Photosynthetic%20efficiency
The photosynthetic efficiency is the fraction of light energy converted into chemical energy during photosynthesis in green plants and algae. Photosynthesis can be described by the simplified chemical reaction 6 H2O + 6 CO2 + energy → C6H12O6 + 6 O2 where C6H12O6 is glucose (which is subsequently transformed into other sugars, starches, cellulose, lignin, and so forth). The value of the photosynthetic efficiency is dependent on how light energy is defined – it depends on whether we count only the light that is absorbed, and on what kind of light is used (see Photosynthetically active radiation). It takes eight (or perhaps ten or more) photons to use one molecule of CO2. The Gibbs free energy for converting a mole of CO2 to glucose is 114 kcal, whereas eight moles of photons of wavelength 600 nm contains 381 kcal, giving a nominal efficiency of 30%. However, photosynthesis can occur with light up to wavelength 720 nm so long as there is also light at wavelengths below 680 nm to keep Photosystem II operating (see Chlorophyll). Using longer wavelengths means less light energy is needed for the same number of photons and therefore for the same amount of photosynthesis. For actual sunlight, where only 45% of the light is in the photosynthetically active wavelength range, the theoretical maximum efficiency of solar energy conversion is approximately 11%. In actuality, however, plants do not absorb all incoming sunlight (due to reflection, respiration requirements of photosynthesis and the need for optimal solar radiation levels) and do not convert all harvested energy into biomass, which results in a maximum overall photosynthetic efficiency of 3 to 6% of total solar radiation. If photosynthesis is inefficient, excess light energy must be dissipated to avoid damaging the photosynthetic apparatus. Energy can be dissipated as heat (non-photochemical quenching), or emitted as chlorophyll fluorescence. Typical efficiencies Plants Quoted values sunlight-to-biomass efficien
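The 30% figure quoted above can be checked with a few lines of arithmetic (the physical constants below are standard values; the 114 kcal and eight-photon numbers are taken from the text):

h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
N_A = 6.022e23       # Avogadro's number, 1/mol

wavelength = 600e-9                                   # m
photon_energy = h * c / wavelength                    # ~3.3e-19 J per photon
kcal_per_mole_photons = photon_energy * N_A / 4184    # ~47.7 kcal per mole of photons
energy_of_8_moles = 8 * kcal_per_mole_photons         # ~381 kcal

# Gibbs free energy for one mole of CO2 -> glucose step, divided by the photon energy used
efficiency = 114 / energy_of_8_moles
print(round(energy_of_8_moles), "kcal;", round(100 * efficiency), "% nominal efficiency")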
https://en.wikipedia.org/wiki/Ukkonen%27s%20algorithm
In computer science, Ukkonen's algorithm is a linear-time, online algorithm for constructing suffix trees, proposed by Esko Ukkonen in 1995. The algorithm begins with an implicit suffix tree containing the first character of the string. Then it steps through the string, adding successive characters until the tree is complete. This order of adding characters gives Ukkonen's algorithm its "on-line" property. The original algorithm presented by Peter Weiner proceeded backward from the last character to the first one, from the shortest to the longest suffix. A simpler algorithm was found by Edward M. McCreight, going from the longest to the shortest suffix. Implicit suffix tree While generating a suffix tree using Ukkonen's algorithm, we will see implicit suffix trees in intermediate steps, depending on the characters in string S. In implicit suffix trees, there is no edge with a $ (or any other termination character) label and no internal node with only one edge going out of it. High level description of Ukkonen's algorithm Ukkonen's algorithm constructs an implicit suffix tree Ti for each prefix S[1...i] of S (S being the string of length n). It first builds T1 using the 1st character, then T2 using the 2nd character, then T3 using the 3rd character, ..., Tn using the nth character. You can find the following characteristics in a suffix tree that uses Ukkonen's algorithm: The implicit suffix tree Ti+1 is built on top of the implicit suffix tree Ti. At any given time, Ukkonen's algorithm builds the suffix tree for the characters seen so far and so it has an on-line property, allowing the algorithm to have an execution time of O(n). Ukkonen's algorithm is divided into n phases (one phase for each character in the string of length n). Each phase i+1 is further divided into i+1 extensions, one for each of the i+1 suffixes of S[1...i+1]. Suffix extension is all about adding the next character into the suffix tree built so far. In extension j of phase i+1, the algorithm finds the end of S[j...i] (which is alre
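To make the phase/extension structure concrete, here is a deliberately naive Python sketch that maintains, character by character, an uncompressed suffix trie of the prefix read so far. It illustrates only the on-line invariant (after phase i+1 the structure represents every suffix of S[1...i+1]) and runs in quadratic time; Ukkonen's actual algorithm reaches linear time on a compressed tree with suffix links and an "active point", none of which appear here.

def online_suffix_trie(s):
    """Yield the (uncompressed) suffix trie after each phase; nodes are plain dicts."""
    root = {}
    ends = [root]                    # nodes where each suffix of the prefix read so far ends
    for i, ch in enumerate(s):       # phase i+1: extend with character s[i]
        new_ends = []
        for node in ends:            # one extension per existing suffix (including the empty one)
            new_ends.append(node.setdefault(ch, {}))
        new_ends.append(root)        # the new empty suffix
        ends = new_ends
        yield i + 1, root

for phase, trie in online_suffix_trie("abcab"):
    print("after phase", phase, trie)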
https://en.wikipedia.org/wiki/Light-addressable%20potentiometric%20sensor
A light-addressable potentiometric sensor (LAPS) is a sensor that uses light (e.g. LEDs) to select what will be measured. Light can activate carriers in semiconductors. History An example is the pH-sensitive LAPS (range pH4 to pH10) that uses LEDs in combination with (semi-conducting) silicon and a pH-sensitive Ta2O5 (SiO2; Si3N4) insulator. The LAPS has several advantages over other types of chemical sensors. The sensor surface is completely flat; no structures, wiring or passivation are required. At the same time, the "light-addressability" of the LAPS makes it possible to obtain a spatially resolved map of the distribution of the ion concentration in the specimen. The spatial resolution of the LAPS is an important factor and is determined by the beam size and the lateral diffusion of photocarriers in the semiconductor substrate. By illuminating parts of the semiconductor surface, electron-hole pairs are generated and a photocurrent flows. The LAPS is a semiconductor-based chemical sensor with an electrolyte-insulator-semiconductor (EIS) structure. Under a fixed bias voltage, the AC (kHz range) photocurrent signal varies depending on the solution. A two-dimensional mapping of the surface from the LAPS is possible by using a scanning laser beam. Optoelectronics Sensors
https://en.wikipedia.org/wiki/Sort%20%28C%2B%2B%29
sort is a generic function in the C++ Standard Library for doing comparison sorting. The function originated in the Standard Template Library (STL). The specific sorting algorithm is not mandated by the language standard and may vary across implementations, but the worst-case asymptotic complexity of the function is specified: a call to sort must perform no more than O(N log N) comparisons when applied to a range of N elements. Usage The sort function is included from the <algorithm> header of the C++ Standard Library, and carries three arguments: sort(first, last, cmp). Here, first and last are of a templated type that must be a random access iterator, and they must define a sequence of values, i.e., last must be reachable from first by repeated application of the increment operator to first. The third argument, also of a templated type, denotes a comparison predicate. This comparison predicate must define a strict weak ordering on the elements of the sequence to be sorted. The third argument is optional; if not given, the "less-than" (<) operator is used, which may be overloaded in C++. This code sample sorts a given array of integers (in ascending order) and prints it out.

#include <algorithm>
#include <iostream>
#include <iterator>

int main() {
    int array[] = { 23, 5, -10, 0, 0, 321, 1, 2, 99, 30 };
    std::sort(std::begin(array), std::end(array));
    for (size_t i = 0; i < std::size(array); ++i) {
        std::cout << array[i] << ' ';
    }
    std::cout << '\n';
}

The same functionality using a std::vector container, using its begin and end methods to obtain iterators:

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> vec = { 23, 5, -10, 0, 0, 321, 1, 2, 99, 30 };
    std::sort(vec.begin(), vec.end());
    for (size_t i = 0; i < vec.size(); ++i) {
        std::cout << vec[i] << ' ';
    }
    std::cout << '\n';
}

Genericity sort is specified generically, so that it can work on any random-access container and any way of determining that an element a of such a container should be placed before another element b. Although generically specified, sort is not easil
https://en.wikipedia.org/wiki/NAT%20Port%20Mapping%20Protocol
NAT Port Mapping Protocol (NAT-PMP) is a network protocol for establishing network address translation (NAT) settings and port forwarding configurations automatically without user effort. The protocol automatically determines the external IPv4 address of a NAT gateway, and provides means for an application to communicate the parameters for communication to peers. Apple introduced NAT-PMP in 2005 as part of the Bonjour specification, as an alternative to the more common ISO Standard Internet Gateway Device Protocol implemented in many NAT routers. The protocol was published as an informational Request for Comments (RFC) by the Internet Engineering Task Force (IETF) in RFC 6886. NAT-PMP runs over the User Datagram Protocol (UDP) and uses port number 5351. It has no built-in authentication mechanisms because forwarding a port typically does not allow any activity that could not also be achieved using STUN methods. The benefit of NAT-PMP over STUN is that it does not require a STUN server and a NAT-PMP mapping has a known expiration time, allowing the application to avoid sending inefficient keep-alive packets. NAT-PMP is the predecessor to the Port Control Protocol (PCP). See also Port Control Protocol (PCP) Internet Gateway Device Protocol (UPnP IGD) Universal Plug and Play (UPnP) NAT traversal STUN Zeroconf
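As a sketch of how small the protocol is on the wire, the following Python snippet sends the NAT-PMP "external address" request described in RFC 6886 (a two-byte request: version 0, opcode 0) to UDP port 5351 and decodes the reply; the gateway address used here is an assumption for illustration, and a real client would first discover the actual default gateway:

import socket
import struct

GATEWAY = "192.168.1.1"      # hypothetical gateway address, assumed for the example
NATPMP_PORT = 5351

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
# Request: version = 0, opcode = 0 (public/external address request).
sock.sendto(struct.pack("!BB", 0, 0), (GATEWAY, NATPMP_PORT))
try:
    data, _ = sock.recvfrom(16)
    # Reply: version, opcode (128 + 0), result code, seconds since start of epoch,
    # followed by the external IPv4 address as four bytes.
    version, opcode, result, epoch = struct.unpack("!BBHI", data[:8])
    external_ip = ".".join(str(b) for b in data[8:12])
    print("result code", result, "external address", external_ip)
except socket.timeout:
    print("no NAT-PMP gateway answered")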
https://en.wikipedia.org/wiki/Vertical%20and%20horizontal%20bundles
In mathematics, the vertical bundle and the horizontal bundle are vector bundles associated to a smooth fiber bundle. More precisely, given a smooth fiber bundle π : E → B, the vertical bundle VE and horizontal bundle HE are subbundles of the tangent bundle TE of E whose Whitney sum satisfies VE ⊕ HE ≅ TE. This means that, over each point e ∈ E, the fibers VeE and HeE form complementary subspaces of the tangent space TeE. The vertical bundle consists of all vectors that are tangent to the fibers, while the horizontal bundle requires some choice of complementary subbundle. To make this precise, define the vertical space at e ∈ E to be VeE := ker(dπe). That is, the differential dπe : TeE → TbB (where b = π(e)) is a linear surjection whose kernel has the same dimension as the fibers of π. If we write F = π⁻¹(b), then VeE consists of exactly the vectors in TeE which are also tangent to F. The name is motivated by low-dimensional examples like the trivial line bundle over a circle, which is sometimes depicted as a vertical cylinder projecting to a horizontal circle. A subspace HeE of TeE is called a horizontal space if TeE is the direct sum of VeE and HeE. The disjoint union of the vertical spaces VeE for each e in E is the subbundle VE of TE; this is the vertical bundle of E. Likewise, provided the horizontal spaces vary smoothly with e, their disjoint union is a horizontal bundle. The use of the words "the" and "a" here is intentional: each vertical subspace is unique, defined explicitly by ker(dπe). Excluding trivial cases, there are infinitely many horizontal subspaces at each point. Also note that arbitrary choices of horizontal space at each point will not, in general, form a smooth vector bundle; they must also vary in an appropriately smooth way. The horizontal bundle is one way to formulate the notion of an Ehresmann connection on a fiber bundle. Thus, for example, if E is a principal G-bundle, then the horizontal bundle is usually required to be G-invariant: such a choice is equivalent to a connection on the principal bundle. This notably occurs when E is the frame bundle
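In LaTeX notation, the pointwise definitions used above read (standard formulas consistent with the text, with b = π(e)):

V_e E = \ker\big(d\pi_e : T_e E \to T_{\pi(e)} B\big), \qquad T_e E = V_e E \oplus H_e E, \qquad TE = VE \oplus HE.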
https://en.wikipedia.org/wiki/The%20Last%20Blade%202
The Last Blade 2 is a video game developed and released by SNK in 1998. Like its predecessor, The Last Blade, it is a weapons-based versus fighting game originally released to arcades via the Neo Geo MVS arcade system, although it has since been released for various other platforms.

Gameplay

Gameplay elements remain the same as in the predecessor, with some minor adjustments. An "EX" mode was added to play, which is a combination of "Speed" and "Power". The game's introduction establishes a grimmer mood than that of its predecessor. The characters are colored slightly darker, and the game's cut-scenes are made longer to emphasize the importance of the plot. Characters are no longer equal, showing greater differences in strengths and weaknesses than before.

Plot

The game is set one year after the events of the first game. Long before humanity existed, death was an unknown, equally distant concept. The "Messenger from Afar" was born when death first came to the world. With time, the Sealing Rite was held to seal Death behind Hell's Gate. At that time, two worlds were born, one near and one far, beginning the history of life and death. Half a year has passed since Suzaku's madness, and the underworld is still linked by a great portal. Our world has been called upon. Legends of long ago told of the sealing of the boundary between the two worlds. The Sealing Rite would be necessary to hold back the spirits of that far away world.

Characters

Three new characters were introduced:
Hibiki Takane: daughter of a famed swordsmith, she is searching for the silver-haired man that requested the final blade her father would ever make.
Setsuna: a being believed to be the "Messenger from Afar", he requested a blade to be forged by Hibiki's father and is out to slay the Sealing Maiden.
Kojiroh Sanada: Shinsengumi captain of Unit Zero; investigating the Hell's Portal. Kojiroh is actually Kaori, his sister, who assumed his identity after his death to carry on his work.

Home versi
https://en.wikipedia.org/wiki/Tridiagonal%20matrix%20algorithm
In numerical linear algebra, the tridiagonal matrix algorithm, also known as the Thomas algorithm (named after Llewellyn Thomas), is a simplified form of Gaussian elimination that can be used to solve tridiagonal systems of equations. A tridiagonal system for n unknowns may be written as
\[
  a_i x_{i-1} + b_i x_i + c_i x_{i+1} = d_i ,
\]
where $a_1 = 0$ and $c_n = 0$. For such systems, the solution can be obtained in $O(n)$ operations instead of the $O(n^3)$ required by Gaussian elimination. A first sweep eliminates the $a_i$'s, and then an (abbreviated) backward substitution produces the solution. Examples of such matrices commonly arise from the discretization of the 1D Poisson equation and natural cubic spline interpolation.

Thomas' algorithm is not stable in general, but is so in several special cases, such as when the matrix is diagonally dominant (either by rows or columns) or symmetric positive definite; for a more precise characterization of stability of Thomas' algorithm, see Higham Theorem 9.12. If stability is required in the general case, Gaussian elimination with partial pivoting (GEPP) is recommended instead.

Method

The forward sweep consists of the computation of new coefficients as follows, denoting the new coefficients with primes:
\[
  c'_i =
  \begin{cases}
    \dfrac{c_1}{b_1}, & i = 1, \\[1ex]
    \dfrac{c_i}{b_i - a_i c'_{i-1}}, & i = 2, 3, \dots, n-1,
  \end{cases}
\]
and
\[
  d'_i =
  \begin{cases}
    \dfrac{d_1}{b_1}, & i = 1, \\[1ex]
    \dfrac{d_i - a_i d'_{i-1}}{b_i - a_i c'_{i-1}}, & i = 2, 3, \dots, n.
  \end{cases}
\]
The solution is then obtained by back substitution:
\[
  x_n = d'_n , \qquad x_i = d'_i - c'_i\, x_{i+1} , \quad i = n-1, n-2, \dots, 1 .
\]
The method above does not modify the original coefficient vectors, but must also keep track of the new coefficients. If the coefficient vectors may be modified, then an algorithm with less bookkeeping is:

For $i = 2, 3, \dots, n$ do
\[
  w = \frac{a_i}{b_{i-1}} , \qquad b_i := b_i - w\, c_{i-1} , \qquad d_i := d_i - w\, d_{i-1} ,
\]
followed by the back substitution
\[
  x_n = \frac{d_n}{b_n} , \qquad x_i = \frac{d_i - c_i\, x_{i+1}}{b_i} , \quad i = n-1, n-2, \dots, 1 .
\]

The implementation in a VBA subroutine without preserving the coefficient vectors:

Sub TriDiagonal_Matrix_Algorithm(N%, A#(), B#(), C#(), D#(), X#())
    Dim i%, W#
    For i = 2 To N
        W = A(i) / B(i - 1)
        B(i) = B(i) - W * C(i - 1)
        D(i) = D(i) - W * D(i - 1)
    Next i
    X(N) = D(N) / B(N)
    For i = N - 1 To 1 Step -1
        X(i) = (D(i) - C(i) * X(i + 1)) / B(i)
    Next i
End Sub

Derivation

The derivation of the tridiagonal matrix algorithm is a special case
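For comparison with the VBA routine, here is a brief C++ translation of the same in-place variant (not from the article; the 4×4 example system, the absence of pivoting, and the convention of overwriting the right-hand side with the solution are illustrative choices):

#include <iostream>
#include <vector>

// a, b, c are the sub-, main and super-diagonals (a[0] and c[n-1] unused),
// d is the right-hand side; the solution overwrites d. No pivoting is performed,
// so the caller must supply a matrix for which the method is stable
// (e.g. diagonally dominant or symmetric positive definite).
void thomas_solve(std::vector<double>& a, std::vector<double>& b,
                  std::vector<double>& c, std::vector<double>& d)
{
    const std::size_t n = d.size();
    for (std::size_t i = 1; i < n; ++i) {        // forward elimination
        double w = a[i] / b[i - 1];
        b[i] -= w * c[i - 1];
        d[i] -= w * d[i - 1];
    }
    d[n - 1] /= b[n - 1];                        // back substitution
    for (std::size_t i = n - 1; i-- > 0; ) {
        d[i] = (d[i] - c[i] * d[i + 1]) / b[i];
    }
}

int main()
{
    // Example: diagonals (-1, 2, -1) with right-hand side all ones.
    std::vector<double> a = { 0, -1, -1, -1 };
    std::vector<double> b = { 2,  2,  2,  2 };
    std::vector<double> c = { -1, -1, -1, 0 };
    std::vector<double> d = { 1,  1,  1,  1 };
    thomas_solve(a, b, c, d);
    for (double x : d) { std::cout << x << ' '; }
    std::cout << '\n';  // prints: 2 3 3 2
}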
https://en.wikipedia.org/wiki/Successive%20over-relaxation
In numerical linear algebra, the method of successive over-relaxation (SOR) is a variant of the Gauss–Seidel method for solving a linear system of equations, resulting in faster convergence. A similar method can be used for any slowly converging iterative process.

It was devised simultaneously by David M. Young Jr. and by Stanley P. Frankel in 1950 for the purpose of automatically solving linear systems on digital computers. Over-relaxation methods had been used before the work of Young and Frankel. An example is the method of Lewis Fry Richardson, and the methods developed by R. V. Southwell. However, these methods were designed for computation by human calculators, requiring some expertise to ensure convergence to the solution which made them inapplicable for programming on digital computers. These aspects are discussed in the thesis of David M. Young Jr.

Formulation

Given a square system of n linear equations with unknown x:
\[
  A\mathbf{x} = \mathbf{b} ,
\]
where $A = (a_{ij})$ is an $n \times n$ matrix, $\mathbf{x} = (x_1, \dots, x_n)^{\mathsf T}$ and $\mathbf{b} = (b_1, \dots, b_n)^{\mathsf T}$.

Then A can be decomposed into a diagonal component D, and strictly lower and upper triangular components L and U:
\[
  A = D + L + U ,
\]
where $D$ holds the diagonal entries of $A$, and $L$ and $U$ are its strictly lower and strictly upper triangular parts.

The system of linear equations may be rewritten as:
\[
  (D + \omega L)\,\mathbf{x} = \omega\mathbf{b} - \big[\omega U + (\omega - 1)D\big]\mathbf{x}
\]
for a constant ω > 1, called the relaxation factor.

The method of successive over-relaxation is an iterative technique that solves the left hand side of this expression for x, using the previous value for x on the right hand side. Analytically, this may be written as:
\[
  \mathbf{x}^{(k+1)} = (D + \omega L)^{-1}\big(\omega\mathbf{b} - \big[\omega U + (\omega - 1)D\big]\mathbf{x}^{(k)}\big) = L_\omega\,\mathbf{x}^{(k)} + \mathbf{c} ,
\]
where $\mathbf{x}^{(k)}$ is the kth approximation or iteration of $\mathbf{x}$ and $\mathbf{x}^{(k+1)}$ is the next or k + 1 iteration of $\mathbf{x}$. However, by taking advantage of the triangular form of (D+ωL), the elements of x(k+1) can be computed sequentially using forward substitution:
\[
  x_i^{(k+1)} = (1 - \omega)\, x_i^{(k)} + \frac{\omega}{a_{ii}}\Big(b_i - \sum_{j<i} a_{ij}\, x_j^{(k+1)} - \sum_{j>i} a_{ij}\, x_j^{(k)}\Big), \qquad i = 1, 2, \dots, n.
\]

Convergence

The choice of relaxation factor ω is not necessarily easy, and depends upon the properties of the coefficient matrix. In 1947, Ostrowski proved that if $A$ is symmetric and positive-definite then $\rho(L_\omega) < 1$ for $0 < \omega < 2$. Thus, convergence of the iteration process follows, but we are generally interested in faster convergence rather than just convergence.

Convergence R
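A brief sketch of the element-wise SOR sweep in C++ (not from the article; the test matrix, right-hand side, relaxation factor, and fixed number of sweeps are illustrative choices; the matrix is symmetric and diagonally dominant, so convergence is expected):

#include <iostream>

int main()
{
    const int n = 3;
    // Symmetric, diagonally dominant (hence positive-definite) test matrix.
    double A[n][n] = { {  4, -1,  0 },
                       { -1,  4, -1 },
                       {  0, -1,  4 } };
    double b[n] = { 2, 4, 10 };
    double x[n] = { 0, 0, 0 };       // initial guess
    const double omega = 1.1;        // relaxation factor, 0 < omega < 2

    for (int k = 0; k < 50; ++k) {   // fixed number of sweeps for simplicity
        for (int i = 0; i < n; ++i) {
            double sigma = 0.0;
            for (int j = 0; j < n; ++j) {
                if (j != i) { sigma += A[i][j] * x[j]; }  // x[j] already updated for j < i
            }
            x[i] = (1.0 - omega) * x[i] + (omega / A[i][i]) * (b[i] - sigma);
        }
    }
    for (int i = 0; i < n; ++i) {
        std::cout << "x[" << i << "] = " << x[i] << '\n';  // converges to 1, 2, 3
    }
    return 0;
}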
https://en.wikipedia.org/wiki/Internal%20auditory%20meatus
The internal auditory meatus (also meatus acusticus internus, internal acoustic meatus, internal auditory canal, or internal acoustic canal) is a canal within the petrous part of the temporal bone of the skull between the posterior cranial fossa and the inner ear. Structure The opening to the meatus is called the porus acusticus internus or internal acoustic opening. It is located inside the posterior cranial fossa of the skull, near the center of the posterior surface of the petrous part of the temporal bone. The size varies considerably. Its outer margins are smooth and rounded. The canal which comprises the internal auditory meatus is short (about 1 cm) and runs laterally into the bone. The lateral (outer) aspect of the canal is known as the fundus. The fundus is subdivided by two thin crests of bone to form three separate canals, through which course the facial and vestibulocochlear nerve branches. The falciform crest first divides the meatus into superior and inferior sections; a vertical crest (Bill's bar, named by William F. House) then divides the upper passage into anterior and posterior sections. Although there are three osseous canals, the fundus is conceptually divided more commonly into four quadrant areas according to the four major nerve branches of the inner ear: anterior superior - facial nerve area (contains facial nerve and nervus intermedius) anterior inferior - cochlear nerve area (contains cochlear nerve) posterior superior - superior vestibular area (contains superior division of vestibular nerve) posterior inferior - inferior vestibular area (contains inferior division of vestibular nerve) The cochlear and vestibular branches of cranial nerve VIII separate according to this schema and terminate in the inner ear. The facial nerve continues traveling through the facial canal, eventually exiting the skull at the stylomastoid foramen. Function The internal auditory meatus provides a passage through which the vestibulocochlear nerve (C
https://en.wikipedia.org/wiki/Microecosystem
Microecosystems can exist in locations which are precisely defined by critical environmental factors within small or tiny spaces. Such factors may include temperature, pH, chemical milieu, nutrient supply, presence of symbionts or solid substrates, gaseous atmosphere (aerobic or anaerobic) etc.

Some examples

Pond microecosystems

These microecosystems with limited water volume are often only of temporary duration and hence colonized by organisms which possess a drought-resistant spore stage in the lifecycle, or by organisms which do not need to live in water continuously. The ecosystem conditions applying at a typical pond edge can be quite different from those further from shore. Extremely space-limited water ecosystems can be found in, for example, the water collected in bromeliad leaf bases and the "pitchers" of Nepenthes.

Animal gut microecosystems

These include the buccal region (especially cavities in the gingiva), rumen, caecum etc. of mammalian herbivores or even invertebrate digestive tracts. In the case of mammalian gastrointestinal microecology, microorganisms such as protozoa and bacteria, as well as curious incompletely defined organisms (such as certain large structurally complex Selenomonads, Quinella ovalis "Quin's Oval", Magnoovum eadii "Eadie's Oval", Oscillospira etc.), can exist in the rumen as incredibly complex, highly enriched mixed populations (see the Moir and Masson images). This type of microecosystem can adjust rapidly to changes in the nutrition or health of the host animal (usually a ruminant such as cow, sheep, goat etc.); see Hungate's "The Rumen and its Microbes" (1966). Even within a small closed system such as the rumen there may exist a range of ecological conditions: Many organisms live freely in the rumen fluid whereas others require the substrate and metabolic products supplied by the stomach wall tissue with its folds and interstices. Interesting questions are also posed concerning the transfer of the strict anaerobe organisms in t
https://en.wikipedia.org/wiki/Fault%20Simulator
DevPartner Fault Simulator is a software development tool used to simulate application errors. It helps developers and quality assurance engineers write, test and debug those parts of the software responsible for handling fault situations which can occur within applications. The target application, where faults are simulated, behaves as if those faults were the result of a real software or hardware problem which the application could face. DevPartner Fault Simulator works with applications written for Microsoft Windows and .NET platforms and is integrated with the Microsoft Visual Studio development environment. DevPartner Fault Simulator belonged to the DevPartner family of products offered by Compuware. At some point before the product line was sold to Micro Focus in 2009, the product was retired. See also NuMega Software testing tools
https://en.wikipedia.org/wiki/Difference%20polynomials
In mathematics, in the area of complex analysis, the general difference polynomials are a polynomial sequence, a certain subclass of the Sheffer polynomials, which include the Newton polynomials, Selberg's polynomials, and the Stirling interpolation polynomials as special cases.

Definition

The general difference polynomial sequence is given by where is the binomial coefficient. For , the generated polynomials are the Newton polynomials The case of generates Selberg's polynomials, and the case of generates Stirling's interpolation polynomials.

Moving differences

Given an analytic function , define the moving difference of f as where is the forward difference operator. Then, provided that f obeys certain summability conditions, it may be represented in terms of these polynomials as The conditions for summability (that is, convergence) for this sequence are a fairly complex topic; in general, one may say that a necessary condition is that the analytic function be of less than exponential type. Summability conditions are discussed in detail in Boas & Buck.

Generating function

The generating function for the general difference polynomials is given by This generating function can be brought into the form of the generalized Appell representation by setting , , and .

See also

Carlson's theorem
Bernoulli polynomials of the second kind
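As a concrete illustration of the Newton special case mentioned above (a standard fact recorded here as a worked example, not a reconstruction of the article's omitted formulas): the Newton polynomials are the binomial coefficients, and the corresponding expansion of a function in them is Newton's forward difference series,
\[
  p_n(z) = \binom{z}{n} = \frac{z(z-1)\cdots(z-n+1)}{n!},
  \qquad
  f(z) = \sum_{n=0}^{\infty} \binom{z}{n}\, \Delta^n f(0),
\]
where $\Delta$ is the forward difference operator, $\Delta f(x) = f(x+1) - f(x)$. For example, for $f(z) = 2^z$ one has $\Delta^n f(0) = 1$ for every $n$, and for a non-negative integer $z$ the series reduces to the binomial identity $2^z = \sum_{n=0}^{z} \binom{z}{n}$.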