https://en.wikipedia.org/wiki/Human%20waste
Human waste (or human excreta) refers to the waste products of the human digestive system, menses, and human metabolism including urine and feces. As part of a sanitation system that is in place, human waste is collected, transported, treated and disposed of or reused by one method or another, depending on the type of toilet being used, the users' ability to pay for services and other factors. Fecal sludge management is used to deal with fecal matter collected in on-site sanitation systems such as pit latrines and septic tanks. The sanitation systems in place differ vastly around the world, with many people in developing countries having to resort to open defecation, where human waste is deposited in the environment, for lack of other options. Improvements in "water, sanitation and hygiene" (WASH) around the world are a key public health issue within international development and are the focus of Sustainable Development Goal 6. People in developed countries tend to use flush toilets where the human waste is mixed with water and transported to sewage treatment plants. Children's excreta can be disposed of in diapers and mixed with municipal solid waste. Diapers are also sometimes dumped directly into the environment, leading to public health risks. Terminology The term "human waste" is used in the general media to mean several things, such as sewage, sewage sludge, blackwater - in fact anything that may contain some human feces. In the stricter sense of the term, human waste is in fact human excreta, i.e. urine and feces, with or without water being mixed in. For example, dry toilets collect human waste without the addition of water. Health aspects Human waste is considered a biowaste, as it is a vector for both viral and bacterial diseases. It can be a serious health hazard if it gets into sources of drinking water. The World Health Organization (WHO) reports that nearly 2.2 million people die annually from diseases caused by contaminated water, such as cholera
https://en.wikipedia.org/wiki/SEA-PHAGES
SEA-PHAGES stands for Science Education Alliance-Phage Hunters Advancing Genomics and Evolutionary Science; it was formerly called the National Genomics Research Initiative. This was the first initiative launched by the Howard Hughes Medical Institute (HHMI) Science Education Alliance (SEA) by their director Tuajuanda C. Jordan in 2008 to improve the retention of Science, technology, engineering, and mathematics (STEM) students. SEA-PHAGES is a two-semester undergraduate research program administered by the University of Pittsburgh's Graham Hatfull's group and the Howard Hughes Medical Institute's Science Education Division. Students from over 100 universities nationwide engage in authentic individual research that includes a wet-bench laboratory and a bioinformatics component. Curriculum During the first semester of this program, classes of around 18-24 undergraduate students work under the supervision of one or two university faculty members and a graduate student assistant—who have completed two week-long training workshops—to isolate and characterize their own personal bacteriophage that infects a specific bacterial host cell from local soil samples. Once students have successfully isolated a phage, they are able to classify them by visualizing them through Electron microscope (EM) images. Also, DNA is extracted and purified by the students, and one sample is sent for sequencing to be ready for the second semester's curriculum. The second semester consists of the annotation of the genome the class sent to be sequenced. In that case, students work together to evaluate the genes for start-stop coordinates, ribosome-binding sites, and possible functions of those proteins in which the sequence codes. Once the annotation is completed, it is submitted to the National Center for Biotechnology Information's (NCBI) DNA sequence database GenBank. If there is still time in the semester or the sent DNA was not able to be sequenced, the class could request genome file fro
https://en.wikipedia.org/wiki/Directory-based%20coherence
Directory-based coherence is a mechanism to handle the cache coherence problem in distributed shared memory (DSM), a.k.a. non-uniform memory access (NUMA). Another popular way is to use a special type of computer bus between all the nodes as a "shared bus" (a.k.a. system bus). Directory-based coherence uses a special directory to serve instead of the shared bus in the bus-based coherence protocols. Both of these designs use the corresponding medium (i.e. directory or bus) as a tool to facilitate the communication between different nodes, and to guarantee that the coherence protocol is working properly along all the communicating nodes. In directory-based cache coherence, this is done by using the directory to keep track of the status of all cache blocks; the status of each block includes which cache coherence "state" the block is in, and which nodes are sharing the block at that time. This can be used to eliminate the need to broadcast signals to all nodes, sending them only to the nodes that are interested in this single block. Following are a few advantages and disadvantages of the directory-based cache coherence protocol: Scalability: This is one of the strongest motivations for going to directory-based designs. What we mean by scalability, in short, is how well a specific system handles a growing amount of work that it is responsible for. For this criterion, bus-based systems cannot do well, due to the limitation caused by having a shared bus that all nodes use at the same time. For a relatively small number of nodes, bus systems can do well. However, as the number of nodes grows, some problems may occur in this regard, especially since only one node is allowed to use the bus at a time, which will significantly harm the performance of the overall system. On the other hand, using directory-based systems, there will be no such bottleneck to constrain the scalability of the system. Simplicity: This is one of the points where the
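To make the directory bookkeeping concrete, here is a minimal Python sketch (an illustration added here, not part of the source article) of an MSI-style directory entry: it records each block's state and sharer set, and services misses by messaging only the interested nodes rather than broadcasting.

```python
# Minimal sketch of an MSI-style directory for one home node.
# Illustrative only: real protocols add transient states, acks, and race handling.

class Directory:
    def __init__(self):
        # block -> {"state": "I" | "S" | "M", "sharers": set of node ids}
        self.entries = {}

    def _entry(self, block):
        return self.entries.setdefault(block, {"state": "I", "sharers": set()})

    def read_miss(self, node, block):
        """Return the point-to-point messages needed to service a read miss."""
        e = self._entry(block)
        msgs = []
        if e["state"] == "M":
            owner = next(iter(e["sharers"]))
            msgs.append(("fetch/downgrade", owner, block))  # owner writes back, keeps a shared copy
            e["state"] = "S"
        e["sharers"].add(node)
        if e["state"] == "I":
            e["state"] = "S"
        msgs.append(("data_reply", node, block))
        return msgs

    def write_miss(self, node, block):
        """Invalidate only the current sharers -- no broadcast needed."""
        e = self._entry(block)
        msgs = [("invalidate", n, block) for n in e["sharers"] if n != node]
        e["sharers"] = {node}
        e["state"] = "M"
        msgs.append(("data_reply_exclusive", node, block))
        return msgs


d = Directory()
print(d.read_miss(1, 0x40))   # node 1 gets a shared copy
print(d.read_miss(2, 0x40))   # node 2 also shared
print(d.write_miss(3, 0x40))  # only nodes 1 and 2 receive invalidations
```

The point of the sketch is only the per-block state and sharer tracking that replaces the bus broadcast; everything else about a real protocol is omitted.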
https://en.wikipedia.org/wiki/List%20of%20set%20theory%20topics
This page is a list of articles related to set theory. Articles on individual set theory topics Lists related to set theory Glossary of set theory List of large cardinal properties List of properties of sets of reals List of set identities and relations Set theorists Societies and organizations Association for Symbolic Logic The Cabal Topics Set theory
https://en.wikipedia.org/wiki/List%20of%20mathematics%20history%20topics
This is a list of mathematics history topics, by Wikipedia page. See also list of mathematicians, timeline of mathematics, history of mathematics, list of publications in mathematics. 1729 (anecdote) Adequality Archimedes Palimpsest Archimedes' use of infinitesimals Arithmetization of analysis Brachistochrone curve Chinese mathematics Cours d'Analyse Edinburgh Mathematical Society Erlangen programme Fermat's Last Theorem Greek mathematics Thomas Little Heath Hilbert's problems History of topos theory Hyperbolic quaternion Indian mathematics Islamic mathematics Italian school of algebraic geometry Kraków School of Mathematics Law of Continuity Lwów School of Mathematics Nicolas Bourbaki Non-Euclidean geometry Scottish Café Seven bridges of Königsberg Spectral theory Synthetic geometry Tautochrone curve Unifying theories in mathematics Waring's problem Warsaw School of Mathematics Academic positions Lowndean Professor of Astronomy and Geometry Lucasian professor Rouse Ball Professor of Mathematics Sadleirian Chair See also History
https://en.wikipedia.org/wiki/Processing%20delay
In a network based on packet switching, processing delay is the time it takes routers to process the packet header. Processing delay is a key component in network delay. During processing of a packet, routers may check for bit-level errors in the packet that occurred during transmission, as well as determine the packet's next destination. Processing delays in high-speed routers are typically on the order of microseconds or less. After this nodal processing, the router directs the packet to the queue, where further delay can happen (queuing delay). In the past, the processing delay has been ignored as insignificant compared to the other forms of network delay. However, in some systems, the processing delay can be quite large, especially where routers are performing complex encryption algorithms and examining or modifying packet content. Deep packet inspection, done by some networks, examines packet content for security, legal, or other reasons; it can cause very large delay and thus is only done at selected inspection points. Routers performing network address translation also have higher than normal processing delay because those routers need to examine and modify both incoming and outgoing packets. See also Latency (engineering)
https://en.wikipedia.org/wiki/Cross-covariance
In probability and statistics, given two stochastic processes $\{X_t\}$ and $\{Y_t\}$, the cross-covariance is a function that gives the covariance of one process with the other at pairs of time points. With the usual notation $\operatorname{E}$ for the expectation operator, if the processes have the mean functions $\mu_X(t) = \operatorname{E}[X_t]$ and $\mu_Y(t) = \operatorname{E}[Y_t]$, then the cross-covariance is given by $K_{XY}(t_1, t_2) = \operatorname{cov}(X_{t_1}, Y_{t_2}) = \operatorname{E}[(X_{t_1} - \mu_X(t_1))(Y_{t_2} - \mu_Y(t_2))]$. Cross-covariance is related to the more commonly used cross-correlation of the processes in question. In the case of two random vectors $X = (X_1, X_2, \ldots, X_m)$ and $Y = (Y_1, Y_2, \ldots, Y_n)$, the cross-covariance would be an $m \times n$ matrix (often denoted $K_{XY}$) with entries $K_{XY}(i, j) = \operatorname{cov}(X_i, Y_j)$. Thus the term cross-covariance is used in order to distinguish this concept from the covariance of a random vector $X$, which is understood to be the matrix of covariances between the scalar components of $X$ itself. In signal processing, the cross-covariance is often called cross-correlation and is a measure of similarity of two signals, commonly used to find features in an unknown signal by comparing it to a known one. It is a function of the relative time between the signals, is sometimes called the sliding dot product, and has applications in pattern recognition and cryptanalysis. Cross-covariance of random vectors Cross-covariance of stochastic processes The definition of cross-covariance of random vectors may be generalized to stochastic processes as follows: Definition Let $\{X_t\}$ and $\{Y_t\}$ denote stochastic processes. Then the cross-covariance function of the processes is defined by $K_{XY}(t_1, t_2) = \operatorname{E}[(X_{t_1} - \mu_X(t_1))(Y_{t_2} - \mu_Y(t_2))]$, where $\mu_X(t) = \operatorname{E}[X_t]$ and $\mu_Y(t) = \operatorname{E}[Y_t]$. If the processes are complex-valued stochastic processes, the second factor needs to be complex conjugated: $K_{XY}(t_1, t_2) = \operatorname{E}[(X_{t_1} - \mu_X(t_1))\overline{(Y_{t_2} - \mu_Y(t_2))}]$. Definition for jointly WSS processes If $\{X_t\}$ and $\{Y_t\}$ are jointly wide-sense stationary, then the following are true: $\mu_X(t_1) = \mu_X$ for all $t_1$, $\mu_Y(t_2) = \mu_Y$ for all $t_2$, and $K_{XY}(t_1, t_2) = K_{XY}(t_2 - t_1, 0)$ for all $t_1, t_2$. By setting $\tau = t_2 - t_1$ (the time lag, or the amount of time by which the signal has been shifted), we may define $K_{XY}(\tau) = K_{XY}(t_2 - t_1, 0)$. The cross-covariance function of two jointly WSS processes is therefore given by $K_{XY}(\tau) = \operatorname{E}[(X_t - \mu_X)(Y_{t+\tau} - \mu_Y)]$, which is equivalent to $\operatorname{E}[(X_{t-\tau} - \mu_X)(Y_t - \mu_Y)]$. Uncorrelatedness Two stochastic processes $\{X_t\}$ and $\{Y_t\}$ are called uncorrelated if their cross-covariance $K_{XY}(t_1, t_2)$ is zero for all times $t_1, t_2$.
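As a hedged illustration of the jointly-WSS definition above, the following NumPy sketch estimates $K_{XY}(\tau)$ from samples by replacing expectations with sample averages; the signals and the 3-sample delay are invented for the example.

```python
# Sketch: estimating the cross-covariance of two jointly WSS signals at a few lags.
# Sample averages stand in for the expectations in the definition above.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)
y = np.roll(x, 3) + 0.5 * rng.normal(size=n)   # y is roughly x delayed by 3 samples

def cross_cov(x, y, tau):
    """Sample estimate of K_XY(tau) = E[(X_t - mu_X)(Y_{t+tau} - mu_Y)]."""
    xc = x - x.mean()
    yc = y - y.mean()
    if tau >= 0:
        return np.mean(xc[: n - tau] * yc[tau:])
    return np.mean(xc[-tau:] * yc[: n + tau])

for tau in range(-1, 6):
    print(tau, round(cross_cov(x, y, tau), 3))   # the estimate peaks near tau = 3
```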
https://en.wikipedia.org/wiki/Apache%20Celix
Apache Celix is an open-source implementation of the OSGi specification adapted to C and C++, developed by the Apache Software Foundation. The project aims to provide a framework to develop (dynamic) modular software applications using component and/or service-oriented programming. Apache Celix is primarily developed in C and adds an additional abstraction, in the form of a library, to support C++. Modularity in Apache Celix is achieved by supporting run-time-installed bundles. Bundles are zip files and can contain software modules in the form of shared libraries. Modules can provide and request dynamic services, for and from other modules, by interacting with a provided bundle context. Services in Apache Celix are "plain old" structs with function pointers or "plain old C++ objects" (POCO). History Apache Celix was welcomed into the Apache Incubator in November 2010 and graduated from the Apache Incubator to a Top Level Project in July 2014.
https://en.wikipedia.org/wiki/Scanning%20mobility%20particle%20sizer
A scanning mobility particle sizer (SMPS) is an analytical instrument that measures the size and number concentration of aerosol particles with diameters from 2.5 nm to 1000 nm. They employ a continuous, fast-scanning technique to provide high-resolution measurements. Applications The particles that are investigated can be of biological or chemical nature. The instrument can be used for air quality measurement indoors, vehicle exhaust, research in bioaerosols, atmospheric studies, and toxicology testing.
https://en.wikipedia.org/wiki/Perfect%20fluid
In physics, a perfect fluid or ideal fluid is a fluid that can be completely characterized by its rest frame mass density $\rho_m$ and isotropic pressure $p$. Real fluids are "sticky" and contain (and conduct) heat. Perfect fluids are idealized models in which these possibilities are neglected. Specifically, perfect fluids have no shear stresses, viscosity, or heat conduction. Quark–gluon plasma is the closest known substance to a perfect fluid. In space-positive metric signature tensor notation, the stress–energy tensor of a perfect fluid can be written in the form $T^{\mu\nu} = \left(\rho_m + \frac{p}{c^2}\right) U^\mu U^\nu + p\,\eta^{\mu\nu}$, where $U$ is the 4-velocity vector field of the fluid and where $\eta^{\mu\nu}$ is the metric tensor of Minkowski spacetime. In time-positive metric signature tensor notation, the stress–energy tensor of a perfect fluid can be written in the form $T^{\mu\nu} = \left(\rho_m + \frac{p}{c^2}\right) U^\mu U^\nu - p\,\eta^{\mu\nu}$, where $U$ is the 4-velocity of the fluid and where $\eta^{\mu\nu}$ is the metric tensor of Minkowski spacetime. This takes on a particularly simple form in the rest frame, $T^{\mu\nu} = \operatorname{diag}(\rho_e, p, p, p)$, where $\rho_e = \rho_m c^2$ is the energy density and $p$ is the pressure of the fluid. Perfect fluids admit a Lagrangian formulation, which allows the techniques used in field theory, in particular, quantization, to be applied to fluids. Perfect fluids are used in general relativity to model idealized distributions of matter, such as the interior of a star or an isotropic universe. In the latter case, the equation of state of the perfect fluid may be used in the Friedmann–Lemaître–Robertson–Walker equations to describe the evolution of the universe. In general relativity, the expression for the stress–energy tensor of a perfect fluid is written as $T^{\mu\nu} = \left(\rho_m + \frac{p}{c^2}\right) U^\mu U^\nu + p\,g^{\mu\nu}$, where $U$ is the 4-velocity vector field of the fluid and where $g^{\mu\nu}$ is the inverse metric, written with a space-positive signature. See also Equation of state Ideal gas Fluid solutions in general relativity Potential flow
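The rest-frame form can be checked numerically. The following NumPy sketch (illustrative values and space-positive signature assumed here, not taken from the article) builds $T^{\mu\nu} = (\rho_m + p/c^2)\,U^\mu U^\nu + p\,\eta^{\mu\nu}$ in the fluid rest frame and confirms it equals $\operatorname{diag}(\rho_m c^2, p, p, p)$.

```python
# Sketch: building the perfect-fluid stress-energy tensor numerically and
# checking it reduces to diag(rho_m c^2, p, p, p) in the fluid rest frame.
# Space-positive signature eta = diag(-1, +1, +1, +1); c kept explicit.
import numpy as np

c = 3.0e8          # m/s
rho_m = 1000.0     # rest-frame mass density, kg/m^3 (illustrative value)
p = 101325.0       # isotropic pressure, Pa (illustrative value)

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
U = np.array([c, 0.0, 0.0, 0.0])     # 4-velocity in the rest frame

T = (rho_m + p / c**2) * np.outer(U, U) + p * eta

print(np.allclose(T, np.diag([rho_m * c**2, p, p, p])))   # True
```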
https://en.wikipedia.org/wiki/Orban%20%28audio%20processing%29
Orban is an international company making audio processors for radio, television and Internet broadcasters. It has been operating since founder Bob Orban sold his first product in 1967. The company was originally based in San Francisco, California. History The Orban company started in 1967 when Bob Orban built and sold his first product, a stereo synthesizer, to WOR-FM in New York City, a year before Orban earned his master's degree from Stanford University. He teamed with synthesizer pioneers Bernie Krause and Paul Beaver to promote his products. In 1970, Orban established manufacturing and design in San Francisco. Bob Orban partnered with John Delantoni to form Orban Associates in 1975. The company was bought by Harman International in 1989, and the firm moved to nearby San Leandro in 1991. In 2000, Orban was bought by Circuit Research Labs (CRL) who moved manufacturing to Tempe, Arizona, in 2005, keeping the design team in the San Francisco Bay Area. Orban expanded into Germany in 2006 by purchasing Dialog4 System Engineering in Ludwigsburg. Orban USA acquired the company in 2009, based in Arizona. The Orban company was acquired by Daysequerra in 2016, moving manufacturing to New Jersey. In 2020, Orban Labs consolidated divisions and streamlined operations, with Orban Europe GmbH assuming responsibility for all Orban product sales worldwide. Over its years of trading, the Orban company has released many well-known audio-processing products, including the Orban Optimod 8000, which was the first audio processor to include FM processing and a stereo generator under one package, an innovative idea at the time, as no other processor took into account 75 μs pre-emphasis curve employed by FM, which leads to low average modulation and many peaks. This was followed by the Orban Optimod 8100, which went on to become the company's most successful product, and the Orban Optimod 8200, the first successful digital signal processor. It was entirely digital and featured a two
https://en.wikipedia.org/wiki/Hindgut%20fermentation
Hindgut fermentation is a digestive process seen in monogastric herbivores, animals with a simple, single-chambered stomach. Cellulose is digested with the aid of symbiotic bacteria. The microbial fermentation occurs in the digestive organs that follow the small intestine: the large intestine and cecum. Examples of hindgut fermenters include proboscideans and large odd-toed ungulates such as horses and rhinos, as well as small animals such as rodents, rabbits and koalas. In contrast, foregut fermentation is the form of cellulose digestion seen in ruminants such as cattle which have a four-chambered stomach, as well as in sloths, macropodids, some monkeys, and one bird, the hoatzin. Cecum Hindgut fermenters generally have a cecum and large intestine that are much larger and more complex than those of a foregut or midgut fermenter. Research on small cecum fermenters such as flying squirrels, rabbits and lemurs has revealed these mammals to have a GI tract about 10-13 times the length of their body. This is due to the high intake of fiber and other hard to digest compounds that are characteristic to the diet of monogastric herbivores. Unlike in foregut fermenters, the cecum is located after the stomach and small intestine in monogastric animals, which limits the amount of further digestion or absorption that can occur after the food is fermented. Large intestine In smaller hindgut fermenters of the order Lagomorpha (rabbits, hares, and pikas), cecotropes formed in the cecum are passed through the large intestine and subsequently reingested to allow another opportunity to absorb nutrients. Cecotropes are surrounded by a layer of mucus which protects them from stomach acid but which does not inhibit nutrient absorption in the small intestine. Coprophagy is also practiced by some rodents, such as the capybara, guinea pig and related species, and by the marsupial common ringtail possum. This process is also beneficial in allowing for restoration of the microflora pop
https://en.wikipedia.org/wiki/Soft-bodied%20organism
Soft-bodied organisms are animals that lack skeletons. The group roughly corresponds to the group Vermes as proposed by Carl von Linné. All animals have muscles but, since muscles can only pull, never push, a number of animals have developed hard parts that the muscles can pull on, commonly called skeletons. Such skeletons may be internal, as in vertebrates, or external, as in arthropods. However, many animal groups do very well without hard parts. These include animals such as earthworms, jellyfish, tapeworms, squids and an enormous variety of animals from almost every part of the kingdom Animalia. Commonality Most soft-bodied animals are small, but they do make up the majority of the animal biomass. If we were to weigh up all animals on Earth with hard parts against soft-bodied ones, estimates indicate that the biomass of soft-bodied animals would be at least twice that of animals with hard parts, quite possibly much larger. The roundworms in particular are extremely numerous. The nematodologist Nathan Cobb described the ubiquitous presence of nematodes on Earth as follows: "In short, if all the matter in the universe except the nematodes were swept away, our world would still be dimly recognizable, and if, as disembodied spirits, we could then investigate it, we should find its mountains, hills, vales, rivers, lakes, and oceans represented by a film of nematodes. The location of towns would be decipherable, since for every massing of human beings there would be a corresponding massing of certain nematodes. Trees would still stand in ghostly rows representing our streets and highways. The location of the various plants and animals would still be decipherable, and, had we sufficient knowledge, in many cases even their species could be determined by an examination of their erstwhile nematode parasites." Anatomy Not being a true phylogenetic group, soft-bodied organisms vary enormously in anatomy. Cnidarians and flatworms have a single opening to the gut and a d
https://en.wikipedia.org/wiki/Stevens%20Award
The Stevens Award is a software engineering lecture award given by the Reengineering Forum, an industry association. The international Stevens Award was created to recognize outstanding contributions to the literature or practice of methods for software and systems development. The first award was given in 1995. The presentations focus on the current state of software methods and their direction for the future. This award lecture is named in memory of Wayne Stevens (1944-1993), a consultant, author, pioneer, and advocate of the practical application of software methods and tools. The Stevens Award and lecture is managed by the Reengineering Forum. The award was founded by International Workshop on Computer Aided Software Engineering (IWCASE), an international workshop association of users and developers of computer-aided software engineering (CASE) technology, which merged into The Reengineering Forum. Wayne Stevens was a charter member of the IWCASE executive board. Recipients 1995: Tony Wasserman 1996: David Harel 1997: Michael Jackson 1998: Thomas McCabe 1999: Tom DeMarco 2000: Gerald Weinberg 2001: Peter Chen 2002: Cordell Green 2003: Manny Lehman 2004: François Bodart 2005: Mary Shaw, Jim Highsmith 2006: Grady Booch 2007: Nicholas Zvegintzov 2008: Harry Sneed 2009: Larry Constantine 2010: Peter Aiken 2011: Jared Spool, Barry Boehm 2012: Philip Newcomb 2013: Jean-Luc Hainaut 2014: François Coallier 2015: Pierre Bourque See also List of computer science awards
https://en.wikipedia.org/wiki/Bibliography%20of%20encyclopedias%3A%20biology
This is a list of encyclopedias as well as encyclopedic and biographical dictionaries published on the subject of biology in any language. Entries are in the English language unless specifically stated as otherwise. General biology Becher, Anne, Joseph Richey. American environmental leaders: From colonial times to the present. Grey House, 2008. . Butcher, Russell D., Stephen E. Adair, Lynn A. Greenwalt. America's national wildlife refuges: A complete guide. Roberts Rinehart Publishers in cooperation with Ducks Unlimited, 2003. . Ecological Internet, Inc. EcoEarth.info: Environment portal and search engine. Ecological Internet, Inc. . Friday, Adrian & Davis S. Ingram. The Cambridge Encyclopedia of Life Sciences. Cambridge, 1985. Gaither, Carl C., Alma E. Cavazos-Gaither, Andrew Slocombe. Naturally speaking: A dictionary of quotations on biology, botany, nature and zoology. Institute of Physics, 2001. . Gibson, Daniel, National Audubon Society. Audubon guide to the national wildlife refuges. Southwest: Arizona, Nevada, New Mexico, Texas. St. Martin's Griffin, 2000. . Goudie, Andrew, David J. Cuff. Encyclopedia of global change: Environmental change and human society. Oxford University Press, 2002. . Gove, Doris. Audubon guide to the national wildlife refuges. Southeast : Alabama, Florida, Georgia, Kentucky, Mississippi, North Carolina, Puerto Rico, South Carolina, Tennessee, U.S. Virgin Islands. St. Martin's Griffin, 2000. . Grassy, John. Audubon guide to the national wildlife refuges: Northern Midwest: Illinois, Indiana, Iowa, Michigan, Minnesota, Nebraska, North Dakota, Ohio, South Dakota, Wisconsin. St. Martin's Griffin, c2000. . Grassy, John. Audubon guide to the national wildlife refuges: Rocky Mountains: Colorado, Idaho, Montana, Utah, Wyoming. St. Martin's Griffin, 2000. . Gray, Peter. Encyclopedia of the Biological Sciences. Krieger, 1981. Grinstein, Louise S., Carol A. Biermann, Rose K. Rose. Women in the biological sciences: A biobibliographic sourceboo
https://en.wikipedia.org/wiki/Supercooling
Supercooling, also known as undercooling, is the process of lowering the temperature of a liquid below its freezing point without it becoming a solid. It is achieved in the absence of a seed crystal or nucleus around which a crystal structure can form. The supercooling of water can be achieved without any special techniques other than chemical demineralization, down to −48.3 °C (−54.9 °F). Droplets of supercooled water often exist in stratus and cumulus clouds. An aircraft flying through such a cloud causes an abrupt crystallization of these droplets, which can result in the formation of ice on the aircraft's wings or blockage of its instruments and probes. Animals rely on different phenomena with similar effects to survive in extreme temperatures. There are many other mechanisms that aid in maintaining a liquid state, such as the production of antifreeze proteins, which bind to ice crystals to prevent water molecules from binding and spreading the growth of ice. The winter flounder is one such fish that utilizes these proteins to survive in its frigid environment. This is not strictly supercooling, because it is the result of freezing-point depression caused by the presence of the proteins. In plants, cellular barriers such as lignin, suberin, and the cuticle inhibit ice nucleators and force water into the supercooled tissue. Explanation A liquid crossing its standard freezing point will crystallize in the presence of a seed crystal or nucleus around which a crystal structure can form, creating a solid. Lacking any such nuclei, the liquid phase can be maintained all the way down to the temperature at which crystal homogeneous nucleation occurs. Homogeneous nucleation can occur above the glass transition temperature, but if homogeneous nucleation has not occurred above that temperature, an amorphous (non-crystalline) solid will form. Water normally freezes at 273.15 K (0 °C or 32 °F), but it can be "supercooled" at standard pressure down to its crystal homogeneous nucleation temperature at almost 224.8 K (−48.3 °C or −54.9 °F). The process of supercooling
https://en.wikipedia.org/wiki/Calcium%20in%20biology
Calcium ions (Ca2+) contribute to the physiology and biochemistry of organisms' cells. They play an important role in signal transduction pathways, where they act as a second messenger, in neurotransmitter release from neurons, in contraction of all muscle cell types, and in fertilization. Many enzymes require calcium ions as a cofactor, including several of the coagulation factors. Extracellular calcium is also important for maintaining the potential difference across excitable cell membranes, as well as proper bone formation. Plasma calcium levels in mammals are tightly regulated, with bone acting as the major mineral storage site. Calcium ions, Ca2+, are released from bone into the bloodstream under controlled conditions. Calcium is transported through the bloodstream as dissolved ions or bound to proteins such as serum albumin. Parathyroid hormone secreted by the parathyroid gland regulates the resorption of Ca2+ from bone, reabsorption in the kidney back into circulation, and increases in the activation of vitamin D3 to calcitriol. Calcitriol, the active form of vitamin D3, promotes absorption of calcium from the intestines and bones. Calcitonin secreted from the parafollicular cells of the thyroid gland also affects calcium levels by opposing parathyroid hormone; however, its physiological significance in humans is dubious. Intracellular calcium is stored in organelles which repetitively release and then reaccumulate Ca2+ ions in response to specific cellular events: storage sites include mitochondria and the endoplasmic reticulum. Characteristic concentrations of calcium in model organisms are: in E. coli 3 mM (bound), 100 nM (free); in budding yeast 2 mM (bound); in mammalian cells 10–100 nM (free); and in blood plasma 2 mM. Humans In 2020, calcium was the 204th most commonly prescribed medication in the United States, with more than 2 million prescriptions. Dietary recommendations The U.S. Institute of Medicine (IOM) established Recommended Dietary Allowances
https://en.wikipedia.org/wiki/Cell%20cycle%20analysis
Cell cycle analysis by DNA content measurement is a method that most frequently employs flow cytometry to distinguish cells in different phases of the cell cycle. Before analysis, the cells are usually permeabilised and treated with a fluorescent dye that stains DNA quantitatively, such as propidium iodide (PI) or 4′,6-diamidino-2-phenylindole (DAPI). The fluorescence intensity of the stained cells correlates with the amount of DNA they contain. As the DNA content doubles during the S phase, the DNA content (and thereby intensity of fluorescence) of cells in the G0 phase and G1 phase (before S), in the S phase, and in the G2 phase and M phase (after S) identifies the cell cycle phase position in the major phases (G0/G1 versus S versus G2/M phase) of the cell cycle. The cellular DNA content of individual cells is often plotted as a frequency histogram to provide information about the relative frequency (percentage) of cells in the major phases of the cell cycle. Cell cycle anomalies revealed on the DNA content frequency histogram are often observed after different types of cell damage, for example DNA damage that interrupts cell cycle progression at certain checkpoints. Such an arrest of cell cycle progression can lead either to effective DNA repair, which may prevent transformation of a normal cell into a cancer cell (carcinogenesis), or to cell death, often by the mode of apoptosis. An arrest of cells in G0 or G1 is often seen as a result of lack of nutrients (growth factors), for example after serum deprivation. Cell cycle analysis was first described in 1969 at Los Alamos Scientific Laboratory by a group from the University of California, using the Feulgen staining technique. The first protocol for cell cycle analysis using propidium iodide staining was presented in 1975 by Awtar Krishan from Harvard Medical School and is still widely cited today. Multiparameter analysis of the cell cycle includes, in addition to measurement of cellular DNA content, oth
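As a rough illustration of how a DNA-content histogram is turned into phase fractions, the following Python sketch gates synthetic fluorescence intensities into G0/G1, S, and G2/M using hard thresholds around the G1 peak. All values are invented, and real analyses fit mixture or deconvolution models rather than fixed cut-offs.

```python
# Sketch: naive gating of a DNA-content histogram into G0/G1, S, and G2/M.
# Synthetic data only; real software fits models instead of hard thresholds.
import numpy as np

rng = np.random.default_rng(1)
g1 = rng.normal(50, 3, 6000)            # G0/G1 cells: 2n DNA content (arbitrary intensity units)
s = rng.uniform(56, 94, 1500)           # S phase: between 2n and 4n
g2m = rng.normal(100, 5, 2500)          # G2/M cells: 4n, roughly twice the G1 intensity
intensity = np.concatenate([g1, s, g2m])

g1_peak = 50.0                           # in practice, read off the histogram's first peak
lo, hi = 1.15 * g1_peak, 1.85 * g1_peak  # crude boundaries between the populations

frac_g1 = np.mean(intensity < lo)
frac_s = np.mean((intensity >= lo) & (intensity <= hi))
frac_g2m = np.mean(intensity > hi)
print(f"G0/G1 {frac_g1:.0%}  S {frac_s:.0%}  G2/M {frac_g2m:.0%}")
```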
https://en.wikipedia.org/wiki/Morphology%20%28biology%29
Morphology is a branch of biology dealing with the study of the form and structure of organisms and their specific structural features. This includes aspects of the outward appearance (shape, structure, colour, pattern, size), i.e. external morphology (or eidonomy), as well as the form and structure of the internal parts like bones and organs, i.e. internal morphology (or anatomy). This is in contrast to physiology, which deals primarily with function. Morphology is a branch of life science dealing with the study of the gross structure of an organism or taxon and its component parts. History The etymology of the word "morphology" is from the Ancient Greek μορφή (morphḗ), meaning "form", and λόγος (lógos), meaning "word, study, research". While the concept of form in biology, opposed to function, dates back to Aristotle (see Aristotle's biology), the field of morphology was developed by Johann Wolfgang von Goethe (1790) and independently by the German anatomist and physiologist Karl Friedrich Burdach (1800). Among other important theorists of morphology are Lorenz Oken, Georges Cuvier, Étienne Geoffroy Saint-Hilaire, Richard Owen, Karl Gegenbaur and Ernst Haeckel. In 1830, Cuvier and É. G. Saint-Hilaire engaged in a famous debate, which is said to exemplify the two major deviations in biological thinking at the time – whether animal structure was due to function or evolution. Divisions of morphology Comparative morphology is the analysis of the patterns of the locus of structures within the body plan of an organism, and forms the basis of taxonomical categorization. Functional morphology is the study of the relationship between the structure and function of morphological features. Experimental morphology is the study of the effects of external factors upon the morphology of organisms under experimental conditions, such as the effect of genetic mutation. Anatomy is a "branch of morphology that deals with the structure of organisms". Molecular morphology is a rarely used term, usually r
https://en.wikipedia.org/wiki/Na%C3%AFve%20physics
Naïve physics or folk physics is the untrained human perception of basic physical phenomena. In the field of artificial intelligence the study of naïve physics is a part of the effort to formalize the common knowledge of human beings. Many ideas of folk physics are simplifications, misunderstandings, or misperceptions of well-understood phenomena, incapable of giving useful predictions of detailed experiments, or simply are contradicted by more thorough observations. They may sometimes be true, be true in certain limited cases, be true as a good first approximation to a more complex effect, or predict the same effect but misunderstand the underlying mechanism. Naïve physics is characterized by a mostly intuitive understanding humans have about objects in the physical world. Certain notions of the physical world may be innate. Examples Some examples of naïve physics include commonly understood, intuitive, or everyday-observed rules of nature: What goes up must come down A dropped object falls straight down A solid object cannot pass through another solid object A vacuum sucks things towards it An object is either at rest or moving, in an absolute sense Two events are either simultaneous or they are not Many of these and similar ideas formed the basis for the first works in formulating and systematizing physics by Aristotle and the medieval scholastics in Western civilization. In the modern science of physics, they were gradually contradicted by the work of Galileo, Newton, and others. The idea of absolute simultaneity survived until 1905, when the special theory of relativity and its supporting experiments discredited it. Psychological research The increasing sophistication of technology makes possible more research on knowledge acquisition. Researchers measure physiological responses such as heart rate and eye movement in order to quantify the reaction to a particular stimulus. Concrete physiological data is helpful when observing infant behavior, becau
https://en.wikipedia.org/wiki/Scalability
Scalability is the property of a system to handle a growing amount of work. One definition for software systems specifies that this may be done by adding resources to the system. In an economic context, a scalable business model implies that a company can increase sales given increased resources. For example, a package delivery system is scalable because more packages can be delivered by adding more delivery vehicles. However, if all packages had to first pass through a single warehouse for sorting, the system would not be as scalable, because one warehouse can handle only a limited number of packages. In computing, scalability is a characteristic of computers, networks, algorithms, networking protocols, programs and applications. An example is a search engine, which must support increasing numbers of users, and the number of topics it indexes. Webscale is a computer architectural approach that brings the capabilities of large-scale cloud computing companies into enterprise data centers. In distributed systems, there are several definitions according to the authors, some considering the concepts of scalability a sub-part of elasticity, others as being distinct. In mathematics, scalability mostly refers to closure under scalar multiplication. In industrial engineering and manufacturing, scalability refers to the capacity of a process, system, or organization to handle a growing workload, adapt to increasing demands, and maintain operational efficiency. A scalable system can effectively manage increased production volumes, new product lines, or expanding markets without compromising quality or performance. In this context, scalability is a vital consideration for businesses aiming to meet customer expectations, remain competitive, and achieve sustainable growth. Factors influencing scalability include the flexibility of the production process, the adaptability of the workforce, and the integration of advanced technologies. By implementing scalable solutions, c
https://en.wikipedia.org/wiki/Secure%20end%20node
A Secure End Node is a trusted, individual computer that temporarily becomes part of a trusted, sensitive, well-managed network and later connects to many other (un)trusted networks/clouds. SENs cannot communicate good or evil data between the various networks (e.g. exfiltrate sensitive information, ingest malware, etc.). SENs often connect through an untrusted medium (e.g. the Internet) and thus require a secure connection and strong authentication (of the device, software, user, environment, etc.). The amount of trust required (and thus operational, physical, personnel, network, and system security applied) is commensurate with the risk of piracy, tampering, and reverse engineering (within a given threat environment). An essential characteristic of SENs is that they cannot persist information as they change between networks (or domains). The remote, private, and secure network might be an organization's in-house network or a cloud service. A Secure End Node typically involves authentication of (i.e. establishing trust in) the remote computer's hardware, firmware, software, and/or user. In the future, the device-user's environment (location, activity, other people, etc.) as communicated by means of its (or the network's) trusted sensors (camera, microphone, GPS, radio, etc.) could provide another factor of authentication. A Secure End Node solves/mitigates the end node problem. The common, but expensive, technique to deploy SENs is for the network owner to issue known, trusted, unchangeable hardware to users. For example, and assuming a priori access, a laptop's TPM chip can authenticate the hardware (likewise a user's smartcard authenticates the user). A different example is the DoD Software Protection Initiative's Cross Fabric Internet Browsing System, which provides browser-only, immutable, anti-tamper thin clients for users' Internet browsing. Another example is a non-persistent, remote client that boots over the network. A less secure but very low cost approach i
https://en.wikipedia.org/wiki/Index%20of%20physics%20articles
Physics (Greek: physis–φύσις meaning "nature") is the natural science which examines basic concepts such as mass, charge, matter and its motion and all that derives from these, such as energy, force and spacetime. More broadly, it is the general analysis of nature, conducted in order to understand how the world and universe behave. The index of physics articles is split into multiple pages due to its size. To navigate by individual letter use the table of contents below. See also List of basic physics topics
https://en.wikipedia.org/wiki/Im%20schwarzen%20Walfisch%20zu%20Askalon
"Im schwarzen Walfisch zu Askalon" ("In the Black Whale of Ascalon") is a popular academic commercium song. It was known as a beer-drinking song in many German speaking ancient universities. Joseph Victor von Scheffel provided the lyrics under the title Altassyrisch (Old Assyrian) 1854, the melody is from 1783 or earlier. Content The lyrics reflect an endorsement of the bacchanalian mayhem of student life, similar as in Gaudeamus igitur. The song describes an old Assyrian drinking binge of a man in an inn with some references to the Classics. The desks are made of marble and the large invoice is being provided in cuneiform on bricks. However the carouser has to admit that he left his money already in Nineveh. A Nubian house servant kicks him out then and the song closes with the notion, that (compare John 4:44) a prophet has no honor in his own country, if he doesn't pay cash for his consumption. Charles Godfrey Leland has translated the poems among other works of Scheffel. Each stanza begins with the naming verse "Im Schwarzen Walfisch zu Askalon", but varies the outcome. The "Im" is rather prolonged with the melody and increases the impact. Some of the stanzas: Im schwarzen Wallfisch zu Ascalon Da trank ein Mann drei Tag', Bis dass er steif wie ein Besenstiel Am Marmortische lag. 'In the Black Whale at Ascalon A man drank day by day, Till, stiff as any broom-handle, Upon the floor he lay. ... In the Black Whale at Ascalon The waiters brought the bill, In arrow-heads on six broad tiles To him who thus did swill. ... In the Black Whale at Ascalon No prophet hath renown; And he who there would drink in peace Must pay the money down. In typical manner of Scheffel, it contains an anachronistic mixture of various times and eras, parodistic notions on current science, as e.g. Historical criticism and interpretations of the Book of Jonah as a mere shipwrecking narrative. According to Scheffel, the guest didn't try to get back in the inn as „Aussi bin
https://en.wikipedia.org/wiki/Pacemaker%20failure
Pacemaker failure is the inability of an implanted artificial pacemaker to perform its intended function of regulating the beating of the heart. A pacemaker uses electrical impulses delivered by electrodes in order to contract the heart muscles. Failure of a pacemaker is defined by the requirement of repeat surgical pacemaker-related procedures after the initial implantation. Most implanted pacemakers are dual chambered and have two leads, causing the implantation time to take longer because of this more complicated pacemaker system. These factors can contribute to an increased rate of complications which can lead to pacemaker failure. Approximately 2.25 million pacemakers were implanted in the United States between 1990 and 2002, and of those pacemakers, about 8,834 were removed from patients because of device malfunction most commonly connected to generator abnormalities. In the 1970s, results of an Oregon study indicated that 10% of implanted pacemakers failed within the first month. Another study found that more than half of pacemaker complications occurred during the first 3 months after implantation. Causes of pacemaker failure include lead related failure, unit malfunction, problems at the insertion site, failures related to exposure to high voltage electricity or high intensity microwaves, and a miscellaneous category (one patient had ventricular tachycardia when using his electric razor and another patient had persistent pacing of the diaphragm muscle). Pacemaker malfunction has the ability to cause serious injury or death, but if detected early enough, patients can continue with their needed therapy once complications are resolved. Symptoms Moderate dizziness or lightheadedness Syncope Slow or fast heart rate Discomfort in chest area Palpitations Hiccups Causes Direct factors Lead dislodgement A Macro-dislodgement is radiographically visible. A Micro-dislodgement is a minimal displacement in the lead that is not visible in a chest X-ray, but h
https://en.wikipedia.org/wiki/Pharos%20network%20coordinates
Pharos is a hierarchical and decentralized network coordinate system. With the help of a simple two-level architecture, it achieves much better prediction accuracy than the representative Vivaldi coordinates, and it is incrementally deployable. Overview Network coordinate (NC) systems are an efficient mechanism for Internet latency prediction with scalable measurements. Vivaldi is the most common distributed NC system, and it is deployed in many well-known Internet systems, such as Bamboo DHT (distributed hash table), Stream-Based Overlay Network (SBON) and Azureus BitTorrent. Pharos is a fully decentralized NC system. All nodes in Pharos form two levels of overlays, namely a base overlay for long link prediction, and a local cluster overlay for short link prediction. The Vivaldi algorithm is applied to both the base overlay and the local cluster. As a result, each Pharos node has two sets of coordinates. The coordinates calculated in the base overlay, which are named global NC, are used for the global scale, and the coordinates calculated in the corresponding local cluster, which are named local NC, cover a smaller range of distances. To form the local cluster, Pharos uses a method similar to binning and chooses some nodes, called anchors, to help node clustering. This method only requires a one-time measurement (with possible periodic refreshes) by the client to a small, fixed set of anchors. Any stable node which is able to respond to ICMP ping messages can serve as an anchor, such as existing DNS servers. The experimental results show that Pharos greatly outperforms Vivaldi in Internet distance prediction without adding any significant overhead. Insights behind Pharos Simple and effective: it obtains a significant improvement in prediction accuracy by introducing a straightforward hierarchical distance prediction. Fully compatible with Vivaldi, the most widely deployed NC system: for every host where the Vivaldi client has been deployed, it just needs to run
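The two-level lookup can be sketched as follows. This is a toy illustration with made-up coordinates, not the actual Pharos code: each node keeps a cluster label plus global and local Vivaldi-style coordinates, and a distance query picks the coordinate set according to whether the two nodes share a cluster.

```python
# Toy illustration of Pharos-style two-level distance prediction:
# use local-cluster coordinates for short (intra-cluster) links and
# global coordinates otherwise. In the real system both coordinate
# sets are produced by running Vivaldi on each overlay.
import math

class Node:
    def __init__(self, cluster, global_nc, local_nc):
        self.cluster = cluster        # chosen by measuring RTT to a few anchors
        self.global_nc = global_nc    # coordinates from the base overlay
        self.local_nc = local_nc      # coordinates from the local-cluster overlay

def predict_rtt(a, b):
    if a.cluster == b.cluster:
        xa, xb = a.local_nc, b.local_nc     # short links: local NC is more accurate
    else:
        xa, xb = a.global_nc, b.global_nc   # long links: global NC
    return math.dist(xa, xb)

n1 = Node("eu", (10.0, 80.0), (2.0, 1.0))
n2 = Node("eu", (12.0, 83.0), (5.0, 5.0))
n3 = Node("us", (110.0, 20.0), (0.0, 0.0))
print(predict_rtt(n1, n2))   # intra-cluster: uses local coordinates
print(predict_rtt(n1, n3))   # inter-cluster: uses global coordinates
```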
https://en.wikipedia.org/wiki/Nyquist%20stability%20criterion
In control theory and stability theory, the Nyquist stability criterion or Strecker–Nyquist stability criterion, independently discovered by the German electrical engineer Felix Strecker at Siemens in 1930 and the Swedish-American electrical engineer Harry Nyquist at Bell Telephone Laboratories in 1932, is a graphical technique for determining the stability of a dynamical system. Because it only looks at the Nyquist plot of the open-loop system, it can be applied without explicitly computing the poles and zeros of either the closed-loop or open-loop system (although the number of each type of right-half-plane singularities must be known). As a result, it can be applied to systems defined by non-rational functions, such as systems with delays. In contrast to Bode plots, it can handle transfer functions with right half-plane singularities. In addition, there is a natural generalization to more complex systems with multiple inputs and multiple outputs, such as control systems for airplanes. The Nyquist stability criterion is widely used in electronics and control system engineering, as well as other fields, for designing and analyzing systems with feedback. While Nyquist is one of the most general stability tests, it is still restricted to linear time-invariant (LTI) systems. Nevertheless, there are generalizations of the Nyquist criterion (and plot) for non-linear systems, such as the circle criterion and the scaled relative graph of a nonlinear operator. Additionally, other stability criteria like Lyapunov methods can also be applied for non-linear systems. Although Nyquist is a graphical technique, it only provides a limited amount of intuition for why a system is stable or unstable, or how to modify an unstable system to be stable. Techniques like Bode plots, while less general, are sometimes a more useful design tool. Nyquist plot A Nyquist plot is a parametric plot of a frequency response used in automatic control and signal processing. The most common use of Nyquist p
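As a hedged numerical illustration of the criterion (not a general-purpose implementation), the following NumPy sketch evaluates an assumed open-loop-stable transfer function G(s) = K/((s+1)(s+2)(s+3)) along the imaginary axis and counts encirclements of −1 from the unwrapped phase; since the open loop has no right-half-plane poles here (P = 0), the closed loop is stable exactly when the count is zero.

```python
# Sketch: counting Nyquist encirclements of -1 numerically for
# G(s) = K / ((s+1)(s+2)(s+3)). With P = 0 open-loop RHP poles,
# Z = N + P means the closed loop is stable iff the net encirclement
# count of -1 is zero.
import numpy as np

def encirclements_of_minus_one(K, wmax=1e3, n=400_001):
    w = np.linspace(-wmax, wmax, n)          # the jw-axis part of the Nyquist contour
    s = 1j * w
    G = K / ((s + 1) * (s + 2) * (s + 3))    # strictly proper: the infinite arc maps to ~0
    phase = np.unwrap(np.angle(G + 1.0))     # angle of the vector from -1 to G(jw)
    winding = (phase[-1] - phase[0]) / (2 * np.pi)
    return round(winding)                    # counter-clockwise counted as positive

for K in (20, 100):
    count = encirclements_of_minus_one(K)
    print(K, count, "stable" if count == 0 else "unstable")   # K=20 stable, K=100 unstable
```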
https://en.wikipedia.org/wiki/Univariate%20%28statistics%29
Univariate is a term commonly used in statistics to describe a type of data which consists of observations on only a single characteristic or attribute. A simple example of univariate data would be the salaries of workers in industry. Like other data, univariate data can be visualized using graphs, images or other analysis tools after the data is measured, collected, reported, and analyzed. Univariate data types Some univariate data consists of numbers (such as a height of 65 inches or a weight of 100 pounds), while other data is nonnumerical (such as eye colors of brown or blue). Generally, the terms categorical univariate data and numerical univariate data are used to distinguish between these types. Categorical univariate data Categorical univariate data consists of non-numerical observations that may be placed in categories. It includes labels or names used to identify an attribute of each element. Categorical univariate data usually uses either a nominal or an ordinal scale of measurement. Numerical univariate data Numerical univariate data consists of observations that are numbers. They are obtained using either an interval or a ratio scale of measurement. This type of univariate data can be classified even further into two subcategories: discrete and continuous. Numerical univariate data is discrete if the set of all possible values is finite or countably infinite. Discrete univariate data is usually associated with counting (such as the number of books read by a person). Numerical univariate data is continuous if the set of all possible values is an interval of numbers. Continuous univariate data is usually associated with measuring (such as the weights of people). Data analysis and applications Univariate analysis is the simplest form of analyzing data. Uni means "one", so the data has only one variable (univariate). Univariate analysis examines each variable separately. Data is gathered for the purpose of answering a question, or more s
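A minimal Python example of univariate summaries, one categorical and one numerical variable (all values invented for illustration):

```python
# Sketch: basic univariate summaries for one categorical and one numerical variable.
from collections import Counter
from statistics import mean, median, stdev

eye_colors = ["brown", "blue", "brown", "green", "brown", "blue"]   # categorical
salaries = [42_000, 55_000, 48_500, 61_000, 39_000, 52_500]         # numerical (continuous)

# Categorical univariate data: summarize with a frequency table / mode.
print(Counter(eye_colors).most_common())

# Numerical univariate data: summarize with measures of center and spread.
print(mean(salaries), median(salaries), round(stdev(salaries), 1))
```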
https://en.wikipedia.org/wiki/Queuing%20delay
In telecommunication and computer engineering, the queuing delay or queueing delay is the time a job waits in a queue until it can be executed. It is a key component of network delay. In a switched network, queuing delay is the time between the completion of signaling by the call originator and the arrival of a ringing signal at the call receiver. Queuing delay may be caused by delays at the originating switch, intermediate switches, or the call receiver servicing switch. In a data network, queuing delay is the sum of the delays between the request for service and the establishment of a circuit to the called data terminal equipment (DTE). In a packet-switched network, queuing delay is the sum of the delays encountered by a packet between the time of insertion into the network and the time of delivery to the address. Router processing This term is most often used in reference to routers. When packets arrive at a router, they have to be processed and transmitted. A router can only process one packet at a time. If packets arrive faster than the router can process them (such as in a burst transmission) the router puts them into the queue (also called the buffer) until it can get around to transmitting them. Delay can also vary from packet to packet so averages and statistics are usually generated when measuring and evaluating queuing delay. As a queue begins to fill up due to traffic arriving faster than it can be processed, the amount of delay a packet experiences going through the queue increases. The speed at which the contents of a queue can be processed is a function of the transmission rate of the facility. This leads to the classic delay curve. The average delay any given packet is likely to experience is given by the formula 1/(μ-λ) where μ is the number of packets per second the facility can sustain and λ is the average rate at which packets are arriving to be serviced. This formula can be used when no packets are dropped from the queue. The maximum que
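A quick worked example of the 1/(μ − λ) average-delay formula quoted above, with an assumed service rate of 1000 packets per second, shows how the delay grows without bound as the arrival rate approaches the service rate:

```python
# Sketch: average delay 1/(mu - lam) for a queue that can service mu packets
# per second while packets arrive at rate lam (valid only while lam < mu and
# no packets are dropped).
mu = 1000.0                      # service rate, packets/s (assumed value)
for lam in (100.0, 500.0, 900.0, 990.0, 999.0):
    delay = 1.0 / (mu - lam)     # seconds
    print(f"arrival {lam:6.1f} pkt/s -> average delay {delay * 1000:8.3f} ms")
```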
https://en.wikipedia.org/wiki/Machine%20Check%20Architecture
In computing, Machine Check Architecture (MCA) is an Intel and AMD mechanism in which the CPU reports hardware errors to the operating system. Intel's P6 and Pentium 4 family processors, AMD's K7 and K8 family processors, as well as the Itanium architecture implement a machine check architecture that provides a mechanism for detecting and reporting hardware (machine) errors, such as: system bus errors, ECC errors, parity errors, cache errors, and translation lookaside buffer errors. It consists of a set of model-specific registers (MSRs) that are used to set up machine checking and additional banks of MSRs used for recording errors that are detected. See also Machine-check exception (MCE) High availability (HA) Reliability, availability and serviceability (RAS) Windows Hardware Error Architecture (WHEA)
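As an illustration of how the logged MSR values are interpreted, the following Python sketch decodes a few of the architecturally defined status bits of an IA32_MCi_STATUS register, using the bit positions as commonly documented (VAL=63, OVER=62, UC=61, EN=60, MISCV=59, ADDRV=58, PCC=57, MCA error code in bits 15:0). The sample value is made up, and real decoding should follow the vendor manuals for the specific processor.

```python
# Sketch: decoding a few architecturally defined bits of an IA32_MCi_STATUS value.
# The sample value below is hypothetical, for illustration only.
FLAGS = {"VAL": 63, "OVER": 62, "UC": 61, "EN": 60, "MISCV": 59, "ADDRV": 58, "PCC": 57}

def decode_mci_status(status):
    out = {name: bool(status >> bit & 1) for name, bit in FLAGS.items()}
    out["mca_error_code"] = status & 0xFFFF            # architectural error code
    out["model_specific_code"] = (status >> 16) & 0xFFFF
    return out

sample = (1 << 63) | (1 << 61) | (1 << 60) | 0x0150    # hypothetical logged value
print(decode_mci_status(sample))
```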
https://en.wikipedia.org/wiki/Full-employment%20theorem
In computer science and mathematics, a full employment theorem is a term used, often humorously, to refer to a theorem which states that no algorithm can optimally perform a particular task done by some class of professionals. The name arises because such a theorem ensures that there is endless scope to keep discovering new techniques to improve the way at least some specific task is done. For example, the full employment theorem for compiler writers states that there is no such thing as a provably perfect size-optimizing compiler, as such a proof for the compiler would have to detect non-terminating computations and reduce them to a one-instruction infinite loop. Thus, the existence of a provably perfect size-optimizing compiler would imply a solution to the halting problem, which cannot exist. This also implies that there may always be a better compiler since the proof that one has the best compiler cannot exist. Therefore, compiler writers will always be able to speculate that they have something to improve. A similar example in practical computer science is the idea of no free lunch in search and optimization, which states that no efficient general-purpose solver can exist, and hence there will always be some particular problem whose best known solution might be improved. Similarly, Gödel's incompleteness theorems have been called full employment theorems for mathematicians. Tasks such as virus writing and detection, and spam filtering and filter-breaking are also subject to Rice's theorem.
https://en.wikipedia.org/wiki/Putrefaction
Putrefaction is the fifth stage of death, following pallor mortis, livor mortis, algor mortis, and rigor mortis. This process references the breaking down of a body of an animal post-mortem. In broad terms, it can be viewed as the decomposition of proteins, and the eventual breakdown of the cohesiveness between tissues, and the liquefaction of most organs. This is caused by the decomposition of organic matter by bacterial or fungal digestion, which causes the release of gases that infiltrate the body's tissues, and leads to the deterioration of the tissues and organs. The approximate time it takes putrefaction to occur is dependent on various factors. Internal factors that affect the rate of putrefaction include the age at which death has occurred, the overall structure and condition of the body, the cause of death, and external injuries arising before or after death. External factors include environmental temperature, moisture and air exposure, clothing, burial factors, and light exposure. Body farms are facilities that study the way various factors affect the putrefaction process. The first signs of putrefaction are signified by a greenish discoloration on the outside of the skin on the abdominal wall corresponding to where the large intestine begins, as well as under the surface of the liver. Certain substances, such as carbolic acid, arsenic, strychnine, and zinc chloride, can be used to delay the process of putrefaction in various ways based on their chemical make up. Description In thermodynamic terms, all organic tissues are composed of chemical energy, which, when not maintained by the constant biochemical maintenance of the living organism, begin to chemically break down due to the reaction with water into amino acids, known as hydrolysis. The breakdown of the proteins of a decomposing body is a spontaneous process. Protein hydrolysis is accelerated as the anaerobic bacteria of the digestive tract consume, digest, and excrete the cellular proteins of th
https://en.wikipedia.org/wiki/Tagged%20architecture
In computer science, a tagged architecture is a type of computer architecture where every word of memory constitutes a tagged union, being divided into a number of bits of data, and a tag section that describes the type of the data: how it is to be interpreted, and, if it is a reference, the type of the object that it points to. Architecture In contrast, program and data memory are indistinguishable in the von Neumann architecture, making the way the memory is referenced critical to interpret the correct meaning. Notable examples of American tagged architectures were the Lisp machines, which had tagged pointer support at the hardware and opcode level, the Burroughs large systems, which have a data-driven tagged and descriptor-based architecture, and the non-commercial Rice Computer. Both the Burroughs and Lisp machine are examples of high-level language computer architectures, where the tagging is used to support types from a high-level language at the hardware level. In addition to this, the original Xerox Smalltalk implementation used the least-significant bit of each 16-bit word as a tag bit: if it was clear then the hardware would accept it as an aligned memory address while if it was set it was treated as a (shifted) 15-bit integer. Current Intel documentation mentions that the lower bits of a memory address might be similarly used by some interpreter-based systems. In the Soviet Union, the Elbrus series of supercomputers pioneered the use of tagged architectures in 1973. See also Executable-space protection Harvard architecture
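The Smalltalk-style low-bit tagging mentioned above can be sketched in a few lines; this is an illustration added here, not the original implementation.

```python
# Sketch of the low-bit tagging scheme described above: in a 16-bit word,
# a set least-significant bit marks a (shifted) 15-bit integer, while a
# clear bit marks an aligned memory address.
def tag_int(n):
    assert -2**14 <= n < 2**14, "must fit in 15 bits"
    return ((n << 1) | 1) & 0xFFFF

def tag_addr(addr):
    assert addr % 2 == 0, "addresses must be 2-byte aligned so bit 0 is free"
    return addr & 0xFFFF

def untag(word):
    if word & 1:                                  # tag bit set -> small integer
        value = word >> 1
        if value & 0x4000:                        # sign-extend the 15-bit value
            value -= 0x8000
        return ("int", value)
    return ("addr", word)                         # tag bit clear -> aligned address

print(untag(tag_int(-3)), untag(tag_int(1000)), untag(tag_addr(0x1F40)))
```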
https://en.wikipedia.org/wiki/Quantum%20non-equilibrium
Quantum non-equilibrium is a concept within stochastic formulations of the De Broglie–Bohm theory of quantum physics. Overview In quantum mechanics, the Born rule states that the probability density of finding a system in a given state, when measured, is proportional to the square of the amplitude of the system's wavefunction at that state, and it constitutes one of the fundamental axioms of the theory. This is not the case for the De Broglie–Bohm theory, where the Born rule is not a basic law. Rather, in this theory the link between the probability density and the wave function has the status of a hypothesis, called the quantum equilibrium hypothesis, which is additional to the basic principles governing the wave function, the dynamics of the quantum particles and the Schrödinger equation. (For mathematical details, refer to the derivation by Peter R. Holland.) Accordingly, quantum non-equilibrium describes a state of affairs where the Born rule is not fulfilled; that is, the probability to find the particle in the differential volume at time t is unequal to Recent advances in investigations into properties of quantum non-equilibrium states have been performed mainly by theoretical physicist Antony Valentini, and earlier steps in this direction were undertaken by David Bohm, Jean-Pierre Vigier, Basil Hiley and Peter R. Holland. The existence of quantum non-equilibrium states has not been verified experimentally; quantum non-equilibrium is so far a theoretical construct. The relevance of quantum non-equilibrium states to physics lies in the fact that they can lead to different predictions for results of experiments, depending on whether the De Broglie–Bohm theory in its stochastic form or the Copenhagen interpretation is assumed to describe reality. (The Copenhagen interpretation, which stipulates the Born rule a priori, does not foresee the existence of quantum non-equilibrium states at all.) That is, properties of quantum non-equilibrium can make certain cla
https://en.wikipedia.org/wiki/Restricted%20isometry%20property
In linear algebra, the restricted isometry property (RIP) characterizes matrices which are nearly orthonormal, at least when operating on sparse vectors. The concept was introduced by Emmanuel Candès and Terence Tao and is used to prove many theorems in the field of compressed sensing. There are no known large matrices with bounded restricted isometry constants (computing these constants is strongly NP-hard, and is hard to approximate as well), but many random matrices have been shown to have bounded restricted isometry constants. In particular, it has been shown that with exponentially high probability, random Gaussian, Bernoulli, and partial Fourier matrices satisfy the RIP with number of measurements nearly linear in the sparsity level. The current smallest upper bounds for any large rectangular matrices are for those of Gaussian matrices. Web forms to evaluate bounds for the Gaussian ensemble are available at the Edinburgh Compressed Sensing RIC page. Definition Let A be an m × p matrix and let 1 ≤ s ≤ p be an integer. Suppose that there exists a constant δ_s ∈ (0, 1) such that, for every m × s submatrix A_s of A and for every s-dimensional vector y, (1 − δ_s)·‖y‖² ≤ ‖A_s y‖² ≤ (1 + δ_s)·‖y‖². Then, the matrix A is said to satisfy the s-restricted isometry property with restricted isometry constant δ_s. This condition is equivalent to the statement that for every m × s submatrix A_s of A we have ‖A_s* A_s − I‖ ≤ δ_s, where I is the identity matrix and ‖·‖ is the operator norm. See for example for a proof. Finally this is equivalent to stating that all eigenvalues of A_s* A_s are in the interval [1 − δ_s, 1 + δ_s]. Restricted Isometric Constant (RIC) The RIC is defined as the infimum of all possible δ for a given s. It is denoted as δ_s. Eigenvalues For any matrix that satisfies the RIP with a RIC of δ_s, the eigenvalues of A_s* A_s lie in the interval [1 − δ_s, 1 + δ_s] for every m × s submatrix A_s. The tightest upper bound on the RIC can be computed for Gaussian matrices. This can be achieved by computing the exact probability that all the eigenvalues of Wishart matrices lie within an interval. See also Compressed sensing Mutual coh
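For small sizes the restricted isometry constant can be estimated directly from the eigenvalue characterization by scanning all m × s submatrices. The brute-force sketch below does this for a small Gaussian matrix; it is illustrative only, since (as noted above) the computation is intractable for large matrices, and the matrix dimensions are arbitrary choices.

import itertools
import numpy as np

def restricted_isometry_constant(A, s):
    """Brute-force delta_s: max deviation of the eigenvalues of As^T As from 1."""
    m, p = A.shape
    delta = 0.0
    for cols in itertools.combinations(range(p), s):
        As = A[:, cols]
        eigvals = np.linalg.eigvalsh(As.T @ As)
        delta = max(delta, float(np.max(np.abs(eigvals - 1.0))))
    return delta

rng = np.random.default_rng(0)
m, p, s = 30, 12, 3
A = rng.normal(scale=1.0 / np.sqrt(m), size=(m, p))   # columns have unit norm on average
print("estimated delta_s:", restricted_isometry_constant(A, s))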
https://en.wikipedia.org/wiki/Video%20super-resolution
Video super-resolution (VSR) is the process of generating high-resolution video frames from the given low-resolution video frames. Unlike single-image super-resolution (SISR), the main goal is not only to restore more fine details while saving coarse ones, but also to preserve motion consistency. There are many approaches for this task, but the problem remains popular and challenging. Mathematical explanation Most research considers the degradation process of frames as y = (x ⊛ k)↓s + n, where: x — original high-resolution frame sequence, k — blur kernel, ⊛ — convolution operation, ↓s — downscaling operation, n — additive noise, y — low-resolution frame sequence. Super-resolution is the inverse operation, so its problem is to estimate a frame sequence x̂ from the frame sequence y so that x̂ is close to the original x. The blur kernel, downscaling operation and additive noise should be estimated for a given input to achieve better results. Video super-resolution approaches tend to have more components than their image counterparts as they need to exploit the additional temporal dimension. Complex designs are not uncommon. Some of the most essential components for VSR are guided by four basic functionalities: Propagation, Alignment, Aggregation, and Upsampling. Propagation refers to the way in which features are propagated temporally. Alignment concerns the spatial transformation applied to misaligned images/features. Aggregation defines the steps to combine aligned features. Upsampling describes the method to transform the aggregated features to the final output image. Methods When working with video, temporal information can be used to improve upscaling quality. Single-image super-resolution methods can be used too, generating high-resolution frames independently from their neighbours, but this is less effective and introduces temporal instability. There are a few traditional methods, which consider the video super-resolution task as an optimization problem. In recent years, deep learning based methods
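The degradation model above can be simulated directly. The sketch below is a toy example with an invented blur kernel and an integer downscaling factor, not a statement about any particular VSR method; it simply produces a low-resolution sequence from a high-resolution one, frame by frame.

import numpy as np
from scipy.ndimage import convolve

def degrade(frames, kernel, scale, noise_std, rng):
    """y = downsample(x * k) + n, applied frame by frame."""
    low_res = []
    for x in frames:
        blurred = convolve(x, kernel, mode="reflect")   # x convolved with k
        down = blurred[::scale, ::scale]                # spatial downscaling
        noise = rng.normal(0.0, noise_std, size=down.shape)
        low_res.append(down + noise)                    # additive noise
    return np.stack(low_res)

rng = np.random.default_rng(0)
high_res = rng.random((5, 64, 64))                      # 5 frames of 64x64
k = np.outer([1, 2, 1], [1, 2, 1]) / 16.0               # simple separable blur kernel
low_res = degrade(high_res, k, scale=2, noise_std=0.01, rng=rng)
print(high_res.shape, "->", low_res.shape)              # (5, 64, 64) -> (5, 32, 32)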
https://en.wikipedia.org/wiki/Haldane%27s%20rule
Haldane's rule is an observation about the early stage of speciation, formulated in 1922 by the British evolutionary biologist J. B. S. Haldane, that states that if — in a species hybrid — only one sex is inviable or sterile, that sex is more likely to be the heterogametic sex. The heterogametic sex is the one with two different sex chromosomes; in therian mammals, for example, this is the male. Overview Haldane himself described the rule as: Haldane's rule applies to the vast majority of heterogametic organisms. This includes the case where two species make secondary contact in an area of sympatry and form hybrids after allopatric speciation has occurred. The rule includes both male heterogametic (XY or XO-type sex determination, such as found in mammals and Drosophila fruit flies) and female heterogametic (ZW or Z0-type sex determination, as found in birds and butterflies), and some dioecious plants such as campions. Hybrid dysfunction (sterility and inviability) is a major form of post-zygotic reproductive isolation, which occurs in early stages of speciation. Evolution can produce a similar pattern of isolation in a vast array of different organisms. However, the actual mechanisms leading to Haldane's rule in different taxa remain largely undefined. Hypotheses Many different hypotheses have been advanced to address the evolutionary mechanisms to produce Haldane's rule. Currently, the most popular explanation for Haldane's rule is the composite hypothesis, which divides Haldane's rule into multiple subdivisions, including sterility, inviability, male heterogamety, and female heterogamety. The composite hypothesis states that Haldane's rule in different subdivisions has different causes. Individual genetic mechanisms may not be mutually exclusive, and these mechanisms may act together to cause Haldane's rule in any given subdivision. In contrast to these views that emphasize genetic mechanisms, another view hypothesizes that population dynamics during populat
https://en.wikipedia.org/wiki/Patch%20dynamics
Patch dynamics is an ecological perspective that the structure, function, and dynamics of ecological systems can be understood through studying their interactive patches. Patch dynamics, as a term, may also refer to the spatiotemporal changes within and among patches that make up a landscape. Patch dynamics is ubiquitous in terrestrial and aquatic systems across organizational levels and spatial scales. From a patch dynamics perspective, populations, communities, ecosystems, and landscapes may all be studied effectively as mosaics of patches that differ in size, shape, composition, history, and boundary characteristics. The idea of patch dynamics dates back to the 1940s when plant ecologists studied the structure and dynamics of vegetation in terms of the interactive patches that it comprises. A mathematical theory of patch dynamics was developed by Simon Levin and Robert Paine in the 1970s, originally to describe the pattern and dynamics of an intertidal community as a patch mosaic created and maintained by tidal disturbances. Patch dynamics became a dominant theme in ecology between the late 1970s and the 1990s. Patch dynamics is a conceptual approach to ecosystem and habitat analysis that emphasizes dynamics of heterogeneity within a system (i.e. that each area of an ecosystem is made up of a mosaic of small 'sub-ecosystems'). Diverse patches of habitat created by natural disturbance regimes are seen as critical to the maintenance of this diversity (ecology). A habitat patch is any discrete area with a definite shape, spatial and configuration used by a species for breeding or obtaining other resources. Mosaics are the patterns within landscapes that are composed of smaller elements, such as individual forest stands, shrubland patches, highways, farms, or towns. Patches and mosaics Historically, due to the short time scale of human observation, mosaic landscapes were perceived to be static patterns of human population mosaics. This focus centered o
https://en.wikipedia.org/wiki/Up%20tack
The up tack or falsum (⊥, \bot in LaTeX, U+22A5 in Unicode) is a constant symbol used to represent: The truth value 'false', or a logical constant denoting a proposition in logic that is always false (often called "falsum" or "absurdum"). The bottom element in wheel theory and lattice theory, which also represents absurdum when used for logical semantics The bottom type in type theory, which is the bottom element in the subtype relation. This may coincide with the empty type, which represents absurdum under the Curry–Howard correspondence The "undefined value" in quantum physics interpretations that reject counterfactual definiteness, as in (r0,⊥) as well as Mixed radix decoding in the APL programming language The glyph of the up tack appears as an upside-down tee symbol, and as such is sometimes called eet (the word "tee" in reverse). Tee plays a complementary or dual role in many of these theories. The similar-looking perpendicular symbol (⟂, \perp in LaTeX, U+27C2 in Unicode) is a binary relation symbol used to represent: Perpendicularity of lines in geometry Orthogonality in linear algebra Independence of random variables in probability theory Coprimality in number theory The double tack up symbol (⫫, U+2AEB in Unicode) is a binary relation symbol used to represent: Conditional independence of random variables in probability theory See also Alternative plus sign Contradiction List of mathematical symbols Tee (symbol) (⊤) Notes Mathematical notation Mathematical symbols Logic symbols
https://en.wikipedia.org/wiki/Ingredient-flavor%20network
In network science, ingredient-flavor networks are networks describing the sharing of flavor compounds among culinary ingredients. In the bipartite form, an ingredient-flavor network consists of two different types of nodes: the ingredients used in recipes and the flavor compounds that contribute to the flavor of each ingredient. The links connecting the two types of nodes are undirected; a link indicates that a given compound occurs in a given ingredient. The ingredient-flavor network can also be projected into the ingredient or compound space, where the nodes are ingredients or compounds and the links represent the sharing of the same compound by different ingredients, or the coexistence of different compounds in the same ingredient. History In 2011, Yong-Yeol Ahn, Sebastian E. Ahnert, James P. Bagrow and Albert-László Barabási investigated the ingredient-flavor networks of North American, Latin American, Western European, Southern European and East Asian cuisines. Based on the culinary repositories epicurious.com, allrecipes.com and menupan.com, 56,498 recipes were included in the survey. Earlier efforts to apply network analysis to food appear in the work of Kinouchi and of Chun-Yuen Teng, with the former examining the relationship between ingredients and recipes, and the latter deriving ingredient-ingredient networks of both complements and substitutions. Ahn's ingredient-flavor network, however, was constructed from a molecular-level understanding of culinary ingredients and received wide attention. Properties According to Ahn, in the 56,498 recipes studied, 381 ingredients and 1,021 flavor compounds were identified. On average, each ingredient was connected to 51 flavor compounds. It was found that, in comparison with random pairing of ingredients and flavor compounds, North American cuisines tend to share more compounds while East Asian cuisines tend to share fewer compounds. It was also shown that this tendency was mostly generated by the frequently used ingredients in e
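The bipartite structure and its ingredient-space projection can be expressed in a few lines. The toy ingredient and compound names below are invented for illustration and are not taken from Ahn's dataset.

from itertools import combinations

# Bipartite ingredient-flavor network: ingredient -> set of flavor compounds.
bipartite = {
    "tomato":   {"furaneol", "hexanal", "linalool"},
    "basil":    {"linalool", "eugenol"},
    "beef":     {"hexanal", "pyrazine"},
    "parmesan": {"furaneol", "pyrazine"},
}

# Projection into ingredient space: link two ingredients with a weight equal
# to the number of flavor compounds they share.
projection = {}
for a, b in combinations(bipartite, 2):
    shared = bipartite[a] & bipartite[b]
    if shared:
        projection[(a, b)] = len(shared)

for (a, b), weight in sorted(projection.items(), key=lambda kv: -kv[1]):
    print(f"{a} -- {b}: {weight} shared compound(s)")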
https://en.wikipedia.org/wiki/Astrobiology
Astrobiology is a scientific field within the life and environmental sciences that studies the origins, early evolution, distribution, and future of life in the universe by investigating its deterministic conditions and contingent events. As a discipline, astrobiology is founded on the premise that life may exist beyond Earth. Research in astrobiology comprises three main areas: the study of habitable environments in the Solar System and beyond, the search for planetary biosignatures of past or present extraterrestrial life, and the study of the origin and early evolution of life on Earth. The field of astrobiology has its origins in the 20th century with the advent of space exploration and the discovery of exoplanets. Early astrobiology research focused on the search for extraterrestrial life and the study of the potential for life to exist on other planets. In the 1960s and 1970s, NASA began its astrobiology pursuits within the Viking program, which was the first US mission to land on Mars and search for signs of life. This mission, along with other early space exploration missions, laid the foundation for the development of astrobiology as a discipline. Regarding habitable environments, astrobiology investigates potential locations beyond Earth that could support life, such as Mars, Europa, and exoplanets, through research into the extremophiles populating austere environments on Earth, like volcanic and deep sea environments. Research within this topic is conducted utilising the methodology of the geosciences, especially geobiology, for astrobiological applications. The search for biosignatures involves the identification of signs of past or present life in the form of organic compounds, isotopic ratios, or microbial fossils. Research within this topic is conducted utilising the methodology of planetary and environmental science, especially atmospheric science, for astrobiological applications, and is often conducted through remote sensing and in situ missi
https://en.wikipedia.org/wiki/Rasta%20filtering
RASTA-filtering and Mean Subtraction was introduced to support Perceptual Linear Prediction (PLP) preprocessing. It uses bandpass filtering in the log spectral domain. Rasta filtering then removes slow channel variations. It has also been applied to cepstrum feature-based preprocessing with both log spectral and cepstral domain filtering. In general a RASTA filter is defined by The numerator is a regression filter with N being the order (must be odd) and the denominator is an integrator with time decay. The pole controls the lower limit of frequency and is normally around 0.9. RASTA-filtering can be changed to use mean subtraction, implementing a moving average filter. Filtering is normally performed in the cepstral domain. The mean becomes the long term cepstrum and is typically computed on the speech part for each separate utterance. A silence is necessary to detect each utterance.
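The band-pass filtering in the log spectral or cepstral domain can be sketched with a standard IIR filter call. The coefficients below follow a commonly quoted RASTA parameterization (a 5-tap regression numerator and a single pole around 0.9); treat the exact values as an assumption made for this illustration rather than the definitive filter.

import numpy as np
from scipy.signal import lfilter

# Assumed RASTA coefficients: 5-tap regression numerator, single-pole integrator.
pole = 0.94
b = 0.1 * np.array([2.0, 1.0, 0.0, -1.0, -2.0])
a = np.array([1.0, -pole])

def rasta_filter(log_trajectory):
    """Band-pass filter a log-spectral (or cepstral) trajectory over time."""
    return lfilter(b, a, log_trajectory)

t = np.arange(200)
modulation = 0.2 * np.sin(2 * np.pi * t / 12.0)   # speech-like modulation
log_energy = modulation + 3.0                     # plus a constant channel offset
filtered = rasta_filter(log_energy)

# The constant (slowly varying) channel component is removed; the modulation remains.
print("mean before:", round(float(np.mean(log_energy)), 3))
print("mean after :", round(float(np.mean(filtered[50:])), 3))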
https://en.wikipedia.org/wiki/K%C3%BCpfm%C3%BCller%27s%20uncertainty%20principle
Küpfmüller's uncertainty principle by Karl Küpfmüller in the year 1924 states that the relation of the rise time of a bandlimited signal to its bandwidth is a constant. with either or Proof A bandlimited signal with fourier transform in frequency space is given by the multiplication of any signal with with a rectangular function of width as (applying the convolution theorem) Since the fourier transform of a rectangular function is a sinc function and vice versa, follows Now the first root of is at , which is the rise time of the pulse , now follows Equality is given as long as is finite. Regarding that a real signal has both positive and negative frequencies of the same frequency band, becomes , which leads to instead of See also Heisenberg's uncertainty principle
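The rise-time/bandwidth trade-off can be illustrated numerically. The sketch below band-limits an ideal unit step to a one-sided bandwidth Δf and measures the rise time from the steepest tangent of the resulting step response, which is one common convention; under that convention the product Δt·Δf comes out close to 1/2. The convention and the helper name are choices made for this illustration, not a restatement of Küpfmüller's original definitions.

import numpy as np

def tangent_rise_time(bandwidth, t_max=20.0, n=200001):
    """Band-limit an ideal unit step to the given one-sided bandwidth and
    return the rise time defined by the steepest tangent of the step response."""
    t = np.linspace(-t_max, t_max, n)
    dt = t[1] - t[0]
    # Impulse response of an ideal low-pass filter with cutoff `bandwidth`:
    h = 2.0 * bandwidth * np.sinc(2.0 * bandwidth * t)
    step_response = np.cumsum(h) * dt          # convolution of h with a unit step
    max_slope = np.max(np.diff(step_response)) / dt
    return 1.0 / max_slope                     # time for the tangent to rise from 0 to 1

for df in (0.5, 1.0, 2.0):
    dt_rise = tangent_rise_time(df)
    print(f"bandwidth {df}: rise time {dt_rise:.4f}, product {dt_rise * df:.4f}")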
https://en.wikipedia.org/wiki/Lightweight%20Presentation%20Protocol
Lightweight Presentation Protocol (LPP) is a protocol used to provide ISO presentation services on top of TCP/IP based protocol stacks. It is defined in RFC 1085. The Lightweight Presentation Protocol describes an approach for providing "streamlined" support of OSI model-conforming application services on top of TCP/IP-based network for some constrained environments. It was initially derived from a requirement to run the ISO Common Management Information Protocol (CMIP) in TCP/IP-based networks.
https://en.wikipedia.org/wiki/Dot%20planimeter
A dot planimeter is a device used in planimetrics for estimating the area of a shape, consisting of a transparent sheet containing a square grid of dots. To estimate the area of a shape, the sheet is overlaid on the shape and the dots within the shape are counted. The estimate of area is the number of dots counted multiplied by the area of a single grid square. In some variations, dots that land on or near the boundary of the shape are counted as half of a unit. The dots may also be grouped into larger square groups by lines drawn onto the transparency, allowing groups that are entirely within the shape to be added to the count rather than requiring their dots to be counted one by one. The estimation of area by means of a dot grid has also been called the dot grid method or (particularly when the alignment of the grid with the shape is random) systematic sampling. Perhaps because of its simplicity, it has been repeatedly reinvented. Application In forestry, cartography, and geography, the dot planimeter has been applied to maps to estimate the area of parcels of land. In botany and horticulture, it has been applied directly to sampled leaves to estimate the average leaf area. In medicine, it has been applied to Lashley diagrams as an estimate of the size of brain lesions. In mineralogy, a similar technique of counting dots in a grid is applied to cross-sections of rock samples for a different purpose, estimating the relative proportions of different constituent minerals. Theory Greater accuracy can be achieved by using a dot planimeter with a finer grid of dots. Alternatively, repeatedly placing a dot planimeter with different irrational offsets from its previous placement, and averaging the resulting measurements, can lead to a set of sampled measurements whose average tends towards the true area of the measured shape. The method using a finer grid tends to have better statistical efficiency than repeated measurement with random placements. According to Pick'
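The counting procedure is easy to reproduce numerically. The sketch below overlays a square dot grid on a disk of radius 1 and compares the dot-count estimate with the true area π; finer grids give better estimates, matching the accuracy discussion above. The helper names and grid spacings are arbitrary choices for the illustration.

import numpy as np

def dot_planimeter_area(inside, spacing, extent=2.0):
    """Estimate the area of a shape given by the predicate inside(x, y).

    Dots lie on a square grid with the given spacing; each dot that falls
    inside the shape contributes spacing**2 to the estimate."""
    coords = np.arange(-extent, extent + spacing, spacing)
    count = 0
    for x in coords:
        for y in coords:
            if inside(x, y):
                count += 1
    return count * spacing ** 2

radius = 1.0
in_disk = lambda x, y: x * x + y * y <= radius * radius

for spacing in (0.5, 0.1, 0.02):
    estimate = dot_planimeter_area(in_disk, spacing)
    print(f"spacing {spacing}: estimate {estimate:.4f}, true area {np.pi:.4f}")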
https://en.wikipedia.org/wiki/Phase%20space
In dynamical systems theory and control theory, a phase space or state space is a space in which all possible "states" of a dynamical system or a control system are represented, with each possible state corresponding to one unique point in the phase space. For mechanical systems, the phase space usually consists of all possible values of position and momentum variables. It is the direct product of direct space and reciprocal space. The concept of phase space was developed in the late 19th century by Ludwig Boltzmann, Henri Poincaré, and Josiah Willard Gibbs. Principles In a phase space, every degree of freedom or parameter of the system is represented as an axis of a multidimensional space; a one-dimensional system is called a phase line, while a two-dimensional system is called a phase plane. For every possible state of the system or allowed combination of values of the system's parameters, a point is included in the multidimensional space. The system's evolving state over time traces a path (a phase-space trajectory for the system) through the high-dimensional space. The phase-space trajectory represents the set of states compatible with starting from one particular initial condition, located in the full phase space that represents the set of states compatible with starting from any initial condition. As a whole, the phase diagram represents all that the system can be, and its shape can easily elucidate qualities of the system that might not be obvious otherwise. A phase space may contain a great number of dimensions. For instance, a gas containing many molecules may require a separate dimension for each particle's x, y and z positions and momenta (6 dimensions for an idealized monatomic gas), and for more complex molecular systems additional dimensions are required to describe vibrational modes of the molecular bonds, as well as spin around 3 axes. Phase spaces are easier to use when analyzing the behavior of mechanical systems restricted to motion around and al
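As a concrete illustration, the sketch below traces the phase-space trajectory of a one-dimensional harmonic oscillator, i.e. a phase plane in (q, p). For this system the trajectories are ellipses of constant energy, and the printed energy drift confirms that the simulated point stays on one such curve. The parameter values are arbitrary choices for the example.

import numpy as np

# Harmonic oscillator: H(q, p) = p**2 / (2*m) + k * q**2 / 2.
m, k = 1.0, 4.0
dt, steps = 0.001, 5000

q, p = 1.0, 0.0                      # initial state: one point in the phase plane
trajectory = []
for _ in range(steps):
    # Semi-implicit (symplectic) Euler step, which keeps the orbit on its ellipse.
    p -= k * q * dt
    q += p / m * dt
    trajectory.append((q, p))

trajectory = np.array(trajectory)
energy = trajectory[:, 1] ** 2 / (2 * m) + k * trajectory[:, 0] ** 2 / 2
print("energy drift along the orbit:", float(energy.max() - energy.min()))  # ~0: closed curve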
https://en.wikipedia.org/wiki/Square%20root%20of%207
The square root of 7 is the positive real number that, when multiplied by itself, gives the prime number 7. It is more precisely called the principal square root of 7, to distinguish it from the negative number with the same property. This number appears in various geometric and number-theoretic contexts. It can be denoted in surd form as: and in exponent form as: It is an irrational algebraic number. The first sixty significant digits of its decimal expansion are: . which can be rounded up to 2.646 to within about 99.99% accuracy (about 1 part in 10000); that is, it differs from the correct value by about . The approximation (≈ 2.645833...) is better: despite having a denominator of only 48, it differs from the correct value by less than , or less than one part in 33,000. More than a million decimal digits of the square root of seven have been published. Rational approximations The extraction of decimal-fraction approximations to square roots by various methods has used the square root of 7 as an example or exercise in textbooks, for hundreds of years. Different numbers of digits after the decimal point are shown: 5 in 1773 and 1852, 3 in 1835, 6 in 1808, and 7 in 1797. An extraction by Newton's method (approximately) was illustrated in 1922, concluding that it is 2.646 "to the nearest thousandth". For a family of good rational approximations, the square root of 7 can be expressed as the continued fraction The successive partial evaluations of the continued fraction, which are called its convergents, approach : Their numerators are 2, 3, 5, 8, 37, 45, 82, 127, 590, 717, 1307, 2024, 9403, 11427, 20830, 32257… , and their denominators are 1, 1, 2, 3, 14, 17, 31, 48, 223, 271, 494, 765, 3554, 4319, 7873, 12192,…. Each convergent is a best rational approximation of ; in other words, it is closer to than any rational with a smaller denominator. Approximate decimal equivalents improve linearly (number of digits proportional to convergent number) at
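The convergents listed above can be regenerated from the periodic continued fraction using the standard recurrence for continued-fraction convergents; the sketch below does so and prints numerators and denominators that can be checked against the sequences quoted above.

from fractions import Fraction

def sqrt7_convergents(count):
    """Convergents of sqrt(7) from its continued fraction [2; 1, 1, 1, 4, 1, 1, 1, 4, ...]."""
    partial_quotients = [2] + [1, 1, 1, 4] * (count // 4 + 1)
    h_prev, h = 1, partial_quotients[0]          # numerators
    k_prev, k = 0, 1                             # denominators
    convergents = [Fraction(h, k)]
    for a in partial_quotients[1:count]:
        h, h_prev = a * h + h_prev, h
        k, k_prev = a * k + k_prev, k
        convergents.append(Fraction(h, k))
    return convergents

target = 7 ** 0.5
for c in sqrt7_convergents(9):
    print(f"{c.numerator}/{c.denominator} ~ {float(c):.9f} (error {abs(float(c) - target):.2e})")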
https://en.wikipedia.org/wiki/MONA%20number
A MONA number (short for Moths of North America), or Hodges number after Ronald W. Hodges, is part of a numbering system for North American moths found north of Mexico in the Continental United States and Canada, as well as the island of Greenland. Introduced in 1983 by Hodges through the publication of Check List of the Lepidoptera of America North of Mexico, the system began an ongoing numeration process in order to compile a list of the over 12,000 moths of North America north of Mexico. The system numbers moths within the same family close together for identification purposes. For example, the species Epimartyria auricrinella begins the numbering system at 0001 while Epimartyria pardella is numbered 0002. The system has become somewhat out of date since its inception for several reasons: Some numbers no longer exist as the species bearing the number have been reclassified into other species. Some species have been regrouped into a different family and their MONA numbers are out of order taxonomically. New species have been discovered since the implementation of the MONA system, resulting in the usage of decimal numbers as to not disrupt the numbering of other species. Despite the issues above, the MONA system has remained popular with many websites and publications. It is the most popular numbering system used, largely replacing the older McDunnough Numbers system, while some published lists prefer to use other forms of compilation. The Moth Photographer's Group (MPG) at Mississippi State University actively monitors the expansive list of North American moths utilizing the MONA system and updates their checklists in accordance with publishings regarding changes and additions.
https://en.wikipedia.org/wiki/List%20of%20oldest%20fathers
This is a list of persons reported to have become father of a child at or after 75 years of age. These claims have not necessarily been verified. Medical considerations According to a 1969 study, there is a decrease in sperm concentration as men age. The study reported that 90% of seminiferous tubules in men in their 20s and 30s contained spermatids, whereas men in their 40s and 50s had spermatids in 50% of their seminiferous tubules. In the study, only 10% of seminiferous tubules from men aged > 80 years contained spermatids. In a random international sample of 11,548 men confirmed to be biological fathers by DNA paternity testing, the oldest father was found to be 66 years old at the birth of his child; the ratio of DNA-confirmed versus DNA-rejected paternity tests around that age is in agreement with the notion of general male infertility greater than age 65-66. List of claims See also List of oldest birth mothers List of people with the most children List of multiple births Pregnancy Abraham and his son Isaac Genealogies of Genesis including multiple accounts of super-aged fathers
https://en.wikipedia.org/wiki/Hardware%20architect
(In the automation and engineering environments, the hardware engineer or architect encompasses the electronics engineering and electrical engineering fields, with subspecialities in analog, digital, or electromechanical systems.) The hardware systems architect or hardware architect is responsible for: Interfacing with a systems architect or client stakeholders. It is extraordinarily rare nowadays for sufficiently large and/or complex hardware systems that require a hardware architect not to require substantial software and a systems architect. The hardware architect will therefore normally interface with a systems architect, rather than directly with user(s), sponsor(s), or other client stakeholders. However, in the absence of a systems architect, the hardware systems architect must be prepared to interface directly with the client stakeholders in order to determine their (evolving) needs to be realized in hardware. The hardware architect may also need to interface directly with a software architect or engineer(s), or with other mechanical or electrical engineers. Generating the highest level of hardware requirements, based on the user's needs and other constraints such as cost and schedule. Ensuring that this set of high level requirements is consistent, complete, correct, and operationally defined. Performing cost–benefit analyses to determine the best methods or approaches for meeting the hardware requirements; making maximum use of commercial off-the-shelf or already developed components. Developing partitioning algorithms (and other processes) to allocate all present and foreseeable (hardware) requirements into discrete hardware partitions such that a minimum of communications is needed among partitions, and between the user and the system. Partitioning large hardware systems into (successive layers of) subsystems and components each of which can be handled by a single hardware engineer or team of engineers. Ensuring that maximally robust hardware architec
https://en.wikipedia.org/wiki/Single%20instruction%2C%20multiple%20threads
Single instruction, multiple threads (SIMT) is an execution model used in parallel computing where single instruction, multiple data (SIMD) is combined with multithreading. It is different from SPMD in that all instructions in all "threads" are executed in lock-step. The SIMT execution model has been implemented on several GPUs and is relevant for general-purpose computing on graphics processing units (GPGPU), e.g. some supercomputers combine CPUs with GPUs. The processors, say a number p of them, seem to execute many more than p tasks. This is achieved by each processor having multiple "threads" (or "work-items" or "Sequence of SIMD Lane operations"), which execute in lock-step, and are analogous to SIMD lanes. The simplest way to understand SIMT is to imagine a multi-core system, where each core has its own register file, its own ALUs (both SIMD and Scalar) and its own data cache, but, unlike a standard multi-core system which has multiple independent instruction caches and decoders as well as multiple independent Program Counter registers, the instructions are synchronously broadcast to all SIMT cores from a single unit with a single instruction cache and a single instruction decoder which reads instructions using a single Program Counter. The key difference between SIMT and SIMD lanes is that each of the SIMT cores may have a completely different Stack Pointer (and thus perform computations on completely different data sets), whereas SIMD lanes are simply part of an ALU that knows nothing about memory per se. History SIMT was introduced by Nvidia in the Tesla GPU microarchitecture with the G80 chip. ATI Technologies, now AMD, released a competing product slightly later on May 14, 2007, the TeraScale 1-based "R600" GPU chip. Description As access time of all the widespread RAM types (e.g. DDR SDRAM, GDDR SDRAM, XDR DRAM, etc.) is still relatively high, engineers came up with the idea to hide the latency that inevitably comes with each memory access. St
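The lock-step execution under a single shared instruction stream can be mimicked in a few lines. The sketch below is a software analogy only, with invented variable names and no relation to any real GPU instruction set: one instruction sequence is broadcast to several "threads", and branch divergence is handled with a per-lane mask, which is the general idea SIMT hardware implements.

import numpy as np

# Four "threads" executing the same instruction stream in lock-step on different data.
data = np.array([1.0, -2.0, 3.0, -4.0])
active = np.ones(data.shape, dtype=bool)     # execution mask (all lanes enabled)

# Instruction 1 (broadcast to all lanes): y = data * 2
y = np.where(active, data * 2.0, 0.0)

# Branch "if y < 0": both sides are executed, lanes are masked on and off.
taken = active & (y < 0)
y = np.where(taken, -y, y)                   # then-branch, only where the branch is taken
not_taken = active & ~taken
y = np.where(not_taken, y + 1.0, y)          # else-branch for the remaining lanes

print(y)                                     # every lane followed the same instruction stream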
https://en.wikipedia.org/wiki/Structural%20synthesis%20of%20programs
Structural synthesis of programs (SSP) is a special form of (automatic) program synthesis that is based on propositional calculus. More precisely, it uses intuitionistic logic for describing the structure of a program in such a detail that the program can be automatically composed from pieces like subroutines or even computer commands. It is assumed that these pieces have been implemented correctly, hence no correctness verification of these pieces is needed. SSP is well suited for automatic composition of services for service-oriented architectures and for synthesis of large simulation programs. History Automatic program synthesis began in the artificial intelligence field, with software intended for automatic problem solving. The first program synthesizer was developed by Cordell Green in 1969. At about the same time, mathematicians including R. Constable, Z. Manna, and R. Waldinger explained the possible use of formal logic for automatic program synthesis. Practically applicable program synthesizers appeared considerably later. The idea of structural synthesis of programs was introduced at a conference on algorithms in modern mathematics and computer science organized by Andrey Ershov and Donald Knuth in 1979. The idea originated from G. Pólya’s well-known book on problem solving. The method for devising a plan for solving a problem in SSP was presented as a formal system. The inference rules of the system were restructured and justified in logic by G. Mints and E. Tyugu in 1982. A programming tool PRIZ that uses SSP was developed in the 1980s. A recent Integrated development environment that supports SSP is CoCoViLa — a model-based software development platform for implementing domain specific languages and developing large Java programs. The logic of SSP Structural synthesis of programs is a method for composing programs from already implemented components (e.g. from computer commands or software object methods) that can be considered as functions.
https://en.wikipedia.org/wiki/Proof%20%28play%29
Proof is a 2000 play by the American playwright David Auburn. Proof was developed at George Street Playhouse in New Brunswick, New Jersey, during the 1999 Next Stage Series of new plays. The play premiered Off-Broadway in May 2000 and transferred to Broadway in October 2000. The play won the 2001 Pulitzer Prize for Drama and the Tony Award for Best Play. Plot The play focuses on Catherine, the daughter of Robert, a recently deceased mathematical genius in his fifties and professor at the University of Chicago, and her struggle with mathematical genius and mental illness. Catherine had cared for her father through a lengthy mental illness. Upon Robert's death, his ex-graduate student Hal discovers a paradigm-shifting proof about prime numbers in Robert's office. The title refers both to that proof and to the play's central question: Can Catherine prove the proof's authorship? Along with demonstrating the proof's authenticity, Catherine also finds herself in a relationship with Hal. Throughout, the play explores Catherine's fear of following in her father's footsteps, both mathematically and mentally and her desperate attempts to stay in control. Act I The play opens with Catherine sitting in the backyard of her large, old house. Robert, her father, reveals a bottle of champagne to help celebrate her 25th birthday. Catherine complains that she hasn't done any worthwhile work in the field of mathematics, at least not to the same level as her father, a well-known math genius. He reassures her that she can still do good work as long as she stops sleeping until noon and wasting time reading magazines. Catherine confesses she is worried about inheriting Robert's inclination towards mental instability. He begins to comfort her but then alludes to a "bad sign" when he points out that he did, in fact, die a week ago. Robert disappears as Catherine dozes off. She awakens when Hal, one of Robert's students, exits the house. He has been studying the hundreds of notebooks Robe
https://en.wikipedia.org/wiki/List%20of%20mathematic%20operators
In mathematics, an operator or transform is a function from one space of functions to another. Operators occur commonly in engineering, physics and mathematics. Many are integral operators and differential operators. In the following L is an operator which takes a function to another function . Here, and are some unspecified function spaces, such as Hardy space, Lp space, Sobolev space, or, more vaguely, the space of holomorphic functions. See also List of transforms List of Fourier-related transforms Transfer operator Fredholm operator Borel transform Glossary of mathematical symbols Operators Operators Operators
https://en.wikipedia.org/wiki/Taxon%20in%20disguise
In bacteriology, a taxon in disguise is a species, genus or higher unit of biological classification whose evolutionary history reveals it to have evolved from another unit of a similar or lower rank, making the parent unit paraphyletic. That happens when rapid evolution makes a new species appear so radically different from the ancestral group that it is not (initially) recognised as belonging to the parent phylogenetic group, which is left as an evolutionary grade. While the term is from bacteriology, parallel examples are found throughout the tree of life. For example, four-footed animals have evolved from piscine ancestors, but since they are not generally considered fish, they can be said to be "fish in disguise". In many cases, the paraphyly can be resolved by reclassifying the taxon in question under the parent group. However, in bacteriology, renaming groups may have serious consequences by causing confusion over the identity of pathogens, so it is generally avoided for some groups. Examples Shigella The bacterial genus Shigella is the cause of bacillary dysentery, a potentially severe infection that kills over a million people every year. The species of the genus (S. dysenteriae, S. flexneri, S. boydii, S. sonnei) have evolved from the common intestinal bacterium Escherichia coli, which renders that species paraphyletic. E. coli itself can also cause serious dysentery, but differences in genetic makeup between E. coli and Shigella cause different medical conditions and symptoms. Escherichia coli is a badly classified species, since some strains share only 20% of their genome; it is so diverse that it should be given a higher taxonomic ranking. However, the medical conditions associated with E. coli itself and with Shigella mean that the current classification is left unchanged, to avoid confusion in a medical context. Shigella will thus remain "E. coli in disguise". B. cereus-group Similarly, the Bacillus species of the B. cereus-group (B. anthracis, B. cereus, B. thuringiensis
https://en.wikipedia.org/wiki/Behavior-based%20robotics
Behavior-based robotics (BBR) or behavioral robotics is an approach in robotics that focuses on robots that are able to exhibit complex-appearing behaviors despite having little internal state with which to model their immediate environment, mostly correcting their actions gradually via sensory-motor links. Principles Behavior-based robotics sets itself apart from traditional artificial intelligence by using biological systems as a model. Classic artificial intelligence typically uses a set of steps to solve problems; it follows a path based on internal representations of events, in contrast to the behavior-based approach. Rather than use preset calculations to tackle a situation, behavior-based robotics relies on adaptability. This advancement has allowed behavior-based robotics to become commonplace in research and data gathering. Most behavior-based systems are also reactive, which means they need no pre-programmed model of what a chair looks like, or what kind of surface the robot is moving on. Instead, all the information is gleaned from the input of the robot's sensors. The robot uses that information to gradually correct its actions according to changes in its immediate environment. Behavior-based robots (BBR) usually show more biological-appearing actions than their computing-intensive counterparts, which are very deliberate in their actions. A BBR often makes mistakes, repeats actions, and appears confused, but can also show the anthropomorphic quality of tenacity. Comparisons between BBRs and insects are frequent because of these actions. BBRs are sometimes considered examples of weak artificial intelligence, although some have claimed they are models of all intelligence. Features Most behavior-based robots are programmed with a basic set of features to start them off. They are given a behavioral repertoire dictating what behaviors to use and when; behaviors such as obstacle avoidance and battery charging can provide a foundation to help the robots learn and succeed. Rather than buil
https://en.wikipedia.org/wiki/Optical%20interconnect
In integrated circuits, optical interconnects refers to any system of transmitting signals from one part of an integrated circuit to another using light. Optical interconnects have been the topic of study due to the high latency and power consumption incurred by conventional metal interconnects in transmitting electrical signals over long distances, such as in interconnects classed as global interconnects. The International Technology Roadmap for Semiconductors (ITRS) has highlighted interconnect scaling as a problem for the semiconductor industry. In electrical interconnects, nonlinear signals (e.g. digital signals) are transmitted by copper wires conventionally, and these electrical wires all have resistance and capacitance which severely limits the rise time of signals when the dimension of the wires are scaled down. Optical solution are used to transmit signals through long distances to substitute interconnection between dies within the integrated circuit (IC) package. In order to control the optical signals inside the small IC package properly, microelectromechanical system (MEMS) technology can be used to integrate the optical components (i.e. optical waveguides, optical fibers, lens, mirrors, optical actuators, optical sensors etc.) and the electronic parts together effectively. Problems of the current interconnect in the package Conventional physical metal wires possess both resistance and capacitance, limiting the rise time of signals. Bits of information will overlap with each other when the frequency of signal is increased to a certain level. Benefits of using optical interconnection Optical interconnections can provide benefits over conventional metal wires which include: More predictable timing Reduction of power and area for clock distribution Distance independence of performance of optical interconnects No frequency-dependent Cross-talk Architectural advantages Reducing power dissipation in interconnects Voltage isolation Density of inte
https://en.wikipedia.org/wiki/NPL%20network
The NPL network, or NPL Data Communications Network, was a local area computer network operated by a team from the National Physical Laboratory in London that pioneered the concept of packet switching. Based on designs first conceived by Donald Davies in 1965, development work began in 1968. Elements of the first version of the network, the Mark I, became operational during 1969 then fully operational in January 1970, and the Mark II version operated from 1973 until 1986. The NPL network followed by the ARPANET in the United States were the first two computer networks that implemented packet switching and the NPL network was the first to use high-speed links. It was, along with the ARPANET project, laid down the technical foundations of modern internet. Origins In 1965, Donald Davies, who was later appointed to head of the NPL Division of Computer Science, proposed a commercial national data network based on packet switching in Proposal for the Development of a National Communications Service for On-line Data Processing. After the proposal was not taken up nationally, during 1966 he headed a team which produced a design for a local network to serve the needs of NPL and prove the feasibility of packet switching. The design was the first to describe the concept of an "Interface computer", today known as a router. The next year, a written version of the proposal entitled NPL Data Network was presented by Roger Scantlebury at the Symposium on Operating Systems Principles. It described how computers (nodes) used to transmit signals (packets) would be connected by electrical links to re-transmit the signals between and to the nodes, and interface computers would be used to link node networks to so-called time-sharing computers and other users. The interface computers would transmit multiplex signals between networks, and nodes would switch transmissions while connected to electrical circuitry functioning at a rate of processing amounting to mega-bits. In Scantlebury's
https://en.wikipedia.org/wiki/Digital%20signal%20processor
A digital signal processor (DSP) is a specialized microprocessor chip, with its architecture optimized for the operational needs of digital signal processing. DSPs are fabricated on metal–oxide–semiconductor (MOS) integrated circuit chips. They are widely used in audio signal processing, telecommunications, digital image processing, radar, sonar and speech recognition systems, and in common consumer electronic devices such as mobile phones, disk drives and high-definition television (HDTV) products. The goal of a DSP is usually to measure, filter or compress continuous real-world analog signals. Most general-purpose microprocessors can also execute digital signal processing algorithms successfully, but may not be able to keep up with such processing continuously in real-time. Also, dedicated DSPs usually have better power efficiency, thus they are more suitable in portable devices such as mobile phones because of power consumption constraints. DSPs often use special memory architectures that are able to fetch multiple data or instructions at the same time. Overview Digital signal processing (DSP) algorithms typically require a large number of mathematical operations to be performed quickly and repeatedly on a series of data samples. Signals (perhaps from audio or video sensors) are constantly converted from analog to digital, manipulated digitally, and then converted back to analog form. Many DSP applications have constraints on latency; that is, for the system to work, the DSP operation must be completed within some fixed time, and deferred (or batch) processing is not viable. Most general-purpose microprocessors and operating systems can execute DSP algorithms successfully, but are not suitable for use in portable devices such as mobile phones and PDAs because of power efficiency constraints. A specialized DSP, however, will tend to provide a lower-cost solution, with better performance, lower latency, and no requirements for specialised cooling or large ba
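The repeated multiply-accumulate workload described above is the core of, for example, an FIR filter. The short sketch below shows the inner loop a DSP would execute (often one tap per cycle, using its parallel data and coefficient fetches), written in plain Python for clarity rather than speed; the filter and test signal are arbitrary choices for the example.

# FIR filtering as repeated multiply-accumulate (MAC) operations.
def fir_filter(samples, coefficients):
    taps = len(coefficients)
    history = [0.0] * taps            # delay line holding the most recent inputs
    output = []
    for x in samples:
        history = [x] + history[:-1]  # shift the delay line
        acc = 0.0
        for h, s in zip(coefficients, history):
            acc += h * s              # the MAC operation performed once per tap
        output.append(acc)
    return output

# 4-tap moving-average filter applied to a short test signal.
signal = [0.0, 1.0, 2.0, 3.0, 4.0, 4.0, 4.0]
print(fir_filter(signal, [0.25, 0.25, 0.25, 0.25]))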
https://en.wikipedia.org/wiki/Authenticated%20Identity%20Body
Authenticated Identity Body or AIB is a method allowing parties in a network to share authenticated identity thereby increasing the integrity of their SIP communications. AIBs extend other authentication methods like S/MIME to provide a more specific mechanism to introduce integrity to SIP transmissions. Parties transmitting AIBs cryptographically sign a subset of SIP message headers, and such signatures assert the message originator's identity. To meet requirements of reference integrity (for example in defending against replay attacks) additional SIP message headers such as 'Date' and 'Contact' may be optionally included in the AIB. AIB is described and discussed in RFC 3893: "For reasons of end-to-end privacy, it may also be desirable to encrypt AIBs [...]. While encryption of AIBs entails that only the holder of a specific key can decrypt the body, that single key could be distributed throughout a network of hosts that exist under common policies. The security of the AIB is therefore predicated on the secure distribution of the key. However, for some networks (in which there are federations of trusted hosts under a common policy), the widespread distribution of a decryption key could be appropriate. Some telephone networks, for example, might require this model. When an AIB is encrypted, the AIB should be encrypted before it is signed." See also Computer networks Cryptographic software VoIP protocols VoIP software
https://en.wikipedia.org/wiki/List%20of%20variational%20topics
This is a list of variational topics in from mathematics and physics. See calculus of variations for a general introduction. Action (physics) Averaged Lagrangian Brachistochrone curve Calculus of variations Catenoid Cycloid Dirichlet principle Euler–Lagrange equation cf. Action (physics) Fermat's principle Functional (mathematics) Functional derivative Functional integral Geodesic Isoperimetry Lagrangian Lagrangian mechanics Legendre transformation Luke's variational principle Minimal surface Morse theory Noether's theorem Path integral formulation Plateau's problem Prime geodesic Principle of least action Soap bubble Soap film Tautochrone curve Variations
https://en.wikipedia.org/wiki/Paprika%20oleoresin
Paprika oleoresin (also known as paprika extract and oleoresin paprika) is an oil-soluble extract from the fruits of Capsicum annuum or Capsicum frutescens, and is primarily used as a colouring and/or flavouring in food products. It is composed of vegetable oil (often in the range of 97% to 98%), capsaicin, the main flavouring compound giving pungency in higher concentrations, and capsanthin and capsorubin, the main colouring compounds (among other carotenoids). It is much milder than capsicum oleoresin, often containing no capsaicin at all. Extraction is performed by percolation with a variety of solvents, primarily hexane, which are removed prior to use. Vegetable oil is then added to ensure a uniform color saturation. Uses Foods colored with paprika oleoresin include cheese, orange juice, spice mixtures, sauces, sweets, ketchup, soups, fish fingers, chips, pastries, fries, dressings, seasonings, jellies, bacon, ham, ribs, and among other foods even cod fillets. In poultry feed, it is used to deepen the colour of egg yolks. In the United States, paprika oleoresin is listed as a color additive “exempt from certification”. In Europe, paprika oleoresin (extract), and the compounds capsanthin and capsorubin are designated by E160c. Names and CAS nos
https://en.wikipedia.org/wiki/Programmable%20logic%20array
A programmable logic array (PLA) is a kind of programmable logic device used to implement combinational logic circuits. The PLA has a set of programmable AND gate planes, which link to a set of programmable OR gate planes, which can then be conditionally complemented to produce an output. It has 2N AND gates for N input variables, and for M outputs from PLA, there should be M OR gates, each with programmable inputs from all of the AND gates. This layout allows for many logic functions to be synthesized in the sum of products canonical forms. PLAs differ from programmable array logic devices (PALs and GALs) in that both the AND and OR gate planes are programmable.[PAL has programmable AND gates but fixed OR gates] History In 1970, Texas Instruments developed a mask-programmable IC based on the IBM read-only associative memory or ROAM. This device, the TMS2000, was programmed by altering the metal layer during the production of the IC. The TMS2000 had up to 17 inputs and 18 outputs with 8 JK flip-flops for memory. TI coined the term Programmable Logic Array for this device. Implementation procedure Preparation in SOP (sum of products) form. Obtain the minimum SOP form to reduce the number of product terms to a minimum. Decide the input connection of the AND matrix for generating the required product term. Then decide the input connections of OR matrix to generate the sum terms. Decide the connections of invert matrix. Program the PLA. PLA block diagram: Advantages over read-only memory The desired outputs for each combination of inputs could be programmed into a read-only memory, with the inputs being driven by the address bus and the outputs being read out as data. However, that would require a separate memory location for every possible combination of inputs, including combinations that are never supposed to occur, and also duplicating data for "don't care" conditions (for example, logic like "if input A is 1, then, as far as output X is concerned, w
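A PLA's programmable AND plane feeding a programmable OR plane can be modeled directly as two tables. The sketch below programs a small 3-input, 2-output example and evaluates it for every input combination; the particular product terms and output names are invented purely for illustration.

from itertools import product as all_input_combinations

# AND plane: each product term lists its required input literals as {input index: value}.
and_plane = [
    {0: 1, 1: 1},          # P0 = A AND B
    {1: 1, 2: 1},          # P1 = B AND C
    {0: 1, 2: 0},          # P2 = A AND (NOT C)
]
# OR plane: each output lists which product terms it ORs together.
or_plane = {
    "X": [0, 1],           # X = P0 OR P1
    "Y": [1, 2],           # Y = P1 OR P2
}

def evaluate_pla(inputs):
    products = [all(inputs[i] == v for i, v in term.items()) for term in and_plane]
    return {name: int(any(products[p] for p in terms)) for name, terms in or_plane.items()}

for bits in all_input_combinations((0, 1), repeat=3):
    print(bits, evaluate_pla(bits))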
https://en.wikipedia.org/wiki/Poisson%20wavelet
In mathematics, in functional analysis, several different wavelets are known by the name Poisson wavelet. In one context, the term "Poisson wavelet" is used to denote a family of wavelets labeled by the set of positive integers, the members of which are associated with the Poisson probability distribution. These wavelets were first defined and studied by Karlene A. Kosanovich, Allan R. Moser and Michael J. Piovoso in 1995–96. In another context, the term refers to a certain wavelet which involves a form of the Poisson integral kernel. In still another context, the terminology is used to describe a family of complex wavelets indexed by positive integers which are connected with the derivatives of the Poisson integral kernel. Wavelets associated with Poisson probability distribution Definition For each positive integer n the Poisson wavelet is defined by To see the relation between the Poisson wavelet and the Poisson distribution let X be a discrete random variable having the Poisson distribution with parameter (mean) t and, for each non-negative integer n, let Prob(X = n) = pn(t). Then we have The Poisson wavelet is now given by Basic properties is the backward difference of the values of the Poisson distribution: The "waviness" of the members of this wavelet family follows from The Fourier transform of is given The admissibility constant associated with is Poisson wavelet is not an orthogonal family of wavelets. Poisson wavelet transform The Poisson wavelet family can be used to construct the family of Poisson wavelet transforms of functions defined the time domain. Since the Poisson wavelets satisfy the admissibility condition also, functions in the time domain can be reconstructed from their Poisson wavelet transforms using the formula for inverse continuous-time wavelet transforms. If f(t) is a function in the time domain its n-th Poisson wavelet transform is given by In the reverse direction, given the n-th Poisson wavelet transform of
https://en.wikipedia.org/wiki/Safe%20operating%20area
For power semiconductor devices (such as BJT, MOSFET, thyristor or IGBT), the safe operating area (SOA) is defined as the voltage and current conditions over which the device can be expected to operate without self-damage. SOA is usually presented in transistor datasheets as a graph with VCE (collector-emitter voltage) on the abscissa and ICE (collector-emitter current) on the ordinate; the safe 'area' referring to the area under the curve. The SOA specification combines the various limitations of the device — maximum voltage, current, power, junction temperature, secondary breakdown — into one curve, allowing simplified design of protection circuitry. Often, in addition to the continuous rating, separate SOA curves are also plotted for short duration pulse conditions (1 ms pulse, 10 ms pulse, etc.). The safe operating area curve is a graphical representation of the power handling capability of the device under various conditions. The SOA curve takes into account the wire bond current carrying capability, transistor junction temperature, internal power dissipation and secondary breakdown limitations. Limits of the safe operating area Where both current and voltage are plotted on logarithmic scales, the borders of the SOA are straight lines: IC = ICmax — current limit VCE = VCEmax — voltage limit IC VCE = Pmax — dissipation limit, thermal breakdown IC VCEα = const — this is the limit given by the secondary breakdown (bipolar junction transistors only) SOA specifications are useful to the design engineer working on power circuits such as amplifiers and power supplies as they allow quick assessment of the limits of device performance, the design of appropriate protection circuitry, or selection of a more capable device. SOA curves are also important in the design of foldback circuits. Secondary breakdown For a device that makes use of the secondary breakdown effect see Avalanche transistor Secondary breakdown is a failure mode in bipolar power transistors.
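The straight-line borders on a log-log SOA plot translate directly into simple inequality checks. The sketch below tests operating points against the continuous-rating limits of a hypothetical transistor, including a secondary-breakdown line of the form Ic·Vce^α = const; all numbers are invented for illustration and are not taken from any datasheet.

# Hypothetical continuous-rating SOA limits for a bipolar power transistor.
I_MAX = 10.0          # A,  wire-bond / current limit
V_MAX = 100.0         # V,  collector-emitter voltage limit
P_MAX = 50.0          # W,  dissipation limit (Ic * Vce <= P_MAX)
SB_CONST = 120.0      # secondary breakdown: Ic * Vce**SB_ALPHA <= SB_CONST
SB_ALPHA = 1.5

def soa_violations(ic, vce):
    """Return the list of violated limits (an empty list means the point is safe)."""
    violations = []
    if ic > I_MAX:
        violations.append("current limit")
    if vce > V_MAX:
        violations.append("voltage limit")
    if ic * vce > P_MAX:
        violations.append("dissipation limit")
    if ic * vce ** SB_ALPHA > SB_CONST:
        violations.append("secondary breakdown limit")
    return violations

for ic, vce in [(2.0, 10.0), (1.0, 60.0), (8.0, 9.0)]:
    problems = soa_violations(ic, vce)
    print(f"Ic={ic} A, Vce={vce} V ->", "safe" if not problems else ", ".join(problems))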
https://en.wikipedia.org/wiki/Data-driven%20control%20system
Data-driven control systems are a broad family of control systems, in which the identification of the process model and/or the design of the controller are based entirely on experimental data collected from the plant. In many control applications, trying to write a mathematical model of the plant is considered a hard task, requiring efforts and time to the process and control engineers. This problem is overcome by data-driven methods, which fit a system model to the experimental data collected, choosing it in a specific models class. The control engineer can then exploit this model to design a proper controller for the system. However, it is still difficult to find a simple yet reliable model for a physical system, that includes only those dynamics of the system that are of interest for the control specifications. The direct data-driven methods allow to tune a controller, belonging to a given class, without the need of an identified model of the system. In this way, one can also simply weight process dynamics of interest inside the control cost function, and exclude those dynamics that are out of interest. Overview The standard approach to control systems design is organized in two-steps: Model identification aims at estimating a nominal model of the system , where is the unit-delay operator (for discrete-time transfer functions representation) and is the vector of parameters of identified on a set of data. Then, validation consists in constructing the uncertainty set that contains the true system at a certain probability level. Controller design aims at finding a controller achieving closed-loop stability and meeting the required performance with . Typical objectives of system identification are to have as close as possible to , and to have as small as possible. However, from an identification for control perspective, what really matters is the performance achieved by the controller, not the intrinsic quality of the model. One way to deal with unce
https://en.wikipedia.org/wiki/Assimilation%20%28biology%29
Assimilation is the process of absorption of vitamins, minerals, and other chemicals from food as part of the nutrition of an organism. In humans, this is always done with a chemical breakdown (enzymes and acids) and physical breakdown (oral mastication and stomach churning). The second process, bio assimilation, is the chemical alteration of substances in the bloodstream by the liver or cellular secretions. Although a few similar compounds can be absorbed in digestion, the bioavailability of many compounds is dictated by this second process, since both the liver and cellular secretions can be very specific in their metabolic action (see chirality). This second process is where the absorbed food reaches the cells via the liver. Most foods are composed of largely indigestible components, depending on the enzymes and effectiveness of an animal's digestive tract. The most well-known of these indigestible compounds is cellulose, the basic chemical polymer in the makeup of plant cell walls. Most animals, however, do not produce cellulase, the enzyme needed to digest cellulose. However, some animal species have developed symbiotic relationships with cellulase-producing bacteria (see termites and metamonads). This allows termites to use the energy-dense cellulose carbohydrate. Other such enzymes are known to significantly improve bio-assimilation of nutrients. Because of the use of bacterial derivatives, enzymatic dietary supplements now contain such enzymes as amylase, glucoamylase, protease, invertase, peptidase, lipase, lactase, phytase, and cellulase. Examples of biological assimilation Photosynthesis, a process whereby carbon dioxide and water are transformed into a number of organic molecules in plant cells. Nitrogen fixation from the soil into organic molecules by symbiotic bacteria which live in the roots of certain plants, such as Leguminosae. Magnesium supplements orotate, oxide, sulfate, citrate, and glycerate are all structurally similar. However, oxide and sulfate are not water-soluble
https://en.wikipedia.org/wiki/Lorentz%20scalar
In a relativistic theory of physics, a Lorentz scalar is an expression, formed from items of the theory, which evaluates to a scalar, invariant under any Lorentz transformation. A Lorentz scalar may be generated from e.g., the scalar product of vectors, or from contracting tensors of the theory. While the components of vectors and tensors are in general altered under Lorentz transformations, Lorentz scalars remain unchanged. A Lorentz scalar is not always immediately seen to be an invariant scalar in the mathematical sense, but the resulting scalar value is invariant under any basis transformation applied to the vector space, on which the considered theory is based. A simple Lorentz scalar in Minkowski spacetime is the spacetime distance ("length" of their difference) of two fixed events in spacetime. While the "position"-4-vectors of the events change between different inertial frames, their spacetime distance remains invariant under the corresponding Lorentz transformation. Other examples of Lorentz scalars are the "length" of 4-velocities (see below), or the Ricci curvature in a point in spacetime from General relativity, which is a contraction of the Riemann curvature tensor there. Simple scalars in special relativity The length of a position vector In special relativity the location of a particle in 4-dimensional spacetime is given by where is the position in 3-dimensional space of the particle, is the velocity in 3-dimensional space and is the speed of light. The "length" of the vector is a Lorentz scalar and is given by where is the proper time as measured by a clock in the rest frame of the particle and the Minkowski metric is given by This is a time-like metric. Often the alternate signature of the Minkowski metric is used in which the signs of the ones are reversed. This is a space-like metric. In the Minkowski metric the space-like interval is defined as We use the space-like Minkowski metric in the rest of this article. The length of a
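To make the invariance concrete, the short Python/NumPy sketch below uses the space-like (−,+,+,+) convention mentioned above and arbitrary example numbers: it boosts an event 4-vector along x and checks that its Minkowski "length" squared is unchanged.

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])     # Minkowski metric, space-like (-,+,+,+) convention

def minkowski_length_sq(x):
    """Lorentz scalar x^mu eta_{mu nu} x^nu."""
    return x @ eta @ x

def boost_x(beta):
    """Lorentz boost along the x-axis with velocity beta = v/c."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = gamma
    L[0, 1] = L[1, 0] = -gamma * beta
    return L

x = np.array([5.0, 1.0, 2.0, 3.0])        # arbitrary event (ct, x, y, z)
x_boosted = boost_x(0.6) @ x
print(minkowski_length_sq(x))             # -11.0
print(minkowski_length_sq(x_boosted))     # -11.0 (same Lorentz scalar, up to rounding)
```

The components of the 4-vector change under the boost, but the contracted quantity does not, which is exactly what makes it a Lorentz scalar.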
https://en.wikipedia.org/wiki/Modern%20Arabic%20mathematical%20notation
Modern Arabic mathematical notation is a mathematical notation based on the Arabic script, used especially at pre-university levels of education. Its form is mostly derived from Western notation, but has some notable features that set it apart from its Western counterpart. The most remarkable of those features is the fact that it is written from right to left following the normal direction of the Arabic script. Other differences include the replacement of Greek and Latin alphabet letters used as symbols with Arabic letters and the use of Arabic names for functions and relations. Features It is written from right to left following the normal direction of the Arabic script. Other differences include the replacement of Latin alphabet letters used as symbols with Arabic letters and the use of Arabic names for functions and relations. The notation exhibits one of the very few remaining vestiges of non-dotted Arabic scripts, as dots over and under letters (i'jam) are usually omitted. Letter cursivity (connectedness) of Arabic is also taken advantage of, in a few cases, to define variables using more than one letter. The most widespread example of this kind of usage is the canonical symbol for the radius of a circle (), which is written using the two letters nūn and qāf. When variable names are juxtaposed (as when expressing multiplication) they are written non-cursively. Variations Notation differs slightly from one region to another. In tertiary education, most regions use Western notation. The notation mainly differs in the numeral system used and in the mathematical symbols used. Numeral systems There are three numeral systems used in right-to-left mathematical notation. "Western Arabic numerals" (sometimes called European) are used in western Arabic regions (e.g. Morocco) "Eastern Arabic numerals" are used in middle and eastern Arabic regions (e.g. Egypt and Syria) "Eastern Arabic-Indic numerals" are used in Persian and Urdu speaking regions (e.g. Iran, Pakistan, India)
https://en.wikipedia.org/wiki/Functional%20testing%20%28manufacturing%29
In manufacturing, functional testing (FCT) is performed during the last phase of the production line. It is often referred to as a final quality control test, carried out to ensure that the product meets its specifications. Functional testing entails the emulation or simulation of the environment in which a product is expected to operate, in order to check for, and correct, any issues with functionality. The environment involved in FCTs consists of any device that communicates with the DUT (device under test), the power supply of said DUT, and any loads needed to make the DUT function correctly. Functional tests are performed in an automatic fashion by production line operators using test software. In order for this to be completed, the software will communicate with any external programmable instruments such as I/O boards, digital multimeters, and communication ports. In conjunction with the test fixture, the software that interfaces with the DUT is what makes it possible for an FCT to be performed. Typical vendors Agilent Technologies Acculogic Keysight Circuit Check National Instruments Teradyne Flex (company) 6TL engineering See also Acceptance testing
https://en.wikipedia.org/wiki/Gold%E2%80%93aluminium%20intermetallic
A gold–aluminium intermetallic is an intermetallic compound of gold and aluminium that occurs at contacts between the two metals. These intermetallics have different properties from the individual metals, which can cause problems in wire bonding in microelectronics. The main compounds formed are Au5Al2 (white plague) and AuAl2 (purple plague), which both form at high temperatures. White plague is the name of the compound Au5Al2 as well as the problem it causes. It has low electrical conductivity, so its formation at the joint leads to an increase of electrical resistance which can lead to total failure. Purple plague (sometimes known as purple death or Roberts-Austen's purple gold) is a brittle, bright-purple compound, AuAl2, or about 78.5% Au and 21.5% Al by mass. AuAl2 is the most stable thermally of the Au–Al intermetallic compounds, with a melting point of 1060°C (see phase diagram), similar to that of pure gold. The process of the growth of the intermetallic layers causes reduction in volume, and hence creates cavities in the metal near the interface between gold and aluminium. Other gold–aluminium intermetallics can cause problems as well. Below 624°C, purple plague is replaced by Au2Al, a tan-colored substance. It is a poor conductor and can cause electrical failure of the joint that can lead to mechanical failure. At lower temperatures, about 400–450°C, an interdiffusion process takes place at the junction. This leads to formation of layers of several intermetallic compounds with different compositions, from gold-rich to aluminium-rich, with different growth rates. Cavities form as the denser, faster-growing layers consume the slower-growing ones. This process, known as Kirkendall voiding, leads to both increased electrical resistance and mechanical weakening of the wire bond. When the voids are collected along the diffusion front, a process aided by contaminants present in the lattice, it is known as Horsting voiding, a process similar to and often con
https://en.wikipedia.org/wiki/Scientific%20notation
Scientific notation is a way of expressing numbers that are too large or too small to be conveniently written in decimal form, since to do so would require writing out an inconveniently long string of digits. It may be referred to as scientific form or standard index form, or standard form in the United Kingdom. This base ten notation is commonly used by scientists, mathematicians, and engineers, in part because it can simplify certain arithmetic operations. On scientific calculators it is usually known as "SCI" display mode. In scientific notation, nonzero numbers are written in the form m × 10ⁿ, or m times ten raised to the power of n, where n is an integer, and the coefficient m is a nonzero real number (usually between 1 and 10 in absolute value, and nearly always written as a terminating decimal). The integer n is called the exponent and the real number m is called the significand or mantissa. The term "mantissa" can be ambiguous where logarithms are involved, because it is also the traditional name of the fractional part of the common logarithm. If the number is negative then a minus sign precedes m, as in ordinary decimal notation. In normalized notation, the exponent is chosen so that the absolute value (modulus) of the significand m is at least 1 but less than 10. Decimal floating point is a computer arithmetic system closely related to scientific notation. History Normalized notation Any real number can be written in the form m × 10ⁿ in many ways: for example, 350 can be written as 3.5 × 10² or 35 × 10¹ or 350 × 10⁰. In normalized scientific notation (called "standard form" in the United Kingdom), the exponent n is chosen so that the absolute value of m remains at least one but less than ten (1 ≤ |m| < 10). Thus 350 is written as 3.5 × 10². This form allows easy comparison of numbers: numbers with bigger exponents are (due to the normalization) larger than those with smaller exponents, and subtraction of exponents gives an estimate of the number of orders of magnitude separating the numbers. It i
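As an illustration of the normalization rule (choosing the exponent so that 1 ≤ |m| < 10), here is a small Python sketch; it is illustrative only, not a reference implementation.

```python
import math

def to_normalized(x):
    """Return (m, n) with x = m * 10**n and 1 <= |m| < 10 (x must be nonzero)."""
    if x == 0:
        raise ValueError("0 has no normalized scientific notation")
    n = math.floor(math.log10(abs(x)))
    m = x / 10**n
    # Note: floating-point rounding can nudge m slightly outside [1, 10)
    # for inputs extremely close to a power of ten.
    return m, n

for value in (350, 0.00042, -12345.6):
    m, n = to_normalized(value)
    print(f"{value} = {m} x 10^{n}")
```

Running it prints, for example, 350 = 3.5 x 10^2, matching the normalized form given above.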
https://en.wikipedia.org/wiki/Stepping%20level
In integrated circuits, the stepping level or revision level is a version number that refers to the introduction or revision of one or more photolithographic photomasks within the set of photomasks that is used to pattern an integrated circuit. The term originated from the name of the equipment ("steppers") that exposes the photoresist to light. Integrated circuits have two primary classes of mask sets: firstly, "base" layers that are used to build the structures, such as transistors, that comprise circuit logic and, secondly, "metal" layers that connect the circuit logic. Typically, when an integrated circuit manufacturer such as Intel or AMD produces a new stepping (i.e. a revision to the masks), it is because it has found bugs in the logic, has made improvements to the design that permit faster processing, has found a way to increase yield or improve the "bin splits" (i.e. create faster transistors and thus faster CPUs), has improved maneuverability to more easily identify marginal circuits, or has reduced the circuit testing time, which can in turn reduce the cost of testing. Many integrated circuits allow interrogation to reveal information about their features, including stepping level. For example, executing CPUID instruction with the EAX register set to '1' on x86 CPUs will result in values being placed in other registers that show the CPU's stepping level. Stepping identifiers commonly comprise a letter followed by a number, for example B2. Usually, the letter indicates the revision level of a CPU's base layers and the number indicates the revision level of the metal layers. A change of letter indicates a change to both the base layer mask revision and metal layers whereas a change in the number indicates a change in the metal layer mask revision only. This is analogous to the major/minor revision numbers in software versioning. Base layer revision changes are time consuming and more expensive for the manufacturer, but some fixes are difficult or imposs
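On an x86 Linux machine, the stepping value reported by CPUID is also exposed by the kernel in /proc/cpuinfo, so a quick way to inspect it without writing assembly is to parse that file. This is a Linux-specific sketch, not a substitute for executing the CPUID instruction directly.

```python
# Linux/x86-specific sketch: read the CPU stepping value via /proc/cpuinfo.
def read_stepping(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("stepping"):
                # the line looks like "stepping\t: 10"
                return int(line.split(":")[1])
    return None   # field absent (e.g. non-x86 architectures)

if __name__ == "__main__":
    print("CPU stepping:", read_stepping())
```

The number printed is the decimal stepping field; mapping it back to a vendor designation such as "B2" requires the manufacturer's documentation for that particular CPU family.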
https://en.wikipedia.org/wiki/Relative%20rate%20test
The relative rate test is a genetic comparative test between two ingroups (somewhat closely related species) and an outgroup or “reference species” to compare mutation and evolutionary rates between the species. Each ingroup species is compared independently to the outgroup to determine how closely related the two species are without knowing the exact time of divergence from their closest common ancestor. If more change has occurred on one lineage relative to another lineage since their shared common ancestor, then the outgroup species will be more different from the faster-evolving lineage's species than it is from the slower-evolving lineage's species. This is because the faster-evolving lineage will, by definition, have accumulated more differences since the common ancestor than the slower-evolving lineage. This method can be applied to averaged data (i.e., groups of molecules), or individual molecules. It is possible for individual molecules to show evidence of approximately constant rates of change in different lineages even while the rates differ between different molecules. The relative rate test is a direct internal test of the molecular clock, for a given molecule and a given set of species, and shows that the molecular clock does not need to be (and should never be) assumed: It can be directly assessed from the data itself. Note that the logic can also be applied to any kind of data for which a distance measure can be defined (e.g., even morphological features). Uses The initial use of this method was to assess whether or not there was evidence for different rates of molecular change in different lineages for particular molecules. If there was no evidence of significantly different rates, this would be direct evidence of a molecular clock, and (only) then would allow for a phylogeny to be constructed based on relative branch points (absolute dates for branch points in the phylogeny would require further calibration with the best-attested fossil evidence
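The core comparison can be sketched in a few lines: given aligned sequences for the two ingroups A and B and the outgroup O, count the pairwise differences d(A,O) and d(B,O); a markedly larger count on one side suggests a faster-evolving lineage. The toy sequences below are invented, and a real analysis would use proper distance corrections and a significance test, so this is only a sketch of the idea.

```python
def differences(seq1, seq2):
    """Count positions at which two aligned sequences differ."""
    assert len(seq1) == len(seq2), "sequences must be aligned to equal length"
    return sum(a != b for a, b in zip(seq1, seq2))

# Invented toy alignment: ingroups A and B, outgroup O
A = "ACGTACGTACGTACGT"
B = "ACGTACCTACGAACGT"
O = "ACGTACGTACGTACGA"

d_AO = differences(A, O)
d_BO = differences(B, O)
print("d(A,O) =", d_AO, " d(B,O) =", d_BO)
print("A's lineage appears faster" if d_AO > d_BO else
      "B's lineage appears faster" if d_BO > d_AO else
      "no rate difference detected")
```

Under a strict molecular clock the two counts should be statistically indistinguishable, which is why the test can be read as a direct check of the clock assumption.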
https://en.wikipedia.org/wiki/Almost%20everywhere
In measure theory (a branch of mathematical analysis), a property holds almost everywhere if, in a technical sense, the set for which the property holds takes up nearly all possibilities. The notion of "almost everywhere" is a companion notion to the concept of measure zero, and is analogous to the notion of almost surely in probability theory. More specifically, a property holds almost everywhere if it holds for all elements in a set except a subset of measure zero, or equivalently, if the set of elements for which the property holds is conull. In cases where the measure is not complete, it is sufficient that the set be contained within a set of measure zero. When discussing sets of real numbers, the Lebesgue measure is usually assumed unless otherwise stated. The term almost everywhere is abbreviated a.e.; in older literature p.p. is used, to stand for the equivalent French language phrase presque partout. A set with full measure is one whose complement is of measure zero. In probability theory, the terms almost surely, almost certain and almost always refer to events with probability 1 not necessarily including all of the outcomes. These are exactly the sets of full measure in a probability space. Occasionally, instead of saying that a property holds almost everywhere, it is said that the property holds for almost all elements (though the term almost all can also have other meanings). Definition If is a measure space, a property is said to hold almost everywhere in if there exists a set with , and all have the property . Another common way of expressing the same thing is to say that "almost every point satisfies ", or that "for almost every , holds". It is not required that the set has measure 0; it may not belong to . By the above definition, it is sufficient that be contained in some set that is measurable and has measure 0. Properties If property holds almost everywhere and implies property , then property holds almost everywhere. This
https://en.wikipedia.org/wiki/Steinhaus%E2%80%93Moser%20notation
In mathematics, Steinhaus–Moser notation is a notation for expressing certain large numbers. It is an extension (devised by Leo Moser) of Hugo Steinhaus's polygon notation. Definitions a number n in a triangle means n^n. a number n in a square is equivalent to "the number n inside n triangles, which are all nested." a number n in a pentagon is equivalent to "the number n inside n squares, which are all nested." etc.: n written in an (m + 1)-sided polygon is equivalent to "the number n inside n nested m-sided polygons". In a series of nested polygons, they are associated inward. The number n inside two triangles is equivalent to n^n inside one triangle, which is equivalent to n^n raised to the power of n^n. Steinhaus defined only the triangle, the square, and the circle, which is equivalent to the pentagon defined above. Special values Steinhaus defined: mega is the number equivalent to 2 in a circle: ② megiston is the number equivalent to 10 in a circle: ⑩ Moser's number is the number represented by "2 in a megagon". Megagon is here the name of a polygon with "mega" sides (not to be confused with the polygon with one million sides). Alternative notations: use the functions square(x) and triangle(x) let M(n, m, p) be the number represented by the number n in m nested p-sided polygons; then the rules are: M(n, 1, 3) = n^n; M(n, 1, p + 1) = M(n, n, p); M(n, m + 1, p) = M(M(n, 1, p), m, p); and mega = M(2,1,5), megiston = M(10,1,5), moser = M(2,1,mega). Mega A mega, ②, is already a very large number, since ② = square(square(2)) = square(triangle(triangle(2))) = square(triangle(2^2)) = square(triangle(4)) = square(4^4) = square(256) = triangle(triangle(triangle(...triangle(256)...))) [256 triangles] = triangle(triangle(triangle(...triangle(256^256)...))) [255 triangles] ~ triangle(triangle(triangle(...triangle(3.2317 × 10^616)...))) [255 triangles] ... Using the other notation: mega = M(2,1,5) = M(256,256,3) With the function f(x) = x^x we have mega = f^256(256), where the superscript denotes a functional power, not a numerical power. We have (note the convention that powers are evaluated from right to left): M(25
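The recursion in the M(n, m, p) notation can be written down directly, but only the very smallest cases are computable, since even mega = M(2,1,5) = M(256,256,3) is far beyond any machine. The Python sketch below, assuming the standard rules quoted above, evaluates triangle(2), triangle(4) and square(2).

```python
import sys
sys.setrecursionlimit(10000)

def M(n, m, p):
    """Steinhaus-Moser polygon notation: n inside m nested p-sided polygons.
    Only tiny arguments terminate in practice; mega = M(2, 1, 5) is hopeless."""
    if p == 3 and m == 1:
        return n ** n                  # a number in a triangle: n^n
    if m == 1:
        return M(n, n, p - 1)          # one p-gon = n nested (p-1)-gons
    return M(M(n, 1, p), m - 1, p)     # peel off one polygon at a time

print(M(2, 1, 3))   # triangle(2) = 2^2 = 4
print(M(4, 1, 3))   # triangle(4) = 4^4 = 256
print(M(2, 1, 4))   # square(2) = triangle(triangle(2)) = 256
```

The next step, M(2, 1, 5), already expands to 256 nested triangles applied to 256, which is why these values are quoted symbolically rather than computed.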
https://en.wikipedia.org/wiki/UniPro
UniPro (or Unified Protocol) is a high-speed interface technology for interconnecting integrated circuits in mobile and mobile-influenced electronics. The various versions of the UniPro protocol are created within the MIPI Alliance (Mobile Industry Processor Interface Alliance), an organization that defines specifications targeting mobile and mobile-influenced applications. The UniPro technology and associated physical layers aim to provide high-speed data communication (gigabits/second), low-power operation (low swing signaling, standby modes), low pin count (serial signaling, multiplexing), small silicon area (small packet sizes), data reliability (differential signaling, error recovery) and robustness (proven networking concepts, including congestion management). UniPro version 1.6 concentrates on enabling high-speed point to point communication between chips in mobile electronics. UniPro has provisions for supporting networks consisting of up to 128 UniPro devices (integrated circuit, modules, etc.). Network features are planned in future UniPro releases. In such a networked environment, pairs of UniPro devices are interconnected via so-called links while data packets are routed toward their destination by UniPro switches. These switches are analogous to the routers used in wired LAN based on gigabit Ethernet. But unlike a LAN, the UniPro technology was designed to connect chips within a mobile terminal, rather than to connect computers within a building. History and aims The initiative to develop the UniPro protocol came forth out of a pair of research projects at respectively Nokia Research Center and Philips Research. Both teams independently arrived at the conclusion that the complexity of mobile systems could be reduced by splitting the system design into well-defined functional modules interconnected by a network. The key assumptions were thus that the networking paradigm gave modules well-structured, layered interfaces and that it was time to improve
https://en.wikipedia.org/wiki/Jungle%20chip
A jungle chip, or jungle IC, is an integrated circuit (IC or "chip") found in most analog televisions of the 1990s. It takes a composite video signal from the radio frequency receiver electronics and turns it into separate RGB outputs that can be sent to the cathode ray tube to produce a display. This task had previously required separate analog circuits. Advanced versions generally had a second set of inputs in RGB format that were used to overlay on-screen display imagery. These would be connected to a microcontroller that would handle operations like tuning, sleep mode and running the remote control. A separate input called "blanking" switched the jungle outputs between the two inputs on the fly. This was normally triggered at a fixed location on the screen, creating rectangular areas with the digital data overlaying the television signal. This was used for on-screen channel displays, closed captioning support, and similar duties. The internal RGB inputs have led to such televisions having a revival in the retrocomputing market. By running connectors from the RGB pins on the jungle chip to connectors added by the user, typically RCA jacks on the back of the television case, and then turning on the blanking switch permanently, the system is converted to an RGB monitor. Since early computers output signals with television timings, NTSC or PAL, using a jungle chip television avoids the need to provide separate timing signals. This contrasts with multisync monitors or similar designs that do not have any "built-in" timing and have separate inputs for these signals. Examples of jungle chips include the Motorola MC65585, Phillips RDA6361 and Sony CXA1870.
https://en.wikipedia.org/wiki/Flotation%20of%20flexible%20objects
Flotation of flexible objects is a phenomenon in which the bending of a flexible material allows an object to displace a greater amount of fluid than if it were completely rigid. This ability to displace more fluid translates directly into an ability to support greater loads, giving the flexible structure an advantage over a similarly rigid one. Inspiration to study the effects of elasticity is taken from nature, where plants, such as black pepper, and animals living at the water surface have evolved to take advantage of the load-bearing benefits elasticity imparts. History In his work "On Floating Bodies", Archimedes famously stated: While this basic idea carried enormous weight and has come to form the basis of understanding why objects float, it is best applied for objects with a characteristic length scale greater than the capillary length. What Archimedes had failed to predict was the influence of surface tension and its impact at small length scales. More recent works, such as that of Keller, have extended these principles by considering the role of surface tension forces on partially submerged bodies. Keller, for instance, demonstrated analytically that the weight of water displaced by a meniscus is equal to the vertical component of the surface tension force. Nonetheless, the role of flexibility and its impact on an object's load-bearing potential is one that did not receive attention until the mid-2000s and onward. In an initial study, Vella studied the load supported by a raft composed of thin, rigid strips. Specifically, he compared the case of floating individual strips to floating an aggregation of strips, wherein the aggregate structure causes portions of the meniscus (and hence, the resulting surface tension force) to disappear. By extending his analysis to consider a similar system composed of thin strips of some finite bending stiffness, he found that this latter case was in fact able to support a greater load. A well known work in the area of surface t
https://en.wikipedia.org/wiki/Innumeracy%20%28book%29
Innumeracy: Mathematical Illiteracy and its Consequences is a 1988 book by mathematician John Allen Paulos about innumeracy (deficiency of numeracy) as the mathematical equivalent of illiteracy: incompetence with numbers rather than words. Innumeracy is a problem with many otherwise educated and knowledgeable people. While many people would be ashamed to admit they are illiterate, there is very little shame in admitting innumeracy by saying things like "I'm a people person, not a numbers person", or "I always hated math", but Paulos challenges whether that widespread cultural excusing of innumeracy is truly worthy of acceptability. Paulos speaks mainly of the common misconceptions about, and inability to deal comfortably with, numbers, and the logic and meaning that they represent. He looks at real-world examples in stock scams, psychics, astrology, sports records, elections, sex discrimination, UFOs, insurance and law, lotteries, and drug testing. Paulos discusses innumeracy with quirky anecdotes, scenarios, and facts, encouraging readers in the end to look at their world in a more quantitative way. The book sheds light on the link between innumeracy and pseudoscience. For example, the fortune telling psychic's few correct and general observations are remembered over the many incorrect guesses. He also stresses the problem between the actual number of occurrences of various risks and popular perceptions of those risks happening. The problems of innumeracy come at a great cost to society. Topics include probability and coincidence, innumeracy in pseudoscience, statistics, and trade-offs in society. For example, the danger of getting killed in a car accident is much greater than terrorism and this danger should be reflected in how we allocate our limited resources. Background John Allen Paulos (born July 4, 1945) is an American professor of mathematics at Temple University in Pennsylvania. He is a writer and speaker on mathematics and the importance of mathematic
https://en.wikipedia.org/wiki/Reciprocal%20Fibonacci%20constant
The reciprocal Fibonacci constant, or ψ, is defined as the sum of the reciprocals of the Fibonacci numbers: The ratio of successive terms in this sum tends to the reciprocal of the golden ratio. Since this is less than 1, the ratio test shows that the sum converges. The value of ψ is known to be approximately 3.359886. Gosper describes an algorithm for fast numerical approximation of its value. The reciprocal Fibonacci series itself provides O(k) digits of accuracy for k terms of expansion, while Gosper's accelerated series provides O(k²) digits. ψ is known to be irrational; this property was conjectured by Paul Erdős, Ronald Graham, and Leonard Carlitz, and proved in 1989 by Richard André-Jeannin. The continued fraction representation of the constant is: . See also List of sums of reciprocals
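A few dozen terms of the plain (unaccelerated) series already pin down the constant to many decimal places, as this short Python sketch shows; it sums the reciprocals exactly with rational arithmetic before converting to a float.

```python
from fractions import Fraction

def reciprocal_fibonacci(terms=100):
    """Partial sum 1/F(1) + 1/F(2) + ... + 1/F(terms) as an exact fraction."""
    a, b = 1, 1            # F(1), F(2)
    total = Fraction(0)
    for _ in range(terms):
        total += Fraction(1, a)
        a, b = b, a + b
    return total

print(float(reciprocal_fibonacci(100)))   # approximately 3.359885666...
```

Because the tail of the series shrinks geometrically (roughly like the reciprocal golden ratio per term), 100 terms are far more than enough for double precision.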
https://en.wikipedia.org/wiki/List%20of%20mathematical%20identities
This article lists mathematical identities, that is, identically true relations holding in mathematics. Bézout's identity (despite its usual name, it is not, properly speaking, an identity) Binomial inverse theorem Binomial identity Brahmagupta–Fibonacci two-square identity Candido's identity Cassini and Catalan identities Degen's eight-square identity Difference of two squares Euler's four-square identity Euler's identity Fibonacci's identity see Brahmagupta–Fibonacci identity or Cassini and Catalan identities Heine's identity Hermite's identity Lagrange's identity Lagrange's trigonometric identities MacWilliams identity Matrix determinant lemma Newton's identity Parseval's identity Pfister's sixteen-square identity Sherman–Morrison formula Sophie Germain identity Sun's curious identity Sylvester's determinant identity Vandermonde's identity Woodbury matrix identity Identities for classes of functions Exterior calculus identities Fibonacci identities: Combinatorial Fibonacci identities and Other Fibonacci identities Hypergeometric function identities List of integrals of logarithmic functions List of topics related to List of trigonometric identities Inverse trigonometric functions Logarithmic identities Summation identities Vector calculus identities See also External links A Collection of Algebraic Identities Matrix Identities Identities
https://en.wikipedia.org/wiki/Language-based%20security
In computer science, language-based security (LBS) is a set of techniques that may be used to strengthen the security of applications on a high level by using the properties of programming languages. LBS is considered to enforce computer security on an application level, making it possible to prevent vulnerabilities which traditional operating system security is unable to handle. Software applications are typically specified and implemented in certain programming languages, and in order to protect against attacks, flaws and bugs that an application's source code might be vulnerable to, there is a need for application-level security; that is, security evaluating the application's behavior with respect to the programming language. This area is generally known as language-based security. Motivation The use of large software systems, such as SCADA, is taking place all around the world and computer systems constitute the core of many infrastructures. Society relies greatly on infrastructure such as water, energy, communication and transportation, which in turn all rely on fully functioning computer systems. There are several well-known examples of critical systems failing due to bugs or errors in software, such as when a shortage of computer memory caused LAX computers to crash and hundreds of flights to be delayed (April 30, 2014). Traditionally, the mechanisms used to control the correct behavior of software are implemented at the operating system level. The operating system handles several possible security violations such as memory access violations, stack overflow violations, access control violations, and many others. This is a crucial part of security in computer systems; however, by securing the behavior of software on a more specific level, even stronger security can be achieved. Since many properties and behaviors of the software are lost in compilation, it is significantly more difficult to detect vulnerabilities in machine code. By evaluating the source code
https://en.wikipedia.org/wiki/QuRiNet
The Quail Ridge Wireless Mesh Network project is an effort to provide a wireless communications infrastructure to the Quail Ridge Reserve, a wildlife reserve in California in the United States. The network is intended to benefit on-site ecological research and provide a wireless mesh network tested for development and analysis. The project is a collaboration between the University of California Natural Reserve System and the Networks Lab at the Department of Computer Science, UC Davis. Project The large-scale wireless mesh network would consist of various sensor networks gathering temperature, visual, and acoustic data at certain locations. This information would then be stored at the field station or relayed further over Ethernet. The backbone nodes would also serve as access points enabling wireless access at their locations. The Quail Ridge Reserve would also be used for further research into wireless mesh networks. External links qurinet.cs.ucdavis.edu spirit.cs.ucdavis.edu nrs.ucdavis.edu/quail.html nrs.ucop.edu Computer networking
https://en.wikipedia.org/wiki/Level%20%28logarithmic%20quantity%29
In science and engineering, a power level and a field level (also called a root-power level) are logarithmic magnitudes of certain quantities referenced to a standard reference value of the same type. A power level is a logarithmic quantity used to measure power, power density or sometimes energy, with commonly used unit decibel (dB). A field level (or root-power level) is a logarithmic quantity used to measure quantities of which the square is typically proportional to power (for instance, the square of voltage is proportional to power by the inverse of the conductor's resistance), etc., with commonly used units neper (Np) or decibel (dB). The type of level and choice of units indicate the scaling of the logarithm of the ratio between the quantity and its reference value, though a logarithm may be considered to be a dimensionless quantity. The reference values for each type of quantity are often specified by international standards. Power and field levels are used in electronic engineering, telecommunications, acoustics and related disciplines. Power levels are used for signal power, noise power, sound power, sound exposure, etc. Field levels are used for voltage, current, sound pressure. Power level Level of a power quantity, denoted LP, is defined by where P is the power quantity; P0 is the reference value of P. Field (or root-power) level The level of a root-power quantity (also known as a field quantity), denoted LF, is defined by where F is the root-power quantity, proportional to the square root of power quantity; F0 is the reference value of F. If the power quantity P is proportional to F2, and if the reference value of the power quantity, P0, is in the same proportion to F02, the levels LF and LP are equal. The neper, bel, and decibel (one tenth of a bel) are units of level that are often applied to such quantities as power, intensity, or gain. The neper, bel, and decibel are related by ; . Standards Level and its units are define
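In practice the decibel forms are the ones most often needed: LP = 10·log10(P/P0) dB for a power quantity and LF = 20·log10(F/F0) dB for a root-power (field) quantity. The Python sketch below computes both; the reference values used (1 mW for power, as in dBm, and 1 V for voltage, as in dBV) are common conventions chosen here only as examples.

```python
import math

def power_level_db(p, p_ref):
    """Level of a power quantity in decibels: 10*log10(P/P0)."""
    return 10.0 * math.log10(p / p_ref)

def field_level_db(f, f_ref):
    """Level of a root-power (field) quantity in decibels: 20*log10(F/F0)."""
    return 20.0 * math.log10(f / f_ref)

# Example reference values (conventional choices, used only for illustration):
print(power_level_db(0.5, 1e-3), "dBm")   # 0.5 W relative to 1 mW  -> about 27.0 dBm
print(field_level_db(2.0, 1.0), "dBV")    # 2 V relative to 1 V     -> about 6.02 dBV
```

The factor 20 for field quantities comes from the square relationship between a root-power quantity and power, which is why the two levels agree when the references are chosen consistently.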
https://en.wikipedia.org/wiki/List%20of%20mesons
This list is of all known and predicted scalar, pseudoscalar and vector mesons. See list of particles for a more detailed list of particles found in particle physics. This article contains a list of mesons, unstable subatomic particles composed of one quark and one antiquark. They are part of the hadron particle family—particles made of quarks. The other members of the hadron family are the baryons—subatomic particles composed of three quarks. The main difference between mesons and baryons is that mesons have integer spin (thus are bosons) while baryons are fermions (half-integer spin). Because mesons are bosons, the Pauli exclusion principle does not apply to them. Because of this, they can act as force mediating particles on short distances, and thus play a part in processes such as the nuclear interaction. Since mesons are composed of quarks, they participate in both the weak and strong interactions. Mesons with net electric charge also participate in the electromagnetic interaction. They are classified according to their quark content, total angular momentum, parity, and various other properties such as C-parity and G-parity. While no meson is stable, those of lower mass are nonetheless more stable than the most massive mesons, and are easier to observe and study in particle accelerators or in cosmic ray experiments. They are also typically less massive than baryons, meaning that they are more easily produced in experiments, and will exhibit higher-energy phenomena sooner than baryons would. For example, the charm quark was first seen in the J/Psi meson () in 1974, and the bottom quark in the upsilon meson () in 1977. The top quark (the last and heaviest quark to be discovered to date) was first observed at Fermilab in 1995. Each meson has a corresponding antiparticle (antimeson) where quarks are replaced by their corresponding antiquarks and vice versa. For example, a positive pion () is made of one up quark and one down antiquark; and its corresponding anti
https://en.wikipedia.org/wiki/Higher-order%20sinusoidal%20input%20describing%20function
Definition The higher-order sinusoidal input describing functions (HOSIDF) were first introduced by dr. ir. P.W.J.M. Nuij. The HOSIDFs are an extension of the sinusoidal input describing function which describe the response (gain and phase) of a system at harmonics of the base frequency of a sinusoidal input signal. The HOSIDFs bear an intuitive resemblance to the classical frequency response function and define the periodic output of a stable, causal, time invariant nonlinear system to a sinusoidal input signal: This output is denoted by and consists of harmonics of the input frequency: Defining the single sided spectra of the input and output as and , such that yields the definition of the k-th order HOSIDF: Advantages and applications The application and analysis of the HOSIDFs is advantageous both when a nonlinear model is already identified and when no model is known yet. In the latter case the HOSIDFs require little model assumptions and can easily be identified while requiring no advanced mathematical tools. Moreover, even when a model is already identified, the analysis of the HOSIDFs often yields significant advantages over the use of the identified nonlinear model. First of all, the HOSIDFs are intuitive in their identification and interpretation while other nonlinear model structures often yield limited direct information about the behavior of the system in practice. Furthermore, the HOSIDFs provide a natural extension of the widely used sinusoidal describing functions in case nonlinearities cannot be neglected. In practice the HOSIDFs have two distinct applications: Due to their ease of identification, HOSIDFs provide a tool to provide on-site testing during system design. Finally, the application of HOSIDFs to (nonlinear) controller design for nonlinear systems is shown to yield significant advantages over conventional time domain based tuning. Electrical engineering Control theory Signal processing
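A minimal way to see the idea is to drive a simple static nonlinearity with a sinusoid and read the gain at each harmonic off the output spectrum (the phase could be read off the same FFT bins). The cubic nonlinearity and signal parameters below are arbitrary choices for illustration, not part of the HOSIDF definition.

```python
import numpy as np

# Toy nonlinear system: y = u + 0.3*u**3 (static, chosen only for illustration)
fs, f0, N = 1000.0, 5.0, 1000          # sample rate, input frequency, number of samples
t = np.arange(N) / fs
u = np.sin(2 * np.pi * f0 * t)
y = u + 0.3 * u**3

U = np.fft.rfft(u) / (N / 2)           # single-sided spectra (amplitude-normalized)
Y = np.fft.rfft(y) / (N / 2)
freqs = np.fft.rfftfreq(N, 1 / fs)

u_amp = abs(U[np.argmin(np.abs(freqs - f0))])   # input amplitude at the base frequency
for k in (1, 2, 3):                             # response at the k-th harmonic of f0
    idx = np.argmin(np.abs(freqs - k * f0))
    print(f"harmonic {k}: gain = {abs(Y[idx]) / u_amp:.3f}")
```

For this particular cubic example the odd harmonics carry the response (gains of about 1.225 at the fundamental and 0.075 at the third harmonic, essentially zero at the second), which is the kind of per-harmonic gain a HOSIDF captures as a function of input frequency and amplitude.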
https://en.wikipedia.org/wiki/Glossary%20of%20industrial%20automation
This glossary of industrial automation is a list of definitions of terms and illustrations related specifically to the field of industrial automation. For a more general view on electric engineering, see Glossary of electrical and electronics engineering. For terms related to engineering in general, see Glossary of engineering. A See also Glossary of engineering Glossary of power electronics Glossary of civil engineering Glossary of mechanical engineering Glossary of structural engineering Notes
https://en.wikipedia.org/wiki/List%20of%20common%20physics%20notations
This is a list of common physical constants and variables, and their notations. Note that bold text indicates that the quantity is a vector. Latin characters Greek characters Other characters See also List of letters used in mathematics and science Glossary of mathematical symbols List of mathematical uses of Latin letters Greek letters used in mathematics, science, and engineering Physical constant Physical quantity International System of Units ISO 31
https://en.wikipedia.org/wiki/Cisco%20Certified%20Entry%20Networking%20Technician
The Cisco Certified Entry Networking Technician (CCENT) certification was the first stage of Cisco's certification system. The certification was retired on 24 February 2020. The CCENT certification was an interim step to Associate level or directly with CCNA and CCDA certifications. While the CCENT covered basic networking knowledge; it did not get involved with the more intricate technical aspects of the Cisco routing and switching and network design. The certification validated the skills essential for entry-level network support positions. CCENT qualified individuals have the knowledge and skill to install, manage, maintain and troubleshoot a small enterprise branch network, including network security. The CCENT curriculum covered networking fundamentals, WAN technologies, basic security, routing and switching fundamentals, and configuring simple networks. The applicable training was the Cisco ICND1 ("Interconnecting Cisco Network Devices, Part 1") and the exam was ("100-105" ICND1), costing $165 retail. The certification was valid for 3 years. The CCENT qualifying exam, ICND1 was retired on 24 February 2020. Existing CCENT holders will continue to have active and valid CCENT certification 3 years from issue date. See also CCNA Cisco CCDA certification
https://en.wikipedia.org/wiki/Biological%20imaging
Biological imaging may refer to any imaging technique used in biology. Typical examples include: Bioluminescence imaging, a technique for studying laboratory animals using luminescent protein Calcium imaging, determining the calcium status of a tissue using fluorescent light Diffuse optical imaging, using near-infrared light to generate images of the body Diffusion-weighted imaging, a type of MRI that uses water diffusion Fluorescence lifetime imaging, using the decay rate of a fluorescent sample Gallium imaging, a nuclear medicine method for the detection of infections and cancers Imaging agent, a chemical designed to allow clinicians to determine whether a mass is benign or malignant Imaging studies, which includes many medical imaging techniques Magnetic resonance imaging (MRI), a non-invasive method to render images of living tissues Magneto-acousto-electrical tomography (MAET), is an imaging modality to image the electrical conductivity of biological tissues Medical imaging, creating images of the human body or parts of it, to diagnose or examine disease Microscopy, creating images of objects or features too small to be detectable by the naked human eye Molecular imaging, used to study molecular pathways inside organisms Non-contact thermography, is the field of thermography that derives diagnostic indications from infrared images of the human body. Nuclear medicine, uses administered radioactive substances to create images of internal organs and their function. Optical imaging, using light as an investigational tool for biological research and medical diagnosis Optoacoustic imaging, using the photothermal effect, for the accuracy of spectroscopy with the depth resolution of ultrasound Photoacoustic Imaging, a technique to detect vascular disease and cancer using non-ionizing laser pulses Ultrasound imaging, using very high frequency sound to visualize muscles and internal organs
https://en.wikipedia.org/wiki/Feller%27s%20coin-tossing%20constants
Feller's coin-tossing constants are a set of numerical constants which describe asymptotic probabilities that in n independent tosses of a fair coin, no run of k consecutive heads (or, equally, tails) appears. William Feller showed that if this probability is written as p(n,k) then where αk is the smallest positive real root of and Values of the constants For the constants are related to the golden ratio, , and Fibonacci numbers; the constants are and . The exact probability p(n,2) can be calculated either by using Fibonacci numbers, p(n,2) = or by solving a direct recurrence relation leading to the same result. For higher values of , the constants are related to generalizations of Fibonacci numbers such as the tribonacci and tetranacci numbers. The corresponding exact probabilities can be calculated as p(n,k) = . Example If we toss a fair coin ten times then the exact probability that no pair of heads come up in succession (i.e. n = 10 and k = 2) is p(10,2) = = 0.140625. The approximation gives 1.44721356... × 1.23606797...^(−11) = 0.1406263...
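The exact probabilities for k = 2 are easy to reproduce with the Fibonacci relation p(n,2) = F(n+2)/2^n, which the Python sketch below checks against the quoted value p(10,2) = 0.140625; the Fibonacci indexing convention F(1) = F(2) = 1 is assumed.

```python
def fib(n):
    """Fibonacci numbers with F(1) = F(2) = 1."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

def p_no_run_of_2(n):
    """Exact probability of no two consecutive heads in n fair-coin tosses."""
    return fib(n + 2) / 2**n

print(p_no_run_of_2(10))   # 144/1024 = 0.140625
```

The count of head/tail strings of length n with no two consecutive heads is a Fibonacci number, which is where the F(n+2) in the numerator comes from; the analogous counts for larger k use the k-step (tribonacci, tetranacci, ...) generalizations mentioned above.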
https://en.wikipedia.org/wiki/Starch%20production
Starch production is an isolation of starch from plant sources. It takes place in starch plants. Starch industry is a part of food processing which is using starch as a starting material for production of starch derivatives, hydrolysates, dextrins. At first, the raw material for the preparation of the starch was wheat. Currently main starch sources are: maize (in America, China and Europe) – 70%, potatoes (in Europe) – 12%, wheat - 8% (in Europe and Australia), tapioca - 9% (South East Asia and South America), rice, sorghum and other - 1%. Potato starch production The production of potato starch comprises the steps such as delivery and unloading potatoes, cleaning, rasping of tubers, potato juice separation, starch extraction, starch milk refination, dewatering of refined starch milk and starch drying. The potato starch production supply chain varies significantly by region. For example, potato starch in Europe is produced from potatoes grown specifically for this purpose. However, in the US, potatoes are not grown for starch production and manufacturers must source raw material from food processor waste streams. The characteristics of these waste streams can vary significantly and require further processing by the US potato starch manufacturer to ensure the end-product functionality and specifications are acceptable. Delivery and unloading potatoes Potatoes are delivered to the starch plants via road or rail transport. Unloading of potatoes could be done in two ways: dry - using elevators and tippers, wet - using strong jet of water. Cleaning Coarsely cleaning of potatoes takes place during the transport of potatoes to the scrubber by channel. In addition, before the scrubber, straw and stones separators are installed. The main cleaning is conducted in scrubber (different kinds of high specialized machines are used). The remaining stones, sludge and light wastes are removed at this step. Water used for washing is then purified and recycled back into th
https://en.wikipedia.org/wiki/List%20of%20manifolds
This is a list of particular manifolds, by Wikipedia page. See also list of geometric topology topics. For categorical listings see :Category:Manifolds and its subcategories. Generic families of manifolds Euclidean space, Rn n-sphere, Sn n-torus, Tn Real projective space, RPn Complex projective space, CPn Quaternionic projective space, HPn Flag manifold Grassmann manifold Stiefel manifold Lie groups provide several interesting families. See Table of Lie groups for examples. See also: List of simple Lie groups and List of Lie group topics. Manifolds of a specific dimension 1-manifolds Circle, S1 Long line Real line, R Real projective line, RP1 ≅ S1 2-manifolds Cylinder, S1 × R Klein bottle, RP2 # RP2 Klein quartic (a genus 3 surface) Möbius strip Real projective plane, RP2 Sphere, S2 Surface of genus g Torus Double torus 3-manifolds 3-sphere, S3 3-torus, T3 Poincaré homology sphere SO(3) ≅ RP3 Solid Klein bottle Solid torus Whitehead manifold Meyerhoff manifold Weeks manifold For more examples see 3-manifold. 4-manifolds Complex projective plane Del Pezzo surface E8 manifold Enriques surface Exotic R4 Hirzebruch surface K3 surface For more examples see 4-manifold. Special types of manifolds Manifolds related to spheres Brieskorn manifold Exotic sphere Homology sphere Homotopy sphere Lens space Spherical 3-manifold Special classes of Riemannian manifolds Einstein manifold Ricci-flat manifold G2 manifold Kähler manifold Calabi–Yau manifold Hyperkähler manifold Quaternionic Kähler manifold Riemannian symmetric space Spin(7) manifold Categories of manifolds Manifolds definable by a particular choice of atlas Affine manifold Analytic manifold Complex manifold Differentiable (smooth) manifold Piecewise linear manifold Lipschitz manifold Topological manifold Manifolds with additional structure Almost complex manifold Almost symplectic manifold Calibrated manifold Complex manifold Contac
https://en.wikipedia.org/wiki/Transport%20theorem
The transport theorem (or transport equation, rate of change transport theorem or basic kinematic equation) is a vector equation that relates the time derivative of a Euclidean vector as evaluated in a non-rotating coordinate system to its time derivative in a rotating reference frame. It has important applications in classical mechanics and analytical dynamics and diverse fields of engineering. A Euclidean vector represents a certain magnitude and direction in space that is independent of the coordinate system in which it is measured. However, when taking a time derivative of such a vector one actually takes the difference between two vectors measured at two different times t and t+dt. In a rotating coordinate system, the coordinate axes can have different directions at these two times, such that even a constant vector can have a non-zero time derivative. As a consequence, the time derivative of a vector measured in a rotating coordinate system can be different from the time derivative of the same vector in a non-rotating reference system. For example, the velocity vector of an airplane as evaluated using a coordinate system that is fixed to the earth (a rotating reference system) is different from its velocity as evaluated using a coordinate system that is fixed in space. The transport theorem provides a way to relate time derivatives of vectors between a rotating and non-rotating coordinate system, it is derived and explained in more detail in rotating reference frame and can be written as: Here f is the vector of which the time derivative is evaluated in both the non-rotating, and rotating coordinate system. The subscript r designates its time derivative in the rotating coordinate system and the vector Ω is the angular velocity of the rotating coordinate system. The Transport Theorem is particularly useful for relating velocities and acceleration vectors between rotating and non-rotating coordinate systems. Reference states: "Despite of its importance in cla
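Stated in components, the theorem reads df/dt (inertial frame) = df/dt (rotating frame) + Ω × f. The Python/NumPy sketch below checks this numerically for a vector whose components are held fixed in the rotating frame, so its inertial-frame derivative should equal Ω × f; the angular velocity and the vector are arbitrary example values.

```python
import numpy as np

omega = np.array([0.0, 0.0, 2.0])     # angular velocity of the rotating frame (rad/s), example value
f_rot = np.array([1.0, 0.5, 0.0])     # vector with constant components in the rotating frame

def rotation_z(angle):
    """Rotation matrix about the z-axis (the frame here rotates about z only)."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def f_inertial(t):
    """The same vector expressed in the non-rotating frame at time t."""
    return rotation_z(omega[2] * t) @ f_rot

# Numerical time derivative in the inertial frame at t = 0.3 s (central difference)
t, dt = 0.3, 1e-6
numeric = (f_inertial(t + dt) - f_inertial(t - dt)) / (2 * dt)
transport = np.cross(omega, f_inertial(t))   # Omega x f; the rotating-frame derivative is zero here
print(numeric)
print(transport)   # the two vectors agree to numerical precision
```

If the vector also varied in the rotating frame, its rotating-frame derivative would simply be added to the cross-product term, which is the general statement of the theorem.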
https://en.wikipedia.org/wiki/Fundamental%20theorem%20of%20topos%20theory
In mathematics, The fundamental theorem of topos theory states that the slice of a topos over any one of its objects is itself a topos. Moreover, if there is a morphism in then there is a functor which preserves exponentials and the subobject classifier. The pullback functor For any morphism f in there is an associated "pullback functor" which is key in the proof of the theorem. For any other morphism g in which shares the same codomain as f, their product is the diagonal of their pullback square, and the morphism which goes from the domain of to the domain of f is opposite to g in the pullback square, so it is the pullback of g along f, which can be denoted as . Note that a topos is isomorphic to the slice over its own terminal object, i.e. , so for any object A in there is a morphism and thereby a pullback functor , which is why any slice is also a topos. For a given slice let denote an object of it, where X is an object of the base category. Then is a functor which maps: . Now apply to . This yields so this is how the pullback functor maps objects of to . Furthermore, note that any element C of the base topos is isomorphic to , therefore if then and so that is indeed a functor from the base topos to its slice . Logical interpretation Consider a pair of ground formulas and whose extensions and (where the underscore here denotes the null context) are objects of the base topos. Then implies if and only if there is a monic from to . If these are the case then, by theorem, the formula is true in the slice , because the terminal object of the slice factors through its extension . In logical terms, this could be expressed as so that slicing by the extension of would correspond to assuming as a hypothesis. Then the theorem would say that making a logical assumption does not change the rules of topos logic. See also Timeline of category theory and related mathematics Deduction Theorem
https://en.wikipedia.org/wiki/Food%20browning
Browning is the process of food turning brown due to the chemical reactions that take place within. The process of browning is one of the chemical reactions that take place in food chemistry and represents an interesting research topic regarding health, nutrition, and food technology. Though there are many different ways food chemically changes over time, browning in particular falls into two main categories: enzymatic versus non-enzymatic browning processes. Browning has many important implications on the food industry relating to nutrition, technology, and economic cost. Researchers are especially interested in studying the control (inhibition) of browning and the different methods that can be employed to maximize this inhibition and ultimately prolong the shelf life of food. Enzymatic browning Enzymatic browning is one of the most important reactions that takes place in most fruits and vegetables as well as in seafood. These processes affect the taste, color, and value of such foods. Generally, it is a chemical reaction involving polyphenol oxidase (PPO), catechol oxidase, and other enzymes that create melanins and benzoquinone from natural phenols. Enzymatic browning (also called oxidation of foods) requires exposure to oxygen. It begins with the oxidation of phenols by polyphenol oxidase into quinones, whose strong electrophilic state causes high susceptibility to a nucleophilic attack from other proteins. These quinones are then polymerized in a series of reactions, eventually resulting in the formation of brown pigments (melanosis) on the surface of the food. The rate of enzymatic browning is reflected by the amount of active polyphenol oxidases present in the food. Hence, most research into methods of preventing enzymatic browning has been directed towards inhibiting polyphenol oxidase activity. However, not all browning of food produces negative effects. Examples of beneficial enzymatic browning: Developing color and flavor in coffee, cocoa beans, a
https://en.wikipedia.org/wiki/Cache%20hierarchy
Cache hierarchy, or multi-level caches, refers to a memory architecture that uses a hierarchy of memory stores based on varying access speeds to cache data. Highly requested data is cached in high-speed access memory stores, allowing swifter access by central processing unit (CPU) cores. Cache hierarchy is a form and part of memory hierarchy and can be considered a form of tiered storage. This design was intended to allow CPU cores to process faster despite the memory latency of main memory access. Accessing main memory can act as a bottleneck for CPU core performance as the CPU waits for data, while making all of main memory high-speed may be prohibitively expensive. High-speed caches are a compromise allowing high-speed access to the data most-used by the CPU, permitting a faster CPU clock. Background In the history of computer and electronic chip development, there was a period when increases in CPU speed outpaced the improvements in memory access speed. The gap between the speed of CPUs and memory meant that the CPU would often be idle. CPUs were increasingly capable of running and executing larger amounts of instructions in a given time, but the time needed to access data from main memory prevented programs from fully benefiting from this capability. This issue motivated the creation of memory models with higher access rates in order to realize the potential of faster processors. This resulted in the concept of cache memory, first proposed by Maurice Wilkes, a British computer scientist at the University of Cambridge in 1965. He called such memory models "slave memory". Between roughly 1970 and 1990, papers and articles by Anant Agarwal, Alan Jay Smith, Mark D. Hill, Thomas R. Puzak, and others discussed better cache memory designs. The first cache memory models were implemented at the time, but even as researchers were investigating and proposing better designs, the need for faster memory models continued. This need resulted from the fact that although ear
https://en.wikipedia.org/wiki/Cache%20control%20instruction
In computing, a cache control instruction is a hint embedded in the instruction stream of a processor intended to improve the performance of hardware caches, using foreknowledge of the memory access pattern supplied by the programmer or compiler. They may reduce cache pollution, reduce bandwidth requirement, bypass latencies, by providing better control over the working set. Most cache control instructions do not affect the semantics of a program, although some can. Examples Several such instructions, with variants, are supported by several processor instruction set architectures, such as ARM, MIPS, PowerPC, and x86. Prefetch Also termed data cache block touch, the effect is to request loading the cache line associated with a given address. This is performed by the PREFETCH instruction in the x86 instruction set. Some variants bypass higher levels of the cache hierarchy, which is useful in a 'streaming' context for data that is traversed once, rather than held in the working set. The prefetch should occur sufficiently far ahead in time to mitigate the latency of memory access, for example in a loop traversing memory linearly. The GNU Compiler Collection intrinsic function __builtin_prefetch can be used to invoke this in the programming languages C or C++. Instruction prefetch A variant of prefetch for the instruction cache. Data cache block allocate zero This hint is used to prepare cache lines before overwriting the contents completely. In this example, the CPU needn't load anything from main memory. The semantic effect is equivalent to an aligned memset of a cache-line sized block to zero, but the operation is effectively free. Data cache block invalidate This hint is used to discard cache lines, without committing their contents to main memory. Care is needed since incorrect results are possible. Unlike other cache hints, the semantics of the program are significantly modified. This is used in conjunction with allocate zero for managing temporary data.
https://en.wikipedia.org/wiki/Sampling%20%28medicine%29
In medicine, sampling is gathering of matter from the body to aid in the process of a medical diagnosis and/or evaluation of an indication for treatment, further medical tests or other procedures. In this sense, the sample is the gathered matter, and the sampling tool or sampler is the person or material to collect the sample. Sampling is a prerequisite for many medical tests, but generally not for medical history, physical examination and radiologic tests. By sampling technique Obtaining excretions or materials that leave the body anyway, such as urine, stool, sputum, or vomitus, by direct collection as they exit. A sample of saliva can also be collected from the mouth. Excision (cutting out), a surgical method for the removal of solid or soft tissue samples. Puncture (also called centesis) followed by aspiration is the main method used for sampling of many types of tissues and body fluids. Examples are thoracocentesis to sample pleural fluid, and amniocentesis to sample amniotic fluid. The main method of centesis, in turn, is fine needle aspiration, but there are also somewhat differently designed needles, such as for bone marrow aspiration. Puncture without aspiration may suffice in, for example, capillary blood sampling. Scraping or swiping. In a Pap test, cells are scraped off a uterine cervix with a special spatula and brush or a special broom device that is inserted through a vagina without having to puncture any tissue. Epithelial cells for DNA testing can be obtained by swiping the inside of a cheek in a mouth with a swab. Biopsy or cytopathology In terms of sampling technique, a biopsy generally refers to a preparation where the normal tissue structure is preserved, availing for examination of both individual cells and their organization for the study of histology, while a sample for cytopathology is prepared primarily for the examination of individual cells, not necessarily preserving the tissue structure. Examples of biopsy procedures are bone ma