https://en.wikipedia.org/wiki/Housekeeping%20gene
In molecular biology, housekeeping genes are typically constitutive genes that are required for the maintenance of basic cellular function, and are expressed in all cells of an organism under normal and patho-physiological conditions. Although some housekeeping genes are expressed at relatively constant rates in most non-pathological situations, the expression of other housekeeping genes may vary depending on experimental conditions. The origin of the term "housekeeping gene" remains obscure. Literature from 1976 used the term to describe specifically tRNA and rRNA. For experimental purposes, the expression of one or multiple housekeeping genes is used as a reference point for the analysis of expression levels of other genes. The key criterion for the use of a housekeeping gene in this manner is that the chosen housekeeping gene is uniformly expressed with low variance under both control and experimental conditions. Validation of housekeeping genes should be performed before their use in gene expression experiments such as RT-PCR. Recently, a web-based database of human and mouse housekeeping genes and reference genes/transcripts, named Housekeeping and Reference Transcript Atlas (HRT Atlas), was developed to offer an updated list of housekeeping genes and reliable candidate reference genes/transcripts for RT-qPCR data normalization. This database can be accessed at http://www.housekeeping.unicamp.br. Housekeeping gene regulation Housekeeping genes account for the majority of t
https://en.wikipedia.org/wiki/Histone%20octamer
In molecular biology, a histone octamer is the eight-protein complex found at the center of a nucleosome core particle. It consists of two copies of each of the four core histone proteins (H2A, H2B, H3, and H4). The octamer assembles when a tetramer, containing two copies of H3 and two of H4, complexes with two H2A/H2B dimers. Each histone has both an N-terminal tail and a C-terminal histone-fold. Each of these key components interacts with DNA in its own way through a series of weak interactions, including hydrogen bonds and salt bridges. These interactions keep the DNA and the histone octamer loosely associated, and ultimately allow the two to re-position or to separate entirely. History of research Histone post-translational modifications were first identified and listed as having a potential regulatory role on the synthesis of RNA in 1964. Since then, over several decades, chromatin theory has evolved. Chromatin subunit models as well as the notion of the nucleosome were established in 1973 and 1974, respectively. Richmond and his research group were able to elucidate the crystal structure of the histone octamer with DNA wrapped around it at a resolution of 7 Å in 1984. The structure of the octameric core complex was revisited seven years later, when a 3.1 Å resolution crystal structure was obtained at a high salt concentration. Though sequence similarity is low between the core histones, each of the four has a repeated element consisting of a helix-loop-h
https://en.wikipedia.org/wiki/NBIA
NBIA may refer to: Niels Bohr International Academy, a Center for Theoretical Physics at the Niels Bohr Institute, Copenhagen Neurodegeneration with brain iron accumulation, a group of degenerative diseases of the brain (New) Bangkok International Airport (Suvarnabhumi Airport), an international airport serving Bangkok, Thailand National Biomedical Imaging Archive, a National Cancer Institute repository of medical images for researchers and imaging tool developers Normandy Beach Improvement Association North Bali International Airport
https://en.wikipedia.org/wiki/Chemical%20oxygen%20demand
In environmental chemistry, the chemical oxygen demand (COD) is an indicative measure of the amount of oxygen that can be consumed by reactions in a measured solution. It is commonly expressed in mass of oxygen consumed over volume of solution which in SI units is milligrams per litre (mg/L). A COD test can be used to easily quantify the amount of organics in water. The most common application of COD is in quantifying the amount of oxidizable pollutants found in surface water (e.g. lakes and rivers) or wastewater. COD is useful in terms of water quality by providing a metric to determine the effect an effluent will have on the receiving body, much like biochemical oxygen demand (BOD). Overview The basis for the COD test is that nearly all organic compounds can be fully oxidized to carbon dioxide with a strong oxidizing agent under acidic conditions. The amount of oxygen required to oxidize an organic compound to carbon dioxide, ammonia, and water is given by: This expression does not include the oxygen demand caused by nitrification, the oxidation of ammonia into nitrate: Dichromate, the oxidizing agent for COD determination, does not oxidize ammonia into nitrate, so nitrification is not included in the standard COD test. The International Organization for Standardization describes a standard method for measuring chemical oxygen demand in ISO 6060 . Using potassium dichromate Potassium dichromate is a strong oxidizing agent under acidic conditions. Acidity is usually ac
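The two reaction equations referred to above are not reproduced in the excerpt; the following is a reconstruction of the standard textbook stoichiometry (assumed, not quoted from the article) for full oxidation of a generic organic compound and for nitrification.

```latex
% Assumed standard full-oxidation stoichiometry underlying the COD test:
\[
  \mathrm{C}_n\mathrm{H}_a\mathrm{O}_b\mathrm{N}_c
  + \left(n + \tfrac{a}{4} - \tfrac{b}{2} - \tfrac{3c}{4}\right)\mathrm{O}_2
  \;\longrightarrow\;
  n\,\mathrm{CO}_2 + \left(\tfrac{a}{2} - \tfrac{3c}{2}\right)\mathrm{H}_2\mathrm{O} + c\,\mathrm{NH}_3
\]
% Nitrification, which the dichromate-based COD test does not capture:
\[
  \mathrm{NH}_3 + 2\,\mathrm{O}_2 \;\longrightarrow\; \mathrm{HNO}_3 + \mathrm{H}_2\mathrm{O}
\]
```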
https://en.wikipedia.org/wiki/Phosphite%20anion
A phosphite anion or phosphite in inorganic chemistry usually refers to [HPO3]2− but includes [H2PO3]− ([HPO2(OH)]−). These anions are the conjugate bases of phosphorous acid (H3PO3). The corresponding salts, e.g. sodium phosphite (Na2HPO3), are reducing in character. Nomenclature The IUPAC recommended name for phosphorous acid is phosphonic acid. Correspondingly, the IUPAC-recommended name for the ion is phosphonate. In the US the IUPAC naming conventions for inorganic compounds are taught at high school, but not as a 'required' part of the curriculum. A well-known university-level textbook follows the IUPAC recommendations. In practice any reference to "phosphite" should be investigated to determine the naming convention being employed. Salts containing HPO32−, called phosphonates or phosphites From the commercial perspective, the most important phosphite salt is basic lead phosphite. Many salts containing the phosphite ion have been investigated structurally; these include sodium phosphite pentahydrate (Na2HPO3·5H2O), (NH4)2HPO3·H2O, CuHPO3·H2O, SnHPO3 and Al2(HPO3)3·4H2O. The structure of HPO32− is approximately tetrahedral. HPO32− has a number of canonical resonance forms, making it isoelectronic with the bisulfite ion, HSO3−, which has a similar structure. Salts containing HP(O)2OH− Acid or hydrogen phosphites are called hydrogenphosphonates or acid phosphites. IUPAC recommends the name hydrogenphosphonates. They are anions HP(O)2OH−. A typical derivative is the salt [NH4][HP(O)2
https://en.wikipedia.org/wiki/Subsequential%20limit
In mathematics, a subsequential limit of a sequence is the limit of some subsequence. Every subsequential limit is a cluster point, but not conversely. In first-countable spaces, the two concepts coincide. In a topological space, if every subsequence has the same point as a subsequential limit, then the original sequence also converges to that point. This need not hold in more generalized notions of convergence, such as the space of almost everywhere convergence. The supremum of the set of all subsequential limits of some sequence is called the limit superior, or limsup. Similarly, the infimum of such a set is called the limit inferior, or liminf. See limit superior and limit inferior. If X is a metric space and a Cauchy sequence in X has a subsequence converging to some point x, then the whole sequence also converges to x. See also References Limits (mathematics) Sequences and series
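As a brief illustration (an example added here, not taken from the article), consider a bounded real sequence with two subsequential limits.

```latex
% Example: a_n = (-1)^n (1 + 1/n).
% The even-indexed subsequence tends to 1 and the odd-indexed subsequence to -1,
% so the set of subsequential limits is {-1, 1} and
\[
  \limsup_{n\to\infty} a_n = 1, \qquad \liminf_{n\to\infty} a_n = -1,
\]
% while the sequence itself does not converge.
```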
https://en.wikipedia.org/wiki/Comparison%20%28disambiguation%29
Comparison is the act of examining the similarities and differences between things. Comparison may also refer to: Computer science and technology Comparison (computer programming), a code that makes decisions and selects alternatives based on them Comparison microscope, a dual microscope for analyzing side-by-side specimens Comparison sort, a type of data sort algorithm File comparison, the automatic comparison of data such as files and texts by computer programs Price comparison service, an Internet service Language Comparison (grammar), the modification of adjectives and adverbs to express the relative degree Mass comparison, a test for the relatedness of languages Mathematics Comparison (mathematics), a notation for comparing variable values Comparison of topologies, an order relation on the set of all topologies on one and the same set Multiple comparisons, a procedure of statistics a synonym for co-transitivity, in constructive mathematics Psychology Pairwise comparison, a test of psychology Social comparison theory, a branch of social psychology Other uses Compare: A Journal of Comparative and International Education Cross-cultural studies, which involve cross-cultural comparisons See also Comparability, a mathematical definition Comparative (disambiguation) Comparator (disambiguation)
https://en.wikipedia.org/wiki/Nonholonomic%20system
A nonholonomic system in physics and mathematics is a physical system whose state depends on the path taken in order to achieve it. Such a system is described by a set of parameters subject to differential constraints and non-linear constraints, such that when the system evolves along a path in its parameter space (the parameters varying continuously in values) but finally returns to the original set of parameter values at the start of the path, the system itself may not have returned to its original state. Nonholonomic mechanics is an autonomous division of Newtonian mechanics. Details More precisely, a nonholonomic system, also called an anholonomic system, is one in which there is a continuous closed circuit of the governing parameters, by which the system may be transformed from any given state to any other state. Because the final state of the system depends on the intermediate values of its trajectory through parameter space, the system cannot be represented by a conservative potential function as can, for example, the inverse square law of the gravitational force. This latter is an example of a holonomic system: path integrals in the system depend only upon the initial and final states of the system (positions in the potential), completely independent of the trajectory of transition between those states. The system is therefore said to be integrable, while the nonholonomic system is said to be nonintegrable. When a path integral is computed in a nonholonomic system, t
https://en.wikipedia.org/wiki/Thailand%20National%20Nanotechnology%20Center
National Nanotechnology Center (NANOTEC) is one of Thailand's National Research Centers, directed by the National Science and Technology Development Agency (NSTDA) under the Ministry of Higher Education, Science, Research and Innovation. See also Nanotechnology National Science and Technology Development Agency (NSTDA) External links Research institutes in Thailand Nanotechnology institutions National Science and Technology Development Agency
https://en.wikipedia.org/wiki/Oblivious%20transfer
In cryptography, an oblivious transfer (OT) protocol is a type of protocol in which a sender transfers one of potentially many pieces of information to a receiver, but remains oblivious as to what piece (if any) has been transferred. The first form of oblivious transfer was introduced in 1981 by Michael O. Rabin. In this form, the sender sends a message to the receiver with probability 1/2, while the sender remains oblivious as to whether or not the receiver received the message. Rabin's oblivious transfer scheme is based on the RSA cryptosystem. A more useful form of oblivious transfer called 1–2 oblivious transfer or "1 out of 2 oblivious transfer", was developed later by Shimon Even, Oded Goldreich, and Abraham Lempel, in order to build protocols for secure multiparty computation. It is generalized to "1 out of n oblivious transfer" where the user gets exactly one database element without the server getting to know which element was queried, and without the user knowing anything about the other elements that were not retrieved. The latter notion of oblivious transfer is a strengthening of private information retrieval, in which the database is not kept private. Claude Crépeau showed that Rabin's oblivious transfer is equivalent to 1–2 oblivious transfer. Further work has revealed oblivious transfer to be a fundamental and important problem in cryptography. It is considered one of the critical problems in the field, because of the importance of the applications that
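For concreteness, here is a minimal sketch of a 1-out-of-2 oblivious transfer exchange in the style of the Even–Goldreich–Lempel construction, written in Python with deliberately tiny, insecure "toy" RSA parameters. The variable names, message values, and parameter sizes are illustrative assumptions, not a faithful rendering of any particular implementation.

```python
# Minimal sketch of 1-out-of-2 oblivious transfer (Even-Goldreich-Lempel style).
# Toy RSA parameters only; not secure. Requires Python 3.8+ for pow(e, -1, phi).
import random

# --- Sender setup: toy RSA key pair ---
p, q = 61, 53
N = p * q                        # RSA modulus
phi = (p - 1) * (q - 1)
e = 17                           # public exponent
d = pow(e, -1, phi)              # private exponent

m0, m1 = 42, 99                  # the sender's two messages, encoded as integers < N

# Sender picks two random values and publishes them along with (N, e).
x0, x1 = random.randrange(N), random.randrange(N)

# --- Receiver: wants message b (0 or 1), blinds its choice ---
b = 1
k = random.randrange(N)                      # random blinding value
v = ((x0, x1)[b] + pow(k, e, N)) % N         # blinded choice sent to the sender

# --- Sender: cannot tell b from v; answers with both messages blinded ---
k0 = pow((v - x0) % N, d, N)                 # equals k if b == 0, random-looking otherwise
k1 = pow((v - x1) % N, d, N)                 # equals k if b == 1, random-looking otherwise
c0 = (m0 + k0) % N
c1 = (m1 + k1) % N

# --- Receiver: can unblind only the chosen ciphertext ---
m_b = ((c0, c1)[b] - k) % N
assert m_b == (m0, m1)[b]
print("received message", b, "=", m_b)
```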
https://en.wikipedia.org/wiki/Non-blocking%20algorithm
In computer science, an algorithm is called non-blocking if failure or suspension of any thread cannot cause failure or suspension of another thread; for some operations, these algorithms provide a useful alternative to traditional blocking implementations. A non-blocking algorithm is lock-free if there is guaranteed system-wide progress, and wait-free if there is also guaranteed per-thread progress. "Non-blocking" was used as a synonym for "lock-free" in the literature until the introduction of obstruction-freedom in 2003. The word "non-blocking" was traditionally used to describe telecommunications networks that could route a connection through a set of relays "without having to re-arrange existing calls" (see Clos network). Also, if the telephone exchange "is not defective, it can always make the connection" (see nonblocking minimal spanning switch). Motivation The traditional approach to multi-threaded programming is to use locks to synchronize access to shared resources. Synchronization primitives such as mutexes, semaphores, and critical sections are all mechanisms by which a programmer can ensure that certain sections of code do not execute concurrently, if doing so would corrupt shared memory structures. If one thread attempts to acquire a lock that is already held by another thread, the thread will block until the lock is free. Blocking a thread can be undesirable for many reasons. An obvious reason is that while the thread is blocked, it cannot accomplish any
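To make the notion of non-blocking progress concrete, the sketch below shows the compare-and-swap (CAS) retry loop that many lock-free structures are built on. Python does not expose a hardware CAS instruction, so the cas() helper here is a single-threaded stand-in used purely to illustrate the calling pattern; the names are illustrative assumptions, not a real lock-free library.

```python
# Illustrative sketch of a lock-free-style increment built on compare-and-swap (CAS).
# NOTE: real lock-free code relies on an atomic hardware CAS; this stand-in is not
# atomic and only demonstrates the retry-loop structure.

class Cell:
    def __init__(self, value):
        self.value = value

def cas(cell, expected, new):
    """Stand-in for atomic compare-and-swap: succeed only if the cell still holds
    `expected`. On real hardware this is a single atomic instruction."""
    if cell.value == expected:
        cell.value = new
        return True
    return False

def lock_free_increment(cell):
    # Classic retry loop: read, compute, attempt CAS; retry if the value changed
    # in between. No thread ever holds a lock, so a suspended thread cannot
    # block the others, which is the essence of lock-freedom.
    while True:
        old = cell.value
        if cas(cell, old, old + 1):
            return old + 1

counter = Cell(0)
for _ in range(5):
    lock_free_increment(counter)
print(counter.value)  # 5
```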
https://en.wikipedia.org/wiki/List%20of%20effects
This is a list of names for observable phenomena that contain the word “effect”, amplified by reference(s) to their respective fields of study. A Abscopal effect (cancer treatments) (immune system) (medical treatments) (radiation therapy) Accelerator effect (economics) Accordion effect (physics) (waves) Acousto-optic effect (nonlinear optics) (waves) Additive genetic effects (genetics) Aharonov–Bohm effect (quantum mechanics) Al Jazeera effect (Al Jazeera) (media issues) Alienation effect (acting techniques) (Bertolt Brecht theories and techniques) (film theory) (metafictional techniques) (theatre) Allais effect (fringe physics) Allee effect (biology) Ambiguity effect (cognitive biases) Anrep effect (cardiology) (medicine) Antenna effect (digital electronics) (electronic design automation) Anti-greenhouse effect (atmospheric dynamics) (atmospheric science) (astronomy) (planetary atmospheres) Askaryan effect (particle physics) Asymmetric blade effect (aerodynamics) Audience effect (psychology) (social psychology) Auger effect (atomic physics) (foundational quantum physics) Aureole effect (atmospheric optical phenomena) (scientific terminology) Autler–Townes effect (atomic, molecular, and optical physics) (atomic physics) (quantum optics) Autokinetic effect (vision) Avalanche effect (cryptography) Averch–Johnson effect (economics) B Baader-Meinhof effect / Baader-Meinhof phenomenon (psychology) Balassa–Samuelson effect (economics) Baldwin effect (evolutionary biology)
https://en.wikipedia.org/wiki/Magic%20%28cryptography%29
Magic was an Allied cryptanalysis project during World War II. It involved the United States Army's Signals Intelligence Service (SIS) and the United States Navy's Communication Special Unit. Codebreaking Magic was set up to combine the US government's cryptologic capabilities in one organization dubbed the Research Bureau. Intelligence officers from the Army and Navy (and later civilian experts and technicians) were all under one roof. Although they worked on a series of codes and cyphers, their most important successes involved RED, BLUE, and PURPLE. RED In 1923, a US Navy officer acquired a stolen copy of the Secret Operating Code codebook used by the Japanese Navy during World War I. Photographs of the codebook were given to the cryptanalysts at the Research Desk and the processed code was kept in red-colored folders (to indicate its Top Secret classification). This code was called "RED". BLUE In 1930, the Japanese government created a more complex code that was codenamed BLUE, although RED was still being used for low-level communications. It was quickly broken by the Research Desk no later than 1932. US Military Intelligence COMINT listening stations began monitoring command-to-fleet, ship-to-ship, and land-based communications. PURPLE After Japan's ally Germany declared war in the fall of 1939, the German government began sending technical assistance to upgrade their communications and cryptography capabilities. One part was to send them modified Enigma machines
https://en.wikipedia.org/wiki/Type%20conversion
In computer science, type conversion, type casting, type coercion, and type juggling are different ways of changing an expression from one data type to another. An example would be the conversion of an integer value into a floating point value or its textual representation as a string, and vice versa. Type conversions can take advantage of certain features of type hierarchies or data representations. Two important aspects of a type conversion are whether it happens implicitly (automatically) or explicitly, and whether the underlying data representation is converted from one representation into another, or a given representation is merely reinterpreted as the representation of another data type. In general, both primitive and compound data types can be converted. Each programming language has its own rules on how types can be converted. Languages with strong typing typically do little implicit conversion and discourage the reinterpretation of representations, while languages with weak typing perform many implicit conversions between data types. Weakly typed languages often allow forcing the compiler to arbitrarily interpret a data item as having different representations—this can be a non-obvious programming error, or a technical method to directly deal with underlying hardware. In most languages, the word coercion is used to denote an implicit conversion, either during compilation or during run time. For example, in an expression mixing integer and floating point numbers (li
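A short Python illustration of the distinctions drawn above (the snippet is an added example, not drawn from the article): implicit coercion in a mixed expression, explicit conversion, and reinterpretation of the same bits as a different type.

```python
import struct

# Implicit conversion (coercion): the int 2 is promoted to float in a mixed expression.
x = 2 + 3.5           # -> 5.5 (float)

# Explicit conversion (casting): the programmer requests the change of type.
n = int("42")         # string -> int
s = str(3.14159)      # float -> string

# Reinterpretation of a representation: pack a 32-bit float and read the same
# four bytes back as an unsigned 32-bit integer (no value conversion involved).
raw = struct.pack("<f", 1.0)
bits = struct.unpack("<I", raw)[0]
print(x, n, s, hex(bits))   # 5.5 42 3.14159 0x3f800000
```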
https://en.wikipedia.org/wiki/Schwarzian%20derivative
In mathematics, the Schwarzian derivative is an operator similar to the derivative which is invariant under Möbius transformations. Thus, it occurs in the theory of the complex projective line, and in particular, in the theory of modular forms and hypergeometric functions. It plays an important role in the theory of univalent functions, conformal mapping and Teichmüller spaces. It is named after the German mathematician Hermann Schwarz. Definition The Schwarzian derivative of a holomorphic function of one complex variable is defined by The same formula also defines the Schwarzian derivative of a function of one real variable. The alternative notation is frequently used. Properties The Schwarzian derivative of any Möbius transformation is zero. Conversely, the Möbius transformations are the only functions with this property. Thus, the Schwarzian derivative precisely measures the degree to which a function fails to be a Möbius transformation. If is a Möbius transformation, then the composition has the same Schwarzian derivative as ; and on the other hand, the Schwarzian derivative of is given by the chain rule More generally, for any sufficiently differentiable functions and When and are smooth real-valued functions, this implies that all iterations of a function with negative (or positive) Schwarzian will remain negative (resp. positive), a fact of use in the study of one-dimensional dynamics. Introducing the function of two complex variables its s
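The defining formula is not reproduced in the excerpt; the standard expression, given here as a reconstruction together with the composition rule mentioned above, is:

```latex
% Schwarzian derivative of a holomorphic (or sufficiently smooth real) function f:
\[
  (Sf)(z) \;=\; \left(\frac{f''(z)}{f'(z)}\right)' - \frac{1}{2}\left(\frac{f''(z)}{f'(z)}\right)^{2}
          \;=\; \frac{f'''(z)}{f'(z)} - \frac{3}{2}\left(\frac{f''(z)}{f'(z)}\right)^{2}.
\]
% Chain rule for a composition, which underlies the Moebius invariance:
\[
  S(f\circ g) \;=\; \bigl((Sf)\circ g\bigr)\,(g')^{2} + Sg .
\]
```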
https://en.wikipedia.org/wiki/List%20of%20chemical%20compounds%20with%20unusual%20names
Chemical nomenclature, replete as it is with compounds with very complex names, is a repository for some names that may be considered unusual. A browse through the Physical Constants of Organic Compounds in the CRC Handbook of Chemistry and Physics (a fundamental resource) will reveal not just the whimsical work of chemists, but the sometimes peculiar compound names that occur as the consequence of simple juxtaposition. Some names derive legitimately from their chemical makeup, from the geographic region where they may be found, the plant or animal species from which they are isolated or the name of the discoverer. Some are given intentionally unusual trivial names based on their structure, a notable property or at the whim of those who first isolate them. However, many trivial names predate formal naming conventions. Trivial names can also be ambiguous or carry different meanings in different industries, geographic regions and languages. Godly noted that "Trivial names having the status of INN or ISO are carefully tailor-made for their field of use and are internationally accepted". In his preface to Chemical Nomenclature, Thurlow wrote that "Chemical names do not have to be deadly serious". A website in existence since 1997 and maintained at the University of Bristol lists a selection of "molecules with silly or unusual names" strictly for entertainment. These so-called silly or funny trivial names (depending on culture) can also serve an educational purpose. In an articl
https://en.wikipedia.org/wiki/Mean-field%20theory
In physics and probability theory, mean-field theory (MFT) or self-consistent field theory studies the behavior of high-dimensional random (stochastic) models by studying a simpler model that approximates the original by averaging over degrees of freedom (the number of values in the final calculation of a statistic that are free to vary). Such models consider many individual components that interact with each other. The main idea of MFT is to replace all interactions to any one body with an average or effective interaction, sometimes called a molecular field. This reduces any many-body problem to an effective one-body problem. The ease of solving MFT problems means that some insight into the behavior of the system can be obtained at a lower computational cost. MFT has since been applied to a wide range of fields outside of physics, including statistical inference, graphical models, neuroscience, artificial intelligence, epidemic models, queueing theory, computer-network performance and game theory, as in the quantal response equilibrium. Origins The idea first appeared in physics (statistical mechanics) in the work of Pierre Curie and Pierre Weiss to describe phase transitions. MFT has been used in the Bragg–Williams approximation, models on the Bethe lattice, Landau theory, Pierre–Weiss approximation, Flory–Huggins solution theory, and Scheutjens–Fleer theory. Systems with many (sometimes infinite) degrees of freedom are generally hard to solve exactly or compute in close
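As a concrete illustration of replacing interactions with an average field (an example chosen here, not taken from the article), the mean-field Ising magnetization m solves the self-consistency equation m = tanh((J z m + h)/T). The sketch below finds m by damped fixed-point iteration; all parameter values are illustrative.

```python
import math

def mean_field_magnetization(T, J=1.0, z=4, h=0.0, tol=1e-10, max_iter=10_000):
    """Solve the mean-field Ising self-consistency equation
       m = tanh((J*z*m + h) / T)   (Boltzmann constant set to 1)
    by damped fixed-point iteration."""
    beta = 1.0 / T
    m = 0.5  # nonzero initial guess so the ordered solution can emerge below Tc
    for _ in range(max_iter):
        m_new = math.tanh(beta * (J * z * m + h))
        if abs(m_new - m) < tol:
            return m_new
        m = 0.5 * m + 0.5 * m_new   # damping for stable convergence
    return m

# Mean-field theory predicts a critical temperature Tc = z*J (here 4.0):
for T in (2.0, 3.9, 4.1, 6.0):
    print(f"T = {T:4.1f}  m = {mean_field_magnetization(T):.4f}")
```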
https://en.wikipedia.org/wiki/William%20Pugsley
William Pugsley (September 27, 1850 – March 3, 1925) was a politician and lawyer in New Brunswick, Canada. Biography He was born in Sussex, New Brunswick, the son of William Pugsley, of United Empire Loyalist descent, and Frances Jane Hayward. He was educated at the University of New Brunswick. He studied mathematics, classics, and English and was awarded many scholarships. In his junior year he was the gold medallist of his class. He went on to study law, was admitted to the bar in 1872 and set up practice in Saint John. The University of New Brunswick awarded him a BCL in 1879 and would confer honorary degrees of DCL in 1884 and LL.D in 1918. Pugsley was created a QC on 4 February 1891. Pugsley, a Liberal, served as Speaker of the Legislative Assembly of New Brunswick, Solicitor-General and Attorney-General in various Liberal governments before becoming the 11th premier of New Brunswick in 1907. He resigned in September of that year to become minister of public works in the federal Liberal government of Sir Wilfrid Laurier. He served in that position until the government's defeat in the 1911 federal election, but remained as a Member of Parliament (MP) until 1917, when he was appointed the 15th lieutenant governor of New Brunswick. When his term ended in 1923, he was appointed to a federal position in charge of settling war claims, and held that position until his death. Pugsley was staying at the King Edward Hotel in Toronto when he fell ill and died of pneumonia in 1925. H
https://en.wikipedia.org/wiki/McFarland%20standards
In microbiology, McFarland standards are used as a reference to adjust the turbidity of bacterial suspensions so that the number of bacteria will be within a given range to standardize microbial testing. An example of such testing is antibiotic susceptibility testing by measurement of minimum inhibitory concentration which is routinely used in medical microbiology and research. If a suspension used is too heavy or too dilute, an erroneous result (either falsely resistant or falsely susceptible) for any given antimicrobial agent could occur. Original McFarland standards were made by mixing specified amounts of barium chloride and sulfuric acid together. Mixing the two compounds forms a barium sulfate precipitate, which causes turbidity in the solution. A 0.5 McFarland standard is prepared by mixing 0.05 mL of 1.175% barium chloride dihydrate (BaCl2•2H2O), with 9.95 mL of 1% sulfuric acid (H2SO4). Now there are McFarland standards prepared from suspensions of latex particles, which lengthens the shelf life and stability of the suspensions. The standard can be compared visually to a suspension of bacteria in sterile saline or nutrient broth. If the bacterial suspension is too turbid, it can be diluted with more diluent. If the suspension is not turbid enough, more bacteria can be added. McFarland nephelometer standards (measured at a wavelength of 600 nm); McFarland latex standards from Hardy Diagnostics (2014-12-10), measured at the UCSF DeRisi Lab.
https://en.wikipedia.org/wiki/Oersted%20Medal
The Oersted Medal recognizes notable contributions to the teaching of physics. Established in 1936, it is awarded by the American Association of Physics Teachers. The award is named for Hans Christian Ørsted. It is the Association's most prestigious award. Well-known recipients include Nobel laureates Robert Andrews Millikan, Edward M. Purcell, Richard Feynman, Isidor I. Rabi, Norman F. Ramsey, Hans Bethe, and Carl Wieman; as well as Arnold Sommerfeld, George Uhlenbeck, Jerrold Zacharias, Philip Morrison, Melba Phillips, Victor Weisskopf, Gerald Holton, John A. Wheeler, Frank Oppenheimer, Robert Resnick, Carl Sagan, Freeman Dyson, Daniel Kleppner, and Lawrence Krauss, and Anthony French, David Hestenes, Robert Karplus, Robert Pohl, and Francis Sears. The 2008 medalist, Mildred S. Dresselhaus, is the third woman to win the award in its 70-plus-year history. Medalists William Suddards Franklin – 1936 Edwin Herbert Hall – 1937 Alexander Wilmer Duff – 1938 Benjamin Harrison Brown – 1939 Robert Andrews Millikan – 1940 Henry Crew – 1941 not awarded in 1942 George Walter Stewart – 1943 Roland Roy Tileston – 1944 Homer Levi Dodge – 1945 Ray Lee Edwards – 1946 Duane Roller – 1947 William Harley Barber – 1948 Arnold Sommerfeld – 1949 Orrin H. Smith – 1950 John Wesley Hornbeck – 1951 Ansel A. Knowlton – 1952 Richard M. Sutton – 1953 Clifford N. Wall – 1954 Vernet E. Eaton – 1955 George E. Uhlenbeck – 1956 Mark W. Zemansky – 1957 Jay William Buchta – 1958 Pau
https://en.wikipedia.org/wiki/Richard%20Adolf%20Zsigmondy
Richard Adolf Zsigmondy (1 April 1865 – 23 September 1929) was an Austrian-born chemist. He was known for his research in colloids, for which he was awarded the Nobel Prize in chemistry in 1925, as well as for co-inventing the slit-ultramicroscope, and different membrane filters. The crater Zsigmondy on the Moon is named in his honour. Biography Early years Zsigmondy was born in Vienna, Austrian Empire, to a Hungarian gentry family. His mother, Irma Szakmáry, was a poet born in Martonvásár, and his father, Adolf Zsigmondy Sr., was a scientist from Pressburg (Pozsony, today's Bratislava) who invented several surgical instruments for use in dentistry. Zsigmondy family members were Lutherans. They originated from Johannes Sigmondi (1686–1746, Bártfa, Kingdom of Hungary) and included teachers, priests and Hungarian freedom-fighters. Richard was raised by his mother after his father's early death in 1880, and received a comprehensive education. He enjoyed hobbies such as climbing and mountaineering with his siblings. His elder brothers, Otto (a dentist) and Emil (a physician), were well-known mountain climbers; his younger brother, Karl Zsigmondy, became a notable mathematician in Vienna. In high school Richard developed an interest in natural science, especially in chemistry and physics, and experimented in his home laboratory. He began his academic career at the University of Vienna Medical Faculty, but soon moved to the Technical University of Vienna, and later to the Universit
https://en.wikipedia.org/wiki/Paul%20Richard%20Heinrich%20Blasius
Paul Richard Heinrich Blasius (9 August 1883 – 24 April 1970) was a German fluid dynamics physicist. He was one of the first students of Prandtl. Blasius provided a mathematical basis for boundary-layer drag but also showed as early as 1911 that the resistance to flow through smooth pipes could be expressed in terms of the Reynolds number for both laminar and turbulent flow. After six years in science he changed to Ingenieurschule Hamburg (today: University of Applied Sciences Hamburg) and became a Professor. On 1 April 1962 Heinrich Blasius celebrated his 50th anniversary in teaching. He was active in his field until he died on 24 April 1970. One of his most notable contributions involves a description of the steady two-dimensional boundary-layer that forms on a semi-infinite plate that is held parallel to a constant unidirectional flow . Correlations First law of Blasius for turbulent Fanning friction factor: Second law of Blasius for turbulent Fanning friction factor: Law of Blasius for friction coefficient in turbulent pipe flow: See also Blasius function Notes References Hager, W.H., "Blasius: A life in research and education," Experiments in Fluids, 34: 566–571 (2003) Blasius, H., "Das Aehnlichkeitsgesetz bei Reibungsvorgängen in Flüssigkeiten", Mitteilungen über Forschungsarbeiten auf dem Gebiete des Ingenieurwesens, vol.134, VDI-Verlag Berlin (1913) External links 1883 births 1970 deaths 20th-century German physicists Fluid dynamicists
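The correlations themselves are not reproduced in the excerpt; the classical Blasius friction correlation for turbulent flow in smooth pipes, given here as a hedged reconstruction of what the stripped formulas most likely expressed, reads:

```latex
% Classical Blasius correlation for turbulent flow in smooth pipes
% (Fanning friction factor f and Darcy friction factor lambda = 4f),
% commonly quoted as valid for roughly 4*10^3 < Re < 10^5:
\[
  f \approx 0.0791\,\mathrm{Re}^{-1/4},
  \qquad
  \lambda \approx 0.3164\,\mathrm{Re}^{-1/4}.
\]
```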
https://en.wikipedia.org/wiki/Graph%20%28abstract%20data%20type%29
In computer science, a graph is an abstract data type that is meant to implement the undirected graph and directed graph concepts from the field of graph theory within mathematics. A graph data structure consists of a finite (and possibly mutable) set of vertices (also called nodes or points), together with a set of unordered pairs of these vertices for an undirected graph or a set of ordered pairs for a directed graph. These pairs are known as edges (also called links or lines), and for a directed graph are also known as arrows or arcs. The vertices may be part of the graph structure, or may be external entities represented by integer indices or references. A graph data structure may also associate to each edge some edge value, such as a symbolic label or a numeric attribute (cost, capacity, length, etc.). Operations The basic operations provided by a graph data structure G usually include: adjacent(G, x, y): tests whether there is an edge from the vertex x to the vertex y; neighbors(G, x): lists all vertices y such that there is an edge from the vertex x to the vertex y; add_vertex(G, x): adds the vertex x, if it is not there; remove_vertex(G, x): removes the vertex x, if it is there; add_edge(G, x, y, z): adds the edge z from the vertex x to the vertex y, if it is not there; remove_edge(G, x, y): removes the edge from the vertex x to the vertex y, if it is there; get_vertex_value(G, x): returns the value associated with the vertex x; set_vertex_value(G, x, v): sets the value associated with the vertex x to v. Structures that associate values to the edges usually also provide: get_edge_value(G, x, y): returns
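A compact Python sketch of an adjacency-list implementation providing the operations listed above; the class and method names simply mirror that list and are illustrative, not a standard library API.

```python
class Graph:
    """Directed graph stored as an adjacency list; vertex and edge values optional."""

    def __init__(self):
        self._adj = {}           # vertex -> {neighbor: edge_value}
        self._vertex_value = {}  # vertex -> value

    def adjacent(self, x, y):
        return x in self._adj and y in self._adj[x]

    def neighbors(self, x):
        return list(self._adj.get(x, {}))

    def add_vertex(self, x):
        self._adj.setdefault(x, {})

    def remove_vertex(self, x):
        self._adj.pop(x, None)
        self._vertex_value.pop(x, None)
        for edges in self._adj.values():
            edges.pop(x, None)   # drop incoming edges as well

    def add_edge(self, x, y, z=None):
        self.add_vertex(x)
        self.add_vertex(y)
        self._adj[x][y] = z

    def remove_edge(self, x, y):
        if x in self._adj:
            self._adj[x].pop(y, None)

    def get_vertex_value(self, x):
        return self._vertex_value.get(x)

    def set_vertex_value(self, x, v):
        self._vertex_value[x] = v

    def get_edge_value(self, x, y):
        return self._adj.get(x, {}).get(y)

    def set_edge_value(self, x, y, v):
        self._adj[x][y] = v

g = Graph()
g.add_edge("A", "B", z=3)        # edge with value 3
g.set_vertex_value("A", "start")
print(g.adjacent("A", "B"), g.neighbors("A"), g.get_edge_value("A", "B"))
# True ['B'] 3
```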
https://en.wikipedia.org/wiki/David%20J.%20Stevenson
David John Stevenson (born 2 September 1948) is a professor of planetary science at Caltech. Originally from New Zealand, he received his Ph.D. from Cornell University in physics, where he proposed a model for the interior of Jupiter. He is well known for applying fluid mechanics and magnetohydrodynamics to understand the internal structure and evolution of planets and moons. Sending a probe into the Earth Stevenson's tongue-in-cheek idea about sending a probe into the earth includes the use of nuclear weapons to crack the Earth's crust, simultaneously melting and filling the crack with molten iron containing a probe. The iron, by the action of its weight, will propagate a crack into the mantle and would subsequently sink and reach the Earth's core in weeks. Communication with the probe would be achieved with modulated acoustic waves. This idea was used in the book Artemis Fowl: The Opal Deception. Honors and awards In 1984, he received the H. C. Urey Prize awarded by the Division for Planetary Sciences of the American Astronomical Society. Stevenson is a fellow of the Royal Society and a member of the United States National Academy of Sciences. Minor planet 5211 Stevenson is named in his honor. See also Travel to the Earth's center Theoretical planetology References and sources External links Web Site at Caltech 1948 births Living people Foreign associates of the National Academy of Sciences 21st-century American astronomers Cornell University alumni 20th-centur
https://en.wikipedia.org/wiki/Dihydroxyacetone
Dihydroxyacetone (DHA), also known as glycerone, is a simple saccharide (a triose) with formula C3H6O3. DHA is primarily used as an ingredient in sunless tanning products. It is often derived from plant sources such as sugar beets and sugar cane, and by the fermentation of glycerin. Chemistry DHA is a hygroscopic white crystalline powder. It has a sweet cooling taste and a characteristic odor. It is the simplest of all ketoses and has no chiral center. The normal form is a dimer (2,5-bis(hydroxymethyl)-1,4-dioxane-2,5-diol). The dimer slowly dissolves in water, whereupon it converts to the monomer. These solutions are stable at pH values between 4 and 6. In more basic solution, it degrades to a brown product. This skin browning effect is attributed to a Maillard reaction. DHA condenses with the amino acid residues in the protein keratin, the major component of the skin surface. When injected, no pigmentation occurs, consistent with a role for oxygen in color development. The resulting pigments, which can be removed by abrasion, are called melanoidins. These are similar in coloration to melanin, the natural substance in the deeper skin layers which brown or "tan" from exposure to UV rays. Preparation DHA may be prepared, along with glyceraldehyde, by the mild oxidation of glycerol, for example with hydrogen peroxide and a ferrous salt as catalyst. It can also be prepared in high yield and selectivity at room temperature from glycerol using cationic palladium-based catalysts with
https://en.wikipedia.org/wiki/Jan%20Conn
Jan E. Conn (born 1952) is a Canadian geneticist and poet. She resides in Great Barrington, Massachusetts where she does research on mosquito genetics at the Wadsworth Center, Division of Infectious Diseases, New York State Department of Health in Albany, New York. She has also written six books of poetry. Biography Conn was born in Asbestos, Quebec and moved to Montreal at the age of 17. She received her Ph.D. in genetics from the University of Toronto in 1987. She has traveled to Guatemala, Venezuela, Florida, Vermont and Massachusetts, conducting research on insects that transmit pathogens. Before taking up her current work on population genetics of malaria-carrying mosquitoes in South America and Africa, she was a recognized expert on the genetics of Black fly (Simulium) species vectoring river blindness (onchocerciasis) in Central America. Poetry Conn has written six books of poetry, including Jaguar Rain: the Margaret Mee poems, inspired by the diaries and botanical art of Margaret Mee. She has won numerous awards and major travel grants related to poetry. Her book South of the Tudo Bem Cafe, 1990, was shortlisted for the Pat Lowther Award. Bibliography Red Shoes in the Rain - 1980 The Fabulous Disguise of Ourselves - 1986 South of the Tudo Bem Cafe - 1992 What Dante Did With Loss - 1996 Beauties on Mad River - 2000 Jaguar Rain: the Margaret Mee poems - 2006 See also List of Canadian poets List of Canadian writers References External links Jan Conn poetry homep
https://en.wikipedia.org/wiki/Karl%20Ziegler
Karl Waldemar Ziegler (26 November 1898 – 12 August 1973) was a German chemist who won the Nobel Prize in Chemistry in 1963, with Giulio Natta, for work on polymers. The Nobel Committee recognized his "excellent work on organometallic compounds [which]...led to new polymerization reactions and ... paved the way for new and highly useful industrial processes". He is also known for his work involving free-radicals, many-membered rings, and organometallic compounds, as well as the development of the Ziegler–Natta catalyst. One of many awards Ziegler received was the Werner von Siemens Ring in 1960, jointly with Otto Bayer and Walter Reppe, for expanding the scientific knowledge of and the technical development of new synthetic materials. Biography Early life and education Karl Ziegler was born on 26 November 1898 in Helsa near Kassel, Germany, and was the second son of Karl Ziegler, a Lutheran minister, and Luise Rall Ziegler. He attended Kassel-Bettenhausen in elementary school. An introductory physics textbook first sparked Ziegler's interest in science. It drove him to perform experiments in his home and to read extensively beyond his high school curriculum. He was also introduced to many notable individuals through his father, including Emil Adolf von Behring, recognized for the diphtheria vaccine. His extra study and experimentation helped explain why he received an award for most outstanding student in his final year at high school in Kassel, Germany. He studied at the Unive
https://en.wikipedia.org/wiki/Pore
Pore may refer to: Biology Animal biology and microbiology Sweat pore, an anatomical structure of the skin of humans (and other mammals) used for secretion of sweat Hair follicle, an anatomical structure of the skin of humans (and other mammals) used for secretion of sebum Canal pore, an anatomical structure that is part of the lateral line sense system of some aquatic organisms Gonopore, a genital pore present in some invertebrates, particularly insects Ozopore, the external discharge site of defensive glands in millipedes and some arachnids An opening across both inner and outer bacterial membranes, a part of many Gram-negative bacterial secretion systems One of the openings communicating with the skin surface at the terminus of lactiferous ducts in milk-producing mammals Plant and fungal biology Germ pore, a small pore in the outer wall of a fungal spore through which the germ tube exits upon germination Stoma, a small opening on a plant leaf used for gas exchange An anatomical feature of the anther in some plant species, the opening through which pollen is released A characteristic surface feature of porate pollen An opening in a poricidal fruit capsule Cell and molecular biology Nuclear pore, a large protein complex that penetrates the nuclear envelope in eukaryotic cells Ion channel pore, the ion-selective opening in the membrane of a eukaryotic cell formed by members of the ion channel family of proteins A water-selective opening (water channel) i
https://en.wikipedia.org/wiki/Level%20set
In mathematics, a level set of a real-valued function f of n real variables is a set where the function takes on a given constant value c, that is, the set L_c(f) = { (x1, …, xn) : f(x1, …, xn) = c }. When the number of independent variables is two, a level set is called a level curve, also known as a contour line or isoline; so a level curve is the set of all real-valued solutions of an equation in two variables x1 and x2. When n = 3, a level set is called a level surface (or isosurface); so a level surface is the set of all real-valued roots of an equation in three variables x1, x2 and x3. For higher values of n, the level set is a level hypersurface, the set of all real-valued roots of an equation in n variables. A level set is a special case of a fiber. Alternative names Level sets show up in many applications, often under different names. For example, an implicit curve is a level curve, which is considered independently of its neighbor curves, emphasizing that such a curve is defined by an implicit equation. Analogously, a level surface is sometimes called an implicit surface or an isosurface. The name isocontour is also used, which means a contour of equal height. In various application areas, isocontours have received specific names, which often indicate the nature of the values of the considered function, such as isobar, isotherm, isogon, isochrone, isoquant and indifference curve. Examples Consider the 2-dimensional Euclidean distance f(x, y) = √(x² + y²). A level set of this function consists of those points that lie at a distance of c from the
https://en.wikipedia.org/wiki/CML
CML may refer to: Computing Chemical Markup Language, a representation of chemistry using XML Column Managed Lengths, a representation of data in columns Concurrent Mapping and Localization, a technique for building and utilizing maps by autonomous robots Concurrent ML, a high-level language for concurrent programming Configuration Menu Language, a language and system for compiling the Linux kernel Conversation Markup Language, a language for building chatbots Coupled Map Lattices, an extended method of cellular automaton Electronics Current mode logic, a differential digital logic family Commercial microwave link, a communication channel between neighbouring towers in mobile networks Organizations Centre for Missional Leadership, the Watford campus of the London School of Theology Cinematography Mailing List, a long established and renowned website for professional cinematographers Classical Marimba League, an organization promoting the marimba, a percussion instrument CML - Institute of Environmental Sciences, an institute at Leiden University - the Netherlands Colorado Municipal League, see NLC members Columbus Metropolitan Library, one of the most used library systems in the United States Corpul Muncitoresc Legionar, a Romanian fascist workers' association Council of Mortgage Lenders, a trade association for the British mortgage lending industry Science and medicine Chronic myelogenous leukemia, a blood cancer N(6)-Carboxymethyllysine, an advanc
https://en.wikipedia.org/wiki/Biopython
The Biopython project is an open-source collection of non-commercial Python tools for computational biology and bioinformatics, created by an international association of developers. It contains classes to represent biological sequences and sequence annotations, and it is able to read and write to a variety of file formats. It also allows for a programmatic means of accessing online databases of biological information, such as those at NCBI. Separate modules extend Biopython's capabilities to sequence alignment, protein structure, population genetics, phylogenetics, sequence motifs, and machine learning. Biopython is one of a number of Bio* projects designed to reduce code duplication in computational biology. History Biopython development began in 1999 and it was first released in July 2000. It was developed during a similar time frame and with analogous goals to other projects that added bioinformatics capabilities to their respective programming languages, including BioPerl, BioRuby and BioJava. Early developers on the project included Jeff Chang, Andrew Dalke and Brad Chapman, though over 100 people have made contributions to date. In 2007, a similar Python project, namely PyCogent, was established. The initial scope of Biopython involved accessing, indexing and processing biological sequence files. While this is still a major focus, over the following years added modules have extended its functionality to cover additional areas of biology (see Key features and exampl
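A small example of the kind of sequence handling Biopython provides; the sequence and file name below are illustrative assumptions, not taken from the article.

```python
from Bio.Seq import Seq
from Bio import SeqIO

# Basic sequence object: reverse complement and translation.
dna = Seq("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGA")
print(dna.reverse_complement())
print(dna.translate())          # translate in reading frame 1; '*' marks a stop codon

# Parse a FASTA file and report each record's id and length
# (assumes a file named example.fasta exists in the working directory).
for record in SeqIO.parse("example.fasta", "fasta"):
    print(record.id, len(record.seq))
```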
https://en.wikipedia.org/wiki/Chromosomal%20translocation
In genetics, chromosome translocation is a phenomenon that results in unusual rearrangement of chromosomes. This includes balanced and unbalanced translocation, with two main types: reciprocal, and Robertsonian translocation. Reciprocal translocation is a chromosome abnormality caused by exchange of parts between non-homologous chromosomes. Two detached fragments of two different chromosomes are switched. Robertsonian translocation occurs when two non-homologous chromosomes get attached, meaning that given two healthy pairs of chromosomes, one of each pair "sticks" and blends together homogeneously. A gene fusion may be created when the translocation joins two otherwise-separated genes. It is detected on cytogenetics or a karyotype of affected cells. Translocations can be balanced (in an even exchange of material with no genetic information extra or missing, and ideally full functionality) or unbalanced (where the exchange of chromosome material is unequal resulting in extra or missing genes). Reciprocal translocations Reciprocal translocations are usually an exchange of material between non-homologous chromosomes and occur in about 1 in 491 live births. Such translocations are usually harmless, as they do not result in a gain or loss of genetic material, though they may be detected in prenatal diagnosis. However, carriers of balanced reciprocal translocations may create gametes with unbalanced chromosome translocations during meiotic chromosomal segregation. This can lead
https://en.wikipedia.org/wiki/Johann%20Karl%20Friedrich%20Z%C3%B6llner
Johann Karl Friedrich Zöllner (8 November 1834, Berlin – 25 April 1882, Leipzig) was a German astrophysicist who studied optical illusions. He was also an early psychical investigator. Biography From 1872 he held the chair of astrophysics at Leipzig University. He wrote numerous papers on photometry and spectrum analysis in Poggendorff's Annalen and Berichte der k. sächsischen Gesellschaft der Wissenschaften, two works on celestial photometry (Grundzüge einer allgemeinen Photometrie des Himmels, Berlin, 1861, 4to, and Photometrische Untersuchungen, Leipzig, 1865, 8vo), and a curious book, Ueber die Natur der Cometen (Leipzig, 1872, 3rd ed. 1883). He discovered the Zöllner illusion, in which parallel lines appear diagonal. He also successfully proved Christian Doppler's theory on the effect of motion on the color of stars, and the resulting shift of absorption lines, via the invention of a very sensitive spectroscope which he named the "Reversionspectroscope". He had also shown, with the help of his "Astrophotometer", that the red-shift was in addition caused by variation in the stars' light intensities. In 1867 he made the first measurement of the Sun's apparent magnitude, using a particular "telescope / photometer" he designed. The instrument was able to superimpose two images, one from a small telescope and the second from a reference lamp. During daytime he dimmed the image of the Sun (using polarizers and diaphragms) and compared it to the lamp. During nighttime, the lamp
https://en.wikipedia.org/wiki/Rosie%20Boycott%2C%20Baroness%20Boycott
Rosel Marie "Rosie" Boycott, Baroness Boycott (born 13 May 1951) is a British journalist and feminist. Early life The daughter of Major Charles Boycott and Betty Le Sueur Boycott, Rosel Marie "Rosie" Boycott was born in Saint Helier, Jersey. She was privately educated at the independent Cheltenham Ladies' College and read mathematics at the University of Kent. Journalism career Boycott worked for a year or so with Frendz radical magazine and in 1972, she co-founded the feminist magazine Spare Rib with Marsha Rowe. Later, both women became directors of Virago Press, a publisher committed to women's writing, with Carmen Callil, who had founded the company in 1973. From 1992 to 1996, Boycott was editor of the UK edition of the men's magazine Esquire. From 1996 to 1998, she headed The Independent and its sister publication the Independent on Sunday. Later, she edited the Daily Express (May 1998–January 2001), leaving soon after the newspaper was bought by Richard Desmond, who replaced her with Chris Williams. Boycott is currently the travel editor for The Oldie magazine and hosts The Oldie Travel Awards each year. Outside journalism Boycott has presented the BBC Radio 4 programme A Good Read. She has sat on judging panels for literary awards, including chairing the panel responsible for choosing the 2001 Orange Prize for Fiction. She is also a media advisor for the Council of Europe. Boycott is a trustee of the Hay Festival in Wales and in Cartagena, Colombia. In March 20
https://en.wikipedia.org/wiki/Cyanohydrin
In organic chemistry, a cyanohydrin or hydroxynitrile is a functional group found in organic compounds in which a cyano and a hydroxy group are attached to the same carbon atom. The general formula is R2C(OH)CN, where R is H, alkyl, or aryl. Cyanohydrins are industrially important precursors to carboxylic acids and some amino acids. Cyanohydrins can be formed by the cyanohydrin reaction, which involves treating a ketone or an aldehyde with hydrogen cyanide (HCN) in the presence of excess amounts of sodium cyanide (NaCN) as a catalyst: In this reaction, the nucleophilic cyanide ion (CN−) attacks the electrophilic carbonyl carbon in the ketone, followed by protonation by HCN, thereby regenerating the cyanide anion. Cyanohydrins are also prepared by displacement of sulfite by cyanide salts: Cyanohydrins are intermediates in the Strecker amino acid synthesis. In aqueous acid, they are hydrolyzed to the α-hydroxy acid. Acetone cyanohydrin Acetone cyanohydrin, (CH3)2C(OH)CN, is the cyanohydrin of acetone. It is generated as an intermediate in the industrial production of methyl methacrylate. In the laboratory, this liquid serves as a source of HCN, which is inconveniently volatile. Thus, acetone cyanohydrin can be used for the preparation of other cyanohydrins, for the transformation of HCN to Michael acceptors, and for the formylation of arenes. Treatment of this cyanohydrin with lithium hydride affords anhydrous lithium cyanide: Preparative methods Cyanohydrins were first prepared by th
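The reaction equation referred to above is not reproduced in the excerpt; the general cyanohydrin formation from a carbonyl compound, written here as a standard reconstruction, is:

```latex
% General cyanohydrin formation from a ketone (R' = H gives the aldehyde case):
\[
  \mathrm{R}_2\mathrm{C{=}O} + \mathrm{HCN}
  \;\rightleftharpoons\;
  \mathrm{R}_2\mathrm{C(OH)CN}
\]
```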
https://en.wikipedia.org/wiki/Tweaker
Tweak or tweaker may refer to: Tweak, a slang name for methamphetamine Tweaker, or alternate spelling tweeker, an individual addicted to methamphetamine Tweak or I am a tweaker may also refer to: Computing Tweaking, the act of making small mechanical or electronic improvements In cryptography, particularly disk encryption, tweakable refers to a group of modes of operation for block ciphers Tweak (programming environment) Tweakers, a Dutch technology website TweakVista / Tweak7 Tweak UI Other Tweak (band) Tweaker (band) Tweaking (behavior), see Stereotypy, slang term for someone exhibiting compulsive or repetitive behaviour Tweek Tweak, a character from the animated television series South Park TWEAK, a cytokine encoded by the gene TNFSF12 Tweak, a character filmed in Octonauts
https://en.wikipedia.org/wiki/Kim%20Maltman
Kim Maltman (born 1951) is a Canadian poet and physicist who lives in Toronto, Ontario. He is a professor of applied mathematics at York University and pursues research in theoretical nuclear/particle physics. He is serving as a judge for the 2019 Griffin Poetry Prize. Works The Country of the Mapmakers (1977), The Sicknesses of Hats (1982), Branch Lines (1982), Softened Violence (1984), The Transparence of November / Snow (1985), (with Roo Borson) Technologies/Installations (1990), Introduction to the Introduction to Wang Wei (2000), (by Pain Not Bread) External links Archives of Kim Maltman (Roo Robson and Kim Maltman fonds, (R12759) are held at Library and Archives Canada 1951 births Living people 20th-century Canadian poets Scientists from Toronto Poets from Toronto Particle physicists Academic staff of York University Canadian male poets 20th-century Canadian male writers 21st-century Canadian male writers 21st-century Canadian poets Canadian nuclear physicists Canadian particle physicists 20th-century Canadian scientists 21st-century Canadian scientists
https://en.wikipedia.org/wiki/Hyperelliptic%20curve%20cryptography
Hyperelliptic curve cryptography is similar to elliptic curve cryptography (ECC) insofar as the Jacobian of a hyperelliptic curve is an abelian group in which to do arithmetic, just as we use the group of points on an elliptic curve in ECC. Definition An (imaginary) hyperelliptic curve C of genus g over a field K is given by the equation y² + h(x)y = f(x), where h(x) is a polynomial of degree not larger than g and f(x) is a monic polynomial of degree 2g + 1. From this definition it follows that elliptic curves are hyperelliptic curves of genus 1. In hyperelliptic curve cryptography K is often a finite field. The Jacobian of C, denoted J(C), is a quotient group, thus the elements of the Jacobian are not points, they are equivalence classes of divisors of degree 0 under the relation of linear equivalence. This agrees with the elliptic curve case, because it can be shown that the Jacobian of an elliptic curve is isomorphic with the group of points on the elliptic curve. The use of hyperelliptic curves in cryptography came about in 1989 from Neal Koblitz. Although introduced only 3 years after ECC, not many cryptosystems implement hyperelliptic curves because the implementation of the arithmetic isn't as efficient as with cryptosystems based on elliptic curves or factoring (RSA). The efficiency of implementing the arithmetic depends on the underlying finite field K; in practice it turns out that finite fields of characteristic 2 are a good choice for hardware implementations while software is usually faster in odd charact
https://en.wikipedia.org/wiki/Ka%E1%B9%87%C4%81da%20%28philosopher%29
Kaṇāda, also known as Ulūka, Kashyapa, Kaṇabhaksha, or Kaṇabhuj, was an ancient Indian natural scientist and philosopher who founded the Vaisheshika school of Indian philosophy that also represents the earliest Indian physics. Estimated to have lived sometime between the 6th century and the 2nd century BCE, little is known about his life. His traditional name "Kaṇāda" means "atom eater", and he is known for developing the foundations of an atomistic approach to physics and philosophy in the Sanskrit text Vaiśeṣika Sūtra. His text is also known as the Kaṇāda Sutras, or "Aphorisms of Kaṇāda". The school founded by Kaṇāda explains the creation and existence of the universe by proposing an atomistic theory, applying logic and realism, and is one of the earliest known systematic realist ontologies in human history. Kaṇāda suggested that everything can be subdivided, but this subdivision cannot go on forever, and there must be smallest entities (paramanu) that cannot be divided, that are eternal, that aggregate in different ways to yield complex substances and bodies with unique identity, a process that involves heat, and this is the basis for all material existence. He used these ideas with the concept of Atman (soul, Self) to develop a non-theistic means to moksha. If viewed from the prism of physics, his ideas imply a clear role for the observer as independent of the system being studied. Kaṇāda's ideas were influential on other schools of Hinduism, and over its history became closely associa
https://en.wikipedia.org/wiki/Bandlimiting
Bandlimiting refers to a process which reduces the energy of a signal to an acceptably low level outside of a desired frequency range. Bandlimiting is an essential part of many applications in signal processing and communications. Examples include controlling interference between radio frequency communications signals, and managing aliasing distortion associated with sampling for digital signal processing. Bandlimited signals A bandlimited signal is, strictly speaking, a signal with zero energy outside of a defined frequency range. In practice, a signal is considered bandlimited if its energy outside of a frequency range is low enough to be considered negligible in a given application. A bandlimited signal may be either random (stochastic) or non-random (deterministic). In general, infinitely many terms are required in a continuous Fourier series representation of a signal, but if a finite number of Fourier series terms can be calculated from that signal, that signal is considered to be band-limited. In mathematical terminology, a bandlimited signal has a Fourier transform or spectral density with bounded support. Sampling bandlimited signals A bandlimited signal can be fully reconstructed from its samples, provided that the sampling rate exceeds twice the bandwidth of the signal. This minimum sampling rate is called the Nyquist rate and is associated with the Nyquist–Shannon sampling theorem. Real-world signals are not strictly bandlimited, and signals of interest typically h
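In symbols (a standard formulation added here for clarity, not quoted from the article): a signal x(t) with Fourier transform X(f) is bandlimited to bandwidth B when

```latex
\[
  X(f) = 0 \quad \text{for } |f| > B,
\]
% and the sampling condition described above (sampling above the Nyquist rate) is
\[
  f_s > 2B .
\]
```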
https://en.wikipedia.org/wiki/Sorting%20network
In computer science, comparator networks are abstract devices built up of a fixed number of "wires", carrying values, and comparator modules that connect pairs of wires, swapping the values on the wires if they are not in a desired order. Such networks are typically designed to perform sorting on fixed numbers of values, in which case they are called sorting networks. Sorting networks differ from general comparison sorts in that they are not capable of handling arbitrarily large inputs, and in that their sequence of comparisons is set in advance, regardless of the outcome of previous comparisons. In order to sort larger amounts of inputs, new sorting networks must be constructed. This independence of comparison sequences is useful for parallel execution and for implementation in hardware. Despite the simplicity of sorting nets, their theory is surprisingly deep and complex. Sorting networks were first studied circa 1954 by Armstrong, Nelson and O'Connor, who subsequently patented the idea. Sorting networks can be implemented either in hardware or in software. Donald Knuth describes how the comparators for binary integers can be implemented as simple, three-state electronic devices. Batcher, in 1968, suggested using them to construct switching networks for computer hardware, replacing both buses and the faster, but more expensive, crossbar switches. Since the 2000s, sorting nets (especially bitonic mergesort) are used by the GPGPU community for constructing sorting algorithm
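A comparator network can be written down directly as a fixed list of index pairs. The Python sketch below (illustrative) applies the standard optimal 5-comparator sorting network for 4 inputs; the comparison sequence is fixed in advance, independent of the data, exactly as described above.

```python
from itertools import permutations

# A sorting network for 4 inputs: a fixed sequence of 5 comparators.
# Each comparator (i, j) swaps positions i and j if they are out of order.
NETWORK_4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

def apply_network(values, network):
    v = list(values)
    for i, j in network:
        if v[i] > v[j]:
            v[i], v[j] = v[j], v[i]
    return v

# The same fixed comparison sequence sorts every possible input of length 4.
assert all(apply_network(p, NETWORK_4) == sorted(p) for p in permutations(range(4)))
print(apply_network([3, 1, 4, 2], NETWORK_4))   # [1, 2, 3, 4]
```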
https://en.wikipedia.org/wiki/Brillouin%20zone
In mathematics and solid state physics, the first Brillouin zone (named after Léon Brillouin) is a uniquely defined primitive cell in reciprocal space. In the same way the Bravais lattice is divided up into Wigner–Seitz cells in the real lattice, the reciprocal lattice is broken up into Brillouin zones. The boundaries of this cell are given by planes related to points on the reciprocal lattice. The importance of the Brillouin zone stems from the description of waves in a periodic medium given by Bloch's theorem, in which it is found that the solutions can be completely characterized by their behavior in a single Brillouin zone. The first Brillouin zone is the locus of points in reciprocal space that are closer to the origin of the reciprocal lattice than they are to any other reciprocal lattice points (see the derivation of the Wigner–Seitz cell). Another definition is as the set of points in k-space that can be reached from the origin without crossing any Bragg plane. Equivalently, this is the Voronoi cell around the origin of the reciprocal lattice. There are also second, third, etc., Brillouin zones, corresponding to a sequence of disjoint regions (all with the same volume) at increasing distances from the origin, but these are used less frequently. As a result, the first Brillouin zone is often called simply the Brillouin zone. In general, the n-th Brillouin zone consists of the set of points that can be reached from the origin by crossing exactly n − 1 distinct Bra
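As a small illustration of the first definition (closer to the origin than to any other reciprocal lattice point), the following Python sketch tests whether a k-point lies in the first Brillouin zone of a 2D lattice; the reciprocal basis vectors and the number of lattice shells checked are assumed example values.

```python
import numpy as np

# Reciprocal lattice basis of a 2D square lattice with lattice constant a = 1:
b1 = 2 * np.pi * np.array([1.0, 0.0])
b2 = 2 * np.pi * np.array([0.0, 1.0])

def in_first_bz(k, shells=2):
    """k is in the first Brillouin zone if it is at least as close to the origin
    as to every other reciprocal lattice point G = m*b1 + n*b2 (nearby shells suffice)."""
    for m in range(-shells, shells + 1):
        for n in range(-shells, shells + 1):
            if m == 0 and n == 0:
                continue
            G = m * b1 + n * b2
            if np.linalg.norm(k - G) < np.linalg.norm(k):
                return False
    return True

print(in_first_bz(np.array([0.5, 0.0])))   # True  (well inside the zone)
print(in_first_bz(np.array([4.0, 0.0])))   # False (closer to the lattice point b1)
```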
https://en.wikipedia.org/wiki/Butterworth%20filter
The Butterworth filter is a type of signal processing filter designed to have a frequency response that is as flat as possible in the passband. It is also referred to as a maximally flat magnitude filter. It was first described in 1930 by the British engineer and physicist Stephen Butterworth in his paper entitled "On the Theory of Filter Amplifiers". Original paper Butterworth had a reputation for solving very complex mathematical problems thought to be 'impossible'. At the time, filter design required a considerable amount of designer experience due to limitations of the theory then in use. The filter was not in common use for over 30 years after its publication. Butterworth stated that: Such an ideal filter cannot be achieved, but Butterworth showed that successively closer approximations were obtained with increasing numbers of filter elements of the right values. At the time, filters generated substantial ripple in the passband, and the choice of component values was highly interactive. Butterworth showed that a low-pass filter could be designed whose cutoff frequency was normalized to 1 radian per second and whose frequency response (gain) was G(ω) = 1/√(1 + ω^(2n)), where ω is the angular frequency in radians per second and n is the number of poles in the filter—equal to the number of reactive elements in a passive filter. If ω = 1, the amplitude response of this type of filter in the passband is 1/√2 ≈ 0.7071, which is half power or −3 dB. Butterworth only dealt with filters with an even num
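A quick numerical check of the normalized gain formula above (a sketch of the formula itself, not of any particular filter implementation):

```python
import numpy as np

def butterworth_gain(omega, n):
    """Magnitude response of an n-pole Butterworth low-pass filter normalized
    to a cutoff of 1 rad/s: G(omega) = 1 / sqrt(1 + omega**(2*n))."""
    return 1.0 / np.sqrt(1.0 + omega ** (2 * n))

for n in (1, 2, 4, 8):
    print(n, butterworth_gain(1.0, n))     # always 1/sqrt(2) ~= 0.7071, i.e. -3 dB at cutoff

# Higher orders roll off faster above the cutoff while staying maximally flat below it.
print(butterworth_gain(2.0, 2), butterworth_gain(2.0, 8))
```

Practical digital designs are usually generated with a filter-design routine (for example scipy.signal.butter) rather than from this normalized formula directly.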
https://en.wikipedia.org/wiki/128%20%28number%29
128 (one hundred [and] twenty-eight) is the natural number following 127 and preceding 129. In mathematics 128 is the seventh power of 2. It is the largest number which cannot be expressed as the sum of any number of distinct squares. However, it is divisible by the total number of its divisors, making it a refactorable number. The sum of Euler's totient function φ(x) over the first twenty integers is 128. 128 can be expressed by a combination of its digits with mathematical operators, thus 128 = 2^(8 − 1), making it a Friedman number in base 10. A hepteract has 128 vertices. 128 is the only 3-digit number that is a 7th power (2^7). In bar codes Code 128 is a Uniform Symbology Specification (USS Code 128) alphanumeric bar code that encodes text, numbers, numerous functions, and is designed to encode all 128 ASCII characters (ASCII 0 to ASCII 127), as used in the shipping industry. Subdivisions include: 128A (0–9, A–Z, ASCII control codes, special characters) 128B (0–9, A–Z, a–z, special characters) 128C (00–99 numeric characters) GS1-128 application standard of the GS1 implementation using the Code 128 barcode specification ISBT 128 system for blood product labeling for the International Society of Blood Transfusion In computing 128-bit key size encryption for secure communications over the Internet IPv6 uses 128-bit (16-byte) addresses Any bit with a binary prefix is 128 bytes of a lesser binary prefix value, such as 1 gibibit is 128 mebibytes 128-bit integers, memo
https://en.wikipedia.org/wiki/175%20%28number%29
175 (one hundred [and] seventy-five) is the natural number following 174 and preceding 176. In mathematics Raising the decimal digits of 175 to the powers of successive integers produces 175 back again: 1^1 + 7^2 + 5^3 = 1 + 49 + 125 = 175. 175 is a figurate number for a rhombic dodecahedron, the difference of two consecutive fourth powers: 175 = 4^4 − 3^4. It is also a decagonal number and a decagonal pyramid number, the smallest number after 1 that has both properties. In other fields In the Book of Genesis 25:7-8, Abraham is said to have lived to be 175 years old. 175 is the fire emergency number in Lebanon. See also The year AD 175 or 175 BC List of highways numbered 175 References Integers
https://en.wikipedia.org/wiki/MathWorks
MathWorks is an American privately held corporation that specializes in mathematical computing software. Its major products include MATLAB and Simulink, which support data analysis and simulation. History The company's key product, MATLAB, was created in the 1970s by Cleve Moler, who was chairman of the computer science department at the University of New Mexico at the time. It was a free tool for academics. Jack Little, who would eventually set up the company, came across the tool while he was a graduate student in electrical engineering at Stanford University. Little and Steve Bangert rewrote the code for MATLAB in C while they were colleagues at an engineering firm. They founded MathWorks along with Moler in 1984, with Little running it out of his house in Portola Valley, California. Little would mail diskettes in baggies (food storage bags) to the first customers. The company sold its first order, 10 copies of MATLAB, for $500 to the Massachusetts Institute of Technology (MIT) in February 1985. A few years later, Little and the company moved to Massachusetts. There, Little hired Jeanne O'Keefe, an experienced computer executive, to help formalize the business. By 1997, MathWorks was profitable, claiming revenue of around $50 million, and had around 380 employees. In 1999, MathWorks relocated to the Apple Hill office complex in Natick, Massachusetts, purchasing additional buildings in the complex in 2008 and 2009, ultimately occupying the entire campus. MathWorks expa
https://en.wikipedia.org/wiki/Vertex%20cover
In graph theory, a vertex cover (sometimes node cover) of a graph is a set of vertices that includes at least one endpoint of every edge of the graph. In computer science, the problem of finding a minimum vertex cover is a classical optimization problem. It is NP-hard, so it cannot be solved by a polynomial-time algorithm if P ≠ NP. Moreover, it is hard to approximate – it cannot be approximated up to a factor smaller than 2 if the unique games conjecture is true. On the other hand, it has several simple 2-factor approximations. It is a typical example of an NP-hard optimization problem that has an approximation algorithm. Its decision version, the vertex cover problem, was one of Karp's 21 NP-complete problems and is therefore a classical NP-complete problem in computational complexity theory. Furthermore, the vertex cover problem is fixed-parameter tractable and a central problem in parameterized complexity theory. The minimum vertex cover problem can be formulated as a half-integral, linear program whose dual linear program is the maximum matching problem. Vertex cover problems have been generalized to hypergraphs, see Vertex cover in hypergraphs. Definition Formally, a vertex cover V′ of an undirected graph G = (V, E) is a subset of V such that every edge uv ∈ E has u ∈ V′ or v ∈ V′, that is to say it is a set of vertices V′ where every edge has at least one endpoint in the vertex cover V′. Such a set is said to cover the edges of G. The upper figure shows two examples of vertex covers, with some vertex cover marked i
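One of the simple 2-factor approximations mentioned above takes a maximal matching and returns both endpoints of every matched edge. A minimal Python sketch (the graph data is illustrative, not from the article):

```python
def vertex_cover_2approx(edges):
    """Greedy maximal-matching 2-approximation for minimum vertex cover.
    Any optimal cover must contain at least one endpoint of each matched edge,
    so the returned cover is at most twice the size of the optimum."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # edge not yet covered: match it
            cover.add(u)
            cover.add(v)
    return cover

# Example graph: a path 1-2-3-4 plus an edge 2-5 (illustrative data).
edges = [(1, 2), (2, 3), (3, 4), (2, 5)]
print(vertex_cover_2approx(edges))   # e.g. {1, 2, 3, 4}; the optimum here is {2, 3}
```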
https://en.wikipedia.org/wiki/POP-11
POP-11 is a reflective, incrementally compiled programming language with many of the features of an interpreted language. It is the core language of the Poplog programming environment developed originally by the University of Sussex, and recently in the School of Computer Science at the University of Birmingham, which hosts the main Poplog website. POP-11 is an evolution of the language POP-2, developed in Edinburgh University, and features an open stack model (like Forth, among others). It is mainly procedural, but supports declarative language constructs, including a pattern matcher, and is mostly used for research and teaching in artificial intelligence, although it has features sufficient for many other classes of problems. It is often used to introduce symbolic programming techniques to programmers of more conventional languages like Pascal, who find POP syntax more familiar than that of Lisp. One of POP-11's features is that it supports first-class functions. POP-11 is the core language of the Poplog system. The availability of the compiler and compiler subroutines at run-time (a requirement for incremental compilation) gives it the ability to support a far wider range of extensions (including run-time extensions, such as adding new data-types) than would be possible using only a macro facility. This made it possible for (optional) incremental compilers to be added for Prolog, Common Lisp and Standard ML, which could be added as required to support either mixed langua
https://en.wikipedia.org/wiki/Higher-order%20logic
In mathematics and logic, a higher-order logic (abbreviated HOL) is a form of predicate logic that is distinguished from first-order logic by additional quantifiers and, sometimes, stronger semantics. Higher-order logics with their standard semantics are more expressive, but their model-theoretic properties are less well-behaved than those of first-order logic. The term "higher-order logic" is commonly used to mean higher-order simple predicate logic. Here "simple" indicates that the underlying type theory is the theory of simple types, also called the simple theory of types. Leon Chwistek and Frank P. Ramsey proposed this as a simplification of the complicated and clumsy ramified theory of types specified in the Principia Mathematica by Alfred North Whitehead and Bertrand Russell. Simple types is sometimes also meant to exclude polymorphic and dependent types. Quantification scope First-order logic quantifies only variables that range over individuals; second-order logic, also quantifies over sets; third-order logic also quantifies over sets of sets, and so on. Higher-order logic is the union of first-, second-, third-, ..., nth-order logic; i.e., higher-order logic admits quantification over sets that are nested arbitrarily deeply. Semantics There are two possible semantics for higher-order logic. In the standard or full semantics, quantifiers over higher-type objects range over all possible objects of that type. For example, a quantifier over sets of individuals r
https://en.wikipedia.org/wiki/Paracrine%20signaling
In cellular biology, paracrine signaling is a form of cell signaling, a type of cellular communication in which a cell produces a signal to induce changes in nearby cells, altering the behaviour of those cells. Signaling molecules known as paracrine factors diffuse over a relatively short distance (local action), as opposed to cell signaling by endocrine factors, hormones which travel considerably longer distances via the circulatory system; juxtacrine interactions; and autocrine signaling. Cells that produce paracrine factors secrete them into the immediate extracellular environment. Factors then travel to nearby cells in which the gradient of factor received determines the outcome. However, the exact distance that paracrine factors can travel is not certain. Although paracrine signaling elicits a diverse array of responses in the induced cells, most paracrine factors utilize a relatively streamlined set of receptors and pathways. In fact, different organs in the body - even between different species - are known to utilize a similar sets of paracrine factors in differential development. The highly conserved receptors and pathways can be organized into four major families based on similar structures: fibroblast growth factor (FGF) family, Hedgehog family, Wnt family, and TGF-β superfamily. Binding of a paracrine factor to its respective receptor initiates signal transduction cascades, eliciting different responses. Paracrine factors induce competent responders In order for
https://en.wikipedia.org/wiki/Approximation%20algorithm
In computer science and operations research, approximation algorithms are efficient algorithms that find approximate solutions to optimization problems (in particular NP-hard problems) with provable guarantees on the distance of the returned solution to the optimal one. Approximation algorithms naturally arise in the field of theoretical computer science as a consequence of the widely believed P ≠ NP conjecture. Under this conjecture, a wide class of optimization problems cannot be solved exactly in polynomial time. The field of approximation algorithms, therefore, tries to understand how closely it is possible to approximate optimal solutions to such problems in polynomial time. In an overwhelming majority of the cases, the guarantee of such algorithms is a multiplicative one expressed as an approximation ratio or approximation factor i.e., the optimal solution is always guaranteed to be within a (predetermined) multiplicative factor of the returned solution. However, there are also many approximation algorithms that provide an additive guarantee on the quality of the returned solution. A notable example of an approximation algorithm that provides both is the classic approximation algorithm of Lenstra, Shmoys and Tardos for scheduling on unrelated parallel machines. The design and analysis of approximation algorithms crucially involves a mathematical proof certifying the quality of the returned solutions in the worst case. This distinguishes them from heuristics such as ann
https://en.wikipedia.org/wiki/Biological%20rhythm
Biological rhythms are repetitive biological processes. Some types of biological rhythms have been described as biological clocks. They can range in frequency from microseconds to less than one repetitive event per decade. Biological rhythms are studied by chronobiology. In the biochemical context biological rhythms are called biochemical oscillations. The variations of the timing and duration of biological activity in living organisms occur for many essential biological processes. These occur (a) in animals (eating, sleeping, mating, hibernating, migration, cellular regeneration, etc.), (b) in plants (leaf movements, photosynthetic reactions, etc.), and in microbial organisms such as fungi and protozoa. They have even been found in bacteria, especially among the cyanobacteria (aka blue-green algae, see bacterial circadian rhythms). Circadian rhythm The best studied rhythm in chronobiology is the circadian rhythm, a roughly 24-hour cycle shown by physiological processes in all these organisms. The term circadian comes from the Latin circa, meaning "around" and dies, "day", meaning "approximately a day." It is regulated by circadian clocks. The circadian rhythm can further be broken down into routine cycles during the 24-hour day: Diurnal, which describes organisms active during daytime Nocturnal, which describes organisms active in the night Crepuscular, which describes animals primarily active during the dawn and dusk hours (ex: white-tailed deer, some bats) While cir
https://en.wikipedia.org/wiki/KCL
KCL or KCl may refer to: Science and technology Potassium chloride (KCl), a metal halide salt Keycode lookup, keycode log, or keycode list Kirchhoff's current law, in physics Kyoto Common Lisp, an implementation of Common Lisp Other uses King's College London, a public research university in London, UK and a constituent college of the University of London
https://en.wikipedia.org/wiki/Torus-based%20cryptography
Torus-based cryptography involves using algebraic tori to construct a group for use in ciphers based on the discrete logarithm problem. This idea was first introduced by Alice Silverberg and Karl Rubin in 2003 in the form of a public key algorithm by the name of CEILIDH. It improves on conventional cryptosystems by representing some elements of large finite fields compactly and therefore transmitting fewer bits. See also Torus References Karl Rubin, Alice Silverberg: Torus-Based Cryptography. CRYPTO 2003: 349–365 External links Torus-Based Cryptography — the paper introducing the concept (in PDF). Public-key cryptography
https://en.wikipedia.org/wiki/369%20%28number%29
Three hundred sixty-nine is the natural number following three hundred sixty-eight and preceding three hundred seventy. In mathematics 369 is the magic constant of the 9 × 9 magic square and the n-Queens Problem for n = 9. There are 369 free octominoes (polyominoes of order 8). 369 and 370 form a Ruth–Aaron pair: the sums of their distinct prime factors are equal (3 + 41 = 2 + 5 + 37 = 44). References Integers
https://en.wikipedia.org/wiki/Biogenic%20substance
A biogenic substance is a product made by or of life forms. While the term originally was specific to metabolite compounds that had toxic effects on other organisms, it has developed to encompass any constituents, secretions, and metabolites of plants or animals. In context of molecular biology, biogenic substances are referred to as biomolecules. They are generally isolated and measured through the use of chromatography and mass spectrometry techniques. Additionally, the transformation and exchange of biogenic substances can by modelled in the environment, particularly their transport in waterways. The observation and measurement of biogenic substances is notably important in the fields of geology and biochemistry. A large proportion of isoprenoids and fatty acids in geological sediments are derived from plants and chlorophyll, and can be found in samples extending back to the Precambrian. These biogenic substances are capable of withstanding the diagenesis process in sediment, but may also be transformed into other materials. This makes them useful as biomarkers for geologists to verify the age, origin and degradation processes of different rocks. Biogenic substances have been studied as part of marine biochemistry since the 1960s, which has involved investigating their production, transport, and transformation in the water, and how they may be used in industrial applications. A large fraction of biogenic compounds in the marine environment are produced by micro and macro
https://en.wikipedia.org/wiki/Perimeter%20Institute%20for%20Theoretical%20Physics
Perimeter Institute for Theoretical Physics (PI, Perimeter, PITP) is an independent research centre in foundational theoretical physics located in Waterloo, Ontario, Canada. It was founded in 1999. The institute's founding and major benefactor is Canadian entrepreneur and philanthropist Mike Lazaridis. The original building, designed by Saucier + Perrotte, opened in 2004 and was awarded a Governor General's Medal for Architecture in 2006. The Stephen Hawking Centre, designed by Teeple Architects, was opened in 2011 and was LEED Silver certified in 2015. In addition to research, Perimeter also provides scientific training and educational outreach activities to the general public. This is done in part through Perimeter's Educational Outreach team. History In 1999, Howard Burton—who had a PhD in theoretical physics from the University of Waterloo—emailed Mike Lazaridis along with 20 CEO's in an attempt to leave his Wall Street job. Lazaridis then pitched the idea of the Perimeter Institute to Burton as he wanted to use his BlackBerry wealth for a philanthropic endeavour. Lazaridis' initial donation of $100 million was announced on October 23, 2000, believed to be the biggest private donation in Canadian history to that point. Jim Balsille and Doug Fregin each donated $10 million. The city of Waterloo offered four sites of land for free; Lazaridis chose the former site of the Waterloo Memorial Arena (near Uptown Waterloo). Research operations began in 2001, in a temporary si
https://en.wikipedia.org/wiki/Bob%20and%20George
Bob and George was a sprite-based webcomic which parodied the fictional universe of Mega Man. It was written by David Anez, who at the time was a physics instructor living in the American Midwest. The comic first appeared on April 1, 2000, and ran until July 28, 2007. It was updated daily, with there being only 29 days without a comic in its seven years of production and with 2568 comics being made altogether. Most Bob and George strips are still images. The initial strips were mostly done in GIF format (occasionally using JPEG for more graphic-intensive comics) before converting to PNG in May 2004. In addition, occasional comics are animated using either animated GIFs or Macromedia Flash. Some of the Flash comics have the characters speaking, voiced by Anez and others (often forum members). Animated comics are generally used for the annual week-long anniversary parties (usually culminating in a brief animated comic that recaps the events of the past year in a matter of seconds), for especially climactic scenes, and for a series of videos depicting an in-comic event known as "the Cataclysm". The comic's plot is mostly made up of story arcs of varying lengths. Amongst past story arcs there have been retellings of various Mega Man games (which often play out quite differently from the originals), as well as battles against powerful foes. In addition, many of the story arcs involve time travel, dimensional travel, and villains who want to kill all the characters. History Bob
https://en.wikipedia.org/wiki/Howard%20P.%20Robertson
Howard Percy "Bob" Robertson (January 27, 1903 – August 26, 1961) was an American mathematician and physicist known for contributions related to physical cosmology and the uncertainty principle. He was Professor of Mathematical Physics at the California Institute of Technology and Princeton University. Robertson made important contributions to the mathematics of quantum mechanics, general relativity and differential geometry. Applying relativity to cosmology, he independently developed the concept of an expanding universe. His name is most often associated with the Poynting–Robertson effect, the process by which solar radiation causes a dust mote orbiting a star to lose angular momentum, which he also described in terms of general relativity. During World War II, Robertson served with the National Defense Research Committee (NDRC) and the Office of Scientific Research and Development (OSRD). He served as technical consultant to the Secretary of War, the OSRD Liaison Officer in London, and the Chief of the Scientific Intelligence Advisory Section at Supreme Headquarters Allied Expeditionary Force. After the war Robertson was director of the Weapons Systems Evaluation Group in the Office of the Secretary of Defense from 1950 to 1952, chairman of the Robertson Panel on UFOs in 1953 and scientific advisor to the NATO Supreme Allied Commander Europe (SACEUR) in 1954 and 1955. He was chairman of the Defense Science Board from 1956 to 1961, and a member of the President's Science
https://en.wikipedia.org/wiki/Gell-Mann%20matrices
The Gell-Mann matrices, developed by Murray Gell-Mann, are a set of eight linearly independent 3×3 traceless Hermitian matrices used in the study of the strong interaction in particle physics. They span the Lie algebra of the SU(3) group in the defining representation. Matrices Written row by row, the eight matrices are λ1 = ((0, 1, 0), (1, 0, 0), (0, 0, 0)), λ2 = ((0, −i, 0), (i, 0, 0), (0, 0, 0)), λ3 = ((1, 0, 0), (0, −1, 0), (0, 0, 0)), λ4 = ((0, 0, 1), (0, 0, 0), (1, 0, 0)), λ5 = ((0, 0, −i), (0, 0, 0), (i, 0, 0)), λ6 = ((0, 0, 0), (0, 0, 1), (0, 1, 0)), λ7 = ((0, 0, 0), (0, 0, −i), (0, i, 0)), λ8 = (1/√3)·((1, 0, 0), (0, 1, 0), (0, 0, −2)). Properties These matrices are traceless, Hermitian, and obey the extra trace orthonormality relation tr(λa λb) = 2δab (so they can generate unitary matrix group elements of SU(3) through exponentiation). These properties were chosen by Gell-Mann because they then naturally generalize the Pauli matrices for SU(2) to SU(3), which formed the basis for Gell-Mann's quark model. Gell-Mann's generalization further extends to general SU(n). For their connection to the standard basis of Lie algebras, see the Weyl–Cartan basis. Trace orthonormality In mathematics, orthonormality typically implies a norm which has a value of unity (1). Gell-Mann matrices, however, are normalized to a value of 2. Thus, the trace of the pairwise product results in the ortho-normalization condition tr(λa λb) = 2δab, where δab is the Kronecker delta. This is so the embedded Pauli matrices corresponding to the three embedded subalgebras of SU(2) are conventionally normalized. In this three-dimensional matrix representation, the Cartan subalgebra is the set of linear combinations (with real coefficients) of the two matrices λ3 and λ8, which commute with each other. There are three s
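A short NumPy check of the trace-orthonormality condition tr(λa λb) = 2δab (a verification sketch added here, not part of the original article):

```python
import numpy as np

s3 = 1 / np.sqrt(3)
lam = [
    np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex),       # lambda_1
    np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]]),                   # lambda_2
    np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], dtype=complex),      # lambda_3
    np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex),       # lambda_4
    np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]]),                   # lambda_5
    np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex),       # lambda_6
    np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]]),                   # lambda_7
    s3 * np.array([[1, 0, 0], [0, 1, 0], [0, 0, -2]], dtype=complex), # lambda_8
]

# Traceless, Hermitian, and tr(lambda_a lambda_b) = 2 * delta_ab:
for a, A in enumerate(lam):
    assert abs(np.trace(A)) < 1e-12
    assert np.allclose(A, A.conj().T)
    for b, B in enumerate(lam):
        assert np.isclose(np.trace(A @ B), 2.0 if a == b else 0.0)
print("all eight matrices satisfy tr(lambda_a lambda_b) = 2 delta_ab")
```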
https://en.wikipedia.org/wiki/Global%20optimization
Global optimization is a branch of applied mathematics and numerical analysis that attempts to find the global minima or maxima of a function or a set of functions on a given set. It is usually described as a minimization problem because the maximization of the real-valued function g(x) is equivalent to the minimization of the function f(x) := −g(x). Given a possibly nonlinear and non-convex continuous function f : Ω ⊂ ℝ^n → ℝ with the global minimum f* and the set of all global minimizers X* in Ω, the standard minimization problem can be given as min_{x ∈ Ω} f(x), that is, finding f* and a global minimizer in X*; where Ω is a (not necessarily convex) compact set defined by inequalities g_i(x) ≥ 0, i = 1, …, r. Global optimization is distinguished from local optimization by its focus on finding the minimum or maximum over the given set, as opposed to finding local minima or maxima. Finding an arbitrary local minimum is relatively straightforward by using classical local optimization methods. Finding the global minimum of a function is far more difficult: analytical methods are frequently not applicable, and the use of numerical solution strategies often leads to very hard challenges. Applications Typical examples of global optimization applications include: Protein structure prediction (minimize the energy/free energy function) Computational phylogenetics (e.g., minimize the number of character transformations in the tree) Traveling salesman problem and electrical circuit design (minimize the path length) Chemical engineering (e.g., analyzing the
https://en.wikipedia.org/wiki/BSGS
The initialism BSGS has two meanings, both related to group theory in mathematics: Baby-step giant-step, an algorithm for solving the discrete logarithm problem The combination of a base and strong generating set (SGS) for a permutation group
https://en.wikipedia.org/wiki/Baby-step%20giant-step
In group theory, a branch of mathematics, the baby-step giant-step is a meet-in-the-middle algorithm for computing the discrete logarithm or order of an element in a finite abelian group by Daniel Shanks. The discrete log problem is of fundamental importance to the area of public key cryptography. Many of the most commonly used cryptography systems are based on the assumption that the discrete log is extremely difficult to compute; the more difficult it is, the more security it provides a data transfer. One way to increase the difficulty of the discrete log problem is to base the cryptosystem on a larger group. Theory The algorithm is based on a space–time tradeoff. It is a fairly simple modification of trial multiplication, the naive method of finding discrete logarithms. Given a cyclic group G of order n, a generator α of the group and a group element β, the problem is to find an integer x such that α^x = β. The baby-step giant-step algorithm is based on rewriting x as x = im + j, with m = ⌈√n⌉, 0 ≤ i < m and 0 ≤ j < m. Therefore, we have: α^j = β(α^(−m))^i. The algorithm precomputes α^j for several values of j. Then it fixes an m and tries values of i in the right-hand side of the congruence above, in the manner of trial multiplication. It tests to see if the congruence is satisfied for any value of j, using the precomputed values of α^j. The algorithm Input: A cyclic group G of order n, having a generator α and an element β. Output: A value x satisfying α^x = β. m ← Ceiling(√n) For all j where 0 ≤ j < m: Compute α^j and store the pair (j, α^j) in a table. (See
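A direct Python transcription of the algorithm, written for the multiplicative group modulo a prime (the group, prime and exponent below are toy illustrative values, far too small for cryptographic use):

```python
from math import isqrt

def baby_step_giant_step(alpha, beta, n, p):
    """Solve alpha**x == beta (mod p), where alpha has order n in Z_p^*.
    Returns an exponent x satisfying the congruence, or None if none exists."""
    m = isqrt(n) + 1                                   # m = ceil(sqrt(n))
    table = {pow(alpha, j, p): j for j in range(m)}    # baby steps: alpha^j
    factor = pow(alpha, n - m, p)                      # alpha^(-m), using alpha^n = 1
    gamma = beta % p
    for i in range(m):                                 # giant steps: beta * (alpha^-m)^i
        if gamma in table:
            return i * m + table[gamma]
        gamma = (gamma * factor) % p
    return None

# Toy example: alpha = 2 generates Z_29^*, which has order n = 28.
p, alpha, n = 29, 2, 28
beta = pow(alpha, 17, p)
x = baby_step_giant_step(alpha, beta, n, p)
print(x, pow(alpha, x, p) == beta)                     # 17 True
```

The table of baby steps costs O(√n) memory, and the giant-step loop costs O(√n) group operations, which is the space-time tradeoff mentioned above.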
https://en.wikipedia.org/wiki/Look-and-say%20sequence
In mathematics, the look-and-say sequence is the sequence of integers beginning as follows: 1, 11, 21, 1211, 111221, 312211, 13112221, 1113213211, 31131211131221, ... . To generate a member of the sequence from the previous member, read off the digits of the previous member, counting the number of digits in groups of the same digit. For example: 1 is read off as "one 1" or 11. 11 is read off as "two 1s" or 21. 21 is read off as "one 2, one 1" or 1211. 1211 is read off as "one 1, one 2, two 1s" or 111221. 111221 is read off as "three 1s, two 2s, one 1" or 312211. The look-and-say sequence was analyzed by John Conway after he was introduced to it by one of his students at a party. The idea of the look-and-say sequence is similar to that of run-length encoding. If started with any digit d from 0 to 9 then d will remain indefinitely as the last digit of the sequence. For any d other than 1, the sequence starts as follows: d, 1d, 111d, 311d, 13211d, 111312211d, 31131122211d, … Ilan Vardi has called this sequence, starting with d = 3, the Conway sequence . (for d = 2, see ) Basic properties Growth The sequence grows indefinitely. In fact, any variant defined by starting with a different integer seed number will (eventually) also grow indefinitely, except for the degenerate sequence: 22, 22, 22, 22, ... Digits presence limitation No digits other than 1, 2, and 3 appear in the sequence, unless the seed number contains such a digit or a run of more than three of the
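The generation rule translates directly into a few lines of Python (an illustrative sketch using run-length grouping):

```python
from itertools import groupby

def look_and_say(term: str) -> str:
    """Read off the previous term: for each run of equal digits,
    emit the run length followed by the digit."""
    return "".join(str(len(list(run))) + digit for digit, run in groupby(term))

seq = ["1"]
for _ in range(7):
    seq.append(look_and_say(seq[-1]))
print(seq)   # ['1', '11', '21', '1211', '111221', '312211', '13112221', '1113213211']
```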
https://en.wikipedia.org/wiki/Graph%20reduction
In computer science, graph reduction implements an efficient version of non-strict evaluation, an evaluation strategy where the arguments to a function are not immediately evaluated. This form of non-strict evaluation is also known as lazy evaluation and used in functional programming languages. The technique was first developed by Chris Wadsworth in 1971. Motivation A simple example of evaluating an arithmetic expression follows: The above reduction sequence employs a strategy known as outermost tree reduction. The same expression can be evaluated using innermost tree reduction, yielding the reduction sequence: Notice that the reduction order is made explicit by the addition of parentheses. This expression could also have been simply evaluated right to left, because addition is an associative operation. Represented as a tree, the expression above looks like this: This is where the term tree reduction comes from. When represented as a tree, we can think of innermost reduction as working from the bottom up, while outermost works from the top down. The expression can also be represented as a directed acyclic graph, allowing sub-expressions to be shared: As for trees, outermost and innermost reduction also applies to graphs. Hence we have graph reduction. Now evaluation with outermost graph reduction can proceed as follows: Notice that evaluation now only requires four steps. Outermost graph reduction is referred to as lazy evaluation and innermost graph reduction is
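The sharing idea can be sketched in Python with memoized thunks: a shared sub-expression node is evaluated at most once and its result reused, which is what distinguishes graph reduction from plain lazy tree reduction. The class and example below are illustrative, not taken from the article.

```python
class Thunk:
    """A node in an expression graph, evaluated at most once (laziness plus sharing)."""
    def __init__(self, fn, *args):
        self.fn, self.args, self.done, self.value = fn, args, False, None

    def force(self):
        if not self.done:
            evaluated = [a.force() if isinstance(a, Thunk) else a for a in self.args]
            self.value, self.done = self.fn(*evaluated), True   # store the result on the node
        return self.value

calls = 0
def add(x, y):
    global calls
    calls += 1
    return x + y

shared = Thunk(add, 2, 2)              # the sub-graph (2 + 2), built once
root = Thunk(add, shared, shared)      # (2 + 2) + (2 + 2), with the sub-graph shared
print(root.force(), calls)             # 8 2  -- add ran twice, not three times as in tree reduction
```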
https://en.wikipedia.org/wiki/Tun
TUN or tun may refer to: Biology Tun shells, large sea snails of the family Tonnidae Tun, a tardigrade in its cryptobiotic state Tun or Toon, common name for trees of the genus Toona Places Tun, Sweden, a locality in Västra Götaland County Tūn or Toon, the former name of Ferdows, a city in Iran Touro University Nevada, a private university in Henderson, Nevada, United States Tunisia, ISO 3166-1 alpha-3 country code Tunis–Carthage International Airport, (IATA airport code: TUN) Old English meaning town. Often used as a suffix in its Romanised form (~ton) e.g.: Southampton Measurement and time Tun (Maya calendar), a unit of 360 days on the Maya calendar Tun (unit), an antiquated measurement of liquid Science and technology TUN/TAP, a computer network device driver TUN (product standard), Danish building materials numbering system Other uses Brilliance Tun, a 2014–2015 Chinese city car Tun, an honorific Malay title Tun, a type of cask (barrel) with a capacity of 252 wine gallons (954 litres) Lauter tun, a vessel used in brewing Mash tun, a vessel used in brewing See also Ton (disambiguation)
https://en.wikipedia.org/wiki/Site-directed%20mutagenesis
Site-directed mutagenesis is a molecular biology method that is used to make specific and intentional mutating changes to the DNA sequence of a gene and any gene products. Also called site-specific mutagenesis or oligonucleotide-directed mutagenesis, it is used for investigating the structure and biological activity of DNA, RNA, and protein molecules, and for protein engineering. Site-directed mutagenesis is one of the most important laboratory techniques for creating DNA libraries by introducing mutations into DNA sequences. There are numerous methods for achieving site-directed mutagenesis, but with decreasing costs of oligonucleotide synthesis, artificial gene synthesis is now occasionally used as an alternative to site-directed mutagenesis. Since 2013, the development of the CRISPR/Cas9 technology, based on a prokaryotic viral defense system, has also allowed for the editing of the genome, and mutagenesis may be performed in vivo with relative ease. History Early attempts at mutagenesis using radiation or chemical mutagens were non-site-specific, generating random mutations. Analogs of nucleotides and other chemicals were later used to generate localized point mutations, examples of such chemicals are aminopurine, nitrosoguanidine, and bisulfite. Site-directed mutagenesis was achieved in 1974 in the laboratory of Charles Weissmann using a nucleotide analogue N4-hydroxycytidine, which induces transition of GC to AT. These methods of mutagenesis, however, are limited by
https://en.wikipedia.org/wiki/Catalan%20solid
In mathematics, a Catalan solid, or Archimedean dual, is a polyhedron that is dual to an Archimedean solid. There are 13 Catalan solids. They are named for the Belgian mathematician Eugène Catalan, who first described them in 1865. The Catalan solids are all convex. They are face-transitive but not vertex-transitive. This is because the dual Archimedean solids are vertex-transitive and not face-transitive. Note that unlike Platonic solids and Archimedean solids, the faces of Catalan solids are not regular polygons. However, the vertex figures of Catalan solids are regular, and they have constant dihedral angles. Being face-transitive, Catalan solids are isohedra. Additionally, two of the Catalan solids are edge-transitive: the rhombic dodecahedron and the rhombic triacontahedron. These are the duals of the two quasi-regular Archimedean solids. Just as prisms and antiprisms are generally not considered Archimedean solids, bipyramids and trapezohedra are generally not considered Catalan solids, despite being face-transitive. Two of the Catalan solids are chiral: the pentagonal icositetrahedron and the pentagonal hexecontahedron, dual to the chiral snub cube and snub dodecahedron. These each come in two enantiomorphs. Not counting the enantiomorphs, bipyramids, and trapezohedra, there are a total of 13 Catalan solids. List of Catalan solids and their duals Symmetry The Catalan solids, along with their dual Archimedean solids, can be grouped in those with tetrahedral, octa
https://en.wikipedia.org/wiki/Protease%20inhibitor
Protease inhibitor can refer to: Protease inhibitor (pharmacology): a class of medication that inhibits viral protease Protease inhibitor (biology): molecules that inhibit proteases
https://en.wikipedia.org/wiki/Density%20matrix%20renormalization%20group
The density matrix renormalization group (DMRG) is a numerical variational technique devised to obtain the low-energy physics of quantum many-body systems with high accuracy. As a variational method, DMRG is an efficient algorithm that attempts to find the lowest-energy matrix product state wavefunction of a Hamiltonian. It was invented in 1992 by Steven R. White and it is nowadays the most efficient method for 1-dimensional systems. History The first application of the DMRG, by Steven R. White and Reinhard Noack, was a toy model: to find the spectrum of a spin 0 particle in a 1D box. This model had been proposed by Kenneth G. Wilson as a test for any new renormalization group method, because they all happened to fail with this simple problem. The DMRG overcame the problems of previous renormalization group methods by connecting two blocks with the two sites in the middle rather than just adding a single site to a block at each step as well as by using the density matrix to identify the most important states to be kept at the end of each step. After succeeding with the toy model, the DMRG method was tried with success on the quantum Heisenberg model. Principle The main problem of quantum many-body physics is the fact that the Hilbert space grows exponentially with size. In other words if one considers a lattice, with some Hilbert space of dimension on each site of the lattice, then the total Hilbert space would have dimension , where is the number of sites on the lattic
https://en.wikipedia.org/wiki/Protease%20inhibitor%20%28biology%29
In biology and biochemistry, protease inhibitors, or antiproteases, are molecules that inhibit the function of proteases (enzymes that aid the breakdown of proteins). Many naturally occurring protease inhibitors are proteins. In medicine, protease inhibitor is often used interchangeably with alpha 1-antitrypsin (A1AT, which is abbreviated PI for this reason). A1AT is indeed the protease inhibitor most often involved in disease, namely in alpha-1 antitrypsin deficiency. Classification Protease inhibitors may be classified either by the type of protease they inhibit, or by their mechanism of action. In 2004 Rawlings and colleagues introduced a classification of protease inhibitors based on similarities detectable at the level of amino acid sequence. This classification initially identified 48 families of inhibitors that could be grouped into 26 related superfamily (or clans) by their structure. According to the MEROPS database there are now 81 families of inhibitors. These families are named with an I followed by a number, for example, I14 contains hirudin-like inhibitors. By protease Classes of proteases are: Aspartic protease inhibitors Cysteine protease inhibitors Metalloprotease inhibitors Serine protease inhibitors Threonine protease inhibitors Trypsin inhibitors Kunitz STI protease inhibitor By mechanism Classes of inhibitor mechanisms of action are: Suicide inhibitor Transition state inhibitor Protein protease inhibitor (see serpins) Chelating agents
https://en.wikipedia.org/wiki/Larry%20Trask
Robert Lawrence Trask (10 November 1944 – 27 March 2004) was an American-British professor of linguistics at the University of Sussex, and an authority on the Basque language and the field of historical linguistics. Biography Born in Olean, New York, he initially studied chemistry in his home country, but after a brief stint in the Peace Corps he took an interest in linguistics. He received his doctorate in linguistics from the University of London, and thereafter taught at various universities in the United Kingdom. He became a professor of linguistics at the University of Sussex. He was considered an authority on the Basque language: his book The History of Basque (1997) is an essential reference on diachronic Basque linguistics and probably the best introduction to Basque linguistics as a whole. He was at work compiling an etymological dictionary of that language when he died; the unfinished work was posthumously published on the Internet by Max W. Wheeler. He was also an authority on historical linguistics, and had written about the problem of the origin of language. He also published two introductory books to linguistics: Language: The basics (1995) and Introducing Linguistics (coauthored with Bill Mayblin) (2000), and several dictionaries on different topics of this science: A dictionary of grammatical terms in linguistics (1993), A dictionary of phonetics and phonology (1996), A student's dictionary of language and linguistics (1997), Key concepts in language and li
https://en.wikipedia.org/wiki/Discrete%20system
In theoretical computer science, a discrete system is a system with a countable number of states. Discrete systems may be contrasted with continuous systems, which may also be called analog systems. A final discrete system is often modeled with a directed graph and is analyzed for correctness and complexity according to computational theory. Because discrete systems have a countable number of states, they may be described in precise mathematical models. A computer is a finite-state machine that may be viewed as a discrete system. Because computers are often used to model not only other discrete systems but continuous systems as well, methods have been developed to represent real-world continuous systems as discrete systems. One such method involves sampling a continuous signal at discrete time intervals. See also Digital control Finite-state machine Frequency spectrum Mathematical model Sample and hold Sample rate Sample time Z-transform References Automata (computation) Models of computation Signal processing
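For example, a finite-state machine, the prototypical discrete system, can be written as a transition table over a finite set of states; the parity checker below is an illustrative sketch, not drawn from the article.

```python
# A two-state discrete system: tracks the parity of the number of 1s seen so far.
TRANSITIONS = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

def run(inputs, state="even"):
    for symbol in inputs:
        state = TRANSITIONS[(state, symbol)]
    return state

print(run("10110"))   # 'odd' -- three 1s have been seen
```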
https://en.wikipedia.org/wiki/Dilaton
In particle physics, the hypothetical dilaton particle is a particle of a scalar field that appears in theories with extra dimensions when the volume of the compactified dimensions varies. It appears as a radion in Kaluza–Klein theory's compactifications of extra dimensions. In Brans–Dicke theory of gravity, Newton's constant is not presumed to be constant but instead 1/G is replaced by a scalar field and the associated particle is the dilaton. Exposition In Kaluza–Klein theories, after dimensional reduction, the effective Planck mass varies as some power of the volume of compactified space. This is why volume can turn out as a dilaton in the lower-dimensional effective theory. Although string theory naturally incorporates Kaluza–Klein theory that first introduced the dilaton, perturbative string theories such as type I string theory, type II string theory, and heterotic string theory already contain the dilaton in the maximal number of 10 dimensions. However, M-theory in 11 dimensions does not include the dilaton in its spectrum unless compactified. The dilaton in type IIA string theory parallels the radion of M-theory compactified over a circle, and the dilaton in    string theory parallels the radion for the Hořava–Witten model. (For more on the M-theory origin of the dilaton, see Berman & Perry (2006).) In string theory, there is also a dilaton in the worldsheet CFT – two-dimensional conformal field theory. The exponential of its vacuum expectation value determine
https://en.wikipedia.org/wiki/Antimonite
In chemistry, antimonite refers to a salt of antimony(III), such as NaSb(OH)4 and NaSbO2 (meta-antimonite), which can be prepared by reacting alkali with antimony trioxide, Sb2O3. These are formally salts of antimonous acid, Sb(OH)3, whose existence in solution is dubious. Attempts to isolate it generally form Sb2O3·xH2O, antimony(III) oxide hydrate, which slowly transforms into Sb2O3. In geology, the mineral stibnite, Sb2S3, is sometimes called antimonite. Antimonites can be compared to antimonates, which contain antimony in the +5 oxidation state. References Antimony(III) compounds Oxyanions
https://en.wikipedia.org/wiki/John%20N.%20Little
John N. "Jack" Little is an American electrical engineer and the CEO and co-founder of MathWorks and a co-author of early versions of the company's MATLAB product. He is a Fellow of the IEEE and a Trustee of the Massachusetts Technology Leadership Council. He holds a Bachelor's degree in Electrical Engineering from the Massachusetts Institute of Technology (1978), and a Master's degree from Stanford University (1980). He is the son of the academic John D. C. Little. External links His biography on mathworks.com. References Fellow Members of the IEEE MIT School of Engineering alumni Stanford University School of Engineering alumni Living people Year of birth missing (living people)
https://en.wikipedia.org/wiki/Symbolic%20method
In mathematics, the symbolic method in invariant theory is an algorithm developed by Arthur Cayley, Siegfried Heinrich Aronhold, Alfred Clebsch, and Paul Gordan in the 19th century for computing invariants of algebraic forms. It is based on treating the form as if it were a power of a degree one form, which corresponds to embedding a symmetric power of a vector space into the symmetric elements of a tensor product of copies of it. Symbolic notation The symbolic method uses a compact, but rather confusing and mysterious notation for invariants, depending on the introduction of new symbols a, b, c, ... (from which the symbolic method gets its name) with apparently contradictory properties. Example: the discriminant of a binary quadratic form These symbols can be explained by the following example from Gordan. Suppose that is a binary quadratic form with an invariant given by the discriminant The symbolic representation of the discriminant is where a and b are the symbols. The meaning of the expression (ab)2 is as follows. First of all, (ab) is a shorthand form for the determinant of a matrix whose rows are a1, a2 and b1, b2, so Squaring this we get Next we pretend that so that and we ignore the fact that this does not seem to make sense if f is not a power of a linear form. Substituting these values gives Higher degrees More generally if is a binary form of higher degree, then one introduces new variables a1, a2, b1, b2, c1, c2, with the properties What thi
https://en.wikipedia.org/wiki/Lookahead
Lookahead or Look Ahead may refer to: A parameter of some combinatorial search algorithms, describing how deeply the graph representing the problem is explored A parameter of some parsing algorithms; the maximum number of tokens that a parser can use to decide which rule to use In dynamic range compression, a signal processing design to avoid compromise between slow attack rates that produce smooth-sounding gain changes, and fast attack rates capable of catching transients Look-ahead (backtracking), a subprocedure that attempts to predict the effects of choosing a branching variable to evaluate or one of its values Lookahead carry unit, a logical unit in digital circuit design used to decrease calculation time in adder units Look Ahead, a charitable housing association in London In regular expressions, an assertion to match characters after the current position Education Look Ahead, 1990s English as a foreign language multimedia classroom project by BBC English and other organisations Music Look Ahead (Pat Boone album), 1968 Look Ahead, 1992 album by Gerald Veasley Look Ahead, 1995 album by Danny Tenaglia "Look Ahead", 1992 song by Pat Metheny on the album Secret Story "Look Ahead", 2014 song by rapper Future, on the album Honest (Future album) See also Looking Ahead (disambiguation)
https://en.wikipedia.org/wiki/Jean-Marie%20Souriau
Jean-Marie Souriau (3 June 1922, Paris – 15 March 2012, Aix-en-Provence) was a French mathematician. He was one of the pioneers of modern symplectic geometry. Education and career Souriau started studying mathematics in 1942 at École Normale Supérieure in Paris. In 1946 he was a research fellow of CNRS and an engineer at ONERA. His PhD thesis, defended in 1952 under the supervision of Joseph Pérès and André Lichnerowicz, was entitled "Sur la stabilité des avions" (On the stability of planes). Between 1952 and 1958 he worked at Institut des Hautes Études in Tunis, and since 1958 he was Professor of Mathematics at the University of Provence in Marseille. In 1981 he was awarded the Prix Jaffé of the French Academy of Sciences. Research Souriau contributed to the introduction and the development of many important concepts in symplectic geometry, arising from classical and quantum mechanics. In particular, he introduced the notion of moment map, gave a classification of the homogeneous symplectic manifolds (now known as the Kirillov-Kostant-Souriau theorem), and investigated the coadjoint action of a Lie group, which led to the first geometric interpretation of spin at a classical level. He also suggested a program of geometric quantization and developed a more general approach to differentiable manifolds by means of diffeologies. Souriau published more than 50 papers in peer-review scientific journals, as well as three monographs, on linear algebra, on relativity and on ge
https://en.wikipedia.org/wiki/Radiate
Radiate may refer to: Biology Radiata, a taxon of jellyfish and allies Radiate carpal ligament, a group of fibrous bands in the hand Radiate ligament of head of rib Radiate sternocostal ligaments, fibrous bands in the sternum Coins Antoninianus, or "pre-reform radiate", a Roman silver coin issued in the 3rd century Post-reform radiate, a Roman bronze coin issued in the 4th century Music Radiate (album), by Tricia, 2013 "Radiate" (Enter Shikari song), 2013 "Radiate" (Jack Johnson song), 2013 "Radiate", a song by Chemical Brothers from Born in the Echoes, 2015 "Radiate", a song by Ex Hex from It's Real, 2019 "Radiate", a song by Puddle of Mudd from Famous, 2007 Radiate FM or WRGP, a student-run radio station of Florida International University in Miami, Florida Other uses Radiation, a process by which energetic particles or energetic waves travel Radiate (app), a mobile social networking app Radiate crown, headgear symbolizing the sun See also Radial (disambiguation)
https://en.wikipedia.org/wiki/Henry%20Cogswell%20College
Henry Cogswell College is a former private institution of higher learning that was based in Washington state from 1979 to 2006. The college offered bachelor's degrees in business administration, computer science, digital arts, electrical engineering, mechanical engineering, mechanical engineering technology, and professional management. It was named after temperance movement crusader Henry D. Cogswell. Historically, the college had an enrollment of 300 students that relied mainly on Boeing-related tuition. History Henry Cogswell College was founded in 1979 in Kirkland, Washington as Cogswell College North (at the time, an affiliate of Cogswell College in Sunnyvale, California), largely to provide engineering education to local Boeing employees. The college also operated night and summer classes at Shoreline Community College before permanently moving to south Kirkland. The college moved to Everett, located near Boeing's largest assembly plant, in 1996, leasing space in a former Bon Marché department store. In 2000, the college moved into the historic Federal Building in downtown Everett, spending $2 million to renovate the 1917-built office building. Off-campus classes were also held at a Boeing facility in the south Puget Sound (about 30 miles from Everett) to accommodate students living in that area. Limited classes continued to be offered at the Boeing facility even after the main campus moved to Everett. The institution closed on September 1, 2006, due to a decline in
https://en.wikipedia.org/wiki/Stuart%20J.%20Russell
Stuart Jonathan Russell (born 1962) is a British computer scientist known for his contributions to artificial intelligence (AI). He is a professor of computer science at the University of California, Berkeley and was from 2008 to 2011 an adjunct professor of neurological surgery at the University of California, San Francisco. He holds the Smith-Zadeh Chair in Engineering at University of California, Berkeley. He founded and leads the Center for Human-Compatible Artificial Intelligence (CHAI) at UC Berkeley. Russell is the co-author with Peter Norvig of the authoritative textbook of the field of AI: Artificial Intelligence: A Modern Approach used in more than 1,500 universities in 135 countries. Education and early life Russell was born in Portsmouth, England. He attended St Paul's School, London, where he was 1st scholar. He studied physics at Wadham College, Oxford, and was awarded his Bachelor of Arts degree with first-class honours in 1982. He moved to the United States to complete his PhD in computer science at Stanford University in 1986 for research on inductive reasoning and analogical reasoning supervised by Michael Genesereth. His PhD was supported by a NATO studentship from the UK Science and Engineering Research Council. Career and research After his 1986 PhD, he joined the faculty of the University of California, Berkeley as a professor of computer science. From 2008 to 2011 he also held an appointment as adjunct professor of Neurological Surgery at the Univers
https://en.wikipedia.org/wiki/Quasi-arithmetic%20mean
In mathematics and statistics, the quasi-arithmetic mean or generalised f-mean or Kolmogorov-Nagumo-de Finetti mean is one generalisation of the more familiar means such as the arithmetic mean and the geometric mean, using a function f. It is also called Kolmogorov mean after Soviet mathematician Andrey Kolmogorov. It is a broader generalization than the regular generalized mean. Definition If f is a function which maps an interval I of the real line to the real numbers, and is both continuous and injective, the f-mean of n numbers x1, …, xn ∈ I is defined as M_f(x1, …, xn) = f^(−1)((f(x1) + … + f(xn))/n), which can also be written M_f(x) = f^(−1)((1/n) Σ f(xk)). We require f to be injective in order for the inverse function f^(−1) to exist. Since f is defined over an interval, (f(x1) + … + f(xn))/n lies within the domain of f^(−1). Since f is injective and continuous, it follows that f is a strictly monotonic function, and therefore that the f-mean is neither larger than the largest number of the tuple x nor smaller than the smallest number in x. Examples If I = ℝ, the real line, and f(x) = x (or indeed any linear function x ↦ ax + b, a not equal to 0), then the f-mean corresponds to the arithmetic mean. If I = ℝ+, the positive real numbers, and f(x) = log(x), then the f-mean corresponds to the geometric mean. According to the f-mean properties, the result does not depend on the base of the logarithm as long as it is positive and not 1. If I = ℝ+ and f(x) = 1/x, then the f-mean corresponds to the harmonic mean. If I = ℝ+ and f(x) = x^p, then the f-mean corresponds to the power mean with exponent p. If I = ℝ and f(x) = exp(x), then the f-mean is the mean in th
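The definition translates directly into code; the sketch below (illustrative) recovers the arithmetic, geometric and harmonic means by choosing f(x) = x, log x and 1/x respectively.

```python
import math

def f_mean(xs, f, f_inv):
    """Quasi-arithmetic mean: f_inv of the arithmetic mean of the f(x_k)."""
    return f_inv(sum(f(x) for x in xs) / len(xs))

xs = [1.0, 2.0, 4.0]
print(f_mean(xs, lambda x: x,       lambda y: y))         # arithmetic mean: 2.333...
print(f_mean(xs, math.log,          math.exp))            # geometric mean: 2.0
print(f_mean(xs, lambda x: 1.0 / x, lambda y: 1.0 / y))   # harmonic mean: 1.714...
```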
https://en.wikipedia.org/wiki/Standard%20electrode%20potential
In electrochemistry, standard electrode potential , or , is a measure of the reducing power of any element or compound. The IUPAC "Gold Book" defines it as; "the value of the standard emf (electromotive force) of a cell in which molecular hydrogen under standard pressure is oxidized to solvated protons at the left-hand electrode". Background The basis for an electrochemical cell, such as the galvanic cell, is always a redox reaction which can be broken down into two half-reactions: oxidation at anode (loss of electron) and reduction at cathode (gain of electron). Electricity is produced due to the difference of electric potential between the individual potentials of the two metal electrodes with respect to the electrolyte. Although the overall potential of a cell can be measured, there is no simple way to accurately measure the electrode/electrolyte potentials in isolation. The electric potential also varies with temperature, concentration and pressure. Since the oxidation potential of a half-reaction is the negative of the reduction potential in a redox reaction, it is sufficient to calculate either one of the potentials. Therefore, standard electrode potential is commonly written as standard reduction potential. At each electrode-electrolyte interface there is a tendency of metal ions from the solution to deposit on the metal electrode trying to make it positively charged. At the same time, metal atoms of the electrode have a tendency to go into the solution as ions and
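Since tabulated values are written as reductions, the emf of a full cell is the cathode's standard reduction potential minus the anode's. A small sketch follows; the two potentials used are common textbook values supplied here as assumptions for the example, not taken from the article.

```python
# Standard reduction potentials in volts (common textbook values, assumed for illustration):
E_reduction = {
    "Cu2+/Cu": +0.34,
    "Zn2+/Zn": -0.76,
}

def standard_cell_emf(cathode, anode):
    """E_cell = E_red(cathode) - E_red(anode); the oxidation potential is just the
    negative of the reduction potential, so only reduction values are tabulated."""
    return E_reduction[cathode] - E_reduction[anode]

# Daniell cell: Zn is oxidized at the anode, Cu2+ is reduced at the cathode.
print(standard_cell_emf("Cu2+/Cu", "Zn2+/Zn"))   # about +1.10 V
```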
https://en.wikipedia.org/wiki/Erect-crested%20penguin
The erect-crested penguin (Eudyptes sclateri) is a penguin endemic to the New Zealand region and only breeds on the Bounty and Antipodes Islands. It has black upper parts, white underparts and a yellow eye stripe and crest. It spends the winter at sea and little is known about its biology and breeding habits. Populations are believed to have declined during the last few decades of the twentieth century, and the International Union for Conservation of Nature has listed it as being "endangered". Description This is a small-to-medium-sized, yellow-crested, black-and-white penguin. The male is slightly larger than the female and as in most crested penguins has a larger bill. It has bluish-black to jet black upper parts and white underparts, and a broad, bright yellow eyebrow-stripe which extends over the eye to form a short, erect crest. Based on mean body masses measured in males and females (sample size 22 for each), the erect-crested penguin is the largest of the crested penguin species and the fourth heaviest extant penguin, being nearly as heavy on average as the gentoo penguin. Its biology is poorly studied and little information about the species has emerged in recent decades. The only recent study conducted on the Antipodes Islands focused on aspects of mate choice. Research on the species is hampered by logistics and restrictive permitting by the New Zealand Department of Conservation. It presumably feeds on small fish, krill and
https://en.wikipedia.org/wiki/PMAC
PMAC may refer to: Permanent Magnet AC Motor, a type of electric motor that uses permanent magnets in addition to windings on its field, rather than windings only. PMAC (cryptography), a message authentication code algorithm Pete Maravich Assembly Center, Louisiana State University Provisional Military Administrative Council or Derg, a military junta that ruled Ethiopia 1974—1987 Prevention and Management of Abortion and Its Complications, a Philippine government policy concerning abortion in the Philippines Purchasing Management Association of Canada, a co-sponsor of the Ivey Index "Problem exists between monitor and chair", a term describing computer user error
https://en.wikipedia.org/wiki/Message%20authentication%20code
In cryptography, a message authentication code (MAC), sometimes known as an authentication tag, is a short piece of information used for authenticating and integrity-checking a message; in other words, it confirms that the message came from the stated sender (its authenticity) and has not been changed (its integrity). The MAC value allows verifiers (who also possess a secret key) to detect any changes to the message content. Terminology The term message integrity code (MIC) is frequently substituted for the term MAC, especially in communications, to distinguish it from the use of the latter as media access control address (MAC address). However, some authors use MIC to refer to a message digest, which aims only to uniquely but opaquely identify a single message. RFC 4949 recommends avoiding the term message integrity code (MIC), and instead using checksum, error detection code, hash, keyed hash, message authentication code, or protected checksum. Definitions Informally, a message authentication code system consists of three algorithms: A key generation algorithm selects a key from the key space uniformly at random. A signing algorithm efficiently returns a tag given the key and the message. A verifying algorithm efficiently verifies the authenticity of the message given the same key and the tag. That is, it returns accepted when the message and tag have not been tampered with or forged, and otherwise returns rejected. A secure message authentication code must resist attempts by an a
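As a concrete, hedged illustration of the three algorithms above (HMAC is one standard MAC construction, used here only as an example, not the abstract scheme the article defines), Python's standard library can generate and verify a tag like this:

```python
import hashlib
import hmac
import secrets

# Key generation: pick a key uniformly at random from the key space.
key = secrets.token_bytes(32)

def sign(key: bytes, message: bytes) -> bytes:
    """Signing algorithm: return a tag for the message under the key."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    """Verifying algorithm: accept only if the tag matches (constant-time compare)."""
    return hmac.compare_digest(sign(key, message), tag)

msg = b"wire transfer: $100 to Alice"
tag = sign(key, msg)
print(verify(key, msg, tag))                             # True  (accepted)
print(verify(key, b"wire transfer: $100 to Bob", tag))   # False (rejected)
```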
https://en.wikipedia.org/wiki/Whirlpool%20%28hash%20function%29
In computer science and cryptography, Whirlpool (sometimes styled WHIRLPOOL) is a cryptographic hash function. It was designed by Vincent Rijmen (co-creator of the Advanced Encryption Standard) and Paulo S. L. M. Barreto, who first described it in 2000. The hash has been recommended by the NESSIE project. It has also been adopted by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) as part of the joint ISO/IEC 10118-3 international standard. Design features Whirlpool is a hash designed after the Square block cipher, and is considered to be in that family of block cipher functions. Whirlpool is a Miyaguchi-Preneel construction based on a substantially modified Advanced Encryption Standard (AES). Whirlpool takes a message of any length less than 2^256 bits and returns a 512-bit message digest. The authors have declared that "WHIRLPOOL is not (and will never be) patented. It may be used free of charge for any purpose." Version changes The original Whirlpool will be called Whirlpool-0, the first revision of Whirlpool will be called Whirlpool-T and the latest version will be called Whirlpool in the following test vectors. In the first revision in 2001, the S-box was changed from a randomly generated one with good cryptographic properties to one which has better cryptographic properties and is easier to implement in hardware. In the second revision (2003), a flaw in the diffusion matrix was found that low
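A rough sketch of the Miyaguchi-Preneel chaining pattern mentioned above, not of Whirlpool itself: here AES-128 (via the third-party pycryptodome package, an assumption) stands in for Whirlpool's much larger dedicated block cipher, so block and digest sizes are 128 bits rather than 512, and the padding is simplified.

```python
# Miyaguchi-Preneel: H_i = E_{H_{i-1}}(m_i) XOR H_{i-1} XOR m_i
# Toy illustration only -- AES-128 stands in for Whirlpool's 512-bit cipher.
from Crypto.Cipher import AES  # pip install pycryptodome (assumed)

BLOCK = 16  # bytes

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def miyaguchi_preneel(message: bytes) -> bytes:
    # Simple zero padding; a real design also encodes the message length.
    if len(message) % BLOCK:
        message += b"\x00" * (BLOCK - len(message) % BLOCK)
    h = b"\x00" * BLOCK  # initial chaining value (IV)
    for i in range(0, len(message), BLOCK):
        m = message[i:i + BLOCK]
        e = AES.new(h, AES.MODE_ECB).encrypt(m)  # encrypt block under key H_{i-1}
        h = _xor(_xor(e, h), m)                  # feed-forward of both key and block
    return h

print(miyaguchi_preneel(b"The quick brown fox").hex())
```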
https://en.wikipedia.org/wiki/Lexicographic%20order
In mathematics, the lexicographic or lexicographical order (also known as lexical order, or dictionary order) is a generalization of the alphabetical order of the dictionaries to sequences of ordered symbols or, more generally, of elements of a totally ordered set. There are several variants and generalizations of the lexicographical ordering. One variant applies to sequences of different lengths by comparing the lengths of the sequences before considering their elements. Another variant, widely used in combinatorics, orders subsets of a given finite set by assigning a total order to the finite set, and converting subsets into increasing sequences, to which the lexicographical order is applied. A generalization defines an order on an n-ary Cartesian product of partially ordered sets; this order is a total order if and only if all factors of the Cartesian product are totally ordered. Definition The words in a lexicon (the set of words used in some language) have a conventional ordering, used in dictionaries and encyclopedias, that depends on the underlying ordering of the alphabet of symbols used to build the words. The lexicographical order is one way of formalizing word order given the order of the underlying symbols. The formal notion starts with a finite set A, often called the alphabet, which is totally ordered. That is, for any two symbols a and b in A that are not the same symbol, either a < b or b < a. The words of A are the finite sequences of symbols from A, including words
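A short sketch of the basic dictionary-order comparison; Python's built-in tuple and string comparisons already behave this way, and the explicit function below (the name lex_less is illustrative) just spells the rule out:

```python
def lex_less(u, v):
    """True if sequence u strictly precedes sequence v in lexicographic order."""
    for a, b in zip(u, v):
        if a != b:          # the first position where they differ decides
            return a < b
    # equal on the common prefix: a proper prefix precedes any longer word
    return len(u) < len(v)

print(lex_less("abc", "abd"))           # True  ('c' < 'd' at the first difference)
print(lex_less("ab", "abc"))            # True  (prefix comes first)
print(lex_less((1, 2, 3), (1, 3)))      # True  (2 < 3 at index 1)
print(("ab" < "abc") and ((1, 2, 3) < (1, 3)))  # True: built-ins agree
```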
https://en.wikipedia.org/wiki/Zero%20crossing
A zero-crossing is a point where the sign of a mathematical function changes (e.g. from positive to negative), represented by an intercept of the axis (zero value) in the graph of the function. It is a commonly used term in electronics, mathematics, acoustics, and image processing. In electronics In alternating current, the zero-crossing is the instantaneous point at which there is no voltage present. In a sine wave or other simple waveform, this normally occurs twice during each cycle. A zero-crossing detector is a device for detecting the point where the voltage crosses zero in either direction. The zero-crossing is important for systems that send digital data over AC circuits, such as modems, X10 home automation control systems, and Digital Command Control type systems for Lionel and other AC model trains. Counting zero-crossings is also a method used in speech processing to estimate the fundamental frequency of speech. In a system where an amplifier with digitally controlled gain is applied to an input signal, artifacts in the non-zero output signal occur when the gain of the amplifier is abruptly switched between its discrete gain settings. At audio frequencies, such as in modern consumer electronics like digital audio players, these effects are clearly audible, resulting in a 'zipping' sound when rapidly ramping the gain or a soft 'click' when a single gain change is made. Such artifacts are disconcerting and clearly not desirable. If changes are made only at zero-crossings of the input sign
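A small NumPy sketch (NumPy assumed) of counting the zero-crossings of a sampled signal, the kind of count used in the speech-processing estimate mentioned above; a 50 Hz sine sampled for one second should cross zero roughly 100 times:

```python
import numpy as np

def count_zero_crossings(x: np.ndarray) -> int:
    """Count sign changes between consecutive samples."""
    signs = np.signbit(x)                  # True where a sample is negative
    return int(np.count_nonzero(signs[:-1] != signs[1:]))

fs = 8000                                  # sample rate in Hz
t = np.arange(fs) / fs                     # one second of samples
signal = np.sin(2 * np.pi * 50 * t)        # 50 Hz sine wave
print(count_zero_crossings(signal))        # ~100 (twice per cycle)
```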
https://en.wikipedia.org/wiki/Riemann%E2%80%93Liouville%20integral
In mathematics, the Riemann–Liouville integral associates with a real function f another function I^α f of the same kind for each value of the parameter α > 0. The integral is a manner of generalization of the repeated antiderivative of f in the sense that for positive integer values of α, I^α f is an iterated antiderivative of f of order α. The Riemann–Liouville integral is named for Bernhard Riemann and Joseph Liouville, the latter of whom was the first to consider the possibility of fractional calculus in 1832. The operator agrees with the Euler transform, after Leonhard Euler, when applied to analytic functions. It was generalized to arbitrary dimensions by Marcel Riesz, who introduced the Riesz potential. Definition The Riemann–Liouville integral is defined by I^α f(x) = (1/Γ(α)) ∫_a^x f(t) (x − t)^(α−1) dt, where Γ is the gamma function and a is an arbitrary but fixed base point. The integral is well-defined provided f is a locally integrable function, and α is a complex number in the half-plane Re(α) > 0. The dependence on the base-point a is often suppressed, and represents a freedom in the constant of integration. Clearly I^1 f is an antiderivative of f (of first order), and for positive integer values of α, I^α f is an antiderivative of order α by Cauchy's formula for repeated integration. Another notation, which emphasizes the base point, is I_a^α f(x). This also makes sense if a = −∞, with suitable restrictions on f. The fundamental relations hold: (d/dx) I^(α+1) f(x) = I^α f(x) and I^α(I^β f) = I^(α+β) f, the latter of which is a semigroup property. These properties make possible not only the definition of fractional integration, but a
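A small numerical sketch of the definition, assuming SciPy is available: for f(t) = t and base point a = 0, the formula above gives the closed form I^α f(x) = x^(1+α)/Γ(2+α), and direct quadrature should reproduce it (α = 1.5 keeps the integrand non-singular at t = x).

```python
from math import gamma
from scipy.integrate import quad

def riemann_liouville(f, alpha, x, a=0.0):
    """Numerical I^alpha f(x) with base point a, straight from the definition."""
    integrand = lambda t: f(t) * (x - t) ** (alpha - 1)
    value, _ = quad(integrand, a, x)
    return value / gamma(alpha)

f = lambda t: t
alpha, x = 1.5, 2.0

numeric = riemann_liouville(f, alpha, x)
closed_form = x ** (1 + alpha) / gamma(2 + alpha)  # I^alpha t = x^(1+alpha)/Gamma(2+alpha)
print(numeric, closed_form)  # the two values should agree closely
```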
https://en.wikipedia.org/wiki/Vedic%20Mathematics
Vedic Mathematics is a book written by the Indian monk Bharati Krishna Tirtha, and first published in 1965. It contains a list of mathematical techniques, which were falsely claimed to have been retrieved from the Vedas and to contain advanced mathematical knowledge. Krishna Tirtha failed to produce the sources, and scholars unanimously note it to be a mere compendium of tricks for increasing the speed of elementary mathematical calculations sharing no overlap with historical mathematical developments during the Vedic period. However, there has been a proliferation of publications in this area and multiple attempts to integrate the subject into mainstream education by right-wing Hindu nationalist governments. Contents The book contains metaphorical aphorisms in the form of sixteen sutras and thirteen sub-sutras, which Krishna Tirtha states allude to significant mathematical tools. Their asserted applications span topics as diverse as statics, pneumatics, astronomy, and finance. Tirtha stated that no part of advanced mathematics lay beyond the realms of his book and propounded that studying it for a couple of hours every day for a year equated to spending about two decades in any standardized education system to become professionally trained in the discipline of mathematics. STS scholar S. G. Dani in 'Vedic Mathematics': Myth and Reality states that the book is primarily a compendium of tricks that can be applied in elementary, middle and hi
https://en.wikipedia.org/wiki/Solution%20set
In mathematics, a solution set is the set of values that satisfy a given set of equations or inequalities. For example, for a set E of polynomials over a ring R, the solution set is the subset of R on which the polynomials all vanish (evaluate to 0), formally {x ∈ R : f(x) = 0 for all f ∈ E}. The feasible region of a constrained optimization problem is the solution set of the constraints. Examples The solution set of the single equation x = 0 is the set {0}. For any non-zero polynomial over the complex numbers in one variable, the solution set is made up of finitely many points. However, for a complex polynomial in more than one variable the solution set has no isolated points. Remarks In algebraic geometry, solution sets are called algebraic sets if there are no inequalities. Over the reals, and with inequalities, they are called semialgebraic sets. Other meanings More generally, the solution set to an arbitrary collection E of relations (Ei) (i varying in some index set I) for a collection of unknowns (xj), j ∈ J, supposed to take values in respective spaces Xj, is the set S of all solutions to the relations E, where a solution is a family of values (sj), j ∈ J, such that substituting (xj) by (sj) in the collection E makes all relations "true". (Instead of relations depending on unknowns, one should speak more correctly of predicates, the collection E is their logical conjunction, and the solution set is the inverse image of the boolean value true by the associated boolean-valued function.) The above meaning is a special case of th
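A brief sketch using SymPy (an assumption; the article itself names no software) to compute solution sets symbolically, for one equation and one inequality over the reals:

```python
from sympy import symbols, solveset, S

x = symbols('x')

# Solution set of a single polynomial equation over the reals.
print(solveset(x**2 - 4, x, domain=S.Reals))       # {-2, 2}

# An inequality gives a semialgebraic set (an open interval here).
print(solveset(x**2 - 4 < 0, x, domain=S.Reals))   # Interval.open(-2, 2)
```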
https://en.wikipedia.org/wiki/Weakly%20harmonic%20function
In mathematics, a function f is weakly harmonic in a domain D if ∫_D f Δg dx = 0 for all g with compact support in D and continuous second derivatives, where Δ is the Laplacian. This is the same notion as a weak derivative; however, a function can have a weak derivative and not be differentiable. In this case, we have the somewhat surprising result that a function is weakly harmonic if and only if it is harmonic. Thus weakly harmonic is actually equivalent to the seemingly stronger harmonic condition. See also Weak solution Weyl's lemma References Harmonic functions
https://en.wikipedia.org/wiki/Sulfur%20dye
Sulfur dyes are the most commonly used dyes manufactured for cotton in terms of volume. They are inexpensive, generally have good wash-fastness, and are easy to apply. Sulfur dyes are predominantly black, brown, and dark blue. Red sulfur dyes are unknown, although a pink or lighter scarlet color is available. Chemistry Sulfur linkages are the integral part of the chromophore in sulfur dyes. They are organosulfur compounds consisting of sulfide (–S–), disulfide (–S–S–) and polysulfide (–Sn–) links in heterocyclic rings. They feature thiazoles, thiazone, thianthrene, and phenothiazonethioanthrone subunits. Being nonionic, sulfur dyes are insoluble in water. Process Dyeing includes a few stages, viz. reduction, dyeing, washing, oxidation, soaping, and final washing. The anion is developed by reducing and solubilising the dye at the boil, at which point it shows affinity for cellulose. Sodium sulfide (Na2S), the reducing and solubilising agent, performs both reduction and solubilisation, producing thiols that are then converted to their sodium salts (thiolates), which are soluble in water and substantive towards cellulose. A higher rate of exhaustion occurs at 90-95 °C in the presence of electrolyte. Dyed cellulosics exhibit a tendering effect on storage under a humid atmosphere due to the presence of excess free sulfur; aftertreatment with sodium acetate is required to suppress this. H2S liberated during dyeing forms corrosive metal sulfides, which restricts the use of metal vessels other than those made of stainless steel: Fe + H2S →
https://en.wikipedia.org/wiki/Vent
Vent or vents may refer to: Science and technology Biology Vent, the cloaca region of an animal Vent DNA polymerase, a thermostable DNA polymerase Geology Hydrothermal vent, a fissure in a planet's surface from which geothermally heated water issues Volcano, a point where magma emerges from the Earth's surface and becomes lava Moving gases Vent (submarine), a valve on a submarine's ballast tanks Automatic bleeding valve, a plumbing valve used to automatically release trapped air from a heating system Drain-waste-vent system or plumbing drainage venting, pipes leading from fixtures to the outdoors Duct (flow), used to deliver and remove air Flue, a duct, pipe, or chimney for conveying exhaust gases from a furnace or water heater Gas venting, a safe vent in the hydrocarbon and chemical industries Medical ventilator, mechanical breathing machine Touch hole, a vent on a cannon Vent shaft or ventilation shaft People Vents (musician), Australian hip hop MC Vents Feldmanis (born 1977), Latvian ice hockey defenceman Vents Armands Krauklis (born 1964), Latvian politician and musician Vent., taxonomic abbreviation for Étienne Pierre Ventenat (1757–1808), French botanist Arts, entertainment, and media Music Albums and EPs Vent (album), a 2001 album by Caliban Vent (EP), a 2008 release by Sounds of Swami Songs "Vent" (song), by Collective Soul "The Vent", a song by Big K.R.I.T. from Return of 4Eva Other arts, entertainment, and media Vent (Mega Man), a character in Mega Man ZX
https://en.wikipedia.org/wiki/Dog%20years
Dog years may refer to: Biology Any of various measures used to describe a dog's age in human terms or vice versa: see Aging in dogs. Arts and entertainment Dog Years (1997 film), an American action-comedy directed by Robert Loomis Dog Years (2017 film), or The Last Movie Star, an American drama directed by Adam Rifkin Dog Years (novel), a 1963 novel by Günter Grass Dog Years (EP), a 2017 EP by the Winery Dogs Dog Years, a 2004 comedy album by Mike Birbiglia Dog Years, a 2021 album by the Night Game "Dog Years", a 1996 song by Rush from Test for Echo See also Year of the dog (disambiguation)
https://en.wikipedia.org/wiki/Valence%20electron
In chemistry and physics, valence electrons are electrons in the outermost shell of an atom, and that can participate in the formation of a chemical bond if the outermost shell is not closed. In a single covalent bond, a shared pair forms with both atoms in the bond each contributing one valence electron. The presence of valence electrons can determine the element's chemical properties, such as its valence—whether it may bond with other elements and, if so, how readily and with how many. In this way, a given element's reactivity is highly dependent upon its electronic configuration. For a main-group element, a valence electron can exist only in the outermost electron shell; for a transition metal, a valence electron can also be in an inner shell. An atom with a closed shell of valence electrons (corresponding to a noble gas configuration) tends to be chemically inert. Atoms with one or two valence electrons more than a closed shell are highly reactive due to the relatively low energy to remove the extra valence electrons to form a positive ion. An atom with one or two electrons fewer than a closed shell is reactive due to its tendency either to gain the missing valence electrons and form a negative ion, or else to share valence electrons and form a covalent bond. Similar to a core electron, a valence electron has the ability to absorb or release energy in the form of a photon. An energy gain can trigger the electron to move (jump) to an outer shell; this is known as atomic
https://en.wikipedia.org/wiki/Constantine%20Papadakis
Constantine Papadakis (February 2, 1946 – April 5, 2009) was a Greek-American businessman and the president of Drexel University. Academic career Papadakis received his diploma in civil engineering from the National Technical University of Athens in Greece. He came to the United States in 1969 to continue his studies in civil engineering and earn his master's degree from the University of Cincinnati. He then went on to earn his doctorate in civil engineering in 1973 from the University of Michigan. Papadakis served as head of the civil engineering department at Colorado State University and then dean of the University of Cincinnati's College of Engineering prior to 1995. He was appointed President of Drexel University in Philadelphia, Pennsylvania in 1995 and held that position until his death in 2009. During his tenure, Papadakis doubled the full-time undergraduate enrollment, tripled freshman applications, quintupled the university's endowment, and quintupled research funding. His salary of $805,000 was the sixth highest among university presidents. After his death, Papadakis' total earnings, including a life insurance payout, were estimated at over $4 million. Other activities Papadakis sat on the Philadelphia Stock Exchange as chairman of the compensation committee. He also served on the board of trustees of the Hellenic College and Holy Cross Greek Orthodox School of Theology. Death Papadakis died at the Hospital of the University of Pennsylvania from pulmonary complication
https://en.wikipedia.org/wiki/Free-air%20gravity%20anomaly
In geophysics, the free-air gravity anomaly, often simply called the free-air anomaly, is the measured gravity anomaly after a free-air correction is applied to account for the elevation at which a measurement is made. It does so by adjusting these measurements of gravity to what would have been measured at a reference level, which is commonly taken as mean sea level or the geoid. Applications Studies of the subsurface structure and composition of the Earth's crust and mantle employ surveys using gravimeters to measure the departure of observed gravity from a theoretical gravity value to identify anomalies due to geologic features below the measurement locations. The computation of anomalies from observed measurements involves the application of corrections that define the resulting anomaly. The free-air anomaly can be used to test for isostatic equilibrium over broad regions. Survey methods The free-air correction adjusts measurements of gravity to what would have been measured at mean sea level, that is, on the geoid. The gravitational attraction of earth below the measurement point and above mean sea level is ignored and it is imagined that the observed gravity is measured in air, hence the name. The theoretical gravity value at a location is computed by representing the earth as an ellipsoid that approximates the more complex shape of the geoid. Gravity is computed on the ellipsoid surface using the International Gravity Formula. For studies of subsurface structure,
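A hedged Python sketch of the computation described above: theoretical gravity on the ellipsoid is taken from the 1980 form of the International Gravity Formula, the free-air correction uses the usual ~0.3086 mGal/m gradient, and the observed value below is made up purely for illustration.

```python
import math

def theoretical_gravity_mgal(latitude_deg: float) -> float:
    """Normal gravity on the ellipsoid (International Gravity Formula, 1980 form), in mGal."""
    s = math.sin(math.radians(latitude_deg)) ** 2
    s2 = math.sin(math.radians(2 * latitude_deg)) ** 2
    return 978032.7 * (1 + 0.0053024 * s - 0.0000058 * s2)

def free_air_anomaly_mgal(observed_mgal: float, latitude_deg: float, elevation_m: float) -> float:
    """Free-air anomaly = observed - theoretical + free-air correction (~0.3086 mGal per metre)."""
    return observed_mgal - theoretical_gravity_mgal(latitude_deg) + 0.3086 * elevation_m

# Hypothetical measurement: 980 545.0 mGal observed at 45 deg N, 250 m above the geoid.
print(round(free_air_anomaly_mgal(980545.0, 45.0, 250.0), 1))  # about +2.2 mGal for these made-up numbers
```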