source | text
---|---
https://en.wikipedia.org/wiki/Expensive%20Desk%20Calculator
|
Expensive Desk Calculator by Robert A. Wagner is thought to be computing's first interactive calculation program.
The software first ran on the TX-0 computer loaned to the Massachusetts Institute of Technology (MIT) by Lincoln Laboratory. It was ported to the PDP-1 donated to MIT in 1961 by Digital Equipment Corporation.
Wagner and a group of fellow students, friends from the MIT Tech Model Railroad Club, had access to these room-sized machines outside class by signing up for time during off hours. Overseen by Jack Dennis, John McKenzie, and faculty advisors, they were personal computer users as early as the late 1950s.
The calculators Wagner needed to complete his numerical analysis homework were across campus and in short supply so he wrote one himself. Although the program has about three thousand lines of code and took months to write, Wagner received a grade of zero on his homework. His professor's reaction was, "You used a computer! This can't be right." Steven Levy wrote, "The professor would learn in time, as would everyone, that the world opened up by the computer was a limitless one."
|
https://en.wikipedia.org/wiki/Noggin%20%28protein%29
|
Noggin, also known as NOG, is a protein that is involved in the development of many body tissues, including nerve tissue, muscles, and bones. In humans, noggin is encoded by the NOG gene. The amino acid sequence of human noggin is highly homologous to that of rat, mouse, and Xenopus (an aquatic frog genus).
Noggin is an inhibitor of several bone morphogenetic proteins (BMPs): it inhibits at least BMP2, 4, 5, 6, 7, 13, and 14.
The protein's name, which is a slang English-language word for "head", was coined in reference to its ability to produce embryos with large heads when exposed at high concentrations.
Function
Noggin is a signaling molecule that plays an important role in promoting somite patterning in the developing embryo. It is released from the notochord and regulates bone morphogenetic protein 4 (BMP4) during development. In the absence of BMP4, the neural tube and somites are patterned from the neural plate in the developing embryo; this also drives formation of the head and other dorsal structures.
Noggin function is required for correct nervous system, somite, and skeletal development. Experiments in mice have shown that noggin also plays a role in learning, cognition, bone development, and neural tube fusion. Heterozygous missense mutations in the noggin gene can cause deformities such as joint fusions and syndromes such as multiple synostosis syndrome (SYNS1) and proximal symphalangism (SYM1). SYNS1 differs from SYM1 in causing hip and vertebral fusions. The embryo may also develop shorter bones, lack certain skeletal elements, or lack multiple articulating joints.
Increased plasma levels of Noggin have been observed in obese mice and in patients with a body mass index over 27. Additionally, it has been shown that Noggin depletion in adipose tissue leads to obesity.
Mechanism of action
The secreted polypeptide noggin, encoded by the NOG gene, binds and inactivates members of the transforming growth factor-beta (TGF-beta) superfamily
|
https://en.wikipedia.org/wiki/Subversive%20Proposal
|
The "Subversive Proposal" was an Internet posting by Stevan Harnad on June 27, 1994 (presented at the 1994 Network Services Conference in London) calling on all authors of "esoteric" research writings to archive their articles for free for everyone online (in anonymous FTP archives or websites). It initiated a series of online exchanges, many of which were collected and published as a book in 1995: Scholarly Journals at the Crossroads: A Subversive Proposal for Electronic Publishing. This led to the creation in 1997 of Cogprints, an open access archive for self-archived articles in the cognitive sciences and in 1998 to the creation of the American Scientist Open Access Forum (initially called the "September98 Forum" until the founding of the Budapest Open Access Initiative which first coined the term "open access"). The Subversive Proposal also led to the development of the GNU EPrints software used for creating OAI-compliant open access institutional repositories, and inspired CiteSeer, a tool to locate and index the resulting eprints.
The proposal was updated gradually across the years, as summarized in the American Scientist Open Access Forum on its 10th anniversary.
A retrospective was written by Richard Poynder. A self-critique was posted on its 15th anniversary in 2009. An online interview of Stevan Harnad was conducted by Richard Poynder on the occasion of the 20th anniversary of the subversive proposal.
|
https://en.wikipedia.org/wiki/Transfusion-related%20immunomodulation
|
Transfusion-related immunomodulation (TRIM) refers to the transient depression of the immune system following transfusion of blood products. This effect has been recognized in groups of individuals who have undergone kidney transplantation or have had multiple miscarriages. Some research studies have shown that, because of this immune depression, blood transfusions increase the risk of infections and cancer recurrence. However, other studies have not shown these differences and the degree of impact transfusion has on infection and tumor recurrence is not well understood. The Blood Products Advisory Committee of the Food and Drug Administration recommends that all transfused blood products undergo leukocyte reduction in order to offset the contribution of donor white blood cells to immune suppression.
|
https://en.wikipedia.org/wiki/Bone%20morphogenetic%20protein%207
|
Bone morphogenetic protein 7 or BMP7 (also known as osteogenic protein-1 or OP-1) is a protein that in humans is encoded by the BMP7 gene.
Function
The protein encoded by this gene is a member of the TGF-β superfamily. Like other members of the bone morphogenetic protein family of proteins, it plays a key role in the transformation of mesenchymal cells into bone and cartilage. It is inhibited by noggin and a similar protein, chordin, which are expressed in the Spemann-Mangold Organizer. BMP7 may be involved in bone homeostasis. It is expressed in the brain, kidneys and bladder.
BMP7 induces the phosphorylation of SMAD1 and SMAD5, which in turn induce transcription of numerous osteogenic genes. It has been demonstrated that BMP7 treatment is sufficient to induce all of the genetic markers of osteoblast differentiation in many cell types.
Role in vertebrate development
In mammalian kidney development, BMP7 acts by inducing the mesenchymal-to-epithelial transition (MET) of the metanephrogenic blastema. The epithelial tissue emerging from this MET process eventually forms the tubules and glomeruli of the nephron. BMP-7 is also important in homeostasis of the adult kidney, where it inhibits the epithelial-mesenchymal transition (EMT). BMP-7 expression is attenuated when the nephron is placed under inflammatory or ischemic stress, leading to EMT, which can result in fibrosis of the kidney. This type of fibrosis often leads to renal failure and is predictive of end-stage renal disease.
BMP7 has been discovered to be crucial in the determination of ventral-dorsal organization in zebrafish. BMP7 causes the expression of ventral phenotypes while its complete inhibition creates a dorsal phenotype. Moreover, BMP7 is eventually partially "turned off" in embryonic development in order to create the dorsal parts of the organism.
In many early developmental experiments using zebrafish, scientists used caBMPR (constitutively active) and tBMP (truncated receptor) to determine the effect of BMP7 in embryogen
|
https://en.wikipedia.org/wiki/Hydrological%20transport%20model
|
A hydrological transport model is a mathematical model used to simulate the flow of rivers, streams, groundwater movement or drainage front displacement, and to calculate water quality parameters. These models generally came into use in the 1960s and 1970s, when demand for numerical forecasting of water quality and drainage was driven by environmental legislation and when widespread access to significant computing power became available. Much of the original model development took place in the United States and United Kingdom, but today these models are refined and used worldwide.
There are dozens of different transport models that can be generally grouped by pollutants addressed, complexity of pollutant sources, whether the model is steady state or dynamic, and time period modeled. Another important designation is whether the model is distributed (i.e. capable of predicting multiple points within a river) or lumped. In a basic model, for example, only one pollutant might be addressed from a simple point discharge into the receiving waters. In the most complex of models, various line source inputs from surface runoff might be added to multiple point sources, treating a variety of chemicals plus sediment in a dynamic environment including vertical river stratification and interactions of pollutants with in-stream biota. In addition, watershed groundwater may also be included. The model is termed "physically based" if its parameters can be measured in the field.
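As a concrete illustration of the simplest case described above, the following Python sketch (not part of the original article; all numbers are hypothetical placeholders) estimates the fully mixed downstream concentration for a single point discharge using a lumped, steady-state mass balance:
def mixed_concentration(q_river, c_river, q_effluent, c_effluent):
    # Steady-state mass balance for one pollutant from a single point discharge.
    # q_* are flows (m3/s), c_* are concentrations (mg/L); returns the fully
    # mixed downstream concentration (mg/L).
    return (q_river * c_river + q_effluent * c_effluent) / (q_river + q_effluent)

# Hypothetical example: a 10 m3/s river at 1 mg/L receiving a 0.5 m3/s
# effluent at 50 mg/L mixes down to roughly 3.3 mg/L.
print(mixed_concentration(10.0, 1.0, 0.5, 50.0))
Real transport models add decay, dispersion and multiple sources on top of such mass balances; this sketch only shows the simplest starting point.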
Often models have separate modules to address individual steps in the simulation process. The most common module is a subroutine for calculation of surface runoff, allowing variation in land use type, topography, soil type, vegetative cover, precipitation and land management practice (such as the application rate of a fertilizer). The concept of hydrological modeling can be extended to other environments such as the oceans, but most commonly (and in this article) the subject of a river watershed is
|
https://en.wikipedia.org/wiki/Bone%20morphogenetic%20protein%202
|
Bone morphogenetic protein 2 or BMP-2 belongs to the TGF-β superfamily of proteins.
Function
BMP-2, like other bone morphogenetic proteins, plays an important role in the development of bone and cartilage. It is involved in the hedgehog pathway, the TGF beta signaling pathway, and in cytokine-cytokine receptor interaction. It is also involved in cardiac cell differentiation and epithelial-to-mesenchymal transition.
Like many other proteins from the BMP family, BMP-2 has been demonstrated to potently induce osteoblast differentiation in a variety of cell types.
BMP-2 may be involved in white adipogenesis and may have metabolic effects.
Interactions
Bone morphogenetic protein 2 has been shown to interact with BMPR1A.
Clinical use and complications
Bone morphogenetic protein 2 is shown to stimulate the production of bone. Recombinant human protein (rhBMP-2) is currently available for orthopaedic usage in the United States. Implantation of BMP-2 is performed using a variety of biomaterial carriers ("metals, ceramics, polymers, and composites") and delivery systems ("hydrogel, microsphere, nanoparticles, and fibers"). While used primarily in orthopedic procedures such as spinal fusion, BMP-2 has also found its way into the field of dentistry.
The use of dual tapered threaded fusion cages and recombinant human bone morphogenetic protein-2 on an absorbable collagen sponge obtained and maintained intervertebral spinal fusion, improved clinical outcomes, and reduced pain after anterior lumbar interbody arthrodesis in patients with degenerative lumbar disc disease. As an adjuvant to allograft bone or as a replacement for harvested autograft, bone morphogenetic proteins (BMPs) appear to improve fusion rates after spinal arthrodesis in both animal models and humans, while reducing the donor-site morbidity previously associated with such procedures.
A study published in 2011 noted "reports of frequent and occasionally catastrophic complications associated with use of [BM
|
https://en.wikipedia.org/wiki/Bone%20morphogenetic%20protein%204
|
Bone morphogenetic protein 4 is a protein that in humans is encoded by the BMP4 gene. BMP4 is found on chromosome 14q22-q23.
BMP4 is a member of the bone morphogenetic protein family, which is part of the transforming growth factor-beta superfamily. The superfamily includes large families of growth and differentiation factors. BMP4 is highly conserved evolutionarily. BMP4 is found in early embryonic development in the ventral marginal zone and in the eye, heart, blood and otic vesicle.
Discovery
Bone morphogenetic proteins were originally identified by an ability of demineralized bone extract to induce endochondral osteogenesis in vivo in an extraskeletal site.
Function
BMP4 is a polypeptide belonging to the TGF-β superfamily of proteins. It, like other bone morphogenetic proteins, is involved in bone and cartilage development, specifically tooth and limb development and fracture repair. This particular family member plays an important role in the onset of endochondral bone formation in humans. It has been shown to be involved in muscle development, bone mineralization, and ureteric bud development.
BMP4 stimulates differentiation of overlying ectodermal tissue.
Bone morphogenetic proteins are known to stimulate bone formation in adult animals. This is thought to occur by inducing osteoblastic commitment and differentiation of stem cells such as mesenchymal stem cells. BMPs are known to play a large role in embryonic development. In the embryo, BMP4 helps establish the dorsal-ventral axis in the Xenopus frog by inducing ventral mesoderm. In mice, targeted inactivation of BMP4 prevents mesoderm formation. BMP4 also establishes dorsal-ventral patterning of the developing neural tube, with the help of BMP7, by inducing dorsal characteristics.
BMP4 also limits the extent of neural differentiation in Xenopus embryos by inducing epidermis formation rather than neural tissue. BMPs can also aid in inducing the lateral characteristics in somites. Somites are required fo
|
https://en.wikipedia.org/wiki/Greek%20ligatures
|
Greek ligatures are graphic combinations of the letters of the Greek alphabet that were used in medieval handwritten Greek and in early printing. Ligatures were used in the cursive writing style and very extensively in later minuscule writing. There were dozens of conventional ligatures. Some of them stood for frequent letter combinations, some for inflectional endings of words, and some were abbreviations of entire words.
History
In early printed Greek from around 1500, many ligatures fashioned after contemporary manuscript hands continued to be used. Important models for this early typesetting practice were the designs of Aldus Manutius in Venice, and those of Claude Garamond in Paris, who created the influential Grecs du roi typeface in 1541. However, the use of ligatures gradually declined during the 17th and 18th centuries and became mostly obsolete in modern typesetting. Among the ligatures that remained in use the longest are the Omicron-Upsilon ligature Ȣ for ου, which resembles an o with a u on top, and the abbreviation ϗ for καί ('and'), which resembles a κ with a downward stroke on the right. The ου ligature is still occasionally used in decorative writing, while the καί abbreviation has some limited usage in functions similar to the Latin ampersand (&). Another ligature that was relatively frequent in early modern printing is a ligature of Ο with ς (a small sigma inside an omicron) for a terminal ος.
The ligature for στ, now called stigma, survived in a special role besides its use as a ligature proper. It took on the function of a number sign for "6", having been visually conflated with the cursive form of the ancient letter digamma, which had this numeral function.
Computer encoding
In the modern computer encoding standard Unicode, the καί abbreviation has been encoded since version 3.0 of the standard (1999). An uppercase version was added in version 5.1 (2008). A lower and upper case "stigma", designed for its numeric use, is also encoded in Unicode. Le
|
https://en.wikipedia.org/wiki/Collaborative%20working%20environment
|
A collaborative working environment (CWE) supports people, such as e-professionals, in their individual and cooperative work. Research in CWE involves focusing on organizational, technical, and social issues.
Background
Working practices in a collaborative working environment evolved from the traditional or geographical co-location paradigm. In a CWE, professionals work together regardless of their geographical location. In this context, e-professionals use a collaborative working environment to provide and share information and exchange views in order to reach a common understanding. Such practices enable effective and efficient collaboration among people with different areas of expertise.
Description
The following applications or services are considered elements of a CWE:
E-mail
Instant messaging
Application sharing
Video conferencing, Web conferencing
Virtual workplace, document management and version control system
Task and workflow management
Wikis edited as a group or community effort (e.g. wiki pages describing concepts to enable a common understanding within a group or community)
Blogging, where entries are categorized by groups, communities, or other concepts supporting collaboration
Overview
The concept of CWE is derived from the idea of virtual work-spaces, and is related to the concept of remote work. It extends the traditional concept of the professional to include any type of knowledge worker who intensively uses information and communications technology (ICT) environments and tools in their working practices. Typically, a group of e-professionals conduct their collaborative work through the use of collaborative working environments (CWE).
CWE refers to online collaboration (such as virtual teams, mass collaboration, and massively distributed collaboration); online communities of practice (such as the open source community); and open innovation principles.
Collaborative work systems
A collaborative workin
|
https://en.wikipedia.org/wiki/Greyout
|
A greyout is a transient loss of vision characterized by a perceived dimming of light and color, sometimes accompanied by a loss of peripheral vision. It is a precursor to fainting or a blackout and is caused by hypoxia (low brain oxygen level), often due to a loss of blood pressure.
Greyouts have a variety of possible causes:
Shock, such as hypovolemia, even in mild form such as when drawing blood.
Standing up suddenly (see orthostatic hypotension), especially if sick, hungover, or experiencing low blood pressure.
Fatigue
Hyperventilation, paradoxically: self-induced hypocapnia, such as in the fainting game or in shallow water blackout.
Overexertion
Panic attack
Recovery is usually rapid. A greyout can be readily reversed by lying down as the cardiovascular system does not need to work against gravity for blood to reach the brain.
A greyout may be experienced by aircraft pilots pulling high positive g-forces as when pulling up into a loop or a tight turn, which forces blood to the lower extremities of the body and lowers blood pressure in the brain. This is the reverse of a redout, or a reddening of the vision, which is the result of negative g-forces caused by performing an outside loop, that is by pushing the nose of the aircraft down. Redouts are potentially dangerous and can cause retinal damage and hemorrhagic stroke. Pilots of high performance aircraft can increase their resistance to greyouts by using a g-suit, which controls the pooling of blood in the lower limbs, but there is no suit yet capable of controlling a redout. In both cases, symptoms may be remedied immediately by easing pressure on the flight controls. Continued or heavy g-force will rapidly progress to g-LOC (g-force induced Loss of Consciousness). Untrained individuals can withstand approximately 4g, while fighter pilots with g-suits are trained to perform 9g maneuvers.
Surprisingly, even during a heavy greyout, where the visual system is severely impaired, pilots can still hear, fe
|
https://en.wikipedia.org/wiki/Boilerplate%20code
|
In computer programming, boilerplate code, or simply boilerplate, refers to sections of code that are repeated in multiple places with little to no variation. When using languages that are considered verbose, the programmer must write a lot of boilerplate code to accomplish only minor functionality.
The need for boilerplate can be reduced through high-level mechanisms such as metaprogramming (which has the computer automatically write the needed boilerplate code or insert it at compile time), convention over configuration (which provides good default values, reducing the need to specify program details in every project) and model-driven engineering (which uses models and model-to-code generators, eliminating the need for manual boilerplate code).
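To illustrate the first of these mechanisms, here is a short Python sketch (not from the original article) in which a metaprogramming facility, the standard-library dataclasses decorator, generates at class-creation time the repetitive methods that would otherwise be boilerplate:
from dataclasses import dataclass

@dataclass
class Point:
    # __init__, __repr__ and __eq__ are generated automatically; without the
    # decorator each would have to be written out by hand for every such class.
    x: float
    y: float

p = Point(1.0, 2.0)
print(p)                      # Point(x=1.0, y=2.0)
print(p == Point(1.0, 2.0))   # True
Convention over configuration and model-driven engineering reduce boilerplate in an analogous way, by deriving the repetitive parts from defaults or from a model rather than from hand-written code.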
Origin
The term arose from the newspaper business. Columns and other pieces that were distributed by print syndicates were sent to subscribing newspapers in the form of prepared printing plates. Because of their resemblance to the metal plates used in the making of boilers, they became known as "boiler plates", and their resulting text—"boilerplate text". As the stories that were distributed by boiler plates were usually "fillers" rather than "serious" news, the term became synonymous with unoriginal, repeated text.
A related term is bookkeeping code, referring to code that is not part of the business logic but is interleaved with it in order to keep data structures updated or handle secondary aspects of the program.
Preamble
One form of boilerplate consists of declarations which, while not part of the program logic or the language's essential syntax, are added to the start of a source file as a matter of custom. The following Perl example demonstrates boilerplate:
#!/usr/bin/perl
use warnings;
use strict;
The first line is a shebang, which identifies the file as a Perl script that can be executed directly on the command line on Unix/Linux systems. The other two are pragmas turning on warnings and strict mode, which are
|
https://en.wikipedia.org/wiki/Nokia%20Networks
|
Nokia Networks (formerly Nokia Solutions and Networks (NSN) and Nokia Siemens Networks (NSN)) is a multinational data networking and telecommunications equipment company headquartered in Espoo, Finland, and a wholly owned subsidiary of Nokia Corporation. It started as a joint venture between Nokia of Finland and Siemens of Germany known as Nokia Siemens Networks. Nokia Networks has operations in around 120 countries. In 2013, Nokia acquired 100% of Nokia Networks by buying all of Siemens' shares. In April 2014, the NSN name was phased out as part of a rebranding process.
History
The company was created as the result of a joint venture between Siemens Communications (minus its Enterprise business unit) and Nokia's Network Business. The formation of the company was publicly announced on 19 June 2006. Nokia Siemens Networks was officially launched at the 3GSM World Congress in Barcelona in February 2007. Nokia Siemens Networks then began full operations on 1 April 2007 and has its headquarters in Espoo, Greater Helsinki, Finland.
In January 2008 Nokia Siemens Networks acquired Israeli company Atrica, a company that builds carrier-class Ethernet transport systems for metro networks. The official release did not disclose terms, however they are thought to be in the region of $100 million. In February 2008 Nokia Siemens Networks acquired Apertio, a Bristol, UK-based mobile network customer management tools provider, for €140 million. With this acquisition Nokia Siemens Networks gained customers in the subscriber management area including Orange, T-Mobile, O2, Vodafone, and Hutchison 3G.
In 2009, according to Siemens, the company retained only a non-controlling financial interest in NSN, with day-to-day operations residing with Nokia.
On 19 July 2010, Nokia Siemens Networks announced it would acquire the wireless-network equipment of Motorola. The acquisition was completed on 29 April 2011 for $975 million in cash. As of the transaction approximately 6,900 employees trans
|
https://en.wikipedia.org/wiki/Hansen%20solubility%20parameter
|
Hansen solubility parameters were developed by Charles M. Hansen in his Ph.D. thesis in 1967 as a way of predicting whether one material will dissolve in another and form a solution. They are based on the idea that like dissolves like, where one molecule is defined as being 'like' another if it bonds to itself in a similar way.
Specifically, each molecule is given three Hansen parameters, each generally measured in MPa^0.5:
δD, the energy from dispersion forces between molecules
δP, the energy from dipolar intermolecular forces between molecules
δH, the energy from hydrogen bonds between molecules.
These three parameters can be treated as co-ordinates for a point in three dimensions, also known as the Hansen space. The nearer two molecules are in this three-dimensional space, the more likely they are to dissolve into each other. To determine whether the parameters of two molecules (usually a solvent and a polymer) are within range, a value called the interaction radius (R0) is given to the substance being dissolved. This value determines the radius of a sphere in Hansen space whose center is the three Hansen parameters. To calculate the distance (Ra) between Hansen parameters in Hansen space the following formula is used:
Ra² = 4(δD1 − δD2)² + (δP1 − δP2)² + (δH1 − δH2)²
Combining this with the interaction radius gives the relative energy difference (RED) of the system:
RED = Ra / R0
If RED < 1, the molecules are alike and will dissolve
If RED = 1, the system will partially dissolve
If RED > 1, the system will not dissolve
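A minimal Python sketch of this calculation (not part of the original article; the numerical values are illustrative placeholders rather than vetted Hansen data):
import math

def hansen_distance(d1, p1, h1, d2, p2, h2):
    # Distance Ra between two points in Hansen space; all inputs in MPa^0.5.
    return math.sqrt(4 * (d1 - d2) ** 2 + (p1 - p2) ** 2 + (h1 - h2) ** 2)

def relative_energy_difference(ra, r0):
    # RED = Ra / R0; values below 1 suggest the solvent dissolves the solute.
    return ra / r0

# Placeholder parameters for a hypothetical solvent and polymer.
ra = hansen_distance(15.5, 10.4, 7.0, 17.0, 8.0, 5.0)
print(relative_energy_difference(ra, r0=8.0))  # about 0.54, i.e. RED < 1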
Uses
Historically Hansen solubility parameters (HSP) have been used in industries such as paints and coatings where understanding and controlling solvent–polymer interactions was vital. Over the years their use has been extended widely to applications such as:
Environmental stress cracking of polymers
Controlled dispersion of pigments, such as carbon black
Understanding of solubility/dispersion properties of carbon nanotubes, buckyballs and quantum dots
Adhesion to polymers
Permeation of solvents and chemicals through plastics to understand i
|
https://en.wikipedia.org/wiki/Chemical%20compound%20microarray
|
A chemical compound microarray is a collection of organic chemical compounds spotted on a solid surface, such as glass and plastic. This microarray format is very similar to DNA microarray, protein microarray and antibody microarray. In chemical genetics research, they are routinely used for searching proteins that bind with specific chemical compounds, and in general drug discovery research, they provide a multiplex way to search potential drugs for therapeutic targets.
There are three different forms of chemical compound microarrays based on the fabrication method. The first covalently immobilizes the organic compounds on the solid surface with diverse linking techniques; this platform, usually called the small molecule microarray, was invented and advanced by Dr. Stuart Schreiber and colleagues. The second spots and dries organic compounds on the solid surface without immobilization; this platform is commercially known as Micro Arrayed Compound Screening (μARCS) and was developed by scientists at Abbott Laboratories. The last spots organic compounds in a homogeneous solution without immobilization or drying; this platform was developed by Dr. Dhaval Gosalia and Dr. Scott Diamond and later commercialized as the DiscoveryDot technology by Reaction Biology Corporation.
Polymer Microarrays
Polymer microarrays have been developed to allow screening for new polymeric materials to direct different tissue lineages. Research has also been directed towards studying the surface chemistry of these arrays to determine which surface chemistries control cell adhesion, although concerns have been raised as to the influence of the substrate on measurements and the questionable statistical interpretation of results.
The lack of control in the production of many of these polymer arrays suggests that any practical application of these technologies will be limited. This is particularly true for the in situ polymerisation of acrylate monom
|
https://en.wikipedia.org/wiki/Crop%20Trust
|
The Crop Trust, officially known as the Global Crop Diversity Trust, is an international nonprofit organization with a secretariat in Bonn, Germany. Its mission is to conserve and make available the world's crop diversity for food security.
Established in 2004, the Crop Trust is the only organization whose sole mission is to safeguard the world’s crop diversity for future food security. Through an endowment fund for crop diversity, the Crop Trust provides financial support for key international and national genebanks that hold collections of diversity for food crops available under the International Treaty for Plant Genetic Resources for Food and Agriculture (ITPGRFA). The organization also provides tools and support for the efficient management of genebanks, facilitates coordination between conserving institutions, and organizes final backup of crop seeds in the Svalbard Global Seed Vault.
Since its establishment, the Crop Trust has raised more than USD 300 million for the Crop Diversity Endowment Fund and supports conservation work in over 80 countries.
Mission
Crop diversity is the biological foundation of agriculture, and is the raw material plant breeders and farmers use to adapt crop varieties to pests and diseases. In the future, this crop diversity will play a central role in helping agriculture adjust to climate change and adapt to water and energy constraints.
History
In 1996, the UN Food and Agriculture Organization (FAO) recognized the need for global coordination for the conservation of the world’s crop diversity. At a conference organized by the FAO, 150 countries launched a Global Plan of Action to coordinate efforts at halting the loss of the world’s agrobiodiversity. The Global Plan of Action formed a major pillar of what would become the International Treaty on Plant Genetic Resources for Food and Agriculture, known as the Plant Treaty. The Plant Treaty brings the diversity of 64 food and forage crops into a multilateral system where the gene
|
https://en.wikipedia.org/wiki/Phosphoric%20monoester%20hydrolases
|
Phosphoric monoester hydrolases (or phosphomonoesterases) are enzymes that catalyse the hydrolysis of O-P bonds by nucleophilic attack of phosphorus by cysteine residues or coordinated metal ions.
They are categorized with the EC number 3.1.3.
Examples include:
acid phosphatase
alkaline phosphatase
fructose-bisphosphatase
glucose-6-phosphatase
phosphofructokinase-2
phosphoprotein phosphatase
calcineurin
6-phytase
See also
phosphodiesterase
phosphatase
|
https://en.wikipedia.org/wiki/Ibn%20al-Banna%27%20al-Marrakushi
|
Ibn al‐Bannāʾ al‐Marrākushī, full name: Abu'l-Abbas Ahmad ibn Muhammad ibn Uthman al-Azdi al-Marrakushi (29 December 1256 – 31 July 1321), was a Maghrebi Muslim polymath who was active as a mathematician, astronomer, Islamic scholar, Sufi and astrologer.
Biography
Ahmad ibn Muhammad ibn Uthman was born in the Qa'at Ibn Nahid Quarter of Marrakesh on 29 or 30 December 1256. His nisba al-Marrakushi refers to his birth and death in his hometown of Marrakesh, and al-Azdi indicates that he was from the large Arab tribe of Azd. His father was a mason, hence the kunya Ibn al-Banna' (lit. 'son of the mason').
Ibn al-Banna' studied a variety of subjects under at least 17 masters: Quran under the qaris Muhammad ibn al-Bashir and shaykh al-Ahdab; ʻilm al-ḥadīth under the qadi al-Jama'a (chief judge) of Fez, Abu al-Hajjaj Yusuf ibn Ahmad ibn Hakam al-Tujibi, Abu Yusuf Ya'qub ibn Abd al-Rahman al-Jazuli and Abu Abd Allah ibn. Fiqh and Usul al-Fiqh under Abu Imran Musa ibn Abi Ali az-Zanati al-Marrakushi and Abu al-Hasan Muhammad ibn Abd al-Rahman al-Maghili, who taught him al-Juwayni's Kitab al-Irshad. He also studied Arabic grammar under Abu Ishaq Ibrahim ibn Abd as-Salam as-Sanhaji and Muhammad ibn Ali ibn Yahya as-Sharif al-Marrakushi, who also taught him Euclid's Elements; ʿArūḍ and ʿilm al-farāʾiḍ under Abu Bakr Muhammad ibn Idris ibn Malik al-Quda'i al-Qallusi; and arithmetic under Muhammad ibn Ali, known as Ibn Ḥajala. Ibn al-Banna' also studied astronomy under Abu 'Abdallah Muhammad ibn Makhluf as-Sijilmassi, and medicine under al-Mirrīkh.
He is known to have attached himself to the founder of the Hazmiriyya zawiya and sufi saint of Aghmat, Abu Zayd Abd al-Rahman al-Hazmiri, who guided his arithmetic skills toward divinational predictions.
Ibn al-Banna' taught classes in Marrakesh and some of his students were: Abd al-Aziz ibn Ali al-Hawari al-Misrati (d.1344), Abd al-Rahman ibn Sulayman al-Laja'i (d. 1369) and Muhammad ibn Ali ibn Ibrahim al-Abli (d. 1356).
He died a
|
https://en.wikipedia.org/wiki/Energy%20gap
|
In solid-state physics, an energy gap or band gap is an energy range in a solid where no electron states exist, i.e. an energy range where the density of states vanishes.
Especially in condensed matter physics, an energy gap is often known more abstractly as a spectral gap, a term which need not be specific to electrons or solids.
Band gap
If an energy gap exists in the band structure of a material, it is called a band gap. The physical properties of semiconductors are to a large extent determined by their band gaps, but for insulators and metals, too, the band structure—and thus any possible band gaps—governs their electronic properties.
Superconductors
For superconductors the energy gap is a region of suppressed density of states around the Fermi energy, with the size of the energy gap much smaller than the energy scale of the band structure. The superconducting energy gap is a key aspect in the theoretical description of superconductivity and thus features prominently in BCS theory. Here, the size of the energy gap indicates the energy gain for two electrons upon formation of a Cooper pair. If a conventional superconducting material is cooled from its metallic state (at higher temperatures) into the superconducting state, then the superconducting energy gap is absent above the critical temperature Tc; it starts to open upon entering the superconducting state at Tc, and it grows upon further cooling.
BCS theory predicts that the size of the superconducting energy gap for conventional superconductors at zero temperature scales with their critical temperature Tc as 2Δ(0) ≈ 3.5 kB·Tc (with Boltzmann constant kB).
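As a short illustrative calculation of this relation in Python (not part of the original article; niobium's critical temperature is used purely as an example):
K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def bcs_gap_mev(tc_kelvin):
    # Zero-temperature gap Delta(0) in meV from 2*Delta(0) = 3.528 * kB * Tc.
    return 0.5 * 3.528 * K_B_EV * tc_kelvin * 1000.0

print(bcs_gap_mev(9.2))  # roughly 1.4 meV for Tc = 9.2 K (niobium)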
Pseudogap
If the density of states is suppressed near the Fermi energy but does not fully vanish, then this suppression is called pseudogap. Pseudogaps are experimentally observed in a variety of material classes; a prominent example are the cuprate high-temperature superconductors.
Hard gap vs. soft gap
If the density of states vanishes over an extended energy range, then this i
|
https://en.wikipedia.org/wiki/Phosphoenolpyruvate%20carboxylase
|
Phosphoenolpyruvate carboxylase (also known as PEP carboxylase, PEPCase, or PEPC; PDB ID: 3ZGE) is an enzyme in the family of carboxy-lyases found in plants and some bacteria that catalyzes the addition of bicarbonate (HCO3−) to phosphoenolpyruvate (PEP) to form the four-carbon compound oxaloacetate and inorganic phosphate:
PEP + HCO3− → oxaloacetate + Pi
This reaction is used for carbon fixation in CAM (crassulacean acid metabolism) and C4 organisms, as well as to regulate flux through the citric acid cycle (also known as the Krebs or TCA cycle) in bacteria and plants. The enzyme structure and its two-step catalytic, irreversible mechanism have been well studied. PEP carboxylase is highly regulated, both by phosphorylation and allostery.
Enzyme structure
The PEP carboxylase enzyme is present in plants and some types of bacteria, but not in fungi or animals (including humans). The genes vary between organisms, but are strictly conserved around the active and allosteric sites discussed in the mechanism and regulation sections. Tertiary structure of the enzyme is also conserved.
The crystal structure of PEP carboxylase has been determined in multiple organisms, including Zea mays (maize) and Escherichia coli. The overall enzyme exists as a dimer-of-dimers: two identical subunits closely interact to form a dimer through salt bridges between arginine (R438 - exact positions may vary depending on the origin of the gene) and glutamic acid (E433) residues. This dimer assembles (more loosely) with another of its kind to form the four-subunit complex. The monomer subunits are mainly composed of alpha helices (65%) and have a mass of 106 kDa each. The sequence length is about 966 amino acids.
The enzyme active site is not completely characterized. It includes a conserved aspartic acid (D564) and a glutamic acid (E566) residue that non-covalently bind a divalent metal cofactor ion through the carboxylic acid functional groups. This metal ion can be magnesium, manganese or co
|
https://en.wikipedia.org/wiki/The%20Sean-Bhean%20bhocht
|
"The Sean-Bhean bhocht" (; Irish for "Poor old woman"), often spelled phonetically as "Shan Van Vocht", is a traditional Irish song from the period of the Irish Rebellion of 1798, and dating in particular to the lead up to a French expedition to Bantry Bay, that ultimately failed to get ashore in 1796.
The Sean-Bhean bhocht is used to personify Ireland, a poetic motif which heralds back to the aisling of native Irish language poetry.
Many different versions of the song have been composed by balladeers over the years, with the lyrics adapted to reflect the political climate at the time of composition. The title of the song, tune and narration of the misfortunes of the Shean Bhean bhocht remain a constant however, and this version, probably the best known, expresses confidence in the victory of the United Irishmen in the looming rebellion upon the arrival of French aid.
W. B. Yeats and Augusta, Lady Gregory based their 1902 nationalist play, Cathleen Ní Houlihan, on the legend dramatized in this song. A more recent version of the character is the speaker in Tommy Makem's "Four Green Fields," a song composed in the modern context of Northern Ireland's partition from the Republic of Ireland. The character also appears as the old lady selling milk in the opening passage of Ulysses by James Joyce.
See also
The Shan Van Vocht as the title of a late nineteenth century Irish nationalist novel and magazine.
Mise Éire
Róisín Dubh (song)
Hibernia (personification)
Kathleen Ni Houlihan
Four Green Fields
|
https://en.wikipedia.org/wiki/Boltzmann%20Medal
|
The Boltzmann Medal (or Boltzmann Award) is a prize awarded to physicists that obtain new results concerning statistical mechanics; it is named after the celebrated physicist Ludwig Boltzmann. The Boltzmann Medal is awarded once every three years by the Commission on Statistical Physics of the International Union of Pure and Applied Physics, during the STATPHYS conference.
The award consists of a gilded medal; its front carries the inscription Ludwig Boltzmann, 1844–1906.
Recipients
All the winners are influential physicists or mathematicians whose contributions to statistical physics have been relevant in the past decades. Institutions with multiple recipients are Sapienza University of Rome (3), and École Normale Supérieure, Cornell University, the University of Cambridge and Princeton University (2 each).
The medal cannot be awarded to a scientist who has already won a Nobel Prize. Two recipients of the Boltzmann Medal have gone on to win the Nobel Prize in Physics: Kenneth G. Wilson (1982) and Giorgio Parisi (2021).
See also
List of physics awards
|
https://en.wikipedia.org/wiki/Intersection%20theorem
|
In projective geometry, an intersection theorem or incidence theorem is a statement concerning an incidence structure – consisting of points, lines, and possibly higher-dimensional objects and their incidences – together with a pair of objects A and B (for instance, a point and a line). The "theorem" states that, whenever a set of objects satisfies the incidences (i.e. can be identified with the objects of the incidence structure in such a way that incidence is preserved), then the objects A and B must also be incident. An intersection theorem is not necessarily true in all projective geometries; it is a property that some geometries satisfy but others don't.
For example, Desargues' theorem can be stated using the following incidence structure:
Points:
Lines:
Incidences (in addition to obvious ones such as ):
The implication is then —that point is incident with line .
Famous examples
Desargues' theorem holds in a projective plane if and only if the plane is the projective plane over some division ring (skewfield). The projective plane is then called Desarguesian.
A theorem of Amitsur and Bergman states that, in the context of desarguesian projective planes, for every intersection theorem there is a rational identity such that the plane satisfies the intersection theorem if and only if the division ring satisfies the rational identity.
Pappus's hexagon theorem holds in a Desarguesian projective plane if and only if the underlying division ring is a field; it corresponds to the commutativity identity ab = ba.
Fano's axiom (which states that a certain intersection does not happen) holds in the plane if and only if the division ring has characteristic different from 2; it corresponds to the identity 1 + 1 ≠ 0.
|
https://en.wikipedia.org/wiki/Continua%20Health%20Alliance
|
Continua Health Alliance is an international non-profit, open industry group of nearly 240 healthcare providers, communications, medical, and fitness device companies.
Continua was a founding member of Personal Connected Health Alliance which was launched in February 2014 with other founding members mHealth SUMMIT and HIMSS.
Overview
Continua Health Alliance is an international not-for-profit industry organization enabling end-to-end, plug-and-play connectivity of devices and services for personal health management and healthcare delivery. Its mission is to empower information-driven health management and facilitate the incorporation of health and wellness into the day-to-day lives of consumers. Its activities include a certification and brand support program, events and collaborations to support technology and clinical innovation, as well as outreach to employers, payers, governments and care providers. With nearly 220 member companies reaching across the globe, Continua comprises technology, medical device and healthcare industry leaders and service providers dedicated to making personal connected health a reality.
Continua Health Alliance is working toward establishing systems of interoperable telehealth devices and services in three major categories: chronic disease management, aging independently, and health and physical fitness.
Devices and services
Continua Health Alliance version 1 design guidelines are based on proven connectivity technical standards and include Bluetooth for wireless and USB for wired device connection. The group released the guidelines to the public in June 2009.
The group is establishing a product certification program using its recognizable logo, the Continua Certified Logo program, signifying that the product is interoperable with other Continua-certified products. Products made under Continua Health Alliance guidelines will provide consumers with increased assurance of interoperability between devices, enabling them to more easi
|
https://en.wikipedia.org/wiki/El%20Jueves
|
El Jueves (Spanish for "Thursday") is a Spanish weekly satirical magazine based in Barcelona.
Throughout most of its life, El Jueves's masthead has featured the tagline "la revista que sale los miércoles" ("the magazine that comes out on Wednesdays"). Its mascot is a nameless jester, known simply as "el bufón", who is always fully naked, except for his bell-bearing hat.
History
El Jueves debuted on 27 May 1977, at a time when satirical magazines were highly popular in Spain despite the scant freedom of the press. Its founder, Josep Ilario, creator of other humor magazines such as Barrabás and Por favor, wished El Jueves to be an adult version of Bruguera's model of children's magazines, made up of character-focused comic strips lampooning stereotypes of contemporary Spanish society. El Jueves was also inspired by La Codorniz.
Its first editors, cartoonists Tom, Romeu and J. L. Martín, drew inspiration from French magazines such as Hara-Kiri and Charlie Hebdo, which they admired for their extremely irreverent tone. Its first director was journalist José Luis Erviti. Among the contributors in the first issue was Joaquim Aubert "Kim", whose comic strip "Martínez El Facha" (an archetypal Spanish Falange militant and Franco nostalgic) had the longest run in the history of the magazine, appearing without interruption for 1,972 weeks.
Some other of its earliest and most emblematic contributors were Óscar Nebreda, Ventura y Nieto, Gin, Mariel, and Ramón Tosas Ivà, whose most successful comic-strip, starring the street-wise delinquent "Makinavaja", has been adapted into a play, two feature films, and a television series.
The magazine was acquired by publishing group Grupo Zeta in October 1977. In 1982 Grupo Zeta sold El Jueves to its directors J. L. Martín, Óscar Nebreda and Gin, who went on to incorporate Ediciones El Jueves. Throughout the 80s, 90s and early 2000s, their company grew vastly and published several other magazines with a spin-off spirit, such as Puta Mili and Mister K. It also expanded into film and t
|
https://en.wikipedia.org/wiki/Bechgaard%20salt
|
In organic chemistry, a Bechgaard salt is any one of a number of organic charge-transfer complexes that exhibit superconductivity at low temperatures. They are named for chemist Klaus Bechgaard, who was one of the first scientists to synthesize them and demonstrate their superconductivity with the help of physicist Denis Jérome. Most Bechgaard salt superconductors are extremely low temperature, and lose superconductivity above the 1–2 K range, although the most successful compound in this class superconducts up to almost 12 K.
All Bechgaard salts are formed using a small, planar organic molecule as an electron donor, with any of a number of electron acceptors (such as perchlorate, ClO4−, or tetracyanoethylene, TCNE). All the organic electron donors contain multiply conjugated heterocycles with a number of properties, including planarity, low ionization potential and good orbital overlap between heteroatoms in neighboring donor molecules. These properties help the final salt conduct electrons by shuttling them through the orbital vacancies left in the donor molecules.
All Bechgaard salts have a variation on a single tetrathiafulvalene motif—different superconductors have been made with appendages to the motif, or using a tetraselenafulvalene center instead (which is a related compound), but all bear this general structural similarity.
There are a wide range of other organic superconductors including many other charge-transfer complexes.
See also
Superconductivity
Tetrathiafulvalene
|
https://en.wikipedia.org/wiki/Chiral%20derivatizing%20agent
|
In analytical chemistry, a chiral derivatizing agent (CDA), also known as a chiral resolving reagent, is a derivatization reagent that is a chiral auxiliary used to convert a mixture of enantiomers into diastereomers in order to analyze the quantities of each enantiomer present and determine the optical purity of a sample. Analysis can be conducted by spectroscopy or by chromatography. Some analytical techniques such as HPLC and NMR, in their most common forms, cannot distinguish enantiomers within a sample, but can distinguish diastereomers. Therefore, converting a mixture of enantiomers to a corresponding mixture of diastereomers can allow analysis. The use of chiral derivatizing agents has declined with the popularization of chiral HPLC. Besides analysis, chiral derivatization is also used for chiral resolution, the actual physical separation of the enantiomers.
History
Since NMR spectroscopy has been available to chemists, there have been numerous studies on the applications of this technique. One of these noted the difference in the chemical shift (i.e. the distance between the peaks) of two diastereomers. Conversely, two compounds that are enantiomers have the same NMR spectral properties. It was reasoned that if a mix of enantiomers could be converted into a mix of diastereomers by bonding them to another chemical that was itself chiral, it would be possible to distinguish this new mixture using NMR, and therefore learn about the original enantiomeric mixture. The first popular example of this technique was published in 1969 by Harry S. Mosher. The chiral agent used was a single enantiomer of MTPA (α-methoxy-α-(trifluoromethyl)phenylacetic acid), also known as Mosher's acid. The corresponding acid chloride is also known as Mosher's acid chloride, and the resultant diastereomeric esters are known as Mosher's esters. Another system is Pirkle's Alcohol developed in 1977.
Requirements
The general use and design of CDAs obey the following rules so that the CD
|
https://en.wikipedia.org/wiki/Nomina%20sacra
|
In Christian scribal practice, nomina sacra (singular: nomen sacrum, from the Latin for "sacred name") is the abbreviation of several frequently occurring divine names or titles, especially in Greek manuscripts of the Bible. A nomen sacrum consists of two or more letters from the original word spanned by an overline.
Biblical scholar and textual critic Bruce M. Metzger lists 15 such words treated as nomina sacra from Greek papyri: the Greek counterparts of God, Lord, Jesus, Christ, Son, Spirit, David, Cross, Mother, Father, Israel, Savior, Man, Jerusalem, and Heaven. These nomina sacra are all found in Greek manuscripts of the 3rd century and earlier, except Mother, which appears in the 4th. All 15 occur in Greek manuscripts later than the 4th century.
Nomina sacra also occur in some form in Latin, Coptic, Armenian (indicated by the pativ), Gothic, Old Nubian, and Cyrillic (indicated by the titlo).
Origin and development
Nomina sacra are consistently observed in even the earliest extant Christian writings, along with the codex form rather than the roll, implying that when these were written, in approximately the second century, the practice had already been established for some time. However, it is not known precisely when and how the nomina sacra first arose.
The initial system of nomina sacra apparently consisted of just four or five words, called nomina divina: the Greek words for Jesus, Christ, Lord, God, and possibly Spirit. The practice quickly expanded to a number of other words regarded as sacred.
In the system of nomina sacra that came to prevail, abbreviation is by contraction, meaning that the first and last letter (at least) of each word are used. In a few early cases, an alternate practice is seen of abbreviation by suspension, meaning that the initial two letters (at least) of the word are used; e.g., in one early papyrus the opening verses of Revelation write "Jesus Christ" using only the initial letters of each word. Contraction, however, offered the practical advantage of indicating the case of the abbrevi
|
https://en.wikipedia.org/wiki/Cyclin%20B
|
Cyclin B is a member of the cyclin family. Cyclin B is a mitotic cyclin. The amount of cyclin B (which binds to Cdk1) and the activity of the cyclin B-Cdk complex rise through the cell cycle until mitosis, where they fall abruptly due to degradation of cyclin B (Cdk1 is constitutively present). The complex of Cdk and cyclin B is called maturation promoting factor or mitosis promoting factor (MPF).
Function
Cyclin B is necessary for the progression of the cells into and out of M phase of the cell cycle.
At the end of S phase, the phosphatase cdc25c dephosphorylates tyrosine 15, and this activates the cyclin B/CDK1 complex. Upon activation, the complex is shuttled to the nucleus, where it serves as the trigger for entry into mitosis. However, if DNA damage is detected, alternative proteins are activated, resulting in the inhibitory phosphorylation of cdc25c, and cyclin B/CDK1 is therefore not activated. In order for the cell to progress out of mitosis, the degradation of cyclin B is necessary.
The cyclin B/CDK1 complex also interacts with a variety of other key proteins and pathways which regulate cell growth and progression of mitosis. Cross-talk between many of these pathways links cyclin B levels indirectly to the induction of apoptosis. The cyclin B/CDK1 complex plays a critical role in the expression of the survival signal survivin. Survivin is necessary for proper creation of the mitotic spindle, which strongly affects cell viability; therefore, when cyclin B levels are disrupted, cells experience difficulty polarizing. A decrease in survivin levels and the associated mitotic disarray triggers apoptosis via a caspase 3-mediated pathway.
Role in Cancer
Cyclin B plays an integral role in many types of cancer. Hyperplasia (uncontrolled cell growth) is one of the hallmarks of cancer. Because cyclin B is necessary for cells to enter mitosis and therefore necessary for cell division, cyclin B levels are often de-regulated in tumors. When cyclin B levels are elevated, cells
|
https://en.wikipedia.org/wiki/Cyclin%20A
|
Cyclin A is a member of the cyclin family, a group of proteins that function in regulating progression through the cell cycle. The stages that a cell passes through that culminate in its division and replication are collectively known as the cell cycle. Since the successful division and replication of a cell is essential for its survival, the cell cycle is tightly regulated by several components to ensure efficient and error-free progression. One such regulatory component is cyclin A, which plays a role in the regulation of two different cell cycle stages.
Types
Cyclin A was first identified in 1983 in sea urchin embryos. Since its initial discovery, homologues of cyclin A have been identified in numerous eukaryotes including Drosophila, Xenopus, mice, and in humans but has not been found in lower eukaryotes like yeast. The protein exists in both an embryonic form and somatic form. A single cyclin A gene has been identified in Drosophila while Xenopus, mice and humans contain two distinct types of cyclin A: A1, the embryonic-specific form, and A2, the somatic form. Cyclin A1 is prevalently expressed during meiosis and early on in embryogenesis. Cyclin A2 is expressed in dividing somatic cells.
Role in cell cycle progression
Cyclin A, along with the other members of the cyclin family, regulates cell cycle progression through physically interacting with cyclin-dependent kinases (CDKs), which thereby activates the enzymatic activity of its CDK partner.
CDK partner association
The interaction between the cyclin box, a region conserved across cyclins, and a region of the CDK, called the PSTAIRE, confers the foundation of the cyclin-CDK complex. Cyclin A is the only cyclin that regulates multiple steps of the cell cycle. Cyclin A can regulate multiple cell cycle steps because it associates with, and thereby activates, two distinct CDKs – CDK2 and CDK1. Depending on which CDK partner cyclin A binds, the cell will continue through the S phase o
|
https://en.wikipedia.org/wiki/Cyclin%20E
|
Cyclin E is a member of the cyclin family.
Cyclin E binds to G1 phase Cdk2, which is required for the transition from G1 to S phase of the cell cycle that determines initiation of DNA duplication. The Cyclin E/CDK2 complex phosphorylates p27Kip1 (an inhibitor of Cyclin D), tagging it for degradation, thus promoting expression of Cyclin A, allowing progression to S phase.
Functions of Cyclin E
Like all cyclin family members, cyclin E forms complexes with cyclin-dependent kinases. In particular, Cyclin E binds with CDK2. Cyclin E/CDK2 regulates multiple cellular processes by phosphorylating numerous downstream proteins.
Cyclin E/CDK2 plays a critical role in the G1 phase and in the G1-S phase transition. Cyclin E/CDK2 phosphorylates retinoblastoma protein (Rb) to promote G1 progression. Hyper-phosphorylated Rb no longer interacts with the E2F transcription factor, thus releasing E2F to promote expression of genes that drive cells through G1 into S phase. Cyclin E/CDK2 also phosphorylates p27 and p21 during the G1 and S phases, respectively. Smad3, a key mediator of the TGF-β pathway that inhibits cell cycle progression, can be phosphorylated by cyclin E/CDK2. The phosphorylation of Smad3 by cyclin E/CDK2 inhibits its transcriptional activity and ultimately facilitates cell cycle progression. CBP/p300 and E2F-5 are also substrates of cyclin E/CDK2. Phosphorylation of these two proteins stimulates transcriptional events during cell cycle progression. Cyclin E/CDK2 can phosphorylate p220(NPAT) to promote histone gene transcription during cell cycle progression.
Apart from its function in cell cycle progression, cyclin E/CDK2 plays a role in the centrosome cycle. This function is performed by phosphorylating nucleophosmin (NPM). NPM is then released from binding to an unduplicated centrosome, thereby triggering duplication. CP110 is another cyclin E/CDK2 substrate, which is involved in centriole duplication and centrosome separation. Cyclin E/CDK2 has also been shown to
|
https://en.wikipedia.org/wiki/DNA%20field-effect%20transistor
|
A DNA field-effect transistor (DNAFET) is a field-effect transistor which uses the field-effect due to the partial charges of DNA molecules to function as a biosensor. The structure of DNAFETs is similar to that of MOSFETs, with the exception of the gate structure which, in DNAFETs, is replaced by a layer of immobilized ssDNA (single-stranded DNA) molecules which act as surface receptors. When complementary DNA strands hybridize to the receptors, the charge distribution near the surface changes, which in turn modulates current transport through the semiconductor transducer.
Arrays of DNAFETs can be used for detecting single nucleotide polymorphisms (which cause many hereditary diseases) and for DNA sequencing. Their main advantage compared with the optical detection methods in common use today is that they do not require labeling of molecules. Furthermore, they work continuously and in (near) real time. DNAFETs are highly selective, since only specific binding modulates charge transport.
|
https://en.wikipedia.org/wiki/Yrast
|
Yrast is a technical term in nuclear physics that refers to a state of a nucleus with a minimum of energy (when it is least excited) for a given angular momentum. Yr is a Swedish adjective sharing the same root as the English whirl. Yrast is the superlative of yr and can be translated as whirlingest, although it literally means "dizziest" or "most bewildered". The yrast levels are vital to understanding reactions, such as off-center heavy ion collisions, that result in high-spin states.
Yrare is the comparative of yr and is used to refer to the second-least energetic state of a given angular momentum.
Background
An unstable nucleus may decay in several different ways: it can eject a neutron, proton, alpha particle, or other fragment; it can emit a gamma ray; it can undergo beta decay. Because of the relative strengths of the fundamental interactions associated with those processes (the strong interaction, electromagnetism, and the weak interaction respectively), they usually occur with frequencies in that order. Theoretically, a nucleus has a very small probability of emitting a gamma ray even if it could eject a neutron, and beta decay rarely occurs unless both of the other two pathways are highly unlikely.
In some instances, however, predictions based on this model underestimate the total amount of energy released in the form of gamma rays; that is, nuclei appear to have more than enough energy to eject neutrons, but decay by gamma emission instead. The discrepancy is accounted for by the energy bound up in the nuclear angular momentum, and documentation and calculation of yrast levels for a given system can be used to analyze such situations.
The energy stored in the angular momentum of an atomic nucleus can also be responsible for the emission of larger-than-expected particles, such as alpha particles over single nucleons, because they can carry away angular momentum more effectively. This is not the only reason alpha particles are preferentially emitted, though; anoth
|
https://en.wikipedia.org/wiki/Heterodont
|
In anatomy, a heterodont (from Greek, meaning 'different teeth') is an animal which possesses more than a single tooth morphology.
In vertebrates, heterodont pertains to animals whose teeth are differentiated into different forms. For example, members of the Synapsida generally possess incisors, canines ("dogteeth"), premolars, and molars. The presence of heterodont dentition is evidence of some degree of feeding and/or hunting specialization in a species. In contrast, homodont or isodont dentition refers to a set of teeth that all possess the same morphology.
In invertebrates, the term heterodont refers to a condition where teeth of differing sizes occur in the hinge plate, a part of the Bivalvia.
|
https://en.wikipedia.org/wiki/Preimplantation%20genetic%20haplotyping
|
Preimplantation genetic haplotyping (PGH) is a clinical method of preimplantation genetic diagnosis (PGD) used to determine the presence of single gene disorders in offspring. PGH provides a more feasible method of gene location than whole-genome association experiments, which are expensive and time-consuming.
PGH differs from common PGD methods such as fluorescence in situ hybridization (FISH) and polymerase chain reaction (PCR) for two primary reasons. First, rather than focusing on the genetic makeup of an embryo, PGH compares the genomes of affected and unaffected members of previous generations. This examination of generational variation then allows a haplotype of genetic markers statistically associated with the target disease to be identified, rather than merely searching for a mutation. PGH is often used to reinforce other methods of genetic testing, and is considered more accurate than certain more common PGD methods because it has been found to reduce the risk of misdiagnosis. Studies have found that misdiagnoses due to allele dropout (ADO), one of the most common causes of interpretation error, can be almost eliminated through the use of PGH. Further, in the case of mutation due to translocation, PGH is able to detect chromosome abnormality to its full extent by differentiating between embryos carrying balanced forms of a translocation and those carrying the homologous normal chromosomes. This is an advantage because PGD methods such as FISH are able to reveal whether an embryo will express the phenotypic difference, but not whether an embryo may be a carrier. In 2015, PGH was used in conjunction with a whole-genome amplification (WGA) process to not only diagnose disease but also distinguish meiotic segregation errors from mitotic ones.
Studies are being continually performed in an attempt to utilize and improve PGD methods since their initial invention. It has become increasingly popular because it grants individuals the option of detecting embryo abnor
|
https://en.wikipedia.org/wiki/Konrad%20Knopp
|
Konrad Hermann Theodor Knopp (22 July 1882 – 20 April 1957) was a German mathematician who worked on generalized limits and complex functions.
Family and education
Knopp was born in 1882 in Berlin to Paul Knopp (1845–1904), a businessman in manufacturing, and Helene (1857–1923), née Ostertun, whose own father was a butcher. Paul's hometown of Neustettin, then part of Germany, became Polish territory after the Second World War and is now called Szczecinek. In 1910, Konrad married the painter Gertrud Kressner (1879–1974). They had a daughter, Ortrud Knopp (1911–1976), whose children were Willfried Spohn (1944–2012), Herbert Spohn (born 1946) and Wolfgang Spohn (born 1950), and a son, Ingolf Knopp (1915–2008), whose children were Brigitte Knopp (born 1952) and Werner Knopp (born 1954).
Konrad was primarily educated in Berlin, with a brief sojourn at the University of Lausanne in 1901 for a single semester, before settling at the University of Berlin, where he remained for his doctoral studies. His doctoral thesis, entitled Grenzwerte von Reihen bei der Annäherung an die Konvergenzgrenze, was supervised by Schottky and Frobenius; he received his PhD in 1907.
Travels, teaching, and military career
Knopp traveled widely in Asia, taking teaching jobs at the commercial college in Nagasaki, Japan (1908–09), and at the German-Chinese college in Qingdao, China (1910–11), and spending some time in India and China following his stay in Japan. His wedding to Kressner, the daughter of Colonel Karl Kressner and Hedwig Rebling, took place in Germany between these periods. After Qingdao he returned to Germany for good and taught at military academies while writing his habilitation thesis for Berlin University.
During the First World War he was an officer and was wounded at the beginning of the war, which resulted in his discharge from the army; by the autumn of 1914 he was teaching at Berlin University. In the following year he was appointed as an extraordinary professor a
|
https://en.wikipedia.org/wiki/Distributed%20amplifier
|
Distributed amplifiers are circuit designs that incorporate transmission line theory into traditional amplifier design to obtain a larger gain-bandwidth product than is realizable by conventional circuits.
History
The design of the distributed amplifiers was first formulated by William S. Percival in 1936. In that year Percival proposed a design by which the transconductances of individual vacuum tubes could be added linearly without lumping their element capacitances at the input and output, thus arriving at a circuit that achieved a gain-bandwidth product greater than that of an individual tube. Percival's design did not gain widespread awareness however, until a publication on the subject was authored by Ginzton, Hewlett, Jasberg, and Noe in 1948. It is to this later paper that the term distributed amplifier can actually be traced. Traditionally, DA design architectures were realized using vacuum tube technology.
Current technology
More recently, III-V semiconductor technologies such as GaAs and InP have been used. These offer superior performance owing to their higher electron mobility, higher saturated electron velocity, higher breakdown voltages (a consequence of their wider bandgaps), and higher-resistivity substrates. The latter contributes much to the availability of higher quality-factor (Q-factor, or simply Q) integrated passive devices in the III-V semiconductor technologies.
To meet the marketplace demands on cost, size, and power consumption of monolithic microwave integrated circuits (MMICs), research continues in the development of mainstream digital bulk-CMOS processes for such purposes. The continuous scaling of feature sizes in current IC technologies has enabled microwave and mm-wave CMOS circuits to directly benefit from the resulting increased unity-gain frequencies of the scaled technology. This device scaling, along with the advanced process control available in today's technologies, has recently made it possible to reach a transition frequency (ft) of 170
|
https://en.wikipedia.org/wiki/N-vector
|
The n-vector representation (also called geodetic normal or ellipsoid normal vector) is a three-parameter non-singular representation well-suited for replacing geodetic coordinates (latitude and longitude) for horizontal position representation in mathematical calculations and computer algorithms.
Geometrically, the n-vector for a given position on an ellipsoid is the outward-pointing unit vector that is normal in that position to the ellipsoid. For representing horizontal positions on Earth, the ellipsoid is a reference ellipsoid and the vector is decomposed in an Earth-centered Earth-fixed coordinate system. It behaves smoothly at all Earth positions, and it holds the mathematical one-to-one property.
More generally, the concept can be applied to representing positions on the boundary of a strictly convex bounded subset of k-dimensional Euclidean space, provided that the boundary is a differentiable manifold. In this general case, the n-vector consists of k parameters.
General properties
A normal vector to a strictly convex surface can be used to uniquely define a surface position. n-vector is an outward-pointing normal vector with unit length used as a position representation.
For most applications the surface is the reference ellipsoid of the Earth, and thus n-vector is used to represent a horizontal position. Hence, the angle between n-vector and the equatorial plane corresponds to geodetic latitude, as shown in the figure.
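The correspondence just described can be written out directly. The following is a minimal C sketch (the function name, the axis convention, and the example position are illustrative assumptions, not taken from the article):
#include <math.h>
#include <stdio.h>

/* Sketch: the n-vector (ellipsoid normal, decomposed in Earth-centered
 * Earth-fixed axes) from geodetic latitude and longitude in degrees.
 * The convention assumed here puts z toward the north pole and x toward
 * latitude 0, longitude 0; conventions vary. */
static void n_vector(double lat_deg, double lon_deg, double n[3])
{
    const double deg = 3.14159265358979323846 / 180.0;  /* degrees to radians */
    double lat = lat_deg * deg, lon = lon_deg * deg;
    n[0] = cos(lat) * cos(lon);
    n[1] = cos(lat) * sin(lon);
    n[2] = sin(lat);   /* geodetic latitude is the angle to the equatorial plane */
}

int main(void)
{
    double n[3];
    n_vector(60.0, 5.0, n);   /* an arbitrary example position */
    printf("n = [%.6f, %.6f, %.6f]\n", n[0], n[1], n[2]);
    return 0;
}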
A surface position has two degrees of freedom, and thus two parameters are sufficient to represent any position on the surface. On the reference ellipsoid, latitude and longitude are common parameters for this purpose, but like all two-parameter representations, they have singularities. This is similar to orientation, which has three degrees of freedom, but all three-parameter representations have singularities. In both cases the singularities are avoided by adding an extra parameter, i.e. to use n-vector (three parameters) to rep
|
https://en.wikipedia.org/wiki/Power%20automorphism
|
In mathematics, in the realm of group theory, a power automorphism of a group is an automorphism that takes each subgroup of the group to within itself. It is worth noting that a power automorphism of an infinite group may not restrict to an automorphism on each subgroup. For instance, the automorphism of the additive group of rational numbers that sends each number to its double is a power automorphism even though it does not restrict to an automorphism on each subgroup.
Alternatively, power automorphisms are characterized as automorphisms that send each element of the group to some power of that element; this explains the choice of the term power. The power automorphisms of a group G form a subgroup of the full automorphism group of G.
A universal power automorphism is a power automorphism where the power to which each element is raised is the same. For instance, each element may go to its cube. Here are some facts about the powering index:
The powering index must be relatively prime to the order of each element. In particular, it must be relatively prime to the order of the group, if the group is finite.
If the group is abelian, any powering index works.
If the powering index 2 or -1 works, then the group is abelian.
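A short check of the last fact, sketched here as a standard argument: if $x \mapsto x^{-1}$ is an automorphism, then $x^{-1}y^{-1} = (xy)^{-1} = y^{-1}x^{-1}$ for all $x, y$, and taking inverses gives $xy = yx$, so the group is abelian. Similarly, if $x \mapsto x^{2}$ is an automorphism, then $xyxy = (xy)^{2} = x^{2}y^{2}$, and cancelling $x$ on the left and $y$ on the right gives $yx = xy$.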
The group of power automorphisms commutes with the group of inner automorphisms when viewed as subgroups of the automorphism group. Thus, in particular, power automorphisms that are also inner must arise as conjugations by elements in the second group of the upper central series.
|
https://en.wikipedia.org/wiki/IA%20automorphism
|
In mathematics, in the realm of group theory, an IA automorphism of a group is an automorphism that acts as identity on the abelianization. The abelianization of a group is its quotient by its commutator subgroup. An IA automorphism is thus an automorphism that sends each coset of the commutator subgroup to itself.
The IA automorphisms of a group form a normal subgroup of the automorphism group. Every inner automorphism is an IA automorphism.
See also
Torelli group
|
https://en.wikipedia.org/wiki/Quotientable%20automorphism
|
In mathematics, in the realm of group theory, a quotientable automorphism of a group is an automorphism that takes every normal subgroup to within itself. As a result, it gives a corresponding automorphism for every quotient group.
All family automorphisms are quotientable; in particular, so are all class automorphisms and power automorphisms. Likewise, all inner automorphisms are quotientable, and more generally, any automorphism defined by an algebraic formula is quotientable.
|
https://en.wikipedia.org/wiki/Class%20automorphism
|
In mathematics, in the realm of group theory, a class automorphism is an automorphism of a group that sends each element to within its conjugacy class. The class automorphisms form a subgroup of the automorphism group. Some facts:
Every inner automorphism is a class automorphism.
Every class automorphism is a family automorphism and a quotientable automorphism.
Under a quotient map, class automorphisms go to class automorphisms.
Every class automorphism is an IA automorphism, that is, it acts as identity on the abelianization.
Every class automorphism is a center-fixing automorphism, that is, it fixes all points in the center.
Normal subgroups are characterized as subgroups invariant under class automorphisms.
For infinite groups, an example of a class automorphism that is not inner is the following: take the finitary symmetric group on countably many elements and consider conjugation by an infinitary permutation. This conjugation defines an outer automorphism on the group of finitary permutations. However, for any specific finitary permutation, we can find a finitary permutation whose conjugation has the same effect as conjugation by the infinitary permutation. This is essentially because the infinitary permutation takes permutations of finite support to permutations of finite support.
For finite groups, the classical example is a group of order 32 obtained as the semidirect product of the cyclic ring on 8 elements, by its group of units acting via multiplication. Finding a class automorphism in the stability group that is not inner boils down to finding a cocycle for the action that is locally a coboundary but is not a global coboundary.
|
https://en.wikipedia.org/wiki/Shift%20work%20sleep%20disorder
|
Shift work sleep disorder (SWSD) is a circadian rhythm sleep disorder characterized by insomnia, excessive sleepiness, or both, affecting people whose work hours overlap with the typical sleep period. Insomnia here can mean difficulty falling asleep or waking before the individual has slept enough. About 20% of the working population participates in shift work. SWSD commonly goes undiagnosed; it is estimated that 10–40% of shift workers have SWSD. The excessive sleepiness appears when the individual has to be productive, awake, and alert. Both symptoms are predominant in SWSD. There are numerous shift work schedules, and they may be permanent, intermittent, or rotating; consequently, the manifestations of SWSD are quite variable. Many people whose schedules differ from the ordinary one (roughly 8 AM to 6 PM) experience these symptoms occasionally, but in SWSD the problem is continual, long-term, and interferes with the individual's life.
Health effects
There have been many studies suggesting health risks associated with shift work. Many studies have associated sleep disorders with decreased bone mineral density (BMD) and risk of fracture. Researchers have found that those who work long-term in night positions, such as nurses, are at greater risk of wrist and hip fractures (RR = 1.37). Low fertility and problems during pregnancy are more common in shift workers. Obesity, diabetes, insulin resistance, elevated body fat levels, and dyslipidemias are also much more prevalent in those who work the night shift. SWSD can increase the risk of mental disorders; specifically, depression, anxiety, and alcohol use disorder are increased in shift workers. Because the circadian system regulates the levels of chemical substances in the body, several consequences are possible when it is impaired. Acute sleep loss has been shown to increase the level of t-tau in blood plasma, which may explain the neurocognitive effects of sleep loss.
Sleep quality
Sleep loss and decreased quality of slee
|
https://en.wikipedia.org/wiki/Splice%20%28system%20call%29
|
splice is a Linux-specific system call that moves data between a file descriptor and a pipe without a round trip to user space. The related vmsplice system call moves or copies data between a pipe and user space. Ideally, splice and vmsplice work by remapping pages and do not actually copy any data, which may improve I/O performance. As linear addresses do not necessarily correspond to contiguous physical addresses, this may not be possible in all cases and on all hardware combinations.
Workings
With splice(), one can move data from one file descriptor to another without incurring any copies from user space into kernel space, which is usually required to enforce system security and also to keep a simple interface for processes to read and write to files. splice() works by using the pipe buffer. A pipe buffer is an in-kernel memory buffer that is opaque to the user space process. A user process can splice the contents of a source file into this pipe buffer, then splice the pipe buffer into the destination file, all without moving any data through userspace.
Linus Torvalds described splice() in a 2006 email, which was included in a KernelTrap article.
Origins
The Linux splice implementation borrows some ideas from an original proposal by Larry McVoy in 1998. The splice system calls first appeared in Linux kernel version 2.6.17 and were written by Jens Axboe.
Prototype
ssize_t splice(int fd_in, loff_t *off_in, int fd_out, loff_t *off_out, size_t len, unsigned int flags);
Some constants that are of interest are:
/* Splice flags (not laid down in stone yet). */
#ifndef SPLICE_F_MOVE
#define SPLICE_F_MOVE 0x01
#endif
#ifndef SPLICE_F_NONBLOCK
#define SPLICE_F_NONBLOCK 0x02
#endif
#ifndef SPLICE_F_MORE
#define SPLICE_F_MORE 0x04
#endif
#ifndef SPLICE_F_GIFT
#define SPLICE_F_GIFT 0x08
#endif
Example
The following sketch illustrates splice in action; the log_handle fields it uses (a pipe and a log file descriptor) are illustrative assumptions:
/* Transfer from disk to a log (sketch; assumes handle->pipe_fd[2] and handle->log_fd). */
int log_blocks (struct log_handle * handle, int fd, loff_t offset, size_t size)
{
    int error = splice (fd, &offset, handle->pipe_fd[1], NULL, size, SPLICE_F_MOVE);
    if (error < 0)
        return error;
    return splice (handle->pipe_fd[0], NULL, handle->log_fd, NULL, error, SPLICE_F_MOVE);
}
|
https://en.wikipedia.org/wiki/Protocrystalline
|
A protocrystalline phase is a distinct phase occurring during crystal growth, which evolves into a microcrystalline form. The term is typically associated with silicon films in optical applications such as solar cells.
Applications
Silicon solar cells
Amorphous silicon (a-Si) is a popular solar cell material owing to its low cost and ease of production. Owing to its disordered structure (Urbach tail), its absorption extends to energies below the band gap, resulting in a wide-range spectral response; however, it has a relatively low solar cell efficiency. Protocrystalline Si (pc-Si:H) also has a relatively low absorption near the band gap owing to its more ordered crystalline structure. Thus, protocrystalline and amorphous silicon can be combined in a tandem solar cell, where the top thin layer of a-Si:H absorbs short-wavelength light, whereas the longer wavelengths are absorbed by the underlying protocrystalline silicon layer.
See also
Amorphous silicon
Crystallite
Multijunction
Polycarbonate (PC)
Polyethylene terephthalate (PET)
|
https://en.wikipedia.org/wiki/Acrocyanosis
|
Acrocyanosis is persistent blue or cyanotic discoloration of the extremities, most commonly occurring in the hands, although it also occurs in the feet and distal parts of the face. Although it was described over 100 years ago and is not uncommon in practice, the nature of this phenomenon is still uncertain. The very term "acrocyanosis" is often applied inappropriately in cases when blue discoloration of the hands, feet, or parts of the face is noted.
The principal (primary) form of acrocyanosis is that of a benign cosmetic condition, sometimes caused by a relatively benign neurohormonal disorder. Regardless of its cause, the benign form typically does not require medical treatment. A medical emergency would ensue if the extremities experienced prolonged exposure to cold, particularly in children and patients with poor general health. However, frostbite differs from acrocyanosis because pain (via thermal nociceptors) often accompanies the former condition, while the latter is very rarely associated with pain. There are also a number of other conditions that affect the hands, feet, and parts of the face with associated skin color changes that need to be differentiated from acrocyanosis: Raynaud phenomenon, pernio, acrorygosis, erythromelalgia, and blue finger syndrome. The diagnosis may be challenging in some cases, especially when these syndromes co-exist.
Acrocyanosis may be a sign of a more serious medical problem, such as connective tissue diseases and diseases associated with central cyanosis. Other causative conditions include infections, toxicities, antiphospholipid syndrome, cryoglobulinemia, neoplasms. In these cases, the observed cutaneous changes are known as "secondary acrocyanosis". They may have a less symmetric distribution and may be associated with pain and tissue loss.
Signs and symptoms
Acrocyanosis is characterized by peripheral cyanosis: persistent cyanosis of the hands, feet, knees, or face. The extremities often are cold and clammy and may exhibi
|
https://en.wikipedia.org/wiki/C.%20S.%20Seshadri
|
Conjeevaram Srirangachari Seshadri (29 February 1932 – 17 July 2020) was an Indian mathematician. He was the founder and director-emeritus of the Chennai Mathematical Institute, and is known for his work in algebraic geometry. The Seshadri constant is named after him. He was also known for his collaboration with mathematician M. S. Narasimhan, for their proof of the Narasimhan–Seshadri theorem which proved the necessary conditions for stable vector bundles on a Riemann surface.
He was a recipient of the Padma Bhushan in 2009, the third highest civilian honor in the country.
Degrees and posts
Seshadri was born into a Hindu Brahmin family in Kanchipuram, Tamil Nadu. He received his B.A. (Hons) degree in mathematics from Madras University in 1953 and was mentored by the Jesuit priest Fr. Charles Racine and S. Naryanan there. He completed his PhD from Bombay University in 1958 under the supervision of K. S. Chandrasekharan. He was elected Fellow of the Indian Academy of Sciences in 1971.
Seshadri worked in the School of Mathematics at the Tata Institute of Fundamental Research in Bombay from 1953 to 1984 starting as a Research Scholar and rising to a senior professor. From 1984 to 1989, he worked at the Institute of Mathematical Sciences, Chennai. From 1989 to 2010, he worked as the founding director of the Chennai Mathematical Institute. After stepping down he continued to be the institute's Director-Emeritus till his death in 2020. He also served on the Mathematical Sciences jury for the Infosys Prize in 2010 and 2011.
Visiting professorships
University of Paris, France
Harvard University, Cambridge
UCLA
Brandeis University
University of Bonn, Bonn
Kyoto University, Kyoto, Japan
He gave talks at the ICM.
Awards and fellowships
Honorary degree, Université Pierre et Marie Curie (UPMC), Paris, 2013
Honoris Causa, University of Hyderabad, India
Padma Bhushan
Shanti Swarup Bhatnagar Award
Srinivasa Ramanujan Medal from the Indian Academy of Scienc
|
https://en.wikipedia.org/wiki/Prosalirus
|
Prosalirus is the name given to a fossilised prehistoric frog found in the Kayenta Formation of Arizona in 1981 by Farish Jenkins. The type, and currently only, species is Prosalirus bitis.
Description
The skeleton has primitive features but has mostly lost the salamander-like traits of its ancestors. It is built to absorb the force of jumping with the hind legs and tail, and it has long hip bones, long hind leg bones, and long ankle bones, all similar to those of modern frogs. As of 2009, it is the earliest known true frog.
The name comes from the Latin verb prosalire, meaning 'to leap forward'. It is thought to have lived during the Early Jurassic epoch 190 million years ago, well before the first known modern frog, Callobatrachus.
As of 2020, only three Prosalirus skeletons have been discovered.
Habitat
Prosalirus is believed to have lived in brackish, freshwater, and terrestrial environments.
|
https://en.wikipedia.org/wiki/Small%20cleaved%20cells
|
Small cleaved cells are a distinctive type of cell that appears in certain types of lymphoma.
When used to uniquely identify a type of lymphoma, they are usually categorized as follicular or diffuse.
The "small cleaved cells" are usually centrocytes that express B-cell markers such as CD20. The disease is strongly correlated with the genetic translocation t(14;18), which results in juxtaposition of the bcl-2 proto-oncogene with the heavy chain JH locus, and thus in overexpression of bcl-2. Bcl-2 is a well known anti-apoptotic gene, and thus its overexpression results in the "failure to die" motif of cancer seen in follicular lymphoma.
Follicular lymphoma must be carefully monitored, as it often progresses into a more aggressive "Diffuse Large B-Cell Lymphoma."
|
https://en.wikipedia.org/wiki/Craniopharyngeal%20canal
|
The craniopharyngeal canal is a human anatomical feature sometimes found in the sphenoid bone, opening into the sella turcica. When present, the canal extends from the anterior part of the fossa hypophyseos of the sphenoid bone to the under-surface of the skull and marks the original position of Rathke's pouch; at the junction of the septum of the nose with the palate, traces of the stomodeal end are occasionally present. The canal is found in 0.4% of individuals.
|
https://en.wikipedia.org/wiki/Ekman%20transport
|
Ekman transport is part of Ekman motion theory, first investigated in 1902 by Vagn Walfrid Ekman. Winds are the main source of energy for ocean circulation, and Ekman transport is a component of wind-driven ocean currents. Ekman transport occurs when ocean surface waters are influenced by the friction force that the wind exerts on them. As the wind blows, it exerts a frictional force on the ocean surface that drags the upper 10–100 m of the water column with it. However, owing to the Coriolis effect, the ocean water moves at a 90° angle to the direction of the surface wind. The direction of transport depends on the hemisphere: in the northern hemisphere, transport occurs at 90° clockwise from the wind direction, while in the southern hemisphere it occurs at 90° anticlockwise. This phenomenon was first noted by Fridtjof Nansen, who recorded during his Arctic expedition of the 1890s that ice transport appeared to occur at an angle to the wind direction. Ekman transport has significant impacts on the biogeochemical properties of the world's oceans, because it leads to upwelling (Ekman suction) and downwelling (Ekman pumping) in order to satisfy mass conservation. Mass conservation, in reference to Ekman transport, requires that any water displaced within an area must be replenished, either by Ekman suction or by Ekman pumping, depending on wind patterns.
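As a hedged sketch of the standard depth-integrated result (the symbols are assumptions, not taken from this article): with wind-stress components $(\tau_x, \tau_y)$, water density $\rho$ and Coriolis parameter $f$, the Ekman transport per unit width is $M_x = \tau_y/(\rho f)$ and $M_y = -\tau_x/(\rho f)$, which for $f > 0$ (northern hemisphere) is directed 90° to the right of the wind stress, consistent with the deflection described above.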
Theory
Ekman theory explains the theoretical state of circulation if water currents were driven only by the transfer of momentum from the wind. In the physical world, this is difficult to observe because of the influences of many simultaneous current driving forces (for example, pressure and density gradients). Though the following theory technically applies to the idealized situation involving only wind forces, Ekman motion describes the wind-driven portion of circulation seen in the surface layer.
Surface currents flow at a 45° angle to the wind due to a balance between the Coriolis
|
https://en.wikipedia.org/wiki/Shape%20correction%20function
|
The shape correction function is the ratio of the surface area of a growing organism to that of an isomorph, as a function of volume. The shape of the isomorph is taken to be equal to that of the organism at a given reference volume, so for that particular volume the surface areas are equal and the shape correction function has the value one.
For a volume $V$ and reference volume $V_d$, the shape correction function $\mathcal{M}(V)$ equals:
V0-morphs: $\mathcal{M}(V) = (V/V_d)^{-2/3}$
V1-morphs: $\mathcal{M}(V) = (V/V_d)^{1/3}$
Isomorphs: $\mathcal{M}(V) = 1$
Static mixtures between a V0- and a V1-morph can be found as $\mathcal{M}(V) = \delta\,(V/V_d)^{-2/3} + (1-\delta)\,(V/V_d)^{1/3}$ for a fixed weight $\delta \in [0,1]$.
The shape correction function is used in Dynamic Energy Budget theory to correct equations for isomorphs to organisms that change shape during growth. The conversion is necessary for accurately modelling food (substrate) acquisition and mobilization of reserve for use by metabolism.
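A minimal C sketch of the forms above (the names are illustrative; an isomorph simply has a shape correction of 1, so only the V0/V1 family and its static mixtures are parameterized here):
#include <math.h>

/* Shape correction function M(V) relative to a reference volume Vd.
 * delta in [0,1] weights the V0-morph part of a static V0/V1 mixture:
 * delta = 1 gives a pure V0-morph, delta = 0 a pure V1-morph. */
double shape_correction(double V, double Vd, double delta)
{
    double r = V / Vd;
    return delta * pow(r, -2.0 / 3.0) + (1.0 - delta) * pow(r, 1.0 / 3.0);
}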
|
https://en.wikipedia.org/wiki/Panning%20%28audio%29
|
Panning is the distribution of an audio signal (either monaural or stereophonic pairs) into a new stereo or multi-channel sound field determined by a pan control setting. A typical physical recording console has a pan control for each incoming source channel. A pan control or pan pot (short for "panning potentiometer") is an analog control with a position indicator which can range continuously from the 7 o'clock position when fully left to the 5 o'clock position when fully right. Audio mixing software replaces pan pots with on-screen virtual knobs or sliders which function like their physical counterparts.
Overview
A pan pot has an internal architecture which determines how much of a source signal is sent to the left and right buses. "Pan pots split audio signals into left and right channels, each equipped with its own discrete gain (volume) control." This signal distribution is often called a taper or law.
When centered (at 12 o'clock), the law can be designed to send −3, −4.5 or −6 decibels (dB) equally to each bus. "Signal passes through both the channels at an equal volume while the pan pot points directly north." If the two output buses are later recombined into a monaural signal, then a pan law of −6 dB is desirable. If the two output buses are to remain stereo, then a law of −3 dB is desirable. A law of −4.5 dB at center is a compromise between the two. A pan control fully rotated to one side results in the source being sent at full strength (0 dB) to one bus (either the left or right channel) and zero strength (−∞ dB) to the other. Regardless of the pan setting, the overall sound power level remains (or appears to remain) constant. Because of the phantom center phenomenon, sound panned to the center position is perceived as coming from between the left and right speakers, but not in the center unless listened to with headphones, because of the head-related transfer function (HRTF).
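As a hedged illustration of one common taper, a constant-power law that sits at about −3 dB at the centre, the following C sketch computes the two bus gains from a pan position (the 0 to 1 pan range and the names are illustrative assumptions, not a description of any particular console):
#include <math.h>
#include <stdio.h>

/* Constant-power (-3 dB at centre) pan law sketch. pan runs from 0.0
 * (fully left) to 1.0 (fully right); 0.5 is centre. */
static void pan_gains(double pan, double *left, double *right)
{
    const double pi = 3.14159265358979323846;
    double theta = pan * pi / 2.0;  /* map the pan position onto a quarter circle */
    *left  = cos(theta);            /* at centre: cos(pi/4) ~ 0.707, i.e. about -3 dB */
    *right = sin(theta);
}

int main(void)
{
    double l, r;
    pan_gains(0.5, &l, &r);
    printf("centre gains: L = %.3f (%.2f dB), R = %.3f\n", l, 20.0 * log10(l), r);
    return 0;
}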
Panning in audio borrows its name from panning action in moving image technology. An audio pan
|
https://en.wikipedia.org/wiki/Syntrophy
|
In biology, syntrophy, synthrophy, or cross-feeding (from Greek syn, meaning together, and trophe, meaning nourishment) is the phenomenon of one species feeding on the metabolic products of another species, coping with energy limitations through electron transfer. In this type of biological interaction, metabolite transfer happens between two or more metabolically diverse microbial species that live in close proximity to each other. The growth of one partner depends on the nutrients, growth factors, or substrates provided by the other partner. Syntrophism can thus be considered an obligatory interdependency and a mutualistic metabolism between two different bacterial species.
Microbial syntrophy
Syntrophy is often used synonymously with mutualistic symbiosis, especially between at least two different bacterial species. Syntrophy differs from symbiosis in that the syntrophic relationship is primarily based on closely linked metabolic interactions that maintain a thermodynamically favorable lifestyle in a given environment. Syntrophy plays an important role in a large number of microbial processes, especially in oxygen-limited environments, methanogenic environments, and anaerobic systems. In anoxic or methanogenic environments such as wetlands, swamps, paddy fields, landfills, the digestive tract of ruminants, and anaerobic digesters, syntrophy is employed to overcome the energy constraints, as the reactions in these environments proceed close to thermodynamic equilibrium.
Mechanism of microbial syntrophy
The main mechanism of syntrophy is removing the metabolic end products of one species so as to create an energetically favorable environment for another species. This obligate metabolic cooperation is required to facilitate the degradation of complex organic substrates under anaerobic conditions. Complex organic compounds such as ethanol, propionate, butyrate, and lactate cannot be directly used as substrates for methanogenesis by methanogens. On the other hand, fermentation
|
https://en.wikipedia.org/wiki/Lignocellulosic%20biomass
|
Lignocellulose refers to plant dry matter (biomass), so-called lignocellulosic biomass. It is the most abundantly available raw material on Earth for the production of biofuels. It is composed of two kinds of carbohydrate polymers, cellulose and hemicellulose, and an aromatic-rich polymer called lignin. Any biomass rich in cellulose, hemicelluloses, and lignin is commonly referred to as lignocellulosic biomass. Each component has a distinct chemical behavior. Being a composite of three very different components makes the processing of lignocellulose challenging. The evolved resistance to degradation, or even to separation of the components, is referred to as recalcitrance. Overcoming this recalcitrance to produce useful, high-value products requires a combination of heat, chemicals, enzymes, and microorganisms. The carbohydrate-containing polymers contain different sugar monomers (six- and five-carbon sugars) and are covalently bound to lignin.
Lignocellulosic biomass can be broadly classified as virgin biomass, waste biomass, and energy crops. Virgin biomass includes plants. Waste biomass is produced as a low value byproduct of various industrial sectors such as agriculture (corn stover, sugarcane bagasse, straw etc.) and forestry (saw mill and paper mill discards). Energy crops are crops with a high yield of lignocellulosic biomass produced as a raw material for the production of second-generation biofuel; examples include switchgrass (Panicum virgatum) and Elephant grass. The biofuels generated from these energy crops are sources of sustainable energy.
Chemical composition
Lignocellulose consists of three components, each with properties that pose challenges to commercial applications.
lignin is a heterogeneous, highly crosslinked polymer akin to phenol-formaldehyde resins. It is derived from 3-4 monomers, the ratio of which varies from species to species. The crosslinking is extensive. Being rich in aromatics, lignin is hydrophobic and relatively rigid. Lignin confe
|
https://en.wikipedia.org/wiki/Bone%20morphogenetic%20protein%2015
|
Bone morphogenetic protein 15 (BMP-15) is a protein that in humans is encoded by the BMP15 gene. It is involved in folliculogenesis, the process in which primordial follicles develop into pre-ovulatory follicles.
Structure & Interactions
Structure
The BMP-15 gene is located on the X chromosome, and Northern blot analysis shows that BMP-15 mRNA is expressed locally within the ovaries, in oocytes, only after they have started to undergo the primary stages of development. BMP-15 is translated as a preproprotein composed of a single peptide that contains a proregion and a smaller mature region. Intracellular processing then removes the proregion, leaving the biologically active mature region to perform its functions. The protein is a member of the transforming growth factor beta (TGF-β) superfamily and is a paracrine signalling molecule. Most active BMPs share a common structure containing seven cysteines, six of which form three intramolecular disulphide bonds, with the seventh involved in the formation of dimers with other monomers. BMP-15 is an exception, as the molecule does not contain the seventh cysteine; instead, in BMP-15 the fourth cysteine is replaced by a serine.
Interactions
BMP-15 and GDF9 interact with each other and work synergistically, having similar effects on the target cell. BMP-15 can act as a heterodimer with GDF9 or on its own as a homodimer. In most of the BMP family, heterodimers and homodimers form because the seventh cysteine is involved in the formation of a covalent bond, leading to dimerization. In BMP-15, however, homodimers form through a non-covalent bond between two BMP-15 subunits.
Function
Functions of BMP-15 include
Promotion of growth and maturation of ovarian follicles, starting from the primary gonadotrophin-independent phases of folliculogenesis.
Regulation of the sensitivity of granulosa cells to follicle-stimulating hormone (FSH) action, contributing to the determinati
|
https://en.wikipedia.org/wiki/Maintenance%20of%20an%20organism
|
Maintenance of an organism is the collection of processes by which it stays alive, excluding production processes. The Dynamic Energy Budget theory delineates two classes:
Somatic maintenance mainly comprises the turnover of structural mass (mainly proteins) and the maintenance of concentration gradients of metabolites across membranes (e.g., counteracting leakage). This is related to maintenance respiration.
Maturity maintenance concerns the maintenance of defence systems (such as the immune system), and the preparation of the body for reproduction.
The theory assumes that maturity maintenance costs can be reduced more easily during starvation than somatic maintenance costs. Under extreme starvation conditions, somatic maintenance costs are paid from structural mass, which causes shrinking. Some organisms manage to switch to a torpor state under starvation conditions and so reduce their maintenance costs.
|
https://en.wikipedia.org/wiki/Tonnetz
|
In musical tuning and harmony, the Tonnetz (German for 'tone network') is a conceptual lattice diagram representing tonal space, first described by Leonhard Euler in 1739. Various visual representations of the Tonnetz can be used to show traditional harmonic relationships in European classical music.
History through 1900
The Tonnetz originally appeared in Leonhard Euler's 1739 Tentamen novae theoriae musicae. Euler's Tonnetz, pictured at left, shows the triadic relationships of the perfect fifth and the major third: at the top of the image is the note F, to the left underneath is C (a perfect fifth above F), and to the right is A (a major third above F). The Tonnetz was rediscovered in 1858 by Ernst Naumann, and was disseminated in an 1866 treatise of Arthur von Oettingen. Oettingen and the influential musicologist Hugo Riemann (not to be confused with the mathematician Bernhard Riemann) explored the capacity of the space to chart harmonic motion between chords and modulation between keys. Similar understandings of the Tonnetz appeared in the work of many late-19th-century German music theorists.
Oettingen and Riemann both conceived of the relationships in the chart as defined through just intonation, which uses pure intervals. One can extend one of the horizontal rows of the Tonnetz indefinitely, to form a never-ending sequence of perfect fifths: F-C-G-D-A-E-B-F♯-C♯-G♯-D♯-A♯-E♯-B♯-F𝄪-C𝄪-G𝄪- (etc.). Starting with F, after 12 perfect fifths one reaches E♯. Perfect fifths in just intonation are slightly larger than the compromised fifths used in the equal temperament tuning systems more common today. This means that when one stacks 12 fifths starting from F, the E♯ arrived at will not be exactly seven octaves above the starting F. Oettingen and Riemann's Tonnetz thus extended infinitely in every direction without actually repeating any pitches. In the twentieth century, composer-theorists such as Ben Johnston and James Tenney continued to develop theories and applications involving
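The mismatch can be made concrete with a short calculation (a standard result, added here only as an illustration): twelve pure fifths span $(3/2)^{12} = 531441/4096 \approx 129.75$, while seven octaves span $2^7 = 128$; their ratio, $531441/524288 \approx 1.0136$ (about 23.5 cents), is the Pythagorean comma, which is why the E♯ reached by stacking fifths overshoots the octave-equivalent of the starting F.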
|
https://en.wikipedia.org/wiki/Dynamic%20reserve
|
Dynamic reserve, in the context of the dynamic energy budget theory, refers to the set of metabolites (mostly polymers and lipids) that an organism can use for metabolic purposes. These chemical compounds can have active metabolic functions, however; they are not just "set apart for later use." Reserve differs from structure in the first place by its dynamics. Reserve has an implied turnover, because it is synthesized from food (or other substrates in the environment) and used by metabolic processes occurring in cells. The turnover of structure depends on the maintenance of an organism; no maintenance is required for reserve. A freshly laid egg consists almost exclusively of reserve, and hardly respires. The chemical compounds in the reserve all have the same turnover, while those in the structure can have turnover rates that differ from compound to compound.
Functionality
Reserves are synthesized from environmental substrates (food) for use by the metabolism for the purpose of somatic maintenance (including protein turnover, maintenance of concentration gradients across membranes, activity and other types of work), growth (increase of structural mass), maturity maintenance (installation of regulation systems, preparation for reproduction, maintenance of defense systems, such as the immune system), maturation (increase of the state of maturity) and reproduction. This organizational position of reserve creates a rather constant internal chemical environment, with only an indirect coupling with the extra-organismal environment. Reserves as well as structure are taken to be generalised compounds, i.e. mixtures of a large number of compounds, which do not change in composition. The latter requirement is called the strong homeostasis assumption. Polymers (carbohydrates, proteins, ribosomal RNA) and lipids form the main bulk of reserves and of structure.
Some reasons for including reserve are to give an explanation for (from ):
the metabolic memory; changes in food (su
|
https://en.wikipedia.org/wiki/4-Hexylresorcinol
|
4-Hexylresorcinol is an organic compound with local anaesthetic, antiseptic, and anthelmintic properties.
As an antiseptic, it is marketed as S.T.37 by Numark Laboratories, Inc. (in a 0.1% solution) for oral pain relief and as a topical antiseptic. It is available for use topically on small skin infections or as an ingredient in throat lozenges.
As an anthelmintic, 4-hexylresorcinol was sold under the brand Crystoids.
Sytheon Ltd., USA markets 4-hexylresorcinol (trade named Synovea HR).
Johnson & Johnson uses 4-hexylresorcinol in its Neutrogena, Aveeno, and RoC skincare products as an anti-aging ingredient. 4-Hexylresorcinol has been used commercially by many cosmetic and personal care companies, such as Mary Kay, Clarins, Unilever, Murad, Facetheory, Arbonne, and many other small and large companies.
A study published in Chemical Research in Toxicology shows that 4-hexylresorcinol used as a food additive (E-586) exhibits some estrogenic activity, i.e. it resembles the action of the female sex hormone estrogen. However, a more recent study published in Applied Sciences showed that 4-hexylresorcinol did not change the expression of estrogen receptor-α, -β, or p-ERK1/2 in MCF-7 cells. In an ovariectomized animal model, the 4HR group showed levels of ERα, ERβ, and prolactin expression in the pituitary gland similar to those of the solvent-only group, while the estradiol group showed higher levels. Serum prolactin levels were similar between the 4HR and solvent-only groups.
In one study, 4-hexylresorcinol increased the shelf life of shrimp by reducing melanosis (black spots).
In mice with cancer, 4-hexylresorcinol inhibited NF-κB and extended their survival rate.
4-Hexylresorcinol can be used in the synthesis of Tetrahydrocannabihexol.
|
https://en.wikipedia.org/wiki/Fast-ion%20conductor
|
In materials science, fast ion conductors are solid conductors with highly mobile ions. These materials are important in the area of solid state ionics, and are also known as solid electrolytes and superionic conductors. These materials are useful in batteries and various sensors. Fast ion conductors are used primarily in solid oxide fuel cells. As solid electrolytes they allow the movement of ions without the need for a liquid or soft membrane separating the electrodes. The phenomenon relies on the hopping of ions through an otherwise rigid crystal structure.
Mechanism
Fast ion conductors are intermediate in nature between crystalline solids which possess a regular structure with immobile ions, and liquid electrolytes which have no regular structure and fully mobile ions. Solid electrolytes find use in all solid-state supercapacitors, batteries, and fuel cells, and in various kinds of chemical sensors.
Classification
In solid electrolytes (glasses or crystals), the ionic conductivity σi can be any value, but it should be much larger than the electronic one. Usually, solids where σi is on the order of 0.0001 to 0.1 Ω−1 cm−1 (300 K) are called superionic conductors.
Proton conductors
Proton conductors are a special class of solid electrolytes, where hydrogen ions act as charge carriers. One notable example is superionic water.
Superionic conductors
Superionic conductors where σi is more than 0.1 Ω−1 cm−1 (300 K) and the activation energy for ion transport Ei is small (about 0.1 eV) are called advanced superionic conductors. The most famous example of an advanced superionic conductor (solid electrolyte) is RbAg4I5, where σi > 0.25 Ω−1 cm−1 and σe ~ 10−9 Ω−1 cm−1 at 300 K. The Hall (drift) ionic mobility in RbAg4I5 is about 2 cm2/(V•s) at room temperature. The σe–σi systematic diagram distinguishing the different types of solid-state ionic conductors is given in the figure.
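As an illustrative order-of-magnitude estimate (not taken from the source), the quoted figures can be combined through the standard relation $\sigma_i = n q \mu$: with $\sigma_i \approx 0.25\ \Omega^{-1}\,\mathrm{cm}^{-1}$ and $\mu \approx 2\ \mathrm{cm^2/(V\,s)}$, the mobile-ion density in RbAg4I5 comes out to roughly $n = \sigma_i/(q\mu) \approx 0.25/(1.6\times 10^{-19}\times 2) \approx 8\times 10^{17}\ \mathrm{cm^{-3}}$.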
No clear examples have been described as yet, of fast ion conductors in the hypothetical advan
|
https://en.wikipedia.org/wiki/Generalised%20compound
|
A generalized compound is a mixture of chemical compounds of constant composition, despite possible changes in the total amount. The concept is used in the Dynamic Energy Budget theory, where biomass is partitioned into a limited set of generalised compounds, which contain a high percentage of organic compounds. The amount of generalized compound can be quantified in terms of weight, but more conveniently in terms of C-moles. The concept of strong homeostasis has an intimate relationship with that of generalised compound.
|
https://en.wikipedia.org/wiki/Macroblock
|
The macroblock is a processing unit in image and video compression formats based on linear block transforms, typically the discrete cosine transform (DCT). A macroblock typically consists of 16×16 samples, and is further subdivided into transform blocks, and may be further subdivided into prediction blocks. Formats which are based on macroblocks include JPEG, where they are called MCU blocks, H.261, MPEG-1 Part 2, H.262/MPEG-2 Part 2, H.263, MPEG-4 Part 2, and H.264/MPEG-4 AVC. In H.265/HEVC, the macroblock as a basic processing unit has been replaced by the coding tree unit.
Technical details
Transform blocks
A macroblock is divided into transform blocks, which serve as input to the linear block transform, e.g. the DCT. In H.261, the first video codec to use macroblocks, transform blocks have a fixed size of 8×8 samples. In the YCbCr color space with 4:2:0 chroma subsampling, a 16×16 macroblock consists of 16×16 luma (Y) samples and 8×8 chroma (Cb and Cr) samples. These samples are split into four Y blocks, one Cb block and one Cr block. This design is also used in JPEG and most other macroblock-based video codecs with a fixed transform block size, such as MPEG-1 Part 2 and H.262/MPEG-2 Part 2. In other chroma subsampling formats, e.g. 4:0:0, 4:2:2, or 4:4:4, the number of chroma samples in a macroblock will be smaller or larger, and the grouping of chroma samples into blocks will differ accordingly.
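The sample counts above can be made concrete with a small C sketch (illustrative only; the 1920×1080 frame size is an assumed example):
#include <stdio.h>

/* Samples per 4:2:0 macroblock and the number of 16x16 macroblocks
 * needed to cover an example frame. */
int main(void)
{
    int width = 1920, height = 1080;           /* example frame size (assumption) */
    int mb_cols = (width  + 15) / 16;          /* round up to whole macroblocks */
    int mb_rows = (height + 15) / 16;
    int luma   = 16 * 16;                      /* Y samples per macroblock */
    int chroma = 2 * 8 * 8;                    /* Cb + Cr samples per macroblock (4:2:0) */
    printf("%d x %d macroblocks = %d total\n", mb_cols, mb_rows, mb_cols * mb_rows);
    printf("%d samples per macroblock (%d luma + %d chroma)\n",
           luma + chroma, luma, chroma);
    return 0;
}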
In more modern macroblock-based video coding standards such as H.263 and H.264/AVC, transform blocks can be of sizes other than 8×8 samples. For instance, in H.264/AVC main profile, the transform block size is 4×4. In H.264/AVC High profile, the transform block size can be either 4×4 or 8×8, adapted on a per-macroblock basis.
Prediction blocks
Distinct from the division into transform blocks, a macroblock can be split into prediction blocks. In early standards such as H.261, MPEG-1 Part 2, and H.262/MPEG-2 Part 2, motion compensation is performed with one motion ve
|
https://en.wikipedia.org/wiki/Neural%20facilitation
|
Neural facilitation, also known as paired-pulse facilitation (PPF), is a phenomenon in neuroscience in which postsynaptic potentials (PSPs) (EPPs, EPSPs or IPSPs) evoked by an impulse are increased when that impulse closely follows a prior impulse. PPF is thus a form of short-term synaptic plasticity. The mechanisms underlying neural facilitation are exclusively presynaptic; broadly speaking, PPF arises from an increased presynaptic calcium concentration, leading to a greater release of neurotransmitter-containing synaptic vesicles. Neural facilitation may be involved in several neuronal tasks, including simple learning, information processing, and sound-source localization.
Mechanisms
Overview
Ca2+ plays a significant role in transmitting signals at chemical synapses. Voltage-gated Ca2+ channels are located within the presynaptic terminal. When an action potential invades the presynaptic membrane, these channels open and Ca2+ enters. A higher concentration of Ca2+ enables synaptic vesicles to fuse to the presynaptic membrane and release their contents (neurotransmitters) into the synaptic cleft to ultimately contact receptors in the postsynaptic membrane. The amount of neurotransmitter released is correlated with the amount of Ca2+ influx. Therefore, short-term facilitation (STF) results from a build-up of Ca2+ within the presynaptic terminal when action potentials propagate close together in time.
Facilitation of the excitatory post-synaptic current (EPSC) can be quantified as a ratio of subsequent EPSC strengths. Each EPSC is triggered by the presynaptic calcium concentration and can be approximated by:
$\mathrm{EPSC} = k\,([\mathrm{Ca^{2+}}]_{\mathrm{presynaptic}})^4 = k\,([\mathrm{Ca^{2+}}]_{\mathrm{rest}} + [\mathrm{Ca^{2+}}]_{\mathrm{influx}} + [\mathrm{Ca^{2+}}]_{\mathrm{residual}})^4$
where k is a constant.
$\mathrm{Facilitation} = \mathrm{EPSC}_2/\mathrm{EPSC}_1 - 1 = \left(1 + [\mathrm{Ca^{2+}}]_{\mathrm{residual}}/[\mathrm{Ca^{2+}}]_{\mathrm{influx}}\right)^4 - 1$
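As a worked illustration with assumed numbers: if the residual calcium left over from the first pulse amounts to 5% of the influx, the expression above gives a facilitation of $(1+0.05)^4 - 1 \approx 0.22$, i.e. the second EPSC is roughly 22% larger than the first.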
Experimental evidence
Early experiments by Del Castillo & Katz in 1954 and Dudel & Kuffler in 1968 showed that facilitation was possible at the neuromuscular junction even if transmitter release does not occur, indicating that facilitation is an
|
https://en.wikipedia.org/wiki/Chimera%20%28virus%29
|
A chimera or chimeric virus is a virus that contains genetic material derived from two or more distinct viruses. It is defined by the Center for Veterinary Biologics (part of the U.S. Department of Agriculture's Animal and Plant Health Inspection Service) as a "new hybrid microorganism created by joining nucleic acid fragments from two or more different microorganisms in which each of at least two of the fragments contain essential genes necessary for replication." The term genetic chimera had already been defined to mean: an individual organism whose body contained cell populations from different zygotes or an organism that developed from portions of different embryos. Chimeric flaviviruses have been created in an attempt to make novel live attenuated vaccines.
Etymology
In mythology, a chimera is a creature such as a hippogriff or a gryphon formed from parts of different animals, thus the name for these viruses.
As a natural phenomenon
Viruses are categorized into two types according to their genomes, DNA viruses and RNA viruses. In prokaryotes, the great majority of viruses possess double-stranded (ds) DNA genomes, with a substantial minority of single-stranded (ss) DNA viruses and only a limited presence of RNA viruses. In contrast, in eukaryotes, RNA viruses account for the majority of the virome diversity, although ssDNA and dsDNA viruses are common as well.
In 2012, the first example of a naturally occurring RNA-DNA hybrid virus was unexpectedly discovered during a metagenomic study of the acidic extreme environment of Boiling Springs Lake in Lassen Volcanic National Park, California. The virus was named BSL-RDHV (Boiling Springs Lake RNA DNA Hybrid Virus) (Devor, Caitlin (12 July 2012), "Scientists discover hybrid virus", Journal of Young Investigators, retrieved 31 March 2020). Its genome is related to a DNA circovirus, which usually infects birds and pigs, and an RNA tombusvirus, which infects plants. The study surprised scientists, because DNA and RNA viruses vary and the way the chi
|
https://en.wikipedia.org/wiki/Olav%20Kallenberg
|
Olav Kallenberg (born 1939) is a probability theorist known for his work on exchangeable stochastic processes and for his graduate-level textbooks and monographs. Kallenberg is a professor of mathematics at Auburn University in Alabama in the USA.
From 1991 to 1994, Kallenberg served as the Editor-in-Chief of Probability Theory and Related Fields (a leading journal in probability).
Biography
Olav Kallenberg was educated in Sweden. He has worked as a probabilist in Sweden and in the United States.
Sweden
Kallenberg was born and educated in Sweden, with an undergraduate exam in engineering physics from Royal Institute of Technology (KTH) in Stockholm. Kallenberg entered doctoral studies in mathematical statistics at KTH, but left his studies to work in operations analysis for a consulting firm in Gothenburg. While in Gothenburg, Kallenberg also taught at Chalmers University of Technology, from which he received his Ph.D. in 1972 under the supervision of Harald Bergström.
After earning his doctoral degree, Kallenberg stayed with Chalmers as a lecturer.
Kallenberg was later appointed a full professor at Uppsala University.
United States
Later he moved to the United States. Since 1986, he has been Professor of Mathematics and Statistics at Auburn University.
Honours and awards
In 1977, Kallenberg was awarded the Rollo Davidson Prize from Cambridge University, and Kallenberg was only the second recipient of the prize in history.
Kallenberg is a Fellow of the Institute of Mathematical Statistics.
In April 2006 Kallenberg was selected Auburn's 32nd annual Distinguished Graduate Faculty Lecturer at Auburn. Kallenberg delivered the 2003 AACTM Lewis-Parker Lecture at the University of Alabama in Huntsville.
Selected publications
Books
Kallenberg, O., Probabilistic Symmetries and Invariance Principles. Springer-Verlag, New York (2005). 510 pp.
Kallenberg, O., Foundations of Modern Probability, 2nd ed. Springer Series in Statistics. (2002). 650 pp. ; 3rd ed. Probabilit
|
https://en.wikipedia.org/wiki/Lie%20coalgebra
|
In mathematics a Lie coalgebra is the dual structure to a Lie algebra.
In finite dimensions, these are dual objects: the dual vector space to a Lie algebra naturally has the structure of a Lie coalgebra, and conversely.
Definition
Let E be a vector space over a field k equipped with a linear mapping d : E → E ∧ E from E to the exterior product of E with itself. It is possible to extend d uniquely to a graded derivation of degree 1 on the exterior algebra of E; this means that, for any a, b ∈ E which are homogeneous elements, d(a ∧ b) = (da) ∧ b + (−1)^deg(a) a ∧ (db).
Then the pair (E, d) is said to be a Lie coalgebra if d² = 0, i.e., if the graded components of the exterior algebra with derivation d form a cochain complex:
E → Λ²E → Λ³E → ⋯
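Since the displayed formulas were lost in extraction, the following LaTeX fragment restates the defining data in standard notation; it is a sketch and is not quoted verbatim from the article.

```latex
% Defining data of a Lie coalgebra (E, d), in standard notation.
\[
  d \colon E \to E \wedge E, \qquad
  d(a \wedge b) = (da) \wedge b + (-1)^{\deg a}\, a \wedge (db),
\]
\[
  d^{2} = 0 \quad\Longleftrightarrow\quad
  E \xrightarrow{\ d\ } \Lambda^{2}E \xrightarrow{\ d\ } \Lambda^{3}E \xrightarrow{\ d\ } \cdots
  \ \text{is a cochain complex.}
\]
```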
Relation to de Rham complex
Just as the exterior algebra (and tensor algebra) of vector fields on a manifold forms a Lie algebra (over the base field K), the de Rham complex of differential forms on a manifold forms a Lie coalgebra (over the base field K). Further, there is a pairing between vector fields and differential forms.
However, the situation is subtler: the Lie bracket is not linear over the algebra of smooth functions (the error is the Lie derivative), nor is the exterior derivative: d(fα) = (df) ∧ α + f dα (it is a derivation, not linear over functions); they are not tensors. They are not linear over functions, but they behave in a consistent way, which is not captured simply by the notion of Lie algebra and Lie coalgebra.
Further, in the de Rham complex, the derivation is not only defined for Ω¹(M), but is also defined for C∞(M) = Ω⁰(M).
The Lie algebra on the dual
A Lie algebra structure on a vector space g is a map [·,·] : g × g → g which is skew-symmetric, and satisfies the Jacobi identity. Equivalently, it is a map [·,·] : Λ²g → g that satisfies the Jacobi identity.
Dually, a Lie coalgebra structure on a vector space E is a linear map d : E → E ⊗ E which is antisymmetric (this means that it satisfies τ ∘ d = −d, where τ : E ⊗ E → E ⊗ E is the canonical flip a ⊗ b ↦ b ⊗ a) and satisfies the so-called cocycle condition (also known as the co-Leibniz rule), the dual of the Jacobi identity; in cyclic form it requires that
(id + ξ + ξ²) ∘ (d ⊗ id) ∘ d = 0, where ξ denotes the cyclic permutation of the three tensor factors of E ⊗ E ⊗ E.
Due to the antisymmetry condition, the map can be
|
https://en.wikipedia.org/wiki/Prosopography%20of%20Anglo-Saxon%20England
|
The Prosopography of Anglo-Saxon England (PASE) is a database and associated website that aims to construct a prosopography of individuals within Anglo-Saxon England. The PASE online database presents details (which it calls factoids) of the lives of every recorded individual who lived in, or was closely connected with, Anglo-Saxon England from 597 to 1087, with specific citations to (and often quotations from) each primary source describing each factoid.
PASE was funded by the British Arts and Humanities Research Council from 2000 to 2008 as a major research project based at King's College London in the Department of History and the Centre for Computing in the Humanities (now the Department of Digital Humanities), and at the Department of Anglo-Saxon, Norse and Celtic, University of Cambridge.
The first phase of the project (PASE1) was launched at the British Academy on 27 May 2005 and is freely available on the Internet at www.pase.ac.uk. This covers individuals named in written sources up to 1066, and contains 11,758 individuals. Each person is assigned a number to aid the ready identification of individuals in future scholarship – e.g., King Alfred the Great is denoted as Alfred 8. Each named individual is accompanied by the various spellings of their name as it appears in the written sources, along with factoids on their career and personal relationships where these can be determined.
A second phase (PASE2), released on 10 August 2010, added information drawn chiefly from Domesday Book to the database. This includes 19,807 named individuals. The landholdings of these individuals are mapped, along with a table illustrating their named landholdings. In cases where enough information is available, a small prose biography is provided.
A number of publications have resulted from the creation of the PASE database - these are listed on the site.
The PASE database is dedicated to professor Nicholas Brooks and Ann Williams.
Directors
Dame Janet 'Jinty' Nelson
Sim
|
https://en.wikipedia.org/wiki/Sage%20oil
|
Sage oils are essential oils that come in several varieties:
Dalmatian sage oil
Also called English, Garden, and True sage oil. Made by steam distillation of the partially dried leaves of Salvia officinalis. Yields range from 0.5 to 1.0%. A colorless to yellow liquid with a warm, camphoraceous, thujone-like odor and a sharp, bitter taste. The main components of the oil are thujone (50%), camphor, pinene, and cineol.
Clary sage oil
Sometimes called muscatel. Made by steam or water distillation of Salvia sclarea flowering tops and foliage. Yields range from 0.7 to 1.5%. A pale yellow to yellow liquid with a herbaceous odor and a winelike bouquet. Produced in large quantities in France, Russia and Morocco. The oil contains linalyl acetate, linalool and other terpene alcohols (sclareol), as well as their acetates.
Spanish sage oil
Made by steam distillation of Salvia lavandulifolia leaves and twigs. A colorless to pale yellow liquid with the characteristic camphoraceous odor. Unlike Dalmatian sage oil, Spanish sage oil contains no or only traces of thujone; camphor and eucalyptol are the major components.
Greek sage oil
Made by steam distillation of Salvia triloba leaves; the plant grows in Greece and Turkey. Yields range from 0.25% to 4%. The oil contains camphor, thujone, and pinene, with eucalyptol as the dominant component.
Judaean sage oil
Made by steam distillation of Salvia judaica leaves. The oil contains mainly cubebene and ledol.
|
https://en.wikipedia.org/wiki/Max%20Deuring
|
Max Deuring (9 December 1907 – 20 December 1984) was a German mathematician. He is known for his work in arithmetic geometry, in particular on elliptic curves in characteristic p. He worked also in analytic number theory.
Deuring graduated from the University of Göttingen in 1930, then began working with Emmy Noether, who noted his mathematical acumen even when he was an undergraduate. When she was forced to leave Germany in 1933, she urged that the university offer her position to Deuring. In 1935 he published a report entitled Algebren ("Algebras"), which established his notability in the world of mathematics. He went on to serve as Ordinarius (full professor) at Marburg and Hamburg, then took an ordentlicher Lehrstuhl (full chair) at Göttingen, where he remained until his retirement.
Deuring was a fellow of the Leopoldina. His doctoral students include Max Koecher and Hans-Egon Richert.
Selected works
Algebren, Springer 1935
Sinn und Bedeutung der mathematischen Erkenntnis, Felix Meiner, Hamburg 1949
Klassenkörper der komplexen Multiplikation, Teubner 1958
Lectures on the theory of algebraic functions of one variable, 1973 (from lectures at the Tata Institute, Mumbai)
Sources
Peter Roquette Über die algebraisch-zahlentheoretischen Arbeiten von Max Deuring, Jahresbericht DMV Vol.91, 1989, p. 109
Martin Kneser Max Deuring, Jahresbericht DMV Vol.89, 1987, p. 135
Martin Kneser, Martin Eichler Das wissenschaftliche Werk von Max Deuring, Acta Arithmetica Vol.47, 1986, p. 187
See also
Deuring–Heilbronn phenomenon
Birch and Swinnerton-Dyer conjecture
Supersingular elliptic curve
|
https://en.wikipedia.org/wiki/DNA%20footprinting
|
DNA footprinting is a method of investigating the sequence specificity of DNA-binding proteins in vitro. This technique can be used to study protein-DNA interactions both outside and within cells.
The regulation of transcription has been studied extensively, and yet there is still much that is unknown. Transcription factors and associated proteins that bind promoters, enhancers, or silencers to drive or repress transcription are fundamental to understanding the unique regulation of individual genes within the genome. Techniques like DNA footprinting help elucidate which proteins bind to these associated regions of DNA and unravel the complexities of transcriptional control.
History
In 1978, David J. Galas and Albert Schmitz developed the DNA footprinting technique to study the binding specificity of the lac repressor protein. It was originally a modification of the Maxam-Gilbert chemical sequencing technique.
Method
The simplest application of this technique is to assess whether a given protein binds to a region of interest within a DNA molecule. Polymerase chain reaction (PCR) is used to amplify and label a region of interest that contains a potential protein-binding site; ideally the amplicon is between 50 and 200 base pairs in length. The protein of interest is added to a portion of the labeled template DNA, while another portion is kept separate, without protein, for later comparison. A cleavage agent is then added to both portions of the DNA template. The cleavage agent is a chemical or enzyme that cuts at random locations, in a sequence-independent manner. The reaction should occur just long enough to cut each DNA molecule in only one location. A protein that specifically binds a region within the DNA template will protect the DNA it is bound to from the cleavage agent. Both samples are then run side by side on a polyacrylamide gel. The portion of DNA template without protein will be cut at random locations, and thus, when it is run on a gel, will produce a ladder-like distribution. Th
|
https://en.wikipedia.org/wiki/Far-western%20blot
|
The far-western blot, or far-western blotting, is a molecular biological method based on the technique of western blot to detect protein-protein interaction in vitro. Whereas western blot uses an antibody probe to detect a protein of interest, far-western blot uses a non-antibody probe which can bind the protein of interest. Thus, whereas western blotting is used for the detection of certain proteins, far-western blotting is employed to detect protein/protein interactions.
Method
In conventional western blot, gel electrophoresis is used to separate proteins from a sample; these proteins are then transferred to a membrane in a 'blotting' step. In a western blot, specific proteins are then identified using an antibody probe.
Far-western blot employs non-antibody proteins to probe the protein of interest on the blot. In this way, binding partners of the probe (or the blotted) protein may be identified. The probe protein is often produced in E. coli using an expression cloning vector.
The probe protein can then be visualized through the usual methods — it may be radiolabelled; it may bear a specific affinity tag like His or FLAG for which antibodies exist; or there may be a protein specific antibody (to the probe protein).
Because cell extracts are usually completely denatured by boiling in detergent before gel electrophoresis, this approach is most useful for detecting interactions that do not require the native folded structure of the protein of interest.
|
https://en.wikipedia.org/wiki/One-compartment%20kinetics
|
One-compartment kinetics for a chemical compound specifies that uptake into the compartment is proportional to the concentration outside the compartment, and elimination is proportional to the concentration inside the compartment. Both the compartment and the environment outside the compartment are considered to be homogeneous (well mixed). The compartment typically represents some organism (e.g. a fish or a daphnid).
This model is used in the simplest versions of the DEBtox method for the quantification of effects of toxicants.
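As a concrete illustration, a minimal numerical sketch of this model follows; the rate constants, concentrations, and function names here are hypothetical and are not taken from the DEBtox method.

```python
import numpy as np

# One-compartment kinetics: dC_in/dt = k_u * C_out - k_e * C_in,
# with uptake rate k_u, elimination rate k_e, and external concentration C_out.
def internal_concentration(t, c_out, k_u, k_e, c0=0.0):
    """Analytic solution for a constant external concentration c_out."""
    c_ss = k_u * c_out / k_e                      # steady-state internal concentration
    return c_ss + (c0 - c_ss) * np.exp(-k_e * t)  # exponential approach to steady state

if __name__ == "__main__":
    t = np.linspace(0.0, 48.0, 7)                 # hours
    print(internal_concentration(t, c_out=1.0, k_u=0.5, k_e=0.1))
```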
|
https://en.wikipedia.org/wiki/Luigi%20G.%20Napolitano%20Award
|
The Luigi G. Napolitano Award is presented every year at the International Astronautical Congress. Luigi Gerardo Napolitano was an engineer, scientist and professor.
The award has been presented annually since 1993 to a young scientist, below 30 years of age, who has contributed significantly to the advancement of aerospace science and has presented a paper on that contribution at the International Astronautical Congress.
The Luigi G. Napolitano Award is donated by the Napolitano family; it consists of the Napolitano commemorative medal and a certificate of citation, and is presented by the Education Committee of the IAF.
The International Academy of Astronautics awards the Luigi Napolitano Book Award annually.
Winners
1993 Shin-ichi Nishizawa
1994 Ralph D. Lorenz
1995 O.G.Liepack
1996 W. Tang
1997 G.W.R. Frenken
1998 Michael Donald Ingham
1999 Chris Blanksby
2000 Frederic Monnaie
2001 Noboru Takeichi
2002 Stefano Ferreti
2003 Veronica de Micco
2004 Julie Bellerose
2005 Nicola Baggio
2006 Carlo Menon
2007 Paul Williams
2008 Giuseppe Del Gaudio
2009 Daniel Kwom
2010 Andrew Flasch
2011 Nishchay Mhatre
2012 Valerio Carandente
2013 Sreeja Nag
2014 Alessandro Golkar
2015 Koki Ho
2016 Melissa Mirino
2017 Akshata Krishnamurthy
2018 Peter Z. Schulte
2019 Hao Chen
See also
List of engineering awards
List of physics awards
List of space technology awards
External links
International Astronautical Federation
Award winners IAA
Space-related awards
|
https://en.wikipedia.org/wiki/ICONIX
|
ICONIX is a software development methodology which predates the Rational Unified Process (RUP), Extreme Programming (XP) and Agile software development. Like RUP, the ICONIX process is UML Use Case driven but more lightweight than RUP. ICONIX provides more requirement and design documentation than XP, and aims to avoid analysis paralysis. The ICONIX Process uses only four UML-based diagrams in a four-step process that turns use case text into working code.
A principal distinction of ICONIX is its use of robustness analysis, a method for bridging the gap between analysis and design. Robustness analysis reduces the ambiguity in use case descriptions, by ensuring that they are written in the context of an accompanying domain model. This process makes the use cases much easier to design, test and estimate.
The ICONIX Process is described in the book Use Case Driven Object Modeling with UML: Theory and Practice.
Essentially, the ICONIX Process describes the core "logical" analysis and design modeling process. However, the process can be used without much tailoring on projects that follow different project management approaches.
Overview of the ICONIX Process
The ICONIX process is split up into four milestones. At each stage the work for the previous milestone is reviewed and updated.
Milestone 1: Requirements review
Before beginning the ICONIX process there needs to have been some requirements analysis done. From this analysis use cases can be identified, a domain model produced and some prototype GUIs made.
Milestone 2: Preliminary Design Review
Once use cases have been identified, text can be written describing how the user and system will interact. A robustness analysis is performed to find potential errors in the use case text, and the domain model is updated accordingly. The use case text is important for identifying how the users will interact with the intended system. They also provide the developer with something to show the Customer and verify that the
|
https://en.wikipedia.org/wiki/Bone%20morphogenetic%20protein%2010
|
Bone morphogenetic protein 10 (BMP10) is a protein that in humans is encoded by the BMP10 gene.
BMP10 is a polypeptide belonging to the TGF-β superfamily of proteins. It is a novel protein that, unlike most other BMPs, is likely to be involved in the trabeculation of the heart. Bone morphogenetic proteins are known for their ability to induce bone and cartilage development. BMP10 is categorized as a BMP since it shares a large sequence homology with other BMPs in the TGF-β superfamily.
Further reading
|
https://en.wikipedia.org/wiki/Breweries%20%26%20Bottleyards%20Employees%20Industrial%20Union%20of%20Workers%20WA
|
The Breweries & Bottleyards Employees Industrial Union of Workers WA (BBEIUW (WA)) is a trade union in Australia. It is affiliated with the Australian Council of Trade Unions.
|
https://en.wikipedia.org/wiki/Fermentation%20crock
|
A fermentation crock, also known as a gärtopf crock or Harsch crock, is a crock for fermentation. It has a gutter in the rim which is then filled with water so that, when the top is put on, an airlock is created, which prevents the food within from spoiling due to the development of surface molds. Ceramic weights may also be used to keep the fermenting food inside submerged.
See also
Sauerkraut
External links
Image with cross-section through crock
Cooking vessels
Crock
|
https://en.wikipedia.org/wiki/Index%20Translationum
|
The Index Translationum is UNESCO's database of book translations. Books have been translated for thousands of years, with no central record of the fact. The League of Nations established a record of translations in 1932. In 1946, the United Nations superseded the League and UNESCO was assigned the Index. In 1979, the records were computerised.
Since the Index counts translations of individual books, authors with many books that each have few translations can rank higher than authors with a few widely translated books. So, for example, while the Bible is the single most translated book in the world, it does not rank in the top ten of the index. The Index counts the Walt Disney Company, which employs many writers, as a single writer. Authors with similar names are sometimes included as one entry; for example, the ranking for "Hergé" applies not only to the author of The Adventures of Tintin (Hergé), but also to B.R. Hergehahn, Elisabeth Herget, and Douglas Hergert. Hence, the top authors, as the Index presents them, are from a database query whose results require interpretation.
According to the Index, Agatha Christie remains the most-translated individual author.
Statistics
Source: UNESCO
Top 10 Author
Top 10 Country
Top 10 Target Language
Top 10 Original language
See also
UNESCO Collection of Representative Works, UNESCO's program for funding the translation of works
List of literary works by number of translations
|
https://en.wikipedia.org/wiki/History%20of%20information%20theory
|
The decisive event which established the discipline of information theory, and brought it to immediate worldwide attention, was the publication of Claude E. Shannon's classic paper "A Mathematical Theory of Communication" in the Bell System Technical Journal in July and October 1948.
In this revolutionary and groundbreaking paper, work that Shannon had substantially completed at Bell Labs by the end of 1944, Shannon for the first time introduced the qualitative and quantitative model of communication as a statistical process underlying information theory, opening with the assertion that
"The fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point."
With it came the ideas of
the information entropy and redundancy of a source, and its relevance through the source coding theorem;
the mutual information, and the channel capacity of a noisy channel, including the promise of perfect loss-free communication given by the noisy-channel coding theorem;
the practical result of the Shannon–Hartley law for the channel capacity of a Gaussian channel; and of course
the bit - a new way of seeing the most fundamental unit of information.
Before 1948
Early telecommunications
Some of the oldest methods of telecommunications implicitly use many of the ideas that would later be quantified in information theory. Modern telegraphy, starting in the 1830s, used Morse code, in which more common letters (like "E", which is expressed as one "dot") are transmitted more quickly than less common letters (like "J", which is expressed by one "dot" followed by three "dashes"). The idea of encoding information in this manner is the cornerstone of lossless data compression. A hundred years later, frequency modulation illustrated that bandwidth can be considered merely another degree of freedom. The vocoder, now largely looked at as an audio engineering curiosity, was originally designed in 1
|
https://en.wikipedia.org/wiki/Soil%20zoology
|
Soil zoology or pedozoology is the study of animals living fully or partially in the soil (soil fauna). The field of study was developed in the 1940s by Mercury Ghilarov in Russia. Ghilarov noted inverse relationships between size and numbers of soil organisms. He also suggested that soil included water, air and solid phases and that soil may have provided the transitional environment between aquatic and terrestrial life. The phrase was apparently first used in the English speaking world at a conference of soil zoologists presenting their research at the University of Nottingham, UK, in 1955.
See also
Biogeochemical cycle
Soil ecology
Zoology
|
https://en.wikipedia.org/wiki/Lorenz%20system
|
The Lorenz system is a system of ordinary differential equations first studied by mathematician and meteorologist Edward Lorenz. It is notable for having chaotic solutions for certain parameter values and initial conditions. In particular, the Lorenz attractor is a set of chaotic solutions of the Lorenz system. In popular media the "butterfly effect" stems from the real-world implications of the Lorenz attractor, namely that tiny differences in initial conditions evolve along phase-space trajectories that never repeat and that diverge from one another, so the long-term behaviour of the system is effectively unpredictable. This underscores that chaotic systems can be completely deterministic and yet still be inherently unpredictable over long periods of time. Because small uncertainties grow continually in such systems, we cannot predict their future well: even the small flap of a butterfly's wings could set the world on a vastly different trajectory, for example by causing a hurricane. The shape of the Lorenz attractor itself, when plotted in phase space, may also be seen to resemble a butterfly.
Overview
In 1963, Edward Lorenz, with the help of Ellen Fetter who was responsible for the numerical simulations and figures, and Margaret Hamilton who helped in the initial, numerical computations leading up to the findings of the Lorenz model, developed a simplified mathematical model for atmospheric convection. The model is a system of three ordinary differential equations now known as the Lorenz equations:
dx/dt = σ(y − x),  dy/dt = x(ρ − z) − y,  dz/dt = xy − βz.
The equations relate the properties of a two-dimensional fluid layer uniformly warmed from below and cooled from above. In particular, the equations describe the rate of change of three quantities with respect to time: x is proportional to the rate of convection, y to the horizontal temperature variation, and z to the vertical temperature variation. The constants σ, ρ, and β are system parameters proportional to the Prandtl number, Rayleigh number, and certain physical dimensions of the layer itself.
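A minimal numerical sketch of these equations follows, using the classic parameter values σ = 10, ρ = 28, β = 8/3; the choice of integrator and initial conditions is illustrative only and is not taken from Lorenz's paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0       # classic chaotic parameter values

def lorenz(t, state):
    """Right-hand side of the Lorenz equations."""
    x, y, z = state
    return [SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z]

# Integrate two nearby initial conditions to illustrate sensitive dependence.
t_span = (0.0, 40.0)
t_eval = np.linspace(*t_span, 4000)
a = solve_ivp(lorenz, t_span, [1.0, 1.0, 1.0], t_eval=t_eval)
b = solve_ivp(lorenz, t_span, [1.0, 1.0, 1.0 + 1e-8], t_eval=t_eval)
print("final separation:", np.linalg.norm(a.y[:, -1] - b.y[:, -1]))
```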
The Lorenz equations can arise in simplifi
|
https://en.wikipedia.org/wiki/Information%20theory%20and%20measure%20theory
|
This article discusses how information theory (a branch of mathematics studying the transmission, processing and storage of information) is related to measure theory (a branch of mathematics related to integration and probability).
Measures in information theory
Many of the concepts in information theory have separate definitions and formulas for continuous and discrete cases. For example, entropy is usually defined for discrete random variables, whereas for continuous random variables the related concept of differential entropy, written , is used (see Cover and Thomas, 2006, chapter 8). Both these concepts are mathematical expectations, but the expectation is defined with an integral for the continuous case, and a sum for the discrete case.
These separate definitions can be more closely related in terms of measure theory. For discrete random variables, probability mass functions can be considered density functions with respect to the counting measure. Thinking of both the integral and the sum as integration on a measure space allows for a unified treatment.
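As a small numerical check of this unified view (an illustrative sketch with made-up distributions, not drawn from the cited reference): for a discrete variable, the integral of −f log f against the counting measure is simply the familiar sum, while for a uniform density the integral against Lebesgue measure gives the differential entropy.

```python
import numpy as np

# Discrete case: entropy as an "integral" against the counting measure = a plain sum.
p = np.array([0.5, 0.25, 0.125, 0.125])   # a probability mass function
H = -np.sum(p * np.log2(p))               # 1.75 bits

# Continuous case: differential entropy of Uniform(0, a) against Lebesgue measure.
a = 4.0
f = 1.0 / a                                # constant density on [0, a]
h = -a * f * np.log2(f)                    # log2(a) = 2 bits
print(H, h)
```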
Consider the formula for the differential entropy of a continuous random variable X with range A and probability density function f(x): h(X) = −∫_A f(x) log f(x) dx.
This can usually be interpreted as the following Riemann–Stieltjes integral: h(X) = −∫_A f(x) log f(x) dμ(x), where μ is the Lebesgue measure.
If instead X is discrete, with range Ω a finite set, f a probability mass function on Ω, and ν the counting measure on Ω, we can write: H(X) = −∫_Ω f(x) log f(x) dν(x) = −∑_{x∈Ω} f(x) log f(x).
The integral expression, and the general concept, are identical in the continuous case; the only difference is the measure used. In both cases the probability density function is the Radon–Nikodym derivative of the probability measure with respect to the measure against which the integral is taken.
If P is the probability measure induced by X, then the integral can also be taken directly with respect to P: −∫ log f(x) dP(x).
If instead of the underlying measure μ we take another probability measure Q, we are led to the Kullback–Leibler divergence: le
|
https://en.wikipedia.org/wiki/Gambling%20and%20information%20theory
|
Statistical inference might be thought of as gambling theory applied to the world around us. The myriad applications for logarithmic information measures tell us precisely how to take the best guess in the face of partial information. In that sense, information theory might be considered a formal expression of the theory of gambling. It is no surprise, therefore, that information theory has applications to games of chance.
Kelly Betting
Kelly betting or proportional betting is an application of information theory to investing and gambling. Its discoverer was John Larry Kelly, Jr.
Part of Kelly's insight was to have the gambler maximize the expectation of the logarithm of his capital, rather than the expected profit from each bet. This is important, since in the latter case, one would be led to gamble all he had when presented with a favorable bet, and if he lost, would have no capital with which to place subsequent bets. Kelly realized that it was the logarithm of the gambler's capital which is additive in sequential bets, and "to which the law of large numbers applies."
Side information
A bit is the amount of entropy in a bettable event with two possible outcomes and even odds. Obviously we could double our money if we knew beforehand for certain what the outcome of that event would be. Kelly's insight was that no matter how complicated the betting scenario is, we can use an optimum betting strategy, called the Kelly criterion, to make our money grow exponentially with whatever side information we are able to obtain. The value of this "illicit" side information is measured as mutual information relative to the outcome of the bettable event:
where Y is the side information, X is the outcome of the bettable event, and I is the state of the bookmaker's knowledge. This is the average Kullback–Leibler divergence, or information gain, of the a posteriori probability distribution of X given the value of Y relative to the a priori distribution, or stated odds
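To make this concrete, here is a minimal sketch of proportional (Kelly) betting on a binary outcome at even odds; the probabilities and function names are illustrative assumptions, not taken from Kelly's paper. For a win probability p > 1/2 the optimal fraction of capital to wager is 2p − 1, and the resulting exponential growth rate of capital is 1 − H(p) bits per bet, i.e. exactly the information the gambler has about the outcome.

```python
import numpy as np

def kelly_fraction(p: float) -> float:
    """Optimal fraction of capital to bet on a binary, even-odds outcome with win probability p."""
    return max(0.0, 2.0 * p - 1.0)

def growth_rate_bits(p: float) -> float:
    """Expected growth of log2-capital per bet under Kelly betting at even odds."""
    q = 1.0 - p
    entropy = -(p * np.log2(p) + q * np.log2(q))   # H(p), entropy of the outcome
    return 1.0 - entropy                            # doubling rate = 1 - H(p)

p = 0.6   # side information makes the bet slightly favorable
print(kelly_fraction(p), growth_rate_bits(p))       # 0.2 and about 0.029 bits per bet
```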
|
https://en.wikipedia.org/wiki/Yitzhak%20Katznelson
|
Yitzhak Katznelson (; born 1934) is an Israeli mathematician.
Katznelson was born in Jerusalem. He received his doctoral degree from the University of Paris in 1956. He is a professor of mathematics at Stanford University.
He is the author of An Introduction to Harmonic Analysis, which won the Steele Prize for Mathematical Exposition in 2002.
In 2012 he became a fellow of the American Mathematical Society.
|
https://en.wikipedia.org/wiki/Andrei%20Toom
|
Andrei Leonovich Toom (in Russian: Андрей Леонович Тоом), also known as André Toom (1942, Tashkent, Soviet Union – 2022, New York, USA), was a mathematician known for the Toom–Cook algorithm and Toom's rule. Toom was a retired professor in the statistics department at the Federal University of Pernambuco in Brazil. He died in New York after a prolonged illness.
Toom was a student of Ilya Piatetski-Shapiro.
|
https://en.wikipedia.org/wiki/Enaliarctos
|
Enaliarctos is an extinct genus of pinnipedimorph, and may represent the ancestor to all pinnipeds. Prior to the discovery of Puijila, the five species in the genus Enaliarctos represented the oldest known pinnipedimorph fossils, having been recovered from late Oligocene and early Miocene (ca. 24–22 million years ago) strata of California and Oregon.
Description
It had a short tail and developed limbs with webbed feet. Unlike modern sea lions, it had a set of slicing carnassials; the presence of slicing teeth (rather than purely piercing teeth as in modern fish-eating pinnipeds) suggests that Enaliarctos needed to return to shore with prey items in order to masticate and ingest them. Still, Enaliarctos had some sea lion-like characteristics, such as large eyes, sensitive whiskers, and a specialized inner ear for hearing underwater.
Evolution
Enaliarctos has been heralded as the ancestor of all known pinnipeds, including the families Otariidae (fur seals and sea lions), Desmatophocidae (extinct seal convergent pinnipeds), Phocidae (true seals), and Odobenidae (walruses). Investigations of the biomechanics of Enaliarctos indicate that it used both its forelimbs and hindlimbs during swimming. Modern fur seals and sea lions only use their forelimbs, while true seals primarily use their hindlimbs for aquatic propulsion; lastly, the extant walrus uses both fore- and hindlimbs for swimming. It has been postulated that the condition in Enaliarctos is ancestral for all pinnipeds, and that forelimb swimming was lost in true seals, while hindlimb swimming was lost in fur seals and sea lions. This is significant because there has been considerable debate as to whether pinnipeds share common ancestry. Interpretation of Enaliarctos indicates that all pinnipeds share a common ancestor (which, if it was not Enaliarctos, must have been something very similar, such as the more recently discovered Puijila darwini, though its affinities are controversial). Enaliarctos emlongi is represented
|
https://en.wikipedia.org/wiki/Prorastomus
|
Prorastomus sirenoides is an extinct species of primitive sirenian that lived during the Eocene Epoch 40 million years ago in Jamaica.
Taxonomy
The generic name Prorastomus, a combination of Greek (prōra), prow, and (stoma), mouth, refers to the lower jaw of the animal "resembling the prow of a wherry".
The genus name Prorastomus comes from Greek prora meaning "prow" and Latin stomus meaning "mouth." In 1892, naturalist Richard Lydekker respelled it as Prorastoma with a feminine ending, however this was unjustified as stomus is masculine in Latin.
Prorastomus is one of two genera of the family Prorastomidae, the other Pezosiren. These two species are the oldest sirenians, dating to the Eocene.
The first specimen was described by paleontologist Sir Richard Owen in 1855, and, being found in Jamaica in the Yellow Limestone Group, pointed to the origin of Sirenia as being in the New World rather than the Old World as was previously thought. However, the modern understanding of Afrotheria as a clade that originally diversified in Africa overturns this idea. The holotype specimen, BMNH 44897, comprises a skull, jaw, and atlas of the neck vertebrae. When Owen first acquired the skull, it was broken in two between the eyes and the braincase. Another specimen was found in 1989 in the same formation, USNM 437769, comprising the frontal bone, a tusk, vertebrae fragments, and ribs.
Description
While modern sirenians are fully aquatic, Prorastomus was predominantly terrestrial, judging from the structure of its skull. Its crown-shaped molars and the shape of its snout suggest that it fed on soft plants. The snout is long, narrow, and, at the tip, bulbous. The nasal bones are larger than in other sirenians. The nasal ridge is well developed, indicating it had a good sense of smell. The frontal bones are smaller than usual for sirenians, though, as in other sirenians, it had a pronounced brow ridge. Since Pezosiren has a sagittal crest, it is possible the Prorastomus spe
|
https://en.wikipedia.org/wiki/Music%20and%20mathematics
|
Music theory analyzes the pitch, timing, and structure of music. It uses mathematics to study elements of music such as tempo, chord progression, form, and meter. The attempt to structure and communicate new ways of composing and hearing music has led to musical applications of set theory, abstract algebra and number theory.
While music theory has no axiomatic foundation in modern mathematics, the basis of musical sound can be described mathematically (using acoustics) and exhibits "a remarkable array of number properties".
History
Though ancient Chinese, Indians, Egyptians and Mesopotamians are known to have studied the mathematical principles of sound, the Pythagoreans (in particular Philolaus and Archytas) of ancient Greece were the first researchers known to have investigated the expression of musical scales in terms of numerical ratios, particularly the ratios of small integers. Their central doctrine was that "all nature consists of harmony arising out of numbers".
From the time of Plato, harmony was considered a fundamental branch of physics, now known as musical acoustics. Early Indian and Chinese theorists show similar approaches: all sought to show that the mathematical laws of harmonics and rhythms were fundamental not only to our understanding of the world but to human well-being. Confucius, like Pythagoras, regarded the small numbers 1,2,3,4 as the source of all perfection.
Time, rhythm, and meter
Without the boundaries of rhythmic structure – a fundamental equal and regular arrangement of pulse repetition, accent, phrase and duration – music would not be possible. Modern musical use of terms like meter and measure also reflects the historical importance of music, along with astronomy, in the development of counting, arithmetic and the exact measurement of time and periodicity that is fundamental to physics.
The elements of musical form often build strict proportions or hypermetric structures (powers of the numbers 2 and 3).
Musical form
Musical
|
https://en.wikipedia.org/wiki/Earth-centered%2C%20Earth-fixed%20coordinate%20system
|
The Earth-centered, Earth-fixed coordinate system (acronym ECEF), also known as the geocentric coordinate system, is a cartesian spatial reference system that represents locations in the vicinity of the Earth (including its surface, interior, atmosphere, and surrounding outer space) as X, Y, and Z measurements from its center of mass. Its most common use is in tracking the orbits of satellites and in satellite navigation systems for measuring locations on the surface of the Earth, but it is also used in applications such as tracking crustal motion.
The distance from a given point of interest to the center of Earth is called the geocentric distance, which is a generalization of the geocentric radius that is not restricted to points on the reference ellipsoid surface.
The geocentric altitude is a type of altitude defined as the difference between the two aforementioned quantities (the geocentric distance minus the geocentric radius); it is not to be confused with the geodetic altitude.
Conversions between ECEF and geodetic coordinates (latitude and longitude) are discussed at geographic coordinate conversion.
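For reference, a minimal sketch of the standard geodetic-to-ECEF conversion is shown below, assuming the WGS 84 ellipsoid constants; the function name is our own and is not taken from any particular library.

```python
import math

# WGS 84 ellipsoid constants
A = 6378137.0                  # semi-major axis (m)
F = 1.0 / 298.257223563        # flattening
E2 = F * (2.0 - F)             # first eccentricity squared

def geodetic_to_ecef(lat_deg: float, lon_deg: float, h: float):
    """Convert geodetic latitude/longitude (degrees) and ellipsoidal height (m) to ECEF X, Y, Z (m)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)    # prime vertical radius of curvature
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + h) * math.sin(lat)
    return x, y, z

print(geodetic_to_ecef(0.0, 0.0, 0.0))   # roughly (6378137.0, 0.0, 0.0)
```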
Structure
As with any spatial reference system, ECEF consists of an abstract coordinate system (in this case, a conventional three-dimensional right-handed system), and a geodetic datum that binds the coordinate system to actual locations on the Earth. The ECEF that is used for the Global Positioning System (GPS) is the geocentric WGS 84, which currently includes its own ellipsoid definition. Other local datums such as NAD 83 may also be used. Due to differences between datums, the ECEF coordinates for a location will be different for different datums, although the differences between most modern datums are relatively small, within a few meters.
The ECEF coordinate system has the following parameters:
The origin at the center of the chosen ellipsoid. In WGS 84, this is the center of mass of the Earth.
The Z axis is the line between the North and South Poles, with positive values increasing northward. In WGS 84, this
|
https://en.wikipedia.org/wiki/Local%20tangent%20plane%20coordinates
|
Local tangent plane coordinates (LTP), also known as local ellipsoidal system, local geodetic coordinate system, or local vertical, local horizontal coordinates (LVLH), are a spatial reference system based on the tangent plane defined by the local vertical direction and the Earth's axis of rotation.
It consists of three coordinates: one represents the position along the northern axis, one along the local eastern axis, and one represents the vertical position.
Two right-handed variants exist: east, north, up (ENU) coordinates and north, east, down (NED) coordinates.
They serve for representing state vectors that are commonly used in aviation and marine cybernetics.
Axes
These frames are location dependent. For movements around the globe, like air or sea navigation, the frames are defined as tangent to the lines of geographical coordinates:
East–west tangent to parallels,
North–south tangent to meridians, and
Up–down in the direction normal to the oblate spheroid used as Earth's ellipsoid; this normal direction does not generally pass through the center of Earth.
Local east, north, up (ENU) coordinates
In many targeting and tracking applications the local East, North, Up (ENU) Cartesian coordinate system is far more intuitive and practical than ECEF or geodetic coordinates. The local ENU coordinates are formed from a plane tangent to the Earth's surface fixed to a specific location and hence it is sometimes known as a "Local Tangent" or "local geodetic" plane. By convention the east axis is labeled x, the north y and the up z.
Local north, east, down (NED) coordinates
In an airplane, most objects of interest are below the aircraft, so it is sensible to define down as a positive number. The North, East, Down (NED) coordinates allow this as an alternative to the ENU. By convention, the north axis is labeled x′, the east y′ and the down z′. To avoid confusion between x and x′, etc. in this article we will restrict the local coordinate frame to ENU.
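For illustration, a short sketch of working in the ENU convention: rotating an ECEF displacement into local east, north, and up components at a reference latitude and longitude. The function name and the test values are assumptions for the example, not part of the article.

```python
import numpy as np

def ecef_vector_to_enu(dx, dy, dz, lat_deg, lon_deg):
    """Rotate an ECEF displacement (dx, dy, dz) into local (east, north, up) components."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    rot = np.array([
        [-np.sin(lon),                np.cos(lon),               0.0],           # east
        [-np.sin(lat) * np.cos(lon), -np.sin(lat) * np.sin(lon), np.cos(lat)],   # north
        [ np.cos(lat) * np.cos(lon),  np.cos(lat) * np.sin(lon), np.sin(lat)],   # up
    ])
    return rot @ np.array([dx, dy, dz])

# A 1 km displacement along ECEF +Z is almost entirely "up" when viewed from the North Pole.
print(ecef_vector_to_enu(0.0, 0.0, 1000.0, 90.0, 0.0))
```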
The origin of this coordinate system i
|
https://en.wikipedia.org/wiki/List%20of%20Colombian%20flags
|
This is a list of flags used in Colombia. For more information about the national flag, visit the article Flag of Colombia.
National flags
Presidential standards
Military
Army
Navy
Air Force
Police
Civil Ensign
Departments
Municipalities
Political flags
Ethnic groups flags
Historical Flags
Flag Proposal
Burgees of Colombia
See also
Cundinamarca flags
Coat of arms of Colombia
¡Oh, Gloria Inmarcesible!
External links
Colombia
Flags
|
https://en.wikipedia.org/wiki/Marchenko%20equation
|
In mathematical physics, more specifically the one-dimensional inverse scattering problem, the Marchenko equation (or Gelfand–Levitan–Marchenko equation or GLM equation), named after Israel Gelfand, Boris Levitan and Vladimir Marchenko, is derived by computing the Fourier transform of the scattering relation:
K(x, y) + F(x, y) + ∫_x^∞ K(x, z) F(z, y) dz = 0,  y > x,
where F(x, y) is a symmetric kernel, such that F(x, y) = F(y, x), which is computed from the scattering data. Solving the Marchenko equation, one obtains the kernel of the transformation operator from which the potential can be read off. This equation is derived from the Gelfand–Levitan integral equation, using the Povzner–Levitan representation.
Application to scattering theory
Suppose that for a potential u(x) of the Schrödinger operator H = −d²/dx² + u(x), one has the scattering data (r(k), {κₙ}), where r(k) are the reflection coefficients from continuous scattering, given as a function r : ℝ → ℂ, and the real parameters κ₁, …, κ_N > 0 are from the discrete bound spectrum.
Then defining
F(x, y) = ∑_{n=1}^N cₙ² e^{−κₙ(x+y)} + (1/2π) ∫_{−∞}^{∞} r(k) e^{ik(x+y)} dk,
where the cₙ are non-zero constants, solving the GLM equation
K(x, y) + F(x, y) + ∫_x^∞ K(x, z) F(z, y) dz = 0
for K allows the potential to be recovered using the formula
u(x) = −2 d/dx K(x, x).
See also
Lax pair
|
https://en.wikipedia.org/wiki/AN/FSQ-32
|
The AN/FSQ-32 SAGE Solid State Computer (AN/FSQ-7A before December 1958, colloq. "Q-32") was a planned military computer central for deployment to Super Combat Centers in nuclear bunkers and to some above-ground military installations. In 1958, Air Defense Command planned to acquire 13 Q-32 centrals for several Air Divisions/Sectors.
Background
In 1956, ARDC sponsored "development of a transistorized, or solid-state, computer" by IBM and when announced in June 1958, the planned "SAGE Solid State Computer...was estimated to have a computing capability of seven times" the AN/FSQ-7. ADC's November 1958 plan to field—by April 1964—the 13 solid state AN/FSQ-7A was for each to network "a maximum of 20 long-range radar inputs [40 LRI telephone lines] and a maximum dimension of just over 1000 miles in both north-south and east-west directions." "Low rate Teletype data" could be accepted on 32 telephone lines (e.g., from "Alert Network Number 1"). On 17 November 1958, CINCNORAD "decided to request the solid state computer and hardened facilities", and the remaining vacuum-tube AN/FSQ-8 centrals for combat centers were cancelled (one was retrofitted to function as an AN/FSQ-7).
" AN/FSQ-32 computer would be"* used:
1. for "a combat center" (as with the vacuum-tube AN/FSQ-8),
2. to accept "radar and weapons connections" for weapons direction as with the AN/FSQ-7--e.g., for backup CIM-10 Bomarc guidance or manned interceptor GCI if above-ground Direction Center(s) could not function, and
3. for "air traffic control functions".
"Air Defense and Air Traffic Control Integration" was planned for airways modernization after the USAF, CAA, and AMB agreed on August 22, 1958, to "collocate air route traffic control centers and air defense facilities" (e.g., jointly use some Air Route Surveillance Radars at SAGE radar stations). The May 22, 1959, agreement between the USAF, DoD, and FAA designated emplacement of ATC facilities "in the hardened structure of the nine U. S. SCC'
|
https://en.wikipedia.org/wiki/Evolution%20of%20the%20eye
|
Many scientists have found the evolution of the eye attractive to study because the eye distinctively exemplifies an analogous organ found in many animal forms. Simple light detection is found in bacteria, single-celled organisms, plants and animals. Complex, image-forming eyes have evolved independently several times.
Diverse eyes are known from the Burgess shale of the Middle Cambrian, and from the slightly older Emu Bay Shale.
Eyes vary in their visual acuity, the range of wavelengths they can detect, their sensitivity in no light, their ability to detect motion or to resolve objects, and whether they can discriminate colours.
History of research
In 1802, philosopher William Paley called the eye a miracle of "design." In 1859, Charles Darwin himself wrote in his Origin of Species that the evolution of the eye by natural selection seemed at first glance "absurd in the highest possible degree".
However, he went on that despite the difficulty in imagining it, its evolution was perfectly feasible:
... if numerous gradations from a simple and imperfect eye to one complex and perfect can be shown to exist, each grade being useful to its possessor, as is certainly the case; if further, the eye ever varies and the variations be inherited, as is likewise certainly the case and if such variations should be useful to any animal under changing conditions of life, then the difficulty of believing that a perfect and complex eye could be formed by natural selection,
though insuperable by our imagination, should not be considered as subversive of the theory.
He suggested a stepwise evolution from "an optic nerve merely coated with pigment, and without any other mechanism" to "a moderately high stage of perfection", and gave examples of existing intermediate stages. Current research is investigating the genetic mechanisms underlying eye development and evolution.
Biologist D.E. Nilsson has independently theorized about four general stages in the evolution of a vertebrate eye from a pa
|
https://en.wikipedia.org/wiki/Bone%20morphogenetic%20protein%208B
|
Bone morphogenetic protein 8B is a protein that in humans is encoded by the BMP8B gene.
The protein encoded by this gene is a member of the TGF-β superfamily. It has close sequence homology to BMP7 and BMP5 and is believed to play a role in bone and cartilage development. It has been shown to be expressed in the hippocampus of murine embryos.
The bone morphogenetic proteins (BMPs) are a family of secreted signaling molecules that can induce ectopic bone growth. Many BMPs are part of the transforming growth factor-beta (TGFB) superfamily. BMPs were originally identified by an ability of demineralized bone extract to induce endochondral osteogenesis in vivo in an extraskeletal site. Based on its expression early in embryogenesis, the BMP encoded by this gene has a proposed role in early development. In addition, the fact that this BMP is closely related to BMP5 and BMP7 has led to speculation of possible bone inductive activity.
|
https://en.wikipedia.org/wiki/Sound%20recording%20copyright%20symbol
|
The sound recording copyright symbol or phonogram symbol, ℗ (the letter P in a circle), is the copyright symbol used to provide notice of copyright in a sound recording (phonogram) embodied in a phonorecord (LPs, audiotapes, cassette tapes, compact discs, etc.). It was first introduced in the Rome Convention for the Protection of Performers, Producers of Phonograms and Broadcasting Organisations. The United States added it to its copyright law as part of its adherence to the Geneva Phonograms Convention in 17 U.S.C. § 402, the codification of the Copyright Act of 1976.
The letter P in ℗ stands for phonogram, the legal term used in most English-speaking countries to refer to works known in U.S. copyright law as "sound recordings".
A sound recording has a separate copyright that is distinct from that of the underlying work (usually a musical work, expressible in musical notation and written lyrics), if any. The sound recording copyright notice extends to a copyright for just the sound itself and will not apply to any other rendition or version, even if performed by the same artist(s).
International treaties
The symbol first appeared in the Rome Convention for the Protection of Performers, Producers of Phonograms and Broadcasting Organisations, a multilateral treaty relating to copyright, in 1961. Article 11 of the Rome Convention provided:
When the Geneva Phonograms Convention, another multilateral copyright treaty, was signed in 1971, it included a similar provision in its Article 5:
United States law
The symbol was introduced into United States copyright law in 1971, when the US extended limited copyright protection to sound recordings. The United States anticipated signing onto the Geneva Phonograms Convention, which it had helped draft. On October 15, 1971, Congress enacted the Sound Recording Act of 1971, also known as the Sound Recording Amendment of 1971, which amended the 1909 Copyright Act by adding protection for sound recordings and prescribed a copyright n
|
https://en.wikipedia.org/wiki/Bone%20morphogenetic%20protein%206
|
Bone morphogenetic protein 6 is a protein that in humans is encoded by the BMP6 gene.
The protein encoded by this gene is a member of the TGFβ superfamily. Bone morphogenetic proteins are known for their ability to induce the growth of bone and cartilage. BMP6 is able to induce all osteogenic markers in mesenchymal stem cells.
The bone morphogenetic proteins (BMPs) are a family of secreted signaling molecules that can induce ectopic bone growth. BMPs are part of the transforming growth factor-beta (TGFB) superfamily. BMPs were originally identified by an ability of demineralized bone extract to induce endochondral osteogenesis in vivo in an extraskeletal site. Based on its expression early in embryogenesis, the BMP encoded by this gene has a proposed role in early development. In addition, the fact that this BMP is closely related to BMP5 and BMP7 has led to speculation of possible bone inductive activity.
An additional function of BMP6 was identified in April 2009 (Nature Genetics 2009 Apr; 41(4): 386–8): BMP6 is the key regulator of hepcidin, the small peptide secreted by the liver which is the major regulator of iron metabolism in mammals.
|
https://en.wikipedia.org/wiki/Lute%20of%20Pythagoras
|
The lute of Pythagoras is a self-similar geometric figure made from a sequence of pentagrams.
Constructions
The lute may be drawn from a sequence of pentagrams.
The centers of the pentagrams lie on a line and (except for the first and largest of them) each shares two vertices with the next larger one in the sequence.
An alternative construction is based on the golden triangle, an isosceles triangle with base angles of 72° and apex angle 36°. Two smaller copies of the same triangle may be drawn inside the given triangle, having the base of the triangle as one of their sides. The two new edges of these two smaller triangles, together with the base of the original golden triangle, form three of the five edges of the polygon. Adding a segment between the endpoints of these two new edges cuts off a smaller golden triangle, within which the construction can be repeated.
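As a quick check of the ratio underlying this construction (our own verification, not taken from the article): in the golden triangle, with apex angle 36° and base angles 72°, the ratio of leg to base is the golden ratio.

```latex
% Golden triangle with legs a, base b, apex angle 36 degrees:
% the base satisfies b = 2a\sin 18^{\circ}, and \sin 18^{\circ} = \tfrac{\sqrt{5}-1}{4}, hence
\[
  \frac{a}{b} = \frac{1}{2\sin 18^{\circ}} = \frac{2}{\sqrt{5}-1}
             = \frac{1+\sqrt{5}}{2} = \varphi \approx 1.618.
\]
```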
Some sources add another pentagram, inscribed within the inner pentagon of the largest pentagram of the figure. The other pentagons of the figure do not have inscribed pentagrams.
Properties
The convex hull of the lute is a kite shape with three 108° angles and one 36° angle. The sizes of any two consecutive pentagrams in the sequence are in the golden ratio to each other, and many other instances of the golden ratio appear within the lute.
History
The lute is named after the ancient Greek mathematician Pythagoras, but its origins are unclear. An early reference to it is in a 1990 book on the golden ratio by Boles and Newman.
See also
Spidron
|
https://en.wikipedia.org/wiki/Bone%20morphogenetic%20protein%205
|
Bone morphogenetic protein 5 is a protein that in humans is encoded by the BMP5 gene.
The protein encoded by this gene is a member of the TGFβ superfamily. Bone morphogenetic proteins are known for their ability to induce bone and cartilage development. BMP5 may play a role in certain cancers. Like other BMPs, BMP5 is inhibited by chordin and noggin. It is expressed in the trabecular meshwork and optic nerve head and may have a role in their development and normal function. It is also expressed in the lung and liver.
This gene encodes a member of the bone morphogenetic protein family which is part of the transforming growth factor-beta superfamily. The superfamily includes large families of growth and differentiation factors. Bone morphogenetic proteins were originally identified by an ability of demineralized bone extract to induce endochondral osteogenesis in vivo in an extraskeletal site. These proteins are synthesized as prepropeptides, cleaved, and then processed into dimeric proteins. This protein may act as an important signaling molecule within the trabecular meshwork and optic nerve head, and may play a potential role in glaucoma pathogenesis. This gene is differentially regulated during the formation of various tumors.
|