https://en.wikipedia.org/wiki/Behavior%20of%20nuclear%20fuel%20during%20a%20reactor%20accident
|
This page describes how uranium dioxide nuclear fuel behaves during both normal nuclear reactor operation and under reactor accident conditions, such as overheating. Work in this area is often very expensive to conduct, and so has often been performed on a collaborative basis between groups of countries, usually under the aegis of the Organisation for Economic Co-operation and Development's Committee on the Safety of Nuclear Installations (CSNI).
Swelling
Cladding
Both the fuel and cladding can swell. Cladding covers the fuel to form a fuel pin and can be deformed. It is normal to fill the gap between the fuel and the cladding with helium gas to permit better thermal contact between the fuel and the cladding. During use the amount of gas inside the fuel pin can increase because of the formation of noble gases (krypton and xenon) by the fission process. If a loss-of-coolant accident (LOCA) (e.g. Three Mile Island) or a reactivity-initiated accident (RIA) (e.g. Chernobyl or SL-1) occurs, then the temperature of this gas can increase. As the fuel pin is sealed, the pressure of the gas will increase (PV = nRT) and it is possible to deform and burst the cladding. It has been observed that both corrosion and irradiation can alter the properties of the zirconium alloy commonly used as cladding, making it brittle. As a result, experiments using unirradiated zirconium alloy tubes can be misleading.
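Because the gas inventory n and free volume V are fixed in a sealed pin, the pressure rise with temperature follows directly from the ideal gas law. A minimal sketch in Python (all figures below are invented for illustration, not reactor data):

    # Ideal gas law PV = nRT for a sealed fuel pin: with n and V fixed,
    # pressure scales linearly with absolute temperature.
    # All numbers are illustrative assumptions, not reactor data.
    R = 8.314    # gas constant, J/(mol K)
    n = 0.01     # mol of fill gas plus accumulated fission gases (assumed)
    V = 2.5e-5   # m^3 of free volume inside the pin (assumed)

    for T in (600.0, 1200.0, 1800.0):   # temperatures in kelvin
        p = n * R * T / V               # pressure in pascals
        print(f"T = {T:6.0f} K -> p = {p / 1e6:5.2f} MPa")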
According to one paper, the following difference was observed between the cladding failure modes of unirradiated and used fuel.
Unirradiated fuel rods were pressurized and placed in the Japanese Nuclear Safety Research Reactor (NSRR), where they were subjected to a simulated RIA transient. These rods failed after ballooning late in the transient, when the cladding temperature was high. The failure of the cladding in these tests was ductile, and it was a burst opening.
The used fuel (61 GW days/tonne of uranium) failed early in the transient with a brittle fra
|
https://en.wikipedia.org/wiki/Ichnotaxon
|
An ichnotaxon (plural ichnotaxa) is "a taxon based on the fossilized work of an organism", i.e. the non-human equivalent of an artifact. The word comes from the Greek ἴχνος (ichnos), meaning track, and τάξις (taxis), meaning ordering.
Ichnotaxa are names used to identify and distinguish morphologically distinctive ichnofossils, more commonly known as trace fossils. They are assigned genus and species ranks by ichnologists, much like organisms in Linnaean taxonomy. These are known as ichnogenera and ichnospecies, respectively. "Ichnogenus" and "ichnospecies" are commonly abbreviated as "igen." and "isp.". The binomial names of ichnospecies and their genera are to be written in italics.
Most researchers classify trace fossils only as far as the ichnogenus rank, based upon trace fossils that resemble each other in morphology but have subtle differences. Some authors have constructed detailed hierarchies up to ichnosuperclass, recognizing such fine detail as to identify ichnosuperorder and ichnoinfraclass, but such attempts are controversial.
Naming
Due to the chaotic nature of trace fossil classification, several ichnogenera hold names normally affiliated with animal body fossils or plant fossils. For example, many ichnogenera are named with the suffix -phycus due to misidentification as algae.
Edward Hitchcock was the first to use the now common -ichnus suffix in 1858, with Cochlichnus.
History
Due to trace fossils' history of being difficult to classify, there have been several attempts to enforce consistency in the naming of ichnotaxa.
In 1961, the International Commission on Zoological Nomenclature ruled that most trace fossil taxa named after 1930 would no longer be available.
See also
Bird ichnology
Trace fossil classification
Glossary of scientific naming
|
https://en.wikipedia.org/wiki/Stark%20conjectures
|
In number theory, the Stark conjectures, introduced by Harold Stark and later expanded by John Tate, give conjectural information about the coefficient of the leading term in the Taylor expansion of an Artin L-function associated with a Galois extension K/k of algebraic number fields. The conjectures generalize the analytic class number formula expressing the leading coefficient of the Taylor series for the Dedekind zeta function of a number field as the product of a regulator related to S-units of the field and a rational number.
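For orientation, the analytic class number formula referred to above can be stated at s = 0 in the following standard form (supplied here for context, not taken from the source text):

$$\zeta_k^*(0) = \lim_{s \to 0} s^{-(r_1 + r_2 - 1)} \zeta_k(s) = -\frac{h_k R_k}{w_k},$$

where $h_k$ is the class number of $k$, $R_k$ its unit regulator, $w_k$ the number of roots of unity in $k$, and $r_1$ and $r_2$ the numbers of real and complex places of $k$.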
When K/k is an abelian extension and the order of vanishing of the L-function at s = 0 is one, Stark gave a refinement of his conjecture, predicting the existence of certain S-units, called Stark units, which generate abelian extensions of number fields.
Formulation
General case
The Stark conjectures, in the most general form, predict that the leading coefficient of an Artin L-function is the product of a type of regulator, the Stark regulator, with an algebraic number.
Abelian rank-one case
When the extension is abelian and the order of vanishing of an L-function at s = 0 is one, Stark's refined conjecture predicts the existence of Stark units, whose roots generate Kummer extensions of K that are abelian over the base field k (and not just abelian over K, as Kummer theory implies). As such, this refinement of his conjecture has theoretical implications for solving Hilbert's twelfth problem.
Computation
Stark units in the abelian rank-one case have been computed in specific examples, allowing verification of the veracity of his refined conjecture. These also provide an important computational tool for generating abelian extensions of number fields, forming the basis for some standard algorithms for computing abelian extensions of number fields.
The first rank-zero cases are used in recent versions of the PARI/GP computer algebra system to compute Hilbert class fields of totally real number fields, and the conjectures provide one solution to Hilbert's twelfth problem.
|
https://en.wikipedia.org/wiki/Integral%20windup
|
Integral windup, also known as integrator windup or reset windup, refers to the situation in a PID controller where a large change in setpoint occurs (say a positive change) and the integral term accumulates a significant error during the rise (windup), thus overshooting and continuing to increase as this accumulated error is unwound (offset by errors in the other direction). The specific problem is the excess overshooting.
Solutions
This problem can be addressed by
Initializing the controller integral to a desired value, for instance to the value before the problem
Increasing the setpoint in a suitable ramp
Conditional Integration: disabling the integral function until the to-be-controlled process variable (PV) has entered the controllable region
Preventing the integral term from accumulating above or below pre-determined bounds (see the sketch after this list)
Back-calculating the integral term to constrain the process output within feasible bounds.
Clegg Integrator: Zeroing the integral value every time the error is equal to, or crosses, zero. This avoids having the controller attempt to drive the system to have the same error integral in the opposite direction as was caused by a perturbation, but induces oscillation if a non-zero control value is required to maintain the process at setpoint.
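A minimal sketch of the bounded-integral (clamping) approach in Python; the gains, bounds, and class layout are illustrative assumptions, not from any specific controller library:

    # PID controller with integral clamping (anti-windup).
    class PID:
        def __init__(self, kp, ki, kd, i_min=-10.0, i_max=10.0):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.i_min, self.i_max = i_min, i_max  # pre-determined integral bounds
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, pv, dt):
            error = setpoint - pv
            # Accumulate, then clamp, so the integral cannot wind up
            # while the actuator is saturated.
            self.integral += error * dt
            self.integral = max(self.i_min, min(self.i_max, self.integral))
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative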
Occurrence
Integral windup particularly occurs as a limitation of physical systems, compared with ideal systems, due to the ideal output being physically impossible (process saturation: the output of the process being limited at the top or bottom of its scale, making the error constant). For example, the position of a valve cannot be any more open than fully open and also cannot be closed any more than fully closed. In this case, anti-windup can actually involve the integrator being turned off for periods of time until the response falls back into an acceptable range.
This usually occurs when the controller's output can no longer affect the controlled variable, or if the controller is part of a selec
|
https://en.wikipedia.org/wiki/Rodrigues%20parrot
|
The Rodrigues parrot or Leguat's parrot (Necropsittacus rodricanus) is an extinct species of parrot that was endemic to the Mascarene island of Rodrigues. The species is known from subfossil bones and from mentions in contemporary accounts. It is unclear to which other species it is most closely related, but it is classified as a member of the tribe Psittaculini, along with other Mascarene parrots. The Rodrigues parrot bore similarities to the broad-billed parrot of Mauritius, and may have been related. Two additional species have been assigned to its genus (N. francicus and N. borbonicus), based on descriptions of parrots from the other Mascarene islands, but their identities and validity have been debated.
The Rodrigues parrot was green, and had a proportionally large head and beak and a long tail. Its exact size is unknown, but it may have been around long. It was the largest parrot on Rodrigues, and it had the largest head of any Mascarene parrot. It may have looked similar to the great-billed parrot. By the time it was discovered, it frequented and nested on islets off southern Rodrigues, where introduced rats were absent, and fed on the seeds of the shrub Fernelia buxifolia. The species was last mentioned in 1761, and probably became extinct soon after, perhaps due to a combination of predation by introduced animals, deforestation, and hunting by humans.
Taxonomy
Birds thought to be the Rodrigues parrot were first mentioned by the French traveler François Leguat in his 1708 memoir, A New Voyage to the East Indies. Leguat was the leader of a group of nine French Huguenot refugees who colonised Rodrigues between 1691 and 1693 after they were marooned there. Subsequent accounts were written by the French sailor Julien Tafforet, who was marooned on the island in 1726, in his Relation de l'Île Rodrigue, and then by the French astronomer Alexandre Pingré, who travelled to Rodrigues to view the 1761 transit of Venus.
In 1867, the French zoologist Alphonse Milne-
|
https://en.wikipedia.org/wiki/Mothers%20against%20decapentaplegic%20homolog%205
|
Mothers against decapentaplegic homolog 5 also known as SMAD5 is a protein that in humans is encoded by the SMAD5 gene.
SMAD5, as its name describes, is a homolog of the Drosophila gene: "Mothers against decapentaplegic", based on a tradition of such unusual naming within the gene research community. It belongs to the SMAD family of proteins, which belong to the TGFβ superfamily of modulators. Like many other TGFβ family members, SMAD5 is involved in cell signalling and modulates signals of bone morphogenetic proteins (BMPs). The binding of ligands causes the oligomerization and phosphorylation of the SMAD5 protein. SMAD5 is a receptor-regulated SMAD (R-SMAD) and is activated by bone morphogenetic protein type 1 receptor kinase. It may play a role in the pathway where TGFβ is an inhibitor of hematopoietic progenitor cells.
|
https://en.wikipedia.org/wiki/Holevo%27s%20theorem
|
Holevo's theorem is an important limitative theorem in quantum computing, an interdisciplinary field of physics and computer science. It is sometimes called Holevo's bound, since it establishes an upper bound to the amount of information that can be known about a quantum state (accessible information). It was published by Alexander Holevo in 1973.
Accessible information
As for several concepts in quantum information theory, accessible information is best understood in terms of a 2-party communication. So we introduce two parties, Alice and Bob. Alice has a classical random variable X, which can take the values {1, 2, ..., n} with corresponding probabilities {p1, p2, ..., pn}. Alice then prepares a quantum state, represented by the density matrix ρX chosen from a set {ρ1, ρ2, ... ρn}, and gives this state to Bob. Bob's goal is to find the value of X, and in order to do that, he performs a measurement on the state ρX, obtaining a classical outcome, which we denote with Y. In this context, the amount of accessible information, that is, the amount of information that Bob can get about the variable X, is the maximum value of the mutual information I(X : Y) between the random variables X and Y over all the possible measurements that Bob can do.
There is currently no known formula to compute the accessible information. There are however several upper bounds, the best-known of which is the Holevo bound, which is specified in the following theorem.
Statement of the theorem
Let {ρ1, ρ2, ..., ρn} be a set of mixed states and let ρX be one of these states drawn according to the probability distribution P = {p1, p2, ..., pn}.
Then, for any measurement described by POVM elements {EY} and performed on ρX, the amount of accessible information about the variable X knowing the outcome Y of the measurement is bounded from above as follows:

$$I(X : Y) \le S(\rho) - \sum_i p_i S(\rho_i)$$

where $\rho = \sum_i p_i \rho_i$ and $S(\cdot)$ is the von Neumann entropy.
The quantity on the right-hand side of this inequality is called the Holevo information or Holevo χ quantity.
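A small numerical sketch of the bound; the qubit ensemble below is an invented example and only NumPy is assumed:

    # Compute the Holevo quantity chi = S(rho) - sum_i p_i S(rho_i)
    # for an illustrative two-state qubit ensemble.
    import numpy as np

    def von_neumann_entropy(rho):
        # S(rho) = -Tr(rho log2 rho), computed from eigenvalues.
        evals = np.linalg.eigvalsh(rho)
        evals = evals[evals > 1e-12]  # drop zero eigenvalues
        return float(-np.sum(evals * np.log2(evals)))

    # Ensemble: |0> with p = 0.5 and |+> with p = 0.5 (assumed example).
    ket0 = np.array([1.0, 0.0])
    ketp = np.array([1.0, 1.0]) / np.sqrt(2)
    states = [np.outer(ket0, ket0), np.outer(ketp, ketp)]
    probs = [0.5, 0.5]

    rho = sum(p * s for p, s in zip(probs, states))
    chi = von_neumann_entropy(rho) - sum(
        p * von_neumann_entropy(s) for p, s in zip(probs, states))
    print(f"Holevo chi = {chi:.4f} bits")  # upper-bounds I(X : Y)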
|
https://en.wikipedia.org/wiki/Sugar%20phosphates
|
Sugar phosphates (sugars that have added or substituted phosphate groups) are often used in biological systems to store or transfer energy. They also form the backbone for DNA and RNA. Sugar phosphate backbone geometry is altered in the vicinity of the modified nucleotides.
Examples include:
Dihydroxyacetone phosphate
Glucose-6-phosphate
Phytic acid
Teichoic acid
Electronic structure of the sugar-phosphate backbone
The sugar-phosphate backbone has a complex electronic structure, and the electron delocalisation complicates its theoretical description. Part of the electronic density is delocalised over the whole backbone, and the extent of the delocalisation is affected by backbone conformation due to hyperconjugation effects. Hyperconjugation arises from donor-acceptor interactions of localised orbitals in 1,3 positions.
Phosphodiesters in DNA and RNA
The phosphodiester backbone of DNA and RNA consists of pairs of deoxyribose or ribose sugars linked by phosphates at the respective 3' and 5' positions. The backbone is negatively charged and hydrophilic, which allows strong interactions with water, and it forms the structural framework of nucleic acids, including DNA and RNA.
Sugar phosphates are defined as carbohydrates to which a phosphate group is bound by an ester or an ether linkage, depending on whether it involves an alcoholic or a hemiacetalic hydroxyl, respectively. Knowledge of physical and chemical properties such as solubility, acid hydrolysis rates, acid strengths, and the ability to act as sugar-group donors is required for the analysis of both types of sugar phosphates. The photosynthetic carbon reduction cycle is closely associated with sugar phosphates, which are key molecules in metabolism because they store and transfer energy. Both ribose 5-phosphate and fructose 6-phosphate are intermediates of the pentose-phosphate pathway
|
https://en.wikipedia.org/wiki/Jos%C3%A9%20Enrique%20Moyal
|
José Enrique Moyal (; 1 October 1910 – 22 May 1998) was an Australian mathematician and mathematical physicist who contributed to aeronautical engineering, electrical engineering and statistics, among other fields.
Career
Moyal helped establish the phase space formulation of quantum mechanics in 1949 by bringing together the ideas of Hermann Weyl, John von Neumann, Eugene Wigner, and Hip Groenewold.
This formulation is statistical in nature and makes logical connections between quantum mechanics and classical statistical mechanics, enabling a natural comparison between the two formulations. Phase space quantization, also known as Moyal quantization, largely avoids the use of operators for quantum mechanical observables prevalent in the canonical formulation. Quantum-mechanical evolution in phase space is specified by a Moyal bracket.
Moyal grew up in Tel Aviv, and attended the Herzliya Hebrew Gymnasium. He studied in Paris in the 1930s, at the École Supérieure d'Electricité, Institut de Statistique, and, finally, at the Institut Henri Poincaré. His work was carried out in wartime England in the 1940s, while employed at the de Havilland Aircraft company.
Moyal was a professor of mathematics at the former School of Mathematics and Physics of Macquarie University, where he was a colleague of John Clive Ward. Previously, he had worked at the Argonne National Laboratory in Illinois.
He published pioneering work on stochastic processes.
Personal life
Moyal was married to Susanna Pollack (1912-2000), with whom he had two children, Orah Young (born in Tel Aviv) and David Moyal (born in Belfast). They divorced in 1956. He was married to Ann Moyal from 1962 until his death.
Works
J. E. Moyal, "Stochastic Processes and Statistical Physics", Journal of the Royal Statistical Society B, 11 (1949), 150–210.
See also
Moyal bracket
Wigner–Weyl transform
Wigner quasiprobability distribution
|
https://en.wikipedia.org/wiki/Mothers%20against%20decapentaplegic%20homolog%207
|
Mothers against decapentaplegic homolog 7 or SMAD7 is a protein that in humans is encoded by the SMAD7 gene.
SMAD7 is a protein that, as its name describes, is a homolog of the Drosophila gene: "Mothers against decapentaplegic". It belongs to the SMAD family of proteins, which belong to the TGFβ superfamily of modulators. Like many other TGFβ family members, SMAD7 is involved in cell signalling. It is a TGFβ type 1 receptor antagonist: it blocks TGFβ1 and activin from associating with the receptor, blocking access to SMAD2. It is an inhibitory SMAD (I-SMAD) and is enhanced by SMURF2.
Smad7 enhances muscle differentiation.
Structure
Smad proteins contain two conserved domains: the Mad homology domain 1 (MH1 domain) at the N-terminus and the Mad homology domain 2 (MH2 domain) at the C-terminus. Between them lies a linker region full of regulatory sites. The MH1 domain has DNA-binding activity while the MH2 domain has transcriptional activity. The linker region contains important regulatory peptide motifs, including potential phosphorylation sites for mitogen-activated protein kinases (MAPKs), Erk-family MAP kinases, the Ca2+/calmodulin-dependent protein kinase II (CamKII) and protein kinase C (PKC). Smad7 does not have the MH1 domain. A proline-tyrosine (PY) motif present in its linker region enables its interaction with the WW domains of the E3 ubiquitin ligase Smurf2 (Smad ubiquitination-related factor 2). Smad7 resides predominantly in the nucleus in the basal state and translocates to the cytoplasm upon TGF-β stimulation.
Function
SMAD7 inhibits TGF-β signaling by preventing the formation of the Smad2/Smad4 complexes that initiate TGF-β signaling. It interacts with the activated TGF-β type I receptor, thereby blocking the association, phosphorylation and activation of Smad2. By occupying type I receptors for activin and bone morphogenetic protein (BMP), it also plays a role in negative feedback of these pathways.
Upon TGF- β treatment, Smad7 binds to di
|
https://en.wikipedia.org/wiki/Rami%20Grossberg
|
Rami Grossberg is a full professor of mathematics at Carnegie Mellon University and works in model theory.
Work
Grossberg's work in the past few years has revolved around the classification theory of non-elementary classes. In particular, he has provided, in joint work with Monica VanDieren, a proof of an upward "Morley's Categoricity Theorem" (a version of Shelah's categoricity conjecture) for Abstract Elementary Classes with the amalgamation property, that are tame. In another work with VanDieren, they also initiated the study of tame Abstract Elementary Classes. Tameness is both a crucial technical property in categoricity transfer proofs and an independent notion of interest in the area – it has been studied by Baldwin, Hyttinen, Lessmann, Kesälä, Kolesnikov, Kueker among others.
Other results include a best approximation to the main gap conjecture for AECs (with Olivier Lessmann), identifying AECs with JEP, AP, no maximal models and tameness as the uncountable analog to Fraïssé's constructions (with VanDieren), a stability spectrum theorem and the existence of Morley sequences for those classes (also with VanDieren).
In addition to this work on the Categoricity Conjecture, more recently, with Boney and Vasey, new understanding of frames in AECs and forking (in the abstract elementary class setting) has been obtained.
Some of Grossberg's work may be understood as part of the big project on Saharon Shelah's outstanding categoricity conjectures:
Conjecture 1. (Categoricity for $L_{\omega_1,\omega}$). Let $\psi$ be an $L_{\omega_1,\omega}$ sentence. If $\psi$ is categorical in a cardinal $\lambda \ge \beth_{\omega_1}$, then $\psi$ is categorical in all cardinals $\lambda \ge \beth_{\omega_1}$. See Infinitary logic and Beth number.
Conjecture 2. (Categoricity for AECs) Let K be an AEC. There exists a cardinal μ(K) such that categoricity in a cardinal greater than μ(K) implies categoricity in all cardinals greater than μ(K). Furthermore, μ(K) is the Hanf number of K.
Other examples of his results in pure model theory include: generalizing the Keisler–Shelah omitting
|
https://en.wikipedia.org/wiki/Kim%20Thomas
|
Kim Susannah Thomas (born 10 October 1967) is a former competitive rower from Great Britain.
Early life
Thomas was born in 1967 in Wandsworth, Great Britain. She is a member of the Leander Club at Henley-on-Thames. She received her education at Surbiton High School in Surbiton, and then studied engineering at Durham University. She then trained as a teacher concentrating on physics, but later focussed on mathematics.
Rowing career
She competed at the World Rowing Junior Championships in 1983, 1984, and 1985. In 1983 in Vichy, France, she came fifth with the junior women's eight. In 1984 in Jönköping, Sweden, she came sixth in the junior women's coxed four. A year later in the same boat class but with a different team, she came fifth.
In 1987, Thomas competed at senior level. She was part of the coxless pair with Alison Bonner that won the national title at the 1987 National Championships, rowing for a Kingston and Weybridge Ladies composite. At that year's World Rowing Championships, she competed in the women's pair with Bonner and they came seventh. Thomas and Bonner competed at the 1988 Summer Olympics in the coxless pair and came eighth. At the 1989 World Rowing Championships at Lake Bled near Bled in SR Slovenia, Yugoslavia, she teamed up with Catherine Miller in the women's pair and they came in eleventh (and last) place.
At the 1992 Summer Olympics, she was a member of Great Britain's coxless four, and the team came eighth in the competition. She was a member of the Durham University Boat Club from 1989 to 1991.
In 1989, Thomas was the second recipient of The Sunday Times Sportswoman of the Year award.
Professional career
Thomas' first teaching role was at Kingston Grammar School, where she joined the mathematics department and taught the son of Richard Henry Biffa of Biffa bins. After two years in that role, she went to Pangbourne College as head of mathematics.
At present, she is a teacher at Albyn School in Aberdeen, Scotland, an
|
https://en.wikipedia.org/wiki/Mothers%20against%20decapentaplegic%20homolog%209
|
Mothers against decapentaplegic homolog 9 also known as SMAD9, SMAD8, and MADH6 is a protein that in humans is encoded by the SMAD9 gene.
SMAD9, as its name describes, is a homolog of the Drosophila gene: "Mothers against decapentaplegic". It belongs to the SMAD family of proteins, which belong to the TGFβ superfamily of modulators. Like many other TGFβ family members, SMAD9 is involved in cell signalling. When a bone morphogenetic protein binds to a receptor (BMP type 1 receptor kinase), it causes SMAD9 to interact with SMAD anchor for receptor activation (SARA). The binding of ligands causes the phosphorylation of the SMAD9 protein, its dissociation from SARA and its association with SMAD4. It is subsequently transferred to the nucleus where it forms complexes with other proteins and acts as a transcription factor. SMAD9 is a receptor-regulated SMAD (R-SMAD) and is activated by bone morphogenetic protein type 1 receptor kinase. There are two isoforms of the protein. Confusingly, it is also sometimes referred to as SMAD8 in the literature.
Nomenclature
The SMAD proteins are homologs of both the Drosophila protein, mothers against decapentaplegic (MAD), and the C. elegans protein SMA. The name is a combination of the two. During Drosophila research, it was found that a mutation in the gene MAD in the mother repressed the gene decapentaplegic in the embryo. The phrase "Mothers against" was added since mothers often form organizations opposing various issues, e.g. Mothers Against Drunk Driving (MADD), and based on a tradition of such unusual naming within the gene research community.
|
https://en.wikipedia.org/wiki/R-SMAD
|
R-SMADs are receptor-regulated SMADs. SMADs are transcription factors that transduce extracellular TGF-β superfamily ligand signaling from cell-membrane-bound TGF-β receptors into the nucleus, where they activate transcription of TGF-β target genes. R-SMADs are directly phosphorylated on their C-terminus by type 1 TGF-β receptors through their intracellular kinase domain, leading to R-SMAD activation.
R-SMADs include SMAD2 and SMAD3 from the TGF-β/Activin/Nodal branch, and SMAD1, SMAD5 and SMAD8 from the BMP/GDF branch of TGF-β signaling.
In response to signals by the TGF-β superfamily of ligands these proteins associate with receptor kinases and are phosphorylated at an SSXS motif at their extreme C-terminus. These proteins then typically bind to the common mediator Smad or co-SMAD SMAD4.
Smad complexes then accumulate in the cell nucleus where they regulate transcription of specific target genes:
SMAD2 and SMAD3 are activated in response to TGF-β/Activin or Nodal signals.
SMAD1, SMAD5 and SMAD8 (also known as SMAD9) are activated in response to BMP (bone morphogenetic protein) or GDF signals.
SMAD6 and SMAD7 may be referred to as I-SMADs (inhibitory SMADS), which form trimers with R-SMADS and block their ability to induce gene transcription by competing with R-SMADs for receptor binding and by marking TGF-β receptors for degradation.
See also
TGF beta signaling pathway
|
https://en.wikipedia.org/wiki/Committed%20information%20rate
|
In a Frame Relay network, committed information rate (CIR) is the bandwidth for a virtual circuit guaranteed by an internet service provider to work under normal conditions. Committed data rate (CDR) is the payload portion of the CIR.
At any given time, the available bandwidth should not fall below this committed figure. The bandwidth is usually expressed in kilobits per second (kbit/s).
Above the CIR, an allowance of burstable bandwidth is often given, whose value can be expressed in terms of an additional rate, known as the excess information rate (EIR), or as its absolute value, peak information rate (PIR). The provider guarantees that the connection will always support the CIR rate, and sometimes the EIR rate provided that there is adequate bandwidth. The PIR, i.e. the CIR plus EIR, is either equal to or less than the speed of the access port into the network. Frame Relay carriers define and package CIRs differently, and CIRs are adjusted with experience.
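A minimal sketch of the rate arithmetic described above; all figures are made-up example values, not from any real service contract:

    # Relationship between CIR, EIR and PIR on a Frame Relay circuit.
    cir_kbps = 256                    # committed information rate
    eir_kbps = 128                    # excess (burstable) allowance above the CIR
    pir_kbps = cir_kbps + eir_kbps    # peak information rate = CIR + EIR

    port_speed_kbps = 512             # speed of the access port into the network
    assert pir_kbps <= port_speed_kbps, "PIR may not exceed the access port speed"
    print(f"CIR={cir_kbps}, EIR={eir_kbps}, PIR={pir_kbps} kbit/s")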
See also
Information rate
Throughput
Notes
|
https://en.wikipedia.org/wiki/Flow-based%20programming
|
In computer programming, flow-based programming (FBP) is a programming paradigm that defines applications as networks of black box processes, which exchange data across predefined connections by message passing, where the connections are specified externally to the processes. These black box processes can be reconnected endlessly to form different applications without having to be changed internally. FBP is thus naturally component-oriented.
FBP is a particular form of dataflow programming based on bounded buffers, information packets with defined lifetimes, named ports, and separate definition of connections.
Introduction
Flow-based programming defines applications using the metaphor of a "data factory". It views an application not as a single, sequential process, which starts at a point in time, and then does one thing at a time until it is finished, but as a network of asynchronous processes communicating by means of streams of structured data chunks, called "information packets" (IPs). In this view, the focus is on the application data and the transformations applied to it to produce the desired outputs. The network is defined externally to the processes, as a list of connections which is interpreted by a piece of software, usually called the "scheduler".
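A rough sketch of these ideas in Python: three asynchronous "black box" processes joined by fixed-capacity connections, with the network wiring declared outside the process code. The component names and wiring are invented for illustration, not taken from any FBP runtime:

    import queue
    import threading

    SENTINEL = None  # marks the end of the IP stream

    def reader(out_port):
        # Produce information packets (IPs) onto a named output port.
        for ip in ["alpha", "beta", "gamma"]:
            out_port.put(ip)  # blocks when the bounded connection is full
        out_port.put(SENTINEL)

    def upcase(in_port, out_port):
        # Transform each IP; the process knows only its own ports.
        while (ip := in_port.get()) is not SENTINEL:
            out_port.put(ip.upper())
        out_port.put(SENTINEL)

    def writer(in_port):
        while (ip := in_port.get()) is not SENTINEL:
            print(ip)

    # The "network definition": connections declared externally to the processes.
    conn1 = queue.Queue(maxsize=2)  # fixed-capacity connection
    conn2 = queue.Queue(maxsize=2)
    procs = [threading.Thread(target=reader, args=(conn1,)),
             threading.Thread(target=upcase, args=(conn1, conn2)),
             threading.Thread(target=writer, args=(conn2,))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()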
The processes communicate by means of fixed-capacity connections. A connection is attached to a process by means of a port, which has a name agreed upon between the process code and the network definition. More than one process can execute the same piece of code. At any point in time, a given IP can only be "owned" by a single process, or be in transit between two processes. Ports may either be simple, or array-type, as used e.g. for the input port of the Collate component described below. It is the combination of ports with asynchronous processes that allows many long-running primitive functions of data processing, such as Sort, Merge, Summarize, etc., to be supported in the form of software black bo
|
https://en.wikipedia.org/wiki/Mineralocorticoid%20receptor
|
The mineralocorticoid receptor (or MR, MLR, MCR), also known as the aldosterone receptor or nuclear receptor subfamily 3, group C, member 2, (NR3C2) is a protein that in humans is encoded by the NR3C2 gene that is located on chromosome 4q31.1-31.2.
MR is a receptor with equal affinity for mineralocorticoids and glucocorticoids. It belongs to the nuclear receptor family where the ligand diffuses into cells, interacts with the receptor and results in a signal transduction affecting specific gene expression in the nucleus. The selective response of some tissues and organs to mineralocorticoids over glucocorticoids occurs because mineralocorticoid-responsive cells express Corticosteroid 11-beta-dehydrogenase isozyme 2, an enzyme which selectively inactivates glucocorticoids more readily than mineralocorticoids.
Function
MR is expressed in many tissues, such as the kidney, colon, heart, central nervous system (hippocampus), brown adipose tissue and sweat glands. In epithelial tissues, its activation leads to the expression of proteins regulating ionic and water transports (mainly the epithelial sodium channel or ENaC, Na+/K+ pump, serum and glucocorticoid induced kinase or SGK1) resulting in the reabsorption of sodium, and as a consequence an increase in extracellular volume, increase in blood pressure, and an excretion of potassium to maintain a normal salt concentration in the body.
The receptor is activated by mineralocorticoids such as aldosterone and its precursor deoxycorticosterone as well as glucocorticoids like cortisol. In intact animals, the mineralocorticoid receptor is "protected" from glucocorticoids by co-localization of an enzyme, corticosteroid 11-beta-dehydrogenase isozyme 2 (a.k.a. 11β-hydroxysteroid dehydrogenase 2; 11β-HSD2), that converts cortisol to inactive cortisone.
Activation of the mineralocorticoid receptor, upon the binding of its ligand aldosterone, results in its translocation to the cell nucleus, homodimerization and binding to horm
|
https://en.wikipedia.org/wiki/VOACAP
|
VOACAP (Voice of America Coverage Analysis Program) is a radio propagation model that uses empirical data to predict the point-to-point path loss and coverage of a given transceiver if given as inputs: two antennas (configuration and position), solar weather, and time/date. Written in Fortran, it was originally designed for Voice of America.
Simulating HF propagation conditions
Currently, versions based on the original source tree exist for Windows, Linux (voacapl) and OSX. The program core uses text files for I/O, and a number of wrappers now exist.
Besides commercial visualization tools, there are also Open Source implementations with GUI:
VOACAP online, using ITS's IONCAP model, available at http://www.voacap.com/prediction.html
the PropagationPython Project, a.k.a. "Proppy", an evolution of and alternative to VOACAP that uses the newer ITURHFProp prediction model (formerly REC533) and is under ongoing development by James Watson
For immediate results, VOACAP provides a web interface for both the coverage and the prediction.
See also
Shortwave
Radio propagation model
Radio propagation
|
https://en.wikipedia.org/wiki/Pseudorandom%20graph
|
In graph theory, a graph is said to be a pseudorandom graph if it obeys certain properties that random graphs obey with high probability. There is no concrete definition of graph pseudorandomness, but there are many reasonable characterizations of pseudorandomness one can consider.
Pseudorandom properties were first formally considered by Andrew Thomason in 1987. He defined a condition called "jumbledness": a graph $G = (V, E)$ is said to be $(p, \alpha)$-jumbled for real $p$ and $\alpha$ with $0 < p < 1 \le \alpha$ if

$$\left| e(U) - p \binom{|U|}{2} \right| \le \alpha |U|$$

for every subset $U$ of the vertex set $V$, where $e(U)$ is the number of edges among $U$ (equivalently, the number of edges in the subgraph induced by the vertex set $U$). It can be shown that the Erdős–Rényi random graph $G(n, p)$ is almost surely $(p, O(\sqrt{np}))$-jumbled. However, graphs with less uniformly distributed edges, for example a graph on $2n$ vertices consisting of an $n$-vertex complete graph and $n$ completely independent vertices, are not $(p, \alpha)$-jumbled for any small $\alpha$, making jumbledness a reasonable quantifier for "random-like" properties of a graph's edge distribution.
Connection to local conditions
Thomason showed that the "jumbled" condition is implied by a simpler-to-check condition, only depending on the codegree of two vertices and not every subset of the vertex set of the graph. Letting $\operatorname{codeg}(u, v)$ be the number of common neighbors of two vertices $u$ and $v$, Thomason showed that, given a graph $G$ on $n$ vertices with minimum degree $np$, if $\operatorname{codeg}(u, v) \le np^2 + \ell$ for every $u$ and $v$, then $G$ is $\left(p, \sqrt{(p + \ell)n}\right)$-jumbled. This result shows how to check the jumbledness condition algorithmically in polynomial time in the number of vertices, and can be used to show pseudorandomness of specific graphs.
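A brief sketch of such a codegree check on a small example graph; the adjacency matrix and slack parameter are invented, and this verifies only the codegree condition itself:

    # Check codeg(u, v) <= n * p**2 + l for all pairs of distinct vertices.
    import numpy as np

    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 1],
                  [1, 1, 0, 1],
                  [0, 1, 1, 0]])       # adjacency matrix of an example graph
    n = A.shape[0]
    p = A.sum() / (n * (n - 1))        # edge density
    codeg = A @ A                      # entry (u, v) = number of common neighbors

    l = 1.0                            # slack parameter in the bound (assumed)
    offdiag = ~np.eye(n, dtype=bool)
    print("codegree condition holds:",
          bool(np.all(codeg[offdiag] <= n * p**2 + l)))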
Chung–Graham–Wilson theorem
In the spirit of the conditions considered by Thomason and their alternately global and local nature, several weaker conditions were considered by Chung, Graham, and Wilson in 1989: a graph $G$ on $n$ vertices with edge density $p$ and some $\varepsilon > 0$ can satisfy each of these conditions if
Discrepancy: for any subsets $X, Y$ of the vertex set $V$, the number of edges between $X$ and $Y$ is within $\varepsilon n^2$ of $p|X||Y|$.
Discrepa
|
https://en.wikipedia.org/wiki/Leo%20Harrington
|
Leo Anthony Harrington (born May 17, 1946) is a professor of mathematics at the University of California, Berkeley who works in recursion theory, model theory, and set theory.
Having retired from mathematics, Harrington is now a philosopher.
His notable results include proving the Paris–Harrington theorem along with Jeff Paris,
showing that if the axiom of determinacy holds for all analytic sets then x# exists for all reals x,
and proving with Saharon Shelah that the first-order theory of the partially ordered set of recursively enumerable Turing degrees is undecidable.
|
https://en.wikipedia.org/wiki/Robin%20Hartshorne
|
Robin Cope Hartshorne (born March 15, 1938) is an American mathematician who is known for his work in algebraic geometry.
Career
Hartshorne was a Putnam Fellow in Fall 1958 while he was an undergraduate at Harvard University (under the name Robert C. Hartshorne). He received a Ph.D. in mathematics from Princeton University in 1963 after completing a doctoral dissertation titled Connectedness of the Hilbert scheme under the supervision of John Coleman Moore and Oscar Zariski. He then became a Junior Fellow at Harvard University, where he taught for several years. In 1972, he was appointed to the faculty at the University of California, Berkeley, where he is a Professor Emeritus as of 2020.
Hartshorne is the author of the text Algebraic Geometry.
Awards
In 1979, Hartshorne was awarded the Leroy P. Steele Prize for "his expository research article Equivalence relations on algebraic cycles and subvarieties of small codimension, Proceedings of Symposia in Pure Mathematics, volume 29, American Mathematical Society, 1975, pp. 129–164; and his book Algebraic geometry, Springer-Verlag, Berlin and New York, 1977." In 2012, Hartshorne became a fellow of the American Mathematical Society.
Personal life
Hartshorne attended high school at Phillips Exeter Academy, graduating in 1955. Hartshorne is married to Edie Churchill and has two sons and an adopted daughter. He is a mountain climber and amateur flute and shakuhachi player.
Selected publications
Foundations of Projective Geometry, New York: W. A. Benjamin, 1967;
Ample Subvarieties of Algebraic Varieties, New York: Springer-Verlag. 1970;
Algebraic Geometry, New York: Springer-Verlag, 1977; corrected 6th printing, 1993. GTM 52,
Families of Curves in P3 and Zeuthen's Problem. Vol. 617. American Mathematical Society, 1997.
Geometry: Euclid and Beyond, New York: Springer-Verlag, 2000; corrected 2nd printing, 2002; corrected 4th printing, 2005.
Local Cohomology: A Seminar Given by A. Grothendieck, Harvard University. Fa
|
https://en.wikipedia.org/wiki/Hendrik%20Lenstra
|
Hendrik Willem Lenstra Jr. (born 16 April 1949, Zaandam) is a Dutch mathematician.
Biography
Lenstra received his doctorate from the University of Amsterdam in 1977 and became a professor there in 1978. In 1987, he was appointed to the faculty of the University of California, Berkeley; starting in 1998, he divided his time between Berkeley and the University of Leiden, until 2003, when he retired from Berkeley to take a full-time position at Leiden.
Three of his brothers, Arjen Lenstra, Andries Lenstra, and Jan Karel Lenstra, are also mathematicians. Jan Karel Lenstra is the former director of the Netherlands Centrum Wiskunde & Informatica (CWI). Hendrik Lenstra was the Chairman of the Program Committee of the International Congress of Mathematicians in 2010.
Scientific contributions
Lenstra has worked principally in computational number theory. He is well known for:
Co-discovering the Lenstra–Lenstra–Lovász lattice basis reduction algorithm (in 1982);
Developing a polynomial-time algorithm for solving a feasibility integer programming problem when the number of variables is fixed (in 1983);
Discovering the elliptic curve factorization method (in 1987);
Computing all solutions to the inverse Fermat equation (in 1992);
The Cohen–Lenstra heuristics, a set of precise conjectures about the structure of class groups of quadratic fields.
Awards and honors
In 1984, Lenstra became a member of the Royal Netherlands Academy of Arts and Sciences. He won the Fulkerson Prize in 1985 for his research using the geometry of numbers to solve integer programs with few variables in time polynomial in the number of constraints. He was awarded the Spinoza Prize in 1998, and on 24 April 2009 he was made a Knight of the Order of the Netherlands Lion. In 2009, he was awarded a Gauss Lecture by the German Mathematical Society. In 2012, he became a fellow of the American Mathematical Society.
Publications
Euclidean Number Fields. Parts 1-3, Mathematical Intelligencer 1980
|
https://en.wikipedia.org/wiki/Time-weighted%20average%20price
|
In finance, time-weighted average price (TWAP) is the average price of a security over a specified time.
TWAP is also sometimes used to describe a TWAP card, that is, a strategy that will attempt to execute an order and achieve the TWAP or better. A TWAP strategy underpins more sophisticated ways of buying and selling than simply executing orders en masse: for example, dumping a huge number of shares in one block is likely to affect market perceptions, with an adverse effect on the price.
Use
A TWAP strategy is often used to minimize a large order's impact on the market and to achieve price improvement. High-volume traders use TWAP to execute their orders over a specific time, trading so as to keep the price close to the true market price. TWAP orders are a strategy of executing trades evenly over a specified time period. Volume-weighted average price (VWAP) balances execution with volume. Typically, a VWAP trade will buy or sell 40% of a trade in the first half of the day and then the other 60% in the second half of the day. A TWAP trade would most likely execute an even 50/50 volume in the first and second half of the day.
Formula
TWAP is calculated using the following formula:

$$P_{\mathrm{TWAP}} = \frac{\sum_j P_j \cdot T_j}{\sum_j T_j}$$

where:
$P_{\mathrm{TWAP}}$ is the time-weighted average price;
$P_j$ is the price of the security at the time of measurement $j$;
$T_j$ is the change of time since the previous price measurement;
$j$ is each individual measurement that takes place over the defined period of time.
Increased period of measurements results in a less up-to-date price.
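A minimal sketch of this formula in Python; the price series and intervals are invented:

    # Time-weighted average price: prices weighted by the time elapsed
    # since the previous measurement.
    prices = [100.0, 101.5, 100.5, 102.0]  # P_j at each measurement
    dt = [60.0, 60.0, 120.0, 60.0]         # T_j, seconds since previous measurement

    twap = sum(p * t for p, t in zip(prices, dt)) / sum(dt)
    print(f"TWAP = {twap:.3f}")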
See also
Volume-weighted average price
|
https://en.wikipedia.org/wiki/Decentralized%20computing
|
Decentralized computing is the allocation of resources, both hardware and software, to each individual workstation or office location. In contrast, centralized computing exists when the majority of functions are carried out, or obtained from, a remote centralized location. Decentralized computing is a trend in modern-day business environments, the opposite of the centralized computing that was prevalent during the early days of computers.
A decentralized computer system has many benefits over a conventional centralized network. Desktop computers have advanced so rapidly, that their potential performance far exceeds the requirements of most business applications. This results in most desktop computers remaining idle (in relation to their full potential). A decentralized system can use the potential of these systems to maximize efficiency. However, it is debatable whether these networks increase overall effectiveness.
All computers have to be updated individually with new software, unlike a centralized computer system. Decentralized systems still enable file sharing and all computers can share peripherals such as printers and scanners as well as modems, allowing all the computers in the network to connect to the internet.
A collection of decentralized computer systems can form components of a larger computer network, held together by local stations of equal importance and capability. These systems are capable of running independently of each other.
Origins of decentralized computing
Decentralized computing originates from the work of David Chaum.
During 1979 he conceived the first concept of a decentralized computer system, known as a mix network. It provided an anonymous email communications network, which decentralized the authentication of the messages in a protocol that would become the precursor to onion routing, the protocol underlying the Tor browser. Through this initial development of an anonymous communications network, David Chaum applied his Mi
|
https://en.wikipedia.org/wiki/Tecplot
|
Tecplot is the name of a family of visualization & analysis software tools developed by American company Tecplot, Inc., which is headquartered in Bellevue, Washington. The firm was formerly operated as Amtec Engineering. In 2016, the firm was acquired by Vela Software, an operating group of Constellation Software, Inc. (TSX:CSU).
Tecplot 360
Tecplot 360 is a Computational Fluid Dynamics (CFD) and numerical simulation software package used in post-processing simulation results. Tecplot 360 is also used in chemistry applications to visualize molecule structure by post-processing charge density data.
Common tasks associated with post-processing analysis of flow solver (e.g. Fluent, OpenFOAM) data include: calculating grid quantities (e.g. aspect ratios, skewness, orthogonality and stretch factors); normalizing data; deriving flow-field functions like pressure coefficient or vorticity magnitude; verifying solution convergence; estimating the order of accuracy of solutions; and interactively exploring data through cut planes (a slice through a region), iso-surfaces (3-D maps of concentrations) and particle paths (dropping an object in the "fluid" and watching where it goes).
Tecplot 360 may be used to visualize output from programming languages such as Fortran. Tecplot's native data format is PLT or SZPLT. Many other formats are also supported, including:
CFD Formats:
CGNS, FLOW-3D (Flow Science, Inc.), ANSYS CFX, ANSYS FLUENT .cas and .dat format and polyhedra, OpenFOAM, PLOT3D, Tecplot and polyhedra, EnSight Gold and HDF5 (Hierarchical Data Format).
Data Formats:
HDF, Microsoft Excel (Windows only), comma- or space-delimited ASCII.
FEA Formats:
Abaqus, ANSYS, FIDAP Neutral, LSTC/DYNA LS-DYNA, NASTRAN MSC Software, Patran MSC Software, PTC Mechanica, SDRC/IDEAS universal and 3D Systems STL.
ParaView supports Tecplot format through a VisIt importer.
Tecplot RS
Tecplot RS is a tool tailored towards visualizing the results of
reservoir simulations,
|
https://en.wikipedia.org/wiki/SMAD%20%28protein%29
|
Smads (or SMADs) comprise a family of structurally similar proteins that are the main signal transducers for receptors of the transforming growth factor beta (TGF-β) superfamily, which are critically important for regulating cell development and growth. The abbreviation refers to the homologies to the Caenorhabditis elegans SMA ("small" worm phenotype) and MAD family ("Mothers Against Decapentaplegic") of genes in Drosophila.
There are three distinct sub-types of Smads: receptor-regulated Smads (R-Smads), common partner Smads (Co-Smads), and inhibitory Smads (I-Smads). The eight members of the Smad family are divided among these three groups. Trimers of two receptor-regulated SMADs and one co-SMAD act as transcription factors that regulate the expression of certain genes.
Sub-types
The R-Smads consist of Smad1, Smad2, Smad3, Smad5 and Smad8/9, and are involved in direct signaling from the TGF-β receptor.
Smad4 is the only known human Co-Smad, and has the role of partnering with R-Smads to recruit co-regulators to the complex.
Finally, Smad6 and Smad7 are I-Smads that work to suppress the activity of R-Smads. While Smad7 is a general TGF-β signal inhibitor, Smad6 associates more specifically with BMP signaling. R/Co-Smads are primarily located in the cytoplasm, but accumulate in the nucleus following TGF-β signaling, where they can bind to DNA and regulate transcription. However, I-Smads are predominantly found in the nucleus, where they can act as direct transcriptional regulators.
Discovery and nomenclature
Before Smads were discovered, it was unclear what downstream effectors were responsible for transducing TGF-β signals. Smads were first discovered in Drosophila, in which they are known as mothers against dpp (Mad), through a genetic screen for dominant enhancers of decapentaplegic (dpp), the Drosophila version of TGF-β. Studies found that Mad null mutants showed similar phenotypes to dpp mutants, suggesting that Mad played an important role in some aspect
|
https://en.wikipedia.org/wiki/Ubique%20%28company%29
|
Ubique was a software company based in Israel.
In 1994 the company launched the first social-networking software, which included instant messaging, voice over IP (commonly known as VoIP), chat rooms, web-based events, and collaborative browsing. It is best known for the Virtual Places software product and the technology used by Lotus Sametime. It is now part of IBM Haifa Labs.
Technology
Virtual Places
Ubique's best-known product is Virtual Places, a presence-based chat program in which users explore web sites together. It is used by providers such as VPChat and Digital Space and eventually evolved into Lotus Sametime.
Virtual Places requires a server and client software. Users start Virtual Places along with a web browser and sign into the Virtual Places server. Avatars are overlaid onto the web browser and users are able to collaborate with each other while they all visit web sites in real time.
Some Virtual Places consumer-oriented communities are still alive on the Web, using the old version of the software.
Instant Messaging and Chat
With the technology developed for Virtual Places, Ubique created an instant messaging and presence technology platform which evolved into Lotus Sametime.
History
1994 – Ubique Ltd was founded in Israel by Ehud Shapiro and a group of scientists from the Weizmann Institute to develop real-time, distributed computing products. The company developed a presence-based chat system known as Virtual Places along with real-time instant messaging and presence technology software. These were the very early days of the web, which at the time had only static data. Ubique's mission was "to add people to the web".
1995 – America Online Inc. purchased Ubique with the intention of using Ubique's Virtual Places technology to enhance and expand its existing live online interactive communication for both the AOL consumer online service and the new GNN brand service. Only the GNN-branded Virtual Places product was ever released.
1996 – GNN was disco
|
https://en.wikipedia.org/wiki/Varsity%20blind%20wine%20tasting%20match
|
The Varsity Blind Wine Tasting Match is a series of annual competitions in blind wine tasting between the Oxford University Blind Wine Tasting Society and the Cambridge University Blind Wine Tasting Society; the blind wine tasting teams of the University of Edinburgh and the University of St Andrews; and the blind wine tasting teams of the University of Bath and Bristol University. It is sponsored by champagne house Pol Roger. The Oxford/Cambridge competition has run since 1953. The current Oxford/Cambridge convenor is James Simpson, Master of Wine (MW). Will Lyons is a judge for the Edinburgh/St Andrews competition.
The winning teams are invited to Épernay, France, to visit the vineyards of Pol Roger and compete in an international tasting match against a French university. The taster with the highest individual score wins a bottle of Pol Roger's top cuvée, Sir Winston Churchill. The reserve taster with the higher score wins a subscription to Decanter magazine. Each member of the losing team wins a bottle of non-vintage Pol Roger.
Judges
For the Oxford v Cambridge competition there are two judges, one nominated by each team. In 2008 and 2009, the Oxford judge was Jancis Robinson MW, and the Cambridge judge was Hugh Johnson. Past judges have included Jasper Morris MW (who also judged in 2014). The papers are marked anonymously and cross-checked by both judges in order to ensure impartiality.
Winners
Team (overall competition, Oxford v Cambridge):
Cambridge (26 victories in total), including: 1994, 1998, 2004, 2005, 2007, 2009, 2010, 2011, 2014, 2019, 2020, 2022
Oxford (42 victories in total), including: 1992, 1993, 1995, 1996, 1997, 1999, 2000, 2001, 2002, 2003, 2006, 2008, 2012, 2013, 2015, 2016, 2017, 2018, 2021, 2023
Top Individual Tasters
2009: Caroline Conner (Ox)
2010: James Flewellen (Ox)
2011: James Flewellen (Ox), 152 points
2012: Ren Lim (Ox)
2013: Tom Arnold (Ox) and Stefan Kuppen (Cam), 140 points [joint]
2014: Vaiva Imbrasaite (Cam), 195 points
2015: Swii
|
https://en.wikipedia.org/wiki/Grid%20method%20multiplication
|
The grid method (also known as the box method) of multiplication is an introductory approach to multi-digit multiplication calculations that involve numbers larger than ten. Because it is often taught in mathematics education at the level of primary school or elementary school, this algorithm is sometimes called the grammar school method.
Compared to traditional long multiplication, the grid method differs in clearly breaking the multiplication and addition into two steps, and in being less dependent on place value.
Whilst less efficient than the traditional method, grid multiplication is considered to be more reliable, in that children are less likely to make mistakes. Most pupils will go on to learn the traditional method, once they are comfortable with the grid method; but knowledge of the grid method remains a useful "fall back", in the event of confusion. It is also argued that since anyone doing a lot of multiplication would nowadays use a pocket calculator, efficiency for its own sake is less important; equally, since this means that most children will use the multiplication algorithm less often, it is useful for them to become familiar with a more explicit (and hence more memorable) method.
Use of the grid method has been standard in mathematics education in primary schools in England and Wales since the introduction of a National Numeracy Strategy with its "numeracy hour" in the 1990s. It can also be found included in various curricula elsewhere. Essentially the same calculation approach, but not with the explicit grid arrangement, is also known as the partial products algorithm or partial products method.
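For instance, multiplying 34 × 13 with the grid means splitting each factor by place value, filling the grid with the four partial products, and then adding them. A small sketch of that arithmetic (a worked example, not taken from the source text):

    # Grid (box) method for 34 x 13: split each factor into tens and units,
    # multiply every pair of parts, then add the partial products.
    a_parts = [30, 4]   # 34 = 30 + 4
    b_parts = [10, 3]   # 13 = 10 + 3

    partials = [x * y for x in a_parts for y in b_parts]
    print(partials)       # [300, 90, 40, 12]
    print(sum(partials))  # 442, i.e. 34 * 13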
Calculations
Introductory motivation
The grid method can be introduced by thinking about how to add up the number of points in a regular array, for example the number of squares of chocolate in a chocolate bar. As the size of the calculation becomes larger, it becomes easier to start counting in tens; and to represent the calculation as a box whi
|
https://en.wikipedia.org/wiki/Colossal%20Typewriter
|
Colossal Typewriter by John McCarthy and Roland Silver was one of the earliest computer text editors. The program ran on the PDP-1 at Bolt, Beranek and Newman (BBN) by December 1960.
About this time, both authors were associated with the Massachusetts Institute of Technology, but it is unclear whether the editor ran on the TX-0 on loan to MIT from Lincoln Laboratory or on the PDP-1 donated to MIT in 1961 by Digital Equipment Corporation. A "Colossal Typewriter Program" is in the BBN Program Library, and, under the same name, in the DECUS Program Library as BBN-6 (CT).
See also
Expensive Typewriter
TECO
RUNOFF
TJ-2
Notes
|
https://en.wikipedia.org/wiki/Felice%20Casorati%20%28mathematician%29
|
Felice Casorati (17 December 1835 – 11 September 1890) was an Italian mathematician who studied at the University of Pavia. He was born in Pavia and died in Casteggio.
He is best known for the Casorati–Weierstrass theorem in complex analysis. The theorem, named for Casorati and Karl Theodor Wilhelm Weierstrass, describes the behaviour of holomorphic functions near essential singularities: in every neighbourhood of an essential singularity, a holomorphic function comes arbitrarily close to every complex value.
The Casorati matrix is useful in the study of linear difference equations, just as the Wronskian is useful with linear differential equations. It is calculated based on n functions of the single input variable.
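For reference, the Casorati matrix of $n$ functions $f_1, \dots, f_n$ of a single variable $x$ can be written in the following standard form (supplied here for context; its determinant, the Casoratian, plays for difference equations the role the Wronskian plays for differential equations):

$$C(f_1, \dots, f_n)(x) = \begin{pmatrix} f_1(x) & f_2(x) & \cdots & f_n(x) \\ f_1(x+1) & f_2(x+1) & \cdots & f_n(x+1) \\ \vdots & \vdots & \ddots & \vdots \\ f_1(x+n-1) & f_2(x+n-1) & \cdots & f_n(x+n-1) \end{pmatrix}$$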
Works
Available at Gallica (also at GDZ): freely available copies of volume 1 of his best-known monograph, the only volume ever published.
External links
1835 births
1890 deaths
19th-century Italian mathematicians
Mathematical analysts
Scientists from Pavia
|
https://en.wikipedia.org/wiki/Triton%20%28content%20delivery%29
|
Triton was a digital delivery and digital rights management service created by Digital Interactive Streams, which abruptly went out of business in early October 2006.
Triton was a new competitor in the rapidly growing market for electronic distribution of video games. Triton was being used to serve budget-oriented games from such publishers as Strategy First and Global Star Software, and was best known for distributing Prey.
History
Triton was launched on November 10, 2004, under the name Game xStream. The service signed several smaller publishers shortly thereafter, and announced its first high-profile deal in May 2005, signing 3D Realms and its then in-development FPS Prey.
Game xStream was renamed Triton on May 8, 2006.
In early October 2006, owners of Prey who had purchased it via Triton began to complain about problems purchasing the game, activating it, and reaching customer service. 3D Realms' webmaster Joe Siegler managed to find out that Triton and Digital Interactive Streams had gone out of business suddenly and apparently without warning.
A follow-up from Royal O'Brien of Triton said that Prey owners who used Triton would not lose their game: a patch was in development to remove the dependency on the live system and allow users to back up, copy, and play their games. However, customers who purchased the game through Triton would receive a retail copy instead.
Prey was released on Valve's Steam service, which allows any existing Prey owners to register their game through Steam by entering the activation code, including those who bought Prey through Triton. The game is, however, no longer available for purchase through Steam.
Technology
Although similar to competing services, the primary selling point of Triton was its "dynamic streaming" technology, which allowed games to be played before they had been completely downloaded; new content was sent to the client as it was needed. All games on the service required the user to be online to be played.
|
https://en.wikipedia.org/wiki/Fidelity%20of%20quantum%20states
|
In quantum mechanics, notably in quantum information theory, fidelity is a measure of the "closeness" of two quantum states. It expresses the probability that one state will pass a test to identify it as the other. The fidelity is not a metric on the space of density matrices, but it can be used to define the Bures metric on this space.
Definition
The fidelity between two quantum states $\rho$ and $\sigma$, expressed as density matrices, is commonly defined as:

$$F(\rho, \sigma) = \left(\operatorname{tr} \sqrt{\sqrt{\rho}\,\sigma\sqrt{\rho}}\right)^{2}.$$

The square roots in this expression are well-defined because both $\rho$ and $\sigma$ are positive semidefinite matrices, and the square root of a positive semidefinite matrix is defined via the spectral theorem. The Euclidean inner product from the classical definition is replaced by the Hilbert–Schmidt inner product.
As will be discussed in the following sections, this expression can be simplified in various cases of interest. In particular, for pure states, $\rho = |\psi_\rho\rangle\langle\psi_\rho|$ and $\sigma = |\psi_\sigma\rangle\langle\psi_\sigma|$, it equals:

$$F(\rho, \sigma) = |\langle\psi_\rho|\psi_\sigma\rangle|^{2}.$$

This tells us that the fidelity between pure states has a straightforward interpretation in terms of probability of finding the state $|\psi_\rho\rangle$ when measuring $|\psi_\sigma\rangle$ in a basis containing $|\psi_\rho\rangle$.
Some authors use an alternative definition $F' = \sqrt{F}$ and call this quantity fidelity. The definition of $F$, however, is more common. To avoid confusion, $F'$ could be called "square root fidelity". In any case it is advisable to clarify the adopted definition whenever the fidelity is employed.
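A small numerical check of the definition above (a sketch assuming NumPy and SciPy are available; the function name is ours): for the pure states $|0\rangle$ and $|+\rangle$ the matrix formula reproduces the overlap $|\langle\psi_\rho|\psi_\sigma\rangle|^2 = 1/2$.

import numpy as np
from scipy.linalg import sqrtm

def fidelity(rho, sigma):
    """F(rho, sigma) = (tr sqrt( sqrt(rho) sigma sqrt(rho) ))**2."""
    s = sqrtm(rho)
    return np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2

psi = np.array([1.0, 0.0])                 # |0>
phi = np.array([1.0, 1.0]) / np.sqrt(2.0)  # |+>
rho = np.outer(psi, psi.conj())
sigma = np.outer(phi, phi.conj())

print(fidelity(rho, sigma))                # ~0.5 from the matrix formula
print(abs(np.vdot(psi, phi)) ** 2)         # 0.5, the pure-state overlap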
Motivation from classical counterpart
Given two random variables $X, Y$ with values $(1, \dots, n)$ (categorical random variables) and probabilities $p = (p_1, \dots, p_n)$ and $q = (q_1, \dots, q_n)$, the fidelity of $X$ and $Y$ is defined to be the quantity

$$F(X, Y) = \left(\sum_i \sqrt{p_i q_i}\right)^{2}.$$

The fidelity deals with the marginal distribution of the random variables. It says nothing about the joint distribution of those variables. In other words, the fidelity $F(X, Y)$ is the square of the inner product of $(\sqrt{p_1}, \dots, \sqrt{p_n})$ and $(\sqrt{q_1}, \dots, \sqrt{q_n})$ viewed as vectors in Euclidean space. Notice that $F(X, Y) = 1$ if and only if $p = q$. In general, $0 \leq F(X, Y) \leq 1$. The measure $\sum_i \sqrt{p_i q_i}$ is known as the Bhattacharyya coefficient.
Given a classical measure of the distinguishability of two probability d
|
https://en.wikipedia.org/wiki/Expensive%20Tape%20Recorder
|
Expensive Tape Recorder is a digital audio program written by David Gross while a student at the Massachusetts Institute of Technology. Gross developed the idea with Alan Kotok, a fellow member of the Tech Model Railroad Club. The recorder and playback system ran in the late 1950s or early 1960s on MIT's TX-0 computer on loan from Lincoln Laboratory.
The name
Gross referred to this project by this name casually in the context of Expensive Typewriter and other programs that took their names in the spirit of "Colossal Typewriter". It is unclear whether the typewriters were named for the US$3 million development cost of the TX-0, or for the retail price of the DEC PDP-1, a descendant of the TX-0 installed next door at MIT in 1961. The PDP-1 was one of the least expensive computers money could buy, at about US$120,000 in 1962. The program has been referred to as a hack, perhaps in the historical sense or in the MIT hack sense, or the term may have been applied to it in the sense of Hackers: Heroes of the Computer Revolution, a book by Steven Levy.
The project
Gross recalled and very briefly described the project in a 1984 Computer Museum meeting. A person associated with the Tixo Web site spoke with Gross and Kotok, and posted the only other description known.
Influence
According to Kotok, the project was "digital recording more than 20 years ahead of its time." In 1984, when Jack Dennis asked if they could recognize Beethoven, Computer Museum meeting minutes record the authors as saying, "It wasn't bad, considering." Digital audio pioneer Thomas Stockham worked with Dennis and, like Kotok, helped develop a contemporary debugger. Whether he was first influenced by Expensive Tape Recorder or more by the work of Kenneth N. Stevens is unknown.
See also
PDP-1
Digital recording
Expensive Typewriter
Expensive Desk Calculator
Expensive Planetarium
Harmony Compiler
|
https://en.wikipedia.org/wiki/Immunodermatology
|
Immunodermatology studies skin as an organ of immunity in health and disease. Several areas receive special attention, such as photo-immunology (effects of UV light on skin defense), inflammatory diseases such as Hidradenitis suppurativa, allergic contact dermatitis and atopic eczema, presumably autoimmune skin diseases such as vitiligo and psoriasis, and finally the immunology of microbial skin diseases such as retrovirus infections and leprosy. New therapies in development for the immunomodulation of common immunological skin diseases include biologicals aimed at neutralizing TNF-alpha and chemokine receptor inhibitors.
Testing sites
Multiple universities currently conduct immunodermatology testing, including:
University of Utah Health
University of North Carolina
See also
Dermatology
Immune response
|
https://en.wikipedia.org/wiki/Survival%20Under%20Atomic%20Attack
|
Survival Under Atomic Attack was the title of an official United States government booklet released by the Executive Office of the President, the National Security Resources Board (document 130), and the Civil Defense Office. Released at the onset of the Cold War era, the pamphlet was in line with rising fears that the Soviet Union would launch a nuclear attack against the United States, and outlined what to do in the event of an atomic attack.
The booklet introduced the general public to the effects of nuclear weapons and was aimed at calming the fears surrounding them. Survival Under Atomic Attack was the first entry in a series of government publications and communications that employed the strategy of "emotion management" in order to neutralize the horrifying aspects of nuclear weapons.
Purpose
Published in 1950 by the Government Printing Office, one year after the Soviet Union detonated its first atomic bomb, the booklet explains how to protect oneself, one's food and water supply, and one's home. It also covered how to prevent burns and what to do if exposed to radiation. The U.S. Strategic Bombing Survey had assessed the civilian response in Hiroshima and Nagasaki beginning as early as August–September 1945, and its report was "Based on a detailed investigation of all the facts, and supported by the testimony of the surviving Japanese leaders involved...". Secondly, the Atomic Bomb Casualty Commission was active from 1946 to 1975 studying the effects of the two bombs on survivors in both cities, and thus represented four years of post-bombing study at the time of publication.
Center Insert
The four pages in the center of the brochure (15, 16, 17, 18) were designed to be torn out.
"Remove this sheet and keep it with you until you've memorized it."
Kill the Myths (15)
Atomic Weapons Will Not Destroy The Earth Atomic bombs hold more death and destruction than man ever before has wrapped up in a single package, but their over-all power still has very de
|
https://en.wikipedia.org/wiki/Rubin%20causal%20model
|
The Rubin causal model (RCM), also known as the Neyman–Rubin causal model, is an approach to the statistical analysis of cause and effect based on the framework of potential outcomes, named after Donald Rubin. The name "Rubin causal model" was first coined by Paul W. Holland. The potential outcomes framework was first proposed by Jerzy Neyman in his 1923 Master's thesis, though he discussed it only in the context of completely randomized experiments. Rubin extended it into a general framework for thinking about causation in both observational and experimental studies.
Introduction
The Rubin causal model is based on the idea of potential outcomes. For example, a person would have a particular income at age 40 if they had attended college, whereas they would have a different income at age 40 if they had not attended college. To measure the causal effect of going to college for this person, we need to compare the outcome for the same individual in both alternative futures. Since it is impossible to see both potential outcomes at once, one of the potential outcomes is always missing. This dilemma is the "fundamental problem of causal inference."
Because of the fundamental problem of causal inference, unit-level causal effects cannot be directly observed. However, randomized experiments allow for the estimation of population-level causal effects. A randomized experiment assigns people randomly to treatments: college or no college. Because of this random assignment, the groups are (on average) equivalent, and the difference in income at age 40 can be attributed to the college assignment since that was the only difference between the groups. An estimate of the average causal effect (also referred to as the average treatment effect or ATE) can then be obtained by computing the difference in means between the treated (college-attending) and control (not-college-attending) samples.
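The difference-in-means estimator described above is easy to demonstrate on simulated potential outcomes. A sketch with invented numbers (the true effect is set to 10,000, and random assignment recovers it):

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical potential outcomes: income at 40 without and with college.
y0 = rng.normal(40_000, 5_000, n)   # Y(0): no college
y1 = y0 + 10_000                    # Y(1): college adds 10k (true ATE)

# Random assignment reveals only one potential outcome per person.
t = rng.integers(0, 2, n)
y_obs = np.where(t == 1, y1, y0)

# The difference in group means estimates the average treatment effect.
ate_hat = y_obs[t == 1].mean() - y_obs[t == 0].mean()
print(round(ate_hat))               # close to 10000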
In many circumstances, however, randomized experiments are not possible due to ethical or
|
https://en.wikipedia.org/wiki/Richard%20Altmann
|
Richard Altmann (12 March 1852 – 8 December 1900) was a German pathologist and histologist from Deutsch Eylau in the Province of Prussia.
Altmann studied medicine in Greifswald, Königsberg, Marburg, and Giessen, obtaining a doctorate at the University of Giessen in 1877. He then worked as a prosector at Leipzig, and in 1887 became an anatomy professor (extraordinary). He died in Hubertusburg in 1900 from a nervous disorder.
He improved fixation methods, for instance his solution of potassium dichromate and osmium tetroxide. Using that along with a new staining technique, applying acid fuchsin contrasted by picric acid with gentle heating, he observed filaments in nearly all cell types, developed from granules. He named the granules "bioblasts" and explained them as the elementary living units, having metabolic and genetic autonomy, in his 1890 book Die Elementarorganismen ("The Elementary Organisms"). His explanation drew much skepticism and harsh criticism. Altmann's granules are now believed to be mitochondria.
He is credited with coining the term "nucleic acid" in 1889, replacing Friedrich Miescher's term "nuclein" when it was demonstrated that nuclein was acidic.
|
https://en.wikipedia.org/wiki/Harmony%20Compiler
|
Harmony Compiler was written by Peter Samson at the Massachusetts Institute of Technology (MIT). The compiler was designed to encode music for the PDP-1 and built on an earlier program Samson wrote for the TX-0 computer.
Jack Dennis noticed, and mentioned to Samson, that the on-or-off state of the TX-0's speaker could be enough to play music. They succeeded in building a WYSIWYG program for one voice by 1960.
For the PDP-1, which arrived at MIT in September 1961, Samson designed the Harmony Compiler, which synthesizes four voices from input in a text-based notation. Although it created music in many genres, it was optimized for baroque music. PDP-1 music is merged from four channels and played back in stereo. Notes are on pitch and each has an undertone. The music does not stop for errors: mistakes are greeted with a message from the typewriter's red ribbon, "To err is human, to forgive divine."
Samson joined the PDP-1 restoration project at the Computer History Museum in 2004 to recreate the music player.
|
https://en.wikipedia.org/wiki/318%20%28number%29
|
318 is the natural number following 317 and preceding 319.
In mathematics
318 is:
a sphenic number
a nontotient
the number of posets with 6 unlabeled elements
the sum of 12 consecutive primes, 7 + 11 + 13 + 17 + 19 + 23 + 29 + 31 + 37 + 41 + 43 + 47.
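Both arithmetic claims above are easy to verify mechanically; a throwaway check in Python (sympy assumed available):

from sympy import factorint, primerange

# 318 = 2 * 3 * 53, a product of three distinct primes, hence sphenic.
assert factorint(318) == {2: 1, 3: 1, 53: 1}

# The 12 consecutive primes from 7 to 47 sum to 318.
primes = list(primerange(7, 48))
assert len(primes) == 12 and sum(primes) == 318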
In religion
In Genesis 14, Abraham takes 318 men to rescue his brother Lot.
|
https://en.wikipedia.org/wiki/Naval%20ensign
|
A naval ensign is an ensign (maritime flag) used by naval ships of various countries to denote their nationality. It can be the same or different from a country's civil ensign or state ensign.
It can also be known as a war ensign. A large version of a naval ensign that is flown on a warship's mast just before going into battle is called a battle ensign. An ensign differs from a jack, which is flown from a jackstaff at the bow of a vessel.
Most countries have only one national flag and ensign for all purposes. In other countries, a distinction is made between the land flag and the civil, state and naval ensigns. The British ensigns, for example, differ from the flag used on land (the Union Flag) and have different versions of plain and defaced Red and Blue ensigns for civilian and state use, as well as the naval ensign (White Ensign). Some naval ensigns differ in shape from the national flag, such as the Nordic naval ensigns, which have 'tongues'.
Countries having specific naval ensigns
Naval ensigns that are different from the civil ensign and the national flag:
Historical naval ensigns
|
https://en.wikipedia.org/wiki/UCT%20Mathematics%20Competition
|
The UCT Mathematics Competition is an annual mathematics competition for schools in the Western Cape province of South Africa, held at the University of Cape Town.
Around 7000 participants from Grade 8 to Grade 12 take part, writing a multiple-choice paper. Individual and pair entries are accepted, but all write the same paper for their grade.
The current holder of the School Trophy is Rondebosch Boys' High School, with Diocesan College achieving second place in the 2022 competition. These two schools have held the top positions in the competition for a number of years.
The competition was established in 1977 by Mona Leeuwenberg and Shirley Fitton, who were teachers at Diocesan College and Westerford High School, and since 1987 has been run by Professor John Webb of the University of Cape Town.
Awards
Mona Leeuwenburg Trophy
The Mona Leeuwenburg Trophy is awarded to the school with the best overall performance in the competition.
UCT Trophy
The UCT Trophy is awarded to the best-performing school that has not participated in the competition more than twice before.
Diane Tucker Trophy
The Diane Tucker Trophy is awarded to the girl with the best performance in the competition. The trophy was first awarded in 2000.
Moolla Trophy
The Moolla Trophy was donated to the competition by the Moolla family. Saadiq, Haroon and Ashraf Moolla represented Rondebosch Boys' High School and achieved Gold Awards from 2003 to 2011. The trophy is awarded to a school from a disadvantaged community that shows a notable performance in the competition.
Lesley Reeler Trophy
The Lesley Reeler Trophy is awarded for the best individual performance over five years (grades 8 to 12).
|
https://en.wikipedia.org/wiki/Cleanskin%20%28wine%29
|
In Australia and New Zealand, cleanskin wine is a term for wine whose label does not indicate the winery or the winemaker's name. It is typically sold at a low price.
Cleanskin labels usually only show the grape variety and the year of bottling, as well as other information required by Australian law - alcohol content, volume, additives and standard drink information.
Cleanskin wines are typically sold cheaply in dozen lots for home consumption. They may be branded wines that were originally sold at a higher price and re-labelled as cleanskins, or they may be wines produced for the purpose of being sold as cleanskins. Consequently, the quality of various batches of cleanskin wine can vary significantly.
Cleanskin wine was introduced to Australia in the early 2000s as a way for the wine industry to cope with a massive oversupply of wine and a resulting drop in prices. Partly as a result, wine consumption in Australia had greatly increased as of 2006, and the price of cleanskin wine dropped to around or below the price of beer or even bottled water.
The word "cleanskin" comes from the Australian term for unbranded cattle, and is also used to refer to undercover law enforcement agents.
|
https://en.wikipedia.org/wiki/AIDGAP%20series
|
AIDGAP is an acronym for Aid to Identification in Difficult Groups of Animals and Plants.
The AIDGAP series is a set of books published by the Field Studies Council. They are intended to enable students and interested non-specialists to identify groups of taxa in Britain which are not covered by standard field guides. In general, they are less demanding in level than the Synopses of the British Fauna.
All AIDGAP guides are initially produced as test versions, which are circulated widely to students, teaching staff and environmental professionals, with the feedback incorporated into the final published versions. In many cases the AIDGAP volume is the only non-technical work covering the group of taxa in question.
History of the series
The Field Studies Council recognised the widespread need for identification guides soon after its inception, and has since established a long tradition of publishing such material. Many of these were written by teaching staff writing their own keys to fill obvious gaps in the available literature (see for example A key to the land snails of the Flatford area, Suffolk (1959)). However, it became increasingly apparent that a change in approach was needed. Too few guides were available which were usable by those with little previous experience. Many groups of plants and animals appeared to be neglected.
The FSC initiated the AIDGAP project in 1976, with input from an advisory panel which included a range of organisations such as the Linnean Society, teachers in secondary education and professional illustrators. The two main objectives adopted by the panel were first to identify those groups of organisms regarded as 'difficult' due to a lack of a suitable key, and second to investigate ways of alleviating the difficulties of identification for each group. The panel also decided to incorporate a 'testing' stage during which the identification guides could be revised and improved.
In practice today, AIDGAP guides are produced as 'test
|
https://en.wikipedia.org/wiki/Delta%20robot
|
A delta robot is a type of parallel robot that consists of three arms connected to universal joints at the base. The key design feature is the use of parallelograms in the arms, which maintains the orientation of the end effector; in contrast, a Stewart platform can change the orientation of its end effector.
Delta robots are popular for picking and packaging in factories because they can be quite fast, some executing up to 300 picks per minute.
History
The delta robot (a parallel arm robot) was invented in the early 1980s by a research team led by professor Reymond Clavel at the École Polytechnique Fédérale de Lausanne (EPFL, Switzerland). After a visit to a chocolate maker, a team member wanted to develop a robot to place pralines in their packages. The purpose of this new type of robot was to manipulate light and small objects at a very high speed, an industrial need at that time.
In 1987, the Swiss company Demaurex purchased a license for the delta robot and started the production of delta robots for the packaging industry. In 1991, Reymond Clavel presented his doctoral thesis 'Conception d'un robot parallèle rapide à 4 degrés de liberté', and received the golden robot award in 1999 for his work and development of the delta robot. Also in 1999, ABB Flexible Automation started selling its delta robot, the FlexPicker. By the end of 1999, delta robots were also sold by Sigpack Systems.
In 2017, researchers from Harvard's Microrobotics Lab miniaturized the design with piezoelectric actuators to 0.43 grams in a 15 mm × 15 mm × 20 mm package, capable of moving a 1.3 g payload around a 7 cubic-millimetre workspace with 5-micrometre precision, reaching speeds of 0.45 m/s with 215 m/s² accelerations and repeating patterns at 75 Hz.
Design
The delta robot is a parallel robot, i.e. it consists of multiple kinematic chains connecting the base with the end-effector. The robot can also be seen as a spatial generalisation of a four-bar linkage.
The key concept of the delta robot
|
https://en.wikipedia.org/wiki/Booster%20dose
|
A booster dose is an extra administration of a vaccine after an earlier (primer) dose. After initial immunization, a booster provides a re-exposure to the immunizing antigen. It is intended to increase immunity against that antigen back to protective levels after memory against that antigen has declined through time. For example, tetanus shot boosters are often recommended every 10 years, by which point memory cells specific against tetanus lose their function or undergo apoptosis.
The need for a booster dose following a primary vaccination is evaluated in several ways. One way is to measure the level of antibodies specific against a disease a few years after the primary dose is given. Anamnestic response, the rapid production of antibodies after a stimulus of an antigen, is a typical way to measure the need for a booster dose of a certain vaccine. If the anamnestic response is high after receiving a primary vaccine many years earlier, there is most likely little to no need for a booster dose. People can also measure active B and T cell activity against the antigen a certain amount of time after the primary vaccine was administered, or determine the prevalence of the disease in vaccinated populations.
If a patient receives a booster dose but already has a high level of antibody, then a reaction called an Arthus reaction could develop, a localized form of Type III hypersensitivity induced by high levels of IgG antibodies causing inflammation. The inflammation is often self-resolved over the course of a few days but could be avoided altogether by increasing the length of time between the primary vaccine and the booster dose.
It is not yet fully clear why some vaccines such as hepatitis A and B are effective for life, and some such as tetanus need boosters. The prevailing theory is that if the immune system responds to a primary vaccine rapidly, the body does not have time to sufficiently develop immunological memory against the disease, and memory cells will
|
https://en.wikipedia.org/wiki/Virtual%20screening
|
Virtual screening (VS) is a computational technique used in drug discovery to search libraries of small molecules in order to identify those structures which are most likely to bind to a drug target, typically a protein receptor or enzyme.
Virtual screening has been defined as "automatically evaluating very large libraries of compounds" using computer programs. As this definition suggests, VS has largely been a numbers game focusing on how the enormous chemical space of over 10^60 conceivable compounds can be filtered to a manageable number that can be synthesized, purchased, and tested. Although searching the entire chemical universe may be a theoretically interesting problem, more practical VS scenarios focus on designing and optimizing targeted combinatorial libraries and enriching libraries of available compounds from in-house compound repositories or vendor offerings. As the accuracy of the method has increased, virtual screening has become an integral part of the drug discovery process. Virtual screening can be used to select in-house database compounds for screening, choose compounds that can be purchased externally, and choose which compound should be synthesized next.
Methods
There are two broad categories of screening techniques: ligand-based and structure-based.
Ligand-based methods
Given a set of structurally diverse ligands that bind to a receptor, a model of the receptor can be built by exploiting the collective information contained in such a set of ligands. Different computational techniques explore the structural, electronic, molecular-shape, and physicochemical similarities of different ligands that could imply their mode of action against a specific molecular receptor or cell lines. A candidate ligand can then be compared to the pharmacophore model to determine whether it is compatible with it and therefore likely to bind. Different 2D chemical similarity
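One widely used 2D similarity measure is the Tanimoto coefficient on binary fingerprints. A minimal sketch with toy bit sets (invented for illustration, not real chemical fingerprints):

def tanimoto(a, b):
    """Tanimoto (Jaccard) similarity of two sets of on-bits."""
    return len(a & b) / len(a | b)

query = {1, 4, 7, 9, 12}          # on-bits of the query ligand's fingerprint
library = {"mol_A": {1, 4, 7, 9, 13}, "mol_B": {2, 5, 8}}

for name, fp in library.items():
    print(name, round(tanimoto(query, fp), 2))   # mol_A: 0.67, mol_B: 0.0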
|
https://en.wikipedia.org/wiki/BBGKY%20hierarchy
|
In statistical physics, the BBGKY hierarchy (Bogoliubov–Born–Green–Kirkwood–Yvon hierarchy, sometimes called Bogoliubov hierarchy) is a set of equations describing the dynamics of a system of a large number of interacting particles. The equation for an s-particle distribution function (probability density function) in the BBGKY hierarchy includes the (s + 1)-particle distribution function, thus forming a coupled chain of equations. This formal theoretic result is named after Nikolay Bogolyubov, Max Born, Herbert S. Green, John Gamble Kirkwood, and Jacques Yvon.
Formulation
The evolution of an N-particle system in absence of quantum fluctuations is given by the Liouville equation for the probability density function $f_N = f_N(\mathbf{q}_1 \dots \mathbf{q}_N, \mathbf{p}_1 \dots \mathbf{p}_N, t)$ in 6N-dimensional phase space (3 space and 3 momentum coordinates per particle)

$$\frac{\partial f_N}{\partial t} + \sum_{i=1}^{N} \frac{\mathbf{p}_i}{m_i} \cdot \frac{\partial f_N}{\partial \mathbf{q}_i} + \sum_{i=1}^{N} \mathbf{F}_i \cdot \frac{\partial f_N}{\partial \mathbf{p}_i} = 0,$$

where $\mathbf{q}_i, \mathbf{p}_i$ are the coordinates and momentum for the $i$-th particle with mass $m_i$, and the net force acting on the $i$-th particle is

$$\mathbf{F}_i = -\sum_{k \neq i} \frac{\partial \Phi_{ik}}{\partial \mathbf{q}_i} - \frac{\partial \Phi_i^{\mathrm{ext}}}{\partial \mathbf{q}_i},$$

where $\Phi_{ik} = \Phi(\mathbf{q}_i, \mathbf{q}_k)$ is the pair potential for interaction between particles, and $\Phi^{\mathrm{ext}}$ is the external-field potential. By integration over part of the variables, the Liouville equation can be transformed into a chain of equations where the first equation connects the evolution of the one-particle probability density function with the two-particle probability density function, the second equation connects the two-particle probability density function with the three-particle probability density function, and generally the s-th equation connects the s-particle probability density function

$$f_s(\mathbf{q}_1 \dots \mathbf{q}_s, \mathbf{p}_1 \dots \mathbf{p}_s, t) = \int f_N\, d\mathbf{q}_{s+1} \dots d\mathbf{q}_N\, d\mathbf{p}_{s+1} \dots d\mathbf{p}_N$$

with the (s + 1)-particle probability density function:

$$\frac{\partial f_s}{\partial t} + \sum_{i=1}^{s} \frac{\mathbf{p}_i}{m_i} \cdot \frac{\partial f_s}{\partial \mathbf{q}_i} + \sum_{i=1}^{s} \mathbf{F}_i^{(s)} \cdot \frac{\partial f_s}{\partial \mathbf{p}_i} = (N - s) \sum_{i=1}^{s} \int \frac{\partial \Phi_{i,s+1}}{\partial \mathbf{q}_i} \cdot \frac{\partial f_{s+1}}{\partial \mathbf{p}_i}\, d\mathbf{q}_{s+1}\, d\mathbf{p}_{s+1},$$

where $\mathbf{F}_i^{(s)}$ now includes only the interactions among the first s particles and the external field. The equation above for the s-particle distribution function is obtained by integration of the Liouville equation over the variables $\mathbf{q}_{s+1} \dots \mathbf{q}_N, \mathbf{p}_{s+1} \dots \mathbf{p}_N$. The problem with the above equation is that it is not closed: to solve for $f_s$, one has to know $f_{s+1}$, which in turn demands solving for $f_{s+2}$ and so on, all the way back to the full Liouville equation. However, one can solve for $f_s$ if $f_{s+1}$ can be modeled. One such case is the Boltzmann equation for $f_1(\mathbf{q}_1, \mathbf{p}_1, t)$, where $f_2$ is modeled based on the molecular chaos hypothesis.
|
https://en.wikipedia.org/wiki/TI-Nspire%20series
|
The TI-Nspire is a graphing calculator line made by Texas Instruments, with the first version released on 25 September 2007. The calculators feature a non-QWERTY keyboard and a different key-by-key layout than Texas Instruments's previous flagship calculators such as the TI-89 series.
Development
The original TI-Nspire was developed out of the TI PLT SHH1 prototype calculator (which itself was derived from the Casio ClassPad 300), the TI-92 series of calculators released in 1995, and the TI-89 series of calculators released in 1998.
In 2011, Texas Instruments released the CX line of TI-Nspire calculators, which effectively replaced the previous generation. The updates included improvements to the original's keyboard layout, the addition of a rechargeable lithium-ion battery, 3D graphing capabilities, and a reduced form factor. TI eliminated the removable keypad with this generation and, with it, the TI-84 compatibility mode.
In 2019, the TI-Nspire CX II was added, with a boost in clock speed and changes to the existing operating system.
Versions
The TI-Nspire series uses a different operating system from Texas Instruments' other calculators. The TI-Nspire includes a file manager that lets users create and edit documents. As a result of being developed from PDA-esque devices, the TI-Nspire retains many functional similarities to a computer.
TI-Nspire
The standard TI-Nspire calculator is comparable to the TI-84 Plus in features and functionality. It features a TI-84 mode by way of a replaceable snap-in keypad and contains a TI-84 Plus emulator. The likely target of this is secondary schools that make use of the TI-84 Plus currently or have textbooks that cover the TI-83 (Plus) and TI-84 Plus lines, and to allow them to transition to the TI-Nspire line more easily.
The TI-Nspire started development in 2004. It uses a proprietary SoC of the ARM9 variant for its CPU. The TI-Nspire and TI-Nspire CAS (Computer algebra system) calculators have 32
|
https://en.wikipedia.org/wiki/Trinucleotide%20repeat%20expansion
|
A trinucleotide repeat expansion, also known as a triplet repeat expansion, is the DNA mutation responsible for causing any type of disorder categorized as a trinucleotide repeat disorder. These are labelled in dynamical genetics as dynamic mutations. Triplet expansion is caused by slippage during DNA replication, also known as "copy choice" DNA replication. Due to the repetitive nature of the DNA sequence in these regions, 'loop out' structures may form during DNA replication while maintaining complementary base pairing between the parent strand and daughter strand being synthesized. If the loop out structure is formed from the sequence on the daughter strand this will result in an increase in the number of repeats. However, if the loop out structure is formed on the parent strand, a decrease in the number of repeats occurs. It appears that expansion of these repeats is more common than reduction. Generally, the larger the expansion the more likely they are to cause disease or increase the severity of disease. Other proposed mechanisms for expansion and reduction involve the interaction of RNA and DNA molecules.
In addition to occurring during DNA replication, trinucleotide repeat expansion can also occur during DNA repair. When a DNA trinucleotide repeat sequence is damaged, it may be repaired by processes such as homologous recombination, non-homologous end joining, mismatch repair or base excision repair. Each of these processes involves a DNA synthesis step in which strand slippage might occur leading to trinucleotide repeat expansion.
The number of trinucleotide repeats appears to predict the progression, severity, and age of onset of Huntington's disease and similar trinucleotide repeat disorders. Other human diseases in which triplet repeat expansion occurs are fragile X syndrome, several spinocerebellar ataxias, myotonic dystrophy and Friedreich's ataxia.
History
The first documentation of anticipation in genetic disorders was in the 1800s. However, fro
|
https://en.wikipedia.org/wiki/Hyrachyus
|
Hyrachyus (from Hyrax and "pig") is an extinct genus of perissodactyl mammal that lived in Eocene Europe, North America, and Asia. Its remains have also been found in Jamaica. It is closely related to Lophiodon.
Description
The 1.5-m-long beast was related to palaeotheres, and suspected to be the ancestor of modern tapirs and rhinoceroses. Physically, it would have looked very similar to modern tapirs, although it probably lacked the tapir's characteristic proboscis. Its teeth, however, resembled those of a rhinoceros, supporting the idea of its relationship with that group.
|
https://en.wikipedia.org/wiki/%C3%89tale%20topology
|
In algebraic geometry, the étale topology is a Grothendieck topology on the category of schemes which has properties similar to the Euclidean topology, but unlike the Euclidean topology, it is also defined in positive characteristic. The étale topology was originally introduced by Alexander Grothendieck to define étale cohomology, and this is still the étale topology's most well-known use.
Definitions
For any scheme X, let Ét(X) be the category of all étale morphisms from a scheme to X. This is the analog of the category of open subsets of X (that is, the category whose objects are the open subsets of X and whose morphisms are the inclusions). Its objects can be informally thought of as étale open subsets of X. The intersection of two objects corresponds to their fiber product over X. Ét(X) is a large category, meaning that its objects do not form a set.
An étale presheaf on X is a contravariant functor from Ét(X) to the category of sets. A presheaf F is called an étale sheaf if it satisfies the analog of the usual gluing condition for sheaves on topological spaces. That is, F is an étale sheaf if and only if the following condition is true. Suppose that $U$ is an object of Ét(X) and that $(\varphi_i : U_i \to U)$ is a jointly surjective family of étale morphisms over X. For each i, choose a section $x_i$ of F over $U_i$. The projection map $U_i \times_U U_j \to U_i$, which is loosely speaking the inclusion of the intersection of $U_i$ and $U_j$ in $U_i$, induces a restriction map $F(U_i) \to F(U_i \times_U U_j)$. If for all i and j the restrictions of $x_i$ and $x_j$ to $U_i \times_U U_j$ are equal, then there must exist a unique section $x$ of F over $U$ which restricts to $x_i$ for all i.
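Equivalently (a standard reformulation, not spelled out in the text above), the gluing condition says that for every such jointly surjective family the following diagram is an equalizer:

$$F(U) \longrightarrow \prod_{i} F(U_i) \rightrightarrows \prod_{i,j} F(U_i \times_U U_j)$$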
Suppose that X is a Noetherian scheme. An abelian étale sheaf F on X is called finite locally constant if it is a representable functor which can be represented by an étale cover of X. It is called constructible if X can be covered by a finite family of subschemes on each of which the restriction of F is finite locally constant. It is called torsion if F(U) is a torsion group for all étale covers U of X.
|
https://en.wikipedia.org/wiki/Splice%20site%20mutation
|
A splice site mutation is a genetic mutation that inserts, deletes or changes a number of nucleotides in the specific site at which splicing takes place during the processing of precursor messenger RNA into mature messenger RNA. Splice site consensus sequences that drive exon recognition are located at the very termini of introns. The deletion of the splicing site results in one or more introns remaining in mature mRNA and may lead to the production of abnormal proteins. When a splice site mutation occurs, the mRNA transcript possesses information from these introns that normally should not be included. Introns are supposed to be removed, while the exons are expressed.
The mutation must occur at the specific site at which intron splicing occurs: within non-coding sites in a gene, directly next to the location of the exon. The mutation can be an insertion, deletion, frameshift, etc. The splicing process itself is controlled by the given sequences, known as splice-donor and splice-acceptor sequences, which surround each exon. Mutations in these sequences may lead to retention of large segments of intronic DNA by the mRNA, or to entire exons being spliced out of the mRNA. These changes could result in production of a nonfunctional protein. An intron is separated from its exon by means of the splice site. Acceptor-site and donor-site relating to the splice sites signal to the spliceosome where the actual cut should be made. These donor sites, or recognition sites, are essential in the processing of mRNA. The average vertebrate gene consists of multiple small exons (average size, 137 nucleotides) separated by introns that are considerably larger.
Background
In 1993, Richard J. Roberts and Phillip Allen Sharp received the Nobel Prize in Physiology or Medicine for their discovery of "split genes". Using the model adenovirus in their research, they were able to discover splicing—the fact that pre-mRNA is processed into mRNA once introns were removed from the RNA segment.
|
https://en.wikipedia.org/wiki/Weierstrass%20product%20inequality
|
In mathematics, the Weierstrass product inequality states that for any real numbers $0 \le x_1, \dots, x_n \le 1$ we have

$$(1 - x_1)(1 - x_2) \cdots (1 - x_n) \ge 1 - S_n \quad \text{and} \quad (1 + x_1)(1 + x_2) \cdots (1 + x_n) \ge 1 + S_n,$$

where

$$S_n = x_1 + x_2 + \cdots + x_n.$$
The inequality is named after the German mathematician Karl Weierstrass.
Proof
The inequality with the subtractions can be proven easily via mathematical induction. The one with the additions is proven identically. We can choose $n = 1$ as the base case and see that for this value of $n$ we get

$$1 - x_1 \ge 1 - x_1,$$

which is indeed true. Assuming now that the inequality holds for all natural numbers up to $n$, for $n + 1$ we have:

$$\prod_{i=1}^{n+1}(1 - x_i) = (1 - x_{n+1})\prod_{i=1}^{n}(1 - x_i) \ge (1 - x_{n+1})(1 - S_n) = 1 - S_{n+1} + x_{n+1}S_n \ge 1 - S_{n+1},$$

which concludes the proof.
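A quick numerical spot check of both inequalities on random inputs (a throwaway sketch in Python):

import math
import random

random.seed(1)
for _ in range(1000):
    xs = [random.random() for _ in range(random.randint(1, 10))]
    s = sum(xs)
    # Both forms of the Weierstrass product inequality.
    assert math.prod(1 - x for x in xs) >= 1 - s
    assert math.prod(1 + x for x in xs) >= 1 + s
print("both inequalities hold on all samples")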
|
https://en.wikipedia.org/wiki/Dowker%E2%80%93Thistlethwaite%20notation
|
In the mathematical field of knot theory, the Dowker–Thistlethwaite (DT) notation or code, for a knot is a sequence of even integers. The notation is named after Clifford Hugh Dowker and Morwen Thistlethwaite, who refined a notation originally due to Peter Guthrie Tait.
Definition
To generate the Dowker–Thistlethwaite notation, traverse the knot using an arbitrary starting point and direction. Label each of the n crossings with the numbers 1, ..., 2n in order of traversal (each crossing is visited and labelled twice), with the following modification: if the label is an even number and the strand followed crosses over at the crossing, then change the sign on the label to be negative. When finished, each crossing will be labelled with a pair of integers, one even and one odd. The Dowker–Thistlethwaite notation is the sequence of even integer labels associated with the labels 1, 3, ..., 2n − 1 in turn.
Example
For example, a knot diagram may have crossings labelled with the pairs (1, 6) (3, −12) (5, 2) (7, 8) (9, −4) and (11, −10). The Dowker–Thistlethwaite notation for this labelling is the sequence: 6 −12 2 8 −4 −10.
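The final step, reading off the even labels in the order of the odd labels 1, 3, ..., 2n − 1, is mechanical. A minimal sketch that reproduces the example above (the function name is ours):

def dt_code(pairs):
    """Given (odd, even) crossing label pairs, return the DT sequence
    ordered by the odd labels 1, 3, 5, ..."""
    by_odd = {odd: even for odd, even in pairs}
    return [by_odd[k] for k in sorted(by_odd)]

pairs = [(1, 6), (3, -12), (5, 2), (7, 8), (9, -4), (11, -10)]
print(dt_code(pairs))   # [6, -12, 2, 8, -4, -10]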
Uniqueness and counting
Dowker and Thistlethwaite have proved that the notation specifies prime knots uniquely, up to reflection.
In the more general case, a knot can be recovered from a Dowker–Thistlethwaite sequence, but the recovered knot may differ from the original by either being a reflection or by having any connected sum component reflected in the line between its entry/exit points – the Dowker–Thistlethwaite notation is unchanged by these reflections. Knot tabulations typically consider only prime knots and disregard chirality, so this ambiguity does not affect the tabulation.
The ménage problem, posed by Tait, concerns counting the number of different number sequences possible in this notation.
See also
Alexander–Briggs notation
Conway notation
Gauss notation
|
https://en.wikipedia.org/wiki/Feedback%20linearization
|
Feedback linearization is a common strategy employed in nonlinear control to control nonlinear systems. Feedback linearization techniques may be applied to nonlinear control systems of the form

$$\dot{x} = f(x) + g(x)u \qquad (1)$$

where $x \in \mathbb{R}^n$ is the state and $u \in \mathbb{R}^p$ are the inputs. The approach involves transforming a nonlinear control system into an equivalent linear control system through a change of variables and a suitable control input. In particular, one seeks a change of coordinates $z = T(x)$ and control input $u = a(x) + b(x)v$ so that the dynamics of $x$ in the coordinates $z$ take the form of a linear, controllable control system,

$$\dot{z} = Az + Bv.$$

An outer-loop control strategy for the resulting linear control system can then be applied to achieve the control objective.
Feedback linearization of SISO systems
Here, consider the case of feedback linearization of a single-input single-output (SISO) system. Similar results can be extended to multiple-input multiple-output (MIMO) systems. In this case, $u \in \mathbb{R}$ and $y \in \mathbb{R}$, with the output given by

$$y = h(x). \qquad (2)$$

The objective is to find a coordinate transformation $z = T(x)$ that transforms the system (1) into the so-called normal form which will reveal a feedback law of the form

$$u = a(x) + b(x)v$$

that will render a linear input–output map from the new input $v$ to the output $y$. To ensure that the transformed system is an equivalent representation of the original system, the transformation must be a diffeomorphism. That is, the transformation must not only be invertible (i.e., bijective), but both the transformation and its inverse must be smooth so that differentiability in the original coordinate system is preserved in the new coordinate system. In practice, the transformation can be only locally diffeomorphic, and the linearization results then hold only in this smaller region.
Several tools are required to solve this problem.
Lie derivative
The goal of feedback linearization is to produce a transformed system whose states are the output $y$ and its first $(n - 1)$ derivatives. To understand the structure of this target system, we use the Lie derivative. Consider the time derivative of (2), which c
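Although the section above is cut off, the Lie-derivative bookkeeping it introduces is easy to illustrate. A minimal symbolic sketch for a toy pendulum-like system of relative degree 2 (our own example under assumed dynamics, not from the article; sympy assumed available):

import sympy as sp

x1, x2, v = sp.symbols("x1 x2 v")
x = sp.Matrix([x1, x2])

# Toy SISO system: xdot = f(x) + g(x) u, output y = h(x).
f = sp.Matrix([x2, -sp.sin(x1)])   # pendulum-like drift (assumption)
g = sp.Matrix([0, 1])
h = x1

Lf = lambda phi: (sp.Matrix([phi]).jacobian(x) * f)[0]   # L_f phi
Lg = lambda phi: (sp.Matrix([phi]).jacobian(x) * g)[0]   # L_g phi

Lfh = Lf(h)        # y' = L_f h = x2   (L_g h = 0, so keep differentiating)
Lf2h = Lf(Lfh)     # drift part of y''
LgLfh = Lg(Lfh)    # = 1, nonzero: relative degree is 2

# Feedback u = (v - L_f^2 h) / (L_g L_f h) renders y'' = v, a linear map.
u = (v - Lf2h) / LgLfh
print(sp.simplify(u))   # v + sin(x1)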
|
https://en.wikipedia.org/wiki/Cicho%C5%84%27s%20diagram
|
In set theory,
Cichoń's diagram or Cichon's diagram is a table of 10 infinite cardinal numbers related to the set theory of the reals displaying the provable relations between these cardinal characteristics of the continuum. All these cardinals are greater than or equal to $\aleph_1$, the smallest uncountable cardinal, and they are bounded above by $2^{\aleph_0}$, the cardinality of the continuum. Four cardinals describe properties of the ideal of sets of measure zero; four more describe the corresponding properties of the ideal of meager sets (first category sets).
Definitions
Let I be an ideal of a fixed infinite set X, containing all finite subsets of X. We define the following "cardinal coefficients" of I:
The "additivity" of I is the smallest number of sets from I whose union is not in I any more. As any ideal is closed under finite unions, this number is always at least ; if I is a σ-ideal, then add(I) ≥ .
The "covering number" of I is the smallest number of sets from I whose union is all of X. As X itself is not in I, we must have add(I) ≤ cov(I).
The "uniformity number" of I (sometimes also written ) is the size of the smallest set not in I. By our assumption on I, add(I) ≤ non(I).
The "cofinality" of I is the cofinality of the partial order (I, ⊆). It is easy to see that we must have non(I) ≤ cof(I) and cov(I) ≤ cof(I).
Furthermore, the "bounding number" or "unboundedness number" and the "dominating number" are defined as follows:
where "" means: "there are infinitely many natural numbers n such that …", and "" means "for all except finitely many natural numbers n we have …".
Diagram
Let $\mathcal{M}$ be the σ-ideal of those subsets of the real line that are meager (or "of the first category") in the euclidean topology, and let $\mathcal{N}$ be the σ-ideal of those subsets of the real line that are of Lebesgue measure zero. Then the following inequalities hold, where an arrow from $x$ to $y$ means that $x \le y$ (here b and d abbreviate $\mathfrak{b}$ and $\mathfrak{d}$):

cov(N) → non(M) → cof(M) → cof(N)
  ↑        ↑         ↑        ↑
  |        b    →    d        |
  ↑        ↑         ↑        ↑
add(N) → add(M) → cov(M) → non(N)

In addition, the following relations hold: add(M) = min(cov(M), $\mathfrak{b}$) and cof(M) = max(non(M), $\mathfrak{d}$).
It turns out that the inequalit
|
https://en.wikipedia.org/wiki/Conceptus
|
A conceptus (from Latin: concipere, to conceive) is an embryo and its appendages (adnexa), the associated membranes, placenta, and umbilical cord; the products of conception or, more broadly, "the product of conception at any point between fertilization and birth." The conceptus includes all structures that develop from the zygote, both embryonic and extraembryonic. It includes the embryo as well as the embryonic part of the placenta and its associated membranes: amnion, chorion (gestational sac), and yolk sac.
|
https://en.wikipedia.org/wiki/Kenneth%20Kunen
|
Herbert Kenneth Kunen (August 2, 1943August 14, 2020) was a professor of mathematics at the University of Wisconsin–Madison who worked in set theory and its applications to various areas of mathematics, such as set-theoretic topology and measure theory. He also worked on non-associative algebraic systems, such as loops, and used computer software, such as the Otter theorem prover, to derive theorems in these areas.
Personal life
Kunen was born in New York City in 1943 and died in 2020. He lived in Madison, Wisconsin, with his wife Anne, with whom he had two sons, Isaac and Adam.
Education
Kunen completed his undergraduate degree at the California Institute of Technology and received his Ph.D. in 1968 from Stanford University, where he was supervised by Dana Scott.
Career and research
Kunen showed that if there exists a nontrivial elementary embedding j : L → L of the constructible universe, then 0# exists.
He proved the consistency of a normal, $\aleph_2$-saturated ideal on $\omega_1$ from the consistency of the existence of a huge cardinal. He introduced the method of iterated ultrapowers, with which he proved that if $\kappa$ is a measurable cardinal with $2^{\kappa} > \kappa^{+}$ or $\kappa$ is a strongly compact cardinal, then there is an inner model of set theory with $\kappa$ many measurable cardinals. He proved Kunen's inconsistency theorem, showing the impossibility of a nontrivial elementary embedding $j : V \to V$, which had been suggested as a large cardinal assumption (a Reinhardt cardinal).
Away from the area of large cardinals, Kunen is known for intricate forcing and combinatorial constructions. He proved that it is consistent that Martin's axiom first fails at a singular cardinal and constructed, under the continuum hypothesis, a compact L-space supporting a nonseparable measure. He also showed that $P(\omega)/\mathrm{fin}$ has no increasing chain of length $\omega_2$ in the standard Cohen model where the continuum is $\aleph_2$. The concept of a Jech–Kunen tree is named after him and Thomas Jech.
Bibliography
The journal Topology and its Applications has dedicated a spec
|
https://en.wikipedia.org/wiki/History%20of%20research%20ships
|
The research ship had origins in the early voyages of exploration. By the time of James Cook's Endeavour, the essentials of what today we would call a research ship are clearly apparent. In 1766, the Royal Society hired Cook to travel to the Pacific Ocean to observe and record the transit of Venus across the Sun. The Endeavour was a sturdy vessel, well designed and equipped for the ordeals she would face, and fitted out with facilities for her research personnel, including Joseph Banks. And, as is common with contemporary research vessels, Endeavour carried out more than one kind of research, including comprehensive hydrographic survey work.
Some other notable early research vessels were HMS Beagle, RV Calypso, HMS Challenger, and the Endurance and Terra Nova.
The race to the poles
19th century
At the end of the 19th century there was intense international interest in exploring the North and South Poles. The search operations for the lost Franklin expedition had barely been forgotten when Russia, Great Britain, Germany and Sweden set new scientific tasks for the Arctic Ocean. In 1868, the Swedish ship Sofia carried out temperature measurements and oceanographic observations in the sea area around Svalbard. During this year the Greenland, built in Norway, operated in the same area under the German command of Carl Koldewey. In 1868 to 1869, the ship owner A. Rosenthal gave scientists the opportunity to come aboard on his whaling trips, and by 1869 the ship Germania, which was escorted by the Hansa and led the Second German North Polar Expedition, had been built. The Germania returned safely from the expedition and was used later for further research. The Hansa, in contrast, was crushed by the ice and sank. In 1874, the Austro-Hungarian Tegetthoff as well as the American schooner Polaris under the command of Captain Hall met the same fate.
The Royal Navy ships Alert and Discovery of the British Arctic Expedition of 1875-76 were more successful. In 1875 they left Portsmouth in order to
|
https://en.wikipedia.org/wiki/Parking%20sensor
|
Parking sensors are proximity sensors for road vehicles designed to alert the driver of obstacles while parking. These systems use either electromagnetic or ultrasonic sensors.
Ultrasonic systems
These systems feature ultrasonic proximity detectors to measure the distances to nearby objects via sensors located in the front and/or rear bumper fascias or visually minimized within adjacent grills or recesses.
The sensors emit acoustic pulses, with a control unit measuring the return interval of each reflected signal and calculating object distances. The system in turn warns the driver with acoustic tones, the frequency indicating object distance: faster tones indicate closer proximity, and a continuous tone indicates a minimal pre-defined distance. Systems may also include visual aids, such as LED or LCD readouts to indicate object distance. A vehicle may include a pictogram of the car on the infotainment screen, with nearby objects represented as coloured blocks.
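The distance computation itself is a time-of-flight calculation: the pulse travels out and back, so the one-way distance is half the echo delay times the speed of sound. A toy sketch (assumed constant speed of sound; production systems typically also compensate for air temperature, which changes it):

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C (assumption)

def obstacle_distance(echo_delay_s):
    """One-way distance from a round-trip ultrasonic echo delay."""
    return SPEED_OF_SOUND * echo_delay_s / 2

# A 5.8 ms round trip corresponds to roughly one metre.
print(f"{obstacle_distance(0.0058):.2f} m")   # ~0.99 m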
Rear sensors may be activated when reverse gear is selected and deactivated as soon as any other gear is selected. Front sensors may be activated manually and deactivated automatically when the vehicle reaches a pre-determined speed to avoid subsequent nuisance warnings.
As ultrasonic systems rely on the reflection of sound waves, they may not detect flat objects or objects insufficiently large to reflect sound (e.g., a narrow pole or a longitudinal object pointed directly at the vehicle). Objects with flat surfaces angled away from the vertical may deflect return sound waves away from the sensors, hindering detection. Soft objects with strong sound absorption, such as wool or moss, may also produce weaker detections.
Electromagnetic systems
The electromagnetic parking sensor (EPS) was re-invented and patented in 1992 by Mauro Del Signore. Electromagnetic sensors rely on the vehicle moving slowly and smoothly towards the object to be avoided. Once an obstacle is
|
https://en.wikipedia.org/wiki/TRANZ%20330
|
The TRANZ 330 is a popular point-of-sale device manufactured by VeriFone beginning in 1985. The most common application for these units is bank and credit card processing; however, as general-purpose computers, they can perform other novel functions. Other applications include gift/benefit card processing, prepaid phone cards, payroll and employee timekeeping, and even debit and ATM cards. They are programmed in VeriFone's proprietary Terminal Control Language (TCL), which is unrelated to the Tool Command Language used in UNIX environments.
|
https://en.wikipedia.org/wiki/Blame%20It%20on%20the%20Weatherman
|
"Blame It on the Weatherman" is a song by Irish girl group B*Witched, written by Ray "Madman" Hedges, Martin Brannigan, Tracy Ackerman, and Andy Caine. It was released as the fourth single from their self-titled debut studio album on 15 March 1999.
Like the other three singles from the album, "Blame It on the Weatherman" reached number one on the UK Singles Chart. With this, B*Witched became the first act ever to have their first four singles all debut at number one in the UK (a record since beaten by fellow Irish band Westlife) and today remain the only girl group to do so. In Ireland, it reached number eight, while in New Zealand, it became the group's first single to miss the top 10, stalling at number 29. The song was certified silver in the UK with sales of 200,000.
Music video
The music video was directed by Michael Geoghegan. It features B*Witched floating on a large upside-down articulated lorry through the flooded city of London, picking up numerous floating items from the water and also rescuing a puppy. For the video, the band wore a mixture of their trademark denim and leather, designed by Scott Henshall, who then dressed them for their Royal Variety Performance in 1999.
Track listings
UK CD1
"Blame It on the Weatherman" – 3:33
"Together We'll Be Fine" – 3:18
"Blame It on the Weatherman" (orchestral version) – 3:31
UK CD2
"Blame It on the Weatherman" (original) – 3:33
"Blame It on the Weatherman" (Amen Club Mix) – 7:10
"Blame It on the Weatherman" (Chicane vocal edit) – 5:01
UK cassette single
"Blame It on the Weatherman" – 3:33
"Blame It on the Weatherman" (orchestral version) – 3:31
European and Australian CD single
"Blame It on the Weatherman" – 3:33
"Together We'll Be Fine" – 3:18
"Blame It on the Weatherman" (Amen Club Mix) – 7:10
"Blame It on the Weatherman" (Chicane vocal edit) – 5:01
Credits and personnel
Credits are lifted from the B*Witched album booklet.
Studio
Produced in Ray "Madman" Hedges' Mothership
Personnel
Ray "
|
https://en.wikipedia.org/wiki/Pornography
|
Pornography (colloquially known as porn or porno) has been defined as sexual subject material "such as a picture, video, or text" that is intended for sexual arousal. Intended for consumption by adults, pornographic depictions have evolved from cave paintings, some forty millennia ago, to virtual reality presentations. A general distinction is made between adult content classified as pornography and adult content classified as erotica.
The oldest artifacts considered pornographic were discovered in Germany in 2008 and are dated to be at least 35,000 years old. Throughout the history of erotic depictions, various people have attempted to suppress them under obscenity laws, censor them, or make them illegal. Such grounds, and even the definition of pornography, have differed in various historical, cultural, and national contexts. The Indian Sanskrit text Kama Sutra (3rd century CE) contained prose, poetry, and illustrations regarding sexual behavior, and the book was celebrated; while the British English text Fanny Hill (1748), considered "the first original English prose pornography," has been one of the most prosecuted and banned books. In the late 19th century, a film by Thomas Edison that depicted a kiss was denounced as obscene in the United States, whereas Eugène Pirou's 1896 film Bedtime for the Bride was received very favorably in France. Starting from the mid-twentieth century, societal attitudes towards sexuality became more lenient in the Western world, where legal definitions of obscenity were narrowed. In 1969, Blue Movie became the first film depicting unsimulated sex to receive a wide theatrical release in the United States. This was followed by the "Golden Age of Porn" (1969–1984). The introduction of home video and the World Wide Web in the late 20th century led to global growth in the pornography business. Beginning in the 21st century, greater access to the internet and affordable smartphones made pornography more mainstream.
Pornography has been vouched to provision a
|
https://en.wikipedia.org/wiki/Cartan%E2%80%93Kuranishi%20prolongation%20theorem
|
Given an exterior differential system defined on a manifold M, the Cartan–Kuranishi prolongation theorem says that after a finite number of prolongations the system is either in involution (admits at least one 'large' integral manifold), or is impossible.
History
The theorem is named after Élie Cartan and Masatake Kuranishi.
Applications
This theorem is used in infinite-dimensional Lie theory.
See also
Cartan-Kähler theorem
|
https://en.wikipedia.org/wiki/Oxaloacetate%20decarboxylase
|
Oxaloacetate decarboxylase is a carboxy-lyase involved in the conversion of oxaloacetate into pyruvate.
It is categorized under EC 4.1.1.3.
Oxaloacetate decarboxylase activity in a given organism may be due to activity of malic enzyme, pyruvate kinase, malate dehydrogenase, pyruvate carboxylase and PEP carboxykinase or the activity of "real" oxaloacetate decarboxylases. The latter enzymes catalyze the irreversible decarboxylation of oxaloacetate and can be classified into (i) the divalent cation-dependent oxaloacetate decarboxylases and (ii) the membrane-bound sodium-dependent and biotin-containing oxaloacetate decarboxylases from enterobacteria.
Kinetic Properties
An oxaloacetate decarboxylase from the family of divalent cation-dependent decarboxylases was isolated from Corynebacterium glutamicum in 1995 by Jetten et al. This enzyme selectively catalyzed the decarboxylation of oxaloacetate to pyruvate and CO2 with a Km of 2.1 mM, a Vmax of 158 μmol, and a kcat of 311 s⁻¹. Mn2+ was required for enzymatic activity, with a Km of 1.2 mM for Mn2+.
An oxaloacetate decarboxylase found in mitochondria and soluble cytoplasm was isolated and purified from rat liver cells in 1974 by Wojtczak et al. The enzyme was neither activated by divalent cations nor inhibited by chelating agents. The determined Km value was 0.55 mM, and the pH optimum for the enzyme was between 6.5 and 7.5.
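To put these constants in context, the Michaelis–Menten rate law $v = V_{max}[S]/(K_m + [S])$ predicts the fraction of maximal activity at a given substrate concentration. A sketch using the Km reported above (Vmax units treated as arbitrary for illustration):

KM = 2.1      # mM, oxaloacetate (Jetten et al., as cited above)
VMAX = 158.0  # arbitrary activity units for this sketch

def rate(s_mM):
    """Michaelis-Menten initial rate at substrate concentration s (mM)."""
    return VMAX * s_mM / (KM + s_mM)

# At [S] = Km the enzyme runs at half of Vmax.
print(rate(2.1) / VMAX)   # 0.5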
Cytoplasmic Enzymes
Found in different microorganisms such as Pseudomonas, Acetobacter, C. glutamicum, Veillonella parvula, and A. vinelandii, cytoplasmic oxaloacetate decarboxylases are dependent on the presence of divalent metal cations. These enzymes are inhibited by acetyl-CoA and ADP.
Membrane-Bound Enzymes
Membrane-bound oxaloacetate decarboxylase was the first enzyme of the Na+-transport decarboxylase family demonstrated to act as a primary Na+ pump. This enzyme family includes methylmalonyl-CoA decarboxylase, malonate decarboxylase, and glutaconyl-CoA decarboxylase, all of which are found exclusively
|
https://en.wikipedia.org/wiki/Fructosephosphates
|
Fructosephosphates are sugar phosphates based upon fructose, and are common in the biochemistry of cells.
Fructosephosphates play integral roles in many metabolic pathways, particularly glycolysis, gluconeogenesis and the pentose phosphate pathway.
The major biologically active fructosephosphates are:
Fructose 1-phosphate
Fructose 2-phosphate
Fructose 3-phosphate
Fructose 6-phosphate
Fructose 1,6-bisphosphate
Fructose 2,6-bisphosphate
See also
Fructose bisphosphatase
|
https://en.wikipedia.org/wiki/2-Phosphoglyceric%20acid
|
2-Phosphoglyceric acid (2PG), or 2-phosphoglycerate, is a glyceric acid which serves as the substrate in the ninth step of glycolysis, where it is converted by enolase into phosphoenolpyruvate (PEP), the penultimate step in the conversion of glucose to pyruvate.
In glycolysis
Enolase catalyzes the dehydration of 2-phosphoglycerate to phosphoenolpyruvate:
2-phosphoglycerate ⇌ phosphoenolpyruvate + H2O
See also
3-Phosphoglyceric acid
|
https://en.wikipedia.org/wiki/Comet%20assay
|
The single cell gel electrophoresis assay (SCGE, also known as comet assay) is an uncomplicated and sensitive technique for the detection of DNA damage at the level of the individual eukaryotic cell. It was first developed by Östling & Johansson in 1984 and later modified by Singh et al. in 1988. It has since increased in popularity as a standard technique for evaluation of DNA damage/repair, biomonitoring and genotoxicity testing. It involves the encapsulation of cells in a low-melting-point agarose suspension, lysis of the cells in neutral or alkaline (pH>13) conditions, and electrophoresis of the suspended lysed cells. The term "comet" refers to the pattern of DNA migration through the electrophoresis gel, which often resembles a comet.
The comet assay (single-cell gel electrophoresis) is a simple method for measuring deoxyribonucleic acid (DNA) strand breaks in eukaryotic cells. Cells embedded in agarose on a microscope slide are lysed with detergent and high salt to form nucleoids containing supercoiled loops of DNA linked to the nuclear matrix. Electrophoresis at high pH results in structures resembling comets, observed by fluorescence microscopy; the intensity of the comet tail relative to the head reflects the number of DNA breaks. The likely basis for this is that loops containing a break lose their supercoiling and become free to extend toward the anode. Analysis follows by staining the DNA and quantifying its fluorescence to determine the extent of damage; this can be done by manual scoring or automatically with imaging software.
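To illustrate the scoring step, the sketch below computes percent tail DNA and an Olive-tail-moment-style statistic from a one-dimensional intensity profile. The profile, the head/tail boundary index, and the exact metric definitions are illustrative conventions, not the method of any particular imaging package:

```python
import numpy as np

def percent_tail_dna(profile, head_end):
    """Percent of total fluorescence in the tail of a comet.

    profile  -- background-subtracted intensities along the direction of
                electrophoresis (head first, tail last)
    head_end -- index separating head from tail (real software locates
                this boundary from the image itself)
    """
    profile = np.asarray(profile, dtype=float)
    tail = profile[head_end:].sum()
    return 100.0 * tail / profile.sum()

def olive_tail_moment(profile, head_end):
    """One common convention: distance between head and tail intensity
    centroids multiplied by the fraction of DNA in the tail."""
    profile = np.asarray(profile, dtype=float)
    pos = np.arange(len(profile))
    head, tail = profile[:head_end], profile[head_end:]
    centroid_gap = (np.average(pos[head_end:], weights=tail)
                    - np.average(pos[:head_end], weights=head))
    return centroid_gap * percent_tail_dna(profile, head_end) / 100.0

comet = [50, 80, 60, 10, 8, 5, 3, 1]        # toy profile: bright head, faint tail
print(percent_tail_dna(comet, head_end=3))  # ~12.4 % tail DNA
```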
Procedure
Encapsulation
A sample of cells, either derived from an in vitro cell culture or from an in vivo test subject is dispersed into individual cells and suspended in molten low-melting-point agarose at 37 °C. This mono-suspension is cast on a microscope slide. A glass cover slip is held at an angle and the mono-suspension applied to the point of contact between the coverslip and the slide. As the c
|
https://en.wikipedia.org/wiki/Instruction%20step
|
An instruction step is a method of executing a computer program one step at a time to determine how it is functioning. This might be to determine if the correct program flow is being followed in the program during the execution or to see if variables are set to their correct values after a single step has completed.
Hardware instruction step
On earlier computers, a knob on the computer console may have enabled step-by-step execution mode to be selected and execution would then proceed by pressing a "single step" or "single cycle" button. Program status word / Memory or general purpose register read-out could then be accomplished by observing and noting the console lights.
Software instruction step
On later platforms with multiple users, this method was impractical and so single step execution had to be performed using software techniques.
Software techniques
Instrumentation - requiring code to be added during compile or assembly to achieve statement stepping. Code can be added manually to achieve similar results in interpretive languages such as JavaScript.
Instruction set simulation - requiring no code modifications for instruction or statement stepping
In some software products which facilitate debugging of high-level languages (HLLs), it is possible to execute an entire HLL statement at a time. This frequently involves many machine instructions, and execution pauses after the last instruction in the sequence, ready for the next 'instruction' step. This requires integration with the compilation output to determine the scope of each statement.
Full instruction set simulators, however, can provide instruction stepping with or without any source, since they operate at machine code level, optionally providing full trace and debugging information to whatever higher level was available through such integration. In addition, they may also optionally allow stepping through each assembly (machine) instruction generated by an HLL statement.
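As a concrete illustration of statement-level stepping in an interpreted language (the instrumentation approach listed above), the following Python sketch uses the interpreter's tracing hook to pause before each statement, much like a console single-step button; it is illustrative only, not a feature of any product mentioned here:

```python
import sys

def stepper(frame, event, arg):
    # The interpreter calls this for trace events; 'line' events fire
    # once per source statement, giving statement-level stepping.
    if event == "line":
        code = frame.f_code
        print(f"about to run {code.co_filename}:{frame.f_lineno} "
              f"locals={frame.f_locals}")
        input("press Enter to single-step")  # pause, like a console button
    return stepper  # keep tracing inside this frame

def demo():
    total = 0
    for i in range(3):
        total += i
    return total

sys.settrace(stepper)   # enable the trace hook
demo()
sys.settrace(None)      # disable it again
```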
Programs composed of multiple 'modu
|
https://en.wikipedia.org/wiki/Dorsal%20metacarpal%20arteries
|
Most of the dorsal metacarpal arteries arise from the dorsal carpal arch and run downward on the second, third, and fourth dorsal interossei of the hand and bifurcate into the dorsal digital arteries. Near their origin, they anastomose with the deep palmar arch by perforating arteries. They also anastomose with common palmar digital arteries (from the superficial palmar arch), also via perforating arteries.
The first dorsal metacarpal artery arises directly from the radial artery before it crosses through the two heads of the first dorsal interosseous muscle.
|
https://en.wikipedia.org/wiki/List%20of%20silicon%20producers
|
This is a list of silicon producers. The industry involves several very different stages of production. Production starts at silicon metal, which is the material used to obtain high-purity silicon. High-purity silicon in different grades of purity is used for growing silicon ingots, which are sliced into wafers in a process called wafering. Compositionally pure polycrystalline silicon wafers are useful for photovoltaics. Dislocation-free and extremely flat single-crystal silicon wafers are required in the manufacture of computer chips.
Polysilicon producers
Elkem
JFE Steel
Nitol Solar (Russia), bankrupt since 2019
SunEdison
SolarWorld
High-purity silicon
Producers of high-purity silicon, an intermediate in the manufacture of polysilicon
Hemlock Semiconductor Corporation
Renewable Energy Corporation (REC)
SunEdison
Tokuyama Corporation
Wacker Chemie AG
Silicon wafer manufacturers
A partial list of major producers of wafers (made of high purity silicon, mono- or polycrystalline) includes:
GlobalWafers
Okmetic
Renewable Energy Corporation
Shin-Etsu Handotai
Siltronic
SUMCO
WaferPro
Prolog Semicor
See also
List of photovoltaics companies
|
https://en.wikipedia.org/wiki/6LoWPAN
|
6LoWPAN (acronym of "IPv6 over Low-Power Wireless Personal Area Networks") was a working group of the Internet Engineering Task Force (IETF).
It was created with the intention of applying the Internet Protocol (IP) even to the smallest devices, enabling low-power devices with limited processing capabilities to participate in the Internet of Things.
The 6LoWPAN group defined encapsulation, header compression, neighbor discovery and other mechanisms that allow IPv6 to operate over IEEE 802.15.4 based networks. Although IPv4 and IPv6 protocols do not generally care about the physical and MAC layers they operate over, the low power devices and small packet size defined by IEEE 802.15.4 make it desirable to adapt to these layers.
The base specification developed by the 6LoWPAN IETF group is RFC 4944 (updated by RFC 6282 with header compression, RFC 6775 with neighbor discovery optimization, RFC 8931 with selective fragment recovery, and RFC 8025 and RFC 8066 with smaller changes). The problem statement document is RFC 4919. IPv6 over Bluetooth Low Energy using 6LoWPAN techniques is described in RFC 7668.
Application areas
The targets for IPv6 networking for low-power radio communication are devices that need wireless connectivity to many other devices at lower data rates, with very limited power consumption. One real-world example is Tado°'s individual room heating controllers. The header compression mechanisms defined in RFC 6282 are used to allow IPv6 packets to travel over such networks.
IPv6 is also in use on the smart grid enabling smart meters and other devices to build a micro mesh network before sending the data back to the billing system using the IPv6 backbone. Some of these networks run over IEEE 802.15.4 radios, and therefore use the header compression and fragmentation as specified by RFC 6282.
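One reason the compression works so well is that a node's IPv6 interface identifier can be derived directly from its IEEE 802.15.4 EUI-64 link-layer address (by flipping the universal/local bit), so a compressed header can elide the address entirely. A minimal sketch of that derivation, with a hypothetical EUI-64:

```python
def iid_from_eui64(eui64: bytes) -> bytes:
    """Derive an IPv6 interface identifier from an IEEE EUI-64 by
    inverting the universal/local bit of the first byte."""
    assert len(eui64) == 8
    iid = bytearray(eui64)
    iid[0] ^= 0x02  # flip the U/L bit
    return bytes(iid)

def link_local_from_eui64(eui64: bytes) -> str:
    """Textual fe80::/64 link-local address using the derived IID."""
    iid = iid_from_eui64(eui64)
    groups = [(iid[i] << 8) | iid[i + 1] for i in range(0, 8, 2)]
    return "fe80::" + ":".join(f"{g:x}" for g in groups)

# Hypothetical EUI-64 of an 802.15.4 node
print(link_local_from_eui64(bytes.fromhex("00124b0001020304")))
# -> fe80::212:4b00:102:304
```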
Thread
Thread is a standard from a group of more than fifty companies for a protocol running over 6LoWPAN to enable home automation. The specification is available at no cost , but paid membership is required to implement the p
|
https://en.wikipedia.org/wiki/Blood%20type%20%28non-human%29
|
Animal erythrocytes have cell surface antigens that undergo polymorphism and give rise to blood types. Antigens from the human ABO blood group system are also found in apes and Old World monkeys, and the types trace back to the origin of hominoids. Other animal blood sometimes agglutinates (to varying levels of intensity) with human blood group reagents, but the structure of the blood group antigens in animals is not always identical to those typically found in humans. The classification of most animal blood groups therefore uses different blood typing systems to those used for classification of human blood.
Simian blood groups
Two categories of blood groups, human-type and simian-type, have been found in apes and monkeys, and they can be tested by methods established for grouping human blood. Data is available on blood groups of common chimpanzees, baboons, and macaques.
Rh blood group
The Rh system is named after the rhesus monkey, following experiments by Karl Landsteiner and Alexander S. Wiener, which showed that rabbits, when immunised with rhesus monkey red cells, produce an antibody that also agglutinates the red blood cells of many humans.
Chimpanzee and Old World monkey blood group systems
Two complex chimpanzee blood group systems, V-A-B-D and R-C-E-F systems, proved to be counterparts of the human MNS and Rh blood group systems, respectively.
Two blood group systems have been defined in Old World monkeys: the Drh system of macaques and the Bp system of baboons, both linked by at least one species shared by either of the blood group systems.
Canine blood groups
Over 13 canine blood groups have been described. Eight DEA (dog erythrocyte antigen) types are recognized as international standards. Of these DEA types, DEA 4 and DEA 6 appear on the red blood cells of ~98% of dogs. Dogs with only DEA 4 or DEA 6 can thus serve as blood donors for the majority of the canine population. Any of these DEA types may stimulate an immune response in a recipient of
|
https://en.wikipedia.org/wiki/Maze%20runner
|
In electronic design automation, maze runner is a connection routing method that represents the entire routing space as a grid. Parts of this grid are blocked by components, specialised areas, or already present wiring. The grid size corresponds to the wiring pitch of the area. The goal is to find a chain of grid cells that go from point A to point B.
A maze runner may use the Lee algorithm. It uses wave propagation (the n-th wave consists of all cells that can be reached in n steps) throughout the routing space. The wave stops when the target is reached, and the path is determined by backtracking through cells of decreasing wave number.
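A minimal sketch of Lee-style wave propagation and backtracking on a boolean grid (the grid layout, coordinates, and API are illustrative):

```python
from collections import deque

def lee_route(grid, start, goal):
    """Lee-style maze routing: BFS wave expansion from start, then
    backtrack from goal. grid[y][x] is True where a cell is blocked.
    Returns the list of cells from start to goal, or None."""
    h, w = len(grid), len(grid[0])
    dist = [[None] * w for _ in range(h)]   # wave number per cell
    dist[start[1]][start[0]] = 0
    wave = deque([start])
    while wave:                             # expand the wave cell by cell
        x, y = wave.popleft()
        if (x, y) == goal:
            break
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and not grid[ny][nx] \
               and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                wave.append((nx, ny))
    if dist[goal[1]][goal[0]] is None:
        return None                         # target never reached
    # Backtrack: from the goal, repeatedly step to a neighbour whose
    # wave number is one lower, until the start is reached.
    path, (x, y) = [goal], goal
    while (x, y) != start:
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and dist[ny][nx] == dist[y][x] - 1:
                x, y = nx, ny
                break
        path.append((x, y))
    return path[::-1]

blocked = [[False, True, False],
           [False, True, False],
           [False, False, False]]
print(lee_route(blocked, start=(0, 0), goal=(2, 0)))
```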
See also
Autorouter
|
https://en.wikipedia.org/wiki/Steane%20code
|
The Steane code is a tool in quantum error correction introduced by Andrew Steane in 1996. It is a CSS code (Calderbank-Shor-Steane), using the classical binary [7,4,3] Hamming code to correct for qubit flip errors (X errors) and the dual of the Hamming code, the [7,3,4] code, to correct for phase flip errors (Z errors). The Steane code encodes one logical qubit in 7 physical qubits and is able to correct arbitrary single qubit errors.
Its check matrix in standard form is
\[ \begin{pmatrix} H & 0 \\ 0 & H \end{pmatrix} \]
where H is the parity-check matrix of the Hamming code and is given by
\[ H = \begin{pmatrix} 1 & 0 & 1 & 0 & 1 & 0 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 & 1 & 1 & 1 \end{pmatrix} \]
The Steane code is the first in the family of quantum Hamming codes, codes with parameters [[2^r − 1, 2^r − 1 − 2r, 3]] for integers r ≥ 3. It is also a quantum color code.
Expression in the stabilizer formalism
In a quantum error-correcting code, the codespace is the subspace of the overall Hilbert space where all logical states live. In an n-qubit stabilizer code, we can describe this subspace by its Pauli stabilizing group, the set of all n-qubit Pauli operators which stabilize every logical state. The stabilizer formalism allows us to define the codespace of a stabilizer code by specifying its Pauli stabilizing group. We can efficiently describe this exponentially large group by listing its generators.
Since the Steane code encodes one logical qubit in 7 physical qubits, the codespace for the Steane code is a 2-dimensional subspace of its 2^7-dimensional Hilbert space.
In the stabilizer formalism, the Steane code has 6 generators:
IIIXXXX, IXXIIXX, XIXIXIX, IIIZZZZ, IZZIIZZ, ZIZIZIZ.
Note that each of the above generators is the tensor product of 7 single-qubit Pauli operations. For instance, IIIXXXX is just shorthand for I⊗I⊗I⊗X⊗X⊗X⊗X, that is, an identity on the first three qubits and an X gate on each of the last four qubits. The tensor products are often omitted in notation for brevity.
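A quick sanity check on these generators — any stabilizer group must be abelian — can be scripted with the shorthand notation above. The sketch below only tests pairwise commutation; it is not a full stabilizer-circuit simulation:

```python
from itertools import combinations

# The six stabilizer generators of the Steane code as Pauli strings
generators = ["IIIXXXX", "IXXIIXX", "XIXIXIX",
              "IIIZZZZ", "IZZIIZZ", "ZIZIZIZ"]

def commute(p, q):
    """Two n-qubit Pauli strings commute iff they anticommute on an
    even number of qubit positions (both non-identity and different)."""
    anti = sum(1 for a, b in zip(p, q)
               if a != "I" and b != "I" and a != b)
    return anti % 2 == 0

# A valid stabilizer group requires all generators to commute pairwise
assert all(commute(p, q) for p, q in combinations(generators, 2))
print("all 6 generators commute -> they generate an abelian stabilizer group")
```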
The logical X and Z gates are
X_L = X⊗X⊗X⊗X⊗X⊗X⊗X and Z_L = Z⊗Z⊗Z⊗Z⊗Z⊗Z⊗Z,
i.e., transversal X and Z applied to all seven qubits.
The logical |0⟩_L and |1⟩_L states of the Steane code are
|0⟩_L = (1/√8)(|0000000⟩ + |1010101⟩ + |0110011⟩ + |1100110⟩ + |0001111⟩ + |1011010⟩ + |0111100⟩ + |1101001⟩)
|1⟩_L = X_L|0⟩_L, the same superposition with every bit flipped.
Arbitrary codestates are of the form α|0⟩_L + β|1⟩_L.
|
https://en.wikipedia.org/wiki/International%20Congress%20of%20Human%20Genetics
|
The International Congress of Human Genetics is the foremost meeting of the international human genetics community. The first Congress was held in 1956 in Copenhagen, and has met every five years since then with the exception of the 2021 meeting which was postponed for two years because of the global COVID-19 pandemic. The Congress is held under the auspices of the International Federation of Human Genetics Societies, an umbrella organization founded by the American Society of Human Genetics, the European Society of Human Genetics and the Human Genetics Society of Australasia. Congresses have been held in such diverse venues as Berlin, Brisbane, Chicago, The Hague, Jerusalem, Mexico City, Paris, Rio de Janeiro, Vienna and Washington.
The purview of the International Congress of Human Genetics is all aspects of human genetics, including research, clinical practice, and education. The Congress now attracts thousands of participants, including M.D. medical geneticists, Ph.D. human geneticists and genetic counselors from 80 or more countries. It is by far the largest human genetics meeting in the world.
External links
History of the International Congress of Human Genetics
|
https://en.wikipedia.org/wiki/Active%20appearance%20model
|
An active appearance model (AAM) is a computer vision algorithm for matching a statistical model of object shape and appearance to a new image. They are built during a training phase. A set of images, together with coordinates of landmarks that appear in all of the images, is provided to the training supervisor.
The model was first introduced by Edwards, Cootes and Taylor in the context of face analysis at the 3rd International Conference on Face and Gesture Recognition, 1998. Cootes, Edwards and Taylor further described the approach as a general method in computer vision at the European Conference on Computer Vision in the same year. The approach is widely used for matching and tracking faces and for medical image interpretation.
The algorithm uses the difference between the current estimate of appearance and the target image to drive an optimization process.
By taking advantage of least squares techniques, it can match to new images very swiftly.
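The residual-driven, least-squares update can be sketched on a toy linear model: perturb known parameters, record the resulting appearance residuals, fit a linear map from residual to parameter correction, and iterate that correction when matching. Everything below (the model, its dimensions, the noise scale) is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "appearance" model: image samples are a linear function of a
# parameter vector p (a stand-in for shape/texture parameters).
A = rng.normal(size=(50, 3))          # model Jacobian (unknown to the fitter)
def render(p):
    return A @ p

p_true = np.array([0.5, -1.0, 2.0])
target = render(p_true)

# Training phase: perturb parameters, record appearance residuals, and
# learn a linear map R from residual to parameter correction (least squares).
perturbs = rng.normal(scale=0.1, size=(200, 3))
residuals = np.stack([render(p_true + dp) - target for dp in perturbs])
R, *_ = np.linalg.lstsq(residuals, perturbs, rcond=None)

# Matching phase: iteratively correct an initial guess using the residual.
p = np.zeros(3)
for _ in range(10):
    r = render(p) - target            # difference: current estimate vs image
    p = p - r @ R                     # predicted parameter update
print(np.round(p, 3))                 # converges to p_true = [0.5, -1, 2]
```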
It is related to the active shape model (ASM). One disadvantage of ASM is that it only uses shape constraints (together with some information about the image structure near the landmarks), and does not take advantage of all the available information – the texture across the target object. This can be modelled using an AAM.
|
https://en.wikipedia.org/wiki/Oleanane
|
Oleanane is a natural triterpenoid. It is commonly found in woody angiosperms and as a result is often used as an indicator of these plants in the fossil record. It is a member of the oleanoid series, which consists of pentacyclic triterpenoids (such as beta-amyrin and taraxerol) where all rings are six-membered.
Structure
Oleanane is a pentacyclic triterpenoid, a class of molecules made up of six connected isoprene units. The naming of both the ring structures and individual carbon atoms in oleanane is the same as in steroids. As such, it consists of an A, B, C, D, and E ring, all of which are six-membered rings.
The structure of oleanane contains a number of methyl groups that vary in orientation between different oleananes. For example, 18-alpha-oleanane contains a downward-facing methyl group at the 18th carbon atom, while 18-beta-oleanane contains an upward-facing methyl group at the same position.
The A and B rings of the oleanane structure are identical to those of hopane. As a result, both molecules produce a fragment of m/z 191. Because this fragment is often used to identify hopanes, oleanane can be misidentified in hopane analysis.
Synthesis
Like other triterpenoids, oleananes are formed from six combined isoprene units. These isoprene units can be combined via a number of different pathways. In eukaryotes (including plants), this pathway is the mevalonate (MVA) pathway. For the formation of steroids and other triterpenoids, the isoprenoids are combined into a precursor known as squalene, which then undergoes enzymatic cyclization to produce the various different triterpenoids, including oleanane.
Once the oleananes have been transported into rocks or sediments they will undergo further alteration before they are measured.
Measurement in Rock Samples
Oleananes can be identified in extracts from rock samples (or plants) using GC/MS. A GC/MS is a gas chromatograph coupled with a mass spectrometer. The sample is first injected into the system, then ru
|
https://en.wikipedia.org/wiki/Academic%20Dictionary%20of%20Lithuanian
|
The Academic Dictionary of Lithuanian (Lietuvių kalbos žodynas or LKŽ) is a comprehensive thesaurus of the Lithuanian language and one of the most extensive lexicographical works in the world. The 20 volumes encompassing 22,000 pages were published between 1941 and 2002 by the Institute of the Lithuanian Language. Online and CD versions were made available in 2005. It contains about 236,000 headwords, or 500,000 if counting sub-headwords, reflecting modern and historical language recorded both from texts published between the first Lithuanian book in 1547 and 2001, and from the vernacular. Definitions, usage notes, and examples are given for most words. The entry length varies from one sentence to almost a hundred pages. For example, 46 pages are devoted to 298 different meanings of taisyti (to fix) and its derivatives.
History
Lithuanian philologist Kazimieras Būga started collecting material for a dictionary in 1902. When he returned from Russia to Lithuania in 1920, he started writing a dictionary that would contain all known Lithuanian words as well as hydronyms, toponyms, and surnames. However, he died in 1924 having published only two fascicules with a lengthy introduction and the dictionary up to the word anga. Būga attempted to write down everything that was known to science about each word, including etymology and history. He was critical of his own efforts realizing that the dictionary was not comprehensive or consistent, and considered the publication to be only a "draft" of a better dictionary in the future.
Būga collected about 600,000 index cards with words, but Juozas Balčikonis, who was selected by the Ministry of Education to continue the work on the dictionary in 1930, realized that more data was needed and organized a campaign to collect additional words from literary works as well as the spoken language. The focus was on older texts, mostly ignoring contemporary literature and periodicals. Balčikonis asked the Lithuanian public (teachers, students, etc.) to record words fro
|
https://en.wikipedia.org/wiki/R%C3%A9union%20swamphen
|
The Réunion swamphen (Porphyrio caerulescens), also known as the Réunion gallinule or oiseau bleu (French for "blue bird"), is a hypothetical extinct species of rail that was endemic to the Mascarene island of Réunion. While only known from 17th- and 18th-century accounts by visitors to the island, it was scientifically named in 1848, based on the 1674 account by Sieur Dubois. A considerable literature was subsequently devoted to its possible affinities, with current researchers agreeing it was derived from the swamphen genus Porphyrio. It has been considered mysterious and enigmatic due to the lack of any physical evidence of its existence.
This bird was described as entirely blue in plumage with a red beak and legs. It was said to be the size of a Réunion ibis or chicken, and it may have been similar to the takahē. While easily hunted, it was a fast runner and able to fly, though it did so reluctantly. It may have fed on plant matter and invertebrates, as do other swamphens, and was said to nest among grasses and aquatic ferns. It was only found on the Plaine des Cafres plateau, to which it may have retreated during the latter part of its existence, whereas other swamphens inhabit lowland swamps. While the last unequivocal account is from 1730, it may have survived until 1763, but overhunting and the introduction of cats likely drove it to extinction.
Taxonomy
Visitors to the Mascarene island of Réunion during the 17th and 18th centuries reported blue birds ( in French). The first such account is that of the French traveller Sieur Dubois, who was on Réunion from 1669 to 1672, which was published in 1674. The British naturalist Hugh Edwin Strickland stated in 1848 that he would have thought Dubois' account referred to a member of the swamphen genus Porphyrio if not for its large size and other features (and noted the term had also been erroneously used for bats on Réunion in an old account). Strickland expressed hope that remains of this and ot
|
https://en.wikipedia.org/wiki/ORiNOCO
|
ORiNOCO was the brand name for a family of wireless networking technology by Proxim Wireless (previously Lucent). These integrated circuits (codenamed Hermes) provide wireless connectivity for 802.11-compliant Wireless LANs.
Variants
Lucent offered several variants of the PC Card, referred to by different color-based monikers:
White/Bronze: WaveLAN IEEE Standard 2 Mbit/s PC Cards with 802.11 support.
Silver: WaveLAN IEEE Turbo 11 Mbit/s PC Cards with 802.11b and 64-bit WEP support.
Gold: WaveLAN IEEE Turbo 11 Mbit/s PC Cards with 802.11b and 128-bit WEP support.
Later models dropped the 'Turbo' moniker due to 802.11b 11 Mbit/s becoming widespread.
Proxim, after taking over Lucent's wireless division, rebranded all their wireless cards to ORiNOCO - even cards not based on Lucent/Agere's Hermes chipset. Proxim still offers ORiNOCO-based cards under the 'Classic' brand.
Rebranded products
The WaveLAN chipsets that power ORiNOCO-branded cards were commonly used to power other wireless networking devices, and are compatible with a number of other access points, routers and wireless cards. The following brand and models utilise the chipset, or are rebrands of an ORiNOCO product:
3Com AirConnect
Apple AirPort and AirMac cards (original only, not AirPort Extreme). Modified to remove the antenna stub.
AVAYA World Card
Cabletron RoamAbout 802.11 DS
Compaq WL100 11 Mbit/s Wireless Adapter
D-Link DWL-650
ELSA AirLancer MC-11
Enterasys RoamAbout
Ericsson WLAN Card C11
Farallon SkyLINE
Fujitsu RoomWave
HyperLink Wireless PC Card 11Mbit/s
Intel PRO/Wireless 2011
Lucent Technologies WaveLAN/IEEE Orinoco
Melco WLI-PCM-L11
Microsoft Wireless Notebook Adapter MN-520
NCR WaveLAN/IEEE Adapter
Proxim LAN PC CARD HARMONY 80211B
Samsung 11Mbit/s WLAN Card
Symbol LA4111 Spectrum24 Wireless LAN PC Card
Toshiba Wireless LAN Mini PCI Card
Preferred wireless chipset for wardriving
ORiNOCO cards (and their derivatives) are preferred by wardrivers, due to their high
|
https://en.wikipedia.org/wiki/WaveLAN
|
WaveLAN was a brand name for a family of wireless networking technology sold by NCR, AT&T, Lucent Technologies, and Agere Systems as well as being sold by other companies under OEM agreements. The WaveLAN name debuted on the market in 1990 and was in use until 2000, when Agere Systems renamed their products to ORiNOCO. WaveLAN laid the important foundation for the formation of IEEE 802.11 working group and the resultant creation of Wi-Fi.
WaveLAN has been used on two different families of wireless technology:
Pre-IEEE 802.11 WaveLAN, also called Classic WaveLAN
IEEE 802.11-compliant WaveLAN, also known as WaveLAN IEEE and ORiNOCO
History
WaveLAN was originally designed by NCR Systems Engineering, later renamed WCND (Wireless Communication and Networking Division), at Nieuwegein in the province of Utrecht in the Netherlands, a subsidiary of NCR Corporation, in 1986–7, and introduced to the market in 1990 as a wireless alternative to Ethernet and Token Ring. The next year NCR contributed the WaveLAN design to the IEEE 802 LAN/MAN Standards Committee. This led to the founding of the 802.11 Wireless LAN Working Committee which produced the original IEEE 802.11 standard, which eventually became the basis of the certification mark Wi-Fi. When NCR was acquired by AT&T in 1991, becoming the AT&T GIS (Global Information Solutions) business unit, the product name was retained, as happened two years later when the product was transferred to the AT&T GBCS (Global Business Communications Systems) business unit, and again when AT&T spun off their GBCS business unit as Lucent in 1995. The technology was also sold as WaveLAN under an OEM agreement by Epson, Hitachi, and NEC, and as the RoamAbout DS by DEC. It competed directly with Aironet's non-802.11 ARLAN lineup, which offered similar speeds, frequency ranges and hardware.
Several companies also marketed wireless bridges and routers based on the WaveLAN ISA and PC cards, like the C-Spec OverLAN, KarlNet KarlBridge, Perso
|
https://en.wikipedia.org/wiki/Paley%20graph
|
In mathematics, Paley graphs are dense undirected graphs constructed from the members of a suitable finite field by connecting pairs of elements that differ by a quadratic residue. The Paley graphs form an infinite family of conference graphs, which yield an infinite family of symmetric conference matrices. Paley graphs allow graph-theoretic tools to be applied to the number theory of quadratic residues, and have interesting properties that make them useful in graph theory more generally.
Paley graphs are named after Raymond Paley. They are closely related to the Paley construction for constructing Hadamard matrices from quadratic residues .
They were introduced as graphs independently by and . Sachs was interested in them for their self-complementarity properties, while Erdős and Rényi studied their symmetries.
Paley digraphs are directed analogs of Paley graphs that yield antisymmetric conference matrices. They were introduced by (independently of Sachs, Erdős, and Rényi) as a way of constructing tournaments with a property previously known to be held only by random tournaments: in a Paley digraph, every small subset of vertices is dominated by some other vertex.
Definition
Let q be a prime power such that q ≡ 1 (mod 4). That is, q should either be an arbitrary power of a Pythagorean prime (a prime congruent to 1 mod 4) or an even power of an odd non-Pythagorean prime. This choice of q implies that in the unique finite field Fq of order q, the element −1 has a square root.
Now let V = Fq and let
E = { {a,b} : a, b ∈ Fq, a ≠ b, and a − b is a nonzero square in Fq },
that is, pairs of distinct elements whose difference is a quadratic residue.
If a pair {a,b} is included in E, it is included under either ordering of its two elements. For, a − b = −(b − a), and −1 is a square, from which it follows that a − b is a square if and only if b − a is a square.
By definition G = (V, E) is the Paley graph of order q.
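For prime q, the construction can be written directly with modular arithmetic (prime powers would require full finite-field arithmetic, which is elided here); a short sketch:

```python
def paley_graph(q):
    """Adjacency sets of the Paley graph of prime order q = 1 (mod 4).
    grid entry g[a] is the set of neighbours of vertex a."""
    assert q % 4 == 1
    squares = {(x * x) % q for x in range(1, q)}   # nonzero quadratic residues
    return {a: {b for b in range(q) if b != a and (a - b) % q in squares}
            for a in range(q)}

g = paley_graph(13)
print(sorted(g[0]))   # neighbours of 0 are exactly the squares mod 13
# -> [1, 3, 4, 9, 10, 12]
```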
Example
For q = 13, the field Fq is just integer arithmetic modulo 13. The numbers with square roots mod 13 are:
±1 (square roots ±1 for +1, ±5 for −1)
±3 (square roots ±4 for +3, ±6 for −
|
https://en.wikipedia.org/wiki/Lithuanian%20dictionaries
|
Dictionaries of the Lithuanian language have been printed since the first half of the 17th century.
History
The first Lithuanian language dictionary was compiled by Konstantinas Sirvydas and printed in 1629 as a trilingual (Polish–Latin–Lithuanian) dictionary. Five editions of it were printed until 1713, but it was used and copied by other lexicographers until the 19th century.
The first German–Lithuanian–German dictionary, addressing the needs of Lithuania Minor, was published by Friedrich W. Haack in 1730. A better German–Lithuanian–German dictionary, with a sketch of grammar and history of the language, more words, and systematic orthography, was published by Philipp Ruhig in 1747. In 1800, Christian Gottlieb Mielcke printed an expanded and revised version of Ruhig's dictionary. Its foreword was the last work of Immanuel Kant printed during his life.
There also existed a number of notable unpublished dictionaries.
At the beginning of the 19th century linguists recognized the conservative character of Lithuanian, and it came into the focus of the comparative linguistics of Indo-European languages. To address the needs of linguists, Georg H. F. Nesselmann published a Lithuanian–German dictionary in 1851.
The culmination of Lithuanian linguists' efforts is the 20-volume Academic Dictionary of Lithuanian.
|
https://en.wikipedia.org/wiki/Wood%20Green%20ricin%20plot
|
The Wood Green ricin plot was an alleged bioterrorism plot to attack the London Underground with ricin poison. The Metropolitan Police Service arrested six suspects on 5 January 2003, with one more arrested two days later.
Within two days, the Biological Weapon Identification Group at the Defence Science and Technology Laboratory in Porton Down were sure that there was no trace of ricin on any of the articles that were found. This fact was initially misreported to other government departments as well as to the public, who only became aware of this in 2005. Reporting restrictions were in place before the public's perceptions could be corrected.
The only subsequent conviction was of Kamel Bourgass, sentenced to 17 years imprisonment for conspiring "together with other persons unknown to commit public nuisance by the use of poisons and/or explosives to cause disruption, fear or injury" on the basis of five pages of his hand-written notes on how to make ricin, cyanide and botulinum. Bourgass had already been sentenced to life imprisonment for the murder of detective Stephen Oake, whom he stabbed to death during his arrest in Manchester. Bourgass also stabbed three other police officers in that incident; they all survived. All other suspects were either released without charge, acquitted, or had their trials abandoned. Bourgass had attended meetings of Al-Muhajiroun leading up to the plot.
Public reaction
Prime Minister Tony Blair referred to the case in a speech shortly after the arrests, in breach of the principle of sub judice which prevents politicians prejudicing impending court cases.
Physicians throughout the United Kingdom were warned to watch for signs that patients had been poisoned by ricin, and the public health director for London urged the public not to be alarmed following some media reports. Britain's largest circulation tabloid newspaper, The Sun, reported the discovery of a "factory of death", and other newspapers warned on their front pages "250,0
|
https://en.wikipedia.org/wiki/Joint%20application%20design
|
Joint application design (JAD) is a process used in the life cycle area of the dynamic systems development method (DSDM) to collect business requirements while developing new information systems for a company. "The JAD process also includes approaches for enhancing user participation, expediting development, and improving the quality of specifications." It consists of a workshop where "knowledge workers and IT specialists meet, sometimes for several days, to define and review the business requirements for the system." The attendees include high level management officials who will ensure the product provides the needed reports and information at the end. This acts as "a management process which allows Corporate Information Services (IS) departments to work more effectively with users in a shorter time frame".
Through JAD workshops, the knowledge workers and IT specialists are able to resolve any difficulties or differences between the two parties regarding the new information system. The workshop follows an agenda in order to guarantee that all uncertainties between parties are covered and to help prevent any miscommunications. Miscommunications can create repercussions if not addressed until later on in the process. (See below for Key Participants and Key Steps to an Effective JAD). In the end, this process will result in a new information system that is feasible and appealing to both the designers and end users.
"Although the JAD design is widely acclaimed, little is actually known about its effectiveness in practice." According to the Journal of Systems and Software, a field study was done at three organizations using JAD practices to determine how JAD influenced system development outcomes. The results of the study suggest that organizations realized modest improvement in systems development outcomes by using the JAD method. JAD use was most effective in small, clearly focused projects and less effective in large, complex projects. Since 2010, the Int
|
https://en.wikipedia.org/wiki/Sierra%20Wireless
|
Sierra Wireless (a subsidiary of Semtech Corporation) is a Canadian multinational wireless communications equipment designer, manufacturer and services provider headquartered in Richmond, British Columbia, Canada. It also maintains offices and operations in the United States, Korea, Japan, Taiwan, India, France, Australia and New Zealand.
The company sells mobile computing and machine-to-machine (M2M) communications products that work over cellular networks, 2G, 3G, 4G and 5G mobile broadband wireless routers and gateways, modules, as well as software, tools, and services.
Sierra Wireless products and technologies are used in a variety of markets and industries, including automotive and transportation, energy, field service, healthcare, industrial and infrastructure, mobile computing and consumers, networking, sales and payment, and security. It also maintains a network of experts in mobile broadband and M2M integration to support customers worldwide.
The company's products are sold directly to original equipment manufacturers (OEMs), as well as indirectly through distributors and resellers.
History
Sierra Wireless was founded in 1993 in Vancouver, Canada. In August 2003, it completed an acquisition of privately held, high-speed CDMA wireless modules supplier, AirPrime, issuing approximately 3.7 million shares to AirPrime shareholders. On March 6, 2007, the company announced its purchase of Hayward, California-based AirLink Communications, a privately held developer of high-value fixed, portable, and mobile wireless data solutions. Prior to the May 2007 completion of its sale to Sierra Wireless for a total of $27 million in cash and stock, AirLink had reported $24.8 million in revenues and gross margin of 44 percent.
In August 2008, Sierra Wireless purchased the assets of Junxion, a Seattle based producer of Linux-based mobile wireless access points and network routers for enterprise and government customers.
In December 2008, Sierra Wireless made a friendl
|
https://en.wikipedia.org/wiki/Journal%20of%20Machine%20Learning%20Research
|
The Journal of Machine Learning Research is a peer-reviewed open access scientific journal covering machine learning. It was established in 2000 and the first editor-in-chief was Leslie Kaelbling. The current editors-in-chief are Francis Bach (Inria) and David Blei (Columbia University).
History
The journal was established as an open-access alternative to the journal Machine Learning. In 2001, forty editorial board members of Machine Learning resigned, saying that in the era of the Internet, it was detrimental for researchers to continue publishing their papers in expensive journals with pay-access archives. The open access model employed by the Journal of Machine Learning Research allows authors to publish articles for free and retain copyright, while archives are freely available online.
Print editions of the journal were published by MIT Press until 2004 and by Microtome Publishing thereafter. From its inception, the journal received no revenue from the print edition and paid no subvention to MIT Press or Microtome Publishing.
In response to the prohibitive costs of arranging workshop and conference proceedings publication with traditional academic publishing companies, the journal launched a proceedings publication arm in 2007 and now publishes proceedings for several leading machine learning conferences, including the International Conference on Machine Learning, COLT, AISTATS, and workshops held at the Conference on Neural Information Processing Systems.
|
https://en.wikipedia.org/wiki/Machine%20Learning%20%28journal%29
|
Machine Learning is a peer-reviewed scientific journal, published since 1986.
In 2001, forty editors and members of the editorial board of Machine Learning resigned in order to support the Journal of Machine Learning Research (JMLR), saying that in the era of the internet, it was detrimental for researchers to continue publishing their papers in expensive journals with pay-access archives. Instead, they wrote, they supported the model of JMLR, in which authors retained copyright over their papers and archives were freely available on the internet.
Following the mass resignation, Kluwer changed their publishing policy to allow authors to self-archive their papers online after peer-review.
|
https://en.wikipedia.org/wiki/Nicolas%20Appert%20Award
|
The Nicolas Appert Award is awarded by the Chicago Section of the Institute of Food Technologists for preeminence in and contributions to the field of food technology. The award has been given annually since 1942 and is named after Nicolas Appert, the French inventor of airtight food preservation. Award winners receive a bronze medal with a front view of Appert and a $5000 honorarium. This is considered one of the highest honors in food technology.
|
https://en.wikipedia.org/wiki/Godunov%27s%20scheme
|
In numerical analysis and computational fluid dynamics, Godunov's scheme is a conservative numerical scheme, suggested by Sergei Godunov in 1959, for solving partial differential equations. One can think of this method as a conservative finite volume method which solves exact, or approximate Riemann problems at each inter-cell boundary. In its basic form, Godunov's method is first order accurate in both space and time, yet can be used as a base scheme for developing higher-order methods.
Basic scheme
Following the classical finite volume method framework, we seek to track a finite set of discrete unknowns,
\[ Q_i^n \approx \frac{1}{\Delta x} \int_{x_{i-1/2}}^{x_{i+1/2}} q(x, t^n)\,dx, \]
where the cell positions \( x_i \) and time levels \( t^n = n\,\Delta t \) form a discrete set of points for the hyperbolic problem:
\[ q_t + f(q)_x = 0, \]
where the indices t and x indicate the derivatives in time and space, respectively. If we integrate the hyperbolic problem over a control volume \( [x_{i-1/2}, x_{i+1/2}] \) we obtain a method of lines (MOL) formulation for the spatial cell averages:
\[ \frac{d}{dt} Q_i(t) = -\frac{1}{\Delta x} \Big( f\big(q(x_{i+1/2}, t)\big) - f\big(q(x_{i-1/2}, t)\big) \Big), \]
which is a classical description of the first order, upwinded finite volume method.
Exact time integration of the above formula from time \( t^n \) to time \( t^{n+1} \) yields the exact update formula:
\[ Q_i^{n+1} = Q_i^n - \frac{1}{\Delta x} \int_{t^n}^{t^{n+1}} \Big( f\big(q(x_{i+1/2}, t)\big) - f\big(q(x_{i-1/2}, t)\big) \Big)\, dt. \]
Godunov's method replaces the time integral of each
\[ \int_{t^n}^{t^{n+1}} f\big(q(x_{i-1/2}, t)\big)\, dt \]
with a forward Euler method which yields a fully discrete update formula for each of the unknowns \( Q_i^n \). That is, we approximate the integrals with
\[ \int_{t^n}^{t^{n+1}} f\big(q(x_{i-1/2}, t)\big)\, dt \approx \Delta t\, \hat f\big(Q_{i-1}^n, Q_i^n\big), \]
where \( \hat f(q_l, q_r) \) is an approximation to the exact solution of the Riemann problem. For consistency, one assumes that
\[ \hat f(q, q) = f(q), \]
and that \( \hat f \) is increasing in the first argument, and decreasing in the second argument. For scalar problems where \( f'(q) > 0 \), one can use the simple upwind scheme, which defines \( \hat f(q_l, q_r) = f(q_l) \).
The full Godunov scheme requires the definition of an approximate, or an exact Riemann solver, but in its most basic form, is given by:
\[ Q_i^{n+1} = Q_i^n - \lambda \big( \hat f_{i+1/2}^n - \hat f_{i-1/2}^n \big), \qquad \lambda = \frac{\Delta t}{\Delta x}, \qquad \hat f_{i-1/2}^n = \hat f\big(Q_{i-1}^n, Q_i^n\big). \]
Linear problem
In the case of a linear problem, where \( f(q) = a q \), and without loss of generality, we'll assume that \( a > 0 \), the upwinded Godunov method yields:
\[ Q_i^{n+1} = Q_i^n - \nu \big( Q_i^n - Q_{i-1}^n \big), \qquad \nu = \frac{a\,\Delta t}{\Delta x}, \]
which is the classical first-order, upwinded finite volume scheme whose stability requires \( \nu \le 1 \).
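A minimal sketch of this linear upwind form, advecting a square pulse on a periodic grid (grid size, CFL number, and initial data are illustrative):

```python
import numpy as np

# Godunov's method for the linear advection equation q_t + a q_x = 0
# with a > 0 reduces to the first-order upwind scheme below.
a, dx, dt = 1.0, 0.02, 0.01           # CFL number a*dt/dx = 0.5 <= 1
nu = a * dt / dx
x = np.arange(0.0, 1.0, dx)
q = np.where((0.1 < x) & (x < 0.3), 1.0, 0.0)   # square pulse

for _ in range(40):                   # advance 40 time steps
    # periodic boundary: np.roll(q, 1) supplies Q_{i-1}
    q = q - nu * (q - np.roll(q, 1))

print(f"pulse mass is conserved: {q.sum() * dx:.3f}")
```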
Three-step algorithm
Following Hirsch, the sche
|
https://en.wikipedia.org/wiki/Dok-7
|
Dok-7 is a non-catalytic cytoplasmic adaptor protein that is expressed specifically in muscle and is essential for the formation of neuromuscular synapses. Further, Dok-7 contains pleckstrin homology (PH) and phosphotyrosine-binding (PTB) domains that are critical for Dok-7 function. Finally, mutations in Dok-7 are commonly found in patients with limb-girdle congenital myasthenia.
Dok-7 regulates neuromuscular synapse formation by activating MuSK
The formation of neuromuscular synapses requires the muscle-specific receptor tyrosine kinase (MuSK). In mice genetically mutant for MuSK, acetylcholine receptors (AChRs) fail to cluster and motor neurons fail to differentiate. Because Dok-7 mutant mice are indistinguishable from MuSK mutant mice, these observations suggest Dok-7 might regulate MuSK activation. Indeed, Dok-7 binds phosphorylated MuSK and activates MuSK in purified protein preparations and in muscle in-vivo by transgenic overexpression. Furthermore, the nerve-derived organizing factor agrin fails to stimulate MuSK activation in muscle cells genetically null for Dok-7. Thus, Dok-7 is both necessary and sufficient for the activation of MuSK.
Dok-7 signaling
The requirement for MuSK in the formation of the NMJ was primarily demonstrated by mouse "knockout" studies. In mice which are deficient for either agrin or MuSK, the neuromuscular junction does not form.
Upon activation by its ligand agrin, MuSK signals via the proteins Dok-7 and rapsyn to induce "clustering" of acetylcholine receptors (AChR). Cell signaling downstream of MuSK requires Dok-7. Mice which lack this protein fail to develop endplates. Further, forced expression of Dok-7 induces the tyrosine phosphorylation, and thus the activation, of MuSK. Dok-7 interacts with MuSK by way of a protein domain called a PTB domain.
In addition to the AChR, MuSK, and Dok-7 other proteins are then gathered, to form the endplate to the neuromuscular junction. The nerve terminates onto the endplate,
|
https://en.wikipedia.org/wiki/Analog%20temperature%20controlled%20crystal%20oscillator
|
In physics, an Analog Temperature Controlled Crystal Oscillator or Analogue Temperature Compensated Crystal Oscillator (ATCXO) uses analog sampling techniques to correct the temperature deficiencies of a crystal oscillator circuit, its package and its environment.
Typically the correction techniques involve the physical and electrical characterisation of the motional inductance and terminal capacitance of a crystal blank, the knowledge of which is used to create a correction polynomial, or algorithm, which in turn is implemented in circuit blocks. These are usually simulated in a mathematical modeling software tool such as SPICE, to verify that the original measured data can be corrected adequately. Once the system performance has been verified, these circuits are then implemented in a silicon die, usually in a bulk CMOS technology. Once fabricated, this die is then embedded into an oscillator module along with the crystal blank. Due to the sub-ppm accuracy of this type of crystal oscillator, specialist packaging must be used to ensure good ageing and temperature shock characteristics. Example applications are for use in low power or battery operated consumer electronic products such as GSM or CDMA mobile phones, or GPS satellite navigation systems.
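The correction-polynomial step described above can be sketched numerically: fit a low-order polynomial to frequency-error-versus-temperature data, then apply its negation as the correction. The measurement data here are hypothetical; a real design flow characterises the actual crystal and verifies the correction in circuit simulation:

```python
import numpy as np

# Hypothetical measured frequency error (ppm) of a crystal versus
# temperature; real data would come from characterisation of the blank.
temps_c = np.array([-40, -20, 0, 25, 50, 70, 85], dtype=float)
error_ppm = np.array([-18.0, -6.5, 0.8, 0.0, -0.9, 5.8, 16.5])

# Fit a cubic correction polynomial (AT-cut crystals are roughly cubic
# in temperature), then apply the negated fit as the correction.
coeffs = np.polyfit(temps_c, error_ppm, deg=3)
correction = -np.polyval(coeffs, temps_c)

residual = error_ppm + correction    # error remaining after correction
print(f"worst-case residual: {np.max(np.abs(residual)):.2f} ppm")
```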
|
https://en.wikipedia.org/wiki/POW-R
|
POW-R (Psychoacoustically Optimized Wordlength Reduction) is a set of commercial dithering and noise shaping algorithms used in digital audio bit-depth reduction.
Developed by a consortium of four companies – The POW-R Consortium – the algorithms were first made available in 1999 in digital audio hardware products.
POW-R is now licensed for use by many companies, particularly those in the digital audio workstation (DAW) arena, where it currently has significant market share.
History
POW-R was developed between 1997 and 1998 after an unfavorable change in the licensing terms of a leading bit-depth reduction algorithm of the time prompted some of its licensees to put together a consortium to develop a viable alternative algorithm.
Formed by four audio engineering companies: Lake Technology (Dolby Labs), Weiss Engineering, Millennia Media and Z-Systems, the consortium set out with the goal to create 'the most sonically transparent dithering algorithm possible'.
In 1999, the first products containing POW-R were released by consortium companies. Other companies became interested in using POW-R in their products, and the algorithms are now licensed to a number of leading DAW vendors including Apple, Avid (Digidesign), Sonic Studio, Ableton, Magix / Sequoia / Samplitude, and others.
Reception
One of the first products to include POW-R was a hardware dithering unit from Weiss engineering; in a review of this product in 1999, mastering engineer Bob Katz spoke highly of the new algorithm declaring it 'an incredible achievement'.
Technical details
Technically, the entire POW-R suite is not noise shaping; rather, the original POW-R algorithm is based on narrow-band Nyquist dither, while other POW-R algorithms include noise shaping and white noise. Unlike noise-shaping algorithms based on an 'Absolute threshold of hearing' model (i.e. the quietest sound that can be heard in otherwise silent conditions), POW-R has been designed to give optimal performance at normal listenin
|
https://en.wikipedia.org/wiki/Paxos%20%28computer%20science%29
|
Paxos is a family of protocols for solving consensus in a network of unreliable or fallible processors.
Consensus is the process of agreeing on one result among a group of participants. This problem becomes difficult when the participants or their communications may experience failures.
Consensus protocols are the basis for the state machine replication approach to distributed computing, as suggested by Leslie Lamport and surveyed by Fred Schneider. State machine replication is a technique for converting an algorithm into a fault-tolerant, distributed implementation. Ad-hoc techniques may leave important cases of failures unresolved. The principled approach proposed by Lamport et al. ensures all cases are handled safely.
The Paxos protocol was first submitted in 1989 and named after a fictional legislative consensus system used on the Paxos island in Greece, where Lamport wrote that the parliament had to function "even though legislators continually wandered in and out of the parliamentary Chamber". It was later published as a journal article in 1998.
The Paxos family of protocols includes a spectrum of trade-offs between the number of processors, number of message delays before learning the agreed value, the activity level of individual participants, number of messages sent, and types of failures. Although no deterministic fault-tolerant consensus protocol can guarantee progress in an asynchronous network (a result proved in a paper by Fischer, Lynch and Paterson), Paxos guarantees safety (consistency), and the conditions that could prevent it from making progress are difficult to provoke.
Paxos is usually used where durability is required (for example, to replicate a file or a database), in which the amount of durable state could be large. The protocol attempts to make progress even during periods when some bounded number of replicas are unresponsive. There is also a mechanism to drop a permanently failed replica or to add a new replica.
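A minimal, in-process sketch of the single-decree core (proposer logic plus acceptor state; no networking, retries, or failure handling — the names and structure are illustrative, not the canonical formulation):

```python
class Acceptor:
    def __init__(self):
        self.promised = -1            # highest proposal number promised
        self.accepted = None          # (number, value) last accepted

    def prepare(self, n):             # phase 1b
        if n > self.promised:
            self.promised = n
            return self.accepted      # report any previously accepted value
        return "nack"

    def accept(self, n, v):           # phase 2b
        if n >= self.promised:
            self.promised = n
            self.accepted = (n, v)
            return "ok"
        return "nack"

def propose(acceptors, n, value):
    quorum = len(acceptors) // 2 + 1
    # Phase 1: gather promises from a majority.
    replies = [a.prepare(n) for a in acceptors]
    promises = [r for r in replies if r != "nack"]
    if len(promises) < quorum:
        return None
    # If any acceptor already accepted a value, adopt the one with the
    # highest proposal number instead of our own (the safety rule).
    prior = [p for p in promises if p is not None]
    if prior:
        value = max(prior)[1]
    # Phase 2: ask the acceptors to accept; succeed on a majority.
    acks = [a.accept(n, value) for a in acceptors].count("ok")
    return value if acks >= quorum else None

cluster = [Acceptor() for _ in range(5)]
print(propose(cluster, n=1, value="A"))   # -> 'A'
print(propose(cluster, n=2, value="B"))   # -> 'A' (already chosen, stays chosen)
```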
History
The top
|
https://en.wikipedia.org/wiki/Virtual%20workplace
|
A virtual workplace is a work environment where employees can perform their duties remotely, using technology such as laptops, smartphones, and video conferencing tools. A virtual workplace is not located in any one physical space. It is usually a network of several workplaces technologically connected (via a private network or the Internet) without regard to geographic boundaries. Employees are thus able to interact in a collaborative working environment regardless of where they are located. A virtual workplace integrates hardware, people, and online processes.
The phenomenon of a virtual workplace has grown in the 2000s as advances in technology have made it easier for employees to work from anywhere with an internet connection.
The virtual workplace industry includes companies that offer remote work solutions, such as virtual meeting (teleconference) software and project management tools. Consulting firms can also help companies transition to a virtual workplace if needed. The latest technology evolution in the space is virtual office software, which allows companies to gather all their team members in one virtual workplace. Companies in a variety of industries, including technology, finance, and healthcare, are turning to virtual workplaces to increase employee flexibility and productivity, reduce office costs, and attract and retain top talent. Four industries have recently been identified as particularly suited to remote work: communications and information technology, educational services, media and communications, and professional and business services.
History
As information technology began to play a greater role in the daily operations of organizations, virtual workplaces developed as an augmentation or alternative to traditional work environments of rooms, cubicles and office buildings.
In 2010, the Telework Enhancement Act of 2010 required each Executive agency in the United States to establish a policy allowing remote work to the maximum extent possible, s
|
https://en.wikipedia.org/wiki/The%20Secret%20Island%20of%20Dr.%20Quandary
|
The Secret Island of Dr. Quandary is an educational computer puzzle game developed by MECC, which pits the player against a variety of mathematical and logical puzzles. It was released in 1992 for MS-DOS and Macintosh.
Story
The player starts as a human playing in a shooting gallery in Dr. Quandary's carnival, and is given a live-action figure when the shooting game is defeated. However, it is a ruse for Dr. Quandary to put the player in the doll and transport them to his secret island, where the player must gather and brew the Fixer Elixir in order to escape.
Puzzles
There are a variety of puzzles in the game, most requiring some mathematical or logic skills, with some memory challenges thrown in as well. There are also varieties of traditional puzzles, such as Tangrams, Bulls and Cows, Taxman, the Tower of Hanoi, and Nim. Beating each puzzle nets the player an ingredient for the Fixer Elixir, the recipe of which can be a puzzle in itself for the harder difficulty levels.
|
https://en.wikipedia.org/wiki/Treatise%20on%20Natural%20Philosophy
|
Treatise on Natural Philosophy was an 1867 textbook by William Thomson (later Lord Kelvin) and Peter Guthrie Tait, published by Oxford University Press.
The Treatise was often referred to as T and T′, as explained by Alexander Macfarlane:
Maxwell had facetiously referred to Thomson as T and Tait as T′. Hence the Treatise on Natural Philosophy came to be commonly referred to as T and T′ in conversation with mathematicians.
Reception
The first volume received an enthusiastic review in the Saturday Review:
The grand result of all concurrent research in modern times has been to confirm what was but perhaps a dream of genius, or an instinct of the keen Greek intellect, that all the operations of nature are rooted and grounded in number and figure.
The Treatise was also reviewed as Elements of Natural Philosophy (1873).
Thomson & Tait's Treatise on Natural Philosophy was reviewed by J. C. Maxwell in Nature of 3 July 1879 indicating the importance given to kinematics: "The guiding idea … is that geometry itself is part of the science of motion."
In 1892 Karl Pearson noted that T and T′ perpetuated a "subjectivity of force" that originated with Newton.
In 1902 Alexander Macfarlane ascribed much of the inspiration of the book to William Rankine's 1865 paper "Outlines of the Science of Energetics":
The main object of Thomson and Tait's Treatise on Natural Philosophy was to fill up Rankine's outlines, — expound all branches of physics from the standpoint of the doctrine of energy. The plan contemplated four volumes; the printing of the first volume began in 1862 and was completed in 1867. The other three volumes never appeared. When a second edition was called for, the matter of the first volume was increased by a number of appendices and appeared as two separately bound parts. The volume which did appear, although judged rather difficult reading even by accomplished mathematicians, has achieved great success. It has been translated into French and German; it has educated the new
|
https://en.wikipedia.org/wiki/Full-employment%20theorem
|
In computer science and mathematics, a full employment theorem is a term used, often humorously, to refer to a theorem which states that no algorithm can optimally perform a particular task done by some class of professionals. The name arises because such a theorem ensures that there is endless scope to keep discovering new techniques to improve the way at least some specific task is done.
For example, the full employment theorem for compiler writers states that there is no such thing as a provably perfect size-optimizing compiler, as such a proof for the compiler would have to detect non-terminating computations and reduce them to a one-instruction infinite loop. Thus, the existence of a provably perfect size-optimizing compiler would imply a solution to the halting problem, which cannot exist. This also implies that there may always be a better compiler since the proof that one has the best compiler cannot exist. Therefore, compiler writers will always be able to speculate that they have something to improve. A similar example in practical computer science is the idea of no free lunch in search and optimization, which states that no efficient general-purpose solver can exist, and hence there will always be some particular problem whose best known solution might be improved.
Similarly, Gödel's incompleteness theorems have been called full employment theorems for mathematicians. Tasks such as virus writing and detection, and spam filtering and filter-breaking are also subject to Rice's theorem.
|