source | text
---|---
https://en.wikipedia.org/wiki/Wiener%E2%80%93Ikehara%20theorem
|
The Wiener–Ikehara theorem is a Tauberian theorem introduced by Shikao Ikehara in 1931. It follows from Wiener's Tauberian theorem, and can be used to prove the prime number theorem (Chandrasekharan, 1969).
Statement
Let A(x) be a non-negative, monotonic nondecreasing function of x, defined for 0 ≤ x < ∞. Suppose that
f(s) = \int_0^{\infty} A(x) e^{-xs} \, dx
converges for ℜ(s) > 1 to the function f(s) and that, for some non-negative number c,
f(s) - \frac{c}{s-1}
has an extension as a continuous function for ℜ(s) ≥ 1.
Then the limit as x goes to infinity of e^{−x} A(x) is equal to c.
One Particular Application
An important number-theoretic application of the theorem is to Dirichlet series of the form
\sum_{n=1}^{\infty} a(n) \, n^{-s}
where a(n) is non-negative. If the series converges to an analytic function in
\Re(s) \ge b
with a simple pole of residue c at s = b, then
\sum_{n \le X} a(n) \sim \frac{c}{b} X^{b}.
Applying this to the logarithmic derivative of the Riemann zeta function, where the coefficients in the Dirichlet series are values of the von Mangoldt function, it is possible to deduce the prime number theorem from the fact that the zeta function has no zeroes on the line ℜ(s) = 1.
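A sketch of that deduction in standard notation (added here for illustration; Λ and ψ denote the von Mangoldt and Chebyshev functions):
-\frac{\zeta'(s)}{\zeta(s)} = \sum_{n=1}^{\infty} \frac{\Lambda(n)}{n^{s}}, \qquad \Re(s) > 1,
is analytic on ℜ(s) ≥ 1 except for a simple pole of residue 1 at s = 1, precisely because ζ(s) has no zeroes on that line. The application above (with b = 1, c = 1) then gives
\psi(X) = \sum_{n \le X} \Lambda(n) \sim X,
which is equivalent to the prime number theorem.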
|
https://en.wikipedia.org/wiki/Nehalem%20%28microarchitecture%29
|
Nehalem is the codename for Intel's 45 nm microarchitecture released in November 2008. It was used in the first generation of Intel Core i5 and i7 processors, and succeeds the older Core microarchitecture used in Core 2 processors. The term "Nehalem" comes from the Nehalem River.
Nehalem is built on the 45 nm process, is able to run at higher clock speeds, and is more energy-efficient than Penryn microprocessors. Hyper-threading is reintroduced, along with a reduction in L2 cache size, as well as an enlarged L3 cache that is shared among all cores. Nehalem is an architecture that differs radically from NetBurst, while retaining some of the latter's minor features.
Nehalem later received a die-shrink to 32 nm with Westmere, and was fully succeeded by "second-generation" Sandy Bridge in January 2011.
Technology
Cache line block on L2/L3 cache was reduced from 128 bytes in NetBurst & Conroe/Penryn to 64 bytes per line in this generation (same size as Yonah and Pentium M).
Hyper-threading reintroduced.
Intel Turbo Boost 1.0.
2–24 MiB L3 cache
Instruction Fetch Unit (IFU) containing a second-level branch predictor with a two-level Branch Target Buffer (BTB) and Return Stack Buffer (RSB). Nehalem also supports all predictor types previously used in Intel's processors, such as the indirect predictor and loop detector.
sTLB (second-level unified translation lookaside buffer, covering both instructions and data) that contains 512 entries for small pages only, and is again 4-way associative.
3 integer ALUs, 2 vector ALUs and 2 AGUs per core.
Native (all processor cores on a single die) quad- and octa-core processors
Intel QuickPath Interconnect in high-end models replacing the legacy front side bus
64 KB L1 cache per core (32 KB L1 data and 32 KB L1 instruction), and 256 KB L2 cache per core.
Integration of PCI Express and DMI into the processor in mid-range models, replacing the northbridge
Integrated memory controller supporting two or three memory channels of DDR3 SDRAM
|
https://en.wikipedia.org/wiki/Demihypercube
|
In geometry, demihypercubes (also called n-demicubes, n-hemicubes, and half measure polytopes) are a class of n-polytopes constructed from alternation of an n-hypercube, labeled as hγ_n for being half of the hypercube family, γ_n. Half of the vertices are deleted and new facets are formed. The 2n facets become 2n (n−1)-demicubes, and 2^(n−1) (n−1)-simplex facets are formed in place of the deleted vertices.
They have been named with a demi- prefix to each hypercube name: demicube, demitesseract, etc. The demicube is identical to the regular tetrahedron, and the demitesseract is identical to the regular 16-cell. The demipenteract is considered semiregular for having only regular facets. Higher forms do not have all regular facets but are all uniform polytopes.
The vertices and edges of a demihypercube form two copies of the halved cube graph.
An n-demicube has inversion symmetry if n is even.
Discovery
Thorold Gosset described the demipenteract in his 1900 publication listing all of the regular and semiregular figures in n-dimensions above three. He called it a 5-ic semi-regular. It also exists within the semiregular k21 polytope family.
The demihypercubes can be represented by extended Schläfli symbols of the form h{4,3,...,3} as half the vertices of {4,3,...,3}. The vertex figures of demihypercubes are rectified n-simplexes.
Constructions
They are represented by Coxeter-Dynkin diagrams of three constructive forms:
(As an alternated orthotope) s{2^{1,1,...,1}}
(As an alternated hypercube) h{4,3^{n−1}}
(As a demihypercube) {3^{1,n−3,1}}
H. S. M. Coxeter also labeled the third bifurcating diagrams as 1_{k1}, representing the lengths of the three branches and led by the ringed branch.
An n-demicube, n greater than 2, has n(n−1)/2 edges meeting at each vertex. The graphs below show fewer edges at each vertex due to overlapping edges in the symmetry projection.
In general, a demicube's elements can be determined from the original n-cube: (with Cn,m = mth-face count i
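For illustration, the counts implied by the statements above (2^(n−1) vertices, vertex degree n(n−1)/2, and 2n + 2^(n−1) facets for n ≥ 4) can be tabulated with a short script. This is a sketch based only on those statements, not on the truncated general face-count formula:

```python
def demicube_counts(n):
    """Element counts for the n-demicube, using the statements above:
    half of the 2**n hypercube vertices survive, each surviving vertex
    meets n*(n-1)/2 edges, and (for n >= 4) the facets are 2n
    (n-1)-demicubes plus 2**(n-1) (n-1)-simplexes."""
    if n < 4:
        raise ValueError("use n >= 4; for n = 3 the demicube is the tetrahedron "
                         "and the square-derived facets degenerate to edges")
    vertices = 2 ** (n - 1)
    edges = vertices * n * (n - 1) // 4   # degree n(n-1)/2, each edge counted twice
    facets = 2 * n + 2 ** (n - 1)
    return vertices, edges, facets

if __name__ == "__main__":
    for n in range(4, 9):
        v, e, f = demicube_counts(n)
        print(f"{n}-demicube: {v} vertices, {e} edges, {f} facets")
```

For n = 4 and n = 5 this reproduces the 16-cell (8 vertices, 24 edges, 16 facets) and the demipenteract (16 vertices, 80 edges, 26 facets).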
|
https://en.wikipedia.org/wiki/Submucosa
|
The submucosa (or tela submucosa) is a thin layer of tissue in various organs of the gastrointestinal, respiratory, and genitourinary tracts. It is the layer of dense irregular connective tissue that supports the mucosa (mucous membrane) and joins it to the muscular layer, the bulk of overlying smooth muscle (fibers running circularly within a layer of longitudinal muscle).
The submucosa (sub- + mucosa) is to a mucous membrane what the subserosa (sub- + serosa) is to a serous membrane.
Structure
Blood vessels, lymphatic vessels, and nerves (all supplying the mucosa) will run through here. In the intestinal wall, tiny parasympathetic ganglia are scattered around forming the submucous plexus (or "Meissner's plexus") where preganglionic parasympathetic neurons synapse with postganglionic nerve fibers that supply the muscularis mucosae. Histologically, the wall of the alimentary canal shows four distinct layers (from the lumen moving out): mucosa, submucosa, muscularis externa, and either a serous membrane or an adventitia.
In the gastrointestinal tract and the respiratory tract the submucosa contains the submucosal glands that secrete mucus.
Clinical significance
Identification of the submucosa plays an important role in diagnostic and therapeutic endoscopy, where special fibre-optic cameras are used to perform procedures on the gastrointestinal tract. Abnormalities of the submucosa, such as gastrointestinal stromal tumors, usually show integrity of the mucosal surface.
The submucosa is also identified in endoscopic ultrasound to identify the depth of tumours and to identify other abnormalities. An injection of dye, saline, or epinephrine into the submucosa is imperative in the safe removal of certain polyps.
Endoscopic mucosal resection involves removal of the mucosal layer, and in order to be done safely, a submucosal injection of dye is performed to ensure integrity at the beginning of the procedure.
Female uterine submucosal layers are liable to develop fibr
|
https://en.wikipedia.org/wiki/Machine%20guidance
|
The term Machine Guidance is used to describe a wide range of techniques which improve the productivity of agricultural, mining and construction equipment. It is most commonly used to describe systems which incorporate GPS, Motion Measuring Units (MMU) and other devices to provide on-board systems with information about the movement of the machine in 3, 5 or 7 axes of rotation. Feedback to the operator is provided through audio and visual displays, allowing improved control of the machine in relation to the intended or designed direction of travel.
See also
List of emerging technologies
|
https://en.wikipedia.org/wiki/Monoblock%20LNB
|
Low-noise block downconverters (LNBs) are electronic devices mounted on satellite dishes for TV reception or general telecommunications. They amplify the weak signals collected by the dish and convert the received block of frequencies down to a lower intermediate-frequency range that can be carried over cable to a receiver, where it is decoded into images, video, or other data.
Monoblock (or monobloc) low-noise block downconverters are a special type of LNB: a single device that contains several (typically two to four) LNB units and a Digital Satellite Equipment Control (DiSEqC) switch. The switch allows the receiver to select signals from several neighboring satellites, each carrying different channels or services, which increases the range of programming available to the installation.
The two, three, or four LNBs can be automatically addressed with any DiSEqC 1.0 or higher receiver. In some cases, they can also be addressed with ToneBurst/MiniDiSEqC. However, they are only available for satellites with a fixed orbital spacing of 3°, 4°, 4.3°, or 6°.
Most commercially available receivers are compatible with at least DiSEqC 1.0, which allows switching between up to four satellites (covering all contemporary monoblock LNBs) as the user changes settings, for example by flipping channels with a TV remote control.
Availability examples
In Europe, for example, there are monoblock single, twin, and quad LNBs for the Ku band, which have a pre-defined spacing of 6 degrees (for Astra 19.2°E/Hot Bird 13°E).
In March 2007, a new type of monoblock, called the Duo LNB was introduced by CanalDigitaal in the Netherlands for the simultaneous reception of Astra 19.2°E/Astra 23.5°E with a spacing of just 4.3 degrees. Unlike most other monoblocks, the Duo LNB was intended for use with 60 cm dishes, whereas most monoblocks may require a larger, 80 cm or 1 m dish.
The Duo LNB is available in twin and quad versions. Triple monoblock LNBs are available in single, twin, and quad versions.
There are also triple monoblock LNB uni
|
https://en.wikipedia.org/wiki/HBsAg
|
HBsAg (also known as the Australia antigen) is the surface antigen of the hepatitis B virus (HBV). Its presence in blood indicates current hepatitis B infection.
Structure and function
The viral envelope of an enveloped virus has different surface proteins from the rest of the virus which act as antigens. These antigens are recognized by antibody proteins that bind specifically to one of these surface proteins.
Immunoassay
Today, these antigen proteins can be genetically manufactured (e.g. in transgenic E. coli) to produce material for a simple antigen test, which detects the presence of HBV.
It is present in the sera of patients with viral hepatitis B (with or without clinical symptoms). Patients who developed antibodies against HBsAg (anti-HBsAg seroconversion) are usually considered non-infectious. HBsAg detection by immunoassay is used in blood screening, to establish a diagnosis of hepatitis B infection in the clinical setting (in combination with other disease markers) and to monitor antiviral treatment.
In histopathology, the presence of HBsAg is more commonly demonstrated by the use of the Shikata orcein technique, which uses a natural dye to bind to the antigen in infected liver cells.
Positive HBsAg tests can be due to recent vaccination against Hepatitis B virus but this positivity is unlikely to persist beyond 14 days post-vaccination.
History
It is commonly referred to as the Australia Antigen. This is because it was first isolated by the American research physician and Nobel Prize winner Baruch S. Blumberg in the serum of an Australian Aboriginal person. It was discovered to be part of the virus that caused serum hepatitis by virologist Alfred Prince in 1968.
Heptavax, a "first-generation" hepatitis B vaccine in the 1980s, was made from HBsAg extracted from the blood plasma of hepatitis patients. Current vaccine are made from recombinant HBsAg grown in yeast.
See also
HBcAg
HBeAg
|
https://en.wikipedia.org/wiki/Netherlands%20Forensic%20Institute
|
The Netherlands Forensic Institute (Dutch Nederlands Forensisch Instituut) is the national forensics institute of the Netherlands, located in the Ypenburg quarter of The Hague.
It is an autonomous division of the Dutch Ministry of Security and Justice and falls under the Directorate-General for the Administration of Justice and Law Enforcement.
History
On 30 July 1945, the government decided to set up a Justice Laboratory. Three years later, on 4 November 1948, the laboratory became a department of the Ministry of Justice.
A similar institution was founded in 1951: the Gerechtelijk Geneeskundig Laboratorium (Judicial Medical Laboratory), later renamed the Laboratorium voor Gerechtelijke Pathologie (Laboratory for Judicial Pathology), which was located in the building in The Hague later used by Europol.
Pathologist Dr. Jan Zeldenrust was the first CEO of this laboratory.
On 1 November 1999, the two laboratories merged into the Nederlands Forensisch Instituut (Netherlands Forensic Institute).
The laboratory was based in Rijswijk until October 2004, when it moved to the Ypenburg quarter of The Hague.
See also
Deventer murder case
|
https://en.wikipedia.org/wiki/Replicative%20transposition
|
Replicative transposition is a mechanism of transposition in molecular biology, proposed by James A. Shapiro in 1979, in which the transposable element is duplicated during the reaction, so that the transposing entity is a copy of the original element. In this mechanism, the donor and receptor DNA sequences form a characteristic intermediate "theta" configuration, sometimes called a "Shapiro intermediate". Replicative transposition is characteristic of retrotransposons and occurs occasionally in class II transposons.
|
https://en.wikipedia.org/wiki/Bass%20diffusion%20model
|
The Bass model or Bass diffusion model was developed by Frank Bass. It consists of a simple differential equation that describes the process of how new products get adopted in a population. The model presents a rationale of how current adopters and potential adopters of a new product interact. The basic premise of the model is that adopters can be classified as innovators or as imitators, and the speed and timing of adoption depends on their degree of innovation and the degree of imitation among adopters. The Bass model has been widely used in forecasting, especially new product sales forecasting and technology forecasting. Mathematically, the basic Bass diffusion model is a Riccati equation with constant coefficients, equivalent to Verhulst–Pearl logistic growth.
In 1969, Frank Bass published his paper on a new product growth model for consumer durables. Prior to this, Everett Rogers published Diffusion of Innovations, a highly influential work that described the different stages of product adoption. Bass contributed some mathematical ideas to the concept. While the Rogers model describes all four stages of the product lifecycle (introduction, growth, maturity, decline), the Bass model focuses on the first two (introduction and growth). Some of the Bass model extensions present mathematical models for the last two (maturity and decline).
Model formulation
The model is expressed by the hazard-rate equation
\frac{f(t)}{1 - F(t)} = p + q F(t)
where:
F(t) is the installed base fraction,
f(t) is the rate of change of the installed base fraction, i.e. f(t) = F'(t),
p is the coefficient of innovation,
q is the coefficient of imitation.
Expressed as an ordinary differential equation,
F'(t) = \bigl(p + q F(t)\bigr)\bigl(1 - F(t)\bigr).
Sales (or new adopters) S(t) at time t is the rate of change of installed base, i.e. f(t), multiplied by the ultimate market potential m: S(t) = m f(t). Under the condition F(0) = 0, we have that
F(t) = \frac{1 - e^{-(p+q)t}}{1 + \frac{q}{p} e^{-(p+q)t}}, \qquad S(t) = m \, \frac{(p+q)^2}{p} \, \frac{e^{-(p+q)t}}{\left(1 + \frac{q}{p} e^{-(p+q)t}\right)^2}.
We have the decomposition S(t) = s_n(t) + s_i(t), where s_n(t) = m p (1 - F(t)) is the number of innovators at time t, and s_i(t) = m q F(t)(1 - F(t)) is the number of imitators at time t.
The time of peak sales is
t^* = \frac{\ln q - \ln p}{p + q}.
The times of the inflection points at the new adopters' curve t**
or in another
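A minimal numerical sketch of the closed-form model above (the parameter values p, q and market size m below are arbitrary illustrations, not figures from the article):

```python
import numpy as np

def bass_adopters(p, q, m, t):
    """Closed-form Bass model: cumulative adopters m*F(t) and new adopters m*f(t),
    assuming F(0) = 0."""
    e = np.exp(-(p + q) * t)
    F = (1.0 - e) / (1.0 + (q / p) * e)                      # installed base fraction
    f = ((p + q) ** 2 / p) * e / (1.0 + (q / p) * e) ** 2    # dF/dt
    return m * F, m * f

# Illustrative parameters only (typical orders of magnitude).
p, q, m = 0.03, 0.38, 100_000
t = np.linspace(0, 20, 201)
cumulative, new_adopters = bass_adopters(p, q, m, t)

t_peak = np.log(q / p) / (p + q)   # time of peak sales, t* = (ln q - ln p)/(p + q)
print(f"peak sales at t* = {t_peak:.2f}")
```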
|
https://en.wikipedia.org/wiki/Max%20Planck%20Institute%20for%20Cell%20Biology
|
The Max Planck Institute for Cell Biology was located in Ladenburg, Germany. It was founded in 1947 as the Max Planck Institute for Oceanic Biology in Wilhelmshaven; after being renamed in 1968, it moved to Ladenburg in 1977 under the direction of Hans-Georg Schweiger. It was closed on 1 July 2003. It was one of 80 institutes in the Max Planck Society (Max-Planck-Gesellschaft).
External links
Homepage of the Max Planck Institute for Cell Biology
Molecular biology institutes
Cell Biology (closed)
1947 establishments in Germany
2003 disestablishments in Germany
Research institutes in Lower Saxony
Buildings and structures in Wilhelmshaven
Ladenburg
|
https://en.wikipedia.org/wiki/Jan%20Denef
|
Jan Denef (born 4 September 1951) is a Belgian mathematician. He is an Emeritus Professor of Mathematics at the Katholieke Universiteit Leuven (KU Leuven).
Denef obtained his PhD from KU Leuven in 1975 with a thesis on Hilbert's tenth problem; his advisors were Louis Philippe Bouckaert and Willem Kuijk.
He is a specialist of model theory, number theory and algebraic geometry. He is well known for his early work on Hilbert's tenth problem and for developing the theory of motivic integration in a series of papers with François Loeser. He has also worked on computational number theory.
Recently he proved a conjecture of Jean-Louis Colliot-Thélène which generalizes the Ax–Kochen theorem.
In 2002 Denef was an Invited Speaker at the International Congress of Mathematicians in Beijing. His Hirsch index is 24.
|
https://en.wikipedia.org/wiki/Humanistic%20informatics
|
Humanistic informatics is one of several names chosen for the study of the relationship between human culture and technology. The term is fairly common in Europe, but is little known in the English-speaking world, though digital humanities (also known as humanities computing) is in many cases roughly equivalent.
Humanistic informatics departments were generally started in the 1990s when universities rarely taught humanities-based approaches to the rapidly developing computerized society. For this reason, the field was quite broadly defined, and included courses in humanities computing, basic introductions to how computers work, historical developments of technology, technology and learning, digital art and literature and digital culture. Today several departments have declared more specialized areas of research, such as digital arts and culture at the University of Bergen, and socio-cultural communication with and without technology at the University of Aalborg.
Digital humanities is a primary topic, and several universities in the US and the UK have digital arts and humanities research and development centers. One aspect of digital humanities expected to grow is the intersection of new digital media and the humanities, particularly in the gaming industry, which has developed both casual and serious games and game-design strategies to foster learning in the humanities and other academic disciplines. A key principle in all digital interactive media or games is the storyline; the narrative, quest or goal of the game is primary to both literary works and games. Characters and players go on the quest, and playing the game becomes the narrative. Game design principles, also relevant in literature and the fine arts, include visual literacy and empowering players and learners to align with great artists and writers who believe in the creative process.
|
https://en.wikipedia.org/wiki/Shalom%3A%20Knightmare%20III
|
Shalom: Knightmare III is a 1987 adventure video game developed and published by Konami for the MSX home computer. It was later re-released digitally for Microsoft Windows. It is the third and final entry in the Knightmare trilogy. Set a century after the events of The Maze of Galious, the plot follows a Japanese high school student teleported into the Grecian Kingdom who must prevent the resurrection of the ancient demon lord Gog. Gameplay revolves around interaction with characters and exploration, while taking part in battles against enemies and bosses. The game was created by the MSX division at Konami under the management of Shigeru Fukutake. The process of making original titles for the platform revolved around the person who came up with the characters. Development proceeded with a team of four or five members, lasting somewhere between four and six months. It received a mixed reception from contemporary critics and retrospective commentators.
Gameplay
Shalom: Knightmare III is an adventure game.
Synopsis
Setting and characters
Plot
Development and release
Shalom: Knightmare III was developed by the MSX division at Konami under the management of Shigeru Fukutake, who revealed its creation process in a 1988 interview with the Japanese publication Micom BASIC Magazine. Fukutake explained that the staffer who came up with the characters was in charge of designing and facilitating the development of the project, as the process of making original titles for the MSX revolved around the person who came up with the characters being assigned to do both planning and the story. Fukutake further explained that the planner would then lead a team of four or five members to proceed with development, which would last somewhere between four and six months. The game was published for the MSX exclusively in Japan by Konami on December 23, 1987. Its ending theme was featured alongside music tracks from other Konami games in a compilation album titled Konami Ending Collection, distributed in Japan b
|
https://en.wikipedia.org/wiki/Motivic%20integration
|
Motivic integration is a notion in algebraic geometry that was introduced by Maxim Kontsevich in 1995 and was developed by Jan Denef and François Loeser. Since its introduction it has proved to be quite useful in various branches of algebraic geometry, most notably birational geometry and singularity theory. Roughly speaking, motivic integration assigns, to subsets of the arc space of an algebraic variety, a volume living in the Grothendieck ring of algebraic varieties. The name "motivic" mirrors the fact that, unlike ordinary integration, for which the values are real numbers, in motivic integration the values are geometric in nature.
|
https://en.wikipedia.org/wiki/Backcoating
|
Backcoating is the lamination of two sheets of paper back to back to create a superior paper for folding origami models.
Notes and references
Paper art
Origami
|
https://en.wikipedia.org/wiki/Fran%C3%A7ois%20Loeser
|
François Loeser (born August 25, 1958) is a French mathematician. He is Professor of Mathematics at the Pierre-and-Marie-Curie University in Paris. From 2000 to 2010 he was Professor at the École Normale Supérieure. Since 2015, he has been a senior member of the Institut Universitaire de France.
He was awarded the CNRS Silver Medal in 2011 and the Charles-Louis de Saulces de Freycinet Prize of the French Academy of Sciences in 2007. He was awarded an ERC Advanced Investigator Grant in 2010 and was a plenary speaker at the European Congress of Mathematics in Amsterdam in 2008. In 2014 Loeser was an Invited Speaker at the International Congress of Mathematicians in Seoul. In 2015 he was elected as a fellow of the American Mathematical Society "for contributions to algebraic and arithmetic geometry and to model theory".
He was elected member of Academia Europaea in 2019.
He is a specialist of algebraic geometry and is best known for his work on motivic integration, part of it in collaboration with Jan Denef.
|
https://en.wikipedia.org/wiki/N-jet
|
An N-jet is the set of (partial) derivatives of a function up to order N.
Specifically, in the area of computer vision, the N-jet is usually computed from a scale space representation L of the input image f, and the partial derivatives of L are used as a basis for expressing various types of visual modules. For example, algorithms for tasks such as feature detection, feature classification, stereo matching, tracking and object recognition can be expressed in terms of N-jets computed at one or several scales in scale space.
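As an illustration (a sketch, not tied to any particular system mentioned in the article), the 2-jet of an image at scale σ can be approximated with Gaussian-derivative filtering, for example using SciPy:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def two_jet(image, sigma):
    """Approximate the 2-jet {L, Lx, Ly, Lxx, Lxy, Lyy} of a 2-D image
    at scale sigma, using derivatives of the Gaussian-smoothed image."""
    orders = {
        "L":   (0, 0),
        "Ly":  (1, 0),   # axis 0 is the row (y) direction
        "Lx":  (0, 1),
        "Lyy": (2, 0),
        "Lxy": (1, 1),
        "Lxx": (0, 2),
    }
    return {name: gaussian_filter(image, sigma, order=order)
            for name, order in orders.items()}

# Example use: a scale-normalised Laplacian, sigma^2 * (Lxx + Lyy), for blob detection.
img = np.random.rand(64, 64)
jet = two_jet(img, sigma=2.0)
laplacian = 2.0 ** 2 * (jet["Lxx"] + jet["Lyy"])
```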
See also
Scale space implementation
Jet (mathematics)
|
https://en.wikipedia.org/wiki/Metric%20derivative
|
In mathematics, the metric derivative is a notion of derivative appropriate to parametrized paths in metric spaces. It generalizes the notion of "speed" or "absolute velocity" to spaces which have a notion of distance (i.e. metric spaces) but not direction (such as vector spaces).
Definition
Let (M, d) be a metric space. Let E ⊆ ℝ have a limit point at t ∈ E. Let γ : E → M be a path. Then the metric derivative of γ at t, denoted |γ′|(t), is defined by
|\gamma'|(t) = \lim_{s \to 0} \frac{d\bigl(\gamma(t + s), \gamma(t)\bigr)}{|s|},
if this limit exists.
Properties
Recall that ACp(I; X) is the space of curves γ : I → X such that
d\bigl(\gamma(s), \gamma(t)\bigr) \le \int_{s}^{t} m(\tau) \, d\tau \qquad \text{for all } [s, t] \subseteq I,
for some m in the Lp space Lp(I; R). For γ ∈ ACp(I; X), the metric derivative of γ exists for Lebesgue-almost all times in I, and the metric derivative is the smallest m ∈ Lp(I; R) such that the above inequality holds.
If Euclidean space ℝⁿ is equipped with its usual Euclidean norm ‖·‖, and γ′ : I → ℝⁿ is the usual Fréchet derivative of γ with respect to time, then
|\gamma'|(t) = \|\gamma'(t)\|,
where d(x, y) = ‖x − y‖ is the Euclidean metric.
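As a quick illustration of the last property (a standard example added here, not taken from the article): for the circular path
\gamma : [0, 2\pi] \to \mathbb{R}^2, \qquad \gamma(t) = (\cos t, \sin t),
the metric derivative is
|\gamma'|(t) = \lim_{s \to 0} \frac{\|\gamma(t+s) - \gamma(t)\|}{|s|} = \|(-\sin t, \cos t)\| = 1
for every t, matching the intuition that the path is traversed at unit speed.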
|
https://en.wikipedia.org/wiki/Comparison%20of%20genealogy%20software
|
This article compares several selected client-based genealogy programs. Web-based genealogy software is not included.
General information
General features
Genealogy software products differ in the way they support data acquisition (e.g. drag and drop data entry for images, flexible data formats, free defined custom attributes for persons and connections between persons, rating of sources) and interaction (e.g. 3D-view, name filters, full text search and dynamic pan and zoom navigation), in reporting (e.g.: fan charts, automatic narratives, relationship between arbitrary people, place of birth on virtual globes, statistics about number of children per family), validation (e.g.: consistency checks, research assistants connected to online genealogy databases), exporting (e.g.: export as web page, book or wall chart) and integration (e.g.: synchronisation with tablet version). Some software might include also fun and entertainment features (e.g. quizzes or slideshows).
Users should get acquainted with the terms of use and privacy policy of their app provider to understand what happens with their data. For example, the Family Tree Builder by MyHeritage claims a royalty-free, worldwide, perpetual and non-exclusive license to host, copy, post and distribute user content.
Genealogical features
Besides exchange between systems (e.g. GEDCOM support for import and export), flexible data handling (e.g. custom attributes and multiple kinds of relations between people) is relevant.
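As an illustration of the GEDCOM exchange format mentioned above, GEDCOM files are line-based records of the form "level [xref] tag [value]". The following is a minimal reading sketch; the sample lines are invented for illustration, and real GEDCOM handling needs more care (character encodings, CONT/CONC continuation lines, and so on):

```python
def parse_gedcom_line(line):
    """Split one GEDCOM line into (level, xref_id, tag, value)."""
    parts = line.rstrip("\n").split(" ", 2)
    level = int(parts[0])
    if len(parts) > 1 and parts[1].startswith("@"):   # optional cross-reference id
        xref = parts[1]
        rest = parts[2] if len(parts) > 2 else ""
        tag, _, value = rest.partition(" ")
    else:
        xref = None
        tag = parts[1] if len(parts) > 1 else ""
        value = parts[2] if len(parts) > 2 else ""
    return level, xref, tag, value

sample = [
    "0 @I1@ INDI",
    "1 NAME John /Doe/",
    "1 BIRT",
    "2 DATE 1 JAN 1900",
]
for line in sample:
    print(parse_gedcom_line(line))
```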
Languages
Available user interface languages
Notes
|
https://en.wikipedia.org/wiki/Transient%20synovitis
|
Transient synovitis of hip (also called toxic synovitis; see below for more synonyms) is a self-limiting condition in which there is an inflammation of the inner lining (the synovium) of the capsule of the hip joint. The term irritable hip refers to the syndrome of acute hip pain, joint stiffness, limp or non-weightbearing, indicative of an underlying condition such as transient synovitis or orthopedic infections (like septic arthritis or osteomyelitis). In everyday clinical practice however, irritable hip is commonly used as a synonym for transient synovitis. It should not be confused with sciatica, a condition describing hip and lower back pain much more common to adults than transient synovitis but with similar signs and symptoms.
Transient synovitis usually affects children between three and ten years old (but it has been reported in a 3-month-old infant and in some adults). It is the most common cause of sudden hip pain and limp in young children. Boys are affected two to four times as often as girls. The exact cause is unknown. A recent viral infection (most commonly an upper respiratory tract infection) or a trauma have been postulated as precipitating events, although these are reported only in 30% and 5% of cases, respectively.
Transient synovitis is a diagnosis of exclusion. The diagnosis can be made in the typical setting of pain or limp in a young child who is not generally unwell and has no recent trauma. There is a limited range of motion of the hip joint. Nevertheless, children with transient synovitis of the hip can usually weight bear. This is an important clinical differentiating sign from septic arthritis. Blood tests may show mild inflammation. An ultrasound scan of the hip joint can show a fluid collection (effusion). Treatment is with nonsteroidal anti-inflammatory drugs and limited weight-bearing. The condition usually clears by itself within seven to ten days, but a small group of patients will continue to have symptoms for several weeks
|
https://en.wikipedia.org/wiki/Paleontological%20Society
|
The Paleontological Society, formerly the Paleontological Society of America, is an international organisation devoted to the promotion of paleontology. The Society was founded in 1908 in Baltimore, Maryland, and was incorporated in April 1968 in the District of Columbia. The Society publishes the bi-monthly Journal of Paleontology and the quarterly Paleobiology, holds an annual meeting in the autumn in conjunction with the Geological Society of America, sponsors conferences and lectures, and provides grants and scholarships.
The Society has six geographic sections: Pacific Coast (founded March 1911), North-Central (founded May 1974), Northeastern (founded March 1977), Southeastern (founded November 1979), Rocky Mountain (founded October 1985), and South-Central (founded November 1988).
Medals and awards
The Society recognizes distinguished accomplishments through three awards, one recognized by a medal, the other two by inscribed plaques normally presented annually:
The Paleontological Society Medal, given since 1963, is the most prestigious honor bestowed by the Society and is awarded to a person whose eminence is based on advancement of knowledge in paleontology.
The Charles Schuchert Award, given since 1973, is presented to a person under the age of 40 whose work reflects excellence and promise in the science of paleontology. The award is named for Charles Schuchert (1858-1942), an invertebrate paleontologist from Yale University who was a leader in the development of paleogeography.
The Harrell L. Strimple Award, awarded since 1984, is given for contributions to paleontology by an amateur (someone who does not derive his/her livelihood from the study of fossils).
See also
John Mason Clarke, the society's first President
List of presidents of the Paleontological Society
|
https://en.wikipedia.org/wiki/Galahad%20library
|
The Galahad library is a thread-safe library of packages for the solution of mathematical optimization problems. The areas covered by the library are unconstrained and bound-constrained optimization, quadratic programming, nonlinear programming, systems of nonlinear equations and inequalities, and non-linear least squares problems. The library is mostly written in the Fortran 90 programming language.
The name of the library originates from its major package for general nonlinear programming, LANCELOT-B, the successor of the original augmented Lagrangian package LANCELOT of Conn, Gould and Toint.
Other packages in the library include:
a filter-based method for systems of linear and nonlinear equations and inequalities,
an active-set method for nonconvex quadratic programming,
a primal-dual interior-point method for nonconvex quadratic programming,
a presolver for quadratic programs,
a Lanczos method for trust-region subproblems,
an interior-point method to solve linear programs or separable convex programs or alternatively, to compute the analytic center of a set defined by such constraints, if it exists.
Packages in the GALAHAD library accept problems modeled in either the Standard Input Format (SIF), or the AMPL modeling language. For problems modeled in the SIF, the GALAHAD library naturally relies upon the CUTEr package, an optimization toolbox providing all low-level functionalities required by solvers.
The library is available on several popular computing platforms, including Compaq (DEC) Alpha, Cray, HP, IBM RS/6000, Intel-like PCs, SGI and Sun. It is designed to be easily adapted to other platforms. Support is provided for many operating systems, including Tru64, Linux, HP-UX, AIX, IRIX and Solaris, and for a variety of popular Fortran 90 compilers on these platforms and operating systems.
The GALAHAD Library is authored and maintained by N.I.M. Gould, D. Orban and Ph.L. Toint.
|
https://en.wikipedia.org/wiki/Uniform%20polytope
|
In geometry, a uniform polytope of dimension three or higher is a vertex-transitive polytope bounded by uniform facets. The uniform polytopes in two dimensions are the regular polygons (the definition is different in 2 dimensions to exclude vertex-transitive even-sided polygons that alternate two different lengths of edges).
This is a generalization of the older category of semiregular polytopes, but also includes the regular polytopes. Further, star regular faces and vertex figures (star polygons) are allowed, which greatly expand the possible solutions. A strict definition requires uniform polytopes to be finite, while a more expansive definition allows uniform honeycombs (2-dimensional tilings and higher dimensional honeycombs) of Euclidean and hyperbolic space to be considered polytopes as well.
Operations
Nearly every uniform polytope can be generated by a Wythoff construction, and represented by a Coxeter diagram. Notable exceptions include the great dirhombicosidodecahedron in three dimensions and the grand antiprism in four dimensions. The terminology for the convex uniform polytopes used in the uniform polyhedron, uniform 4-polytope, uniform 5-polytope, uniform 6-polytope, uniform tiling, and convex uniform honeycomb articles was coined by Norman Johnson.
Equivalently, the Wythoffian polytopes can be generated by applying basic operations to the regular polytopes in that dimension. This approach was first used by Johannes Kepler, and is the basis of the Conway polyhedron notation.
Rectification operators
Regular n-polytopes have n orders of rectification. The zeroth rectification is the original form. The (n−1)-th rectification is the dual. A rectification reduces edges to vertices, a birectification reduces faces to vertices, a trirectification reduces cells to vertices, a quadirectification reduces 4-faces to vertices, a quintirectification reduces 5-faces to vertices, and so on.
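For example (a standard three-dimensional case added for illustration): starting from the cube, rectification gives the cuboctahedron and birectification gives the octahedron, the cube's dual:
\{4,3\} \;\longrightarrow\; r\{4,3\} \ (\text{cuboctahedron}) \;\longrightarrow\; 2r\{4,3\} = \{3,4\} \ (\text{octahedron}).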
An extended Schläfli symbol can be used for representing rectified fo
|
https://en.wikipedia.org/wiki/Programmer%20art
|
Programmer art refers to temporary assets added by the programmer to test functionality.
When creating the graphics, speed is a priority and aesthetics are secondary (if they are given any consideration at all). In fact, programmer art might be intentionally bad, to draw attention to the fact that the graphics are merely placeholders and should not be shipped with the final product. This practice might also speed its replacement.
Common forms of programmer art include stick figure sprites in platformer games, and fuchsia textures in games using 3D models. Games with a "top-down" perspective tend to use alphanumeric characters and simple 2D graphics to represent characters and landscape elements.
Not all programmers decide to replace the assets in their software prior to release, though. This is especially common in indie games, since indie developers generally lack the resources to commission large amounts of assets for their games.
Computer art
|
https://en.wikipedia.org/wiki/Benjamin%20syndrome
|
Benjamin syndrome is a type of multiple congenital anomaly/intellectual disability (MCA/MR) syndrome. It is characterized by hypochromic anemia with intellectual disability and various craniofacial and other anomalies. It can also include heart murmur, dental caries and splenic tumors.
It was first described in the medical literature in 1911. Symptoms include megalocephaly, external ear deformities, dental caries, micromelia, hypoplastic bone deformities, hypogonadism, hypochromic anemia with occasional tumors, and intellectual disability.
|
https://en.wikipedia.org/wiki/Formation%20matrix
|
In statistics and information theory, the expected formation matrix of a likelihood function L(θ) is the matrix inverse of the Fisher information matrix of L(θ), while the observed formation matrix of L(θ) is the inverse of the observed information matrix of L(θ).
Currently, no notation for dealing with formation matrices is widely used, but in books and articles by Ole E. Barndorff-Nielsen and Peter McCullagh, the symbol j^{ij} is used to denote the element in the i-th row and j-th column of the observed formation matrix. The geometric interpretation of the Fisher information matrix (metric) leads to a notation of g^{ij}, following the notation of the (contravariant) metric tensor in differential geometry. The Fisher information metric is denoted by g_{ij}, so that using Einstein notation we have g_{ik} g^{kj} = δ_i^j.
These matrices appear naturally in the asymptotic expansion of the distribution of many statistics related to the likelihood ratio.
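A small numerical illustration (a sketch assuming an i.i.d. normal model with parameters (μ, σ²); the model choice and numbers are illustrative and not from the article):

```python
import numpy as np

def fisher_information_normal(n, sigma2):
    """Fisher information matrix for n i.i.d. observations from N(mu, sigma2),
    parametrised by (mu, sigma2)."""
    return np.array([[n / sigma2, 0.0],
                     [0.0, n / (2.0 * sigma2 ** 2)]])

n, sigma2 = 50, 4.0
I = fisher_information_normal(n, sigma2)

# The expected formation matrix is the inverse of the Fisher information matrix;
# its diagonal entries give the asymptotic variances of the ML estimates of mu and sigma2.
formation = np.linalg.inv(I)
print(formation)
```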
See also
Fisher information
Shannon entropy
Notes
|
https://en.wikipedia.org/wiki/Convection%20heater
|
A convection heater (otherwise known as a convector heater) is a heater that uses convection currents to heat and circulate air. These currents circulate throughout the body of the appliance and across its heating element. This process, following the principle of thermal conduction, heats up the air, reducing its density relative to colder air and causing it to rise.
As heated air molecules rise, they displace cooler air molecules down towards the heating appliance. The displaced cool air is heated as a result, decreases in density, rises, and repeats the cycle.
History
Ancient heating systems, including hearths, furnaces, and stoves, operated primarily through convection. Fixed central hearths, first excavated in Greece, date back to 2500 BC, while crude fireplaces were used as early as the 800s AD; by the 13th century, castles in Europe were built with fireplaces incorporating a crude form of chimney.
Developments in convection heating technology included the publication of the very first manual on fireplace design called Mechanique du Feu in 1713, the creation of stoves with thermostatic control in 1849, and the rise of numerous cast iron stove manufacturers during the American Civil War.
The Model "S", illustrated by the Sala Heater & Mantel Co. in Dallas, Texas in 1924, is an example of an early model of a convection space heater. This model consisted of three stoves and was considered to be a highly efficient radiant type of gas heater at the time. It utilized radiant heat, and supplemented its power by drawing cold air through the facing, heating it, and forcing it out through the register. This allowed air circulation while maintaining a cool exterior on the appliance.
These early developments, along with the technological advancements made possible by electricity and inventions of tools like thermostats, gave way for the design of modern convection heaters.
Types
Convection heaters are commonly classified according to their
|
https://en.wikipedia.org/wiki/Lib.ru
|
Lib.ru, also known as Maksim Moshkow's Library (it started operating in November 1994), is the oldest electronic library in the Russian Internet segment.
Founded and supported by Maksim Moshkow, it receives contributions mainly from users who send texts they scanned and processed (OCR, proofreading). This method of acquisition provides the library a broad and efficient augmentability, though sometimes it adversely affects the quality (errors, omissions).
The structure of the library includes a section where users can publish their own literary texts (the "Samizdat" journal, named after the samizdat of the Soviet era), a project for music publishing ("Music hosting"), a travel notes project ("Foreign countries") and some other sections.
Maksim Moshkow's Library received several Ru-net Awards, including the National Internet Award (2003).
The headline on the site Lib.ru says "With support from the Federal Press and Mass Communications Agency". According to Moshkow, his project received $35,000 from that organization in September 2005, which indicates some level of government support for the online publishing of in-copyright works.
Maksim Moshkow's project could be compared to some Wikimedia Foundation projects and is sometimes referred to as Russia's Project Gutenberg.
KM Online vs. Maksim Moshkow's Library
On 1 April 2004, the "KM Online" media company, which is known for forming its own library by copying texts from the other electronic libraries, issued a lawsuit against Maksim Moshkow's Library in the name of Eduard Gevorkian, Marina Alekseyeva (pen-name "Alexandra Marinina"), Vasili Golovachov and Elena Katasonova. It was later discovered that only Gevorkian had had real claims against Moshkow. Moshkow's lawyer was Andrey Mironov from the Artemy Lebedev Studio, while KM's interests were presented by the so-called "National Society for Digital Technologies (NOCIT)".
This case became a precedent in the Russian legal practice which illustrated pressure on an electro
|
https://en.wikipedia.org/wiki/Pure%20spinor
|
In the domain of mathematics known as representation theory, pure spinors (or simple spinors) are spinors that are annihilated, under the Clifford algebra representation, by a maximal isotropic subspace of a vector space V with respect to a scalar product Q.
They were introduced by Élie Cartan in the 1930s and further developed by Claude Chevalley.
They are a key ingredient in the study of spin structures and higher dimensional generalizations of twistor theory, introduced by Roger Penrose in the 1960s.
They have been applied to the study of supersymmetric Yang-Mills theory in 10D, superstrings, generalized complex structures
and parametrizing solutions of integrable hierarchies.
Clifford algebra and pure spinors
Consider a complex vector space V, with either even dimension 2n or odd dimension 2n + 1, and a nondegenerate complex scalar product Q, with values Q(u, v) on pairs of vectors (u, v).
The Clifford algebra Cl(V, Q) is the quotient of the full tensor algebra
on V by the ideal generated by the relations
u \otimes v + v \otimes u = 2 Q(u, v), \qquad u, v \in V.
Spinors are modules of the Clifford algebra, and so in particular there is an action of the
elements of V on the space of spinors. The complex subspace V_ψ ⊂ V that annihilates
a given nonzero spinor ψ has dimension m ≤ n. If m = n then ψ is said to be a pure spinor. In terms of stratification of spinor modules by orbits of the spin group Spin(V, Q), pure spinors correspond to the smallest orbits, which are the Shilov boundary of the stratification by the orbit types of the spinor representation on the irreducible spinor (or half-spinor) modules.
Pure spinors, defined up to projectivization, are called projective pure spinors. For V of even dimension 2n, the space of projective pure spinors is the homogeneous space
SO(2n)/U(n); for V of odd dimension 2n + 1, it is SO(2n + 1)/U(n).
Irreducible Clifford module, spinors, pure spinors and the Cartan map
The irreducible Clifford/spinor module
Following Cartan and Chevalley,
we may view V as a direct sum
V = W \oplus W^{*},
where W is a totally isotropic subspace of dimension n, and W^{*} is its dual space, wit
|
https://en.wikipedia.org/wiki/Dangerously%20irrelevant%20operator
|
In statistical mechanics and quantum field theory, a dangerously irrelevant operator (or dangerous irrelevant operator) is an operator which is irrelevant at a renormalization group fixed point, yet affects the infrared (IR) physics significantly (e.g. because the vacuum expectation value (VEV) of some field depends sensitively upon the coefficient of this operator).
Critical phenomena
In the theory of critical phenomena, free energy of a system near the critical point depends analytically on the coefficients of generic (not dangerous) irrelevant operators, while the dependence on the coefficients of dangerously irrelevant operators is non-analytic ( p. 49).
The presence of dangerously irrelevant operators leads to the violation of the hyperscaling relation between the critical exponents α and ν in d > 4 dimensions. The simplest example ( p. 93) is the critical point of the Ising ferromagnet in d > 4 dimensions, which is a gaussian theory (a free massless scalar φ), but the leading irrelevant perturbation φ⁴ is dangerously irrelevant. Another example occurs for the Ising model with random-field disorder, where the fixed point occurs at zero temperature, and the temperature perturbation is dangerously irrelevant ( p. 164).
Quantum field theory
Let us suppose there is a field with a potential depending upon two parameters, and .
Let us also suppose that is positive and nonzero and > . If is zero, there is no stable equilibrium. If the scaling dimension of is , then the scaling dimension of is where is the number of dimensions. It is clear that if the scaling dimension of is negative, is an irrelevant parameter. However, the crucial point is, that the VEV
.
depends very sensitively upon , at least for small values of . Because the nature of infrared physics also depends upon the VEV, it looks very different even for a tiny change in not because the physics in the vicinity of changes much — it hardly changes at all — but because the VEV we are expanding about has ch
|
https://en.wikipedia.org/wiki/Spirou%20%28comics%29
|
Spirou (Walloon for "squirrel" or "mischievous") is a Belgian comic strip character and protagonist in the comic strip series Spirou & Fantasio and Le Petit Spirou, and the eponymous character of the Belgian comic strip magazine Spirou.
History
The character was originally created by Robert Velter (Rob-Vel) for the launch of Spirou magazine in 1938.
Spirou was originally an elevator operator and bell-boy at the fictional Moustique Hotel. At some point he became a reporter for the eponymous magazine, though he remained dressed in his trademark red uniform.
Spirou's design was changed through the years by the various writers and artists who created his adventures but he has kept his spiky red-hair and clothes of the same colour even after ditching his hotel uniform.
Character
In contrast to Tintin, Spirou is more frequently shown doing some reporting in several of his adventures. While he and reporter colleague Fantasio occasionally pursue stories, in most cases they simply find themselves in the centre of adventures. An honest and brave young man of indeterminate age, he tries to fight injustice around him and help people. He is usually more level-headed than Fantasio, who always accompanies him, along with the pet squirrel Spip, and during the period of Franquin authorship, the Marsupilami.
Spin-off
A six-year-old version of Spirou is the star of the spin-off series Le Petit Spirou, which is concerned with his tribulations at school and the anatomy of girls. This later series and its star are generally acknowledged to have little in common with the old one.
In 2018, a theme park inspired by this character was opened in Monteux, France.
In popular culture
Spirou is part of the Comic Book Route and can be found at Place Sainctelette in Brussels.
|
https://en.wikipedia.org/wiki/Disintegration%20theorem
|
In mathematics, the disintegration theorem is a result in measure theory and probability theory. It rigorously defines the idea of a non-trivial "restriction" of a measure to a measure zero subset of the measure space in question. It is related to the existence of conditional probability measures. In a sense, "disintegration" is the opposite process to the construction of a product measure.
Motivation
Consider the unit square S = [0, 1] × [0, 1] in the Euclidean plane ℝ². Consider the probability measure μ defined on S by the restriction of two-dimensional Lebesgue measure λ² to S. That is, the probability of an event E ⊆ S is simply the area of E. We assume E is a measurable subset of S.
Consider a one-dimensional subset of S such as the line segment L_x = {x} × [0, 1]. L_x has μ-measure zero; every subset of L_x is a μ-null set; since the Lebesgue measure space is a complete measure space,
E \subseteq L_x \implies \mu(E) = 0.
While true, this is somewhat unsatisfying. It would be nice to say that μ "restricted to" L_x is the one-dimensional Lebesgue measure λ¹, rather than the zero measure. The probability of a "two-dimensional" event E could then be obtained as an integral of the one-dimensional probabilities of the vertical "slices" E ∩ L_x: more formally, if μ_x denotes one-dimensional Lebesgue measure on L_x, then
\mu(E) = \int_{0}^{1} \mu_{x}(E \cap L_{x}) \, \mathrm{d}x
for any "nice" E ⊆ S. The disintegration theorem makes this argument rigorous in the context of measures on metric spaces.
Statement of the theorem
(Hereafter, P(X) will denote the collection of Borel probability measures on a topological space X.)
The assumptions of the theorem are as follows:
Let Y and X be two Radon spaces (i.e. topological spaces such that every Borel probability measure on them is inner regular, e.g. separably metrizable spaces; in particular, every probability measure on such a space is outright a Radon measure).
Let μ ∈ P(Y).
Let π : Y → X be a Borel-measurable function. Here one should think of π as a function to "disintegrate" Y, in the sense of partitioning Y into {π⁻¹(x) : x ∈ X}. For example, for the motivating example above, one can define π : [0, 1] × [0, 1] → [0, 1], π(x, y) = x, which gives that π⁻¹(x) = {x} × [0, 1] = L_x, a slice we want to cap
|
https://en.wikipedia.org/wiki/Department%20of%20Defense%20Information%20Assurance%20Certification%20and%20Accreditation%20Process
|
The DoD Information Assurance Certification and Accreditation Process (DIACAP) is a deprecated United States Department of Defense (DoD) process meant to ensure companies and organizations applied risk management to information systems (IS). DIACAP defined a DoD-wide formal and standard set of activities, general tasks and a management structure process for the certification and accreditation (C&A) of a DoD IS which maintained the information assurance (IA) posture throughout the system's life cycle.
As of May 2015, the DIACAP was replaced by the "Risk Management Framework (RMF) for DoD Information Technology (IT)". Although re-accreditations via DIACAP continued through late 2016, systems that had not yet started accreditation by May 2015 were required to transition to the RMF processes. The DoD RMF aligns with the National Institute of Standards and Technology (NIST) Risk Management Framework (RMF).
History
DIACAP resulted from an NSA-directed shift in underlying security approaches. An interim version of the DIACAP was signed July 6, 2006, and superseded the interim DITSCAP guidance. The final version is called Department of Defense Instruction 8510.01, and was signed on March 12, 2014 (the previous version was dated November 28, 2007).
DODI 8500.01 Cybersecurity
http://www.dtic.mil/whs/directives/corres/pdf/850001_2014.pdf,
DODI 8510.01 Risk Management Framework (RMF) for DoD Information Technology (IT)
https://fas.org/irp/doddir/dod/i8510_01.pdf
DIACAP differed from DITSCAP in several ways—in particular, in its embrace of the idea of information assurance controls (defined in DoDD 8500.1 and DoDI 8500.2) as the primary set of security requirements for all automated information systems (AISs). Applicable IA Controls were assigned based on the system's mission assurance category (MAC) and confidentiality level (CL).
Process
System Identification Profile
DIACAP Implementation Plan
Validation
Certification Determination
DIACAP Scorecard
POA&M
Authorizatio
|
https://en.wikipedia.org/wiki/Ascending%20and%20Descending
|
Ascending and Descending is a lithograph print by the Dutch artist M. C. Escher first printed in March 1960. The original print measures . The lithograph depicts a large building roofed by a never-ending staircase. Two lines of identically dressed men appear on the staircase, one line ascending while the other descends. Two figures sit apart from the people on the endless staircase: one in a secluded courtyard, the other on a lower set of stairs. While most two-dimensional artists use relative proportions to create an illusion of depth, Escher here and elsewhere uses conflicting proportions to create the visual paradox.
Ascending and Descending was influenced by, and is an artistic implementation of, the Penrose stairs, an impossible object; Lionel Penrose had first published his concept in the February 1958 issue of the British Journal of Psychology. Escher developed the theme further in his print Waterfall, which appeared in 1961.
The two concentric processions on the stairs use enough people to emphasise the lack of vertical rise and fall. In addition, the shortness of the tunics worn by the people makes it clear that some are stepping up and some are stepping down.
The structure is embedded in human activity. By showing an unaccountable ritual of what Escher calls an 'unknown' sect, Escher has added an air of mystery to the people who ascend and descend the stairs. Therefore, the stairs themselves tend to become incorporated into that mysterious appearance.
There are 'free' people and Escher said of these: 'recalcitrant individuals refuse, for the time being, to take part in the exercise of treading the stairs. They have no use for it at all, but no doubt, sooner or later they will be brought to see the error of their non-conformity.'
Escher suggests that not only the labours, but the very lives of these monk-like people are carried out in an inescapable, coercive and bizarre environment. Another possible source for the look of the people is the Dutch idio
|
https://en.wikipedia.org/wiki/Ternary%20complex
|
A ternary complex is a protein complex containing three different molecules that are bound together. In structural biology, ternary complex can also be used to describe a crystal containing a protein with two small molecules bound, for example a cofactor and a substrate, or a complex formed between two proteins and a single substrate. In immunology, ternary complex can refer to the MHC–peptide–T-cell-receptor complex formed when T cells recognize epitopes of an antigen.
Another example occurs in eukaryotic translation, where a ternary complex is formed from eIF-3 and eIF-2 together with the 40S ribosomal subunit and the initiator tRNA (tRNAi).
A ternary complex can be a complex formed between two substrate molecules and an enzyme. This is seen in multi-substrate enzyme-catalyzed reactions where two substrates and two products can be formed. The ternary complex is an intermediate between the product formation in this type of enzyme-catalyzed reactions. An example for a ternary complex is seen in random-order mechanism or a compulsory-order mechanism of enzyme catalysis for multi substrates.
The term ternary complex can also refer to a polymer formed by electrostatic interactions.
|
https://en.wikipedia.org/wiki/Ephemeral%20plant
|
An ephemeral plant is a plant with a very short life cycle or very short period of active growth, often one that grows only during brief periods when conditions are favorable. Several types of ephemeral plants exist. The first, spring ephemeral, refers to perennial plants that emerge quickly in the spring and die back to their underground parts after a short growth and reproduction phase. Desert ephemerals are plants which are adapted to take advantage of the short wet periods in arid climates. Mud-flat ephemerals take advantage of short periods of low water. In areas subjected to recurring human disturbance, such as plowing, weedy ephemerals are very short-lived plants whose entire life cycle takes less than a growing season. In each case, the species has a life cycle timed to exploit a short period when resources are freely available. An evergreen plant could be considered the opposite of an ephemeral plant.
Spring ephemerals
Spring ephemerals are perennial woodland wildflowers which develop aerial parts (i.e. stems, leaves, and flowers) early each spring and then quickly bloom and produce seed. The leaves often wither, leaving only underground structures (i.e. roots, rhizomes, and bulbs) for the remainder of the year. This strategy is very common in herbaceous communities of deciduous forests as it allows small herbaceous plants to take advantage of the high levels of sunlight reaching the forest floor prior to the formation of a canopy by woody plants. Examples include: spring beauties, trilliums, harbinger of spring, and the genus Dicentra, particularly D. cucullaria (Dutchman's breeches) and D. canadensis (squirrel corn).
In the herb layer of beech forest and hornbeam-sessile oak forest, tuberous, bulbous and rhizomous plants are abundant. They comprise the spring geophytes (tuberous, bulbous and rhizomous).
Desert ephemerals
Desert ephemerals, such as Arabidopsis thaliana, are plants which are adapted to take advantage of the very short favou
|
https://en.wikipedia.org/wiki/Molluscivore
|
A molluscivore is a carnivorous animal that specialises in feeding on molluscs such as gastropods, bivalves, brachiopods and cephalopods. Known molluscivores include numerous predatory (and often cannibalistic) molluscs (e.g. octopuses, murexes, decollate snails and oyster drills), arthropods such as crabs and firefly larvae, and vertebrates such as fish, birds and mammals. Molluscivory is performed in a variety of ways, with some animals highly adapted to this method of feeding. A similar behaviour, durophagy, describes the feeding of animals that consume hard-shelled or exoskeleton-bearing organisms, such as corals, shelled molluscs, or crabs.
Description
Molluscivory can be performed in several ways:
In some cases, the mollusc prey are simply swallowed entire, including the shell, whereupon the prey is killed through suffocation and/or exposure to digestive enzymes. Only cannibalistic sea slugs, snail-eating cone shells of the taxon Coninae, and some sea anemones use this method.
One method, used especially by vertebrate molluscivores, is to break the shell: by exerting force on it until it breaks, often by biting it (as with oyster crackers, mosasaurs, and placodonts), by hammering at it (e.g. oystercatchers and crabs), or by simply dashing the mollusc on a rock (e.g. song thrushes, gulls, and sea otters).
Another method is to remove the shell from the prey. Molluscs are attached to their shell by strong muscular ligaments, making the shell's removal difficult. Molluscivorous birds, such as oystercatchers and the Everglades snail kite, insert their elongate beak into the shell to sever these attachment ligaments, facilitating removal of the prey. The carnivorous terrestrial pulmonate snail known as the "decollate snail" ("decollate" being a synonym for "decapitate") uses a similar method: it reaches into the opening of the prey's shell and bites through the muscles in the prey's neck, whereupon it immediately begins d
|
https://en.wikipedia.org/wiki/Free-air%20concentration%20enrichment
|
Free-Air Carbon dioxide Enrichment (FACE) is a method used by ecologists and plant biologists that raises the concentration of CO2 in a specified area and allows the response of plant growth to be measured. Experiments using FACE are required because most studies looking at the effect of elevated CO2 concentrations have been conducted in laboratories, where many factors are missing, including plant competition. Measuring the effect of elevated CO2 using FACE is a more natural way of estimating how plant growth will change in the future as the CO2 concentration rises in the atmosphere. FACE also allows the effect of elevated CO2 on plants that cannot be grown in small spaces (trees, for example) to be measured. However, FACE experiments carry significantly higher costs relative to greenhouse experiments.
Method
Horizontal or vertical pipes are placed in a circle around the experimental plot, which can be between 1 m and 30 m in diameter, and these emit CO2-enriched air around the plants. The concentration of CO2 is maintained at the desired level by placing sensors in the plot that feed back to a computer, which then adjusts the flow of CO2 from the pipes.
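The control scheme just described is essentially a closed feedback loop between the sensors and the gas supply. The following minimal Python sketch illustrates the idea only; the 550 ppm target, the gain, and the sensor and valve functions are invented placeholders, not values or interfaces from any actual FACE installation.

# Minimal proportional controller for maintaining a target CO2 concentration.
TARGET_PPM = 550.0   # hypothetical enrichment target
GAIN = 0.05          # illustrative proportional gain

def read_sensor_ppm():
    # Placeholder for a real CO2 sensor reading taken inside the plot.
    return 520.0

def adjust_co2_flow(current_flow, measured_ppm):
    # Increase the flow when below target, decrease it when above.
    error = TARGET_PPM - measured_ppm
    return max(0.0, current_flow + GAIN * error)

flow = 1.0  # arbitrary starting flow rate
for _ in range(10):  # ten control steps
    flow = adjust_co2_flow(flow, read_sensor_ppm())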
Usage
FACE circles have been used in parts of the United States in temperate forests and also in stands of aspen in Italy. The method is also utilized for agricultural research. For example, FACE circles have been used to measure the response of soybean plants to increased levels of ozone and carbon dioxide at research facilities at the University of Illinois at Urbana–Champaign. FACE technologies have yet to be implemented in old growth forests or in key biomes for carbon sequestration, such as tropical forests or boreal forests, and identifying future research priorities for these regions is considered an urgent concern.
Examples of this method being used globally include TasFACE, which is investigating the effects of elevated CO2 on a native grassland in Tasmania, Australia. The National Wheat FACE array is presently being es
|
https://en.wikipedia.org/wiki/Feza%20G%C3%BCrsey%20Institute
|
Feza Gürsey Institute () is a joint institute of Boğaziçi University and TÜBİTAK (Scientific and Technological Research Council of Turkey) on physics research, founded in 1983 by Erdal İnönü with the name Research Institute for Basic Sciences. It now continues as the Feza Gürsey Institute, having been renamed in honor of Feza Gürsey, a distinguished Turkish physicist. The institute is located within the Kandilli Campus of the Boğaziçi University in Istanbul, Turkey. Currently it hosts researchers in mathematics and theoretical physics.
External links
Feza Gürsey Institute, official website of the institute
Physics research institutes
Research institutes in Turkey
Scientific and Technological Research Council of Turkey
Boğaziçi University
|
https://en.wikipedia.org/wiki/Unique%20key
|
In relational database management systems, a unique key is a candidate key. All the candidate keys of a relation can uniquely identify the records of the relation, but only one of them is used as the primary key of the relation. The remaining candidate keys are called unique keys because they can uniquely identify a record in a relation. Unique keys can consist of multiple columns. Unique keys are also called alternate keys. Unique keys are an alternative to the primary key of the relation. In SQL, the unique keys have a UNIQUE constraint assigned to them in order to prevent duplicates (a duplicate entry is not valid in a unique column). Alternate keys may be used like the primary key when doing a single-table select or when filtering in a where clause, but are not typically used to join multiple tables.
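As a small illustration of how a UNIQUE constraint sits alongside a primary key, the sketch below uses Python's built-in sqlite3 module; the table, columns and values are invented for the example and are not taken from this article.

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# employee_number is the primary key; login_name is an alternate (unique) key.
cur.execute("""
    CREATE TABLE employee (
        employee_number INTEGER PRIMARY KEY,
        login_name      TEXT NOT NULL UNIQUE,
        full_name       TEXT
    )
""")

cur.execute("INSERT INTO employee VALUES (1, 'jsmith', 'Jane Smith')")
try:
    # This row violates the UNIQUE constraint on login_name and is rejected.
    cur.execute("INSERT INTO employee VALUES (2, 'jsmith', 'John Smith')")
except sqlite3.IntegrityError as err:
    print("Rejected duplicate:", err)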
Summary
Keys provide the means for database users and application software to identify, access and update information in a database table. There may be several keys in any given table. For example, in a table of employees, both employee number and login name are individually unique. The enforcement of a key constraint (i.e. a uniqueness constraint) in a table is also a data integrity feature of the database. The DBMS prevents updates that would cause duplicate key values and thereby ensures that tables always comply with the desired rules for uniqueness. Proper selection of keys when designing a database is therefore an important aspect of database integrity.
A relational database table may have one or more available unique keys (formally called candidate keys). One of those keys per table may be designated the primary key; other keys are called alternate keys.
Any key may consist of one or more attributes. For example, a Social Security Number might be a single attribute key for an employee; a combination of flight number and date might be a key consisting of two attributes for a scheduled flight.
There are several types of keys used in database modeling and i
|
https://en.wikipedia.org/wiki/Arjen%20Lenstra
|
Arjen Klaas Lenstra (born 2 March 1956, in Groningen) is a Dutch mathematician, cryptographer and computational number theorist. He is currently a professor at the École Polytechnique Fédérale de Lausanne (EPFL), where he heads the Laboratory for Cryptologic Algorithms.
Career
He studied mathematics at the University of Amsterdam. He is currently a professor at the EPFL (Lausanne), in the Laboratory for Cryptologic Algorithms, and previously worked for Citibank and Bell Labs.
Research
Lenstra is active in cryptography and computational number theory, especially in areas such as integer factorization. With Mark Manasse, he was the first to seek volunteers over the internet for a large-scale volunteer computing project. Such projects became more common after the factorization of RSA-129, a high-publicity distributed factoring success led by Lenstra along with Derek Atkins, Michael Graff and Paul Leyland. He was also a leader in the successful factorizations of several other RSA numbers.
Lenstra was also involved in the development of the number field sieve. With coauthors, he showed the great potential of the algorithm early on by using it to factor the ninth Fermat number, which was far out of reach of other factoring algorithms of the time. He has since been involved with several other number field sieve factorizations, including the current record, RSA-768.
Lenstra's most widely cited scientific result is the first polynomial time algorithm to factor polynomials with rational coefficients in the seminal paper that introduced the LLL lattice reduction algorithm with Hendrik Willem Lenstra and László Lovász.
Lenstra is also co-inventor of the XTR cryptosystem.
On 1 March 2005, Arjen Lenstra, Xiaoyun Wang, and Benne de Weger of Eindhoven University of Technology demonstrated construction of two X.509 certificates with different public keys and the same MD5 hash, a demonstrably practical hash collision. The construction included private keys f
|
https://en.wikipedia.org/wiki/Valentine%20Bargmann
|
Valentine "Valya" Bargmann (April 6, 1908 – July 20, 1989) was a German-American mathematician and theoretical physicist.
Biography
Born in Berlin, Germany, to a German Jewish family, Bargmann studied there from 1925 to 1933. After the National Socialist Machtergreifung, he moved to Switzerland to the University of Zürich where he received his Ph.D. under Gregor Wentzel.
He emigrated to the U.S., barely managing immigration acceptance as his German passport was to be revoked—with only two days of validity left.
At the Institute for Advanced Study in Princeton (1937–46) he worked as an assistant to Albert Einstein, publishing with him and Peter Bergmann on classical five-dimensional Kaluza–Klein theory (1941). He taught at Princeton University from 1946 for the rest of his career.
He pioneered understanding of the irreducible unitary representations of SL2(R) and the Lorentz group (1947). He further formulated the Bargmann–Wigner equations with Eugene Wigner (1948), for particles of arbitrary spin, building on the work of several theorists who pioneered quantum mechanics.
Bargmann's theorem (1954) on projective unitary representations of Lie groups gives a condition for when a projective unitary representation of a Lie group comes from an ordinary unitary representation of its universal cover.
Bargmann further discovered the Bargmann–Michel–Telegdi equation (1959), describing the relativistic precession of spin in electromagnetic fields; Bargmann's limit on the maximum number of quantum-mechanical bound states of a potential (1952);
the notion of Bargmann potentials for the radial Schrödinger equation, with bound states but no non-trivial scattering, which play a basic role in the theory of solitons; and the holomorphic representation in the Segal–Bargmann space (1961), including the Bargmann kernel.
Bargmann was elected a Fellow of the American Academy of Arts and Sciences in 1968. In 1978, he received the Wigner Medal, together with Wigner himself, in the founding year of the prize. In 1979, Bargmann was electe
|
https://en.wikipedia.org/wiki/Shell%20shoveling
|
Shell shoveling, in network security, is the act of redirecting the input and output of a shell to a service so that it can be remotely accessed, a reverse shell.
In computing, the most basic method of interfacing with the operating system is the shell. On Microsoft Windows based systems, this is a program called cmd.exe or COMMAND.COM. On Unix or Unix-like systems, it may be any of a variety of programs such as bash, ksh, etc. This program accepts commands typed from a prompt and executes them, usually in real time, displaying the results to what is referred to as standard output, usually a monitor or screen.
In the shell shoveling process, one of these programs is set to run (perhaps silently or without notifying someone observing the computer) accepting input from a remote system and redirecting output to the same remote system; therefore the operator of the shoveled shell is able to operate the computer as if they were present at the console.
See also
Console redirection
CTTY (DOS command)
Serial over LAN redirection (SOL)
|
https://en.wikipedia.org/wiki/Langer%20correction
|
The Langer correction, named after the mathematician Rudolf Ernest Langer, is a correction to the WKB approximation for problems with radial symmetry.
Description
In 3D systems
When applying the WKB approximation method to the radial Schrödinger equation,
-\frac{\hbar^2}{2m}\,\frac{d^2 u(r)}{dr^2} + V_{\text{eff}}(r)\,u(r) = E\,u(r),
where the effective potential is given by
V_{\text{eff}}(r) = V(r) + \frac{\hbar^2\,\ell(\ell+1)}{2mr^2}
(\ell is the azimuthal quantum number related to the angular momentum operator), the eigenenergies and the wave-function behaviour obtained are different from the real solution.
In 1937, Rudolf E. Langer suggested the correction
\ell(\ell+1) \;\to\; \left(\ell + \tfrac{1}{2}\right)^2,
which is known as the Langer correction or Langer replacement. This manipulation is equivalent to adding a constant 1/4 to \ell(\ell+1) wherever the centrifugal term \hbar^2\ell(\ell+1)/(2mr^2) appears. Heuristically, it is said that this factor arises because the range of the radial Schrödinger equation is restricted from 0 to infinity, as opposed to the entire real line. By such a change of the constant term in the effective potential, the results obtained by the WKB approximation reproduce the exact spectrum for many potentials. That the Langer replacement is correct follows from the WKB calculation of the Coulomb eigenvalues with the replacement, which reproduces the well-known result.
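As a sketch of that Coulomb check (standard textbook material, reconstructed here rather than quoted from this article), the radial WKB quantization condition for V(r) = -e^2/r with the Langer-corrected centrifugal term reads

\int_{r_1}^{r_2} \sqrt{\,2m\!\left(E + \frac{e^2}{r}\right) - \frac{\hbar^2\left(\ell+\tfrac12\right)^2}{r^2}}\; dr \;=\; \left(n_r + \tfrac12\right)\pi\hbar ,

and evaluating the integral between the classical turning points gives

E_n = -\,\frac{m e^4}{2\hbar^2 n^2}, \qquad n = n_r + \ell + 1,

i.e. exactly the Bohr spectrum.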
In 2D systems
Note that for 2D systems the effective potential takes the form
V_{\text{eff}}(r) = V(r) + \frac{\hbar^2\left(m_\ell^{\,2} - \tfrac14\right)}{2\mu r^2},
with \mu the mass and m_\ell the azimuthal quantum number, so the Langer correction reads
m_\ell^{\,2} - \tfrac14 \;\to\; m_\ell^{\,2}.
This manipulation is again equivalent to adding a constant 1/4 wherever m_\ell^{\,2} - \tfrac14 appears.
Justification
An even more convincing calculation is the derivation of Regge trajectories (and hence eigenvalues) of the radial Schrödinger equation with the Yukawa potential, both by a perturbation method (with the old factor \ell(\ell+1)) and, independently, by the WKB method (with the Langer replacement), in both cases even to higher orders. For the perturbation calculation see the book by Müller-Kirsten, and for the WKB calculation see Boukema.
See also
Einstein–Brillouin–Keller method
|
https://en.wikipedia.org/wiki/Hormonal%20imprinting
|
Hormonal imprinting (HI) is a phenomenon which takes place at the first encounter between a hormone and its developing receptor in the critical periods of life (in unicellular organisms, during the whole life) and determines the later signal transduction capacity of the cell. The most important period in mammals is the perinatal one; however, this system can also be imprinted at weaning, at puberty and, in the case of continuously dividing cells, during the whole life. Faulty imprinting is caused by drugs, environmental pollutants and other hormone-like molecules present in excess at the critical periods, with lifelong receptorial, morphological, biochemical and behavioral consequences. HI is transmitted to hundreds of progeny generations in unicellular organisms and, as has been demonstrated, to a few generations also in mammals.
|
https://en.wikipedia.org/wiki/Science%20Citation%20Index%20Expanded
|
The Science Citation Index Expanded – previously titled Science Citation Index – is a citation index originally produced by the Institute for Scientific Information (ISI) and created by Eugene Garfield.
It was officially launched in 1964 and is now owned by Clarivate (previously the Intellectual Property and Science business of Thomson Reuters).
The indexing database covers more than 9,200 notable and significant journals, across 178 disciplines, from 1900 to the present. These are alternatively described as the world's leading journals of science and technology, because of a rigorous selection process.
Accessibility
The index is available online within Web of Science, as part of its Core Collection (there are also CD and printed editions, covering a smaller number of journals). The database allows researchers to search through over 53 million records from thousands of academic journals that were published by publishers from around the world.
Specialty citation indexes
Clarivate previously marketed several subsets of this database, termed "Specialty Citation Indexes", such as the Neuroscience Citation Index and the Chemistry Citation Index, however these databases are no longer actively maintained.
The Chemistry Citation Index was first introduced by Eugene Garfield, a chemist by training. His original "search examples were based on [his] experience as a chemist". In 1992, an electronic and print form of the index was derived from a core of 330 chemistry journals, within which all areas were covered. Additional information was provided from articles selected from 4,000 other journals. All chemistry subdisciplines were covered: organic, inorganic, analytical, physical chemistry, polymer, computational, organometallic, materials chemistry, and electrochemistry.
By 2002, the core journal coverage increased to 500 and related article coverage increased to 8,000 other journals.
One 1980 study reported the overall citation indexing benefits for chemistry, examining
|
https://en.wikipedia.org/wiki/IPv4%20address%20exhaustion
|
IPv4 address exhaustion is the depletion of the pool of unallocated IPv4 addresses. Because the original Internet architecture had fewer than 4.3 billion addresses available, depletion has been anticipated since the late 1980s, when the Internet started experiencing dramatic growth. This depletion is one of the reasons for the development and deployment of its successor protocol, IPv6. IPv4 and IPv6 coexist on the Internet.
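To make the figure concrete, the size of the IPv4 address space can be computed directly. The short Python sketch below, using only the standard ipaddress module, is an illustration rather than part of any cited analysis.

import ipaddress

# The entire IPv4 address space is the /0 network: 2**32 addresses.
all_ipv4 = ipaddress.ip_network("0.0.0.0/0")
print(all_ipv4.num_addresses)   # 4294967296, i.e. roughly 4.3 billion

# A single legacy class A (/8) allocation alone covers 2**24 addresses.
class_a = ipaddress.ip_network("10.0.0.0/8")
print(class_a.num_addresses)    # 16777216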
The IP address space is managed globally by the Internet Assigned Numbers Authority (IANA), and by five regional Internet registries (RIRs) responsible in their designated territories for assignment to end users and local Internet registries, such as Internet service providers. The main market forces that accelerated IPv4 address depletion included the rapidly growing number of Internet users, always-on devices, and mobile devices.
The anticipated shortage has been the driving factor in creating and adopting several new technologies, including network address translation (NAT), Classless Inter-Domain Routing (CIDR) in 1993, and IPv6 in 1998.
The top-level exhaustion occurred on 31 January 2011. All RIRs have exhausted their address pools, except those reserved for IPv6 transition; this occurred on 15 April 2011 for the Asia-Pacific (APNIC), on 10 June 2014 for Latin America and the Caribbean (LACNIC), on 24 September 2015 for North America (ARIN), on 21 April 2017 for Africa (AfriNIC), and on 25 November 2019 for Europe, Middle East and Central Asia (RIPE NCC). These RIRs still allocate recovered addresses or addresses reserved for a special purpose. Individual ISPs still have pools of unassigned IP addresses, and could recycle addresses no longer needed by subscribers.
Vint Cerf co-created TCP/IP thinking it was an experiment and has admitted he thought 32 bits was enough.
IP addressing
Every node of an Internet Protocol (IP) network, such as a computer, router, or network printer, is assigned an IP address for each network interface, used
|
https://en.wikipedia.org/wiki/Huberta%20%28hippopotamus%29
|
Huberta (initially named Hubert; the sex was discovered after death) was a hippopotamus which travelled for a large distance across South Africa. In November 1928, Huberta left her waterhole in the St. Lucia Estuary in Zululand and over the next three years, travelled to the Eastern Cape. In that time, Huberta became a minor celebrity in South Africa and attracted crowds wherever she went. She was initially thought to be a male and was nicknamed Hubert by the press. The first report in the press was on 23 November 1928 in the Natal Mercury and reported the appearance of a hippo in Natal. The report was accompanied by the only photograph of Huberta in life.
Huberta stopped for a while at the mouth of the Mhlanga River about north of Durban, and a failed attempt was made to capture her and put her in Johannesburg Zoo. After this, she headed south to Durban where she visited a beach and a country club. Moving on to the Umgeni River, she became revered by Zulus and Xhosas alike.
Finally, Huberta arrived in East London in March 1931. Despite having been declared protected royal game by the Natal Provincial Council, she was shot by farmers a month later. After a public outcry, the farmers were arrested and fined £25. Huberta's body was recovered and sent to a taxidermist in London. Upon her return to South Africa in 1932, she was greeted by 20,000 people and was displayed at the Amathole Museum (previously known as the Kaffrarian Museum) in King William's Town.
Huberta is the subject of the children's book Hubert The Traveling Hippopotamus by Edmund Lindop and illustrated by Jane Carlson. The book was published in 1961 by Little, Brown and Company.
|
https://en.wikipedia.org/wiki/Visite%20du%20Branchage
|
A Visite du Branchage is an inspection of roads in Jersey and Guernsey to ensure property owners have complied with the laws against vegetation encroaching onto the road.
Jersey
The Visite du Branchage takes place in each parish twice a year to check that occupiers of houses and land bordering on public roads have undertaken the 'branchage'.
The Loi (1914) sur la Voirie imposes a duty on all occupiers of property to ensure that encroachments are removed from the public highway.
The first Visite takes place between 21 June and 11 July, and the second between 1 and 21 September.
On the Visite du Branchage the connétable, assisted by the members of the Roads Committee, the Roads Inspectors and the centeniers, visits the roads of his parish accompanied by the vingteniers in their respective Vingtaines to ensure that the branchage has been completed. Occupiers of land may be fined up to £50 for each infraction unless:
the 'branchage' [hedges, branches and overhanging trees] has been trimmed back so as to give a clearance of 12 feet over main roads and by-roads;
the 'branchage' [hedges, branches and overhanging trees] has been trimmed back so as to give a clearance of 8 feet over footpaths; and all trimmings have been removed from the road.
If the branchage has not been completed the occupier will be required to undertake the work and, if it is not carried out, the parish may arrange for the work to be done and charge the occupier the cost of that work.
The Visite du Branchage applies to all public roads including main roads, by-roads and footpaths.
The Branchage Film Festival takes its name from the Visite du Branchage.
|
https://en.wikipedia.org/wiki/History%20of%20ancient%20numeral%20systems
|
Number systems have progressed from the use of fingers and tally marks, perhaps more than 40,000 years ago, to the use of sets of glyphs able to represent any conceivable number efficiently. The earliest known unambiguous notations for numbers emerged in Mesopotamia about 5000 or 6000 years ago.
Prehistory
Counting initially involves the fingers, given that digit-tallying is common in number systems that are emerging today, as is the use of the hands to express the numbers five and ten. In addition, the majority of the world's number systems are organized by tens, fives, and twenties, suggesting the use of the hands and feet in counting, and cross-linguistically, terms for these amounts are etymologically based on the hands and feet. Finally, there are neurological connections between the parts of the brain that appreciate quantity and the part that "knows" the fingers (finger gnosia), and these suggest that humans are neurologically predisposed to use their hands in counting. While finger-counting is typically not something that preserves archaeologically, some prehistoric hand stencils have been interpreted as finger-counting since of the 32 possible patterns the fingers can produce, only five (the ones typically used in counting from one to five) are found at Cosquer Cave, France.
Since the capacity and persistence of the fingers are limited, finger-counting is typically supplemented by means of devices with greater capacity and persistence, including tallies made of wood or other materials. Possible tally marks made by carving notches in wood, bone, and stone appear in the archaeological record at least forty thousand years ago. These tally marks may have been used for counting time, such as numbers of days or lunar cycles, or for keeping records of quantities, such as numbers of animals or other valuable commodities. However, there is currently no diagnostic technique that can reliably determine the social purpose or use of prehistoric linear marks inscribe
|
https://en.wikipedia.org/wiki/Langhans%20giant%20cell
|
Langhans giant cells (LGC) are giant cells found in granulomatous conditions.
They are formed by the fusion of epithelioid cells (macrophages), and contain nuclei arranged in a horseshoe-shaped pattern in the cell periphery.
Although traditionally their presence was associated with tuberculosis, they are not specific for tuberculosis or even for mycobacterial disease. In fact, they are found in nearly every form of granulomatous disease, regardless of etiology.
Terminology
Langhans giant cells are named after Theodor Langhans (1839–1915), a German pathologist.
Causes
In 2012, a research paper showed that when activated CD4+ T cells and monocytes are in close contact, interaction of CD40-CD40L between these two cells and subsequent IFNγ secretion by the T cells causes upregulation and secretion of fusion-related molecule DC-STAMP (dendritic cell-specific transmembrane protein) by the monocytes, which results in LGC formation.
Clinical significance
Langhans giant cells are often found in transbronchial lung biopsies or lymph node biopsies in patients with sarcoidosis. They are also commonly found in tuberculous granulomas of tuberculosis.
|
https://en.wikipedia.org/wiki/Karen%20Ashe
|
Karen K. Hsiao Ashe is a professor at the Department of Neurology and Neuroscience at the University of Minnesota (UMN) Medical School, where she holds the Edmund Wallace and Anne Marie Tulloch Chairs in Neurology and Neuroscience. She is the founding director of the N. Bud Grossman Center for Memory Research and Care, and her specific research interest is memory loss resulting from Alzheimer's disease and related dementias. Her research has included the development of an animal model of Alzheimer's.
In July 2022, concerns were raised that certain images in a 2006 Nature paper co-authored by Ashe's postdoctoral student Sylvain Lesné had been manipulated. In May 2023, the Star Tribune reported that Ashe was using new techniques to redo the work reported in the 2006 Nature study, and that she stated "it's my responsibility to establish the truth of what we've published".
Personal life and education
Ashe's parents came to the United States from China to pursue PhDs; her father, C.C. Hsiao, taught aerospace engineering at the University of Minnesota, and her mother, Joyce, was a biochemist. She has three younger siblings.
Attending the St. Paul Academy and Summit School in the 1970s, Ashe's interest in the brain began in primary school, where she excelled in math, along with music. She obtained her undergraduate degree at Harvard University in 1975 in chemistry and physics, starting as a sophomore at the age of 17. She went on to earn her PhD in brain and cognitive sciences at MIT in 1981 and her MD from Harvard in 1982.
Ashe's husband, James, is a neurologist; she has three children (two sons and a daughter).
Professional life
Early career
Between 1986 and 1989, she was a post-doctoral fellow at the University of California, San Francisco where she researched prion diseases and published with Stanley Prusiner. In 1989, she was the first author on a paper published in Nature, entitled "Linkage of a prion protein missense variant to Gerstmann‑Sträussler syndrome
|
https://en.wikipedia.org/wiki/Adulteration%20of%20Coffee%20Act%201718
|
The Adulteration of Coffee Act 1718 (5 Geo. 1. c. 11) was an Act of Parliament of the Parliament of Great Britain concerning the adulteration of coffee, which made it illegal to debase coffee.
History
It was passed in 1718. The Act provided a penalty of "against divers [diverse] evil-disposed persons who at the time or soon after roasting of coffee, make use of water, grease, butter, or such like material whereby the same is made unwholesome and greatly increased in weight, to the prejudice of His Majesty's Revenue, the health of his subjects, and to the loss of all fair and honest dealers."
When coffee fell out of fashion, in favour of tea, a similar law was then introduced, the Adulteration of Tea Act 1776.
In 1854 Viscount Torrington, recently Governor of Ceylon, presented a petition to similar, reinforcing effect, namely to counter the use of chicory for mixing, coffee being by 1854 subject to a duty of 75% on top of the London market price; he stressed that another piece of legislation had a strong effect. He also mentioned coffee as the main export item of Ceylon at that time. This reinforcement was the Act 43 Geo. 3. c. 129 (the Excise Act 1803), under which no vegetable substance resembling coffee was permitted on the premises of licensed coffee dealers.
The Act was repealed by the Statute Law Revision Act 1958.
See also
|
https://en.wikipedia.org/wiki/Courant%20bracket
|
In a field of mathematics known as differential geometry, the Courant bracket is a generalization of the Lie bracket from an operation on the tangent bundle to an operation on the direct sum of the tangent bundle and the vector bundle of p-forms.
The case p = 1 was introduced by Theodore James Courant in his 1990 doctoral dissertation as a structure that bridges Poisson geometry and pre-symplectic geometry, based on work with his advisor Alan Weinstein. The twisted version of the Courant bracket was introduced in 2001 by Pavol Severa, and studied in collaboration with Weinstein.
Today a complex version of the p=1 Courant bracket plays a central role in the field of generalized complex geometry, introduced by Nigel Hitchin in 2002. Closure under the Courant bracket is the integrability condition of a generalized almost complex structure.
Definition
Let X and Y be vector fields on an N-dimensional real manifold M and let ξ and η be p-forms. Then X+ξ and Y+η are sections of the direct sum of the tangent bundle and the bundle of p-forms. The Courant bracket of X+ξ and Y+η is defined to be
[X+\xi,\,Y+\eta] \;=\; [X,Y] + \mathcal{L}_X\eta - \mathcal{L}_Y\xi - \tfrac{1}{2}\,d\left(i_X\eta - i_Y\xi\right),
where [X,Y] is the Lie bracket of vector fields, \mathcal{L}_X is the Lie derivative along the vector field X, d is the exterior derivative and i is the interior product.
Properties
The Courant bracket is antisymmetric but it does not satisfy the Jacobi identity for p greater than zero.
The Jacobi identity
However, at least in the case p=1, the Jacobiator, which measures a bracket's failure to satisfy the Jacobi identity, is an exact form. It is the exterior derivative of a form which plays the role of the Nijenhuis tensor in generalized complex geometry.
The Courant bracket is the antisymmetrization of the Dorfman bracket, which does satisfy a kind of Jacobi identity.
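For reference (a standard formula in the generalized-geometry literature, reconstructed here as a sketch rather than quoted from this article), the Dorfman bracket is

[X+\xi,\,Y+\eta]_D \;=\; [X,Y] + \mathcal{L}_X\eta - i_Y\,d\xi ,

and the Courant bracket above is recovered as its antisymmetrization, \tfrac12\left([a,b]_D - [b,a]_D\right) for sections a = X+\xi and b = Y+\eta.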
Symmetries
Like the Lie bracket, the Courant bracket is invariant under diffeomorphisms of the manifold M. It also enjoys an additional symmetry under the vector bundle automorphism
X+\xi \;\mapsto\; X + \xi + i_X\alpha ,
where α is a closed p+1-form. In the p=1 case, which is the relevant case f
|
https://en.wikipedia.org/wiki/Pluteus%20cervinus
|
Pluteus cervinus, commonly known as the deer shield, deer mushroom, or fawn mushroom, is a species of fungus in the order Agaricales. Fruit bodies are agaricoid (mushroom-shaped). Pluteus cervinus is saprotrophic and fruit bodies are found on rotten logs, roots, tree stumps, sawdust, and other wood waste. The species is common in Europe and eastern North America, but rare and possibly introduced in western North America.
Etymology
The species epithet, cervinus, means "deer-like" and refers to the colour of the cap (described as "rehfarbig" in Jacob Christian Schäffer's original 1774 description).
Description
The cap ranges from 3 to 15 cm in diameter. Initially it is bell-shaped and often wrinkled when young. Later it expands to a convex shape. The cap can be deer-brown, but varies from light ochre-brown to dark brown, with a variable admixture of grey or black. The centre of the cap may be darker. The cap surface is smooth and matt to silky-reflective. The cap skin shows dark radial fibres when seen through a lens, indicating that the microscopic cuticle structure is filamentous. The gills are initially white, but soon show a distinctive pinkish sheen, caused by the ripening spores. The stipe is 5–12 cm long and 0.5–2 cm in diameter, usually thicker at the base. It is white and covered with brown vertical fibrils. The flesh is soft and white. The fruit body has a mild to earthy radish smell and a mild taste at first, which may become slightly bitter.
Spores are elliptical, smooth and measure approximately 7.0–8.0 × 5.0–5.5 μm. Hyphae lack clamp connections. Cystidia are thick-walled with apical projections. The spore print is pinkish brown.
Pluteus cervinus grows on rotten wood and fruit bodies can be found most commonly in the autumn. They are said to be edible when young, but considered by some to be of poor quality and not often collected for the table.
Similar species
Similar species include Pluteus atromarginatus, which has a dark brown edge to the gills;
|
https://en.wikipedia.org/wiki/Stanford%20Institute%20for%20Theoretical%20Physics
|
The Stanford Institute for Theoretical Physics (SITP) is a research institute within the Physics Department at Stanford University. Led by 16 physics faculty members, the institute conducts research in High Energy and Condensed Matter theoretical physics.
Research
Research within SITP includes a strong focus on fundamental questions about the new physics underlying the Standard Models of particle physics and cosmology, and on the nature and applications of our basic frameworks (quantum field theory and string theory) for attacking these questions.
Principal areas of research include:
Biophysics
Condensed matter theory
Cosmology
Formal theory
Physics beyond the standard model
"Precision frontiers"
Quantum computing
Quantum gravity
Central questions include:
What governs particle theory beyond the scale of electroweak symmetry breaking?
How do string theory and holography resolve the basic puzzles of general relativity, including the deep issues arising in black hole physics and the study of cosmological horizons?
Which class of models of inflationary cosmology captures the physics of the early universe, and what preceded inflation?
Can physicists develop new techniques in quantum field theory and string theory to shed light on mysterious phases arising in many contexts in condensed matter physics (notably, in the high temperature superconductors)?
Faculty
Current faculty include:
Savas Dimopoulos, theorist focusing on physics beyond the standard model; winner of Sakurai Prize
Sebastian Doniach, condensed matter physicist
Daniel Fisher, biophysicist
Surya Ganguli, theoretical neuroscientist
Peter Graham, winner of 2017 New Horizons Prize
Sean Hartnoll, AdS/CFT, winner of New Horizons Prize
Patrick Hayden, quantum information theorist
Shamit Kachru, string theorist; Stanford Physics Department chair
Renata Kallosh, noted string theorist
Vedika Khemani, condensed matter theorist
Steven Kivelson, condensed matter theorist
Rober
|
https://en.wikipedia.org/wiki/Cray%20SV1
|
The Cray SV1 is a vector processor supercomputer from the Cray Research division of Silicon Graphics introduced in 1998. The SV1 has since been succeeded by the Cray X1 and X1E vector supercomputers. Like its predecessor, the Cray J90, the SV1 used CMOS processors, which lowered the cost of the system, and allowed the computer to be air-cooled. The SV1 was backwards compatible with J90 and Y-MP software, and ran the same UNIX-derived UNICOS operating system. The SV1 used Cray floating point representation, not the IEEE 754 floating point format used on the Cray T3E and some Cray T90 systems.
Unlike earlier Cray designs, the SV1 included a vector cache. It also introduced a feature called multi-streaming, in which one processor from each of four processor boards work together to form a virtual processor with four times the performance. The SV1 processor was clocked at 300 MHz. Later variants of the SV1, the SV1e and SV1ex, ran at 500 MHz, the latter also having faster memory and support for the SSD-I Solid-State Storage Device. Systems could include up to 32 processors with up to 512 shared memory buses.
Multiple SV1 cabinets could be clustered together using the GigaRing I/O channel, which also provided connection to HIPPI, FDDI, ATM, Ethernet and SCSI devices for network, disk, and tape services. In theory, up to 32 nodes could be clustered together, offering up to one teraflops in theoretical peak performance.
|
https://en.wikipedia.org/wiki/Integrated%20Ocean%20Observing%20System
|
The Integrated Ocean Observing System (IOOS) is an organization of systems that routinely and continuously provides quality controlled data and information on current and future states of the oceans and Great Lakes from the global scale of ocean basins to local scales of coastal ecosystems. It is a multidisciplinary system designed to provide data in forms and at rates required by decision makers to address seven societal goals.
IOOS is developing as a multi-scale system that incorporates two, interdependent components, a global ocean component, called the Global Ocean Observing System, with an emphasis on ocean-basin scale observations and a coastal component that focuses on local to Large Marine Ecosystem (LME) scales.
Large Marine Ecosystems (LMEs) in U.S. coastal waters and IOOS Regional Associations.
Many of IOOS' component regional systems are being dismantled for lack of federal funding, including the Gulf of Maine Ocean Observing System (GoMOOS). This has resulted in the loss of long-term data sets and information used by Coast Guard search and rescue operations.
Regional associations
The coastal component consists of Regional Coastal Ocean Observing Systems (RCOOSs) nested in a National Backbone of coastal observations. From a coastal perspective, the global ocean component is critical for providing data and information on basin scale forcings (e.g., ENSO events), as well as providing the data and information necessary to run coastal models (such as storm surge models).
Alaska Ocean Observing System (AOOS)
Central and Northern California Ocean Observing System (CeNCOOS)
Great Lakes Observing System (GLOS)
Gulf of Maine Ocean Observing System (GoMOOS)
Gulf of Mexico Coastal Ocean Observing System (GCOOS)
Pacific Islands Ocean Observing System (PacIOOS)
Mid-Atlantic Coastal Ocean Observing Regional Association (MACOORA)
Northwest Association of Networked Ocean Observing Systems (NANOOS)
Southern California Coastal Ocean Observing System (SCCOOS)
Southeast Coastal Ocean
|
https://en.wikipedia.org/wiki/Language%20localisation
|
Language localisation (or language localization) is the process of adapting a product's translation to a specific country or region. It is the second phase of a larger process of product translation and cultural adaptation (for specific countries, regions, cultures or groups) to account for differences in distinct markets, a process known as internationalisation and localisation.
Language localisation differs from translation activity because it involves a comprehensive study of the target culture in order to correctly adapt the product to local needs. Localisation can be referred to by the numeronym L10N (as in: "L", followed by the number 10, and then "N").
The localisation process is most generally related to the cultural adaptation and translation of software, video games, websites, and technical communication, as well as audio/voiceover, video, writing system, script or other multimedia content, and less frequently to any written translation (which may also involve cultural adaptation processes). Localisation can be done for regions or countries where people speak different languages or where the same language is spoken. For instance, different dialects of German, with different idioms, are spoken in Germany, Austria, Switzerland, and Belgium.
The overall process: internationalisation, globalisation, and localisation
The former Localization Industry Standards Association (LISA) said that globalisation "can best be thought of as a cycle rather than a single process". To globalise is to plan the design and development methods for a product in advance, keeping in mind a multicultural audience, in order to avoid increased costs and quality problems, save time, and smooth the localising effort for each region or country.
There are two primary technical processes that comprise globalisation: internationalisation and localisation.
The first phase, internationalisation, encompasses the planning and preparation stages for a product built to support global markets.
|
https://en.wikipedia.org/wiki/Rock-Ola
|
The Rock-Ola Manufacturing Corporation is an American developer and manufacturer of juke boxes and related machinery. It was founded in 1927 by Coin-Op pioneer David Cullen Rockola to manufacture slot machines, scales, and pinball machines. The firm later produced parking meters, furniture, arcade video games, and firearms, but became best known for its jukeboxes.
History
The Rock-Ola Scale Company was founded in 1927 by David Cullen Rockola to manufacture coin-operated entertainment machines. During the 1920s, Rockola was linked with Chicago organized crime and escaped a jail sentence by turning State's Evidence. Mr. Rockola added the hyphen because people often mispronounced his name. The name was changed to Rock-Ola Manufacturing Corporation in 1932. The company successfully expanded its production line through the Great Depression to include furniture. Starting in 1935, Rock-Ola sold more than 400,000 jukeboxes under the Rock-Ola brand name, which predated the rock and roll era by two decades, and is thought to have inspired the term.
Rock-Ola became a prime contractor for production of the M1 carbine for the US Military during World War II. Rock-Ola machined receivers, barrels, bolts, firing pins, extractors, triggers, trigger housings, sears, operating slides, gas cylinders, and recoil plates. Rock-Ola used its furniture machinery to manufacture stocks and handguards for its own production and for other prime contractors, and subcontracted production of other machined parts. Rock-Ola delivered 228,500 military carbines at $58 each before contracts were cancelled on May 31, 1944. Rock-Ola also produced approximately sixty "presentation" carbines as gifts to company executives and other officials. Presentation carbines were finished in polished blue rather than the dull Parkerizing used on military weapons, and were accompanied by a custom-made wooden case including the name of the recipient engraved on a brass plate. Some of the presentation carbines had no
|
https://en.wikipedia.org/wiki/Alpha%20toxin
|
Alpha toxin or alpha-toxin refers to several different protein toxins produced by bacteria, including:
Staphylococcus aureus alpha toxin, a membrane-disrupting toxin that creates pores causing hemolysis and tissue damage
Clostridium perfringens alpha toxin, a membrane-disrupting toxin with phospholipase C activity, which is directly responsible for gas gangrene and myonecrosis
Pseudomonas aeruginosa alpha toxin
Bacterial toxins
|
https://en.wikipedia.org/wiki/Exogenous%20bacteria
|
Exogenous bacteria are microorganisms introduced to closed biological systems from the external world. They exist in aquatic and terrestrial environments, as well as the atmosphere. Microorganisms in the external environment have existed on Earth for 3.5 billion years. Exogenous bacteria can be either benign or pathogenic. Pathogenic exogenous bacteria can enter a closed biological system and cause disease, such as cholera, which is induced by a waterborne microbe that infects the human intestine. Exogenous bacteria can also be introduced into a closed ecosystem and have mutualistic benefits for both the microbe and the host. A prominent example of this concept is the bacterial flora, which consists of exogenous bacteria ingested and endogenously colonized during the early stages of life. Bacteria that are part of normal internal ecosystems, also known as the bacterial flora, are called endogenous bacteria. A significant number of prominent diseases are induced by exogenous bacteria, such as gonorrhea, meningitis, tetanus, and syphilis. Pathogenic exogenous bacteria can enter a host via cutaneous transmission, inhalation, and consumption.
Difference with endogenous bacteria
Only a minority of bacterial species cause disease in humans; many species colonize the human body to create an ecosystem known as the microbiota. Bacterial flora is endogenous bacteria, defined as bacteria that naturally reside in a closed system. Disease can occur when microbes included in the normal bacterial flora enter a sterile area of the body such as the brain or muscle. This is considered an endogenous infection. A prime example of this is when the resident bacterium E. coli of the GI tract enters the urinary tract, causing a urinary tract infection. Infections caused by exogenous bacteria occur when microbes that are noncommensal enter a host. These microbes can enter a host via inhalation of aerosolized bacteria, ingestion of contaminated or ill-prepared foods, sexual activi
|
https://en.wikipedia.org/wiki/Allergic%20inflammation
|
Allergic inflammation is an important pathophysiological feature of several disabilities or medical conditions including allergic asthma, atopic dermatitis, allergic rhinitis and several ocular allergic diseases. Allergic reactions may generally be divided into two components; the early phase reaction, and the late phase reaction. While the contribution to the development of symptoms from each of the phases varies greatly between diseases, both are usually present and provide us a framework for understanding allergic disease.
The early phase of the allergic reaction typically occurs within minutes, or even seconds, following allergen exposure and is also commonly referred to as the immediate allergic reaction or as a Type I allergic reaction. The reaction is caused by the release of histamine and mast cell granule proteins by a process called degranulation, as well as the production of leukotrienes, prostaglandins and cytokines, by mast cells following the cross-linking of allergen specific IgE molecules bound to mast cell FcεRI receptors. These mediators affect nerve cells causing itching, smooth muscle cells causing contraction (leading to the airway narrowing seen in allergic asthma), goblet cells causing mucus production, and endothelial cells causing vasodilatation and edema.
The late phase of a Type 1 reaction (which develops 8–12 hours after exposure and is mediated by mast cells) should not be confused with delayed hypersensitivity Type IV allergic reaction (which takes 48–72 hours to develop and is mediated by T cells). The products of the early phase reaction include chemokines and molecules that act on endothelial cells and cause them to express adhesion molecules (such as intercellular adhesion molecule, vascular cell adhesion molecule and selectins), which together result in the recruitment and activation of leukocytes from the blood into the site of the allergic reaction. Typically, the infiltrating cells observed in allergic reactions contain a high proportion of lymphocytes, a
|
https://en.wikipedia.org/wiki/Phenotype%20mixing
|
Phenotype mixing is a form of interaction between two viruses each of which holds its own unique genetic material. The two particles "share" coat proteins, therefore each has a similar assortment of identifying surface proteins, while having different genetic material.
In other words, it is a non-genetic interaction in which virus particles released from a cell that is infected with two different viruses have components from both of the infecting agents, but a genome from only one of them.
|
https://en.wikipedia.org/wiki/Outgoing%20longwave%20radiation
|
Longwave (LW) radiation, in the context of climate science, is electromagnetic thermal radiation emitted by Earth's surface, atmosphere, and clouds. Longwave radiation may also be referred to as terrestrial radiation, thermal infrared radiation, or thermal radiation. This radiation is in the infrared portion of the spectrum, but is distinct from (i.e., has a longer wavelength than) the shortwave (SW) near-infrared radiation found in sunlight.
Outgoing longwave radiation (OLR) is the longwave radiation emitted to space from the top of Earth's atmosphere. It may also be referred to as emitted terrestrial radiation. Outgoing longwave radiation plays an important role in planetary cooling.
Longwave radiation generally spans wavelengths ranging from 3–100 microns (μm). A cutoff of 4 μm is sometimes used to differentiate sunlight from longwave radiation. Less than 1% of sunlight has wavelengths greater than 4 μm. Over 99% of outgoing longwave radiation has wavelengths between 4 μm and 100 μm.
The flux of energy transported by outgoing longwave radiation is typically measured in units of watts per square meter (W m−2). In the case of global energy flux, the W/m2 value is obtained by dividing the total energy flow over the surface of the globe (measured in watts) by the surface area of the Earth, about 510 million square kilometres.
Emitting outgoing longwave radiation is the only way Earth loses energy to space, i.e., the only way the planet cools itself. Radiative heating from absorbed sunlight, and radiative cooling to space via OLR power the heat engine that drives atmospheric dynamics.
The balance between OLR (energy lost) and incoming solar shortwave radiation (energy gained) determines whether the Earth is experiencing global heating or cooling (see Earth's energy budget).
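As a rough consistency check (an illustration assuming an effective emission temperature of about 255 K, a commonly quoted value that is not stated in this article), the Stefan–Boltzmann law reproduces a global-mean OLR of roughly 240 W m−2; a minimal Python sketch:

SIGMA = 5.670e-8     # Stefan–Boltzmann constant, W m^-2 K^-4
T_EFF = 255.0        # assumed effective emission temperature of Earth, K
EARTH_AREA = 5.1e14  # Earth's surface area, m^2

olr_flux = SIGMA * T_EFF ** 4        # about 240 W m^-2, global-mean OLR
total_power = olr_flux * EARTH_AREA  # about 1.2e17 W emitted to space

print(f"OLR flux:    {olr_flux:.0f} W m^-2")
print(f"Total power: {total_power:.2e} W")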
Planetary energy balance
Outgoing longwave radiation (OLR) constitutes a critical component of Earth's energy budget.
The principle of conservation of energy says that energy cannot appear or disappear. Thus, any energy
|
https://en.wikipedia.org/wiki/Immune-mediated%20inflammatory%20diseases
|
An immune-mediated inflammatory disease (IMID) is any of a group of conditions or diseases that lack a definitive etiology, but which are characterized by common inflammatory pathways leading to inflammation, and which may result from, or be triggered by, a dysregulation of the normal immune response. All IMIDs can cause end organ damage, and are associated with increased morbidity and/or mortality.
Inflammation is an important and growing area of biomedical research and health care because inflammation mediates and is the primary driver of many medical disorders and autoimmune diseases, including ankylosing spondylitis, psoriasis, psoriatic arthritis, Behçet's disease, arthritis, inflammatory bowel disease (IBD), and allergy, as well as many cardiovascular, neuromuscular, and infectious diseases. Some current research even suggests that aging is a consequence, in part, of inflammatory processes.
Characterization
IMID is characterized by immune dysregulation, and one underlying manifestation of this immune dysregulation is the inappropriate activation of inflammatory cytokines, such as IL-12, IL-6 or TNF alpha, whose actions lead to pathological consequences.
See also
Immune mediated polygenic arthritis
Bibliography
Shurin, Michael R. and Yuri S. Smolkin (editors). Immune Mediated Diseases: From Theory to Therapy (Advances in Experimental Medicine and Biology). Springer, 2007.
|
https://en.wikipedia.org/wiki/End%20organ%20damage
|
End organ damage usually refers to damage occurring in major organs fed by the circulatory system (heart, kidneys, brain, eyes) which can sustain damage due to uncontrolled hypertension, hypotension, or hypovolemia.
Evidence of hypertensive damage
In the context of hypertension, features include:
Heart — evidence on electrocardiogram screening of heart muscle thickening (which may also be seen on chest X-ray), suggesting left ventricular hypertrophy, or evidence by echocardiography of less efficient function (left ventricular failure).
Brain — hypertensive encephalopathy, hemorrhagic stroke, subarachnoid hemorrhage, confusion, loss of consciousness, eclampsia, seizures, or transient ischemic attack.
Kidney — leakage of protein into the urine (albuminuria or proteinuria), or reduced renal function, hypertensive nephropathy, acute renal failure, or glomerulonephritis.
Eye — evidence upon fundoscopic examination of hypertensive retinopathy, retinal hemorrhage, papilledema and blindness.
Peripheral arteries — peripheral vascular disease and chronic lower limb ischemia.
Evidence of shock
In the context of poor end organ perfusion, features include:
Kidney — poor urine output (less than 0.5 mL/kg per hour), low glomerular filtration rate.
Skin — pallor or mottled appearance, capillary refill > 2 secs, cool limbs.
Brain — obtundation or disorientation to time, person, and place. The Glasgow Coma Scale may be used to quantify altered consciousness.
Gut — absent bowel sounds, ileus
|
https://en.wikipedia.org/wiki/Bottom%20water
|
Bottom water is the lowermost water mass in a water body, lying near its bottom, with distinct physical, chemical, and ecological characteristics.
Oceanography
Bottom water consists of cold, dense water near the ocean floor. This water is characterized by low salinity and nutrient content. Generally, low salinity from seasonal ice melt and freshwater river output characterizes bottom water produced in the Antarctic. However, during colder months, the formation of sea ice is a crucial process that raises the salinity of bottom water through brine rejection. As saltwater freezes, salt is expelled from the ice into the surrounding water. The oxygen content in bottom water is high due to ocean circulation. In the Antarctic, salty and cold surface water sinks to lower depths due to its high density. As the surface water sinks, it carries oxygen from the surface with it and will spend an enormous amount of time circulating across the seafloor of ocean basins. Oxygen-rich water moving throughout the bottom layer of the ocean is an important source for the respiration of benthic organisms. Bottom waters flow very slowly, driven mainly by slope topography and differences in temperature and salinity, especially compared to wind-driven surface ocean currents.
Antarctic Bottom Water is the most dominant source of bottom water in southern parts of the Pacific Ocean, Indian Ocean, and North Atlantic Ocean. Antarctic Bottom Water sits underneath the North Atlantic Deep Water due to its colder temperature and higher density. Salinity can be used to compare the movement between fresh Antarctic Bottom Water (roughly 34.7 psu) and saltier North Atlantic Deep Water. Antarctic Bottom Water can be distinguished from other intermediate and deep water masses by its cold, low nutrient, high oxygen, and low salinity content.
The bottom water of the Arctic Ocean is more isolated, due to the topography of the Arctic Ocean floor and the surrounding Arctic shelves. Deep Western Boundary Cur
|
https://en.wikipedia.org/wiki/Immune%20dysregulation
|
Immune dysregulation is any proposed or confirmed breakdown or maladaptive change in molecular control of immune system processes. For example, dysregulation is a component in the pathogenesis of autoimmune diseases and some cancers. Immune system dysfunction, as seen in IPEX syndrome, leads to immune dysfunction, polyendocrinopathy, enteropathy, X-linked (IPEX). IPEX typically presents during the first few months of life with diabetes mellitus, intractable diarrhea, failure to thrive, eczema, hemolytic anemia, and an unrestrained or unregulated immune response.
IPEX syndrome
IPEX (Immune dysregulation, polyendocrinopathy, enteropathy, X-linked syndrome) is a syndrome caused by a genetic mutation in the FOXP3 gene, which encodes a major transcription factor of regulatory T cells (Tregs). Such a mutation leads to dysfunctional Tregs and, as a result, autoimmune diseases. The classic clinical manifestations are enteropathy, type I diabetes mellitus and eczema. Various other autoimmune diseases or hypersensitivity are common in other individuals with IPEX syndrome. In addition to autoimmune diseases, individuals experience higher immune reactivity (e.g. chronic dermatitis) and susceptibility to infections. Individuals also develop autoimmune diseases at a young age.
Other genetic syndromes associated with immune dysregulation
APECED
Autoimmune polyendocrinopathy–candidiasis–ectodermal dystrophy (APECED) is a syndrome caused by a mutation in AIRE (autoimmune regulator). Typical manifestations of APECED are mucocutaneous candidiasis and multiple endocrine autoimmune diseases. APECED causes loss of central immune tolerance.
Omenn syndrome
Omenn syndrome manifests as a GVHD (graft-versus-host disease)-like autoimmune disease. Immune dysregulation is caused by increased IgE production. The syndrome is caused by mutations in the RAG1, RAG2, IL2RG, IL7RA or RMRP genes. The number of immune cells is usually normal in this syndrome, but their functionality is reduced.
Wiskott-Aldri
|
https://en.wikipedia.org/wiki/Rabbit%202000
|
The Rabbit 2000 is a high-performance 8-bit microcontroller designed by Rabbit Semiconductor for embedded system applications. Rabbit Semiconductor has since been bought by Digi International, which now sells the Rabbit microcontrollers and hardware based on them. The instruction set is based on that of the original Z80 microprocessor, with some new instructions added and some others removed. Among the Z80 instructions missing from the Rabbit, cpir is particularly notable, since it allows much more efficient implementations of some often-used standard C functions such as strlen(), strnlen() and memchr(). According to the Rabbit documentation, the Rabbit 2000 executes its instructions 5 times faster than the original Z80 microprocessor, that is, at a speed comparable to the Zilog eZ80.
The Rabbit 3000 is a variant of the Rabbit 2000 with the same core but more powerful integrated peripherals. The Rabbit 3000A variant adds a small number of additional instructions for I/O and large-integer arithmetic. The Rabbit 4000 again adds more integrated peripherals. Further derivatives, starting with the Rabbit 5000, have a substantially different architecture.
Most of the Rabbit microcontrollers come with built-in flash memory and SRAM. They also have ADC and timers built-in.
Compiler support
The Rabbit 2000 is supported by the free (GPL) Small Device C Compiler and Z88DK.
There are also the non-free Dynamic C compiler, provided by the makers of the Rabbit, and the commercial third-party CROSS-C. The latter two are quite incomplete in their support of the C standard, and their Rabbit 2000 backends are no longer available in current compiler versions.
External links
Rabbit 2000 Documentations
Rabbit 2000 User Manual
Microcontrollers
|
https://en.wikipedia.org/wiki/Sha1sum
|
sha1sum is a computer program that calculates and verifies SHA-1 hashes. It is commonly used to verify the integrity of files. It (or a variant) is installed by default on most Linux distributions. Typically distributed alongside it are sha224sum, sha256sum, sha384sum and sha512sum, which use a specific SHA-2 hash function, and b2sum, which uses the BLAKE2 cryptographic hash function.
The SHA-1 variants have been proven vulnerable to collision attacks, and users should instead use, for example, a SHA-2 variant such as sha256sum or the BLAKE2 variant b2sum to prevent tampering by an adversary.
It is included in GNU Core Utilities, Busybox (excluding ), and Toybox (excluding ). Ports to a wide variety of systems are available, including Microsoft Windows.
Examples
To create a file with a SHA-1 hash in it, if one is not provided:
$ sha1sum filename [filename2] ... > SHA1SUM
If distributing one file, the file extension may be appended to the filename e.g.:
$ sha1sum --binary my-zip.tar.gz > my-zip.tar.gz.sha1
The output contains one line per file of the form "{hash} SPACE (ASTERISK|SPACE) [{directory} SLASH] {filename}". (Note well, if the hash digest creation is performed in text mode instead of binary mode, then there will be two space characters instead of a single space character and an asterisk.) For example:
$ sha1sum -b my-zip.tar.gz
d5db29cd03a2ed055086cef9c31c252b4587d6d0 *my-zip.tar.gz
$ sha1sum -b subdir/filename2
55086cef9c87d6d031cd5db29cd03a2ed0252b45 *subdir/filename2
To verify that a file was downloaded correctly or that it has not been tampered with:
$ sha1sum -c SHA1SUM
filename: OK
filename2: OK
$ sha1sum -c my-zip.tar.gz.sha1
my-zip.tar.gz: OK
Hash file trees
sha1sum can only create checksums of one or multiple files inside a directory, but not of a directory tree, i.e. of subdirectories, sub-subdirectories, etc. and the files they contain. This is possible by using sha1sum in combination with the find command's -exec option, or by piping the output from find into xargs. Other tools, such as hashdeep, can create checksums of an entire directory tree.
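As a language-neutral illustration of the same idea, the short Python sketch below (using only the standard hashlib and os modules; it is not part of sha1sum itself, and the function names are illustrative) walks a directory tree and prints sha1sum-style lines for every file it finds:
import hashlib
import os

def sha1_of_file(path, chunk_size=65536):
    """Return the SHA-1 hex digest of one file, read in chunks."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def sha1_tree(root):
    """Print 'digest *path' lines (binary-mode style) for every file under root."""
    for dirpath, _, filenames in os.walk(root):
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            print(f"{sha1_of_file(path)} *{path}")

sha1_tree(".")
The output lines have the same "{hash} *{path}" shape as sha1sum's binary mode, so they can be checked later with sha1sum -c.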
To use with
|
https://en.wikipedia.org/wiki/Virtual%20Processor
|
Virtual Processor (VP) was a virtual machine from Tao Group.
History
The first version, VP1, was the basis of its parallel processing multimedia OS and platform, TAOS. VP1 supported a RISC-like instruction set with 16 32-bit registers, and had data types of 32- and 64-bit integers and 32- and 64-bit IEEE floating point numbers in registers, and also supported 8- and 16-bit integers in memory.
The second version, VP2, was released in 1998 as the basis of a new version of the portable multimedia platform, first known as Elate and then as intent. VP2 supported the same data types and data processing operations as VP1, but had additional features for better support of high level languages such as demarcation of subroutines, by-value parameters, and a very large theoretical maximum number of registers local to the subroutine for use as local variables.
The structure of VPCode, the Virtual Processor's machine code, was intended to be able to represent the constructs required when compiling languages such as C, C++ and Java, and to allow efficient translation into the machine code of any real 32- or 64-bit CPU.
|
https://en.wikipedia.org/wiki/Shergotty%20meteorite
|
The Shergotty meteorite (named after Sherghati) is the first example of the shergottite Martian meteorite family. It fell to Earth at Sherghati, in the Gaya district, Bihar, India, on 25 August 1865, and was retrieved by witnesses almost immediately. Radiometric dating indicates that it solidified from a volcanic magma about 4.1 billion years ago. It is composed mostly of pyroxene and is thought to have undergone preterrestrial aqueous alteration for several centuries. Certain features within its interior are suggestive of being remnants of biofilm and their associated microbial communities.
See also
ALH84001 meteorite
Glossary of meteoritics
Life on Mars
List of meteorites on Mars
Nakhla meteorite
NWA 7034 meteorite
Yamato 000593 meteorite
|
https://en.wikipedia.org/wiki/Threshold%20cryptosystem
|
A threshold cryptosystem, the basis for the field of threshold cryptography, is a cryptosystem that protects information by encrypting it and distributing it among a cluster of fault-tolerant computers. The message is encrypted using a public key, and the corresponding private key is shared among the participating parties. With a threshold cryptosystem, in order to decrypt an encrypted message or to sign a message, several parties (more than some threshold number) must cooperate in the decryption or signature protocol.
History
Perhaps the first system with complete threshold properties for a trapdoor function (such as RSA) and a proof of security was published in 1994 by Alfredo De Santis, Yvo Desmedt, Yair Frankel, and Moti Yung.
Historically, only organizations with very valuable secrets, such as certificate authorities, the military, and governments made use of this technology. One of the earliest implementations was done in the 1990s by Certco for the planned deployment of the original Secure electronic transaction.
However, in October 2012, after a number of large public website password ciphertext compromises, RSA Security announced that it would release software to make the technology available to the general public.
In March 2019, the National Institute of Standards and Technology (NIST) conducted a workshop on threshold cryptography to establish consensus on applications, and define specifications. In July 2020, NIST published "Roadmap Toward Criteria for Threshold Schemes for Cryptographic Primitives" as NISTIR 8214A.
Methodology
Let n be the number of parties. Such a system is called (t,n)-threshold if at least t of these parties can efficiently decrypt the ciphertext, while fewer than t gain no useful information. Similarly it is possible to define a (t,n)-threshold signature scheme, where at least t parties are required for creating a signature.
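A common building block for such (t,n)-threshold schemes is Shamir secret sharing, in which the shared private value is the constant term of a random polynomial of degree t − 1 and each party holds one evaluation of that polynomial. The Python sketch below is a toy illustration of this idea only (tiny parameters, no actual encryption or signing, and hypothetical function names), not a description of any particular threshold cryptosystem:
import random

P = 2**127 - 1  # a Mersenne prime used as the field modulus (toy choice)

def make_shares(secret, t, n):
    """Split `secret` into n shares so that any t of them recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    # Share i is the polynomial evaluated at x = i (x = 0 would leak the secret).
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def recover(shares):
    """Lagrange-interpolate the shared polynomial at x = 0 to recover the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=123456789, t=3, n=5)
assert recover(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
assert recover(shares[1:4]) == 123456789
In a real threshold cryptosystem the shared value would be a private key for a scheme such as RSA or ElGamal, and the parties would combine partial decryptions or partial signatures instead of ever reconstructing the key in one place.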
Application
The most common application is in the storage of secrets in multiple locations to prev
|
https://en.wikipedia.org/wiki/Phage%20monographs
|
Bacteriophage (phage) are viruses of bacteria and arguably are the most numerous "organisms" on Earth. The history of phage study is captured, in part, in the books published on the topic. This is a list of over 100 monographs on or related to phages.
List of phage monographs (descending date order)
Books published in the 2010s
Hyman, P., Abedon, S. T. 2018. Viruses of Microorganisms. Google Books
Abedon, S. T., García, P., Mullany, P., Aminov, R. 2017. Phage therapy: past, present and future. Google Books
Jassim, S.A.A., Limoges, R.G. 2017. Bacteriophages: Practical Applications for Nature's Biocontrol. Google Books
Dobretsov, N. T., 2018. Bacteriophages: The Enemies of Our Enemies as published as a special issue in Science First Hand consisting of nine articles.
Rakonjac, J., Das, B. Derda, R. 2017. Filamentous Bacteriophage in Bio/Nano/Technology, Bacterial Pathogenesis and Ecology. Google Books
Nicastro, J., Wong, S., Khazaei, Z., Lam, P., Blay, J., Slavcev, R.A. 2016. Bacteriophage Applications - Historical Perspective and Future Potential. Google Books
Allen, H. K., Abedon, S. T. 2015. Viral Ecology and Disturbances: Impact of Environmental Disruption on the Viruses of Microorganisms. Google Books
Wei, H. 2015. Phages and Therapy as published as a special issue in Virologica Sinica consisting of four reviews, three research articles, six letters, and one insight article.
Weitz, J.S., 2015. Quantitative Viral Ecology: Dynamics of Viruses and Their Microbial Hosts. Princeton University Press, Princeton, NJ. . Google Books
Borysowski, J., Międzybrodzki, R., Górski, A., eds. 2014. Phage Therapy: Current Research and Applications. Caister Academic Press, Norfolk, UK. Google Books
Hyman, P., Harrah, T. 2014. Bacteriophage Tail Fibers as a Basis for Structured Assemblies. Momentum Press/ASME, New York, NY. Google Books
Chanishvili, N. 2012. A Literature Review of the Practical Application of Bacteriophage Research. Nova Science Publishers, Hauppau
|
https://en.wikipedia.org/wiki/Paratype
|
In zoology and botany, a paratype is a specimen of an organism that helps define what the scientific name of a species and other taxon actually represents, but it is not the holotype (and in botany is also neither an isotype nor a syntype). Often there is more than one paratype. Paratypes are usually held in museum research collections.
The exact meaning of the term paratype when it is used in zoology is not the same as the meaning when it is used in botany. In both cases however, this term is used in conjunction with holotype.
Zoology
In zoological nomenclature, a paratype is officially defined as "Each specimen of a type series other than the holotype."
In turn, this definition relies on the definition of a "type series". A type series is the material (specimens of organisms) that was cited in the original publication of the new species or subspecies, and was not excluded from being type material by the author (this exclusion can be implicit, e.g., if an author mentions "paratypes" and then subsequently mentions "other material examined", the latter are not included in the type series), nor referred to as a variant, or only dubiously included in the taxon (e.g., a statement such as "I have before me a specimen which agrees in most respects with the remainder of the type series, though it may yet prove to be distinct" would exclude this specimen from the type series).
Thus, in a type series of five specimens, if one is the holotype, the other four will be paratypes.
A paratype may originate from a different locality than the holotype. A paratype cannot become a lectotype, though it is eligible (and often desirable) for designation as a neotype. The International Code of Zoological Nomenclature (ICZN) has not always required a type specimen, but any species or subspecies newly described after the end of 1999 must have a designated holotype or syntypes.
A related term is allotype, a term that indicates a specimen that exemplifies the opposite sex of the holoty
|
https://en.wikipedia.org/wiki/Max%20Planck%20Institute%20for%20Dynamics%20and%20Self-Organization
|
The Max Planck Institute for Dynamics and Self-Organization in Göttingen, Germany, is a research institute for investigations of complex non-equilibrium systems, particularly in physics and biology.
Its founding history goes back to Ludwig Prandtl who in 1911 requested a Kaiser Wilhelm Institute to be founded for the investigation of aerodynamics and hydrodynamics. As a first step the Aeronautische Versuchsanstalt (now the DLR) was established in 1915 and then finally the Kaiser Wilhelm Institute for Flow Research was established in 1924. In 1948 it became part of the Max Planck Society. The Max Planck Society was founded in this institute. In 2003 it was renamed to Max Planck Institute for Dynamics and Self-Organization. It is one of 80 institutes in the Max Planck Society (Max Planck Gesellschaft).
History
The early history of the Max Planck Institute for Dynamics and Self-Organization is closely linked to the work of the famous physicist Ludwig Prandtl. Prandtl is regarded as the founder of fluid dynamics and especially made a name for himself with his boundary layer theory. In Göttingen Prandtl opened two research facilities that both exist until today: in 1915 the Aerodynamical Experimental Station, which concentrated on application-oriented topics in fluid dynamics and evolved into the Göttinger branch of the German Space Agency DLR, and in 1925 the Kaiser Wilhelm Institute for Fluid Dynamics. The latter mainly dealt with fundamental research in the field of fluid dynamics. After the Max Planck Society was founded the institute was renamed Max Planck Institute for Fluid Dynamics. In 2004 the institute again received a new name and is now called Max Planck Institute for Dynamics and Self-Organization.
The early years
In 1904 Ludwig Prandtl accepted a position as professor for Technical Physics at the University of Göttingen. Here he continued his research in fluid dynamics that he had started as professor in Hannover. One of his experimental setups was a
|
https://en.wikipedia.org/wiki/Reflected%20ligament
|
The reflected inguinal ligament (triangular fascia) is a layer of tendinous fibers of a triangular shape, formed by an expansion from the lacunar ligament and the inferior crus of the subcutaneous inguinal ring.
It passes medialward behind the spermatic cord, and expands into a somewhat fan-shaped band, lying behind the superior crus of the subcutaneous inguinal ring, and in front of the inguinal aponeurotic falx, and interlaces with the ligament of the other side at the linea alba.
See also
inguinal ligament
Additional Images
|
https://en.wikipedia.org/wiki/Intercrural%20fibres%20of%20superficial%20inguinal%20ring
|
The intercrural fibers (intercolumnar fibers) are a series of curved tendinous fibers, which arch across the lower part of the aponeurosis of the Obliquus externus, describing curves with the convexities downward.
They have received their name from stretching across between the two crura of the subcutaneous inguinal ring, and they are much thicker and stronger at the inferior crus, where they are connected to the inguinal ligament, than superiorly, where they are inserted into the linea alba.
The intercrural fibers increase the strength of the lower part of the aponeurosis, and prevent the divergence of the crura from one another; they are more strongly developed in the male than in the female.
Intercrural fascia
As they pass across the subcutaneous inguinal ring, they are connected together by delicate fibrous tissue, forming a fascia, called the intercrural fascia.
This intercrural fascia is continued down as a tubular prolongation around the spermatic cord and testis, and encloses them in a sheath; hence it is also called the external spermatic fascia.
The subcutaneous inguinal ring is seen as a distinct aperture only after the intercrural fascia has been removed.
See also
Superficial inguinal ring
|
https://en.wikipedia.org/wiki/Zahorski%20theorem
|
In mathematics, Zahorski's theorem is a theorem of real analysis. It states that a necessary and sufficient condition for a subset of the real line to be the set of points of non-differentiability of a continuous real-valued function, is that it be the union of a Gδ set and a set of zero measure.
This result was proved by Zygmunt Zahorski in 1939 and first published in 1941.
|
https://en.wikipedia.org/wiki/Phytotelma
|
Phytotelma (plural phytotelmata) is a small water-filled cavity in a terrestrial plant. The water accumulated within these plants may serve as the habitat for associated fauna and flora.
A rich literature in German summarised by Thienemann (1954) developed many aspects of phytotelm biology. Reviews of the subject by Kitching (1971) and Maguire (1971) introduced the concept of phytotelmata to English-speaking readers. A multi-authored book edited by Frank and Lounibos (1983) dealt in 11 chapters with classification of phytotelmata, and with phytotelmata provided by bamboo internodes, banana leaf axils, bromeliad leaf axils, Nepenthes pitchers, Sarracenia pitchers, tree holes, and Heliconia flower bracts and leaf rolls.
A classification of phytotelmata by Kitching (2000) recognizes five principal types: bromeliad tanks, certain carnivorous plants such as pitcher plants, water-filled tree hollows, bamboo internodes, and axil water (collected at the base of leaves, petals or bracts); it concentrated on food webs. A review by Greeney (2001) identified seven forms: tree holes, leaf axils, flowers, modified leaves, fallen vegetative parts (e.g. leaves or bracts), fallen fruit husks, and stem rots.
Etymology
The word "phytotelma" derives from the ancient Greek roots phyto-, meaning 'plant', and telma, meaning 'pond'. Thus, the correct singular is phytotelma.
The term was coined by L. Varga in 1928.
The correct pronunciation is "phytotēlma" and "phytotēlmata" because of the Greek origin (the stressed vowels are here written as ē).
Ecology
Often the faunae associated with phytotelmata are unique: different groups of microcrustaceans occur in phytotelmata, including ostracods (Elpidium spp., Metacypris bromeliarum), harpacticoid copepods (Bryocamptus spp., Moraria arboricola, Attheyella spp.) and cyclopoid copepods (Bryocyclops spp., Tropocyclops jamaicensis).
In tropical and subtropical rainforest habitats, many species of frogs specialize on phytotelma as a readil
|
https://en.wikipedia.org/wiki/Cray%20J90
|
The Cray J90 series (code-named Jedi during development) was an air-cooled vector processor supercomputer first sold by Cray Research in 1994. The J90 evolved from the Cray Y-MP EL minisupercomputer, and is compatible with Y-MP software, running the same UNICOS operating system. The J90 supported up to 32 CMOS processors with a 10 ns (100 MHz) clock. It supported up to 4 GB of main memory and up to 48 GB/s of memory bandwidth, giving it considerably less performance than the contemporary Cray T90, but making it a strong competitor to other technical computers in its price range. All input/output in a J90 system was handled by an IOS (Input/Output Subsystem) called IOS Model V. The IOS-V was based on the VME64 bus and SPARC I/O processors (IOPs) running the VxWorks RTOS. The IOS was programmed to emulate the IOS Model E, used in the larger Cray Y-MP systems, in order to minimize changes in the UNICOS operating system. By using standard VME boards, a wide variety of commodity peripherals could be used.
The J90 was available in three basic configurations, the J98 with up to eight processors, the J916 with up to 16 processors, and the J932 with up to 32 processors.
Each J90 processor was composed of two chips - one for the scalar portion of the processor, and the other for the vector portion. The scalar chip was also notable for including a small (128 word) data cache to enhance scalar performance. (Cray machines have always had instruction caching.)
In 1997 the J90se (Scalar Enhanced) series became available, which doubled the scalar speed of the processors to 200 MHz; the vector chip remained at 100 MHz. Support was also added for the GigaRing I/O system found on the Cray T3E and Cray SV1, replacing IOS-V. Later, SV1 processors could be installed in a J90 or J90se, further increasing performance within the same frame.
External links
Fred Gannett's Cray FAQ
J90 at top500.org
Computer-related introductions in 1994
J90
Vector supercomputers
|
https://en.wikipedia.org/wiki/Cray%20X1
|
The Cray X1 is a non-uniform memory access, vector processor supercomputer manufactured and sold by Cray Inc. since 2003. The X1 is often described as the unification of the Cray T90, Cray SV1, and Cray T3E architectures into a single machine. The X1 shares the multistreaming processors, vector caches, and CMOS design of the SV1, the highly scalable distributed memory design of the T3E, and the high memory bandwidth and liquid cooling of the T90.
The X1 uses a 1.2 ns (800 MHz) clock cycle, and 8-wide vector pipes in MSP mode, offering a peak speed of 12.8 gigaflops per processor. Air-cooled models are available with up to 64 processors. Liquid-cooled systems scale to a theoretical maximum of 4096 processors, comprising 1024 shared-memory nodes connected in a two-dimensional torus network, in 32 frames. Such a system would supply a peak speed of 50 teraflops. The largest unclassified X1 system was the 512 processor system at Oak Ridge National Laboratory, though this has since been upgraded to an X1E system.
The X1 can be programmed either with widely used message passing software like MPI and PVM, or with shared-memory languages like Unified Parallel C programming language or Co-array Fortran. The X1 runs an operating system called UNICOS/mp which shares more with the SGI IRIX operating system than it does with the UNICOS found on prior generation Cray machines.
In 2005, Cray released the X1E upgrade, which uses dual-core processors, allowing two quad-processor nodes to fit on a node board. The processors are also upgraded to 1150 MHz. This upgrade almost triples the peak performance per board, but reduces the per-processor memory and interconnect bandwidth. X1 and X1E boards can be combined within the same system.
The X1 is notable for its development being partly funded by the United States Government's National Security Agency (under the code name SV2).
The X1 was not a financially successful product and it seems doubtful that it or its successors would have bee
|
https://en.wikipedia.org/wiki/Norton%20Guides
|
Norton Guides were a product family sold by Peter Norton Computing. The guides were written in 1985 by Warren Woodford for the x86 Assembly Language, C, BASIC, and Forth languages and made available to DOS users via a terminate-and-stay-resident (TSR) program that integrated with programming language editors on IBM PC type computers.
Norton Guides appears to be one of the first online help systems and the first example of a commercial product in which programming reference information was integrated into the software development environment. The format was later used by independent users to create simple hypertexts before the concept became more popular. Hypertext capabilities were limited, however: links between entries were only possible via "see also" references at the end of each entry.
The concept of providing "information at your fingertips", as he called it, via a TSR program was a signature technology developed by Woodford in 1980 and used in other programs he created in that era including MathStar, WordFinder/SynonymFinder and a TEMPEST WWMCCS workstation developed for Systematics General Corporation. Warren's Guides, aka the Norton Guides, were the last application of this type written by Woodford.
Norton Guides were compiled from ASCII source files with a tool called NGC. Morten Elling wrote an alternative guides compiler NGX in 1994 or earlier.
A utility to view Norton Guides .ng files is found at http://www.davep.org/norton-guides/
Editions
Norton on-line programmer's guides. An on-line reference library of programming data. Version 1.0. 4 5-1/4" floppy disks. System requirements: IBM PC or compatible computer; 128K RAM; DOS 2.0 or higher; one disk drive.
Links
http://www.davep.org/norton-guides/
http://x-hacker.org/ng/
Computer programming books
Hypertext
|
https://en.wikipedia.org/wiki/Internal%20RAM
|
Internal RAM, or IRAM or on-chip RAM (OCRAM), is the address range of RAM that is internal to the CPU. Some object files contain an .iram section.
Internal RAM (Random-Access Memory)
History of Random-Access Memory (RAM)
Early precursors of today's DRAM began with drum memory, an early form of computer memory in which data was pre-loaded onto a rotating drum and read and written by small heads. Drum memory was followed by magnetic-core memory, which stored information in the magnetic polarity of small ferrite rings. These early approaches eventually gave way to dynamic random-access memory (DRAM), which is still used in devices today. DRAM was invented in 1968 by Robert Dennard, a Texas-born engineer whose design became one of the first practical forms of RAM and helped usher in a new era of technological advancement.
General Information about RAM
Random-access memory is memory found in electronic devices such as computers. It holds data while the computer is powered on so that the data can be accessed quickly by the CPU (central processing unit). RAM differs from persistent storage such as hard disks, solid-state drives, and solid-state hybrid drives: those drives hold data permanently, whereas RAM holds temporary but frequently needed information. Light workloads, such as a web browser or a few open programs, are served directly from RAM; when larger programs or many tasks demand more memory than is available, data is shifted from RAM to other storage such as the hard disk.
Technical Properties of RAM
Generally, IRAM is composed of very high-speed SRAM located alongside the CPU. It acts similarly to a CPU cache
|
https://en.wikipedia.org/wiki/ChipWits
|
ChipWits is a programming game for the Macintosh written by Doug Sharp and Mike Johnston, and published by BrainPower software in 1984. Ports to the Apple II and Commodore 64 were published by Epyx in 1985.
The player uses a visual programming language to teach a virtual robot how to navigate various mazes of varying difficulty. The gameplay straddles the line between entertainment and programming education. The game was developed in MacFORTH and later ported to the Apple II and Commodore 64.
Reception
Computer Gaming World preferred Robot Odyssey to ChipWits but stated that both were "incredibly vivid simulation experiences". The magazine criticized ChipWits's inability to save more than 16 robots or copy a robot to a new save slot, and cautioned that it "may be too simple for people familiar with programming". The magazine added that the criticism was "more a cry for a more complex Chipwits II game than condemnation of the current product".
ChipWits won numerous awards, including MACazine Best of '85 and MacUser's Editor's Choice 1985 Award, as well as being named The 8th Best Apple Game of All Time by Maclife.
Reviews
Games #66
Legacy
From 2006 to 2008, Mike Johnston and Doug Sharp developed and released ChipWits II, written in Adobe AIR. That version featured several innovations including an in-game tutorial, updated graphics, a soundtrack, isometric and 3D rendering, several new chips, and new missions. That version is no longer supported, but the original site is archived at .
In September 2021, ChipWits, Inc. was formed by Doug Sharp and Mark Roth to create a modern reboot of the game. The new version is being written in Unity and is in early access testing.
|
https://en.wikipedia.org/wiki/Distributed%20key%20generation
|
Distributed key generation (DKG) is a cryptographic process in which multiple parties contribute to the calculation of a shared public and private key set. Unlike most public key encryption models, distributed key generation does not rely on trusted third parties. Instead, the participation of a threshold of honest parties determines whether a key pair can be computed successfully. Distributed key generation prevents any single party from having access to the private key. Because many parties are involved, distributed key generation must also ensure secrecy in the presence of malicious contributions to the key calculation.
Distributed key generation is commonly used to decrypt shared ciphertexts or to create group digital signatures.
History
Distributed key generation protocol was first specified by Torben Pedersen in 1991. This first model depended on the security of the Joint-Feldman Protocol for verifiable secret sharing during the secret sharing process.
In 1999, Rosario Gennaro, Stanislaw Jarecki, Hugo Krawczyk, and Tal Rabin produced a series of security proofs demonstrating that Feldman verifiable secret sharing was vulnerable to malicious contributions to Pedersen's distributed key generator that would leak information about the shared private key. The same group also proposed an updated distributed key generation scheme preventing malicious contributions from impacting the value of the private key.
Methods
The distributed key generation protocol specified by Gennaro, Jarecki, Krawczyk, and Rabin assumes that a group of players has already been established by an honest party prior to the key generation. It also assumes the communication between parties is synchronous.
All parties use Pedersen's verifiable secret sharing protocol to share the results of two random polynomial functions.
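As a rough illustration of this sharing step and the subsequent share check, the Python sketch below shows one party distributing a pair of random polynomials under Pedersen-style coefficient commitments and another party verifying its share; the group parameters are deliberately tiny and insecure, chosen only so the arithmetic is readable, and the variable names are illustrative rather than taken from any protocol specification:
import random

# Toy group: p = 2q + 1 with q prime; G and H generate the order-q subgroup.
q, p = 11, 23
G, H = 4, 9    # both have multiplicative order q modulo p (toy values, not secure)
t = 3          # threshold: polynomials have degree t - 1

# The dealer picks two random polynomials f and g of degree t - 1 over Z_q.
f = [random.randrange(q) for _ in range(t)]
g = [random.randrange(q) for _ in range(t)]

# Public commitments to the coefficients: C_k = G^f_k * H^g_k (mod p).
C = [pow(G, fk, p) * pow(H, gk, p) % p for fk, gk in zip(f, g)]

def evaluate(poly, x):
    """Evaluate a polynomial with coefficients in Z_q at the point x."""
    return sum(c * pow(x, k, q) for k, c in enumerate(poly)) % q

# The private share sent to party i is the pair (f(i), g(i)).
i = 2
share = (evaluate(f, i), evaluate(g, i))

# Party i checks its share against the broadcast commitments.
lhs = pow(G, share[0], p) * pow(H, share[1], p) % p
rhs = 1
for k, Ck in enumerate(C):
    rhs = rhs * pow(Ck, pow(i, k, q), p) % p
assert lhs == rhs   # consistent share; on failure, party i would broadcast a complaint
In the full protocol every participant acts as such a dealer in parallel, and the group key pair is derived from the honestly shared secrets.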
Every party then verifies all the shares they received. If verification fails, the recipient broadcasts a complaint for the party whose share failed. Each accused party t
|
https://en.wikipedia.org/wiki/Luhn%20mod%20N%20algorithm
|
The Luhn mod N algorithm is an extension to the Luhn algorithm (also known as the mod 10 algorithm) that allows it to work with sequences of values in any even-numbered base. This can be useful when a check digit is required to validate an identification string composed of letters, a combination of letters and digits, or any arbitrary set of N characters where N is divisible by 2.
Informal explanation
The Luhn mod N algorithm generates a check digit (more precisely, a check character) within the same range of valid characters as the input string. For example, if the algorithm is applied to a string of lower-case letters (a to z), the check character will also be a lower-case letter. Apart from this distinction, it resembles very closely the original algorithm.
The main idea behind the extension is that the full set of valid input characters is mapped to a list of code-points (i.e., sequential integers beginning with zero). The algorithm processes the input string by converting each character to its associated code-point and then performing the computations mod N (where N is the number of valid input characters). Finally, the resulting check code-point is mapped back to obtain its corresponding check character.
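A minimal Python sketch of this procedure (using the lower-case letters a–z as the example alphabet; the function names are illustrative and not taken from any particular library) might look like this:
ALPHABET = "abcdefghijklmnopqrstuvwxyz"   # N = 26, which is even, so the algorithm applies
N = len(ALPHABET)

def generate_check_character(text):
    """Return the Luhn mod N check character for `text`."""
    factor = 2
    total = 0
    # Work right to left, doubling every second code point starting with the last.
    for ch in reversed(text):
        code_point = ALPHABET.index(ch)
        addend = factor * code_point
        # "Re-pack" the doubled value into base N by summing its base-N digits.
        addend = (addend // N) + (addend % N)
        total += addend
        factor = 1 if factor == 2 else 2
    remainder = total % N
    return ALPHABET[(N - remainder) % N]

def validate(text_with_check):
    """Check a string whose last character is the appended check character."""
    factor = 1
    total = 0
    for ch in reversed(text_with_check):
        code_point = ALPHABET.index(ch)
        addend = factor * code_point
        addend = (addend // N) + (addend % N)
        total += addend
        factor = 1 if factor == 2 else 2
    return total % N == 0

word = "abcdef"
assert validate(word + generate_check_character(word))
The validation pass starts its doubling factor at 1 so that the appended check character itself is never doubled, mirroring the original mod 10 algorithm.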
Limitation
The Luhn mod N algorithm only works where N is divisible by 2. This is because the operation that corrects the value of a position after doubling does not work where N is not divisible by 2. For applications using the English alphabet this is not a problem, since a string of lower-case letters has 26 code-points, and adding the decimal digits adds a further 10, maintaining an N divisible by 2.
Explanation
The second step in the Luhn algorithm re-packs the doubled value of a position into the original digit's base by adding together the individual digits of the doubled value when written in base N. This step results in even numbers if the doubled value is less than N, and odd numbers if the doubled value is greater th
|
https://en.wikipedia.org/wiki/Hitachi%20HD44780%20LCD%20controller
|
The Hitachi HD44780 LCD controller is an alphanumeric dot matrix liquid crystal display (LCD) controller developed by Hitachi in the 1980s. The character set of the controller includes ASCII characters, Japanese Kana characters, and some symbols in two 40 character lines. Using an extension driver, the device can display up to 80 characters. Numerous third-party displays are compatible with its 16-pin interface and instruction set, making it a popular and cheap LCD driver.
Architecture
The Hitachi HD44780 LCD controller is limited to monochrome text displays and is often used in copiers, fax machines, laser printers, industrial test equipment, and networking equipment, such as routers and storage devices.
Compatible LCD screens are manufactured in several standard configurations. Common sizes are one row of eight characters (8×1), and 16×2, 20×2 and 20×4 formats. Larger custom sizes are made with 32, 40 and 80 characters and with 1, 2, 4 or 8 lines. The most commonly manufactured larger configuration is 40×4 characters, which requires two individually addressable HD44780 controllers with expansion chips as a single HD44780 chip can only address up to 80 characters.
Character LCDs may have a backlight, which may be LED, fluorescent, or electroluminescent. The nominal operating voltage for LED backlights is 5V at full brightness, with dimming at lower voltages dependent on the details such as LED color. Non-LED backlights often require higher voltages.
Interface
Character LCDs use a 16-contact interface, commonly using pins or card edge connections on 0.1 inch (2.54 mm) centers. Those without backlights may have only 14 pins, omitting the two pins powering the light. This interface was designed to be easily hooked up to the Intel MCS-51 XRAM interface, using only two address pins, which allowed text to be displayed on the LCD using simple MOVX commands, offering a cost-effective option for adding a text display to devices.
The predominant pinout is as follows (excepti
|
https://en.wikipedia.org/wiki/Generalized%20complex%20structure
|
In the field of mathematics known as differential geometry, a generalized complex structure is a property of a differential manifold that includes as special cases a complex structure and a symplectic structure. Generalized complex structures were introduced by Nigel Hitchin in 2002 and further developed by his students Marco Gualtieri and Gil Cavalcanti.
These structures first arose in Hitchin's program of characterizing geometrical structures via functionals of differential forms, a connection which formed the basis of Robbert Dijkgraaf, Sergei Gukov, Andrew Neitzke and Cumrun Vafa's 2004 proposal that topological string theories are special cases of a topological M-theory. Today generalized complex structures also play a leading role in physical string theory, as supersymmetric flux compactifications, which relate 10-dimensional physics to 4-dimensional worlds like ours, require (possibly twisted) generalized complex structures.
Definition
The generalized tangent bundle
Consider an N-manifold M. The tangent bundle of M, which will be denoted T, is the vector bundle over M whose fibers consist of all tangent vectors to M. A section of T is a vector field on M. The cotangent bundle of M, denoted T*, is the vector bundle over M whose sections are one-forms on M.
In complex geometry one considers structures on the tangent bundles of manifolds. In symplectic geometry one is instead interested in exterior powers of the cotangent bundle. Generalized geometry unites these two fields by treating sections of the generalized tangent bundle, the direct sum of the tangent and cotangent bundles; such sections are formal sums of a vector field and a one-form.
The fibers are endowed with a natural inner product with signature (N, N). If X and Y are vector fields and ξ and η are one-forms, then the inner product of X+ξ and Y+η is defined as $\langle X+\xi,\, Y+\eta\rangle = \tfrac{1}{2}\big(\xi(Y) + \eta(X)\big)$.
A generalized almost complex structure is just an almost complex structure of the generalized tangent bundle which preserves the natu
|
https://en.wikipedia.org/wiki/QNS
|
QNS is a clinical laboratory abbreviation for quantity not sufficient.
This indicates that either:
There is not enough specimen for the lab tests ordered to be performed.
In the case of Vacutainers or other tubes with pre-added anticoagulant, the amount of blood drawn into the tube at the time of phlebotomy was insufficient to attain the correct blood:anticoagulant ratio. This can cause false results in assays such as coagulation assays (causing falsely increased clotting times) or blood cell differentials (causing a false increase in poikilocytes, particularly burr cells).
In either case, the most common and feasible way to correct the problem is to simply recollect the specimen.
Quantity not sufficient implies that the final volume of diluent is not sufficient for molecular testing.
Medical diagnosis
|
https://en.wikipedia.org/wiki/MUSIC%20%28algorithm%29
|
MUSIC (MUltiple SIgnal Classification) is an algorithm used for frequency estimation and radio direction finding.
History
In many practical signal processing problems, the objective is to estimate from measurements a set of constant parameters upon which the received signals depend. There have been several approaches to such problems including the so-called maximum likelihood (ML) method of Capon (1969) and Burg's maximum entropy (ME) method. Although often successful and widely used, these methods have certain fundamental limitations (especially bias and sensitivity in parameter estimates), largely because they use an incorrect model (e.g., AR rather than special ARMA) of the measurements.
Pisarenko (1973) was one of the first to exploit the structure of the data model, doing so in the context of estimation of parameters of complex sinusoids in additive noise using a covariance approach. Schmidt (1977), while working at Northrop Grumman, and independently Bienvenu and Kopp (1979), were the first to correctly exploit the measurement model in the case of sensor arrays of arbitrary form. Schmidt, in particular, accomplished this by first deriving a complete geometric solution in the absence of noise, then cleverly extending the geometric concepts to obtain a reasonable approximate solution in the presence of noise. The resulting algorithm was called MUSIC (MUltiple SIgnal Classification) and has been widely studied.
In a detailed evaluation based on thousands of simulations, the Massachusetts Institute of Technology's Lincoln Laboratory concluded in 1998 that, among currently accepted high-resolution algorithms, MUSIC was the most promising and a leading candidate for further study and actual hardware implementation. However, although the performance advantages of MUSIC are substantial, they are achieved at a cost in computation (searching over parameter space) and storage (of array calibration data).
Theory
MUSIC method assumes that a signal vector, , consists
|
https://en.wikipedia.org/wiki/Pisarenko%20harmonic%20decomposition
|
Pisarenko harmonic decomposition, also referred to as Pisarenko's method, is a method of frequency estimation. The method assumes that a signal consists of p complex exponentials in the presence of white noise. Because the number of complex exponentials, p, must be known a priori, it is somewhat limited in its usefulness.
Pisarenko's method also assumes that the values of the autocorrelation matrix are either known or estimated. For a (p + 1) × (p + 1) autocorrelation matrix, the dimension of the noise subspace is equal to one, and it is spanned by the eigenvector corresponding to the minimum eigenvalue. This eigenvector is orthogonal to each of the signal vectors.
The frequency estimates may be determined by setting the frequencies equal to the angles of the roots of the polynomial $V_{\min}(z) = \sum_{k=0}^{p} v_{\min}(k)\, z^{-k}$, or the locations of the peaks in the frequency estimation function (or pseudo-spectrum) $\hat{P}(e^{j\omega}) = 1 / \lvert \mathbf{e}^{H}\mathbf{v}_{\min} \rvert^{2}$, where $\mathbf{v}_{\min}$ is the noise eigenvector and $\mathbf{e} = [\,1,\; e^{j\omega},\; e^{j2\omega},\; \dots,\; e^{jp\omega}\,]^{T}$.
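A compact NumPy sketch of the procedure (purely illustrative: it estimates the autocorrelation matrix from data snapshots rather than assuming it is known, and the function name is hypothetical) is shown below:
import numpy as np

def pisarenko_frequencies(x, p):
    """Estimate the frequencies of p complex exponentials buried in white noise."""
    m = p + 1
    # Estimate the (p+1) x (p+1) autocorrelation matrix from overlapping snapshots.
    snapshots = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
    R = snapshots.T @ snapshots.conj() / len(snapshots)
    # The noise subspace is spanned by the eigenvector of the smallest eigenvalue.
    eigvals, eigvecs = np.linalg.eigh(R)
    v_min = eigvecs[:, 0]                 # eigh returns eigenvalues in ascending order
    # The frequencies are the angles of the roots of the noise-eigenvector polynomial.
    return np.sort(np.angle(np.roots(v_min)))

# Two complex exponentials at 0.3*pi and 0.7*pi rad/sample plus a little white noise.
n = np.arange(2000)
x = (np.exp(1j * 0.3 * np.pi * n) + np.exp(1j * 0.7 * np.pi * n)
     + 0.01 * (np.random.randn(n.size) + 1j * np.random.randn(n.size)))
print(pisarenko_frequencies(x, p=2))      # approximately [0.3*pi, 0.7*pi]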
History
The method was first discovered in 1911 by Constantin Carathéodory, then rediscovered by Vladilen Fedorovich Pisarenko in 1973 while examining the problem of estimating the frequencies of complex signals in white noise. He found that the frequencies could be derived from the eigenvector corresponding to the minimum eigenvalue of the autocorrelation matrix.
See also
Multiple signal classification (MUSIC)
|
https://en.wikipedia.org/wiki/Arzak
|
Arzak is a restaurant in San Sebastián, Spain. It features New Basque Cuisine. In 2008, Arzak's owner and chef, Juan Mari Arzak, was awarded the Universal Basque award for "adapting gastronomy, one of the most important traditions of the Basque Country, to the new times and making of it one of the most innovative of the world".
Description
Arzak serves modernized Basque cuisine, inspired by cooking techniques and presentation found in nouvelle cuisine. The restaurant is inside an old multiple-storied brick building, with the restaurant on the main floor. Above the restaurant is a wine cellar with over 100,000 bottles of wine, as well as a test kitchen. The test kitchen, established in 2000, is where the Arzaks come up with new recipes and methods. It houses over 1500 ingredients that can potentially serve as inspiration for a dish. Assisted by chefs Igor Zalacain and Xabi Gutiérrez, the Arzak test kitchen comes up with 40 new dishes a year, stemming from recipes that are created and refined over a period of three to six months.
History
The building was constructed as a house in 1897 by the current owner's grandparents (José Maria Arzak Etxabe and Escolastica Lete), who turned it into a wine inn and tavern. At the time, the village of Alza was separate from San Sebastián. When the next generation (Juan Ramon Arzak and Francisca Arratibel) took over, it was turned into a restaurant. Kitchen duties are now shared between Juan Mari Arzak and his daughter Elena Arzak.
Awards and honors
1989–present, Michelin Guide Three Stars
2003–present, Restaurant (magazine) S.Pellegrino World's 50 Best Restaurants
2013, 8th best restaurant in the world, S. Pellegrino
|
https://en.wikipedia.org/wiki/Mikie
|
Mikie, known under a different title in Japan, is an arcade video game developed and released by Konami in 1984. The object of the game is to guide a student named Mikie around the school locations to collect hearts, which make up a letter from his girlfriend, while being chased by members of the school staff. In Japan, the game's setting was changed to an office in order to avoid controversy, while the original version of the game was released internationally. Centuri distributed the game in North America.
Gameplay
The game starts at the classroom where Toru Mikie (美紀 徹, Miki Tōru) gets out of his seat for the player to begin the game. Mikie must bump the students out of their seats to collect the hearts they're sitting on, while simultaneously avoiding the classroom teacher. Once all hearts are collected by the player he is allowed to leave the room and enter the school corridor.
The school corridor is where Mikie will be chased by the janitor and his classroom teacher, who follows him outside. This is the way to gain access to the rest of the school building, each room representing a different challenge or level. Mikie will be cued to the proper door to enter by a large, flashing "In" - opening any other door will result in Mikie being punched by a coiled boxing glove or hairy foot, stunning him. One of the doors, however, contains a scantily clad girl: opening this door is worth 5,000 points. Mikie can also pick up extra points by picking up lunch boxes and opening a grate that contains a burger and soda. In addition to head-butting, enemies (the janitor and the classroom teacher) can also be stunned by slamming doors in their faces.
The second room is the locker room, where the objective is to break the lockers to get the hearts, while being pursued by a janitor, a cook, and the classroom teacher. In addition to the head-butting, there are three bins of basketballs located around the room, which Mikie can pick up and throw using the action button. Each bin holds three basketball
|
https://en.wikipedia.org/wiki/Cray%20XT3
|
The Cray XT3 is a distributed memory massively parallel MIMD supercomputer designed by Cray Inc. with Sandia National Laboratories under the codename Red Storm. Cray turned the design into a commercial product in 2004. The XT3 derives much of its architecture from the previous Cray T3E system, and also from the Intel ASCI Red supercomputer.
XT3
The XT3 consists of between 192 and 32,768 processing elements (PEs), where each PE comprises a 2.4 or 2.6 GHz AMD Opteron processor with up to two cores, a custom "SeaStar" communications chip, and between 1 and 8 GB of RAM. The PowerPC 440 based SeaStar device provides a 6.4 gigabyte per second connection to the processor across HyperTransport, as well as six 8-gigabyte per second links to neighboring PEs. The PEs are arranged in a 3-dimensional torus topology, with 96 PEs in each cabinet.
The XT3 runs an operating system called UNICOS/lc that partitions the machine into three sections, the largest comprising the Compute nodes, and two smaller sections for Service nodes and IO nodes. In UNICOS/lc 1.x, the Compute PEs run a Sandia developed microkernel called Catamount, which is descended from the SUNMOS OS of the Intel Paragon; in UNICOS/lc 2.0, Catamount was replaced by a specially tuned version of Linux called Compute Node Linux (CNL). Service and IO PEs run the full version of SuSE Linux and are used for interactive logins, systems management, application compiling and job launch. I/O PEs use physically distinct hardware, in that the node boards include PCI-X slots for connections to Ethernet and Fibre Channel networks.
Though the performance of each XT3 model will vary with the speed and number of processors installed, the November 2007 Top500 results for the Red Storm machine, the largest XT3 machine installed at Sandia, measured 102.7 teraflops on the Linpack benchmark, placing it at #6 on the list. After upgrades in 2008 to install some XT4 nodes with quad-core Opterons, Red Storm achieved 248 teraflops to pla
|
https://en.wikipedia.org/wiki/Domino%20computer
|
A domino computer is a mechanical computer built using dominoes to represent mechanical amplification or logic gating of digital signals.
Basic phenomenon
Sequences of standing dominoes (so that each domino topples the next one) can be arranged to demonstrate digital concepts such as amplification and digital signals. Since digital information is conducted by a string of dominoes, this effect differs from phenomena where:
energy is conducted without amplification, thus dissipating; or
amplification is applied to non-digital signals, allowing noise effects to occur.
The Domino Day event shows many constructs, mainly for the purposes of entertainment. Some constructs may remind people of digital circuits, suggesting that not only telegraph-like tools can be shown, but also simple information processing modules can be constructed.
It is possible to use this phenomenon to construct unconventional computing tools. The basic phenomenon is sufficient to achieve this goal, but more sophisticated "mechanical synapses" can also be used, by analogy with electrical synapses or chemical synapses.
Logical aspects
The logic gate OR is very natural to build with dominoes. The question is which gate can be added to OR to obtain a functionally complete set. Note that no domino gate can produce output 1 when all inputs are 0, so there is no NOT gate, making it impossible to build an IMPLY gate without an external 'power source' sequence. Once such a power source is admitted, NOT can be realized and the set becomes complete. It is, however, difficult to route a sequence from a single source to many gates with suitable timing at each. Suppose, then, that no such source is available. A mechanism for breaking a running sequence is needed if one wants a logical connective whose output is 0 for input 1. Let P$Q be the gate in which the sequence to be toppled by P is broken by the one toppled by Q. Then P$Q is logically equivalent to P AND (NOT Q), provided the input Q arrives earlier than P.
The set of OR and $ can represent any logical connectives in a
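As a quick consistency check of the connectives described above, the short Python sketch below (purely illustrative; the helper names are hypothetical) verifies by truth table that P $ Q behaves as P AND (NOT Q) and that, together with a constant 'power source' input, it yields NOT and AND alongside OR:
from itertools import product

def OR(p, q):
    return p | q

def DOLLAR(p, q):              # P $ Q: the sequence toppled by P is broken by Q's sequence
    return p & (0 if q else 1)

def NOT(p):                    # requires an always-toppling "power source" input
    return DOLLAR(1, p)

def AND(p, q):                 # P AND Q  =  P $ (NOT Q)
    return DOLLAR(p, NOT(q))

for p, q in product((0, 1), repeat=2):
    assert DOLLAR(p, q) == (p & (1 - q))   # P $ Q equals P AND (NOT Q)
    assert OR(p, q) == (1 if (p or q) else 0)
    assert AND(p, q) == (p & q)
    assert NOT(p) == 1 - p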
|
https://en.wikipedia.org/wiki/Compression%20Networks
|
Compression Networks is a digital content delivery system developed by TV/COM International that evolved into the current DVB-S standard for satellite broadcasting. The system provided MPEG2 video, audio, signalling, enhanced program guide, and conditional access for pay-television services like AlphaStar.
See also
EchoStar
Satellite television
Broadcasting
Video
|
https://en.wikipedia.org/wiki/Sales%20outsourcing
|
Sales outsourcing refers to an indirect sales process in which a contracted third party sells a company's products or services to buyers, earning a profit for doing so.
Purpose of indirect sales
The sole purpose of a contract sales organization is to provide sales resource to its clients, without taking title to their products. Sales outsourcing providers include manufacturers' representatives, contract sales organizations, sales agents or SO outsourcing consultants. One way of organising the sales effort, especially when product delivery is erratic, is to replace or supplement internal resources with functionality and expertise brought in from contract sales organisations.
SO outsourcing is quite different from large-scale service outsourcing, which has its advantages but also requires pro-active contract and relationship management. In addition to full sales outsourcing, many partial models are observed, particularly in large firms.
Advantages
SO is expected to be cheaper than the fully loaded cost of employing salespeople, but calculating the cost comparison over time is far from straightforward. Nevertheless, replacing fixed costs with variable costs is attractive to budget-holders. However, unlike many forms of outsourcing, the advantages of sales outsourcing do not often come from saving costs but rather from increasing revenue or providing speed of response or flexibility.
The business case for sales outsourcing should also include consideration of the cost of controlling the contract. Difficulty in measuring the link between sales activity and sales performance leads to a preference for employed salespeople. However, the issues internally are often the same and the internal hire has many other corporate "distractions" that do not occur with external resources.
Companies may also choose sales outsourcing as a means of accessing the best sales skills. Although the pejorative term “rent-a-rep” is still used, there is some evidence that contractors are perceived as good performers a
|